Job Description
Role Description:
- Design, build, and maintain batch and streaming ETL pipelines using Python, PySpark, and orchestration tools (e.g., Airflow, AWS Step Functions, Glue workflows).
- Strong hands-on experience in Python, PySpark, SQL, AWS, and ETL to build and optimize scalable data pipelines and warehouse solutions.
- Work closely with Data Scientists, Analytics, and Business stakeholders to ensure reliable, high-quality data is available for reporting and advanced analytics.
- Develop optimized SQL for data modeling, transformations, and performance tuning across Data Warehouses/Lakes.
- Implement robust data ingestion frameworks from APIs, files, and RDBMS; manage schema evolution and partitioning strategies.
- Build and maintain data models (star/snowflake schemas, dimensional modeling) to support BI/Analytics and downstream ML workloads.
- Ensure data quality (validations, profiling, observability, lineage) and implement error handling and recovery patterns.
- Optimize PySpark jobs...
Ready to Apply?
Submit your application today and join our talented team at Astra North Infoteck Inc.
Job Details
- Location: Toronto, ON
- Job Type: Full-time
- Category: Other / General
- Posted Date: February 25, 2026
- Application Deadline: April 06, 2026