Job Description
Key Responsibilities:
- Design, develop, and implement efficient ELT/ETL processes for large datasets.
- Build and optimize data processing workflows using Apache Spark.
- Utilize Python for data manipulation, transformation, and analysis.
- Develop and manage data pipelines using Apache Airflow.
- Write and optimize SQL queries for data extraction, transformation, and loading.
- Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions.
- Work within an on-premises computing environment for data processing and storage.
- Ensure data quality, integrity, and performance throughout the data lifecycle.
- Participate in the implementation and maintenance of CI/CD pipelines for data processes.
- Utilize Git for version control and collaborative development.
- Troubleshoot and resolve issues related to data pipelines.
Ready to Apply?
Take the next step in your AI career. Submit your application to Inetum Polska today.