Job Description
The project:
- International project in the financial sector
- Data platform in Azure, with Azure Data Factory (ADF) and Databricks at its core
- Strong focus on data engineering, governance and quality
- Business-critical platform in production, with high reliability and stability requirements
Responsibilities
- Develop and maintain data pipelines in Azure Databricks (PySpark / Spark SQL).
- Implement and optimize Delta Lake tables (ETL/ELT, medallion architecture).
- Collaborate in the design of analytical datasets.
- Integrate Databricks workloads with Azure Data Factory (orchestration).
- Apply best practices in data performance, quality, and reliability.
- Contribute to the evolution of the platform for analytics and AI scenarios.
Requirements
- At least 4 years of experience as a Data Engineer.
- Hands-on experience with Azure Databricks and Apache Spark.
- Good knowledge of Python ...
Ready to Apply?
Take the next step in your AI career. Submit your application to emagine today.