Job Description

We're seeking a hands-on Data Engineer to help plan and execute the migration of analytics and ETL/ELT workloads from their current platform to an Azure + Databricks lakehouse stack. You'll help design ingestion and transformation pipelines, optimize Spark jobs, and ensure governance and compliance suitable for a pharmaceutical environment.
Key Responsibilities
Analyze current pipelines and re-platform them to Databricks (PySpark/SQL) following bronze/silver/gold (medallion) patterns.
Rebuild schedules/orchestrations (e.g., Databricks Workflows, ADF/Synapse/Fabric) and replace current instance-specific operators with native Spark/Delta patterns.
Map data models/virtualized objects to Delta Lake tables with partitioning, Z-Ordering, and optimized file layouts.
Develop scalable ETL/ELT in PySpark and SQL.
Implement unit/integration tests and data quality checks (freshness, completeness, schema).
Tune Spark (shuffle partitions, broadcast joins, AQE), implement caching, and cost-optimize workloads.

Ready to Apply?

Take the next step in your data engineering career. Submit your application to RED Global today.
