Job Description
Budget: 8–18 LPA
Location: Pune
Data Engineer – Spark, dbt, Python & SQL
Experience: 3–6 Years
Primary Skills: Apache Spark, dbt, Python, SQL, Airflow, Linux, Kubernetes
Role Overview
We are looking for a hands-on Data Engineer / Analytics Engineer with strong experience in Spark-based data processing, dbt-driven transformations, and advanced SQL. The role involves working in Linux environments, building and running containerized workloads with Podman, orchestrating pipelines with Airflow, and operating data jobs on Kubernetes-based platforms.
Key Responsibilities
• Design and develop batch data pipelines using Apache Spark
• Build and maintain dbt models, tests, macros, and documentation
• Write optimized and maintainable SQL for large-scale data transformations
• Develop data processing logic using Python (PySpark)
Ready to Apply?
Take the next step in your data engineering career. Submit your application to Zigsaw today.
Submit Application