Job Description

Role: Databricks PySpark Developer

Experience: 5+ years

Location: Bangalore (onsite, 5 days a week); no relocation candidates

Notice period: immediate joiners or candidates currently serving notice

Role Overview:

We are looking for a highly skilled Databricks PySpark Developer to join our data platform implementation team. In this role, you will design, develop, and optimize scalable ETL pipelines and data workflows using Databricks and Apache Spark. You will work closely with data engineers, data scientists, and BI teams to support advanced analytics and reporting requirements.

 

Key Responsibilities:

1. ETL Development & Data Engineering

Design, develop, and maintain scalable ETL processes using PySpark on Databricks.

Extract, transform, and load data from heterogeneous sources into Data Lake and Data ...

Ready to Apply?

Take the next step in your data engineering career. Submit your application to r3 Consultant today.
