Job Description

Key Responsibilities:

  • Develop and optimize data pipelines using Spark and Databricks.
  • Write complex SQL queries to analyze and manipulate large datasets.
  • Implement Python-based scripts for data processing and automation.
  • Design and maintain ETL workflows for structured and unstructured data.
  • Collaborate with cross-functional teams to ensure high-performance data architectures.
  • Ensure data quality, governance, and security within the pipelines.

Mandatory Skills:

  • Strong proficiency in SQL, Python, Spark, and Databricks.
  • Hands-on experience with distributed computing frameworks.


Good-to-Have Skills (Optional):

  • Experience with Airflow / Prefect

Ready to Apply?

Take the next step in your AI career. Submit your application to Tavant today.