Job Description

We are looking for an experienced Data Engineer with proven expertise in building and optimizing ETL pipelines on Databricks, leveraging Delta Lake and Spark SQL. The ideal candidate will have a strong foundation in Python and SQL, a solid understanding of data storage formats such as Parquet and Delta, and experience in performance optimization, testing, and automated workflows.

Responsibilities:
  • ETL Development: Design and implement well-structured Databricks notebooks for ETL workflows, following best practices
  • Data Storage: Utilize Delta Lake for data storage, demonstrating understanding of its benefits such as ACID transactions, schema enforcement, and time travel
  • Data Transformation: Apply Spark SQL for complex data transformations and aggregations
  • Delta Live Tables (DLT): Design and manage declarative, incremental pipelines on top of Delta Lake using Delta Live Tables. Leverage built-in orchestration...
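As an illustration of the kind of pipeline described above, the following is a minimal Delta Live Tables sketch (all table names, column names, and the storage path are hypothetical; DLT code runs only inside a Databricks pipeline, not as a standalone script):

```python
import dlt  # Databricks Delta Live Tables module; resolvable only inside a DLT pipeline
from pyspark.sql import functions as F

# Bronze layer: incrementally ingest raw Parquet files with Auto Loader.
# "/mnt/raw/orders" is an example path, not a real mount.
@dlt.table(comment="Raw orders ingested from cloud storage")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")  # `spark` is predefined in Databricks
        .option("cloudFiles.format", "parquet")
        .load("/mnt/raw/orders")
    )

# Silver layer: enforce a data-quality expectation and apply a Spark SQL
# transformation (schema enforcement beyond this is handled by Delta Lake).
@dlt.table(comment="Cleaned orders with a derived order_date column")
@dlt.expect_or_drop("valid_amount", "amount > 0")
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_date", F.to_date("order_ts"))
    )

# Gold layer: batch aggregation for reporting.
@dlt.table(comment="Daily order totals")
def orders_daily_gold():
    return (
        dlt.read("orders_silver")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )
```

DLT infers the dependency graph from the `dlt.read`/`dlt.read_stream` calls and handles orchestration, retries, and incremental processing declaratively, which is the point of the responsibility listed above.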
Ready to Apply?

Take the next step in your AI career. Submit your application to QBurst today.