Job Description

Key Responsibilities:

  • Develop scalable ETL/ELT pipelines using Databricks technologies (Delta Lake, Auto Loader, DLT) and PySpark for efficient data transformation and validation
  • Implement secure, modular, and reusable pipelines with PII masking, checkpointing, schema evolution, and medallion layering (Bronze/Silver/Gold)
  • Configure and manage Unity Catalog for secure data access, audit logging, masking, and lineage tracking; enable Delta Sharing internally and externally
  • Integrate with BI tools (Power BI, Tableau, Looker) and prepare GenAI-ready datasets using Databricks Feature Store and Vector Search
  • Optimize jobs for performance, cost efficiency, and SLA reliability; deploy pipelines via Databricks Asset Bundles through CI/CD (GitHub/GitLab)

Requirements and Skills:

  • At least 2 years of relevant work experience with cloud-based services for data engineering, data storage, data processing, data warehousing...

Ready to Apply?

Take the next step in your AI career. Submit your application to Bravissimo Resourcing today.
