Job Description

TITLE: Databricks Data Engineer
WORK SETUP: On-site in Cebu City
WORK SHIFT: Shifting
Summary:
Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Responsibilities:
• Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies, including Delta Lake, Auto Loader, and Delta Live Tables (DLT)
• Create modular dbx functions for transformation, PII masking, and validation logic, reusable across DLT and notebook pipelines
• Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data
Qualifications:
• Excellent programming and debugging skills in Python
• Strong hands-on experience with PySpark for building efficient data transformation and validation logic
• Proficiency in at least one cloud platform: AWS, Azure, or GCP
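To illustrate the kind of work involved, here is a minimal sketch of a reusable PII-masking helper and an Auto Loader configuration of the sort the responsibilities describe. The function name `mask_email`, the paths, and the table names are illustrative assumptions, not part of the role's actual codebase.

```python
# Hypothetical sketch: a reusable PII-masking helper plus Auto Loader
# options for checkpointed ingestion with schema evolution.
import hashlib


def mask_email(email):
    """Replace the local part of an email with a short SHA-256 digest,
    keeping the domain so masked values remain useful for grouping."""
    if email is None or "@" not in email:
        return None
    local, domain = email.rsplit("@", 1)
    digest = hashlib.sha256(local.encode("utf-8")).hexdigest()[:12]
    return f"{digest}@{domain}"


# Auto Loader (cloudFiles) options enabling schema tracking and
# evolution; paths are placeholders.
autoloader_options = {
    "cloudFiles.format": "json",
    "cloudFiles.schemaLocation": "/mnt/checkpoints/orders/_schema",
    "cloudFiles.schemaEvolutionMode": "addNewColumns",
}

# In a Databricks notebook the helper would typically be wrapped as a
# UDF and the options passed to spark.readStream, e.g. (not run here):
#
#   from pyspark.sql import functions as F, types as T
#   mask_email_udf = F.udf(mask_email, T.StringType())
#   df = (spark.readStream.format("cloudFiles")
#         .options(**autoloader_options)
#         .load("/mnt/raw/orders"))
#   (df.withColumn("email", mask_email_udf("email"))
#      .writeStream.option("checkpointLocation", "/mnt/checkpoints/orders")
#      .trigger(availableNow=True)
#      .toTable("bronze.orders"))
```

Because the helper is plain Python, the same logic can be unit-tested outside Spark and reused across DLT and notebook pipelines.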

Ready to Apply?

Take the next step in your data engineering career. Submit your application to myglitters today.

Submit Application