Job Description

Design, develop, and maintain solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Key Responsibilities:
Develop high-quality, scalable ETL/ELT pipelines using Databricks technologies, including Delta Lake, Auto Loader, and Delta Live Tables (DLT)
Create modular dbx functions for transformation, PII masking, and validation logic that are reusable across DLT and notebook pipelines (see the masking sketch after the Qualifications list)
Implement ingestion patterns using Auto Loader with checkpointing and schema evolution for structured and semi-structured data (see the Auto Loader sketch below)
Build secure and observable DLT pipelines with DLT Expectations, supporting Bronze/Silver/Gold medallion layering (see the DLT sketch below)

Qualifications:
Excellent programming and debugging skills in Python
Strong hands-on experience with PySpark for building efficient data transformation and validation logic
Proficiency in at least one cloud platform: AWS, GCP, or Azure
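As a minimal sketch of the kind of reusable masking helper the responsibilities describe: a plain PySpark function that can be called from a DLT pipeline or a notebook alike. The function name mask_pii, the SHA-256 approach, and the column names are illustrative assumptions, not a prescribed implementation.

    # Reusable PII-masking helper; callable from DLT pipelines and notebooks.
    # mask_pii, the hashing choice, and the columns are illustrative assumptions.
    from pyspark.sql import DataFrame
    from pyspark.sql import functions as F

    def mask_pii(df: DataFrame, pii_columns: list) -> DataFrame:
        """Replace each PII column with a one-way SHA-256 hash of its value."""
        for name in pii_columns:
            df = df.withColumn(name, F.sha2(F.col(name).cast("string"), 256))
        return df

    # Example usage inside any pipeline step:
    # masked = mask_pii(raw_df, ["email", "phone_number"])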
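A minimal Auto Loader ingestion sketch with checkpointing and schema evolution, assuming it runs on Databricks where spark is predefined; the paths, JSON format, and table name are placeholder assumptions.

    # Incremental ingestion with Auto Loader; paths and names are placeholders.
    raw = (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        # Persist the inferred schema and pick up newly appearing columns
        .option("cloudFiles.schemaLocation", "/mnt/landing/_schemas/orders")
        .option("cloudFiles.schemaEvolutionMode", "addNewColumns")
        .load("/mnt/landing/orders")
    )

    (
        raw.writeStream
        # Checkpointing makes the stream restartable with exactly-once progress
        .option("checkpointLocation", "/mnt/landing/_checkpoints/orders")
        .trigger(availableNow=True)
        .toTable("bronze_orders")
    )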
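And a compact DLT sketch showing Expectations across Bronze/Silver/Gold layers; the table names, source path, columns, and validation rules are illustrative assumptions.

    # DLT pipeline with Expectations across medallion layers; names and
    # rules are illustrative assumptions.
    import dlt
    from pyspark.sql import functions as F

    @dlt.table(comment="Raw orders landed via Auto Loader (Bronze)")
    def orders_bronze():
        return (
            spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/mnt/landing/orders")
        )

    @dlt.table(comment="Validated orders (Silver)")
    @dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop bad rows
    @dlt.expect("positive_amount", "amount > 0")  # keep rows, record violations
    def orders_silver():
        return dlt.read_stream("orders_bronze").select("order_id", "amount", "order_ts")

    @dlt.table(comment="Daily revenue rollup (Gold)")
    def revenue_gold():
        return (
            dlt.read("orders_silver")
            .groupBy(F.to_date("order_ts").alias("day"))
            .agg(F.sum("amount").alias("revenue"))
        )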

Ready to Apply?

Take the next step in your data engineering career. Submit your application to Accenture in the Philippines today.
