Job Description
Responsibilities
- Build and maintain ingestion pipelines from operational systems, files, APIs and events
- Design batch and streaming data pipelines that are resilient, observable and testable
- Normalize, enrich and transform raw data into reusable, well-structured datasets
- Enforce schemas, validate inputs and manage schema evolution (see the sketch after this list)
- Implement data quality checks and surface issues early
- Ensure data pipelines respect tenant and line-of-business isolation
- Collaborate closely with the Data Architect to implement modelling and governance standards
- Support migration of logic out of dashboards and into shared data pipelines
- Balance delivery speed with reliability and long-term maintainability
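To make the schema-enforcement, data-quality and tenant-isolation duties above concrete, here is a minimal, illustrative PySpark sketch, assuming a Databricks-style environment. The orders dataset, column names and paths are hypothetical examples, not part of the actual role:

```python
# Illustrative only: schema enforcement, an early data-quality gate, and
# tenant-partitioned writes. Dataset name, columns and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

# Enforce an explicit schema rather than relying on inference, so producer
# drift is caught at read time instead of propagating downstream.
schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("tenant_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("created_at", TimestampType(), nullable=True),
])

raw = spark.read.schema(schema).json("/landing/orders/")  # hypothetical path

# Surface quality issues early: reject the batch if key columns are null
# (Spark treats nullability as advisory on read, so check explicitly).
bad_rows = raw.filter(F.col("order_id").isNull() | F.col("tenant_id").isNull())
if bad_rows.limit(1).count() > 0:
    raise ValueError("orders feed contains rows with null keys; aborting load")

# Partition by tenant so downstream consumers read only their own slice,
# keeping tenant isolation at the storage layer.
(raw.write
    .mode("append")
    .format("delta")            # lakehouse table format used on Databricks
    .partitionBy("tenant_id")
    .save("/curated/orders/"))  # hypothetical path
```

Failing the load on bad keys, rather than quietly filtering them out, is what "surface issues early" means in practice: the pipeline halts where the problem entered, not three hops later.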
What you will work with
- Azure, Databricks, BigID and others
- Data ingestion frameworks and transformation pipelines (see the streaming sketch below)
- Lakehouse-style storage and processing...
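Continuing the sketch above (reusing its `spark` session and `schema`), here is a hedged streaming variant of the same ingestion. The `cloudFiles` source is Databricks' Auto Loader, and every path and option here is again an assumption, not a prescribed setup:

```python
# Illustrative only: a streaming variant of the batch sketch above. The
# `cloudFiles` source (Databricks Auto Loader), paths and checkpoint
# location are all hypothetical.
stream = (spark.readStream
    .format("cloudFiles")                  # Databricks Auto Loader
    .option("cloudFiles.format", "json")
    .schema(schema)                        # reuse the enforced schema
    .load("/landing/orders_stream/"))      # hypothetical path

(stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/checkpoints/orders/")  # hypothetical
    .partitionBy("tenant_id")
    .trigger(availableNow=True)            # incremental, batch-style runs
    .start("/curated/orders/"))            # hypothetical path
```

The `availableNow` trigger processes whatever has landed and then stops, which keeps one pipeline definition serving both the batch and streaming modes the role calls for.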
Ready to Apply?
Take the next step in your data career. Submit your application to Confidential today.