Job Description

Design, build, and maintain high-performance data pipelines that power analytics and machine learning products. Collaborate with data scientists, product, and infrastructure teams to turn raw data into scalable, reliable assets.

Key Responsibilities

  • Architect end-to-end batch and streaming pipelines on cloud platforms (AWS/GCP/Azure).
  • Implement ML feature pipelines and real-time inference services.
  • Optimize petabyte-scale processing with Spark, Kafka, and Flink.
  • Build and maintain data warehouses/lakes (Redshift, BigQuery, Snowflake).
  • Enforce data quality, governance, and security.
  • Develop CI/CD, monitoring, and alerting for pipelines.
  • Mentor engineers and drive best-practice documentation.

Required Experience

  • 5+ years of production-grade data engineering experience.
  • Deep expertise with Apache Spark (batch and streaming).

Ready to Apply?

Take the next step in your data engineering career. Submit your application to Newbridge today.