Job Description

Big Data Developer

Responsibilities

  • Design and implement data pipelines for migration from HDFS/Hive to cloud object storage (e.g., S3, Ceph).
  • Optimize Spark (and optionally Flink) jobs for performance and scalability in a Kubernetes environment.
  • Ensure data consistency, schema evolution, and governance with Apache Iceberg or equivalent table formats.
  • Support migration strategy definition by providing technical input and identifying risks.
  • Mentor junior developers and review their code / design decisions.
  • Collaborate with platform engineers, cloud architects, and product stakeholders to align technical implementation with project goals.
  • Troubleshoot complex distributed system issues in data pipelines or storage integration.
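The first two responsibilities above typically meet in Spark's Iceberg integration. As a minimal sketch of what such a setup might look like, the fragment below registers an Iceberg catalog backed by S3-compatible object storage in `spark-defaults.conf`; the catalog name, bucket, and endpoint are hypothetical placeholders, not values from this posting:

```properties
# Enable Iceberg's SQL extensions and register a catalog named "lakehouse"
# (the name, bucket, and endpoint below are illustrative placeholders)
spark.sql.extensions                     org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.lakehouse              org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.lakehouse.type         hadoop
spark.sql.catalog.lakehouse.warehouse    s3a://my-bucket/warehouse

# For Ceph or another S3-compatible store, point the S3A connector at its gateway
spark.hadoop.fs.s3a.endpoint             https://ceph-gateway.example.com
spark.hadoop.fs.s3a.path.style.access    true
```

With a catalog registered like this, a Hive table can be copied into an Iceberg table in one step, e.g. `spark.table("hive_db.events").writeTo("lakehouse.db.events").createOrReplace()` using Spark 3's DataFrameWriterV2 API.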

Requirements

  • 7 to 15 years of experience
  • Proficiency in Scala and Python
  • Apache Spark (batch & streaming) – required
  • De...

Ready to Apply?

Take the next step in your AI career. Submit your application to Grid Dynamics today.
