Job Description

  • Design, build and maintain real-time and batch data pipelines using Python, PySpark, and AWS services.
  • Architect and manage cloud infrastructure with a focus on scalability, cost-efficiency, and performance (AWS Glue, Lambda, S3, Athena).
  • Implement and optimize ETL/ELT workflows, ensuring data quality and reliability.
  • Integrate data from diverse sources, including APIs (REST/GraphQL), into centralized data warehouses.
  • Work with streaming technologies like Kafka or Amazon Kinesis to handle high-throughput, low-latency data ingestion.
  • Lead data modeling efforts and contribute to the design of enterprise data warehouses and data lakes.
  • Collaborate with product, engineering, and analytics teams to identify data needs and translate them into scalable solutions.
  • Mentor junior data engineers and contribute to best practices, code reviews, and process improvements.
  • Drive innovation by evaluating and implementing emerging technologies.

Ready to Apply?

Take the next step in your data engineering career. Submit your application to Bajaj Broking today.
