Job Description

Roles and Responsibilities

  • Architect, build, and optimize scalable data pipelines using Python, SQL, and distributed processing frameworks such as Spark.
  • Design and maintain robust ETL/ELT workflows to integrate structured and semi-structured data from multiple internal and external systems.
  • Ensure data quality, lineage, validation, and monitoring across all ingestion and transformation layers.
  • Partner with analytics and product teams to operationalize analytical models, KPI dashboards, and metric frameworks.
  • Build reusable datasets, analytics marts, and query-optimized schemas to support diverse use cases including reporting, forecasting, personalization, and operational automation.
  • Translate raw data into accessible, well-governed, analysis-ready formats.
  • Develop SOPs, quality checks, and review workflows to ensure accurate, consistent, and reliable data delivery.
  • Establish coding standards, ...

Ready to Apply?

Take the next step in your AI career. Submit your application to ReNew today.
