Job Description

Responsibilities

  • Design, develop, and maintain scalable data pipelines using Spark and Python to support large-scale data processing and analytics.
  • Implement data transformation, integration, and ETL workflows using PySpark and other relevant tools.
  • Apply object-oriented programming principles to build efficient, modular, and reusable code.
  • Troubleshoot complex software issues, identify root causes, and implement robust solutions.
  • Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and provide optimized solutions.
  • Ensure data quality, reliability, and performance across all pipelines and applications.
  • Participate in code reviews, design discussions, and contribute to best practices for data engineering and software development.
  • Document processes, data pipelines, and technical solutions to facilitate knowledge sharing and maintain operational efficiency.

Ready to Apply?

Take the next step in your AI career. Submit your application to Kairos Technologies Private Limited today.