Job Description
Design, develop, and optimize data pipelines using Apache Spark and Databricks.
Implement ETL/ELT processes to ingest, transform, and store structured and unstructured data.
Work with large-scale distributed computing systems to ensure efficient data processing.
Develop and maintain data lake and data warehouse solutions on cloud platforms such as AWS, Azure, or GCP.
Collaborate with Data Scientists, Analysts, and Business Stakeholders to understand data requirements.
Ensure data quality, governance, and security compliance.
Monitor and improve the performance of big data infrastructure.
Automate workflows and data integration tasks using Airflow, Python, or Scala.
Troubleshoot and debug data pipeline issues in a fast-paced environment.
Interested candidates can apply through:
Email ID -
Call -
Ready to Apply?
Take the next step in your AI career. Submit your application to White Force today.