Job Description

Responsibilities

  • Design, build, and maintain efficient and reliable data pipelines using Scala, Databricks, and Kafka.
  • Collaborate with data scientists and analysts to understand data requirements and provide solutions that meet their needs.
  • Optimize ETL processes for performance and scalability, ensuring high data quality and integrity.
  • Implement data streaming solutions to facilitate real-time data processing and analytics initiatives.
  • Monitor and troubleshoot data pipelines, addressing performance issues and minimizing downtime.
  • Participate in code reviews and ensure adherence to best practices in data engineering and software development.
  • Stay up-to-date with emerging technologies and industry trends to continuously improve data engineering practices.

Requirements

  • Bachelor's degree in Computer Science, Engineering, or a related field; Master's degre...

Ready to Apply?

Take the next step in your AI career. Submit your application to AppSierra today.