Job Description
- Design, develop, and maintain scalable, efficient big data processing pipelines and distributed computing systems.
- Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
- Implement data ingestion, processing, and transformation processes to support various analytical and machine learning use cases.
- Optimize and tune data pipelines for performance, scalability, and reliability.
- Monitor and troubleshoot pipeline performance issues, identifying and resolving bottlenecks.
- Ensure data quality and integrity throughout the pipeline, implementing data validation and error handling mechanisms.
- Stay current with emerging technologies and best practices in big data processing and analytics, and incorporate them into our data engineering practices.
- Document design decisions, technical specifications, and data workflows.
Skills Required
Data Quality, Machine Learning, ...
Ready to Apply?
Take the next step in your AI career. Submit your application to Litmus7 today.