Job Description

Data Engineer

Experience: 8 years

Roles and Responsibilities
● Develop and manage robust ETL pipelines using Apache Spark (Scala)
● Understand Spark concepts, performance optimization techniques, and governance tools
● Develop a highly scalable, reliable, and high-performance data processing pipeline to
extract, transform, and load data from various systems to the Enterprise Data
Warehouse/Data Lake/Data Mesh hosted on AWS or Azure
● Collaborate cross-functionally to design effective data solutions
● Implement data workflows using AWS Step Functions or Azure Logic Apps for
efficient orchestration. Use AWS Glue and Glue Crawlers, or Azure Data Factory and
Data Catalog, for data cataloging and automation
● Monitor, troubleshoot, and optimize pipeline performance and data quality
● Maintain high coding standards and produce thorough documentation. Contribute to
high-level design (HLD) a...
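For candidates gauging fit, a minimal sketch of the kind of Spark (Scala) ETL pipeline described in the bullets above might look like the following. This is an illustration only, not part of the role's codebase: the bucket paths, column names, and object name (`OrdersEtl`) are all hypothetical.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, to_date}

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-etl")
      .getOrCreate()

    // Extract: read raw CSV files landed in the data lake (illustrative path)
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("s3://raw-zone/orders/")

    // Transform: drop rows missing a key and derive a partition column
    val cleaned = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("order_ts")))

    // Load: write partitioned Parquet to the curated zone of the warehouse/lake
    cleaned.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://curated-zone/orders/")

    spark.stop()
  }
}
```

In practice a job like this would be triggered by an orchestrator such as AWS Step Functions and its outputs registered in a catalog via a Glue Crawler, matching the workflow responsibilities listed above.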

Ready to Apply?

Take the next step in your AI career. Submit your application to Awign today.
