Job Description
Exp: 5 to 8 yrs
Skills: Scala + Spark
Hands-on technical lead responsible for designing, developing, optimizing, and stabilizing core Scala-Spark data pipelines while mentoring junior engineers and ensuring delivery quality.
Core Responsibilities (Scala + Spark Delivery - Hands-On Ownership)
- Design and implement Scala + Spark pipelines using Dataset/DataFrame APIs with strong emphasis on typed, performant, and modular code.
- Translate functional requirements into efficient transformations, ingestion logic, and data models using best‑practice Scala design patterns.
- Build reusable libraries/utilities for data parsing, validation, transformation, and Spark job orchestration.
- Analyze Spark jobs using Spark UI, event logs, and metrics to identify bottlenecks such as skew, shuffles, and spills.
- Apply optimization techniques such as broadcast joins, partitioning strategies, file‑size tuning,...
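The kind of optimization the last bullet describes can be sketched in a few lines. Below is a minimal, hypothetical example (assuming Spark 3.x and a local session; the table shapes and column names are illustrative, not from the posting) showing a broadcast join, which ships the small side of a join to every executor and avoids shuffling the large side:

```scala
// Minimal sketch, assuming Spark 3.x. Data and column names are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object BroadcastJoinSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("broadcast-join-sketch")
      .master("local[*]") // local mode for illustration only
      .getOrCreate()
    import spark.implicits._

    // A large fact dataset and a small dimension table (illustrative sizes).
    val events = Seq((1, "click"), (2, "view"), (1, "view")).toDF("userId", "action")
    val users  = Seq((1, "alice"), (2, "bob")).toDF("userId", "name")

    // broadcast() hints the optimizer to replicate the small side to all
    // executors, turning a shuffle join into a map-side join - one of the
    // standard fixes for shuffle-heavy or skewed joins visible in the Spark UI.
    val joined = events.join(broadcast(users), Seq("userId"))
    joined.show()

    spark.stop()
  }
}
```

In practice, whether a broadcast is safe depends on the small table fitting in executor memory; Spark's `spark.sql.autoBroadcastJoinThreshold` setting controls when this happens automatically.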
Ready to Apply?
Take the next step in your AI career. Submit your application to LTIMindtree today.