Job Description

Job Summary

We are hiring a Data Engineer to design, build, and operate batch and event-driven data pipelines on a modern on-premise data platform.

This role focuses on data ingestion, transformation, and processing using Apache Spark and Kafka, supporting analytics, reporting, and operational dashboards. You will work closely with Platform Integration Engineers, who manage the underlying infrastructure and streaming platform.

Tools & Environment

Apache Spark (SQL / PySpark / Structured Streaming), Apache Kafka, Batch & Streaming Data Pipelines, ETL / ELT, CDC (Change Data Capture), PostgreSQL / Relational Databases, Docker / Kubernetes, Linux, On-Prem Data Platform

Key Responsibilities

Data Pipelines & Processing

• Design, build, and operate large-scale batch processing pipelines using Apache Spark
• Develop ETL / ELT workflows for analytics and reporting
• Implement data ingestion pipelines from databases...

Ready to Apply?

Take the next step in your data engineering career. Submit your application to ITMAX SYSTEM BERHAD today.