Job Description:

  • Design and implement scalable real-time data processing pipelines using Apache Kafka, Kafka Streams (KStream API), and Apache Flink.
  • Collaborate with data engineers, software developers, and data scientists to build robust streaming solutions that meet business needs.
  • Optimize and monitor streaming jobs for performance, reliability, and scalability.
  • Ensure data quality, consistency, and governance in real-time pipelines.
  • Develop metrics and monitoring dashboards to ensure observability across streaming systems.
  • Integrate real-time data pipelines with downstream data stores (PostgreSQL, Elasticsearch, S, data warehouses).
  • Participate in architecture discussions and help define best practices for stream processing.
Required Skills:

  • + years of experience in real-time data processing and streaming analytics.
  • Strong hands-on experience with Apache Kafka, particularly...
Ready to Apply?

    Take the next step in your AI career. Submit your application to HAN today.