Job Description

  • 3+ years of experience in AWS data engineering.
  • Design and build ETL pipelines and data lakes to automate ingestion of structured and unstructured data.
  • Experience with AWS big data technologies (Redshift, S3, AWS Glue, Kinesis, Athena, DMS, EMR, and Lambda for serverless ETL).
  • Proficiency in SQL and experience with NoSQL databases.
  • Experience building both batch and real-time pipelines.
  • Excellent programming and debugging skills in Scala or Python, plus Spark.
  • Solid experience in data lake formation, Apache Spark, and Python; hands-on experience deploying models.
  • Must have experience with production migration processes.
  • Nice to have: experience with Power BI visualization tools and connectivity.
Roles & Responsibilities:

  • Design, build, and operationalize large-scale enterprise data solutions and applications.
  • Analyze, re-architect, and re-platform on-premises...
Ready to Apply?

    Take the next step in your AI career. Submit your application to Helius today.
