Job Description

  • Design, develop, and maintain scalable data pipelines and data assets using modern data engineering techniques
  • Optimise Spark SQL and PySpark code for performance and efficiency
  • Apply AWS architecture knowledge, especially S3, EC2, Lambda, Redshift, CloudFormation
  • Refactor legacy codebase to improve readability, maintainability, and performance
  • Write tests before code to ensure functionality and catch bugs early
  • Debug complex code and resolve performance, concurrency, or logic issues

Role Requirements and Qualifications

  • 7+ years of strong hands-on programming experience with PySpark / Python / Boto3
  • Experience using Python frameworks and libraries in line with Python best practices
  • Understanding of version control tools (Git) and artifact repositories (e.g., JFrog Artifactory)
  • Strong commitment to TDD (Test-Driven Development), unit testing, and participating in code reviews
  • Ex...
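
To illustrate the TDD expectation called out above, here is a minimal sketch of the test-first workflow: the test is written before the implementation, then just enough code is added to make it pass. The function, field names, and data below are hypothetical, not drawn from the role itself.

```python
def dedupe_records(records):
    """Remove records with duplicate 'id' values, keeping the first occurrence."""
    seen = set()
    result = []
    for record in records:
        if record["id"] not in seen:
            seen.add(record["id"])
            result.append(record)
    return result


def test_dedupe_records():
    # In TDD, this test would exist (and fail) before dedupe_records was written.
    rows = [
        {"id": 1, "v": "a"},
        {"id": 1, "v": "b"},  # duplicate id: should be dropped
        {"id": 2, "v": "c"},
    ]
    assert dedupe_records(rows) == [{"id": 1, "v": "a"}, {"id": 2, "v": "c"}]


test_dedupe_records()
```

In practice a runner such as pytest would discover and execute the test; the same pattern applies to PySpark transformations, where the test constructs a small DataFrame and asserts on the collected output.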

Ready to Apply?

Take the next step in your AI career. Submit your application to Acenet today.
