Job Description
- Developing ETL pipelines involving big data.
- Developing data processing/analytics applications, primarily using PySpark.
- Experience developing applications on the cloud (AWS), mostly using services related to storage, compute, ETL, DWH, analytics, and streaming.
- Clear understanding of, and ability to implement, distributed storage, processing, and scalable applications.
- Experience working with SQL and NoSQL databases.
- Ability to write and analyze SQL, HQL, and other query languages for NoSQL databases.
- Proficiency in writing distributed, scalable data processing code using PySpark, Python, and related libraries.
Data Engineer AEP Competency
- Experience developing applications that consume services exposed as REST APIs.
- Special consideration given for experience working with container-orchestration systems like Kubernetes.
- Experience working with any enterprise-grade ETL tools.
Ready to Apply?
Take the next step in your AI career. Submit your application to Stack Digital today.