Job Description
- 6-8 years of strong hands-on experience with Big Data technologies: PySpark (DataFrame and Spark SQL), Hadoop, and Hive
- Strong hands-on experience with Python and Bash scripting
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis and research skills
- Demonstrable ability to think outside of the box and not be dependent on readily available tools
- Excellent communication, presentation and interpersonal skills are a must
- Hands-on experience with cloud-platform Big Data services (e.g., IAM, Glue, EMR, Redshift, S3, Kinesis)
- Experience with orchestration using Airflow or any other job scheduler
- Experience migrating workloads from on-premises to cloud, as well as cloud-to-cloud migrations
Roles & Responsibilities
- Develop efficient ETL pipelines as per business requirements...
Ready to Apply?
Take the next step in your AI career. Submit your application to Impetus today.