Job Description
· Design, develop, and maintain scalable data processing pipelines using Hadoop and Spark.
· Implement data integration and ETL processes to ingest and transform large datasets.
· Collaborate with data scientists, analysts, business partners, and other stakeholders to understand data requirements and deliver solutions.
· Optimize and tune Hadoop and Spark jobs for performance and efficiency.
· Manage and maintain data storage solutions, ensuring data integrity and security.
· Utilize GitHub for version control and collaboration on code development.
· Work with CDP (Cloudera Data Platform) to manage and deploy data applications.
· Integrate and manage data solutions on Azure and Snowflake, ensuring seamless data flow and accessibility.
· Monitor and troubleshoot data processing workflows, resolving issues promptly.
· Stay updated with the latest industry trends and technologies in big data and...
Ready to Apply?
Take the next step in your AI career. Submit your application to Dexian Asia Pacific today.