Job Description
We are seeking an experienced Big Data Engineer to design and maintain scalable data processing systems and pipelines across large-scale, distributed environments. This role requires deep expertise in tools such as Snowflake (Snowpark), Spark, Hadoop, Sqoop, Pig, and HBase. You will work closely with data scientists and stakeholders to transform raw data into actionable intelligence and power analytics platforms.
Key Responsibilities:
- Design and develop high-performance, scalable data pipelines for batch and streaming processing.
- Implement data transformations and ETL workflows using Spark, Snowflake (Snowpark), Pig, Sqoop, and related tools.
- Manage large-scale data ingestion from various structured and unstructured data sources.
- Work with Hadoop ecosystem components including MapReduce, HBase, and Hive.
Ready to Apply?
Take the next step in your data engineering career. Submit your application to Suzva Software Technologies today.