Job Description
- Build and maintain data pipelines using Python and Apache Spark.
- Orchestrate workflows using Airflow and Google Cloud Workflows.
- Develop, deploy, and manage containerized data services using Docker and Cloud Run.
- Design, optimize, and monitor datasets and queries in BigQuery.
- Ingest, transform, and integrate external data through REST APIs.
- Manage data lifecycle and storage using Google Cloud Storage (GCS).
- Implement data quality, monitoring, and observability best practices.
- Collaborate with cross-functional engineering, product, and data science teams.
Requirements
- 2–4+ years of experience as a Data Engineer or similar role.
- Strong proficiency in Python, SQL, and Spark/PySpark.
- Hands-on experience with Airflow and cloud-native orchestration (e.g., Cloud Workflows).
- Experience with Docker, containers, and deployment.
Ready to Apply?
Take the next step in your AI career. Submit your application to Taraki today.