Job Description
A career within….

Responsibilities:
· Design and implement scalable big data architectures and solutions using PySpark, SparkSQL, and Databricks on Azure or AWS.
· Build robust data models and maintain metadata-driven frameworks to optimize data processing and analytics.
· Build, test, and deploy sophisticated ETL pipelines using Azure Data Factory and other Azure-based tools.
· Ensure seamless data flow from various sources to destinations, including ADLS Gen 2.
· Implement data quality checks and validation frameworks.
· Establish and enforce data governance principles, ensuring data security and compliance with industry standards and regulations.
· Manage version control and deployment pipelines using Git and DevOps best practices.
· Provide accurate effort estimates and manage project timelines effectively.
· Collaborate with cross-functional teams to align project goals and objectives.
· Leverage industry know...
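To illustrate the kind of work the data quality and metadata-driven framework responsibilities involve, here is a minimal, library-agnostic sketch in plain Python. All names here (RULES, validate, the check identifiers) are illustrative assumptions, not part of any specific PwC framework; in practice such rules would typically drive PySpark/SparkSQL validations on Databricks rather than run on plain dictionaries.

```python
# Hypothetical metadata-driven data quality check: validation rules are
# declared as data, so new checks can be added without changing code.
RULES = [
    {"column": "customer_id", "check": "not_null"},
    {"column": "amount", "check": "non_negative"},
]

def validate(row: dict, rules: list) -> list:
    """Return a list of human-readable violations for one record."""
    errors = []
    for rule in rules:
        value = row.get(rule["column"])
        if rule["check"] == "not_null" and value is None:
            errors.append(rule["column"] + " is null")
        elif rule["check"] == "non_negative" and value is not None and value < 0:
            errors.append(rule["column"] + " is negative")
    return errors

if __name__ == "__main__":
    good = {"customer_id": 42, "amount": 10.5}
    bad = {"customer_id": None, "amount": -3}
    print(validate(good, RULES))  # []
    print(validate(bad, RULES))   # both rules violated
```

Because the rules live in metadata rather than code, the same pattern scales from a unit test on dictionaries to distributed checks over ADLS Gen 2 data.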
Ready to Apply?
Take the next step in your AI career. Submit your application to PwC today.