Key Responsibilities
Participate in the modernization and migration of current Azure data and analytics systems to the Databricks Lakehouse Platform.
Support Databricks platform enablement, configuration, and deployment activities.
Design, develop, and maintain ETL/ELT pipelines using Azure Data Factory, Azure Functions, and Databricks.
Build scalable data integration solutions across Azure Data Lake, Azure SQL Database, Azure Storage (Blob/File), and Databricks.
Assist in optimizing ingestion and transformation processes for performance, reliability, and cost efficiency.
Migrate legacy data pipelines, workflows, and processing logic into Databricks notebooks or Delta pipelines (a minimal ingestion sketch follows this list).
Work with cross-functional teams to understand business requirements and translate them into technical solutions.
Ensure all solutions comply with Marsh MMA standards for security, governance, and data quality.
Perform data validation, unit testing, troubleshooting, and system performance tuning.
Document architecture, workflows, design decisions, and operational procedures.
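As one concrete illustration of the migration work above, the sketch below shows a legacy ingestion step rewritten as a PySpark job that lands raw CSV extracts from ADLS Gen2 into a Delta table. All names (storage account, container, schema, table) are hypothetical placeholders, not references to actual Marsh MMA systems.

```python
# Minimal sketch: ingest a raw CSV extract from ADLS Gen2 into a Delta table.
# Storage account, container, and table names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical abfss:// path into an ADLS Gen2 container.
raw_path = "abfss://raw@examplelake.dfs.core.windows.net/claims/2024/"

df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Light transformation standing in for migrated legacy logic:
# normalize column names and stamp the load time.
cleaned = (
    df.toDF(*[c.strip().lower().replace(" ", "_") for c in df.columns])
      .withColumn("_loaded_at", F.current_timestamp())
)

# Write as a managed Delta table so downstream jobs get ACID reads.
cleaned.write.format("delta").mode("overwrite").saveAsTable("bronze.claims_raw")
```

On Databricks the runtime supplies `spark` and the Delta format; the explicit `SparkSession` builder is only needed if the sketch is run elsewhere.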
Required Technical Skills
Strong hands-on experience with:
Azure Data Factory (ADF)
SQL/T-SQL
Azure Functions
Azure SQL Database
Azure Data Lake (ADLS Gen2)
Azure Storage (Blob/File)
Databricks (Notebooks, Delta Lake, Spark), as shown in the upsert sketch below
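A common Delta Lake pattern behind these skills is the MERGE upsert, which typically replaces legacy delete-then-insert ELT logic. The sketch below is a minimal, hypothetical example (table and key names are invented); outside Databricks it assumes the delta-spark package is installed.

```python
# Minimal sketch of a Delta Lake upsert (MERGE). Table and column
# names are hypothetical placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("bronze.claims_raw")           # staged changes
target = DeltaTable.forName(spark, "silver.claims")  # curated Delta table

(
    target.alias("t")
    .merge(updates.alias("u"), "t.claim_id = u.claim_id")
    .whenMatchedUpdateAll()     # refresh existing rows
    .whenNotMatchedInsertAll()  # add new rows
    .execute()
)
```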
Qualifications
Bachelor's degree in Computer Science, Data Engineering, Information Technology, or a related field.
Proven experience working with cloud-based data engineering solutions, preferably in Azure ecosystems.
Experience supporting cloud migration or modernization initiatives is a plus.
Strong understanding of ETL/ELT concepts, data modeling, and distributed data processing (a brief unit-test sketch follows).
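As a brief illustration of the unit-testing and distributed-processing expectations above, here is a minimal pytest sketch against a local SparkSession. The transformation, fixture, and column names are hypothetical examples, assuming only a local pyspark installation.

```python
# Minimal sketch: unit-testing a PySpark transformation with pytest.
# Function and column names are hypothetical, not an existing codebase.
import pytest
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def add_premium_total(df: DataFrame) -> DataFrame:
    """Example transformation under test: base premium plus tax."""
    return df.withColumn("premium_total", F.col("base_premium") + F.col("tax"))


@pytest.fixture(scope="module")
def spark():
    # local[2] keeps the test fast while still exercising Spark's planner.
    return SparkSession.builder.master("local[2]").appName("tests").getOrCreate()


def test_add_premium_total(spark):
    df = spark.createDataFrame([(100.0, 8.0)], ["base_premium", "tax"])
    result = add_premium_total(df).first()
    assert result["premium_total"] == 108.0
```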
Soft Skills
Strong analytical and problem-solving mindset.
Excellent communication and documentation abilities.
Ability to collaborate with cross-functional teams across technical and business domains.
Detail-oriented with a focus on quality and compliance.
Azure Data Engineer • Phoenix, AZ, USA