- Data Pipeline Development: Design, build, and optimize ETL/ELT pipelines using Azure Databricks (PySpark, Delta Lake) and Azure Data Factory (ADF).
- Data Flows & Transformations: Develop pipelines, data flows, and complex transformations with ADF, PySpark, and T-SQL for seamless data extraction, transformation, and loading.
- Data Processing: Develop Databricks Python notebooks for tasks such as joining, filtering, and pre-aggregation.
- Database & Query Optimization: Optimize database performance through SQL query tuning, index optimization, and code improvements to ensure efficient data retrieval and manipulation.
- SSIS & Migration Support: Maintain and enhance SSIS package design and deployment for legacy workloads; contribute to migration and modernization into cloud-native pipelines.
- Collaboration & DevOps: Work with cross-functional teams using Git (Azure Repos) for version control and Azure DevOps pipelines (CI/CD) for deployment.
- Data Governance & Security: Partner with governance teams to integrate Microsoft Purview and Unity Catalog for cataloging, lineage tracking, and role-based security.
- API & External Integration: Implement REST APIs to retrieve analytics data from diverse external data feeds, enhancing accessibility and interoperability.
- Automation: Automate ETL processes and database maintenance tasks using SQL Agent Jobs, ensuring data integrity and operational reliability.
- Advanced SQL Expertise: Craft and optimize complex T-SQL queries to support efficient data processing and analytical workloads.

Pay will vary depending on experience: 50-65/hour.
We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity/affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com. To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy:
Required Skills & Experience
- 5+ years of hands-on expertise with Azure Databricks, Python, PySpark, and Delta Lake.
- 5+ years of proven experience with Azure Data Factory for orchestrating and monitoring pipelines.
- Strong SQL Server / T-SQL experience with a focus on query optimization, indexing strategies, and coding best practices.
- Demonstrated experience in SSIS package design, deployment, and performance tuning.
- Hands-on knowledge of Unity Catalog for governance.
- Experience with Git (Azure DevOps Repos) and CI/CD practices in data engineering projects.
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.