Job Description
Job Title : Sr. Databricks Engineer
Location : Dallas, TX
Duration : FTE
Schedule : Hybrid, 2 days onsite (Tue / Thu)
About the job :
Seeking a hands-on Sr. Databricks Data Engineer to design, develop, and optimize data pipelines and analytics solutions. The ideal candidate will have strong experience in data engineering, ETL development, and production support, ensuring reliable, scalable, and high-performing data operations within an Azure environment, and will be comfortable working in a fast-paced environment. Knowledge of the insurance domain and Power BI is a plus but not mandatory.
Key Responsibilities :
Development
- Design, develop, and deploy scalable ETL / ELT data pipelines using Apache Spark, PySpark, and Databricks.
- Develop and optimize SQL queries for data transformation and analysis.
- Collaborate with product owners, data architects, and analysts to build data models, Delta Lake structures, and data workflows.
- Collaborate with data analysts and business teams to deliver actionable insights.
- Build job orchestration and monitoring solutions.
- Ensure data quality, performance, and reliability across workflows.
- Develop and maintain CI / CD pipelines for Databricks notebooks, jobs, and workflows.
- Work with cloud-based data platforms (Azure preferred).
Production Support
- Provide L2 / L3 support for Databricks production jobs, resolving incidents and performance issues.
- Monitor and troubleshoot Spark jobs, cluster performance, job failures, and resource utilization.
- Ensure data pipelines meet SLAs and data accuracy standards.
- Coordinate with infrastructure and DevOps teams to optimize cluster management and costs.
- Perform root cause analysis (RCA) and implement long-term fixes for recurring issues.
- Maintain documentation for production jobs, data flows, and incident management procedures.
Required Skills & Experience :
- 10+ years of overall experience in data engineering or related fields.
- 3–5 years of hands-on experience with Databricks and Spark.
- Strong proficiency in SQL and data analysis techniques.
- Experience with ETL processes, data modeling, and performance tuning.
- Familiarity with Python or Scala for data engineering tasks.
- Excellent problem-solving and communication skills.
Nice-to-Have :
- Knowledge of insurance industry data.
- Experience with Power BI or other BI tools.