Job Description
We are looking for a skilled Databricks Engineer to join our team in Columbus, Ohio. In this role, you will focus on designing and optimizing data pipelines within a modern Data Lakehouse ecosystem. You will play a key part in transforming raw data into valuable insights while collaborating with cross-functional teams to enhance data accessibility and quality.
Responsibilities:
- Design and implement scalable data pipelines using Databricks and Delta Lake architecture.
- Develop and optimize data transformation processes using Python and ETL tools.
- Manage and integrate Azure Data Lake Storage Gen2, Unity Catalog, and Azure Data Factory components.
- Establish CI/CD practices and version control workflows using Git and GitHub.
- Perform data modeling and fine-tune performance for business intelligence applications.
- Collaborate with various teams to maintain high standards of data quality and governance.
- Troubleshoot and resolve issues in data workflows to ensure seamless operations.
- Document processes and solutions to maintain knowledge continuity and support future development.
Requirements:
- Bachelor's degree in Computer Science, IT, Engineering, or a related field, or equivalent experience.
- At least 5 years of experience in data engineering within cloud-based environments.
- Proficiency in Databricks and the Azure data ecosystem.
- Strong Python programming skills and experience building ETL processes.
- Familiarity with Data Lakehouse architecture and governance practices.
- Experience with CI/CD pipelines and version control systems such as Git and GitHub.
- Excellent problem-solving abilities and effective communication skills.
- Ability to work collaboratively across teams to deliver data solutions.