Role : Databricks Architect
Duration : 12 months
Location : Troy, MI (Remote)
Minimum exp. : 14-15 yrs.
Role Overview
We are looking for a Databricks Architect to design and lead modern Lakehouse data platforms on Databricks. The role focuses on building scalable, high-performance data pipelines and enabling analytics and AI use cases on cloud-native data platforms.
Key Responsibilities
- Architect and implement Databricks Lakehouse solutions for large-scale data platforms
- Design and optimize batch and streaming data pipelines using Apache Spark (PySpark / SQL)
- Implement Delta Lake best practices (ACID transactions, schema enforcement, time travel, performance tuning)
- Build and manage Databricks jobs, workflows, notebooks, and clusters
- Enable data governance using Unity Catalog (access control, lineage)
- Integrate Databricks with cloud data services (ADLS / S3, ADF, Synapse, etc.)
- Support analytics, BI, and AI / ML workloads (MLflow exposure is a plus)
- Lead solution design discussions and mentor data engineering teams
Must-Have Skills
- 10 years in data engineering / data architecture
- 5 years of strong hands-on experience with Databricks
- Expert in Apache Spark, PySpark, and SQL
- Strong experience with Delta Lake & Lakehouse architecture
- Cloud experience on Azure Databricks / AWS Databricks
- Proven experience in designing high-volume, scalable data pipelines
Good-to-Have
- Unity Catalog, MLflow, Databricks Workflows
- Streaming experience (Kafka / Event Hubs)
- CI / CD for Databricks (Azure DevOps / GitHub)
Employment Type : Full Time
Experience : 14-15 yrs.
Vacancy : 1