We are seeking a Data Engineering Developer with strong expertise in Databricks, AWS Data Services, ETL, and Python/Scala programming. The ideal candidate has hands-on experience building scalable data pipelines, tuning performance, and working with cross-functional teams in a dynamic environment.
Key Responsibilities
- Design, develop, and optimize ETL/ELT pipelines using Databricks, AWS, SQL, Python, and Scala.
- Build data ingestion workflows for structured, semi-structured, and unstructured data sources.
- Develop reusable components, frameworks, and enforce best practices for data quality and governance.
- Monitor and tune Databricks jobs for performance and scalability in a cloud environment.
- Maintain and enhance data lake/lakehouse architecture, ensuring availability, security, and integrity.
- Collaborate with data architects, analysts, and business stakeholders to define requirements.
- Support troubleshooting, debugging, and root cause analysis for production workloads.
- Create and maintain technical documentation for data pipelines, workflows, and models.
Must-Have Skills
- 3+ years of progressive Data Engineering experience.
- Strong hands-on expertise in Databricks and AWS Data Engineering Services.
- Proficiency in Python, Scala, SQL, and workflow orchestration tools.
- Experience in ETL/ELT development and real-time/batch data pipelines.
- Working knowledge of Kafka and REST API development.
- Experience with Agile methodology, JIRA, and cross-functional collaboration.
Good to Have
- Knowledge of GCP data services.
- Exposure to advanced data processing and architecture design.
Diverse Lynx LLC is an Equal Employment Opportunity employer. All qualified applicants will receive consideration for employment without discrimination. Applicants are evaluated solely on their ability, competence, and proven capability to perform the functions outlined in the corresponding role. We promote and support a diverse workforce across all levels of the company.