Job Posting
Job Description: Key Accountabilities Include:
- Design and develop scalable data pipelines to collect, process, and store large volumes of data.
- Collaborate with data scientists and analysts to define scope and requirements and to deliver strong technical designs for data analytics solutions.
- Ensure data quality and integrity through cleansing processes, validation, and automated testing.
- Develop and maintain requirements, design documentation and test plans.
- Implement data integration solutions to combine data from various sources.
- Optimize data workflows for performance, reliability, and reduced cloud consumption.
- Monitor and troubleshoot data pipeline issues to ensure smooth operation.
- Establish release management and CI/CD for data solutions.
- Provide direction and coordination for development and support teams including globally located resources.
- Participate in the development of a safe and healthy workplace. Comply with instructions given for your own safety and health and that of others, adhering to safe work procedures. Cooperate with management in fulfilling its legislative obligations.
- Other duties as assigned by management.
Requirements:
- Must be a self-starter capable of independently meeting objectives and interacting successfully with members of various teams
- Strong analytical background and mindset
- Demonstrated expertise in SOP development, training strategy, and process improvement
- Ability to elicit buy-in and cooperation from a variety of individuals and departments
- Hands-on, flexible, and responsive to a dynamic, fast-paced work environment
- Capable of managing several initiatives concurrently
- Strong team player with a continuous improvement mindset
Education & Experience:
Minimum:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience)
- 5 years' experience in a data engineering role
- 3 years' experience building and supporting data lakehouse architectures using Delta Lake and change data feeds
- 3 years' experience with Spark and Python
- 3 years' experience writing object-oriented (OO) code
- 3 years' experience designing data warehouse table architectures such as star schema or the Kimball method
- Experience developing and installing Python wheel (wheelhouse) packages for managing dependencies and distributing code
- Experience creating CI/CD pipelines for analytics solutions
Preferred:
- Hands-on experience implementing data solutions using Microsoft Fabric
- Experience with machine learning and data science tools
- Knowledge of data governance and security best practices
- Experience in a larger IT environment with over 3,000 users and multiple domains
Specialist Certifications:
Current industry certifications in Microsoft cloud/data platforms, or equivalent. One or more of the following:
- Microsoft Certified: Fabric Data Engineer Associate
- Microsoft Certified: Azure Data Scientist Associate
- Microsoft Certified: Azure Data Fundamentals
- Google Professional Data Engineer
- Certified Data Management Professional (CDMP)
- IBM Certified Data Architect - Big Data

SEKO Worldwide is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.