Description:
- Bachelor's or master's degree in Computer Science, Software Engineering, or a related field.
- Proven experience (7+ years) as a Data Engineer, preferably with a focus on software development and distributed systems.
- Solid understanding of data engineering concepts, database design, ETL processes, and data mining.
- Proficiency with data technologies including SQL, Python, Scala, Spark, Hadoop, and related tools.
- Experience with ETL orchestration tools such as Apache Airflow, Digdag, and Oozie.
- Experience with CI/CD processes and tools such as Jenkins and Maven.
- Strong understanding of metadata management, data lineage, data quality, and related tools.
- Familiarity with cloud computing services such as Google Cloud Platform (GCP) and Microsoft Azure, as well as distributed storage and search systems such as Hive and Elasticsearch, is a plus.
- Experience implementing operational best practices such as monitoring, alerting, validation, and exception handling.
- Strong analytical and problem-solving skills, with the ability to optimize, troubleshoot, and debug data processes and pipelines.
- Excellent communication skills to collaborate effectively with cross-functional teams and present performance analysis findings.
- Experience supporting and working with cross-functional teams and influencing them with solutions in a dynamic environment.
- Experience in AdTech and advertising measurement.
- Knowledge of analytical tools such as Tableau or Looker is preferred.
What project or initiative will they be working on?
Universal Onboarding / Account.

Will this role be hybrid?
Yes.

If hybrid, how many days per week will they need to be in office?
2.

Top 3 Skills Needed or Required?
GCP, Airflow, PySpark, Tableau.

What are the day-to-day responsibilities?
- Build and maintain robust, scalable data pipelines for ingesting, transforming, and storing large volumes of data to support client connect and its advertising needs.
- Work on cloud platforms like Azure and Google Cloud for data storage, processing, and analytics.
- Create and deploy large-scale, containerized applications using Docker and Kubernetes in public clouds such as Google Cloud Platform (GCP) and Microsoft Azure.
- Manage data engineering projects from design to deployment, ensuring timely delivery and meeting project goals.
- Act as a self-starter and mentor who can lead engineers and engineering projects.
- Coordinate, create, and complete technical design discussions to drive technical architecture.
- Navigate complex systems and deliver highly scalable and reliable production-ready code.
- Understand, articulate, and apply principles of the defined strategy to routine business problems that involve a single function.
- Support data operations, including on-call rotations to troubleshoot production issues, partnering with cross-functional teams.
- Collaborate with cross-functional tech teams including developers, architects, and operations to identify performance bottlenecks, system issues, and areas for optimization.

What is the makeup of the team?
Currently, the ABS-Data team consists of 10 associates who work on delivering various initiatives to support client Connect's roadmap.

Additional Job Details
GCP, Airflow, PySpark, Tableau.
Required Skills: Data Analysis
Additional Skills: Data Engineer
This is a high PRIORITY requisition. This is a PROACTIVE requisition.