What Is the Role
We are seeking a Staff Data Engineer to join our Data Engineering & Architecture team. In this role, you will help build a world-class Data Platform that makes it easier for internal customers to leverage data as a strategic asset. This position offers an excellent opportunity to shape critical initiatives that drive real business outcomes.
What You Will Be Doing
- Design, implement, and optimize data pipelines using Apache Airflow for workflow orchestration.
- Develop and maintain ETL / ELT processes using Apache Spark, Apache Iceberg, and dbt.
- Implement real-time data streaming solutions using Apache Kafka and Apache Flink.
- Use Terraform for infrastructure-as-code to deploy resources on Google Cloud Platform (GCP).
- Document best practices and guide users on how to get the most from our platform.
- Monitor and audit the Data Platform to ensure policies and procedures are followed.
- Participate in code reviews to improve code quality and confirm that standard processes are met.
- Process access requests, ensuring all approval workflows are followed.
What You Bring
- Demonstrated interest in data engineering through personal projects, coursework, or contributions to open-source projects.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills.
- Eagerness to learn and stay current with the latest data engineering trends.
- Proficiency in programming languages, particularly Bash, Python, and SQL.
- Knowledge of database systems, including OLAP, OLTP, document, and vector databases.
- General understanding of data pipelines, ETL / ELT processes, data modeling, and data warehousing concepts.
- Basic knowledge of Git and DevOps practices.
- Proficiency with data processing frameworks such as Kafka, Flink, and Spark.
- Experience with Apache Iceberg and data lakehouses.