Talent.com
Data Engineer ID43228

AgileEngine · Boca Raton, FL, US
Job type
  • Full-time
Job description

AgileEngine is an Inc. 5000 company that creates award-winning software for Fortune 500 brands and trailblazing startups across 17+ industries. We rank among the leaders in areas like application development and AI/ML, and our people-first culture has earned us multiple Best Place to Work awards.

WHY JOIN US

If you're looking for a place to grow, make an impact, and work with people who care, we'd love to meet you!

ABOUT THE ROLE

As a Middle Data Engineer, you’ll build and optimize ETL pipelines and cloud data solutions that provide reliable insights for data scientists and analysts. You’ll tackle complex data challenges, collaborate with cross-functional teams, and grow your expertise in Python, Airflow, Spark, and AWS in a dynamic, innovative environment.

WHAT YOU WILL DO

  • Build and support ETL pipelines;
  • Monitor data pipelines, identify bottlenecks, optimize data processing and storage for performance and cost-effectiveness;
  • Collaborate effectively with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders;
  • Use Terraform to build AWS infrastructure;
  • Analyze data sources and build cloud data warehouse and data lake solutions.

MUST HAVES

  • You must be authorized to work for ANY employer in the US (e.g., Green Card holders, TN visa holders, GC EAD, H4 EAD, U4U with EAD), as we are unable to sponsor employment visas or take over existing sponsorship at this time;
  • 3+ years of professional experience with Python;
  • 3+ years of professional experience in a Data Engineering role;
  • Proficiency in programming languages commonly used in data engineering such as Python, SQL, and optionally Scala, for working with data processing frameworks like Spark and libraries like Pandas;
  • Proficiency in designing, deploying, and managing data pipelines using Apache Airflow for workflow orchestration and scheduling;
  • Ability to design, develop, and optimize ETL processes to move and transform data from various sources into the data warehouse, ensuring data quality, reliability, and efficiency;
  • Knowledge of big data technologies and frameworks such as Apache Spark for processing large volumes of data efficiently;
  • Extensive hands-on experience with various AWS services relevant to data engineering, including but not limited to Amazon MWAA, Amazon S3, Amazon RDS, Amazon EMR, AWS Lambda, AWS Glue, Amazon Redshift, AWS Data Pipeline, Amazon DynamoDB;
  • Deep understanding and practical experience in building and optimizing cloud data warehousing solutions;
  • Ability to monitor data pipelines, identify bottlenecks, and optimize data processing and storage for performance and cost-effectiveness;
  • Excellent communication skills to collaborate effectively with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders;
  • Bachelor’s degree in computer science / engineering or other technical field, or equivalent experience;
  • Upper-intermediate English level.
NICE TO HAVES

  • Familiarity with the fintech industry, understanding of financial data, regulatory requirements, and business processes specific to the domain;
  • Documentation skills to document data pipelines, architecture designs, and best practices for knowledge sharing and future reference;
  • Experience with GCP services relevant to data engineering;
  • Snowflake;
  • OpenSearch, Elasticsearch;
  • Jupyter for data analysis;
  • Bitbucket, Bamboo;
  • Terraform.
PERKS AND BENEFITS

  • Professional growth: Accelerate your professional journey with mentorship, TechTalks, and personalized growth roadmaps.
  • Competitive compensation: We match your ever-growing skills, talent, and contributions with competitive USD-based compensation and budgets for education, fitness, and team activities.
  • A selection of exciting projects: Join projects with modern solutions development and top-tier clients that include Fortune 500 enterprises and leading product brands.
  • Flextime: Tailor your schedule for an optimal work-life balance, with the option of working from home or going to the office, whatever makes you the happiest and most productive.
