Data engineer Jobs in Grand Prairie, TX
- Lead Data Engineer - Jobot (Grand Prairie, TX, United States)
- Senior Data Engineer - Object Technology Solutions Inc (Irving, Texas, United States)
- GCP Data Engineer - Apptad Inc (Irving, TX, United States)
- GCP Data Engineer - Donato Technologies, Inc (Irving, TX, United States)
- Data Engineer (Controls / Automation) - WorldLink (Irving, TX, US)
- Principal Data Engineer - The University of Texas at Austin (Texas)
- Senior Data Engineer - Verizon (Irving, TX, United States)
- Data Engineer - VirtualVocations (Arlington, Texas, United States)
- ML Data Engineer - Aim4Hire (Austin, Texas Metropolitan Area, United States)
- Principal Data Engineer - HireStarter, Inc. (Austin, Texas Metropolitan Area, United States)
- Data Engineer - CVS Health (Irving, TX)
- Data Engineer - ITL USA (Texas, US)
- Data Engineer - Berkley (Las Colinas Boulevard W, Irving, TX)
- Data Engineer - JS Consulting (Irving, TX, United States)
- Principal Data Engineer - Vistra Corporate Services Company (Irving, Texas)
- Lead Data Scientist / Data Engineer - GridMatrix (Austin, Texas Metropolitan Area, United States)
- Lead Data Engineer - Resource Informatics Group Inc (Irving, TX, US)
- Data Engineer - Mastech Digital (Irving, TX, US)
- Senior Data Engineer - OTSI (Irving, Texas, USA)
- Senior Data Engineer - Presidio (Irving, TX, US)
Lead Data Engineer
Jobot (Grand Prairie, TX, United States)
- Full-time
- Permanent
Remote - Lead Data Engineer - up to $190K base - join a team building systems to make data-driven business decisions
This Jobot Job is hosted by: Lucas Watson
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $150,000 - $190,000 per year
A bit about us:
Our client in the financial services industry is seeking a Lead Data Engineer to join their team. This is a full-time, direct-hire, remote role with a base salary of $150K-$190K plus benefits, depending on experience.
Why join us?
This role is ideal for someone who thrives in a dynamic, fast-paced environment, enjoys solving complex data problems, and is passionate about driving innovation in data engineering. If you're looking to make an impact on the financial landscape with cutting-edge data solutions, this could be for you!
Job Details
Core Responsibilities:
- Lead the design and implementation of end-to-end data pipelines, from extraction (API, scraping, pyodbc) to cleansing/transformation (Python, T-SQL) and loading into SQL databases or data lakes.
- Oversee the development of robust data architectures that support efficient querying and analytics, ensuring high-performance and scalable data workflows.
- Collaborate with data scientists, software developers, business intelligence teams, and stakeholders to develop and deploy data solutions that meet business needs.
- Ensure smooth coordination between engineering and other teams to translate business requirements into technical solutions.
- Guide the development of data models and business schemas, ensuring that they are optimized for both relational (3NF) and dimensional (Kimball) architectures.
- Lead the creation of scalable, reliable data models and optimize them for performance and usability.
- Develop and maintain the infrastructure for large-scale data solutions, leveraging cloud platforms (e.g., Azure) and containerization technologies (e.g., Docker).
- Lead the use of modern data platforms such as Snowflake and Fabric, ensuring their effective use in large-scale data solutions.
- Manage and optimize data pipelines using tools such as Apache Airflow, Prefect, DBT, and SSIS, ensuring that all stages of the pipeline (ETL) are efficient, scalable, and reliable.
- Ensure robust testing, monitoring, and validation of all data systems and pipelines.
- Drive continuous improvement in data engineering processes and practices, ensuring they remain cutting-edge, efficient, and aligned with industry best practices.
- Foster a culture of clean code, best practices, and rigorous testing across the team.
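As a rough sketch of the extract/transform/load flow these responsibilities describe, the snippet below wires the three stages together. All table and column names are hypothetical, and SQLite stands in for a production source that would more realistically be reached over an API or pyodbc connection:

```python
import sqlite3

# Hypothetical minimal ETL pipeline: extract raw rows, cleanse/transform
# them in Python, and load the result into a target table.

def extract(conn):
    """Pull raw rows from a (hypothetical) source table."""
    return conn.execute("SELECT id, amount, currency FROM raw_orders").fetchall()

def transform(rows):
    """Cleanse: drop rows with missing amounts, normalize currency codes."""
    return [(i, round(a, 2), c.upper()) for i, a, c in rows if a is not None]

def load(conn, rows):
    """Load cleansed rows into the target table."""
    conn.executemany("INSERT INTO clean_orders VALUES (?, ?, ?)", rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_orders (id INTEGER, amount REAL, currency TEXT)")
    conn.execute("CREATE TABLE clean_orders (id INTEGER, amount REAL, currency TEXT)")
    conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)",
                     [(1, 19.995, "usd"), (2, None, "eur"), (3, 5.0, "usd")])
    load(conn, transform(extract(conn)))
    # The row with a NULL amount is dropped during the transform step.
    print(conn.execute("SELECT COUNT(*) FROM clean_orders").fetchone()[0])  # 2
```

In practice each stage would be a separate task in an orchestrator such as Airflow or Prefect, so failures can be retried per stage rather than rerunning the whole pipeline.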
Experience & Qualifications:
- Strong experience with data pipeline design and implementation, including data extraction, transformation, and loading (ETL) processes.
- Proficiency in SQL (Postgres, SQL Server) and experience with modern data warehouse solutions (e.g., Snowflake, Fabric).
- Expertise in Python for data engineering tasks, including data manipulation (Pandas, NumPy) and workflow management (Dask, PySpark, FastAPI).
- Solid knowledge of cloud platforms (Azure, AWS) and big data technologies (Hadoop, Spark).
- Hands-on experience with Docker, Kubernetes, and containerized environments.
- Strong understanding of dimensional modeling (Kimball), relational database design (3NF), and best practices in data architecture.
- Experience with API development, including building and managing API integrations.
- Proficiency with orchestration tools such as Prefect or Airflow for workflow management.
- Strong focus on testing and validation, ensuring that all data systems meet reliability and performance standards.
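The dimensional-modeling qualification above (Kimball-style star schemas, as opposed to 3NF relational design) can be illustrated with a minimal, entirely hypothetical star schema: one fact table keyed to surrounding dimension tables, queried by joining facts to dimension attributes:

```python
import sqlite3

# Hypothetical Kimball-style star schema: a sales fact table surrounded by
# date and product dimensions. All names are illustrative only.
DDL = """
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales  (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    quantity    INTEGER,
    amount      REAL
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(DDL)
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', 2024)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# Typical dimensional query: aggregate the fact table by a dimension attribute.
total = conn.execute("""
    SELECT p.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY p.category
""").fetchone()
print(total)  # ('Hardware', 29.97)
```

The trade-off the role alludes to: 3NF minimizes redundancy for transactional writes, while the denormalized star layout keeps analytical queries to a single fact-to-dimension join per attribute.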
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.