Data engineer Jobs in Santa Clarita, CA
- Data Engineer 3 | V R Della Infotech Inc | Santa Clara County, California, USA
- Databricks - Data Engineer | Innova Solutions | Remote, California
- Lead Data Platform Engineer / Remote | Motion Recruitment | California, United States
- Senior Big Data Engineer | Highmark Health | CA, Working at Home, California
- Sr Data QA Engineer | Graebel Companies Inc. | Remote, CA
- Data Architect | ConsultNet | Valencia, CA, United States
- Sr Field Application Engineer - Data & Devices | TE Connectivity | CA, US
- Senior Data Analyst | Abbott | Sylmar, CA, US
- Senior Data Engineer | Mindlance | Remote, CA
- Python Data Engineer | E-Solutions | California, United States
- Big Data Hadoop Engineer | Harvey Nash | CA
- Principal Site Reliability Engineer Cortex Data Lake | Palo Alto Networks | Santa Clara County, California, USA
- Data Analytics Bootcamp | General Assembly | Santa Clarita, CA, United States
- Lead Engineer, Data Platform | Sephora | Remote, CA, US
- Data Engineer-Hybrid | Logix Federal Credit Union | Valencia, CA, United States
- Data Scientist | Action Urgent Care | CA, US
- Data Engineer | ZEST Consulting LLC | Santa Clara County, California, USA
- Data Engineer | Rodan Energy Solutions | 700 University Ave (100% Remote), CA
- Data Engineer, Consultant | Blue Shield of California | CA, United States

Data Engineer 3
V R Della Infotech Inc | Santa Clara County, California, USA

Description :
IMPORTANT : Suppliers should not submit workers whose physical residence is within the following states, due to Intuitive's tax and operating entity structure :
Alabama, Arkansas, Delaware, Florida, Indiana, Iowa, Louisiana, Maryland, Mississippi, Missouri, Oklahoma, Pennsylvania, South Carolina, and Tennessee.
Please interpret this as Intuitive policy with which all suppliers are required to comply.
- Important Notes to supplier :
- Subvending is not allowed; the worker needs to be on your direct W2.
- A proper technical pre-screening must be performed.
- A recruiter screening summary must be included on top of the resume.
- Including a skill matrix and a write-up from the worker on these skills is a plus.
- All past projects should include durations and locations.
- The current location must be mentioned on the resume.
Actual Title of the role : Data Engineer
Duration : 6 Months
Contract / possibility of conversion : NA
Max bill rate : BR / hr.
Onsite / Hybrid / Remote : Hybrid
Only Locals / Non-locals can be submitted : Locals
Mode of interview : Zoom
No. of rounds of interview : 2
New JP or Backfill position : New position
Job Description : Data Engineer
We are seeking a skilled Data Application Engineer to design, build, and maintain data-driven applications and pipelines that enable seamless data integration, transformation, and delivery across systems. The ideal candidate will have a strong foundation in software engineering, database technologies, and cloud data platforms, with a focus on building scalable, robust, and efficient data applications.
Key Responsibilities :
Develop Data Applications : Build and maintain data-centric applications, tools, and APIs to enable real-time and batch data processing.
Data Integration : Design and implement data ingestion pipelines, integrating data from various sources such as databases, APIs, and file systems.
Data Transformation : Create reusable ETL / ELT pipelines to process and transform raw data into consumable formats using tools like Snowflake, DBT, or Python (see the sketch after this list).
Collaboration : Work closely with analysts and stakeholders to understand requirements and translate them into scalable solutions.
Documentation : Maintain comprehensive documentation for data applications, workflows, and processes.
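
As a rough illustration of the reusable batch transformation work described above, here is a minimal Python sketch that extracts a raw file, transforms it, and loads it into a local database. The file name raw_orders.csv, the column names, and the SQLite target are assumptions standing in for the actual Snowflake / DBT / Python stack; they are not details from this posting.

```python
# Minimal illustrative ETL step. Assumptions: a hypothetical raw_orders.csv
# source and a local SQLite target stand in for the real Snowflake/DBT stack.
import sqlite3

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Ingest raw data from a file source.
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Normalize column names and derive a consumable total_amount column.
    df = df.rename(columns=str.lower)
    df["total_amount"] = df["quantity"] * df["unit_price"]
    return df[["order_id", "customer_id", "total_amount"]]

def load(df: pd.DataFrame, conn: sqlite3.Connection) -> None:
    # Write the transformed data to a target table, replacing earlier runs.
    df.to_sql("orders_clean", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(transform(extract("raw_orders.csv")), conn)
```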
Required Skills and Qualifications :
Education : Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Programming : Proficiency in programming languages such as Python, C#, and ASP.NET (Core).
Databases : Strong understanding of SQL and database design, and experience with relational databases (e.g., Snowflake, SQL Server).
Data Tools : Hands-on experience with ETL / ELT tools and frameworks such as Apache Airflow (DBT nice to have); see the sketch after this list.
Cloud Platforms : Familiarity with cloud platforms such as AWS, Azure, or Google Cloud and their data services (e.g., S3, AWS Lambda).
Data Pipelines : Experience with real-time data processing tools (e.g., Kafka, Spark) and batch data processing.
APIs : Experience designing and integrating RESTful APIs for data access and application communication.
Version Control : Knowledge of version control systems like Git for code management.
Problem-Solving : Strong analytical and problem-solving skills, with the ability to troubleshoot complex data issues.
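
To make the Apache Airflow expectation above concrete, below is a minimal DAG sketch, assuming Airflow 2.x; the DAG id, schedule, and task bodies are illustrative assumptions only and are not taken from this posting.

```python
# Minimal Apache Airflow 2.x DAG sketch. DAG id, schedule, and task bodies
# are illustrative assumptions only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    # Placeholder for pulling data from source systems (databases, APIs, files).
    print("ingesting raw data")

def transform():
    # Placeholder for applying ETL/ELT transformations.
    print("transforming data")

with DAG(
    dag_id="example_data_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    # Run the transformation only after ingestion succeeds.
    ingest_task >> transform_task
```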
Preferred Skills :
Knowledge of containerization tools like Docker and orchestration platforms like Kubernetes.
Experience with BI tools like Tableau, Power BI, or Looker.
Soft Skills :
Excellent communication and collaboration skills to work effectively in cross-functional teams.
Ability to prioritize tasks and manage projects in a fast-paced environment.
Strong attention to detail and commitment to delivering high-quality results.
Additional Details
Key Skills
Apache Hive, S3, Hadoop, Redshift, Spark, AWS, Apache Pig, NoSQL, Big Data, Data Warehouse, Kafka, Scala
Employment Type : Full Time
Vacancy : 1