Hiring : W2 Candidates Only
Visa : Open to any visa type with valid work authorization in the USA
Key Responsibilities :
- Design and implement scalable data ingestion and transformation pipelines using PySpark or Scala, Hadoop, Hive, and Dremio.
- Build and manage Kafka-based pipelines for reliable data streaming and integration.
- Work with on-prem Hadoop ecosystems (Cloudera, Hortonworks, MapR) or cloud-native big data platforms.
- Develop and maintain RESTful APIs using Python (FastAPI, Flask, or Django) to expose data and services.
- Collaborate with data scientists, ML engineers, and platform teams to ensure seamless data flow and system performance.
- Monitor, troubleshoot, and optimize production data pipelines and services.
- Ensure security, scalability, and reliability across all data engineering components.
- (Optional but valuable) Contribute to the design and deployment of AI-driven Retrieval-Augmented Generation (RAG) systems for enterprise use cases.
Required Skills & Qualifications :
- Experience in Big Data Engineering.
- Strong hands-on experience with PySpark or Scala.
- Deep expertise in on-prem Hadoop distributions (Cloudera, Hortonworks, MapR) or cloud-based big data platforms.
- Proficiency in Kafka batch processing, Hive, and Dremio.
- Solid understanding of REST API development using Python frameworks.
- Familiarity with cloud platforms (Google Cloud Platform, AWS, or Azure).
- Experience or exposure to AI and RAG architectures is a plus.
- Excellent problem-solving, communication, and collaboration skills.
Employment Type : Full Time
Experience : years
Vacancy : 1