Hadoop Architect

Openkyber • NJ, United States
1 day ago
Job type
  • Temporary
Job description

Title: Full Stack Java Developer

Location: Iselin, NJ (Hybrid, 3 days onsite)

Duration: 12+ Months (W2 Contract)

We are seeking an experienced Full Stack Java Developer with a strong emphasis on backend development, big data engineering, and AWS cloud technologies. In this role, you will design, build, and optimize high-performance backend services and large-scale data processing pipelines within a fast-paced financial domain environment. The ideal candidate will have hands-on expertise with Java, Hadoop/Spark ecosystems, ETL frameworks, and AWS services, along with the ability to work on both development and data engineering initiatives. You will collaborate with cross-functional teams to deliver scalable, secure, and robust data solutions that support enterprise-level applications and analytics platforms.

Key Responsibilities

  • Develop and enhance backend services using Java, ensuring scalability, security, and performance.
  • Design and implement big data pipelines leveraging Hadoop, Spark, Hive, and distributed processing frameworks (an illustrative sketch follows this list).
  • Build and maintain ETL/ELT workflows using AWS Glue, PySpark, Databricks, and related cloud-native tools.
  • Work with structured and unstructured data to build data models, transformation logic, and processing workflows.
  • Implement real-time and batch data integration solutions using Kafka, data lakes, and cloud platforms.
  • Deploy and monitor data and application workloads on AWS (S3, Lambda, Redshift, CloudWatch).
  • Collaborate with DevOps teams to automate CI/CD pipelines using Jenkins, Git, Autosys, and Airflow.
  • Troubleshoot production issues, optimize performance, and ensure reliability of data and application systems.
  • Participate in code reviews, architectural discussions, and design documentation.
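
As a purely illustrative sketch (not code from this posting), the following shows the kind of Spark batch pipeline the second responsibility describes, using Spark's Java API with Hive support. The table name trades, the columns symbol and notional, and the S3 output path are all hypothetical:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class TradeAggregationJob {
    public static void main(String[] args) {
        // enableHiveSupport lets Spark resolve tables registered in the Hive metastore.
        SparkSession spark = SparkSession.builder()
                .appName("TradeAggregationJob")
                .enableHiveSupport()
                .getOrCreate();

        // Read a hypothetical Hive table of raw trade records.
        Dataset<Row> trades = spark.table("trades");

        // Filter out invalid rows and aggregate notional value per symbol.
        Dataset<Row> bySymbol = trades
                .filter(col("notional").gt(0))
                .groupBy(col("symbol"))
                .sum("notional");

        // Persist the result as Parquet to a hypothetical S3 data-lake path.
        bySymbol.write()
                .mode("overwrite")
                .parquet("s3a://example-bucket/curated/notional_by_symbol/");

        spark.stop();
    }
}
```

A job like this would typically be packaged as a fat JAR and launched with spark-submit; the real-time path mentioned above (Kafka with Spark Structured Streaming) follows the same Dataset API.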

Ideal Candidate Profile

  • Strong backend Java development background.
  • Deep knowledge of the big data ecosystem including Hadoop, Hive, Spark, and HDFS (a short HDFS sketch follows this list).
  • Experience building and running workloads on AWS cloud.
  • Proven experience creating data pipelines, handling large datasets, and supporting enterprise ETL workflows.
  • Solid understanding of data modeling, transformations, and performance tuning.
  • Ability to work effectively in a hybrid onsite/remote environment.
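
As a small, hedged illustration of the HDFS familiarity listed above, the sketch below lists files in a hypothetical /data/raw landing directory using the standard org.apache.hadoop.fs API; the directory path is an assumption, and the namenode address comes from core-site.xml on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsListing {
    public static void main(String[] args) throws Exception {
        // Reads fs.defaultFS (the namenode address) from core-site.xml.
        Configuration conf = new Configuration();
        try (FileSystem fs = FileSystem.get(conf)) {
            // List files under a hypothetical raw-data landing directory.
            for (FileStatus status : fs.listStatus(new Path("/data/raw"))) {
                System.out.printf("%s\t%d bytes%n", status.getPath(), status.getLen());
            }
        }
    }
}
```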