Description :
- Designs, develops, and implements Hadoop eco-system based applications to support business requirements.
- Follows approved life cycle methodologies, creates design documents, and performs program coding and testing.
- Resolves technical issues through debugging, research, and investigation.
Experience / Skills Required :
- Bachelor's degree in Computer Science, Information Technology, or a related field, plus 5 years of experience in computer programming, software development, or a related area.
- 3+ years of solid Java experience and 2+ years of experience in the design, implementation, and support of big data solutions in Hadoop using Hive, Spark, Drill, Impala, and HBase.
- Hands-on experience with Unix, GCP, and relational databases.
- Experience with @Scale a plus.
- Strong communication and problem-solving skills.
Additional Information :
- Hybrid: 2-3 days per week in office.
Top 3 Skills Needed or Required :
- Strong skills in Java, Spark, and cloud technologies.
- A quick learner who is proactive about picking up new technologies; flexibility is required in the candidate.
What are the day-to-day responsibilities?
- Data analysis and monitoring.
- Building Spark pipelines for customer data.
- Implementing privacy functions for new states.
What is the makeup of the team?
Six people: 3 senior, 2 mid-level, and 1 junior.
What project or initiative will they be working on?
Customer data lake privacy.
Required Skills : Data Analysis
Additional Skills : Data Engineer
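For candidates unfamiliar with the privacy work described above, here is a minimal sketch of what a per-state privacy function might look like. Every field name, the state list, and the masking rules are assumptions made for illustration only; they are not the team's actual implementation.

```python
import hashlib

# Hypothetical list of states where a privacy rule is in effect.
PRIVACY_STATES = {"CA", "VA", "CO"}

def hash_email(email: str) -> str:
    """Replace an email with a stable SHA-256 digest so records stay joinable."""
    return hashlib.sha256(email.lower().encode("utf-8")).hexdigest()

def apply_privacy(record: dict) -> dict:
    """Return a copy of the record with PII masked when the state is covered."""
    if record.get("state") not in PRIVACY_STATES:
        return dict(record)  # state not covered: pass through unchanged
    masked = dict(record)
    masked["email"] = hash_email(record["email"])
    masked["ssn"] = "***-**-" + record["ssn"][-4:]  # keep last four digits
    return masked

row = {"id": 1, "state": "CA", "email": "a@b.com", "ssn": "123-45-6789"}
out = apply_privacy(row)
```

In the role itself, logic like this would typically run inside a Spark pipeline, for example mapped over a customer DataFrame or wrapped as a UDF, rather than over single Python dicts.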