Data Architect
Infosys is seeking an AWS Data Architect. This position will significantly contribute to identifying best-fit architectural solutions for one or more projects, developing application designs, and providing regular support and guidance to project teams on complex coding, issue resolution, and execution.
You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in sales activities and various pursuits focused on our clients' business needs.
You will be part of a learning culture, where teamwork and collaboration are encouraged, excellence is rewarded, and diversity is respected and valued.
Required Qualifications:
Candidate must be located within commuting distance of Denver, CO or be willing to relocate to the area. This position may require travel in the US.
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 7 years of experience in Big Data on AWS.
Preferred Qualifications:
Deploy enterprise-ready, secure and compliant data-oriented solutions leveraging Data Warehouse, Big Data and Machine Learning frameworks
Optimize data engineering and machine learning pipelines
Review architectural designs to ensure consistency and alignment with the defined target architecture and adherence to established architecture standards
Support data and cloud transformation initiatives
Contribute to our cloud strategy based on prior experience
Understand the latest technologies in a rapidly innovative marketplace
Independently work with all stakeholders across the organization to deliver both point and strategic solutions
Assist solution providers with the definition and implementation of technical and business strategies
Should have prior experience in working as a Data Warehouse / Big Data Architect.
Experience with the Apache Spark processing framework and Spark programming languages such as Scala, Python, or advanced Java, with sound knowledge of shell scripting.
Should have experience in both functional programming and Spark SQL programming, processing terabytes of data.
Specifically, this experience must include writing Big Data engineering jobs for large-scale data integration in AWS. Prior experience in writing machine learning data pipelines using a Spark programming language is an added advantage.
Advanced SQL experience including SQL performance tuning is a must.
Should have worked with other big data frameworks such as MapReduce, HDFS, Hive / Impala, and AWS Athena.
Experience in logical and physical table design in a Big Data environment to suit processing frameworks.
Knowledge of using, setting up, and tuning resource management frameworks such as YARN, Mesos, or standalone Spark.
Experience in writing Spark streaming jobs (producers / consumers) using Apache Kafka or AWS Kinesis is required.
Should have knowledge of a variety of data platforms such as Redshift, S3, Teradata, HBase, MySQL / Postgres, and MongoDB.
Experience in AWS services such as EMR, Glue, S3, Athena, DynamoDB, IAM, Lambda, CloudWatch, and Data Pipeline.
Must have used these technologies to deploy solutions in the areas of Big Data and machine learning.
Experience in AWS cloud transformation projects is required.
Telecommunications industry experience is an added advantage.