Role: Java Big Data Developer (Hybrid - 3 days in office, 2 days home). Role Summary: A senior Java engineer with a very strong Java understanding along with subject matter expertise in big data. Implement application features/functions using Java, Spring, and Big Data technologies. Tune various Big Data...
A&I Data Service helps to create data-driven organizations by providing the right solutions in Big Data technologies, Master Data Management, and Business Intelligence Analytics, taking advantage of cloud-based computing and storage. The Big Data Cloud Solution Architect will be responsible for gui...
The hands-on Technical Specialist, Big Data Engineering, is responsible for designing and architecting highly scalable data storage solutions and platforms/tools for vehicle data, manufacturing, sales, finance, and other data systems. Data Platform Leadership: Lead the architecture, design, and develo...
Design and implement data pipelines and transformations using big data technologies such as Spark, Hadoop, and related ecosystems. Big Data technologies, including Spark and SQL databases. Please consider joining our technology team at Throtle as a Big Data Developer. Ensure data integrity and qua...
Development of big data technologies like Apache Spark and Azure Databricks. Single core platform, open architecture, designed for change, itemized cost metrics, automated data lineage. Understanding of information modelli...
Architect and design large-scale, distributed big data solutions using Java and big data technologies to handle high-volume data processing and analytics. Expertise in the Big Data ecosystem (Cloudera Distribution) using Spark and MapReduce. Big Data ecosystem (good understanding of Hadoop, preferable cl...
Strong programming skills in PySpark in a Big Data environment. Familiarity with big data processing tools and techniques. Knowledge of data modeling and data design is essential. Should be familiar with data warehouse concepts. ...
Experience in building Big Data/ML/AI applications and optimizing data pipelines, architectures, and data sets. Build data pipelines and jobs using Spark and Databricks to ingest into a Data Lake/Delta Lake on AWS. Experience with big data tools: Apache Spark, Databricks, Parquet/Delta, PySpark, SparkSQL,...
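As a rough illustration of the ingest task described above, a minimal PySpark sketch that reads Parquet from S3 and appends to a Delta table might look like the following; the bucket, paths, and column names are placeholders, and the delta format assumes a Databricks runtime or the delta-spark package.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest_orders").getOrCreate()

# Hypothetical raw Parquet source on S3
raw = spark.read.parquet("s3://example-bucket/raw/orders/")

# Basic cleanup: de-duplicate on a business key and add a partition column
cleaned = (raw
           .dropDuplicates(["order_id"])
           .withColumn("ingest_date", F.current_date()))

# Append into a hypothetical Delta Lake location, partitioned by ingest date
(cleaned.write
 .format("delta")
 .mode("append")
 .partitionBy("ingest_date")
 .save("s3://example-bucket/delta/orders/"))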
A seasoned practitioner with experience working with enterprise data warehouses and with data-intensive systems. Experience developing metadata-driven frameworks for data processing/transformation and building real-time processing solutions. Data modeling experience in a financial se...
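To make the metadata-driven idea above concrete, here is a small sketch in which a hypothetical configuration list drives reads, derived columns, and writes; the paths, keys, and expressions are illustrative only.

from pyspark.sql import SparkSession, functions as F

# Hypothetical pipeline metadata: each entry describes one transformation step
PIPELINE_METADATA = [
    {"source": "s3://example-bucket/raw/trades/",
     "target": "s3://example-bucket/curated/trades/",
     "dedupe_keys": ["trade_id"],
     "derived": {"trade_date": "to_date(trade_ts)"}},
]

def run_pipeline(spark, metadata):
    for step in metadata:
        df = spark.read.parquet(step["source"])
        df = df.dropDuplicates(step["dedupe_keys"])
        for name, sql_expr in step["derived"].items():
            df = df.withColumn(name, F.expr(sql_expr))  # derived column from a SQL expression
        df.write.mode("overwrite").parquet(step["target"])

if __name__ == "__main__":
    run_pipeline(SparkSession.builder.appName("metadata_driven").getOrCreate(),
                 PIPELINE_METADATA)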
AWS, Big Data, Scala, Python, Spark, performance tuning, big data processing, infrastructure maintenance. Need to develop and maintain applications using Big Data frameworks and AWS technologies. ...
If interested please reply to me with your updated resume or feel free to reach out to me for more details at (949) 522-6255. Hybrid - 3 Days Onsite and 2 days Remote. Job Descrip...
Design and implement big data pipelines to ingest and process data in real time, and monitor for missed or erroneous data in progress to ensure that data reaches the end system. Master’s in Computer Science/Applications, Information Technology/Systems or Electronics/Electrical Engineering + minimum 1 year ...
Below are the details: Role: Big Data Engineer with AWS. Location: Warren, NJ. Position Type: Contract on W2 only. Experience Required: 8+ years of experience. Eligible Visas: USC/GC/H4-EAD. Note: need only genuine profiles. Job Description: "Data Engineer will be responsible for the implementa...
Must have hands-on experience in migrating from standard relational databases to non-relational databases leveraging the Big Data stack and alternate mechanisms like HDFS, Spark, Neo4j, Snowflake, Airflow, etc. Stay current with industry trends and emerging technologies in both traditional databases and ...
Key Skills: Azure, Big Data, and Java/Scala knowledge is mandatory. You will be expected to be able to work independently, producing sound Kubernetes-based microservices or extending Scala-based big data frameworks. Your team: Data: Storage accounts, PostgreSQL, knowledge of Databricks AP...
Experience in Big Data technologies; real-time data processing platform (Spark Streaming) experience would be an advantage. A good Big Data resource with the below skill set: ...
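For context on the Spark Streaming skill mentioned above, a minimal Structured Streaming sketch is shown below; the Kafka broker, topic, and checkpoint path are placeholders, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

# Read a stream of events from a hypothetical Kafka topic
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()
          .selectExpr("CAST(value AS STRING) AS payload", "timestamp"))

# Count events per one-minute window, tolerating 10 minutes of late data
counts = (events
          .withWatermark("timestamp", "10 minutes")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

# Write to the console for illustration; a real job would target a durable sink
query = (counts.writeStream
         .outputMode("append")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .start())

query.awaitTermination()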
Databricks Certified Data Engineer Professional certification. Understanding of the Databricks platform and developer tools like Apache Spark, Delta Lake, MLflow, and the Databricks CLI and REST API. Knowledge of general data modeling concepts. Ability to ensure that data pipelines are secure, relia...
Significant experience in data analysis using SQL and Excel, and creating data mappings, including source, target, and transformations. Financial Domain: Banking, Reference Data, Traded Products. ...
A senior Java engineer with a very strong Java understanding along with subject matter expertise in big data. Implement application features/functions using Java, Spring, and Big Data technologies. Tune various Big Data processes for egress/ingress optimization, monitoring, and scaling. At least 5 years of w...
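As an illustration of the tuning responsibility mentioned above, the sketch below shows the kind of Spark settings such a role adjusts for ingress/egress and scaling; the values and paths are placeholders, not recommendations from the posting.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("tuned_batch_job")
         .config("spark.sql.shuffle.partitions", "400")        # size shuffles to the cluster
         .config("spark.sql.adaptive.enabled", "true")         # adaptive query execution
         .config("spark.serializer",
                 "org.apache.spark.serializer.KryoSerializer") # faster serialization
         .config("spark.dynamicAllocation.enabled", "true")    # scale executors with load
         .getOrCreate())

# Hypothetical ingress: read raw events, then control egress file sizes on write
df = spark.read.parquet("s3://example-bucket/raw/events/")
(df.repartition(200)
   .write.mode("overwrite")
   .parquet("s3://example-bucket/optimized/events/"))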
The ideal candidate will have an eye for building and optimizing data systems and will work closely with our systems architects, data scientists, and analysts to help direct the flow of data within the pipeline and ensure consistency of data delivery and utilization across multiple projects. Citi's ...
Big Data Hadoop, Hive, Java, Python and Spark experience. ...