Talent.com

Java developer Jobs in Buckeye, AZ


Last updated: 1 hour ago

Remote Software Engineering, Data Science, and Systems Design Experts - AI Trainer ($60-$100 per hour)

Mercor
Buckeye, Arizona, US
Remote
Full-time +1

Location: US-Based and Non-US-Based. Type: Full-time or Part-time Contract Work. Fluent Language Skills Required: English. Why This Role Exists: Mercor partners with leading AI teams to im...


Remote Senior Java Engineer - AI Trainer

SuperAnnotate
Buckeye, Arizona, US
Remote
Full-time

As a Senior Java Engineer, you will work remotely on an hourly paid basis to review AI-generated Java code, architectural solutions, and technical explanations, as well as generate high-quality ref...

Java Software Engineer

SynergisticIT
Buckeye, AZ, United States
Full-time

Almost 390,000 tech employees have been laid off since 2022, and layoffs are still ongoing. This has led hundreds of thousands of laid-off tech employees to compet...


Online Data Verification Technician (Work-at-Home)

FocusGroupPanel
Buckeye, Arizona, United States
Remote
Part-time

Work From Home, Entry-Level Data Entry Clerk as a Research Participant. We are looking for people who want to work remotely from home. You'll need an Internet connection and a mobile device or comput...


Remote Software Engineering, Data Science, and Systems Design Experts - AI Trainer ($60-$100 per hour)

Mercor
Buckeye, Arizona, US
1 hour ago
Job type
  • Full-time
  • Part-time
  • Remote
Job description
  • Location: US-Based and Non-US-Based
  • Type: Full-time or Part-time Contract Work
  • Fluent Language Skills Required: English

Why This Role Exists

Mercor partners with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems. These systems are used across a wide range of everyday and professional scenarios, and their effectiveness depends on how clearly, accurately, and helpfully they respond to real user questions. In coding and software engineering contexts, conversational AI systems must demonstrate correct reasoning, strong problem-solving ability, and adherence to real-world engineering best practices. This project focuses on evaluating and improving how models reason about code, generate solutions, and explain technical concepts across a variety of programming tasks and complexity levels.

What You'll Do
  • Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
  • Conduct fact-checking using trusted public sources and authoritative references
  • Conduct accuracy testing by executing code and validating outputs using appropriate tools
  • Annotate model responses by identifying strengths, areas of improvement, and factual or conceptual inaccuracies
  • Assess code quality, readability, algorithmic soundness, and explanation quality
  • Ensure model responses align with expected conversational behavior and system guidelines
  • Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines

Who You Are
  • You hold a BS, MS, or PhD in Computer Science or a closely related field
  • You have significant (3+ years) real-world experience in software engineering or related technical roles
  • You are an expert in at least two relevant programming languages (e.g., Python, Java, C++, C, JavaScript, Go, Rust, Ruby, SQL, PowerShell, Bash, Swift, Kotlin, R, TypeScript, HTML/CSS)
  • You are able to solve HackerRank or LeetCode Medium- and Hard-level problems independently
  • You have experience contributing to well-known open-source projects, including merged pull requests
  • You have significant experience using LLMs while coding and understand their strengths and failure modes
  • You have strong attention to detail and are comfortable evaluating complex technical reasoning, identifying subtle bugs or logical flaws

Nice-to-Have Specialties
  • Prior experience with RLHF, model evaluation, or data annotation work
  • Track record in competitive programming
  • Experience reviewing code in production environments
  • Familiarity with multiple programming paradigms or ecosystems
  • Experience explaining complex technical concepts to non-expert audiences

What Success Looks Like
  • You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
  • Your feedback improves the correctness, robustness, and clarity of AI coding outputs
  • You deliver reproducible evaluation artifacts that strengthen model performance
  • Mercor customers trust AI systems to assist reliably with real-world coding tasks

Why Join Mercor

At Mercor, experienced software engineers play a direct role in shaping how AI systems reason about and generate code. This remote role allows you to apply your technical expertise to high-impact AI development work, improving systems used by developers around the world.