Automation Jobs in Grand Prairie, TX
- Automation Tester (Playwright) (Experis, Irving, TX)
- Senior QA Automation Engineer (Peyton Resource Group, Irving, TX)
- Automation Engineer (eTeam, Irving, TX)
- QA Automation w/ Playwright (Staffing the Universe, Arlington, TX)
- Automation Tester / QA (Central Business Solutions, Irving, TX)
- Security Automation Engineer (Secur-Serv, Dallas-Fort Worth, TX)
- Automation Engineer (Intellisoft Technologies, Irving, TX)
- Field Service Engineer - Automation Industry (ProAutomated Inc., Arlington, TX)
- Traveling Project Manager, Building Automation (Divcon Controls, Irving, TX)
- Manager, Detection Engineering and Security Automation (Gartner, Irving, TX)
- QA Automation (MassGenics, Irving, TX)
- Automation Applications Technician (Olympus Controls, Irving, TX)
- Testing / QA - Test Automation Engineer, Senior (Mastech Digital, Irving, TX)
- QA Automation (Innova Solutions, Irving, TX)
- Automation Engineer (Modine, Grand Prairie, TX)
- Automation Engineer (Mindlance, Irving, TX)
- Functional + Automation (Selenium) Tester - Denver, CO (Capgemini, Irving, TX)

The average salary range is between $64,523 and $135,900 per year, with the average salary hovering around $95,000 per year.
Highest-paying jobs in the area:
- physician recruiter (from $62,712 to $250,000 per year)
- flooring installer (from $39,000 to $234,000 per year)
- vp of engineering (from $215,625 to $225,000 per year)
- dentist (from $110,945 to $220,000 per year)
- psychiatrist (from $72,000 to $200,175 per year)
- medical director (from $20,000 to $200,000 per year)
- dog groomer (from $38,675 to $200,000 per year)
- health psychologist (from $109,200 to $200,000 per year)
- nurse practitioner (from $101,283 to $197,500 per year)
- director of software engineering (from $160,950 to $195,950 per year)
Highest-paying cities for similar roles:
- Rockford, IL (from $94,120 to $180,000 per year)
- Palm Bay, FL (from $90,000 to $179,000 per year)
- Green Bay, WI (from $90,284 to $178,598 per year)
- Santa Maria, CA (from $102,674 to $178,294 per year)
- Santa Clarita, CA (from $103,369 to $178,268 per year)
- Santa Rosa, CA (from $102,826 to $177,934 per year)
- Oceanside, CA (from $140,400 to $177,736 per year)
- Santa Ana, CA (from $102,428 to $177,568 per year)
- Cambridge, MA (from $110,000 to $175,000 per year)
- San Bernardino, CA (from $95,000 to $170,960 per year)
The average salary range is between $66,063 and $135,000 per year, with the average salary hovering around $94,936 per year.
Automation Tester (Playwright)
Experis · Irving, Texas, US · Full-time
Position Title: Automation Tester (with Playwright)
Start Date: ASAP
Approved On-Site Location(s): Irving, TX
On-Site Schedule: Hybrid (3 days/week in office)
# of Openings: 1
Skillset: API, UI, Performance Testing, Playwright
Overview:
The IAM Tools Engineering team (ITE) is looking for a highly motivated and skilled AI Test Engineer to join our growing team. The AI Test Engineer will be responsible for designing, developing, and executing comprehensive test strategies and plans for AI/ML models and applications. This role requires a strong understanding of AI principles, machine learning workflows, data quality, and testing methodologies specific to intelligent systems.

The ideal candidate will work closely with the ITE engineering team and application users, have experience developing tests iteratively, be passionate about quality, possess excellent problem-solving skills, and thrive in a collaborative, fast-paced environment. This person must have a proven track record of designing and developing test frameworks and executing tests both manually and in an automated manner, with a strong technical focus. Critical success factors include the ability to engage effectively in a matrixed organization, develop partnerships with business, analyst, and development teams, and drive the quality assurance strategy for a cutting-edge AI-enabled self-service identity solution.
Responsibilities include :
- Develop and implement robust test plans and strategies tailored for AI/ML systems, including defining test objectives, methodologies, and scenarios.
- Review design specifications to understand the scope, requirements, and requested functions of the software product, partnering with the team during story delivery to ensure stories are implemented accurately.
- Identify appropriate parameters, functions, and data to test and validate requirements and acceptance criteria, ensuring development and deployment meet the business needs.
- Create effective granular and end-to-end test scenarios for the user interface, data transactions, integrated systems, and functional aspects of web-based software applications.
- Collaborate with engineers and product managers to understand AI model requirements, use cases, and potential risks.
- Define quality metrics for AI models, focusing on aspects like accuracy, fairness, robustness, and interpretability.
- Design and create detailed test cases for various types of AI testing, including:
  - Functional Testing: validating AI model outputs against expected behavior, prompt-response accuracy, and contextual understanding.
  - Performance Testing: assessing model speed, latency, throughput, and scalability under different load conditions.
  - Bias and Fairness Testing: identifying and mitigating biases in training data and model predictions to ensure ethical and equitable AI behavior.
  - Robustness Testing: evaluating model resilience against various inputs, including malicious or unexpected data.
  - Data Quality Testing: ensuring the integrity, consistency, relevance, and representativeness of data used for training and testing AI models.
  - Security and Privacy Testing: identifying vulnerabilities in AI systems and ensuring compliance with data protection standards (e.g., GDPR, CCPA).
  - Integration Testing: verifying seamless integration of AI models with other systems, APIs, and existing software.
- Execute manual and automated test cases, analyzing results and identifying defects, anomalies, and inconsistencies.
- Implement and maintain automated testing frameworks and tools specifically designed for AI/ML models and pipelines.
- Develop test scripts using programming languages commonly used in AI/ML (e.g., Python, Java, JavaScript).
- Utilize AI-specific testing tools and frameworks to enhance testing efficiency and accuracy.
- Integrate testing processes into Continuous Integration / Continuous Deployment (CI/CD) pipelines.
- Accurately document, track, and prioritize defects and issues found during testing.
- Work closely with development teams to ensure timely resolution of identified problems.
- Prepare comprehensive test reports, providing actionable insights into AI model performance and quality.
- Work with leaders, stakeholders, SMEs, the scrum team, project managers, and others to ensure success against stated objectives and delivery of a successful product.
- Collaborate with internal and external subject matter experts, developers, and other QA Engineers
- Produce effective test scenarios to capture and communicate business processes and requirements.
- Communicate daily status, along with defects/issues found, at daily standups.
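As an illustration of the "prompt-response accuracy" responsibility above, such a check can be sketched as a small harness that scores a model's answers against expected outputs. This is only a sketch: `fake_model`, the test cases, and the pass threshold are invented for the example (they are not part of the posting); in practice the stub would be replaced by a call to the team's real inference endpoint.

```python
def fake_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call (e.g., an HTTP
    request to an inference API); returns canned answers."""
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
        "Largest planet?": "Jupiter",
    }
    return canned.get(prompt, "I don't know")


def prompt_accuracy(model, cases) -> float:
    """Fraction of prompts whose response contains the expected
    answer (case-insensitive substring match)."""
    hits = sum(1 for prompt, expected in cases
               if expected.lower() in model(prompt).lower())
    return hits / len(cases)


cases = [
    ("What is 2 + 2?", "4"),
    ("Capital of France?", "Paris"),
    ("Largest planet?", "Jupiter"),
    ("Smallest prime?", "2"),   # the stub misses this one
]

score = prompt_accuracy(fake_model, cases)
print(f"prompt-response accuracy: {score:.2f}")  # prints: prompt-response accuracy: 0.75
assert score >= 0.7, "accuracy below release threshold"
```

A real harness would typically also log per-case failures and feed the score into the CI/CD gate mentioned in the responsibilities above.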
Required Qualifications:
Desired Qualifications: