AI Test Engineer

Testing with AI / Agentic AI

Nationwide

Are you a Test/QA Engineer who gets excited about smart test automation, and ready for its next generation? Not just maintaining scripts, but using AI and Agentic AI to test software more autonomously, faster, and smarter: from UI and API to end‑to‑end flows, regression, and exploratory testing?
Do you want to build a testing approach where LLMs and test agents generate test cases, “understand” applications, reproduce bugs, and summarize test results as if you had an extra senior tester on your team? Then we’re looking for you as an AI Test Engineer (Testing with AI).

Your mission

  • Develop and maintain an AI‑enabled test framework where LLMs and (semi‑)autonomous agents can plan, execute, and evaluate tests.
  • Use Agentic AI for exploratory testing: autonomously walk through journeys, detect anomalies, and reproduce findings with steps and evidence (logs/screenshots/video).
  • Automatically generate test cases, edge cases, and regression sets using requirements, user stories, incidents, code changes, and product analytics.
  • Build “AI test oracles” that automatically determine whether outcomes are “good” (assertions, invariants, contracts, golden paths, snapshot comparison, heuristics).
  • Integrate AI into existing tooling (e.g., Playwright/Cypress/Selenium, Postman/REST‑assured, k6/JMeter) and CI/CD pipelines.
  • Create synthetic test data and scenarios (including privacy‑by‑design) so agents can test safely with realistic data.
  • Develop self‑healing tests that are resilient to UI changes, dynamic selectors, variable content, and feature flags.
  • Test complex systems: microservices, event‑driven flows, async processing, caching, permissions/roles, and multi‑tenant behavior.
  • Detect and prevent flaky tests using AI‑driven root cause analysis (timing, data, dependencies, environment drift).
  • Report test outcomes with actionable feedback: impact, reproducibility, risk assessment, and priority for engineering/product.
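To give a flavor of the "AI test oracle" idea above, here is a minimal sketch in Python. The function name, fields, and checks are illustrative assumptions, not our actual framework: deterministic invariants gate the verdict first, with a heuristic layer where an LLM judge or snapshot diff could slot in.

```python
# Illustrative sketch of a hybrid test oracle: hard invariants first,
# then a pluggable heuristic layer. All names here are hypothetical.

def oracle(order: dict) -> tuple[bool, list[str]]:
    """Return (passed, findings) for a checkout-like API response."""
    findings = []

    # Deterministic invariants: these must always hold.
    expected_total = sum(i["price"] * i["qty"] for i in order.get("items", []))
    if order.get("total") != expected_total:
        findings.append("total does not match sum of line items")
    if order.get("status") not in {"pending", "paid", "shipped"}:
        findings.append(f"unknown status: {order.get('status')!r}")

    # Heuristic layer: a stand-in for an LLM-based judge or snapshot comparison.
    if not order.get("items"):
        findings.append("heuristic: empty order is suspicious")

    return (not findings, findings)


passed, findings = oracle({
    "status": "paid",
    "items": [{"price": 10, "qty": 2}],
    "total": 20,
})
```

The design point: the agent may propose and execute tests, but the pass/fail decision stays reproducible and auditable because the invariants are plain code.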

Impactful Use Cases

  • UI regression that normally takes weeks of maintenance → agents that adapt and re‑validate tests when the UI changes slightly.
  • New feature based on specs → LLM generates the test design and scenarios (happy flow, edge cases, negative tests).
  • Production incident → agent reproduces it, creates minimal reproduction steps, and sets up a permanent regression test.
  • Exploratory testing for release → agent “plays” different personas and finds deviations that scripts miss.
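As a small illustration of the "spec → LLM test design" use case above (purely a sketch: the JSON shape and field names are assumptions, and the model call is replaced by a hard‑coded string), agent output is parsed and validated before it is allowed to drive any tests.

```python
import json

# Hypothetical structured output from an LLM asked to design tests for a spec.
# In practice this would come from a model call; here it is a literal stand-in.
llm_output = """
[
  {"name": "happy_flow_valid_iban", "kind": "happy",    "input": "NL91ABNA0417164300", "expect": true},
  {"name": "negative_bad_checksum", "kind": "negative", "input": "NL00ABNA0417164300", "expect": false}
]
"""

REQUIRED = {"name", "kind", "input", "expect"}

def load_test_design(raw: str) -> list[dict]:
    """Parse and validate LLM-generated cases before they reach the test runner."""
    cases = json.loads(raw)
    for case in cases:
        missing = REQUIRED - case.keys()
        if missing:
            raise ValueError(f"case {case.get('name', '?')} missing fields: {missing}")
        if case["kind"] not in {"happy", "edge", "negative"}:
            raise ValueError(f"unknown kind: {case['kind']}")
    return cases

cases = load_test_design(llm_output)
```

In a pytest setup, validated cases like these could feed `@pytest.mark.parametrize`; the point is that generated test designs are checked, logged, and reproducible before they gate a build.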

What you bring

  • Solid foundation in testing and test automation (UI, API, integration, and end-to-end) and a strong sense of quality and risk
  • Experience with a modern test stack (e.g., Playwright/Cypress/Selenium, pytest/JUnit, REST-assured, Postman, contract testing)
  • Affinity with or experience in LLMs/agents: prompt design, tool usage, structured outputs (JSON), and evaluating output quality
  • Understanding that AI is not magic: you focus on reproducibility, determinism where needed, logging, measurability, and governance
  • An analytical mindset: able to “read” a system, identify risks, and translate test strategy into concrete coverage
  • Clear communication: making findings understandable to engineers, product owners, and stakeholders with evidence and prioritization

Nice to have

  • Experience with CI/CD (GitHub Actions/GitLab/Jenkins), Docker/Kubernetes, observability (Grafana/ELK), and test data management
  • Basic knowledge of security/privacy (PII) and how to test safely in enterprise environments

Key goals

  • Build a scalable approach where AI adds lasting test capacity: faster feedback, more coverage, less maintenance
  • Deliver reliable releases by finding regressions, edge cases, and integration risks early (shift-left, including in pipelines)
  • Create a test ecosystem where agentic testing demonstrably contributes to quality: fewer bugs, fewer flaky tests, higher confidence
  • Ensure governance & safety: deploying AI in testing in a way that safeguards privacy, security, and auditability

Interested?

Contact jelleschutte@qacompany.nl
