
The Age of AI and MCP: The Future of Automation Testing

  • Writer: Georgios Kogketsof
  • Apr 4
  • 6 min read


Introduction

Some time ago, I wrote an article, Unveiling the Power of AI: My Exciting Quest into Multi-Agent Systems for Software QA with CrewAI Tools!, about using CrewAI-based AI agents to generate a test plan and test cases for the Ultimate Texas Hold'em game. Before that article was even fully published, the Model Context Protocol (MCP) emerged, offering enhanced capabilities in our daily interactions with LLMs. In this post, I will show how we can not only create a test case but also automate it.


Traditionally, automation testing has relied on manually coded Page Object Models, which often resulted in cumbersome maintenance and increased complexity. With the introduction of modern tools like Playwright and frameworks such as the MCP Playwright Server, however, a new paradigm has emerged. This blog will delve into this evolution, highlighting the shift from traditional methods to the innovative capabilities of the MCP Playwright Server.
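For contrast, here is what the login step of the traditional approach might look like as a Page Object Model, sketched in TypeScript. The `Page` interface below is a minimal stand-in for Playwright's own `Page` type so the sketch is self-contained; the selectors are the ones saucedemo.com actually uses, as seen later in this post.

```typescript
// Minimal stand-in for Playwright's Page type (only what this sketch needs).
interface Page {
  goto(url: string): Promise<void>;
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}

// A classic Page Object: selectors and interactions live here,
// and every test that touches the login screen depends on this class.
class LoginPage {
  constructor(private page: Page) {}

  async open(): Promise<void> {
    await this.page.goto('https://www.saucedemo.com/');
  }

  async login(username: string, password: string): Promise<void> {
    await this.page.fill('#user-name', username);
    await this.page.fill('#password', password);
    await this.page.click('#login-button');
  }
}
```

Every UI change means updating classes like this one, which is exactly the maintenance burden the prompt-driven approach below sidesteps.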


MCP Playwright Server: A New Era in Automation Testing

Introducing MCP Playwright Server


The MCP (Model Context Protocol) Playwright Server is a tool that enables developers in test to write natural-language prompts for automation tasks instead of traditional code. For example:

Navigate to website https://www.saucedemo.com/ and enter the username and password as "standard_user" and "secret_sauce" respectively and perform login.

This innovative approach promotes a more intuitive method for creating comprehensive test scenarios by leveraging the capabilities of LLMs.
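To make a Playwright MCP server available to Claude Desktop, it is registered in the claude_desktop_config.json file. A minimal sketch is shown below; it assumes the ExecuteAutomation package, and the exact package name and arguments may differ for other Playwright MCP servers.

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["-y", "@executeautomation/playwright-mcp-server"]
    }
  }
}
```

After restarting Claude Desktop, the server's tools (navigate, fill, click, screenshot, evaluate) become available to the model.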


Let's devise a straightforward test to confirm that a product is added to the cart:


Log in using the standard username and password to access the e-shop page, add the Backpack to the cart, and verify.


To test this scenario, I will use Claude Desktop with the Playwright MCP Server installed. The prompt for this test case looks like this:


Navigate to website https://www.saucedemo.com/ and enter the username and password as "standard_user" and "secret_sauce" respectively and perform login. On the new page add the Backpack to the cart and verify that the cart icon has changed

For the given prompt, Claude launches a Chromium browser and runs the test by calling tools on the local playwright-mcp-server, producing the following output:

I'll help you navigate to the Sauce Demo website, login, add a backpack to your cart, and verify the cart icon changes. Let me do this step by step.

playwright_navigate { "url": "https://www.saucedemo.com/" }
Navigated to https://www.saucedemo.com/

Now I'll enter the username "standard_user" in the username field.

playwright_fill { "selector": "#user-name", "value": "standard_user" }
Filled #user-name with: standard_user

Next, I'll enter the password "secret_sauce" in the password field.

playwright_fill { "selector": "#password", "value": "secret_sauce" }
Filled #password with: secret_sauce

Now I'll click the login button to complete the login process.

playwright_click { "selector": "#login-button" }
Clicked element: #login-button

Now I'll take a look at the cart icon before adding an item to establish a baseline.

playwright_screenshot { "name": "cart-before-adding", "selector": ".shopping_cart_link" }
Screenshot saved to: Users/xxx/Downloads/cart-before-adding-2025-04-04T08-10-58-622Z.png (also stored in memory as 'cart-before-adding')

Now I'll add the Sauce Labs Backpack to the cart by clicking its "Add to cart" button.

playwright_click { "selector": "#add-to-cart-sauce-labs-backpack" }
Clicked element: #add-to-cart-sauce-labs-backpack

Now let's take another screenshot of the cart icon to verify it has changed.

playwright_screenshot { "name": "cart-after-adding", "selector": ".shopping_cart_link" }
Screenshot saved to: Users/xxx/Downloads/cart-after-adding-2025-04-04T08-11-08-822Z.png (also stored in memory as 'cart-after-adding')

Let me get the text content of the cart icon to verify it now shows "1" item.

playwright_evaluate { "script": "return document.querySelector('.shopping_cart_badge')?.textContent || 'No badge found'" }
Operation failed: page.evaluate: SyntaxError: Illegal return statement

Let me try another approach to verify the cart badge:

playwright_evaluate { "script": "document.querySelector('.shopping_cart_badge')?.textContent || 'No badge found'" }
Result: "1"

Great! I've completed all the requested steps:

  • Successfully navigated to https://www.saucedemo.com/
  • Logged in with username "standard_user" and password "secret_sauce"
  • Added the Sauce Labs Backpack to the cart
  • Verified that the cart icon has changed: before adding the backpack, there was no badge on the cart icon; after adding it, the cart icon shows "1" item



The verification confirms that the Sauce Labs Backpack was successfully added to the cart, and the cart icon was updated to reflect this change.
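The "Illegal return statement" failure in the transcript is worth a note: when Playwright's page.evaluate receives a string, it evaluates that string as a JavaScript expression, and a bare `return` is only legal inside a function body. Plain `eval` reproduces the same error, which is why Claude's second attempt, with the `return` dropped, succeeded:

```typescript
// A string passed to page.evaluate is run as an expression; a bare
// `return` outside a function body is a SyntaxError, as plain eval shows.
try {
  eval('return 1 + 1'); // throws: Illegal return statement
} catch (e) {
  console.log((e as Error).constructor.name); // "SyntaxError"
}

// A plain expression evaluates fine, just like Claude's retry did.
console.log(eval('1 + 1')); // 2
```

Passing a function instead of a string (where `return` is legal) avoids the issue entirely in hand-written Playwright code.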

The above accomplished the test execution without writing any code or opening an IDE, and it produced screenshots of the run as proof.

Swag Labs product page
(cart-before-adding-2025-04-04T08-10-58-622Z.png)

Swag Labs product page with a product added to the cart
(cart-after-adding-2025-04-04T08-11-08-822Z.png)

For the test case above, let's extend the prompt with two more instructions: close the browser at the end, and generate the code so the test can be executed repeatedly with Playwright.

Navigate to website https://www.saucedemo.com/ and enter the username and password as "standard_user" and "secret_sauce" respectively and perform login. On the new page add the Backpack to the cart and verify that the cart icon has changed. And close the browser. Generate the code of the execution.

The prompt runs the test and generates a test script file for use with Playwright in the /tmp directory.


import { test, expect } from '@playwright/test';

test('SauceDemoTest_Complete', async ({ page }) => {
    // Navigate to URL
    await page.goto('https://www.saucedemo.com/');

    // Fill username field
    await page.fill('#user-name', 'standard_user');

    // Fill password field
    await page.fill('#password', 'secret_sauce');

    // Click login button
    await page.click('#login-button');

    // Verify we are on the inventory page
    await expect(page).toHaveURL(/.*inventory.html/);

    // Add backpack to cart
    await page.click('#add-to-cart-sauce-labs-backpack');

    // Verify the cart badge shows "1" item
    const badgeText = await page.locator('.shopping_cart_badge').textContent();
    expect(badgeText).toBe('1');

    // The browser will automatically close after the test completes
});

You can easily replicate the above example using Cursor IDE. If you decide to save the generated code, you must specify the directory for saving the code, which will result in the creation of the entire project.
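To run the saved spec repeatedly outside Claude, a minimal playwright.config.ts is enough; the test then runs with `npx playwright test`. A sketch, assuming the generated spec was saved under ./tests:

```typescript
// playwright.config.ts -- minimal config sketch; testDir is an assumption.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests',      // folder holding the generated spec file
  use: { headless: true }, // run without a visible browser window
});
```

From here the generated test behaves like any other Playwright spec: it can run in CI, in parallel, or against multiple browsers via projects.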


Evolving Role of Testers: From Scriptwriters to AI-Orchestrators


Traditionally, the automation landscape has relied on testers writing and maintaining test scripts in frameworks such as Selenium, Cypress, and Playwright. With the integration of AI and MCP, test generation and maintenance are becoming automated, shifting the tester’s focus from implementation to orchestration.

Rather than manually scripting test cases, testers will define structured prompts that guide AI in generating meaningful and comprehensive test scenarios. This shift demands a new skill set, including proficiency in AI prompt engineering, validation of AI-generated test cases, and fine-tuning automation frameworks to ensure accuracy and relevance.

Consequently, hiring strategies must adapt. Job descriptions should no longer emphasize only proficiency in traditional test automation tools but also AI-driven testing frameworks and the ability to design effective AI prompts. Organizations must look for testers who can think critically about test coverage, business logic, and risk assessment while collaborating with AI systems to optimize automation.


Integrating AI into Shift-Left Testing


The Shift-Left approach emphasizes early testing to identify defects at the initial stages of development. Traditionally, this has involved close collaboration between developers, testers, and product managers to define test cases early in the development cycle. With AI-generated test cases, this collaboration must now include the creation and refinement of test prompts as an integral part of the process.

A structured approach, such as the “Three Amigos” model—where developers, testers, and product owners collectively define test requirements—should be expanded to include AI prompt engineering. Rather than solely discussing user stories and manual test scenarios, teams must now consider how to structure AI-generated tests effectively, ensuring that automated outputs align with functional requirements and edge cases.

By integrating AI into Shift-Left testing, organizations can accelerate test execution while maintaining the strategic oversight needed to catch business-critical defects early. However, testers must remain actively involved in reviewing and refining AI-generated outputs to prevent false positives and ensure comprehensive test coverage.


Future Implications and Strategic Considerations


As AI and MCP continue to advance, organizations must embrace these changes to remain competitive. The hiring process must evolve to prioritize testers who understand AI-driven automation, Shift-Left testing should incorporate AI prompt engineering, and practices such as BDD (Behavior-Driven Development) must leverage AI while retaining human oversight.

The integration of AI into software testing does not eliminate the need for testers; rather, it transforms their role into one that is more strategic, focused on validation, test design, and business alignment. Organizations that successfully navigate this transition will achieve greater efficiency, improved test coverage, and faster release cycles, ensuring higher software quality in an increasingly AI-driven world.

AI is not replacing testers—it is redefining their responsibilities. The future of automation testing lies in the synergy between human intelligence and AI capabilities, where testers leverage AI to enhance efficiency while maintaining control over quality and business relevance. Organizations that proactively adapt to this shift will be better positioned to harness the full potential of AI-driven testing.


