Hello, I’m Anna Kovalova, a seasoned Quality Assurance (QA) professional with over 15 years of experience. I began my career in manual QA, transitioned gradually into automation tools, and now embrace the capabilities of AI. I’m genuinely passionate about the groundbreaking changes in our industry and the remarkable potential AI holds for transforming manual testing. In this article, I share my journey of using AI to elevate manual testing practices. Over time, I’ve witnessed AI’s instrumental role in advancing what I term “autonomous testing,” ushering manual testing into a new era. The experience has been transformative, offering invaluable insights into streamlining our testing operations for greater efficiency, effectiveness, and precision. I’ll illustrate this journey with real-world examples and actionable strategies that underscore the power of AI in augmenting manual testing.
Risks and Benefits of Introducing AI Tools for Manual Testing
Modern AI systems continue to evolve, presenting test teams with new opportunities and potential risks. Based on extensive research and firsthand experience, I outline the benefits and concerns associated with harnessing AI in manual testing:
Risks:
- AI lacks the nuanced human judgment essential for complex decision-making in testing.
- AI outputs require meticulous proofreading and bug resolution.
- Privacy concerns arise from leveraging real customer data for AI model training.
- Overreliance on AI may overshadow in-house skill development.
- Increased automation fuels job insecurity among manual testers.
Benefits:
- Automates repetitive test case generation, drafting over 200 API test cases for a new endpoint.
- Accelerates the creation of comprehensive 50+ page test plan frameworks.
- Expands test coverage with diverse test scenarios—generating 100+ boundary and negative test variants.
- Takes over regression suite maintenance—updating thousands of outdated test steps.
- Provides ongoing testing knowledge support.
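To make the "boundary and negative test variants" benefit concrete, here is a minimal Python sketch of what such generated variants look like for a single numeric API parameter. All names here (`quantity`, the 1–100 range) are illustrative assumptions, not output from any specific tool; an AI assistant drafts tables like this far faster than a human, but the structure is the same:

```python
# Minimal sketch: enumerating boundary and negative test variants
# for a numeric API parameter. All names are hypothetical.

def boundary_variants(name, minimum, maximum):
    """Classic boundary-value cases for an integer parameter."""
    return [
        {"param": name, "value": minimum - 1, "expect": "reject"},  # below range
        {"param": name, "value": minimum,     "expect": "accept"},  # lower bound
        {"param": name, "value": minimum + 1, "expect": "accept"},
        {"param": name, "value": maximum - 1, "expect": "accept"},
        {"param": name, "value": maximum,     "expect": "accept"},  # upper bound
        {"param": name, "value": maximum + 1, "expect": "reject"},  # above range
    ]

def negative_variants(name):
    """Type-level negative cases: null, empty, wrong types."""
    return [{"param": name, "value": v, "expect": "reject"}
            for v in (None, "", "abc", -10**9, 3.14, [], {})]

cases = boundary_variants("quantity", 1, 100) + negative_variants("quantity")
```

Scaling this pattern across every parameter of an endpoint is exactly the repetitive work AI excels at.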
In light of these risks, I resonate with the sentiment: “AI is best positioned currently as an enhancing collaboration technology versus a replacement technology.” The advisable approach is to combine AI’s strengths with human oversight, validation, and adjustment. From my extensive experience, pairing AI’s efficiency with our creative judgment significantly amplifies productivity and complements, rather than replaces, our invaluable testing team members. Integrating AI into manual testing enables smarter, more efficient work practices, which in turn significantly enhances software quality.
Examples of AI Tool Usage in Manual Testing
As I progressed in enhancing my manual testing routines, I selected three AI tools:
- Claude (https://claude.ai/)
- ChatGPT (https://chatgpt.com/)
- Postbot (https://www.postman.com/product/postbot/)
Below, I will provide detailed information on each tool, accompanied by real-life examples.
Claude
In my workflows, Claude.ai utilizes advanced natural language processing (NLP) to process large test data files, analyze information, and provide recommendations to boost testing coverage and quality. Claude.ai excels at managing diverse datasets and offering insights through interactive dialogues, making it an intuitive tool for testers.
Below are examples of how you can use Claude on a daily basis:
Example 1. Full analysis of test approaches:
Preconditions: export a bug report from the bug-tracking system and upload it to Claude.
Prompt: Analyze the attached bug report from a real estate app and give me a full analysis of my test approaches.
Results:
I’ve attached part of the answer, and it seems very helpful for a QA Engineer. A comprehensive analysis of test approaches equips a QA Engineer with the insights needed to effectively identify, prioritize, and mitigate potential issues, thereby enhancing the quality and reliability of software products.
Example 2. Testing process enhancement:
Preconditions: do steps from example 1.
Prompt: Provide a detailed, step-by-step guide on how to enhance my testing process using the data provided.
Results:
Again, it seems very helpful for a QA Engineer. This organized approach enhances the accuracy and efficiency of identifying bugs and potential improvements, ultimately contributing to a more robust and reliable product.
Example 3. Bug prevention:
Preconditions: do steps from example 1.
Prompt: What is the likelihood of this bug being reproduced, and how can I prevent it from occurring again?
Results:
It is definitely useful information. Understanding the likelihood and prevention methods of a bug can empower a QA Engineer to prioritize testing efforts and implement effective solutions to minimize future occurrences, thereby enhancing the overall software quality.
In general, Claude’s responses appear to be extremely beneficial for the daily responsibilities of QA engineers, particularly for junior QA members. Moreover, the tool proves invaluable for evaluating test tasks submitted by new team members and for offering feedback on documentation such as test cases, test plans, and test strategies. Additionally, it’s a great resource for getting ready for grooming sessions.
ChatGPT
ChatGPT is another powerful tool for a QA engineer that can be used on a daily basis. Below are a couple of examples of how it can be utilized.
Example 1. Create a new bug:
Preconditions: find a bug, file it quickly in the notes.
Prompt: new bug found server error while trying to login on firefox with invalid credentials. Staging. UI 12.12.2023
Results:
QA Engineers can definitely use this on a daily basis: ChatGPT enhances bug-tracking efficiency. A simple prompt detailing a new bug quickly generates a structured report ready for elaboration, expediting the QA process substantially.
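The same transformation can be sketched in code. Below is a minimal Python illustration (the field layout mirrors a typical bug tracker; all names are my assumptions, not ChatGPT's actual output) of how a terse note expands into a structured report skeleton:

```python
# Minimal sketch: expand a terse bug note into a structured report
# skeleton. The field layout mirrors a typical bug tracker; all
# names are illustrative assumptions, not ChatGPT's actual output.

def draft_bug_report(note, environment, area, date):
    return {
        "title": note.capitalize(),
        "environment": environment,          # e.g. "Staging"
        "area": area,                        # e.g. "UI"
        "date": date,
        "steps_to_reproduce": ["TODO: fill in exact steps"],
        "expected_result": "TODO",
        "actual_result": note,
        "severity": "TBD",
    }

report = draft_bug_report(
    "server error while trying to login on firefox with invalid credentials",
    environment="Staging", area="UI", date="12.12.2023",
)
```

What an LLM adds on top of this template is filling in plausible reproduction steps and expected results from the note alone, which the engineer then verifies and corrects.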
Example 2. Create a test report:
Preconditions: create a new user story for a login page in an e-commerce app that sells socks in Canada. Claude or ChatGPT can be used for this purpose.
Prompt: All acceptance criteria from the user story below have been met. Please write a full test report detailing what has been tested.
Results:
This can be easily added to the JIRA ticket before closing it. The test report provides a comprehensive analysis of the testing outcomes and any identified issues, enabling QA engineers to verify the functionality and quality of the user story implementation and make informed improvements. Additionally, ChatGPT can create test cases for the user story for manual testing, and those test cases can be converted into JSON format for automated testing. QA leads and managers can use ChatGPT to create test plans or test strategy documents. From my experience, Claude does this better, but both tools can be used and the better result selected.
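To illustrate the JSON hand-off between manual and automated testing, here is a minimal Python sketch. The schema is my own assumption for illustration, not a format any of these tools prescribe: manually drafted login test cases are serialized to JSON, and an automated runner reads them back:

```python
import json

# Minimal sketch: manual test cases serialized to JSON so an
# automated runner can consume them. The schema is an illustrative
# assumption, not a tool-defined format.

test_cases = [
    {"id": "TC-1", "email": "user@example.com", "password": "correct",
     "expected": "login_success"},
    {"id": "TC-2", "email": "user@example.com", "password": "wrong",
     "expected": "error_message"},
    {"id": "TC-3", "email": "", "password": "", "expected": "validation_error"},
]

payload = json.dumps(test_cases, indent=2)      # the hand-off artifact
loaded = json.loads(payload)                    # the automated side

for case in loaded:
    # A real runner would drive the UI or API here; this loop just
    # checks the data round-trips intact.
    assert case["expected"] in {"login_success", "error_message",
                                "validation_error"}
```

Keeping test cases as data like this means the same file serves both the manual checklist and the automated suite.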
Example 3. Risk registry document:
Preconditions: gather as much information as possible on the project or functionality where you want to receive the risk analysis.
Prompt: Create a risk registry document based on the following information: the project is heavily UI-loaded, it has 3 main portals, the back end is on PHP, and the team is distributed across different countries. Ask me at least 2 clarification questions before generating the full document.
NOTE: pay attention to the clarification questions; they can be helpful.
Results:
A risk registry document can help a QA Engineer by identifying and prioritizing potential risks, allowing them to focus their testing efforts on the most critical areas that could impact software quality and project success.
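The prioritization a risk registry enables boils down to simple likelihood-times-impact scoring. A minimal Python sketch (the entries echo the prompt above; the 1–5 scales and scores are my illustrative assumptions, not tool output):

```python
# Minimal sketch: a risk registry with likelihood x impact scoring
# on 1-5 scales. Entries echo the prompt in the article; the scores
# are illustrative assumptions, not any tool's output.

risks = [
    {"risk": "UI regressions across 3 portals",     "likelihood": 4, "impact": 4},
    {"risk": "PHP back-end API contract drift",     "likelihood": 3, "impact": 5},
    {"risk": "Distributed-team communication gaps", "likelihood": 3, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest-score risks first, so testing effort targets them.
registry = sorted(risks, key=lambda r: r["score"], reverse=True)
```

Sorting by score is exactly the "focus testing on the most critical areas" step, made explicit.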
Postbot
Built into Postman, Postbot makes it easy to generate test cases at the collection level or for a particular request. It leverages simple API requests for efficient test generation and validation, handling even complex testing elements with precision.
Example 1. Test cases can be generated at the collection level, which is very convenient, especially when you’re initially setting up test suites or performing regression testing.
Example 2. Test cases can be generated for a particular request. This is useful when you need to thoroughly validate the behavior of an API endpoint under different input scenarios, including edge cases, to ensure it responds correctly across various data types and conditions.
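Postbot itself emits JavaScript `pm.test` blocks inside Postman; the request-level checks it generates typically reduce to status, schema, and value validations like this Python equivalent. The response shape and field names below are my illustrative assumptions:

```python
# Illustrative Python equivalent of typical request-level checks
# (Postbot itself generates JavaScript pm.test blocks inside
# Postman). Response shape and field names are assumptions.

def validate_response(status_code, body):
    """Typical generated checks: status code, field types, formats."""
    errors = []
    if status_code != 200:
        errors.append(f"unexpected status {status_code}")
    if not isinstance(body.get("id"), int):
        errors.append("id missing or not an integer")
    if body.get("email", "").count("@") != 1:
        errors.append("email malformed")
    return errors

# Edge-case scenarios, as in Example 2:
assert validate_response(200, {"id": 7, "email": "a@b.com"}) == []
assert "email malformed" in validate_response(200, {"id": 7, "email": "nope"})
assert "unexpected status 500" in validate_response(500, {"id": 1, "email": "a@b.com"})
```

Generating this kind of scaffolding per request is the tedious part Postbot takes off the tester's plate.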
Conclusion
To conclude, the strategic fusion of AI capabilities with human creativity underscores responsible AI adoption in manual testing. Sole reliance on AI tools like ChatGPT or Claude is not advisable without refining, editing, and validating their outputs. Maintaining oversight through feedback loops that catch inaccuracies, advancing skill sets in analytical tasks, and critically evaluating risks and strategies complements AI in accelerating test cycles effectively. AI redefines manual testing not just by optimizing efficiency but by enhancing collaborative engagement to overcome challenges and elevate software quality.