The new OpenAI GPT-3 text generation tool is a HUGE leap forward in Artificial Intelligence as a whole and a massive improvement over their previous GPT-2 tool, allowing users to generate text, perform translations, generate code, and even create poetry from minimal input. Honestly, it sounds too good to be true, and over the past few weeks I have been working with GPT-3 to see if it holds up to the excitement being built around it, specifically in the test automation space. (**SPOILER ALERT**) It does.
Why is GPT-3 such a big advancement?
One of the improvements OpenAI GPT-3 has over its predecessor is its unique training model. GPT-3 trains itself using a lossless generative transformer model, which is trained through open-ended reinforcement learning methods. Reinforcement learning is able to learn from scratch on an empty data set through self-play; that is, no examples need to be fed to the model. Instead, it creates its own rules through trial and error. What is also interesting is that the training model's input can be partially and continuously generated by itself, so that it can continue to learn throughout the entire process, similar to how humans learn to play video games.
However, in most cases, reinforcement learning is limited in many ways. It will only learn from the most likely successful state-action pairs, so some state-action pairs can be "invisible" to it, meaning it cannot learn from them. That is why it is necessary to first train a model through supervised learning, so that a series of positive/negative pairs is generated to be used later as the unlabeled data set for reinforcement learning. Therefore, in GPT-3, the first step is to train it using supervised learning on an unlabeled data set. Then, that unlabeled data set is fed to GPT-3 throughout the entire reinforcement learning process.
The combination of these two learning methods means that both the speed and the quality of learning are drastically improved. With this in place, it will be important for test automation teams to evaluate integrating GPT-3 as a replacement for manual tasks in test automation development.
The following are examples of how OpenAI GPT-3 can speed up your test automation development:
Test Code Generation:
We are starting off with a big one here, and quite honestly the most common use case for a text generation model within testing. If you are not familiar with test code generation, here is a brief rundown of what it entails: test code generation is the process of automatically generating test scripts based on data. No manual intervention, no time spent looking up IDs, selectors, or XPaths.
Test Script Generation & Test Case Generation:
With GPT-3 this process is made fairly painless, as GPT-3 uses a Prompt, Example, and Output model. To generate test scripts, the test engineer simply needs to provide a Prompt that includes the context of what they are trying to do, for example simple text ("Go to the Home Page, and login"), test cases, or analytics data. Then the test engineer needs to provide Examples of what they expect back from GPT-3, which in this case would be samples of code in the language they wish to convert the data to; supplying approximately 4-6 examples will yield the best results. Once those two things are supplied, GPT-3's Output will return the code for the prompt given, which can then be saved to a file, either permanently or temporarily, and executed.
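To make the Prompt/Example/Output flow concrete, here is a minimal sketch in Python of how a few-shot prompt might be assembled before it is sent to a completion model. The plain-language steps, the Selenium-style snippets, and the function name are illustrative assumptions for this post, not code produced by GPT-3:

```python
# Sketch of the Prompt / Example / Output pattern described above.
# The steps and Selenium-style snippets below are hypothetical
# examples, not taken from an actual GPT-3 session.

FEW_SHOT_EXAMPLES = [
    (
        "Go to the Home Page, and login",
        'driver.get(BASE_URL)\n'
        'driver.find_element(By.ID, "username").send_keys(USER)\n'
        'driver.find_element(By.ID, "password").send_keys(PASSWORD)\n'
        'driver.find_element(By.ID, "login").click()',
    ),
    (
        "Open the account settings page",
        'driver.find_element(By.ID, "account-menu").click()\n'
        'driver.find_element(By.LINK_TEXT, "Settings").click()',
    ),
]

def build_prompt(task: str) -> str:
    """Combine the worked examples with a new task into one prompt.

    The completion model is expected to continue the final 'Code:'
    section with a script in the same style as the examples.
    """
    parts = []
    for step, code in FEW_SHOT_EXAMPLES:
        parts.append(f"Step: {step}\nCode:\n{code}\n")
    parts.append(f"Step: {task}\nCode:\n")
    return "\n".join(parts)

prompt = build_prompt("Search for a product and add it to the cart")
```

In a real setup this prompt string would be sent to the completion endpoint, and the returned code block would be what gets saved and executed.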
Test Framework Generation:
Taking what we know from the Test Script Generation we discussed above, we can apply these same principles to generating entire test frameworks: input loaded into GPT-3 can be converted into a customized test framework for the application under test (Web, Mobile, API…). The engineer simply specifies the application under test, the language, and the type of automation framework they would like to begin with, and the framework can be generated automatically within a very short period of time.
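As a rough illustration of that workflow, the sketch below turns a hypothetical framework specification into a prompt, and shows how returned code could be saved to a temporary file before execution. The field names and helper functions are assumptions for this post, not an actual GPT-3 integration:

```python
import tempfile
from pathlib import Path

# Hypothetical specification of the desired framework; these field
# names are illustrative, not part of any GPT-3 API.
FRAMEWORK_SPEC = {
    "application": "Web",            # Web, Mobile, API, ...
    "language": "Python",
    "framework": "pytest + Selenium",
}

def framework_prompt(spec: dict) -> str:
    """Build a plain-language prompt describing the framework to generate."""
    return (
        f"Generate a starter {spec['framework']} test framework in "
        f"{spec['language']} for a {spec['application']} application under "
        "test, including a config module, a base page object, and one "
        "sample test."
    )

def save_generated_code(code: str) -> Path:
    """Write model output to a temporary .py file so it can be executed."""
    with tempfile.NamedTemporaryFile(
        mode="w", suffix=".py", delete=False
    ) as handle:
        handle.write(code)
    return Path(handle.name)

# Stand-in for what the model might return.
path = save_generated_code("print('generated test placeholder')")
```

The temporary-file step mirrors the "saved to a file either permanently or temporarily and then executed" idea from the script-generation section above.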
Now for some fun:
If you’ve read this far, here is a demonstration of just how powerful GPT-3 can be: one portion of this post was written entirely by GPT-3 itself. Were you able to detect it?
OpenAI GPT-3 is powerful and a perfect indication of where AI is headed in terms of integrating AI systems into test automation, thanks to its quick-to-set-up and easy-to-use integration. Having worked with this tool over the past few weeks, I can say that there are some learning-curve areas to take into account when evaluating whether this tool would be right for your team. But compared to building your own custom model, the learning curve is completely manageable.
Intrigued by how AI and Machine Learning can impact your Test Automation efforts?
tapQA is here to help! Our Artificial Intelligence and Machine Learning experts will help you evaluate and implement solutions to boost your automated testing efforts.
Would you like to learn more? Contact us today for a conversation with our AI experts!
Michael Wagner is a Test Architect and Principal Consultant with tapQA. He has over 13 years of industry experience as a Software Tester, Engineer, Developer, and Architect. He is primarily focused on driving innovative test automation practices and strategies within organizations ranging from software to hardware. He enjoys sharing his technical prowess with industry colleagues and has given several technical presentations on test automation strategies and best practices. His areas of expertise are software testing, artificial intelligence, test automation, and open source technologies.
Have a QA question?
Our team would love to help!