Making the Most of Exploratory Testing
This chapter introduces exploratory testing: manually trying out a new feature to get rapid feedback on its behavior. We’ll describe exploratory testing in detail, consider its strengths and weaknesses, and discuss when you should perform it in a project.
We’ll look at the prerequisites you need to begin exploratory testing and the approaches you should take. This testing can act as a miniature version of the complete test plan, taking a customer’s point of view and using your naivety about how the feature works to identify confusing areas.
Exploratory testing should be used as part of a larger test strategy but can be run in isolation when time is short. We’ll finish by looking at what you should check when performing this testing, and the importance of curiosity, both here and throughout the testing process.
In this chapter, we will cover the following topics:
- What is exploratory testing?
- Advantages, disadvantages, and alternatives
- Understanding when testing should begin
- Understanding the test activities
- The spiral model of test improvement
- Performing the first test
- Mapping out new features
- Using your naivety while testing
- Running complete exploratory testing
- Using exploratory testing by necessity
- Checking exploratory test results
- Using curiosity in testing
What is exploratory testing?
Exploratory testing is a vital tool in your armory. It involves you using a new feature in an ad hoc, unstructured way to quickly find issues and understand its workings. It is typically used early in the test cycle to achieve three main goals: to let you understand the feature you are testing, to discover any tools or knowledge you need, and to find blocking bugs that may delay the testing later on.
In exploratory testing, the test design, implementation, and interpretation are conducted simultaneously. This is also known as ad hoc testing, which has been frowned upon due to its unstructured format. However, it is a valuable stage in a project if deployed alongside other testing techniques. In a large project, it can be used early on to help plan more comprehensive tests, or if deadlines are tight, exploratory testing might be the only testing there’s time for.
In an ideal cycle, the team will plan both the feature and its testing at the start of development. In that case, exploratory testing enhances the test plans that already exist. In other development teams, testers are involved later in the project and may only plan the testing in earnest once the first version is already running. In both cases, exploratory testing is a necessary step for you to see a new feature working in practice so that you can write detailed test plans with the exact behavior in mind.
Exploratory testing must be performed manually so that you can get your hands on the new feature and see what inputs and outputs are available. Other parts of testing can be run manually or with automation, but exploratory testing must be manual because the main aim isn’t to find bugs but to understand the feature. Based on that understanding, you can plan further testing.
Not everyone can perform exploratory testing. In particular, it needs input from someone other than the developer who worked on the code. The developer should ensure the feature works for them, but exploratory testing shows whether it can work in another environment for another engineer. That serves the third goal of exploratory testing: to find issues significant enough to block further tests.
If the new feature doesn’t run in the test environment, if a page throws an error, or if parts of the functionality aren’t available, then testing of those whole areas is blocked. Exploratory testing can quickly find those issues so that they can be fixed early in the development cycle and don’t cause delays later on.
Advantages, disadvantages, and alternatives
Exploratory testing has distinct strengths and weaknesses. It is an important part of the testing cycle, but only if combined with other forms of testing to mitigate its shortcomings. Throughout this book, we’ll see how the advantages of different forms of testing complement each other and why you need a mixture of different approaches to get great coverage of a feature. Here are a few of the advantages and disadvantages of exploratory testing:
Table 1.1 – Advantages and disadvantages of exploratory testing
Exploratory testing is quick and easy. So long as you have the running code, you don’t need any other prerequisites, such as documentation or testing tools. That said, there should be user stories or other guidance material that you can work from to enable the feature and improve the effectiveness of your testing. Finding bugs is simple because you are watching the product as you use it; you don’t need to check a monitoring console or automated test output for errors. You can see what the system was doing at the time because you were using it yourself.
On the downside, it can be difficult to reproduce issues. Unlike an automated test that precisely performs a documented set of actions, if you were just clicking around when something strange happened, it may not be obvious what you did to trigger the problem. I once found a bug because I rearranged the windows on my screen partway through testing – changing the size of the browser window caused the issue. It took me some time to realize that had caused the problem, instead of all the actions I had taken in the application itself.
To make it easier to find the cause of bugs, you can record your session, either simply on video, within the application, or in the web browser that you are using for your tests. That takes a little time to set up and review but can be very helpful when trying to reproduce issues.
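Session recording doesn’t need special tooling; even a minimal action log helps. The sketch below is a Python illustration (the class and the logged actions are invented for this example): it keeps a timestamped note of each step so that, when something strange happens, you can review the last few actions you took.

```python
import datetime

class SessionLog:
    """Minimal exploratory-session logger (illustrative sketch).

    Records a timestamped note for each action taken, so that when a bug
    appears, the recent steps can be replayed to reproduce it.
    """

    def __init__(self):
        self.entries = []

    def note(self, action):
        # Store the wall-clock time alongside a free-text description
        timestamp = datetime.datetime.now().isoformat(timespec="seconds")
        self.entries.append((timestamp, action))

    def recent(self, count=5):
        # The last few actions are usually enough to reproduce an issue
        return self.entries[-count:]

log = SessionLog()
log.note("Opened signup page")
log.note("Resized browser window to 800x600")
log.note("Submitted form with empty email field")
print(len(log.recent()))  # prints 3
```

A video or browser recording captures more detail, but a log like this is quicker to scan when narrowing down which action triggered a bug, such as the window-resize issue described above.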
While exploratory testing doesn’t need many prerequisites, it helps to have a description of the feature so that you know any areas of functionality that aren’t obvious from its interface.
Another requirement is that it should be carried out by an experienced tester. Because exploratory testing is not reviewed or planned, its success depends on the skill of the individual carrying it out. A tester with experience will know the areas that are likely to cause bugs and can try tests that have failed in the past. They will also know what to check to find issues. A junior tester will not reach the same level of coverage.
The skills involved in exploratory testing – the curiosity and alertness it demands – are needed throughout the test process, even when running more formalized test plans. Whether or not you are officially performing exploratory testing, you should always approach manual tests with this same mindset.
The coverage provided by exploratory tests is also difficult to measure. How much of a feature have you tested with exploratory testing? In practice, you will need to perform all these tests again as part of a more rigorous test plan, so you will repeat anything you do during exploratory testing. For this reason, you should limit how long you spend on these tests. They provide valuable feedback, but only so long as you are learning from it. Comprehensive testing starts later in the process.
To measure what coverage you have achieved, you can have a debrief to talk through the testing you’ve done with the product owner and developer. They can also suggest other cases to try. See Chapter 3, How to Run Successful Specification Reviews, to learn more about reviewing the specification and the scenarios that should be tested.
The final weakness of exploratory testing is that it does not cover non-functional tests. Here, you are just checking whether the feature works at all, not that it can achieve high loads, work in many environments, or recover from failure conditions. All those tests will come later but are not a priority at this stage.
The alternatives to exploratory testing include writing detailed specifications, preparing user stories, and creating UI mockups, although, in practice, exploratory testing complements all those tasks. A sufficiently detailed specification should let you write the test plan from it directly. However, in practice, it is much easier to finish the details of the specification once you can use the code for exploratory testing. The same is true of user stories: they are very useful for defining and refining the core functionality but don’t usually cover error cases, which can be easier to find in real usage. User interface mockups are also massively helpful for defining how a feature will look and what options it has, but it is still valuable to try those for real when they are available.
It may seem strange to begin the discussion of testing with exploratory testing, which can only begin once an initial version of the feature is complete. The following section describes the place of exploratory testing in the development life cycle and shows that while it might not be the first testing task that you start, it is the first you can finish.
Understanding when testing should begin
There is no clear line between development and testing, even when testing is a dedicated role within a company. It might seem obvious that first, you design your intended product, then you build it, and then you test it, as shown in the following diagram:
Figure 1.1 – An idealized waterfall model
This is known as the waterfall model of development. It has the advantage of having a clear point where testing should begin – that is, when the development team hands over the code. However, while such strict and simple handovers might work for bridges and houses, software development has more complexity – it has to handle many different inputs, and has the opportunity to be far more flexible.
In Agile approaches to software development, development cycles have multiple interacting feedback loops that contribute to the final product. As a tester, you begin your work at some point within those cycles. While testing could wait until the developers are perfectly happy with their product, in practice, this adds too much delay. Getting testers involved early in a project has many advantages:
- Helps the testers understand the feature
- Gives testers time to prepare tools or datasets that they need
- Lets testers provide feedback on the design process
- Lets testers begin testing earlier, even if only in a limited way
This is part of shift-left testing, which aims to involve testers early in projects. That is a worthy goal, but a challenging one, because there is no clear place to start. There is no day when a task is complete and testing should begin. Instead, testers need to gradually start testing based on the available specifications and code.
Another challenge for early testing is integrating with other methods of verification. The developers will (hopefully) perform manual checks on their code and write unit tests for individual functions. This will already find bugs and produce suggestions that feed back into the code implementation and the feature specification, as shown in the following diagram:
Figure 1.2 – Interactions and feedback in an Agile development model
In an Agile model, the specification still guides the implementation and suggests tests for developers and testers to run. But now, there can be far more feedback – the specification can change based on technical limitations and possibilities, then the developers will fix their bugs, and some developer tests may uncover changes needed in the specification.
While the waterfall model had a specific flow in time – in Figure 1.1, from left to right – in an Agile model, there is a constant back and forth between the different tasks. They are performed almost simultaneously. The specification is still a necessary first step to state what the developer should implement, but after that, the phases largely overlap. In test-driven development (TDD), for example, the testing will come before the implementation.
Notice that system testing performed by a separate test function isn’t included in this diagram yet. The only testing described there is performed by the developers themselves, which is an important ingredient in the testing mix. The system testing described in this book extends and completes the testing started by the developers.
Crucially, in an Agile project, this flow won’t happen to an entire feature in one go. Instead, the feature is broken down into multiple parts, each of which might be developed in parallel by different team members:
Figure 1.3 – Different parts of a feature developed in parallel within an Agile model
Instead of a feature being fully specified and fully implemented, as shown in the waterfall model in Figure 1.1, Agile recommends splitting tasks into the smallest possible functional units so that work can proceed on each unit in parallel. In this model, all those interactions happen simultaneously for different parts of the project as they are implemented and developers start their testing. This parallel working, with the opportunity for feedback at multiple levels, gives Agile projects great flexibility.
The situation becomes even more complicated than this, of course, since the lessons you learn while implementing part 2 of a feature can feed back into part 1, whether from its specification, implementation, or initial testing, as shown in the following diagram:
Figure 1.4 – Interaction between different parts of an Agile feature development
While writing the specification for part 2, you may identify changes you need in the specification for part 1. Likewise, the implementation or developer testing of part 2 might need a specification change. The implementation and developer testing of part 1 might also benefit from insights learned while implementing part 2 so that there is constant feedback between those development tasks.
There are several lines of interaction not drawn in Figure 1.4. The developer testing also feeds back into the specification, and the work in part 3 of the feature feeds back into the tasks in part 1, and so on. I left those lines off the preceding diagram for sanity’s sake.
Where, in that mess of interactions, should system testing start? Where should it fit into this development flow overall? At any time, any part of the feature will be partially specified, partially implemented, and partially covered by development tests.
Unlike the waterfall model, there is no clear starting point where system tests should begin. However, there is a place where system tests can fit into this development cycle, which we can see if we simplify the diagram by considering a single feature. System testing should build on the developer testing while being guided by, and providing feedback to, all the previous stages, as shown in the following diagram:
Figure 1.5 – System testing as part of an Agile development model
Considering a single feature, the system test design can begin as soon as the specification has been started. Once there are requirements, you can develop the tests to check them. Once the implementation has begun, you can design tests using white-box techniques (see Chapter 6, White-Box Functional Testing) to ensure you have checked every code path. And the system tests should extend and complement the testing the developers perform.
The system testing should also provide feedback on the other tasks in the development cycle. Most obviously, bugs found in system testing need to be fixed in the implementation. Some bugs will lead to specification changes, especially clarifications or descriptions of complex situations that weren’t initially described. System tests also interact with the developer testing. They should avoid overlap but cover the gaps in unit tests, and some tests identified at the system test level might be best implemented as unit tests and should be moved there.
Remember, the preceding diagram doesn’t show the tasks in chronological order. Just because system testing appears on the right doesn’t mean it is performed last. Like the implementation and developer testing, it can start as soon as the first parts of the feature specification are ready. You can design tests against those specifications before any work by the developers starts, and you can provide feedback on that specification.
Figure 1.6 – Interaction between different parts of a feature under development, including system testing
As the preceding diagram shows, there is a lot to manage, even at this high level of detail. This book will show how to write great feature specifications (see Chapter 2, Writing Great Feature Specifications, and Chapter 3, How to Run Successful Specification Reviews) and how to build on the developer’s implementation and testing (Chapter 6, White-Box Functional Testing). In addition, you need to consider the details of the system tests themselves, which will be covered in the remainder of this book.
Next, we’ll consider the main tasks involved in system testing and where to begin.
Understanding the test activities
Figure 1.7 – The main test activities and their interactions
Test design includes all the information gathering and preparation activities performed before you begin testing, excluding exploratory testing, which is significant enough to have its own category. Test design means reviewing all the available information for this feature to plan your testing. The written materials you base your testing on are collectively known as the test basis and may include documents such as the following:
- User stories
- User interface designs
- Technical design documents
- Competitive research
These documents help show the new behavior, although you will need to add extra details yourself. This will be described further in Chapter 2, Writing Great Feature Specifications. Even then, the written information is insufficient and needs to be augmented with practical experience of the feature, which comes from exploratory testing.
Exploratory testing is an oddity. Technically, it is part of the test design since its main focus is gathering information to inform future testing. However, it occurs relatively late in the process, when there is code ready to be tested, and unlike the rest of the test design, it involves testing and potentially finding bugs. Because of this dual role, it gets its own category.
With the information from the test basis and exploratory testing, you can document the feature specification and the test plan based on it. This should exhaustively describe a feature’s behavior, covering all possible eventualities.
The detailed testing then methodically runs the entire test plan. That may involve manual testing or writing and running automated tests. This book does not describe how to run your tests; other titles in the Packt library have excellent descriptions of those possibilities. This book focuses on the test design to show what tests you should run.
The final step is further documentation, this time of your test results. This includes raising all the bugs you have found and describing the tests that have passed.
These are the main activities a tester performs. There are other important jobs around planning, including allocating personnel and resources, estimating timescales, and scheduling work. Those are not shown in the preceding diagram and are out of scope for this book because they primarily require skills in project management. They must also be in place to run a successful test project, but here, I am concentrating on the details of what testing is required so that you can plan these activities as accurately as possible.
Each of those four test activities feeds back into the others. Designing tests is necessary before they can be documented and run, but the test results also show where more tests are needed. Some test planning is essential for exploratory testing, but exploratory testing also shows which tests need to be designed and executed for complete coverage. Documenting the feature specification should start before exploratory testing so that you know what changes to expect. However, it can only be finished once exploratory testing is complete, to answer any questions that arise.
Where in those interrelated activities should we begin? As you can see from the title of this chapter, I believe exploratory testing is a good place to start. There will be specifications and planning before that, but exploratory testing is the first test task you can finish. With that in place, you can aim to complete the specification, the test plan design, and the detailed testing itself. Because of that unique attribute, we will start with it here.
For each part of the feature, testing should move between those different test activities and feed back to the other design tasks of writing specifications, implementing, and developer testing described previously. How do these tasks fit together, and how do they progress toward the goal of releasing a well-specified and tested feature? The following section describes the ordered flow of activities and their progression.
The spiral model of test improvement
Developing tests from the initial specification into detailed, completed test plans can be thought of as a spiral looping through four repeated stages. Of course, it is more complex in practice, and there is extensive back and forth between the different stages. This simplification illustrates the main milestones required to generate a test plan and the main flow between them. It is similar to Barry Boehm’s spiral model of software development. However, this model only considers the development of the test plan, rather than the entire software development cycle, and doesn’t spiral outwards but instead inwards toward test perfection:
Figure 1.8 – The spiral model of test development
1. Preparing specifications and plans
2. Discussions and review
3. Performing testing
4. Analyzing and feeding back the results
Software development begins with an initial specification from the product owner, which is a vital start but needs several iterations before it is complete. The product owner then introduces and discusses that feature. Based on that initial specification, the development team can prepare an initial implementation, and you can generate ideas for exploratory testing.
Once an initial implementation is complete, you can start improving the specification, the test plan, and the code itself. This begins with exploratory testing, which is step 3 in the preceding diagram. By trying the code for real, you will understand it better and can prepare further tests, as described in this chapter. While there are several essential steps beforehand, the process of improving the code begins with exploratory testing.
Armed with the exploratory test results in step 4, you can then write a feature specification, as shown in step 5 in the preceding diagram. This will be covered in more detail in Chapter 2, Writing Great Feature Specifications. This specification then needs a review – a formal discussion to step through its details to improve them. That review is step 6 and is described in Chapter 3, How to Run Successful Specification Reviews.
When that review is complete, you can perform detailed testing of the feature. That one small box – step 7 in the preceding diagram – is the subject of most of this book and is covered in Chapter 4 to Chapter 13.
Preparing the test plan isn’t the end, however. Based on the detailed testing results, you can refine the specification, discuss it, and perform further targeted testing. That may be to verify the bugs you raised, for example, or to expand the test plan in areas with clusters of bugs. The results of the testing should inform future test tasks. That feedback improves the testing in this and subsequent cycles, asymptotically trending toward, though never quite reaching, test perfection.
Behind this spiral, the code is also going through cycles of improved documentation and quality as its functions are checked and its bugs are fixed.
The preceding diagram shows how both the theoretical descriptions of the feature from the specification and other parts of the test basis must be combined with practical results from testing the code itself to give comprehensive test coverage. Relying only on the documentation means you miss out on the chance to react to issues with the code. Testing without documentation relies on your assumptions of what the code should do instead of its intended behavior.
By looping through this cycle, you can thoroughly get to know the features you are working on and test them to a high quality. While it is just a point on a cycle, we begin that process with a description of exploratory testing, starting with the first important question: is this feature ready to test yet?
Identifying if a feature is ready for testing
It is very easy to waste time testing a feature that is not ready. There is no point in raising a bug when the developers already know they haven’t implemented that function yet. On the other hand, testing should start as early as possible to quickly flag up issues while the code is still fresh in the developers’ minds.
The way to reconcile those conflicting aims is through communication. Testing should start as early as possible, but the developers should be clear about what is testable and what is not working yet. If you are working from a detailed, numbered specification (see Chapter 2, Writing Great Feature Specifications), then they can specify which build fulfills which requirements. It may be that even the developers don’t know if a particular function is working yet – for instance, if they believe a new function will just work but haven’t tried it for themselves. There’s no need to spend a lot of time gathering that information, so long as the developers are clear that they are unsure about the behavior so that you can try it out.
Also, be wary of testing code that is rapidly changing or subject to extensive architectural alterations. If the code you test today will be rewritten tomorrow, you have wasted your time. While it’s good to start testing as soon as possible, that doesn’t mean as soon as there’s working code. That code has to be stable and part of the proposed release. Unit tests written by the developers can indicate that code is stable enough to be worth testing; if the code isn’t ready yet, find something else to do with your time.
Real-world example – The magical disappearing interface
I was once part of a test team for a new hardware project that would perform video conferencing. There would be two products – one that would handle the media processing and another that would handle the calls and user interface, with a detailed API between the two. The test team was very organized and started testing early in the development cycle, implementing a suite of tests on the API between the two products.
Then, the architecture changed. For simplicity, the two products would be combined, and we would always sell them together. The API wouldn’t be exposed to customers and would be substantially changed to work internally instead. All our testing had been a waste of time.
It sounds obvious that you shouldn’t start testing too early. However, in the middle of a project, it can be hard to notice that a product isn’t ready – the developers are busy coding, and you are testing and finding serious bugs. But look out for repeated testing in the same area, significant architectural changes, and confusion over which parts of a feature have been implemented and which are still under development. That shows you need better communication with the development team on which areas of code they have finished, and which are genuinely ready for testing.
When the development team has finalized the architecture and completed the initial implementation, then you should start testing. Getting a new feature working for the first time is a challenge, though, so the next section describes how to make that process as smooth as possible.
Performing the first test
A major milestone in a project is being able to test at all. Getting a feature working for the first time means that all the code is in place and working in the test environment. It has been successfully moved from the development system and works elsewhere, be that in a full test environment, a test harness, your local machine, or a containerized environment (see Chapter 5, Black-Box Functional Testing, for a discussion of test systems). Getting the feature running opens up a vast array of possible tests.
For instance, if you are testing the signup page of a website, does that page load and accept input? If so, then there are many follow-on test types you can perform. If not, let the developer know that you can’t even start yet.
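That first check can be as small as a single smoke test. The Python sketch below illustrates the idea; the page path, the returned markup, and the fetch function are all hypothetical, and the fetcher is injected so the check runs here against a stub rather than a real application.

```python
def smoke_test_signup(fetch_page):
    """First-test sketch: does the signup page load and accept input at all?

    `fetch_page` is any callable returning (status_code, body). In real use
    it might wrap an HTTP client; here it is injected so the check can run
    against a stub.
    """
    status, body = fetch_page("/signup")
    if status != 200:
        # The page doesn't even load: report a blocking issue immediately
        return f"BLOCKED: /signup returned {status}, cannot start testing"
    if "<form" not in body:
        # The page loads but has nowhere to enter input
        return "BLOCKED: /signup loaded but has no input form"
    return "OK: ready for follow-on tests"

# Stub standing in for the real application under test
def fake_fetch(path):
    return 200, "<html><form><input name='email'></form></html>"

print(smoke_test_signup(fake_fetch))  # prints "OK: ready for follow-on tests"
```

The point isn’t the assertions themselves but the outcome: either the feature is basically reachable and the many follow-on test types can begin, or the developer gets a fast, specific report of what is blocking.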
Carrying out that first test requires many development tasks to be completed. Another easy way to waste time is by testing a feature that hasn’t been enabled yet. You can also solve this with better communication with the development team.
The specification will say what a feature should do (see Chapter 2, Writing Great Feature Specifications), but testers need another level of detail, which is how the feature is configured. Before testing begins, ensure you understand the following:
- What the minimum version requirements for all relevant parts of the system are
- What the necessary configuration is
- How to check that a feature is working
Version requirements are clear enough – you have to be running the version with the new code before you can test. However, sometimes, it is far from obvious which parts of a system are dependent on each other. Or, while the feature is implemented on the 5.3 branch, is it in build 5.3.8 or 5.3.9? A feature may be delivered piecemeal, in which case, exactly which functionality is in each build?
For your very first test, only try the most basic functionality. Does the new screen load, are API calls accepted, or is the new option available? Be clear on which versions you need before spending your time on the first case. Which feature flag do you need to enable for this feature? Which setting needs to be updated, and which file is that in? Again, the challenge is to get all the necessary details from the development team to avoid wasting time looking for them.
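Those preconditions are easy to check mechanically before spending any time on the first test. The sketch below is an illustrative Python example: the build numbers, flag name, and function names are invented, but the two checks mirror the questions above, namely whether the installed build contains the feature and whether its feature flag is enabled.

```python
def parse_version(text):
    # "5.3.9" -> (5, 3, 9), so builds compare numerically, not as strings
    # (string comparison would wrongly rank "5.3.10" below "5.3.9")
    return tuple(int(part) for part in text.split("."))

def ready_to_test(installed, required, flags, flag_name):
    """Check the preconditions before starting the first test.

    All names here are hypothetical; the details must come from the
    development team for the real feature.
    """
    if parse_version(installed) < parse_version(required):
        return f"Need build {required}, but {installed} is installed"
    if not flags.get(flag_name, False):
        return f"Feature flag '{flag_name}' is not enabled"
    return "Preconditions met"

flags = {"new_signup_flow": True}
print(ready_to_test("5.3.8", "5.3.9", flags, "new_signup_flow"))
# prints "Need build 5.3.9, but 5.3.8 is installed"
```

Numeric version comparison matters in exactly the 5.3.8-versus-5.3.9 situation described above: it tells you definitively whether the build you are running can contain the feature at all.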
If you have all the requirements in place but have found several blocking issues, check with the development team that this feature is ready for testing. The developer should have done enough testing on their system to be confident that it will work for others, but that isn’t always the case. If there are repeated problems, get the developer to recheck their code.
Finally, how can you tell if the feature is working? Sometimes, features are obviously customer visible – is the new web page present, or does the new option appear? Sometimes, however, it’s hard to tell if the new feature is enabled, especially during code refactoring or subtle performance changes. Which log lines should you look out for? What statistics will indicate this change is being used?
Those details won’t be in the feature specification; again, this is an extra level of detail that the test team requires to check on the behavior or even the existence of a feature in a particular build of code.
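Once the development team has told you which log lines prove the feature is active, checking for them can be a one-line scan. The Python sketch below is illustrative only; the marker strings and log lines are made up for the example.

```python
def feature_markers_seen(log_lines, markers):
    """Scan application logs for the lines that prove a feature is active.

    Which markers to look for must come from the development team; the
    strings used here are hypothetical.
    """
    return {marker: any(marker in line for line in log_lines)
            for marker in markers}

logs = [
    "2024-01-15 10:02:11 INFO starting request handler",
    "2024-01-15 10:02:12 INFO new-signup-flow: rendering v2 form",
]
print(feature_markers_seen(logs, ["new-signup-flow", "legacy-signup"]))
# prints {'new-signup-flow': True, 'legacy-signup': False}
```

A result like this answers both questions at once: the new code path is running, and the old one is not, which is especially useful for refactors with no visible interface change.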
From a project point of view, getting the first test running is on the critical path. Being unable to start testing delays everything else, so make sure you complete that early. Don’t spend ages getting everything in place – finishing six other projects, say, so that a large team of testers is ready to descend on a feature – only to find that it’s not working and you can’t test it at all. Check the feature early to make sure the functionality is basically in place, and quickly bounce it back to the developer if they need to make any changes. Once that first test has passed, you can leave the feature for a while until you can test it properly. But make sure it’s ready first.
Mapping out new features
To start exploratory testing, you need three things: confidence that the code is stable enough to test, the required versions running, and the correct configuration in place. Once those are ready, exploratory testing can begin.
It’s important to keep in mind the purpose of exploratory testing. This is not detailed testing with results you will rely on in the future. Everything you do during exploratory testing is likely to be repeated later in a more formal round of testing, so exploratory tests should be time-limited. It is easy to waste time duplicating effort between this and later test rounds. You should only test until you have met the three goals of exploratory testing:
- To learn about the feature to plan further testing.
- To identify the tools and knowledge you need to perform further testing.
- To uncover bugs that block further testing.
Firstly, and most importantly, you can learn about the feature to prepare its specification. In an ideal world, you would prepare the feature specification in advance, and the implementation would match it exactly. However, during development, there are often changes; for instance, some functions might be postponed until later releases. Sometimes, later discussions and changes don’t make it into the specification, especially about the user interface. Product specifications often don’t detail error cases, security, and loading behavior. Exploratory testing is your chance to see which changes made it into this release, which parts of the feature were dropped, and which were added.
Good exploratory testing will check every screen, read every field, enter details in every input, and press every button at least once. Don’t aim to cover all possible input values or perform exhaustive testing on the functionality; just see as much as possible. Some examples of areas you should aim to cover are as follows:
- Loading every page/screen
- Entering details into every input
- Using every function
- Transitioning through every state (dialing, ringing, on-call, hanging up, for example, or signing up, awaiting verification, verified, logged in, and so on)
- Checking user-visible outputs
- Checking internal state (via logs, database contents, and more)
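The state-transition item in the list above can be sketched as a simple tour that exercises every transition at least once. The states here are the illustrative sign-up flow from the list, not a real product's state machine.

```python
# A sketch of touring every state transition once, using the sign-up flow
# mentioned above. The states and transitions are illustrative only.

TRANSITIONS = {
    "start": {"sign_up": "awaiting_verification"},
    "awaiting_verification": {"verify": "verified"},
    "verified": {"log_in": "logged_in"},
    "logged_in": {"log_out": "verified"},
}

def tour(transitions, start="start"):
    """Walk each transition once, returning the (state, action) pairs visited."""
    visited, frontier, seen_states = [], [start], set()
    while frontier:
        state = frontier.pop()
        if state in seen_states:
            continue  # Don't re-tour a state we've already explored
        seen_states.add(state)
        for action, target in transitions.get(state, {}).items():
            visited.append((state, action))
            frontier.append(target)
    return visited
```

Listing the transitions explicitly, even informally, makes it obvious which paths your exploratory tour has covered and which it has missed.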
By touring the feature, you can find out how it works for real and bring the specification to life. It is much easier to find issues and think through consequences when you have a working feature in front of you than it was for the developers and product owners, who could only imagine how it would look. Make the most of that advantage.
It may be that some aspects of the feature cannot be used yet, for instance, if it is an API and you need to implement a client to drive it, or if you need to generate specific datasets before you can exercise it. This is the second aim of exploratory testing: to discover what you are missing to perform detailed testing in practice. Hopefully, this was clear from the initial feature specification and test planning, but exploratory testing is a vital check that those plans work in practice, and it will reveal any alterations you need to make.
By the end of this testing, you should know all the configuration options relevant to this feature and their effects. That will be vital to map out the dependent and independent variables for comprehensive functional testing. For instance, tracking a user’s age may just be for information and not change any behavior of this feature or product. It is written to the database and read back only to be displayed to the user. In that case, it is independent of other aspects of the feature. Or it may be that certain options or behaviors only appear for users of certain ages. Then, you will need to check each combination of age and those features to ensure correct behavior in each case. This is your chance to see those interactions.
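The age example can be sketched in code. The rules below are hypothetical: here, age is display-only unless a parental-controls option makes it a dependent variable, in which case every combination needs checking.

```python
# Sketch of mapping dependent and independent variables, using the age
# example above. visible_options and its rules are hypothetical.

from itertools import product

def visible_options(age, parental_controls):
    """Hypothetical feature behavior: chat is hidden for under-13 users
    when parental controls are on; age is otherwise display-only."""
    options = {"profile", "feed"}
    if not (parental_controls and age < 13):
        options.add("chat")
    return options

# Because age interacts with parental_controls, test every combination.
cases = list(product([10, 13, 30], [False, True]))
results = {case: visible_options(*case) for case in cases}
```

If exploratory testing had shown age to be truly independent, the combinations would collapse and a single spot-check per variable would suffice, which is exactly why mapping these interactions early saves test effort later.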
The third aim of exploratory testing is to find any major issues that will block further testing. In the same way that getting to test number one is on the critical path for releasing a feature, so is fixing any bugs serious enough to block entire sections of the test plan. For instance, if a page doesn’t load at all, you can’t test any of the inputs or outputs on it, or if an application crashes early in its use, you cannot try scenarios after that. Exploratory testing is your chance to quickly check that everything is testable and ready to go. You can raise issues with the development team while you are still preparing comprehensive tests, rather than discovering them later and delaying the project once you are ready to begin.
By the end of exploratory testing, the aim is to be able to conduct more thorough testing later. You should come away with detailed knowledge of the interface, all inputs and outputs that were implemented in this version, the configuration options, and the basic functionality. What’s missing are the details and interactions, which we will discuss in Chapter 5, Black-Box Functional Testing.
Exploratory testing is a quick, fun way to get to know a new feature, which doesn’t require the rigor of the full test plans you will design later. The challenge at this early stage is that you know very little about the product, but you can use that to your advantage, as described in the next section.
Using your naivety while testing
Exploratory testing is an ideal opportunity for feedback about the usability of a feature. When you start testing, you may be the first person to see this feature who wasn’t part of its design. To begin with, that lack of experience is a strength. Your task, as a tester, is to notice and explore possibilities, avoiding the prejudgment and expectations that come from greater experience in this area.
Later, it is important to understand the design and implementation of the code. The technique of white-box testing, described in Chapter 6, White-Box Functional Testing, requires you to check all code paths and try each special case using knowledge of the system. However, at the outset, this lack of knowledge is important to discover surprising or unexpected results, especially for user-facing interfaces and functionality. Anything that surprises you may also surprise your customers, so look out for anything that wasn’t obvious.
Keep track of anything you had trouble finding, any text you had to read twice, and anything that caught you by surprise while using the feature. That is all vital feedback for user experience design. Don’t assume it’s your fault if you didn’t understand something on first use – it may be the designer’s fault for not making it clearer. Some topics are inherently complex and require background knowledge before users will understand; however, any users of your product will probably have background knowledge of its existing functionality or other products within this domain. A well-designed interface should be able to build on that knowledge intuitively. If that’s not the case, then that’s a defect too. See Chapter 8, User Experience Testing, to learn more about usability testing.
The world of user experience has no firm answers, and just because something wasn’t obvious to you doesn’t mean it will confuse everyone else. Unlike other parts of testing, where there should be a clear answer as to whether the product meets the specification, user experience is much more subjective. It is worth raising any points you found challenging to gather others’ opinions. If enough people agree that something is confusing, that is a good argument for changing it. You have to highlight those issues to start that discussion and decide on improvements.
Armed with this naïve approach, open to possibilities, and examining each one, you should aim to touch all the major functions of your new feature. From there, you can complete a miniature version of new feature testing, using all the different types of testing available. They are described in more detail in all the subsequent chapters of this book, but this is your chance to perform a cut-down version of different types of testing quickly and early in the project, as we’ll learn in the next section.
Running complete exploratory testing
Exploratory testing is a smaller version of all the following testing, pulling out a few of the most important tests from each section. It is introductory testing, which means it briefly covers many areas in the same way that this chapter has introduced this book. Exploratory testing should cover the following aspects:
- Black-box functional testing
- White-box functional testing
- Error cases
- User experience testing
- Security testing
- Tests for maintainability
- Non-functional testing
With less time available for exploratory testing than in the complete test plan, you should prioritize these aspects carefully.
As we’ve seen already, exploratory testing starts with black-box functional testing, using no knowledge of the underlying implementation, and concentrating only on the working cases. This should form the majority of the work you do here.
While there are advantages to naivety about how a feature is implemented, even during exploratory testing, it is helpful to know some details of its architecture and design. If one minor aspect of the feature requires a large section of code, then you should test that aspect far more than its use might suggest. If there’s more code required for its implementation, then there is more to go wrong and a higher chance of defects. So, even while exploring, it’s essential to do some white-box testing, informed by knowledge of the feature’s design. This should come after black-box testing so that you can make the most of your lack of assumptions about its behavior first.
You can also start trying error cases during exploratory testing, for instance, by deliberately leaving fields blank, entering invalid information, or attempting to trigger errors. This shouldn’t be the focus – making sure the feature works comes first. But even this early, you can probe the behavior with incorrect inputs.
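A quick error-case probe can be as simple as a short list of deliberately bad inputs. The sign-up form and its `validate_signup` function below are hypothetical; the point is that each probe should be rejected cleanly rather than crashing or being silently accepted.

```python
# A quick error-case probe, sketched against a hypothetical sign-up form
# validator. Blank fields and invalid values should be rejected cleanly.

def validate_signup(username, email):
    """Hypothetical validator for the form under test."""
    errors = []
    if not username.strip():
        errors.append("username is blank")
    if "@" not in email:
        errors.append("email is invalid")
    return errors

probes = [
    ("", "alice@example.com"),   # blank field
    ("alice", "not-an-email"),   # invalid information
    ("   ", ""),                 # whitespace and blank together
]
# Any probe that produces no errors is a silently accepted bad input.
failures = [p for p in probes if not validate_signup(*p)]
```

Even this small set exercises the blank and invalid cases mentioned above; the full matrix of invalid inputs belongs in the later, comprehensive test plan.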
As described previously, exploratory testing is a great time for finding usability issues when a feature is new to you before you learn how it works, inside and out. Feedback on usability should be a key deliverable from all exploratory testing.
You can also test security during the exploratory phase. Again, you need to prioritize these tests – the most obvious checks are quick to run: SQL injection and scripting attacks, or attempting to access information without the necessary permissions. Are the required certificates in place, and is the traffic encrypted? See Chapter 9, Security Testing, for more details on how to run those kinds of tests. Major deficiencies such as those can be easily spotted and raised early. Security shouldn’t be a focus for exploratory testing compared to functional and usability testing, but this is where it can start.
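A quick SQL injection probe simply feeds classic attack strings to an input and checks that they behave as inert data. The `search_users` function below is a hypothetical example of a safely parameterized query, shown against an in-memory database so the probe is self-contained.

```python
# Sketch of a quick injection probe: classic attack strings should return
# no rows, not all rows. search_users is a hypothetical example of a
# parameterized query.

import sqlite3

def search_users(conn, name):
    # Parameterized query: user input is never spliced into the SQL string.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

# Classic probe strings; a safe implementation treats them as literals.
probes = ["' OR '1'='1", "alice'; DROP TABLE users; --"]
leaks = [p for p in probes if search_users(conn, p)]
```

If a probe string ever returns data, or worse, alters the database, you have found a major deficiency worth raising immediately, long before formal security testing begins.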
Exploratory testing can also start to examine the maintainability of the code. How useful are the logs? Do they record the application’s actions? Are the relevant events generated? What monitoring is in place for this service? Early in the project, in the first version of the code, the answer might be as simple as noting that events are not implemented yet, or recording the gaps within them. This is the time to start writing the list of those requirements. Maintainability can be low on the priority list for a project, so it’s important to note the requirements early.
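Those log checks can themselves be sketched as a small script: exercise the feature, then confirm the events you will need for live debugging actually appear. The `place_order` function and the event names here are hypothetical.

```python
# Sketch of an early maintainability check: after exercising the feature,
# confirm the required log events appear. place_order and the event
# names are hypothetical.

import logging
from io import StringIO

log_buffer = StringIO()
logging.basicConfig(stream=log_buffer, level=logging.INFO, force=True)

def place_order(item):
    """Stand-in for the feature under test, including its logging."""
    logging.info("order.received item=%s", item)
    logging.info("order.confirmed item=%s", item)

place_order("widget")
output = log_buffer.getvalue()

# The maintainability requirements list: which events must be present?
required_events = ["order.received", "order.confirmed"]
missing = [e for e in required_events if e not in output]
```

Anything left in `missing` becomes an entry on the requirements list you hand back to the development team.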
Exploratory testing does not typically cover non-functional testing since that often requires scripts and tools to exercise, which takes longer than the available time. However, if you have tools prepared from previous testing and this feature is sufficiently similar that you can quickly modify them, you can run initial performance and reliability testing (see Chapter 12, Load Testing, for more details). Again, this isn’t a priority compared to the usability and functional testing elements.
As you go through these types of testing, note down ideas and areas that you should cover in the complete test plan. Exploratory testing is a time to try out some ideas, but also to identify what you want to spend more time on later. You won’t be able to cover everything initially, so record what you have missed so that you can return to it.
This is also your chance to uncover trends within the bugs. Defects aren’t randomly scattered through a product; they are grouped. So, check what parts of this feature or product are particularly suffering from issues. Adapt your test plans to add extra detail in those areas, to find further problems. This lets your testing be reactive to the feature, to give the best chance of finding bugs. See Chapter 5, Black-Box Functional Testing, for more on reactive testing.
By the end of this testing, you should have a very good idea of how this feature works and have ensured it doesn’t have any major issues. Usually, that will prepare you for more detailed testing to follow, but sometimes, it is all that is possible, as the next section explains.
Using exploratory testing by necessity
While normally used at the start of testing large features, exploratory testing is also vital when you don’t have time for anything else. As we’ve seen, exploratory testing combines test design, execution, and interpretation into a single step and is highly efficient at the cost of having less planning and documentation. In the hands of an experienced tester, it can provide broad coverage quickly and rapidly gain confidence in a change.
This is useful toward the end of a project when there are minor changes to fix critical bugs. You need to check those changes, but you must also perform regression testing to ensure that no other errors have been introduced. Regression testing involves carrying out tests on existing functionality to make sure it still works and hasn’t been broken. Exploratory testing mainly focuses on new behavior, but it can also check your product’s main functions.
Real-world example – Fixing XML
At the end of a long, waterfall development cycle that lasted over 6 months, we were almost ready to release a new version of our product. We found one last blocking bug that required a new build, so the development team made that change and provided us with the release candidate code. Unfortunately, they hadn’t only fixed the blocking bug. While in that area, the developer had noticed that our XML formatting was wrong and had taken the opportunity to fix it.
The developer was right. The formatting was incorrect, but he hadn’t realized that the system that read that data relied on that incorrectness. By fixing the formatting, the other system could no longer read the message. That critical bug was introduced at the end of the release cycle, but we quickly found it through exploratory testing. It delayed the whole project since everything had to stop to wait for the fix, but it didn’t take long to roll back the change.
When pressed for time, exploratory testing is the fastest way to get broad coverage of a change. You can rapidly gain confidence that a feature is working but beware of overconfidence if these are the only tests you have run. There can still be blocking issues even after exploratory testing has passed. Non-functional tests are often poorly covered by exploratory testing: does the feature work on all web browsers and operating system versions? Does it work with a high load? Does it handle poor network conditions? Even if the feature works well in one tester’s environment, real-world situations may hit problems.
So, be aware that exploratory testing is not comprehensive. Rather than putting more effort into this form of testing, use it to plan exhaustive test plans that will be documented and automated. Even when you have more time, you should always limit how long you spend on this form of testing.
Checking exploratory test results
While the inputs of exploratory testing are often clear – use all the new features, choose all the new options – checking the resulting behavior can be harder. If parts of the user interface haven’t been implemented yet, there may be no visible changes. The only available output might be database fields or log lines showing that the internal state has changed. If there are user-visible changes, they might be only incidental – for instance, an interface change that merely indicates another change has occurred.
Sometimes, complete testing is impossible until some other change has been made. One part of the system is ready, but another part that uses it is not, for example. In that case, you can check whether the new functionality is ready and that it hasn’t broken any existing behavior, but full system testing will have to wait until all the elements can be used together. If so, you can complete some testing, and leave a task to complete it when the code is ready.
Early in the project, there may not be much documentation, or the specifications might not go into sufficient detail to describe the database implementation and logging output. Either way, you need to have a conversation with the developer to check what exact changes you expect to see in this code version. Only armed with that information can you be confident that not only are you exercising all the variables but that they also have their intended effect.
As well as the outputs the development team suggests, keep your eyes open for anything else that is strange or unusual. Be curious, both in the tests you run and in the checks you make. Curiosity is vital throughout testing, but especially in exploratory testing, as covered in the next section.
Using curiosity in testing
The approach to take throughout exploratory testing is that of maximum curiosity. Constantly ask yourself, “I wonder what happens if I do that?” All the questions you can think of are valuable while testing; testers are there to ask the right questions.
All testing requires that same curiosity, but you can see it most clearly in exploratory testing, where there is no test plan, and you only have your curiosity to work from.
All curiosity is not equal, however, and exploratory testing is an area that benefits massively from experience. With only a limited time, you must aim directly for the weak spots in your system to avoid wasting time testing mature code that has already been well covered in previous cycles or with automated testing.
One area of potential weakness is the new change that hasn’t had any test coverage before. But what existing features should you combine with these new ones to find problems? This is where your experience comes into play. What are the weaknesses of your system? Where have bugs been seen before?
The International Software Testing Qualifications Board (ISTQB) is an organization dedicated to improving the software testing practice, which runs certification schemes for practitioners. They note that bugs are not evenly spread throughout code but tend to cluster in certain areas.
Examples of classic areas for problems that affect many systems are as follows:
- Running on low specification machines with limited memory or CPU
- Running on low-resolution screens or small window sizes
- Behavior after the upgrade when you transition to using the new feature
- Problems around boundaries and edge conditions
- Problems with times and time zones
- Problems with text inputs (blank, too long, Unicode characters, SQL injection, and more)
- Backup and restore
- Behavior under poor network conditions
In addition to the preceding problems, track all the weaknesses that affect your particular application or web page. Create a document that lists the areas of weakness that you’ve hit in the past, and keep it updated as you discover new issues.
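The text-input weaknesses in the list above can be captured as a reusable probe list you apply to any free-text field. The `accepts` validator and its length limit below are hypothetical examples of the field under test.

```python
# The classic text-input weaknesses, captured as a reusable probe list.
# The accepts() validator and its 255-character limit are hypothetical.

TEXT_PROBES = [
    "",                            # blank
    " " * 5,                       # whitespace only
    "a" * 10_000,                  # too long
    "Unicode: \u00dcn\u00efc\u00f8d\u00e9 \u540d\u524d",  # non-ASCII characters
    "' OR '1'='1",                 # SQL injection
    "<script>alert(1)</script>",   # script injection
]

def accepts(text, max_len=255):
    """Hypothetical field validator: non-blank, within the length limit."""
    return bool(text.strip()) and len(text) <= max_len

# Which probes does this field reject? Blank, whitespace, and too-long
# inputs should fail; the rest should be accepted but stored safely.
rejected = [t for t in TEXT_PROBES if not accepts(t)]
```

Keeping such a list in version control alongside your weakness document means every new free-text field gets the same battery of probes with no extra design effort.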
This technique is known as error guessing, as described in the ISTQB syllabus. It stresses that the tester’s experience is needed to anticipate where the errors might fall. Often, you find the best bugs when you go off the test plan. This is a corollary to the ISTQB testing principle of the pesticide paradox – the more you test in one particular way, the fewer bugs you find, analogous to a pesticide that gradually kills fewer pests: only those least affected by it thrive. Tests that you have run many times before are unlikely to find an issue this time around, and the ideas behind them are well known to the development and test teams, so the developers will remember to handle those cases. You’re more likely to find bugs with new ideas for tests and in new combinations of functionality that haven’t been previously considered.
Because of the dependence on experience for exploratory testing, it is best carried out by senior testers. Junior testers can cover other parts of the testing process, such as implementing test plans designed by others or regression testing. For exploratory testing, make sure experienced team members lead the way. If you are a test manager, you can make that decision; if you are a tester who has been given the task, let others know if there is someone more suitable. Experience can be domain-specific, of course. While you may be highly experienced in one area, perhaps someone else is better placed to do exploratory testing for this particular feature.
Experience is also important when it comes to choosing what to check. While some errors are visible on the user interface, others may be subtle, such as warnings in the logs or database fields being written incorrectly. As well as using your curiosity when deciding what to do, ensure you scour the system for issues and items you can check.
Curiosity is vital throughout the test process, not just during exploratory testing. There is no hard divide between ad hoc testing and that which follows a plan. Instead, it’s a spectrum. While exploratory testing has little structure, lists of likely weaknesses such as the preceding one can also help to guide it. So, even exploratory testing can have some documentation.
Functional testing can be descriptive – for example, a test such as “Upload a .jpeg image”, which means that each tester will choose a different image to use. That reduces the reproducibility of the test while broadening coverage. Alternatively, tests can be prescriptive, describing exactly what to do – for example, “Upload image test1.jpeg”. Even within these tests, the environment can change, the state of the system may be different, or you might run tasks in an unusual order. Always look out for new ways to test and new things to try.
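The descriptive/prescriptive distinction can be made concrete in a test framework. Both tests below are illustrative; `upload_image` is a hypothetical stand-in for the feature under test.

```python
# Descriptive versus prescriptive tests, sketched as test functions.
# upload_image is a hypothetical stand-in for the feature under test.

import random

SAMPLE_IMAGES = ["cat.jpeg", "dog.jpeg", "chart.jpeg"]

def upload_image(name):
    """Stand-in for the feature: accept only .jpeg files."""
    return name.endswith(".jpeg")

def test_upload_descriptive():
    # Descriptive: "Upload a .jpeg image" - each run may pick a different
    # file, broadening coverage at the cost of reproducibility.
    assert upload_image(random.choice(SAMPLE_IMAGES))

def test_upload_prescriptive():
    # Prescriptive: "Upload image test1.jpeg" - exactly reproducible.
    assert upload_image("test1.jpeg")
```

A balanced test plan mixes both styles: prescriptive tests for bugs that must stay fixed, descriptive tests to keep probing new inputs on every run.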
Testing is an arms race of creativity, with testers trying to find new ways to break products and developers inadvertently adding new problems and issues. This is the fun and the skill of testing, so go crazy and enjoy yourself! When testing a new feature, you are the first to do something no one has done before. You are the vanguard of the army, charging into battle; you are the pioneer heading into an unknown land. So, use your experience, prepare for surprises, and keep your eyes open for the unexpected.
This chapter described exploratory testing – when it should be carried out and by whom. We saw where exploratory testing fits into the development cycle, and that it is a powerful tool to find issues soon after code has first been implemented because it is quick and requires little planning. However, it needs a senior engineer to do it well, is not widely reviewed, and doesn’t produce extensive documentation. It can be hard to judge the coverage that exploratory testing provides, and non-functional tests may receive little coverage. That leaves the risk of issues in real-world usage even after exploratory testing has passed. The aim of exploratory testing should be to understand the feature better so that you can prepare comprehensive test plans later.
This chapter has shown when to start exploratory testing, not beginning too early when the code is still in flux, and the steps to get the first test running successfully. We’ve seen the importance of curiosity and naivety at the start of the test process, both in choosing what to test and checking the outcomes. Finally, we learned how to map out a feature, ready to perform a miniature version of the complete test process.
The next chapter takes the experience from exploratory testing and applies it to writing a detailed feature specification that will guide all the subsequent testing.