I am often asked about the difference between the record-and-playback testing approach and the data-driven testing methodology.
This post outlines the difference between the two, and illustrates why one of them is killing your productivity.
Record-and-playback testing methods were developed in the 1980s, and were a great use of technology at the time. The tool allows business users and/or quality assurance testers to walk through a business process or test flow one step at a time while it records each screen, mouse click, and data entry the user encounters.
The result is a test case that follows a single path through the application under test, with very specific data for that path. To capture a different path through the process, the user has to walk through it again, which is required in nearly every case.
Compared to manual testing, this was clearly an improvement and gave many organizations their first taste of automated testing.
Sounds great. So what’s the problem?
If your processes are very simple and rarely change, this could be an excellent solution. In most organizations, however, the applications that require comprehensive testing are complex applications that change frequently.
Imagine, in the scenario above, what would happen if a field were added to a screen that had already been recorded. Or if the test path changed? Or a data-dependent operation was modified? And imagine if you had to run that test 50 different times with 50 different sets of data. You guessed it: you would have to re-record the process each time to get that single-path test case.
Two fundamental problems with this approach are:
PATH-LOCKED: With record and playback, you are immediately “path-locked”: the test cases it creates are recordings of a single path through a business process. If one small part of that flow changes, such as a new field on one of the application screens in that flow, the scripts have to be either found and edited, or completely re-recorded. Now consider how often your applications under test actually change. For most companies it is hundreds of times a year.
This spawns a related challenge: a vast body of potentially useless recordings and no easy way of knowing which are valid at any given moment. Companies often end up with different people doing this work, which means file names are frequently inconsistent, making it harder to find the right files. Rarely does anyone go back and archive or dispose of outdated recordings, leaving you with a multitude of test cases and no true way of knowing which are still valid.
Unfortunately, I have seen many companies simply scrap their record-and-playback assets, along with the time, effort, and expertise that went into creating them, because it is easier to start over.
TEST COVERAGE: Record and playback makes it difficult to get a handle on your test coverage. It is nearly impossible, without a lot of manual work (which is exactly what most companies are trying to get away from), to lay out the business process visually and make sure you have the right test coverage in the right areas.
Visibility into the breadth and depth of test coverage is crucial to ensuring defects are found. Ensuring adequate test coverage is even more important in highly regulated industries, in companies that rely on their applications under test to run their businesses, and in organizations that require a high degree of accuracy. In most cases, that describes every medium to large business out there.
What is data-driven testing?
Data-driven testing, sometimes known as keyword-based testing, is a method that separates test logic from test data, so the same test can be driven by many different data sets.
For example, when using TurnKey’s cFactory for your automated test creation and maintenance, you’d click the “learn” button on every screen as you walk through the process. Test components are automatically built for every element on the screen the user can interact with, including checkboxes, data verifications, order of operations, buttons, and more.
After you have walked through your process, an Excel datasheet is automatically created showing every component field in the process, allowing you to drive any combination of data through your test. Each component has a screenshot attached, so you know exactly where you are in the application.
Once the process has been “learned”, you can now execute multiple scenarios through the business components, and multiple data scenarios at the test case level. This is the essence of data-driven testing.
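To make the idea concrete, here is a minimal sketch of data-driven testing in plain Python. This is illustrative only and not cFactory’s actual API: the `login` and `place_order` functions are hypothetical stand-ins for “learned” business components, and the list of dictionaries stands in for the generated Excel datasheet, with one row per data scenario.

```python
# Hypothetical "learned" components (stand-ins; not a real product API).

def login(user: str, password: str) -> bool:
    """Pretend component: succeeds for any non-empty credentials."""
    return bool(user) and bool(password)

def place_order(sku: str, qty: int) -> str:
    """Pretend component: accepts any order with a positive quantity."""
    return "accepted" if qty > 0 else "rejected"

# The "datasheet": each row drives one complete pass through the same flow.
# Adding a scenario means adding a row, not re-recording the process.
DATASHEET = [
    {"user": "alice", "password": "pw1", "sku": "A-100", "qty": 3, "expect": "accepted"},
    {"user": "bob",   "password": "pw2", "sku": "B-200", "qty": 0, "expect": "rejected"},
]

def run_scenarios(rows):
    """Execute the same test logic once per data row; return pass/fail per row."""
    results = []
    for row in rows:
        assert login(row["user"], row["password"]), "login failed"
        outcome = place_order(row["sku"], row["qty"])
        results.append(outcome == row["expect"])
    return results

print(run_scenarios(DATASHEET))
```

The point of the design is that the test logic is written once; scaling from 2 scenarios to 50 is a matter of adding datasheet rows, and a change to a component is made in one place rather than in 50 recordings.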
Data-driven component-based testing has enormous cost-saving benefits, including:
- 90% increase in test coverage: companies have seen a 90% increase in their test coverage simply by having cFactory automatically create their test cases.
- Test cycles reduced from months to days: Almac went from a 3-month to a 3-day test cycle with automated maintenance, a patented process by which cFactory detects changes in your application and automatically updates your test cases.
- No programming required: the user interface is designed for non-technical users (most often the people who are closest to the application under test).
When people ask me about the difference between record-and-playback and data-driven testing technologies, I sometimes think about the difference between linear, fixed (path-locked) cassette tape recordings and the decidedly non-linear world of digital music. It’s kind of like that.