The notion of IT testing is as old as the first bug, and that goes way back to the predecessors of the computers we know today. Back in the 1940s, computers were little more than super calculators, found mainly in universities and other institutes of higher learning. In those days, a scientist by the name of Grace Murray Hopper was working on the Harvard Mark II Aiken Relay Calculator. On September 9, 1947, as Hopper's technicians were investigating the root cause of a system malfunction, they uncovered a moth trapped in a relay. They removed the bug, logged it, and affixed it to the report with a note stating "first actual case of bug being found" — and this is where the term computer bug is popularly said to have originated. That report is on display as an important piece of IT history at the Smithsonian Institution.
From the time we began creating programs via punch cards and other early methods, testing has been part of the process, albeit far less defined than it is today.
Those early programs consisted of hundreds of thousands of lines of sequential code, and only a small subset of individuals knew the complex languages involved. As such, it often fell upon the developers themselves to test the functionality of the applications they had built. Seeing as these applications were used for very specific advanced calculations, this was not truly a problem at the time.
As we all know, IT evolved very rapidly, and the personal computer brought this technology to the end user's desk. In the early 80s, the PC and the programs that ran on it via DOS were playing an ever larger role in the daily work of many individuals. The arrival of the Windows environment and its point-and-click interface, followed rapidly by the expansion of the internet, pushed IT programs further and further from the science labs into our living rooms and beyond.
With this evolution of applications, it gradually became apparent that relying on the individuals who programmed them to properly test them was simply not sufficient. Having built the application with an understanding of the happy path, developers too often omitted the failure paths from testing, leaving errors to be dealt with by the end users. This fact drove the creation of independent QA organizations within IT departments, and the practice of Software Quality Assurance, or SQA for short, came to be. The more complex an application became from a business transactional perspective, and the more data-specific knowledge was needed, the more we relied on subject matter experts (SMEs) to conduct the complex test flows properly and efficiently.
This segregation of duties also became increasingly necessary due to compliance regulations such as the Sarbanes-Oxley Act of 2002, which obligated organizations to protect their investors' interests by safeguarding the production environment from potential system failures that could impact operations.
Complex ERP/CRM systems such as Oracle or SAP can become so critical to the organizations running them that what was acquired to help run the business more efficiently ends up so embedded that its proper functioning is essential. Naturally, this reinforces the necessity of formal, detailed testing of any change that could potentially impact these business processes in operation.
With the web so prevalent and necessary for marketing our brands, and with more and more individuals relying on mobile technologies to connect to the world, it is equally important that these customer-facing, end-user systems be properly and thoroughly tested.
With all these compounding factors, it is no wonder that many a CIO is concerned about how change is managed. The testing phase of a critical system implementation can easily account for half of a project's effort. The cost involved, in both time and money, is not negligible, and this is one of the main reasons that led the charge toward test automation.
Test automation has now been around for well over 20 years. It is no wonder that the early programmers tasked with the tedious and repetitive responsibility of testing would come up with an approach to automate the process — it's in their nature.
Everyone would agree that the notion of build once, reuse many times makes for a compelling business case. So the question begs to be asked: if automation has been around for over 20 years, why have so few organizations been truly successful in using it?
In my career, I've seen hundreds of organizations acquire automated testing technology; invest time, dollars, and effort in deploying and building out regression test beds; and ultimately watch their tests fail before reaching the end goal. Is this because they had the wrong individuals on the team? Is it a lack of the proper tool? Or is the organization simply missing the processes geared toward success? The answer is a complex one, and it varies from case to case. The reality is that there is a path to the successful deployment of an automated framework.
Naturally, different tool sets will require different best practices and team structures. A tool's proper use will depend on the proficiencies of the end users, the team's understanding of when and how to best leverage it, and its ongoing maintenance. Software testing solutions have gone through multiple evolutions, from purely manual approaches to full-fledged frameworks, but one thing is constant: without a proper methodology and best practices, the effort will ultimately fail.
At TurnKey, we have assisted organizations in defining their own methodologies to successfully and cost-efficiently build QA practices specific to their approach and environment. We can help you navigate the course to success as well — give us a call and let's get you on the path to success!