Improving Software Quality

Since 1968, end users have come to depend more and more on software, and their expectations for product quality have risen dramatically. Moreover, the pace of development has accelerated in the new millennium, thanks to the Internet, competition and the tools developers use. It is easier to write Java code and port it than it ever was to write C code and port it. The crop of rapid prototyping (scripting) languages like Python, Perl and Ruby makes it easy to build Web sites quickly. Databases have become commodities and don't need to be reinvented each time.

“QA is still a challenge, still generally left to the end, and the staff is treated as second-class citizens,” said Ed Hirgelt, manager of services development for Quest Software. However, because of the speed of development and time-to-market requirements, QA is becoming more visible. Test-driven development moves testing to earlier in the life cycle. Tools like JUnit and Ant make it easier to run tests as part of the nightly build process. The concept of a continuous build is helping produce reliable software.

Hirgelt characterizes a continuous build process as one in which a build is initiated when a developer commits code back to the source repository. The product is built and tests run automatically. Problems are caught sooner rather than later.
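The trigger logic Hirgelt describes can be sketched in a few lines of Python. This is only an illustration, not any particular CI product: the revision identifiers and the build/test commands are hypothetical, and a real continuous-build server wraps the same idea in repository polling, queueing and notification.

```python
import subprocess

def continuous_build(last_built_rev, current_rev, build_cmd, test_cmd):
    """Build and test whenever a new commit appears in the repository.

    Returns (built, tests_passed); (False, None) means nothing new to do.
    Illustrative sketch only -- real CI servers add polling and reporting.
    """
    if current_rev == last_built_rev:
        return (False, None)            # no new commit: nothing to do
    build = subprocess.run(build_cmd, capture_output=True)
    if build.returncode != 0:
        return (True, False)            # build broke: problem caught early
    tests = subprocess.run(test_cmd, capture_output=True)
    return (True, tests.returncode == 0)
```

The payoff is exactly the one Hirgelt names: a failing build or test surfaces at commit time, not weeks later.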

QA also has been changing as the result of such factors as the wind-down following Y2K and the subsequent business decline. As software companies faced hard times, one solution was to improve efficiency by hiring the most skilled testers available and automating as much testing as possible, according to Elfriede Dustin, internal SQA consultant for global security services at Symantec.

The loss of jobs following the dot-com implosion meant software companies went from having to hire practically anyone with a pulse in the late 1990s to the luxury of choosing from only the most highly qualified candidates. That change has affected who is being hired for QA positions. In some large companies, “coding skills are what you are judged and hired by, with testing skills coming in a distant second,” said Duri Price of Exceed Training, who has worked in the software QA field since 1992. Jeff Feldstein, who manages a team of 35 test engineers at Cisco Systems, concurred. He hires software engineers exclusively, and then sells them on test engineering.

“Testers need to be involved from the beginning of the development life cycle,” said Symantec’s Dustin. More important, however, is how much depends on the developers’ skills. The most efficient and knowledgeable testers cannot succeed if developers write poor software, or if ineffective development life cycles and processes are in place. If testing is the only quality phase in the QA process, it is at best a Band-Aid, applied too late in the development life cycle to make much of a quality difference. Testing is only one piece of the quality puzzle.
Agile Methods

The growing influence of agile processes has had direct and indirect consequences on quality. With the advent of Extreme Programming (XP) and the agile movement, testing has become more of a developer activity, said Dustin. Agile methodologies have provided a model for moving testing forward and putting more of the responsibility in the hands of developers.

“With the onset of agile methodologies comes the concept of test-driven development, which was introduced with Extreme Programming,” said Bob Galen, a senior QA manager at Thomson-Dialog. The principle is to design tests before designing code, usually using a unit-testing framework such as JUnit or xUnit to help support the practice. Test-driven development has gone beyond XP to become a mainstream development practice, he noted. “It has also sensitized a new generation of software developers on the importance of and skills required to properly test software.”
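The article names JUnit; the same test-first rhythm looks like this in Python's built-in unittest, with a small `slugify` function invented for the example. The tests are written first and pin down the behavior; the implementation follows, written just to make them pass.

```python
import unittest

# Step 1: the tests exist before the code they describe.
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_joins_with_hyphens(self):
        self.assertEqual(slugify("Improving Software Quality"),
                         "improving-software-quality")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  QA  "), "qa")

# Step 2: the simplest implementation that makes the tests pass.
def slugify(title):
    return "-".join(title.lower().split())
```

Run with `python -m unittest` from the module's directory; the red-then-green cycle is the whole discipline, whatever the framework.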

Because testers now receive better code, they can’t simply repeat the tests the development team already ran. Galen said they must find other areas of value, such as getting involved at the front end with requirements definition and acceptance-test development, working in parallel with the development teams, and, at the back end, working with the customer to run acceptance testing and doing performance, load or usability testing.


Not everyone is convinced that agile processes are the answer. Dustin said she doubts that Extreme Programming can work in a development effort larger than 10 or so developers. Feldstein’s group hasn’t embraced agile methodologies (though he acknowledged that they have been used successfully elsewhere in Cisco) because he doesn’t see them as a way to get high-quality software, but as a way to get software fast. “It’s not clear that agile puts quality at the forefront,” he said. “Getting it out quickly isn’t a priority for us. Getting it right is.”

At the Cisco facility where Feldstein works, testers become involved during the development of the product requirements document. “Marketing, development and test are all equal in the team, and they all get involved early on,” he explained. The whole team owns the quality, and the whole team decides when to ship. The process is requirements-driven, he said, and they don’t need test-driven development. He also noted that the processes are constantly being refined.

“Once developers have committed to a schedule, which occurs when the functionality spec is complete,” he said, “they tell you what the API looks like. We can start coding simultaneously. We do unit testing on the test software.” When a stand-alone component is complete, it’s handed off for functional and performance testing. A stand-alone component is complete after developers have done unit testing, and integration testing between components, and have established baseline performance.
Automation

Another agile community influence has been to drive testers toward automation, according to Thomson-Dialog’s Galen. “The notion of complete automated unit-testing capabilities is carrying over into an expectation of general automated testing. Why have automated unit testing and the increased flexibility and safety of executing them, when you have to manually test the application from a QA point of view? It simply doesn’t make sense.” He said that there is pressure for testers to create high degrees of automation leveraging agile development practices.

The tools are evolving from capture/playback methods toward alternative methods of driving tests that are longer lived and require less maintenance, said Galen. Concepts like “keyword driven,” “model driven,” “coverage driven” and “database driven” are coming into play.
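The keyword-driven idea Galen mentions can be reduced to a toy: a test is a data table of keywords and arguments, and a small interpreter dispatches each row to a handler function. The calculator handlers below are invented for illustration; the point is that when the system under test changes, only the handlers need maintenance, not every recorded script.

```python
# Keyword-driven testing: the test itself is data, interpreted by handlers.
def make_runner(handlers):
    def run(table):
        results = []
        for keyword, *args in table:
            results.append(handlers[keyword](*args))
        return results
    return run

# Hypothetical handlers for a trivial calculator under test.
state = {"value": 0}
handlers = {
    "enter":  lambda n: state.update(value=n) or True,
    "add":    lambda n: state.update(value=state["value"] + n) or True,
    "verify": lambda expected: state["value"] == expected,
}

run = make_runner(handlers)
# The "test case" is just a table that non-programmers could maintain.
outcome = run([("enter", 2), ("add", 3), ("verify", 5)])
```

This is the longevity argument in miniature: the table outlives the capture/playback script because it records intent, not screen coordinates.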

The advantage of manual testing is that it’s not done the same way every time; in classic automation, the same set of test cases is executed identically every run. Model-based testing, a more recent development, reintroduces that variation by adding random behavior to the test automation software: tests run in a different order each time, so different areas of code are exercised.
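A minimal sketch of that randomness, assuming the test actions are independent of one another: shuffle their order under a recorded seed, so every failure remains reproducible while different seeds exercise different interleavings.

```python
import random

def run_in_random_order(tests, seed):
    """Execute independent test callables in a shuffled order.

    Recording the seed keeps failures reproducible; varying the seed
    varies the order, which is the point of adding randomness to
    otherwise fixed automated runs.
    """
    order = list(tests)
    random.Random(seed).shuffle(order)
    return [t() for t in order]
```

Full model-based tools go further, walking a state-machine model of the application; the shuffled-order version is only the simplest form of the idea.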

Automation is not altogether a panacea. Many companies are spending lots of time and effort on automation that doesn’t return the investment, said Exceed Training’s Price. “I’ve seen huge batches of automated tests that merely informed us that what we thought should work, did work. They verified. But they didn’t usually do a great job at testing.”

Price said that a competent tester with or without coding skills could often break the system manually. Coding skills aren’t the only skills needed, or even the most important ones, he insisted. The most important thing a tester needs to know how to do is figure out what to test, why to test it and how to test it. The implementation of the test will vary, but first you have to figure out your target. That skill set is getting less and less attention.

Exploratory Testing

Test automation, by executing a large number of planned tests, frees up resources to do more free-form exploratory testing, according to Price. Exploratory, or context-based, testing is a movement spearheaded by James Bach and articulated in the book he co-authored with Cem Kaner and Bret Pettichord, “Lessons Learned in Software Testing” (Wiley, 2001). Dion Johnson, an independent consultant who focuses on QA, QC, requirements analysis and process improvement, said that exploratory testing favors less up-front test planning and documentation and more execution of spur-of-the-moment testing ideas, based on a tester’s intuition about what is happening with the application in real time.

Galen characterized it as testing within the context presented for a given effort or project. For example, in a schedule-driven context, management might give a team two days in which to test a release of the product. That’s the context. A team operating in a planned, scripted model picks which test cases to run to fit within the two days; it might try some prioritization, but it stays within the bounds of plan-then-test.

Context-based and exploratory testing leverage the background, skills and experience of the testers in order to make better decisions under real conditions, rather than trying vainly to create a never-changing set of tests that anyone can run.

In an exploratory model, the team might first look at the intent of the release. If it is a beta test for a mortgage broker customer, the team might choose to test customer-centric and known problem areas in the allotted two days, using an intimate knowledge of the customer and of the product, to define the focus of testing. The team would spend little time planning but much more time reacting to the specific need.
Security

The Internet explosion has led to a new focus on security, usability and performance testing, said Johnson. Security testing offers a new set of challenges to testers since security flaws don’t necessarily affect how a system works from an application perspective. Functional testing will typically ferret out functional problems, but security vulnerabilities have nothing to do with application functionality.

Roger Thornton, founder and CTO of Fortify Software, said that application expertise doesn’t translate into security expertise; security expertise comes from the operations side of the company. Even recognizing that a security flaw exists can be difficult. There is a tendency to blame hackers rather than accept the fact that the software is vulnerable and needs to be fixed, and a security flaw may even be a feature of the software. “The hacker’s best friend used to be the sys admin. Now it’s the programmer,” he said, citing SQL injection as a way that theoretically inaccessible information can be retrieved.
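The SQL attack Thornton cites (commonly called SQL injection) is easy to make concrete with Python's sqlite3 module; the accounts table and payload below are invented for illustration. A query built by string concatenation lets attacker-supplied text rewrite the WHERE clause, while a parameterized query binds the same text as plain data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user TEXT, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 200)])

attack = "nobody' OR '1'='1"   # classic injection payload

# Vulnerable: the payload becomes part of the SQL, matching every row.
leaked = conn.execute(
    "SELECT * FROM accounts WHERE user = '%s'" % attack).fetchall()

# Safe: the driver binds the payload as a value, matching no account.
safe = conn.execute(
    "SELECT * FROM accounts WHERE user = ?", (attack,)).fetchall()
```

Functionally the two queries look identical to an application tester, which is exactly why this class of flaw slips past functional testing.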

Security testing involves testing for conditions that could lead to security breaches, and that means you have to know where to look. “Security bugs are hard to tease out,” said John Viega, co-author with Gary McGraw of “Building Secure Software: How to Avoid Security Problems the Right Way” (Addison-Wesley Professional, 2001). “You need to find vulnerabilities early.” He advocates using a static approach and examining source code to find potential vulnerabilities, such as buffer overflows. But he said that right now static testing suffers from too many false positives.
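Viega's static approach, reduced to a deliberate caricature: scan C source text for library calls whose unchecked use is a classic buffer-overflow source. Even this toy shows where the false positives he mentions come from, since it flags a call name inside a comment as readily as a real unchecked call.

```python
import re

# Calls whose unchecked use is a classic buffer-overflow source in C.
RISKY_CALLS = ("strcpy", "sprintf", "gets")

def scan_c_source(source):
    """Return (line_number, call) pairs for risky-looking calls.

    A real static analyzer tracks buffer sizes and data flow; this
    textual sketch flags every occurrence, safe or not -- the
    false-positive problem in miniature.
    """
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call in RISKY_CALLS:
            if re.search(r"\b%s\s*\(" % call, line):
                hits.append((lineno, call))
    return hits
```

Commercial and research tools of the period worked hard on exactly this gap between textual matching and genuine data-flow vulnerability.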
Performance Testing

Usability design and testing also have gained in importance as an approach for ensuring not only that applications meet customer expectations, but also that customers can navigate through them. And performance testing is important to ensure that the system can support the expected load, given the potentially high traffic on Internet applications. Feldstein runs a performance test bed in parallel with functional testing, and he monitors resources all the time.

When performance testing specialist Scott Barber, chief technology officer at PerfTestPlus, started doing performance testing five years ago, it was seen as a minor add-on service after the completion of functional testing, to validate the maximum supported load on the way to production. Today, performance testing is no longer considered an optional add-on tacked on at the end, although Barber said it is still five years behind functional testing in industry acceptance and maturity. And while performance testing is now considered early in the process and thought to be important, it generally remains outside the overall development process and doesn’t start until a beta release.
Open-Source Tools

Not only have processes and methods evolved, but the tools landscape has changed rapidly over the past few years as well, particularly with the availability of open-source tools. “Open source is invading the test space as aggressively as it is within mainstream development,” said Galen, “and we need to be adapting towards the evolution.”

Open-source tools range from development tools to new scripting languages to competitive frameworks to off-the-shelf automation development tools. Besides the xUnit tools, there are tools for acceptance testing, such as FitNesse. Scripting languages such as Python, Jython and Ruby are gaining ground on Perl. “As a tester, it’s not good enough any longer to know a little Perl,” he said. “You must understand a variety of scripting languages to leverage in your day-to-day tasks.”

Testing frameworks that are becoming available in open source rival the traditional automation tool vendor offerings in breadth and capabilities. Barber said that vendors of commercial test tools are “about to be in for a shock, as they find that the new tools are significantly cheaper, and as they learn that the open-source tools are competitive.”
A Glance Forward

Beyond the impact of open-source tools, the test-tool market is on the verge of a major overhaul, according to Barber. Testing tools are being integrated into development environments, a confirmation that the industry is beginning to acknowledge that developers need to be the first to test and that testers need to work hand-in-hand with developers.
