
Software Testing: How to Clean Up Your Code Before It Goes Out the Door (Page 1 of 2)

One of the many things that's changed in the software world with the advent of SaaS and computing in the cloud is QA and testing. 

In the "good" old days, developers would release a new version every 18 months (or annually, if they were really ambitious). That gave the R&D and QA teams plenty of time to bang the bejeezus out of the code to find flaws and fix them before product release.

Nowadays, monthly releases are common, and weekly updates are not uncommon, which puts a real strain on QA. How in the heck can you maintain quality, bug-free code with that kind of schedule?

Welcome to the buggy, boisterous new world.

We recently spoke with Andy Chou, co-founder and CTO of Coverity, a San Francisco-based developer that helps companies automate software testing.

Founded in 2003, Coverity is privately held. Revenues in fiscal 2011 were $49.5 million, and Chou says bookings are growing at a 20 to 30 percent clip. The company has 220 employees worldwide; in addition to San Francisco, Coverity has offices in Boston, Calgary, London, and Tokyo.

Bootstrapped for the first four years, Coverity received $22 million in funding from Foundation Capital and Benchmark Capital in 2007. "That was the only funding round," Chou says. "We have cash in the bank -- more than we had after that first round, in fact -- so we don't see the need for another round. We are expanding and growing and generating cash."

More than 1,000 customers use Coverity's suite of development testing products. In addition to crash-inducing bugs, these customers are looking for any code flaws that might cause unexpected behavior or security breaches. (SoftwareCEO subscribers can read a couple of internal case studies here and here.)

Chou was instrumental in developing the IP behind Coverity's technology while earning his Ph.D. in Computer Science from Stanford University, and he now has nearly a decade of experience watching and helping software developers manage testing and QA.

So, we figured he's a good source to talk about the problems (and solutions) for software companies trying to develop clean code.

Tip #1: Without a decent set of blueprints, your final structure is suspect.

"One of the most common ways to blow it, especially at the beginning, is to not nail down requirements and go into coding right away," Chou says. "To most developers, it seems like nothing is getting done until code is being written.

"But when you find out that requirements are not taken into account, schedules get very unpredictable, because you never really know when you're done."

Tip #2: Agile software development is a good start, but not a finish.

This need to start writing code ASAP has led to the rise in popularity of agile programming, Chou says. 

"The idea is to break out the work into small chunks, and build little fragments of functionality. Give those chunks to your users early, get their feedback early, and discover new requirements you wouldn't otherwise get. This allows you to reduce the amount of rework. 

"But in order to do that, you have to set up development process so that agile is possible; all phases have to happen in a compressed period of time."

Tip #3: Agile means a lot more testing, and that calls for automation.

"With traditional development you'll get one giant testing cycle near the end," says Chou. "The QA team tests the hell out of it, bugs get fixed, and you ship. 

"With agile, you test at each iteration. The difference is it's very hard to test every two weeks; it gets bigger and bigger and harder to test. You need to automate, and that’s part of what we do."

Tip #4: Nowadays, a QA team can't do it all alone.

"The trend has been toward more and more development testing," Chou says. "The development team itself will do some of the testing. That trend -- moving the testing upstream -- is being caused by agile development." 

But, we asked, doesn't that put the fox in the hen house, so to speak? Can you trust your developers to find the flaws in their own code?

"Developers understand their code and might choose to not test it very carefully or spend much time on it," says Chou. "That's why doing development testing does not eliminate the need for QA."

Tip #5: Understand your testing metrics.

There are lots of metrics that signal the effectiveness of your software testing; two of the most common are coverage and the number of defects introduced.

"Coverage means you measure how much of the code the testing actually runs," Chou says. "The goal here is to see if you're missing something. It isn't a perfect metric, because just because you've run it doesn't mean you've run it thoroughly -- but it's better than nothing.

"The number of defects introduced in each code change is something we can detect automatically. We can analyze the code without running it. We evaluate all the paths through the code, and if you have a large number of these introduced, it's a sign that the coding was not executed very carefully."
