Is anybody out there interested in / knowledgeable about means of quantifying risks arising from buggy software? My interest is mainly in how this is done in the banking industry, for instance measuring the risk of an algorithmic trading system in an investment bank issuing erroneous trades.
Replies
Compared with ten years ago, the software industry is both more complex and more mature. It is more mature not necessarily in terms of software quality itself, but in the development environment and the means available to catch defects. For example, there have been quite a lot of studies of software defect patterns over the last ten years, and there are quite a few excellent commercial and free tools to help you detect defects. These means are applied not only to regular software but also to trading software in the banking industry.
I actually used a tool called Fortify from HP to perform such an evaluation for a large bank in a project last month. The result was quite positive: we caught hundreds of security defects in the code base, such as buffer overflows, memory leaks, null dereferences, potential DoS vulnerabilities, etc. As a direct result of the project, the code base is getting safer while at the same time still meeting its critical performance requirements.
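For anyone unfamiliar with those defect classes, here is a minimal C sketch of the kind of pattern a static analyzer such as Fortify will flag, next to a safer version. The function names and the order-handling context are made up for illustration; this is not code from the actual project.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Flagged pattern: fixed-size buffer plus unbounded copy (buffer overflow),
   an unchecked malloc result (null dereference), and a missing free (leak). */
void risky(const char *order_id) {
    char buf[16];
    strcpy(buf, order_id);          /* overflows if order_id > 15 chars */
    char *msg = malloc(64);
    sprintf(msg, "order %s", buf);  /* null dereference if malloc failed */
    puts(msg);                      /* msg is never freed: memory leak */
}

/* Safer version: bounded copy, checked allocation, explicit free. */
void safer(const char *order_id) {
    char buf[16];
    snprintf(buf, sizeof buf, "%s", order_id);  /* always NUL-terminated */
    char *msg = malloc(64);
    if (msg == NULL)
        return;                     /* handle allocation failure */
    snprintf(msg, 64, "order %s", buf);
    puts(msg);
    free(msg);                      /* release the allocation */
}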
Software bugs are part of operational risk and can only be dealt with through procedural changes. They are certainly not quantifiable, unless you assign ratings to different severities (e.g. the severity of a wrong report column header vs. the severity of a total system crash vs. the severity of wrong numbers in a calculation).
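To make that severity-rating idea concrete, here is a minimal C sketch of a severity-weighted risk score. Every category, weight, and frequency below is a hypothetical placeholder, not an industry standard:

#include <stdio.h>

/* Hypothetical defect categories with made-up severity ratings
   (impact in arbitrary units) and observed monthly frequencies. */
struct bug_class {
    const char *name;
    double severity;   /* assigned rating: 1 = cosmetic, 100 = critical */
    double frequency;  /* observed defects per month */
};

int main(void) {
    struct bug_class classes[] = {
        { "wrong report column header", 1.0,   4.0 },
        { "total system crash",         50.0,  0.2 },
        { "wrong numbers in a calc",    100.0, 0.5 },
    };
    double score = 0.0;
    /* Expected-impact style score: sum of severity x frequency. */
    for (int i = 0; i < 3; i++) {
        score += classes[i].severity * classes[i].frequency;
        printf("%-28s contributes %6.1f\n",
               classes[i].name,
               classes[i].severity * classes[i].frequency);
    }
    printf("total monthly risk score: %.1f\n", score);
    return 0;
}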
As for op-risk procedure changes: quality measures (such as Six Sigma, for example) can be applied beautifully to software development and help to drastically reduce the number of bugs in software.
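For example, the standard Six Sigma defect metric, defects per million opportunities (DPMO), carries over directly to software. A quick C sketch; the counts below are invented purely for illustration:

#include <stdio.h>

/* DPMO = defects / (units * opportunities per unit) * 1,000,000.
   In software, a "unit" might be a release and an "opportunity"
   a tested requirement or function point; the numbers are made up. */
double dpmo(double defects, double units, double opportunities) {
    return defects / (units * opportunities) * 1e6;
}

int main(void) {
    /* Hypothetical: 12 defects found across 3 releases,
       each with 400 tested requirements. */
    double d = dpmo(12.0, 3.0, 400.0);
    printf("DPMO = %.0f\n", d);  /* prints 10000 */
    /* Six Sigma quality corresponds to 3.4 DPMO, so a process
       at 10000 DPMO has plenty of room to improve. */
    return 0;
}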
In my experience, software applications fall into three categories (my opinion; this may not be an industry standard):
-- commercial software.
Commercial software is the product of a software firm, so such products are generally of higher quality than software not built for sale. However, it cannot be ruled out that some small companies pay little attention to testing.
-- proprietary software.
"In-house", proprietary software are developed for a company's own use, and not for sale. Therefore, their "buggy" condition largely depends on relevant managers. Ignorant managers/bosses particularly from small companies may not follow software engineering's recommended methodology. And then "buggy" proprietary application becomes part of a company's operational risk source.
-- open-source software.
Open-source software is 'open', so whether it is buggy or not is also an open fact to careful observers.