Saturday, December 15, 2012

On Measuring Software

Note: this text was written some two years ago and somehow it was never published. Publishing it now in order to have a reference point for further posts.

Measuring software is something I've been wondering about for quite some time now. Nobody measures software as such, only its internal or external side effects. These observations are then analysed and conclusions are drawn from them. In most systems the interactions are so complex that measuring those side effects is argued to be a necessity.
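To make that concrete, here is a minimal sketch of my own (the endpoint URL and the acceptance threshold are hypothetical, not taken from any particular system): we never probe "the software itself", only an observable side effect such as response time, and then we decide what that observation means.

```python
import statistics
import time
import urllib.request

# Hypothetical endpoint; a placeholder, not any real system's API.
URL = "http://localhost:8080/health"

def sample_latency(n=20):
    """Collect n observations of one external side effect: response time."""
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as response:
            response.read()
        samples.append(time.perf_counter() - start)
    return samples

if __name__ == "__main__":
    latencies = sample_latency()
    # The conclusion drawn from the observations is itself a judgement call:
    # the 200 ms threshold below is an assumption, not a property of the software.
    print("median latency:", statistics.median(latencies))
    print("acceptable?", statistics.median(latencies) < 0.2)
```

Note that even in this tiny example the verdict comes from a threshold someone chose, not from the software itself.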

I had an interesting exchange of ideas the other day. Is there a measure that would tell us whether our software is good or bad, without doing any traditional testing? Can we just look at the software, running, and say "this is good" or alternatively "this is bad, not sufficient"? The conclusion we drew was that no, we still need to inspect the software's side effects (do testing) and then determine whether the software is ready to be deployed or not.

I am sure that there have been several scientific articles written about how to measure software from the point of view of completeness, functionality, applicability, unwanted functionality and so on. I am not even going to try to challenge these in any way here. I believe they are necessary tools for determining the state of software.

What I am after is a holistic set of tools and methods that could be used to make those determinations. Something we could use in our daily work to create software systems that would actually satisfy users' expectations.

Software theory teaches us that by applying the V-model you get satisfactory software most of the time. I bet that can even be proven and probably has been. Why, then, according to my observations, is there so much substandard software around and in use? Is testing really seen as an unnecessary investment of time and resources? Is it impossible to apply in all cases? Is it purely business reasons that force people and companies to release unfinished software?

Applying theory to practice is difficult because theories don't take into account the true complexity of real-world situations. Theories work on models that make assumptions about reality, and quite often the modelling leaves out details that break the applicability of the theory in question.

Despite that, software management relies on those models and insists that they must be applied. If the methods produce data that supports the determination that the software is indeed defective, then that becomes the truth. That truth becomes dogma. Game over.
