Wednesday, 16 January 2002

An exploration of, and notes on, the process of Test Scripting

Software Test Scripting

This essay explores test scripting in terms of software development, as the two processes are very similar and share many of the same techniques and pitfalls. It is primarily aimed at manual test script construction, because automated test script construction is software development.

This text will explore the process of test scripting. It will do this through analogy to the development process, and will try to show some of the things that testers can learn by studying development methodologies and best practices. The essay is not a complete description of test scripting; it is an introduction to test scripting and to the study of development techniques. But if you only take one thing from this essay, take this:
Test script development involves the same processes and techniques used when constructing software programs. Any experience that testers have had in the past when scripting is an experience which has been shared by the development teams when developing. Testers and developers are more alike than different. Testers should study developers and their techniques; developers should study testers and their techniques. We are all software engineers, we just have different areas of specialisation, and it is foolish to specialise without knowing the general techniques of your discipline.


A test script is the executable form of a test. It defines the set of actions to carry out in order to conduct a test, and it defines the expected outcomes and results that are used to identify any deviation of the program's actual behaviour from the behaviour modelled in the script (errors found during the course of that test). In essence it is a program written for a human computer (the tester) to execute.
Testing uses a lot of terminology. In this text I will use the following definitions:
  • Test case:
    • a logical description of a test. It details the purpose of the test and the derivation audit trail.
  • Test Script:
    • the physical, executable, description of the test case.
  • Automated test script
    • a program that implements a test.
The development life cycle has a number of processes and tasks that the development community is involved in:
  • Requirements
  • Design
  • Coding
  • Testing
Testers are familiar with each of these stages in the context of system development and its relationship to the construction of tests. However, a Test Script is a program, and as such it has a life cycle that parallels the system development life cycle in microcosm.

The Development Life Cycle


Requirements

Fortunately for testers, tests are derived before building scripts. The test description itself should contain the requirements for the test script. (The process of test case construction and the corresponding requirements analysis techniques are outside the scope of this text.)


Design

Test Script design involves the construction of an executable model which represents the usage of a system. It is an executable model because the model contains enough information to allow the tester to work through the model and, at any point, unambiguously know what they can do next.

Executable Models

Executable models use three main constructs:
  • Sequence: one action after another.
  • Selection: a choice between alternative actions.
  • Iteration: a repeated sequence or selection.
The example model uses all three:
  • The model consists of three main stages performed one after the other: Initialise, Body, and Terminate.
  • The Body consists of a selection between ‘Action 1’, ‘Action 2’ or ‘Action 3’.
  • The model will iterate while condition C1 is satisfied.
Or, representing the above diagram as a graph:
[image of a graph]
The graph provides us with the following two Meta paths:
  1. Initialize [Body (Action 1 | Action 2 | Action 3)]* Body Terminate
  2. Initialize Body Terminate
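Under one reading of the model (a Body that iterates zero or more times over a selection of the three actions), the expansion of a Meta path into concrete paths can be sketched in code. The `seq`/`sel`/`loop` grammar below is my own illustrative assumption, not a notation from any testing tool:

```python
# A sketch (not from the essay) of expanding a Meta path into concrete
# script paths. 'seq' runs its parts in order, 'sel' picks one
# alternative, 'loop' repeats its body between lo and hi times
# (iteration must be bounded in order to enumerate the paths).

def expand(node):
    kind = node[0]
    if kind == "step":                      # a single action
        return [[node[1]]]
    if kind == "seq":                       # one after another
        paths = [[]]
        for part in node[1]:
            paths = [p + q for p in paths for q in expand(part)]
        return paths
    if kind == "sel":                       # a choice between alternatives
        return [p for alt in node[1] for p in expand(alt)]
    if kind == "loop":                      # bounded repetition
        body, lo, hi = node[1], node[2], node[3]
        paths = []
        for n in range(lo, hi + 1):
            reps = [[]]
            for _ in range(n):
                reps = [p + q for p in reps for q in expand(body)]
            paths.extend(reps)
        return paths
    raise ValueError(kind)

# The Initialise / Body / Terminate model from the text, with the Body
# selecting one of three actions and iterating 0..2 times:
model = ("seq", [
    ("step", "Initialise"),
    ("loop", ("sel", [("step", "Action 1"),
                      ("step", "Action 2"),
                      ("step", "Action 3")]), 0, 2),
    ("step", "Terminate"),
])

for path in expand(model):
    print(" ".join(path))
```

Each printed line corresponds to one sensitised model path, i.e. one candidate test script.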
A test script is an interesting executable model in that it embodies only the sequence construct. This leads to the familiar situation where testers write numerous scripts around the same area of the program, each script differing slightly from the one before:
  • Script 1: Initialize, Action 2, Action 1, Terminate
  • Script 2: Initialize, Action 1, Action 2, Action 2, Terminate
  • Script 3: Initialize, Terminate
This situation occurs because each test script is an instantiation of one of the script model’s Meta paths; each script is a single sensitised model path.
Test Scripts avoid the concepts of selection and non-deterministic iteration because each test script should be run in exactly the same way each time it is run, in order to aid repeatability. The tester is given no choice when executing a test script but to follow it exactly and consistently. This allows errors, once identified, to be demonstrated repeatedly, and it aids the correction of errors because the exact circumstances surrounding the execution of that test were specified.
Computer Programs do not avoid selection and iteration therefore the scope of a computer program is larger than a single script and a number of scripts will be required to cover the scope of the selections and iterations modelled in the program. A computer program does not represent the instantiations of the model paths; a computer program provides an alternative executable model.

Iterations in Test Scripts

Having pointed out that test scripts do not use iteration constructs, it is worth acknowledging that in the real world testers do write test scripts using iteration constructs, and worth examining why.
One of the software development tenets is the avoidance of repeated code. This aids maintainability, can often aid readability and allows re-use, which increases the speed of construction of similar procedures.
Iteration constructs are used when constructing test scripts for the same reasons.
It is perhaps unfortunate that the test tools which testers use, particularly when constructing manual scripts, do not make re-use or iteration simple. This leads to a more informal implementation than would be found in program code:
  1. repeat steps (3-12) 4 times, but this time enter the details for Joe Bloggs, Mary Smith, John Bland and Michael No-one.
  2. Press the enter key 6 times.
Example 1 above uses iteration to avoid repeating the same set of steps in the script. However, the same steps (3-12) will typically be repeated in other scripts, so re-use isn’t facilitated. This is primarily because most tools which testers use for manual testing do not support sub-scripts or procedures.
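The informal loop in example 1 can be made explicit as a data-driven expansion: the looped steps become a template and the customer names become the data. The step wording below is an illustrative assumption:

```python
# A sketch of making "repeat steps for each customer" explicit as a
# data-driven loop. Step wording and customer details are illustrative,
# not taken from any real test tool.

create_customer_steps = [
    "Open the customer screen",
    "Enter the name {name}",
    "Press Save",
]

customers = ["Joe Bloggs", "Mary Smith", "John Bland", "Michael No-one"]

def expand_script(steps, data):
    """Unroll the loop: one copy of the templated steps per data item."""
    unrolled = []
    for name in data:
        for step in steps:
            unrolled.append(step.format(name=name))
    return unrolled

script = expand_script(create_customer_steps, customers)
print(len(script))   # 12 explicit steps from 3 templated ones
```

The unrolled form is what the tester actually executes; the templated form is what gets maintained.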
When testers do use loops it is obvious from the Meta model path description of the test script what they are doing. The tester will have identified a particular instance of the Meta model path and in order to increase maintainability of the script the tester finds it more appropriate to use a higher order description as the actual test script itself.
However, the test script must not implement a non-deterministic loop or an ambiguous selection condition otherwise the test script, will not be implementing an instantiation of a Meta path, it will be implementing an actual Meta path.

Use of the Design

A design that represents the flow of control that scripts can select particular paths from serves a number of purposes:
  • Oracle: expected results can be predicted.
  • Coverage: can be assessed.
If no design is produced then testers have to assess coverage by examining the discrete set of tests and identifying missing paths or actions not taken. Typically, missing paths won’t be noticed until the tests have been executed a number of times and the tester, having built up a model in their head, realises that they have never executed Action 3.
  • Automatic Transformation from Design to Script.
This obviously depends upon the design technique used and the availability of tool support. Testing models, particularly for scripting, can use notations that allow automatic code generation: Jackson diagrams, flow charts, state transition diagrams. Typically tools exist to draw the models, and tools exist which can take the models and produce entire programs. Test scripting requires tools that can produce programs for the chosen path through the model; test teams may have to design and build their own tool to support this.
In practice, the relationship between the design model and the test scripts involves less interpretation than the relationship between a software design model and the software program; therefore the test script design model must be maintained before any maintenance is carried out on the derived test scripts.
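As a sketch of the kind of home-grown tool suggested here, the fragment below renders a chosen path through a design model as numbered manual script steps. The mapping from model nodes to action/result wording is entirely an assumption for illustration:

```python
# A sketch of generating a manual test script from a sensitised path
# through the design model. The per-node action and expected-result
# wording is assumed, not taken from any real design notation.

model_actions = {
    "Initialise": ("Start the application", "The main window appears"),
    "Body":       ("Perform the body action", "The action completes"),
    "Terminate":  ("Close the application", "The application exits"),
}

def path_to_script(path):
    """Render one model path as numbered script steps with expected results."""
    steps = []
    for number, node in enumerate(path, start=1):
        action, result = model_actions[node]
        steps.append(f"Step {number}: {action} -> Expected: {result}")
    return steps

for line in path_to_script(["Initialise", "Body", "Terminate"]):
    print(line)
```

With a generator like this, maintenance happens in the model and the scripts are regenerated, rather than edited by hand.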


Coding

The coding of a test script simply means the writing of the test script.
Each test script should follow the path identified from the design and as such should be fairly easy to construct if a design has been produced.
Test Scripts are typically represented by a series of steps, each step being given an id or sequence number, an action and a result.
Some test scripts will be represented with columns for pass/fail attributes during execution. This is not actually an attribute of the test script but an attribute of the specific execution instantiation of the test script; however, given the crude nature of tool support in testing, it is often easier to add the column to the script.
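One way to keep the execution record separate from the script itself is to model them as distinct structures. The names below are illustrative assumptions, not any real tool's format:

```python
from dataclasses import dataclass

# A sketch separating the script (id, action, expected result per step)
# from one execution instantiation of it (which records pass/fail).
# All names here are illustrative assumptions.

@dataclass
class Step:
    step_id: int
    action: str
    expected_result: str

@dataclass
class ExecutionRecord:
    step_id: int
    passed: bool
    notes: str = ""

script = [
    Step(1, "Click on the File option of the main menu bar",
         "A drop down menu appears, one of the options is 'Save'"),
    Step(2, "Click on the 'Save' option of the drop down menu",
         "The file's date/time stamp matches the time of saving"),
]

# One execution of the script is a separate set of records, so the
# script itself carries no pass/fail attribute.
run = [ExecutionRecord(s.step_id, passed=True) for s in script]
```

Each re-run of the script produces a fresh set of `ExecutionRecord`s while the script stays unchanged.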

Artfully Vague

Example: a script to save the current file in a word processor which has been saved before:
Step 1
  • Action - Click on the file option of the main menu bar.
  • Result - A drop down menu appears, one of the options is ‘Save’
Step 2
  • Action - Click on the ‘Save’ option of the drop down menu.
  • Result - The disk whirs and the date/time stamp of the file in explorer matches the time that the file was saved.

Test Scripts are often artfully vague, as can be seen from the example above. The script is written in English and a number of presuppositions are embodied in it, i.e. that the tester knows:
  • What a main menu bar is,
  • What a drop down menu is,
  • That ‘click’ means to manoeuvre the mouse pointer over the text and then press the mouse button,
  • What explorer means,
  • How to check the date/time stamp in explorer.
When writing a test script the writer must take into account the level of knowledge that the tester (the person executing the test) will have. If there is not enough information then the tester may not be able to run the test; or they may run the test but, having misunderstood some of the actions, actually run an entirely different test, which may lead to a false positive or false negative result being reported.
There are also time pressures that affect the writing of scripts. Unlike a computer program a script can be poorly written but can still execute provided the human computer has the correct knowledge.
The above script could be written as: “save the file using the file menu and then check the date”. This is possible when the person executing the script is the same person writing it. It doesn’t aid re-use, repeatability or maintainability, but it can still be executed. A computer program cannot do this: a program can skimp on its documentation and the computer can still execute it, but the computer has no more information than that presented in the program, and the instructions presented to the computer cannot be artfully vague.

Other concerns

Developers are rightly concerned about maintenance and about ensuring that their code makes sense now and will still make sense 18 months in the future when they have to update it.
Testing should have it easier, as the transition from design model to test script should be an automated process; but typically testers don’t have an automated mechanism for doing this and end up doing the translation manually.
It can be a lot of work to document each individual test script to a precise level.
Testers probably write as much source as the development teams and yet have fewer tools to support the development and maintenance process.


Testing

Testers are aware of the importance of testing software. They should also be aware of the importance of testing their test ware.
The process of constructing tests and executing them should give testers an appreciation of the difficulties of program construction. Defects slip into test scripts as often as they slip into programs. This should make testers more sympathetic towards the trials and tribulations of their development peers; yet for some reason, some testers look at systems with disgust and wonder what on earth the developer could have been thinking to allow such an obvious defect to slip through. As a corollary, developers know how hard it is to program and how easy it is to let defects slip into systems, yet are often scornful towards the tester who has a bug in their test script.
We can blame these attitudes on human nature but they are also symptomatic of a competitive environment where the test team and the development team feel that they are in opposition with one another. Both sides have the same problems and each can try to learn from the other.
Testing a test script can be tricky:
  • Test scripts are often constructed when there is no system available. At this time the quality control techniques are involved in checking the design model used and double-checking the mapping of the script back to the model.
  • Test scripts are constructed to test a new version of the system but the only system that is available is the old buggy version. Again the script must be validated against the design model, but it may also be possible to execute portions of the script against the old version of the system.
  • Testing the script with the desired version of the system is the most important of these situations, but it does not refer to the execution of the script. It refers to the validation of the design model against the system: in essence, testing the Meta paths identified by the model.
Testers are often under time pressure, and when under time pressure errors can creep in far more quickly. This is why the design model must be as accurate and thorough as possible.

The challenge of expected results

There is an interesting challenge set by the writing of test scripts, that of expected results.
A model of system usage upon which the construction of a test script is built may not always model the steps which have to be taken in order to check the pass/fail of an expected result.
The system usage model may tell the user how to create a customer but it may not tell the user how to check that a customer has been added to the system correctly. This may have to be done via an SQL query on a database. But this information must be present in the test script in order to execute the script and determine its pass/fail status.
This suggests that the construction of test scripts is not done through only one model. There is at least one other model available which describes the conditions of the test and how to validate the successful implementation of those conditions.
The challenge to the tester is in the integration of these two models; the condition model, and the system usage model, into a single test script.
Current testing best practice involves the construction of test conditions; these are typically developed from an analysis of specification documentation. (A thorough discussion of test conditions is beyond the scope of this text.)
There would appear to be no standard usage of test conditions:
  1. Test conditions are used to define the domain scope of tests: customer type = “Male”, currency = “USD”
  2. Test conditions are used to document ‘things’ that testing must concentrate on: “The system must allow the creation of customers”, “The system must allow the deletion of customers from the system when they have no active transactions”.
No doubt there are other uses of test conditions that I have not been exposed to.
Current testing tools, if they support test conditions at all, tend to model test conditions as a textual description. These textual descriptions are then cross-referenced to test cases in order to determine coverage of the fundamental features of the system analysed from the specification documents. There is no support in the tools to link the conditions to the scripts or script models. There is also no support for the modeling of the steps taken to validate those conditions that require validation i.e. the 2nd usage given above.
The test script construction process is analogous to the inversion, correspondence and structure clash processes presented so long ago by Jackson Structured Programming and more typically the informal mapping from design and specification to program code that developers do on a routine basis.
There are techniques to be learnt from these processes and tool support is required.
In the absence of tool support the software testers must document these models as effectively as possible.
This has the effect of expanding our modelling of test conditions to include instructions on how to validate that those conditions have been satisfied.
These conditions then have to be cross-referenced to the elements of the script design model at the points where those conditions would be satisfied. The conditions also have to be cross-referenced to the test cases so that the correct condition validation instructions are performed in the correct test scripts.
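In the absence of tool support, these cross-references can at least be documented in a structured form. The sketch below assumes hypothetical condition ids, test case ids and an SQL validation instruction, purely for illustration:

```python
# A sketch of cross-referencing test conditions to their validation
# instructions, model elements, and test cases. The condition id,
# test case ids, and SQL text are all hypothetical.

conditions = {
    "C1": {
        "description": "The system must allow the creation of customers",
        "validation": "Run: SELECT count(*) FROM customer WHERE name = :name",
        "model_elements": ["Create Customer"],
        "test_cases": ["TC-001", "TC-004"],
    },
}

def validations_for(test_case, conditions):
    """Collect the validation instructions a given test case must perform."""
    return [c["validation"] for c in conditions.values()
            if test_case in c["test_cases"]]

print(validations_for("TC-001", conditions))
```

A structure like this makes it mechanical to check that every condition requiring validation appears in at least one script.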


Test scripting is a time-consuming, error-prone and difficult process.
There is much that the tester learns from experience, many of these experiences are shared by, and have been documented by, the software development teams already.
Models are important in testing. They form the basis for all aspects of the testing process. It should be appreciated that the models used to derive tests are different from those used to construct test scripts and that in order to construct scripts effectively the overlap and intersection of these models must be identified and controlled.
This is a small section listing recommended reading for some of the development activities discussed in this text.

The Pragmatic Programmer, Andrew Hunt and David Thomas, Addison-Wesley, 2000
  • This is a set of examples, discussions and stories which illustrate the problems and best practice solutions associated with the development process.
Software Requirements & Specifications, Michael Jackson, Addison-Wesley, 1995
  • This is another set of small essays each of which provides insight and triggers contemplation of the various aspects of software development. The discussions of problem frames are particularly relevant.
Any programming manual for any programming language
  • It is important to attempt to learn a programming language, even at a rudimentary level, in order to appreciate the difficulties of software construction and the knowledge that is ready to be assimilated into your testing.

Tuesday, 15 January 2002

An exploration of, and notes on, model path analysis for Testing

Path Analysis for Software Testing

TL;DR: This essay explores the use of graph models in testing and the practice of structural path derivation using three coverage concerns: node coverage, link coverage and loop coverage. Predicate coverage is considered but is not covered in detail.


This text will focus on the structural aspects of model path analysis.
This allows us to learn the basic techniques behind test path derivation. I suspect that most testers will use these techniques intuitively.
Whenever a process is conducted intuitively it always helps to examine it more formally for a number of reasons:
  • To allow it to be taught more effectively,
  • To explore our understanding of it,
  • To apply it consistently.
This text is primarily concerned with an exploration of the understanding of structural path analysis.

The example model

I will use the following model to explore the mechanisms behind path analysis:
[graph image]
This is undoubtedly a very basic model representation. No tester would ever want to work with a model like the above as it lacks a great deal of semantic information.
Node 4 is either a predicate node or initiates the links to 7, 5 and 3 in parallel. For the purposes of this exploration there are no parallel flows so any node with more than one exit point is a predicate node.
The model does not tell us the predicate conditions under which each link from the predicate node is taken so we work on the assumption that any exit node is equally likely. This makes the loops in the models non-deterministic and limits the strategies which we can apply to the derivation of loop Meta paths.
From this we can see that iterative determinism is not structurally represented but is provided by the semantic information that the model embodies.

Basic Testing approach for Graph Models

Testing books typically present the reader with the following forms of coverage for flow models:
  • Node coverage
  • Link coverage
  • Loop coverage
Node coverage is achieved when the paths identified hit every node in the graph.
Link coverage is achieved when the paths identified traverse every link in the graph.
Loop coverage is achieved when the numerous paths identified explore the interaction between the sub-paths within a loop. This is a fairly vague description of loop coverage as loop coverage itself is a heuristic technique and will be explored in more detail later in the text.
Pursuing each of these types of coverage leads to path descriptions. Some of the path descriptions are ready to be sensitised and thereby map directly on to a test case. Others are Meta paths and numerous paths can typically be derived from them.
  • 1 2 is ready to be sensitised.
  • 1 [2 [13 4 ]+17 6 ]*2 10 is a Meta path and a number of paths can be derived from it:
    • 1 3 4 7 6 10
    • 1 3 4 3 4 7 6 3 4 7 6 10
    • etc…
Path 1 2 could be considered a Meta path that can only have one path derived from it. Thinking of it in this way may help understanding, or the construction of support tools.
Testers who do not do any formal modelling will do this type of path and Meta path identification automatically, as it is essentially a form of pattern identification and humans are very good at identifying patterns. We may, however, miss certain paths or path intricacies, and this can lead to defects slipping through the testing process that could have been found by tests within the potential scope of the testing strategies.
The next step, having identified the Meta paths is to apply strategies from them to derive paths:

Meta Path: 1 [2 [1 3 4 ]*1 7 6 ]*2 10
Strategy:  apply ‘once’ to loops
Path:      1 3 4 7 6 10

The strategy-derived paths are then sensitised and tests are identified. For a more detailed treatment of this see Black Box Testing by Boris Beizer.
Node Coverage can be achieved with the following paths through the model:
  1. - 1 2
  2. - 1 3 4 7 6 10
  3. - 1 3 4 5 6 10
Link coverage can be achieved with the addition of the path:
  1. - 1 3 4 3 4 5 3 4 5 6 10
We could actually remove node coverage path 3 (1 3 4 5 6 10) and replace it with link coverage path 1, leaving us with three paths to derive tests from instead of four, while still achieving both node and link coverage.
  1. - 1 2
  2. - 1 3 4 7 6 10
  3. - 1 3 4 3 4 5 3 4 5 6 10
We will see later that path 3 is actually a derived form of a Loop Meta path. Since at this point we are simply identifying paths for link coverage we will defer the identification of loop paths until later in the text.
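Node and link coverage of a set of paths can be checked mechanically. Since the figure is not available, the node and link sets below are an assumption reconstructed from the coverage paths listed in the text:

```python
# A sketch of checking node and link coverage for a set of paths over
# the essay's example graph. The node and link sets are reconstructed
# from the paths given in the text (the figure itself is unavailable),
# so they are an assumption about the model.

NODES = {1, 2, 3, 4, 5, 6, 7, 10}
LINKS = {(1, 2), (1, 3), (3, 4), (4, 3), (4, 5), (4, 7),
         (5, 3), (5, 6), (7, 6), (6, 10)}

def covered(paths):
    """Return (node_coverage_achieved, link_coverage_achieved)."""
    nodes = {n for p in paths for n in p}
    links = {(a, b) for p in paths for a, b in zip(p, p[1:])}
    return nodes >= NODES, links >= LINKS

three_paths = [
    [1, 2],
    [1, 3, 4, 7, 6, 10],
    [1, 3, 4, 3, 4, 5, 3, 4, 5, 6, 10],
]
print(covered(three_paths))
```

Automating the check removes the need to trace each link by eye once the path set grows.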

Predicate Coverage

Link coverage introduces us to the issue of predicate coverage although the minimal nature of the model makes discussion of this type of coverage more difficult. The following discussion will not fully describe predicate coverage.
The information in the above model obscures the actual conditions associated with the predicate nodes. We do not know the conditions that differentiate the link 4-3 from the links 4-5 and 4-7.
[graph diagram]
The link 4-3 may be conditional on the evaluation of a compound predicate e.g. when A OR (B AND C). This requires more tests to cover than a simpler predicate e.g. when A, although the path description will be the same for each of the predicate coverage tests e.g.:
  • 3 4 3 4 5
    • [3 4] (When A) 3 4 5
    • [3 4] (When B AND C) 3 4 5
    • etc. (see below)
We should expand A OR (B AND C) into a truth table to ensure that the link path is not taken under the wrong conditions and also to ensure that we identify all coverage conditions.

Path ID   A   B   C   Path
   1      T   T   T   3 4 3 4 5
   2      T   T   F   3 4 3 4 5
   3      T   F   T   3 4 3 4 5
   4      T   F   F   3 4 3 4 5
   5      F   T   T   3 4 3 4 5
   6      F   T   F   3 4 5
   7      F   F   T   3 4 5
   8      F   F   F   3 4 5

In order to achieve paths 1-5 there has to be a second set of ABC sensitisation values which prevent the loop from being executed again. This is an implementation issue associated with loop test paths.
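The truth table for A OR (B AND C) can be generated rather than written by hand. The sketch below reproduces the table above, attaching the looping path when the predicate holds (the path strings match the text's rows):

```python
from itertools import product

# A sketch of expanding the compound predicate A OR (B AND C) into a
# truth table and attaching the path taken in each row: when the
# predicate holds, link 4-3 is taken (path 3 4 3 4 5); otherwise the
# loop is not entered again (path 3 4 5).

def predicate(a, b, c):
    return a or (b and c)

rows = []
for a, b, c in product([True, False], repeat=3):
    path = "3 4 3 4 5" if predicate(a, b, c) else "3 4 5"
    rows.append((a, b, c, path))

for a, b, c, path in rows:
    print(f"{'T' if a else 'F'} {'T' if b else 'F'} {'T' if c else 'F'}  {path}")
```

Generating the table guarantees that no combination of predicate values is accidentally omitted.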
Predicate coverage is essentially a path sensitisation issue, but it highlights the fact that a strategy aimed at achieving link coverage provides only weak assurance of the system under test. Of course, testers would never construct tests purely on the basis of a link coverage strategy.
Path sensitisation is an important issue that is not covered in this text. (See Beizer)

Loop Coverage

[graph image]
Re-reading the above model, we can see that it models a sequence of nested loops. The loops are represented by the links from loop exit node 4 to node 3, from 5 to 3, and from 6 to 3.
From our experience as testers we suspect that achieving link coverage is not enough to assure us that the model has been implemented correctly. Most defect taxonomies will list a variety of defects associated with loops: infinite loops, incorrect exit criteria, etc. (see Testing Computer Software by Kaner et al).
Beizer describes loop testing as a heuristic technique and provides the following coverage strategies:
  • Bypass
  • Once
  • Twice
  • Typical
  • Max
  • Max + 1
  • Max - 1
  • Min
  • Min - 1
  • Null
  • Negative
Some of these strategies are not applicable for all loops and may lead to the construction of test cases that cannot be executed; this is fine, because at least no potential paths have been missed. The scope for testing provided by the model may not represent the scope of testing provided by the system. We will only be able to identify these semantically incorrect paths when we sensitise the paths. Knowledge of these basic strategies allows the tester to be more confident that they are doing the best job that they can.
When analysing loops I generally work with paths using the forms below rather than attempt to follow the graphical representation above.
  1. 1 2
  2. 1 [2 [1 3 4 ]*1 7 6 ]*2 10
  3. 1 [3 [2 [1 3 4 ]*1 5 ]*2 6 ]*3 10
In the path forms above I have assigned each loop a number that represents the sequence of the loop exit node. E.g. loop exit node 4 linking to node 3 is the first loop in the model.
In the model above, the numbering of the loops is less important as each loop links to the same node i.e. node 3 is the loop entry node for all three loops. This statement is only true while the model is represented using paths 2 and 3 above as these only provide link coverage. The numbering of the loops becomes far more important when we consider the representation below, which allows us to go beyond link coverage.
The link coverage paths can be derived directly from the above Meta paths:
  • 1 [2[13 4 ]*1 7 6 ]*210
    • => 1 3 4 7 6 10
  • 1 [3[2[13 4 ]*1 5 ]*26 ]*310
    • => 1 3 4 3 4 5 3 4 5 6 10
If we look at one path (out of many) that we might expect to use during testing:
  • 1 3 4 7 6 3 4 5 6 10
Then we can see that we cannot derive that path from the representations above.
  • 1 [3 [2 ( [1 3 4 ]*1 7 )|( [2 [1 3 4 ]*1 5 ]*2)] 6 ]*3 10
The above path can be identified without first constructing an example that disproves the completeness of the link path descriptions. Both of the Meta paths have common elements
  • [2 [1 3 4 ]*1…6 ]*2 10
and this suggests that a more complete representation of the Meta path exists.
This form of representation helps to emphasise the different levels of model complexity that have to be considered for each of the coverage concerns.
Beizer’s heuristic strategies are applicable when there is more semantic information in the model than we have. Our model has the minimal amount of information and all the loops are post-test loops, so we are left with the following path construction strategies:
  • Once
  • Many

Beizer’s strategies considered

It is interesting to look at Beizer’s heuristic strategies from a purely structural point of view.


Once

A path that exercises the loop once would actually be derived during node coverage path analysis, as the loop exit node link is not considered.
  • 1 3 4 5 6 10 exercises each loop once.
    • (23/11/2001) Note: I’m assuming that these are post test loops (Beizer pg71).

Many: Twice, Typical, Max, Max +1, Min, Min -1

I am considering twice, typical, max, max+1, min and min - 1 as instances of the ‘many’ strategy because structurally the loops are non-deterministic and we have no way of knowing how often the loop can be executed.
As a consequence they are all sensitised instances of the paths identified by the ‘many’ strategy.

Bypass, Null, Negative

In order to effect a bypass strategy structurally, there has to be a structural representation of the bypass in the model. There is an example of this in the model, in the sense that the sub-path 4 7 6 represents a bypass of loop exit node 5.
But this isn’t typically what is meant by a bypass strategy. The bypass strategy is a sensitisation approach to a path that does have a loop representation in it. The bypass strategy refers to a path such as 1 [3 4]* 5 6 10 and asks: is there a way to construct a case which sensitises that path representation in such a way that the path traversed is 1 5 6 10?
This is a sensitisation issue and is a mechanism for forcing testers to think outside the box, the box being the model that we are working with. This strategy encourages us to ask: is there any way that I can make the system do what it is not supposed to do? It is very definitely a defect detection strategy.
In effect, if this sensitisation strategy can be used to produce a case, it either points out that our model is wrong and requires an extra link from 1 to 5 (see below), or it points out that the system under test has incorrectly implemented our model.
[graph image]
This strategy is interesting because the implications of this strategy do not just apply to loops; they apply to any part of the model. Is there any way to bypass node 7 when moving from node 4 to 6? If there is it implies the existence of a defect in our model or the system.
This strategy is the perfect example of testing thinking, thinking outside the box or constructing structurally ill formed paths. But in order to think outside the box we must know what the box is, hence my laborious presentation of the structural aspects of path analysis.
The skill of testing is to approach the construction of ill formed paths appropriately. But it is entirely possible that these are ill formed paths only because the model is not rich enough to test from. If we had a supplementary model that provided semantic information about the predicate nodes, or the data that is processed, then the combination of these models could result in the construction of structurally ill formed paths that are semantically well formed.
This strategy also highlights the assumptions used when constructing the model. The assumptions are that each link has an equal probability of occurring and that loops are not intrinsic to the model; rather, loops are represented by the links between nodes. Loops are in effect constructed from GOTOs rather than ‘do while’ loops. As a consequence each of my loops is represented by [3 4]*, where the * means ‘1 or more times’. If the loops were intrinsic to the model then we could argue for the same graph, and the same representation [3 4]*, but in this case the * would mean ‘0 or more times’. This thinking is unlikely to occur in practice, as most people will adopt a GOTO perspective when constructing models; this is the most natural way to think about a loop in a graph, and may be why GOTOs were around before structured loop constructs.
Constructing the graphs and models is outside the scope of this set of notes.


This text has introduced path analysis from a purely structural perspective, as this allows a clearer examination of some of the thinking processes and strategies involved in path analysis.
In the real world testing is unlikely ever to be conducted like this, as our tests will never be purely structurally based. We have to sensitise paths in order to derive tests. Path sensitisation requires a more semantically rich model, or supplementary models that provide the information required to process the model semantically, e.g. a set of test conditions, domain analysis, requirements analysis.
The techniques presented here are worth thinking about. They map equally easily onto test script production, with the test cases themselves providing the sensitisation information. Using the information presented in this text we can assess the level of coverage achieved on the script model by the test cases, compared with the potential coverage.
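That coverage assessment can itself be mechanised. The sketch below, using an assumed model and assumed scripted paths for illustration, reduces each test script to the path of nodes it exercises and measures link (edge) coverage against the model's full set of links.

```python
# An assumed path model: node -> successor nodes (not the essay's own model).
MODEL = {
    1: [2],
    2: [3],
    3: [4],
    4: [3, 5],  # 4 -> 3 is the loop link
    5: [],
}

# Each test script, reduced to the path of nodes it exercises (assumed data).
SCRIPTED_PATHS = [
    [1, 2, 3, 4, 5],
    [1, 2, 3, 4, 3, 4, 5],  # exercises the loop link 4 -> 3
]

def links(graph):
    """The set of links (directed edges) in the model."""
    return {(a, b) for a, succs in graph.items() for b in succs}

def exercised(paths):
    """The set of links actually traversed by the scripted paths."""
    return {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}

all_links = links(MODEL)
covered = exercised(SCRIPTED_PATHS) & all_links
print(f"link coverage: {len(covered)}/{len(all_links)}")  # link coverage: 5/5
```

Links the scripts traverse that are *not* in the model (`exercised(...) - all_links`) are equally interesting: they are exactly the "extra link" defects the sensitisation strategy above hunts for.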

Friday, 4 January 2002

Modelling Tesla for Software Testing

Colorado Springs, 1900, and a round wooden building has been constructed, 100 feet in diameter. Within the building a metal spike, fifty feet high, extends from floor to ceiling and continues for another 70 feet into the desert air. A thin man sits on a wooden chair reading a handwritten scientific text not more than 5 feet away from the spike. Suddenly a roar of tumultuous proportions fills the room; neighbours 10 miles away step out of their homes to see what has disturbed them. A shower of sparks erupts from the metal spike and tendrils of lightning arc around the man as millions of volts are discharged into the wooden room. Nikola Tesla smiles and closes his text, his invention successful.

The Inventor

As an inventor, Tesla embodies the entire development lifecycle. The inspiration of genius, the feasibility study, the prototype, through to the documented patent. Every role in the development process can learn from Tesla, but as a tester I am most interested in Tesla's ability to model.

Born 10 July 1856, Nikola Tesla was a visionary inventor who held patents and prototypes for a staggering array of scientific achievements, of which the following are but a few: a radio-controlled boat in 1898, the Tesla coil (a prominent feature in every celluloid mad scientist's lab), the Niagara Falls polyphase AC generator, robots, helicopters, radio and, it has been rumoured, even the electric light bulb.

The Modeller

Tesla constructed models of his inventions in his head: to scale, with the correct materials, and using the tools that he would use to construct them in real life. Having built a mental model he could then subject it to analysis and tests, and make changes until he was satisfied; then, and only then, would he build it.

"The pieces of Apparatus I conceived were to me absolutely real and tangible, every detail, even to the minutest marks and signs of wear." [1]

Unlike testers, Tesla had the benefit of working alone. He would conceive, build and test the systems he required. As testers we have to take into account the need to communicate: with our fellow testers, the development teams, the users and ourselves (nine months down the line, after we have forgotten why we created that test in the first place).

Tesla's prodigious success was surely aided by his ability to model. It would be a mistake to suggest that as testers we should build models of the system in our heads and then conduct the tests there. But as software development professionals, how often do we fail to take the models that we have out of our heads and onto paper; how often do we fail to document the derivation trail of our tests?

Testing & Modelling

Ad hoc testing typifies the building of, and testing from, an unconscious model in our heads rather than a documented or conscious model. If we do this then we have no justification for measures of completeness or coverage. The model in our head allows us to distinguish between an error and a pass, but it may become difficult to justify such a decision if, in the next stage of testing, our pass is deemed a false positive (a test that actually failed but that we marked as passed).

Tesla's model was as good as the real thing. The models that we use are not. Our models are maps, they represent the system but they are not the system. We build models to construct tests from, but we do not apply those tests to our model, we apply those tests to the system that someone else has built. Our testing is partly there to try and identify the differences between their system and our model.

Late in his life Tesla made some fantastic claims: free energy, cable free energy transmission, and death rays. But Tesla was ridiculed and very quickly after his death fell into obscurity. Tesla had nothing but excited visionary notes and his models to support these claims, but his models were in his head.

So to be good testers we sometimes have to learn from what Tesla did not do and what Tesla did inappropriately.

Learning from Tesla

Tesla worked in secret, with dramatic announcements to the press; we instead have to document and inform from the very moment that we start thinking about testing. We should be tactful in our announcements. We have to document our models and we have to document the tests that we derive from those models.

We always derive our tests, whether we think we do or not. Our tests are the result of applying strategies to models: simple strategies, like ensuring that we can't switch from one state to another unless the transition is documented in the state transition diagram; heuristic strategies, based on prior experience with similar products, which should be documented as test conditions. Testers use a multitude of strategies on many models.
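The state transition strategy mentioned above can be made completely mechanical. The sketch below uses a hypothetical three-state login model (the states and transitions are assumptions for illustration) and derives the negative tests: every state switch the diagram does not document, each of which the system should be shown to forbid.

```python
from itertools import product

# A hypothetical state-transition model: state -> set of documented next states.
TRANSITIONS = {
    "logged_out": {"logged_in"},
    "logged_in": {"browsing", "logged_out"},
    "browsing": {"logged_in"},
}

states = set(TRANSITIONS)
documented = {(s, t) for s, ts in TRANSITIONS.items() for t in ts}

# Negative tests: every undocumented state switch should be impossible.
negative_tests = [
    (s, t) for s, t in product(states, states)
    if s != t and (s, t) not in documented
]
for src, dst in sorted(negative_tests):
    print(f"verify the system cannot move from {src} to {dst}")
```

Because both the model and the strategy are explicit, anyone can re-run the derivation and check that the strategy was applied consistently, which is precisely the point the next paragraph makes.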

What we have to understand about models is that if we construct models of a certain type, e.g. a state transition diagram, then there are predefined strategies that we can apply to those models in order to construct tests. By having both the strategies and the models we can tell how consistently we have applied those strategies and how well we have built the models.

As testers we are in the position that if it is only in our head then it simply doesn't exist and we too would be ridiculed if we conducted our testing in this fashion. We cannot claim that we have achieved an adequate level of coverage unless we have a model of our intended scope.

Modelling is as fundamental an activity in software testing as it was for Tesla and engineering. As Tesla himself said, "There is scarcely a subject that cannot be mathematically treated and the effects calculated or the results determined beforehand from the available theoretical and practical data." [1]

References and consulted texts:
[1] Tesla: Man Out of Time, Margaret Cheney, 1981, Dell Publishing
[2] Strategies of Genius: Volume III, Robert B. Dilts, 1995, Meta Publications
[3] The Man Who Invented the Twentieth Century, Robert Lomas, 1999, Headline Book Publishing
[4] In Search of Nikola Tesla, F. David Peat, 1983, Ashgrove Press Limited

This essay originally appeared on  on 4th January 2002 as 'journal notes'; this was in the days before I had a 'blog' set up and still had a 'web site'. It seems more like a blog post than an essay, so I've moved it over to

Charity Shop Shopping For Software Testing Books

A Fortunate Perusal...

I was recently browsing in a second hand charity book shop.
On the counter was a copy of Hetzel that they had neither priced nor put out on the shelf, so I asked them how much it was, feigning disinterest in case they should quote a figure too close to Amazon's (£££). Fortunately they quoted £2; this was the first time I had ever been pleased to see testing undervalued.
There then followed a short conversation between myself and the staff, the Old Charity Shop Assistant (OCSA) and the Young Charity Shop Assistant (YCSA):

The Obligatory Sales Talk...

OCSA: Software testing, how dull.
Tester: Well... that's my job
OCSA: Oh, what do you test it for then?
YCSA: (knowingly) Viruses
Tester: No. Problems.
OCSA: Oooh, I had a problem with some software once, kept crashing my machine, every so often, boom, lost work I did, 
Tester: Well, yes software does that...
OCSA: We have one through the back, slow as anything when it works at all and then it doesn't, boom, many's the time I've had to start again I have,
Tester: Well...
OCSA: That was you not doing your job properly that was
Tester: Ehm...
OCSA: Always something going wrong with these software things, viruses and the like, why can't you do your job?
YCSA: (knowingly) Yeah, viruses
Tester: Here's your two pounds, thank you very much.

I considered telling them all about testing, about quality and about the software development process, about budgets and about compromise, possibly using the battered copy of Pressman that they had on the shelf, but decided not to. 

This is not an isolated incident: mistakes in the software process are often blamed on quality control, and on testing in particular, rather than on the development process as a whole. And it isn't a view held solely by the general public, but also by people who should know better: management.
The easy retort for testers is that they do not code the bugs into the system. But if testers are ever put in the position where they feel like making such a retort, then the process has already failed and gone into scapegoat mode. This is usually a process of denial rather than learning, and very little good can come of it.
There are no silver bullets which will slay the error demon. The team must be vigilant when striding out on each new quest, step confidently on solid ground and guard against their comrades succumbing to weakness, for it is at those moments when the error demon strikes.
Quality is the responsibility of the entire development process. To blame the feedback loop is to ignore the symptom and the organism will most likely perish from disease.

This essay originally appeared on  on 4th January 2002 as 'journal notes'; this was in the days before I had a 'blog' set up and still had a 'web site'. It seems more like a blog post than an essay, so I've moved it over to

The Cost of DIY Quality

The Choice

I bought a garden bench from a DIY store. It was one among many similar benches and I had difficulty deciding which to buy.
Fortunately the store had one of each assembled so I was able to sit on them and try them out. I picked one in the middle of the available price range, aesthetic enough to meet my visual bench criteria, and sturdy enough to cope with the demands of visiting nephews and feral squirrels.
Placing one of the flat packed benches into the trolley, I proceeded to the checkout.
The reader will no doubt have in their mind some direction in which this tale is going to go, based on their own experience with either flat packed products or DIY stores. It is highly likely, since I know how the tale will unfold, that it will map well onto those experiences of the reader that generated disappointment.

Construction & Planning

When I got home I started to unpack the bench components, noting with assurance the box sticker informing me that the product inside was made to exactingly high standards and was of the superlative quality that I could expect from the DIY store.
I estimated an hour to construct the bench and install it in the back garden, so duly set aside the required time; this left me a spare half hour before I absolutely had to sit down and watch the weekly episode in the lives of a group of space adventurers on the television.
I will merely summarise what followed before moving on to the moral of the tale.
  1. The wood was untreated and had to be treated before assembly.
  2. Tabs A and C would not fit into Holes B and D, as Holes B and D had been drilled in the wrong places.
  3. The heads of the supplied screws snapped off after a few turns, making it impossible either to continue screwing the screw in or to remove it and screw in another in its stead.
  4. Random nails were found poking out of the wood; the reason for the nails I could not fathom.
  5. Fortunately I had the appropriate wood-treating product in the shed, a drill that would allow me to make the requisite holes, a hammer that would allow me to flatten the nails somewhat, and a video to tape the essential space adventure.
I could make this an observation on the estimation process, reminding you how difficult estimating is and the need to monitor plans and take appropriate action when slippage occurs. Or an observation on the product evaluation process, stressing the need to consider installation in your own environment and the necessary tools and training.
I shall instead focus on the cost of quality and the management of user expectations.

Cost of Quality

Ultimate Quality is in the eye of the beholder, not the agent nor the producer. Quality is a subjective thing with quantifiably measurable attributes.
Ultimate Quality is determined when a product is in the hands of the user. Dr Genichi Taguchi has said that the cost of quality is the "cost to individuals, organisations or societies of the cost of poor quality". The cost to the DIY store was the loss of any possible future sales, and the author's sneering derision of its products to anyone who mentioned an intention to shop there.
This was partly because the user deemed the product to be of poor quality but also because the user expectations had been inflated rather than managed honestly.
The user had been shown a complete and pristine product, whereas they were supplied with a product that required much user intervention to complete. Had this been communicated at the demonstration then it is possible that the user might have bought an entirely different product, but it would have meant that the user was surprised only by those failed quality attributes that had escaped detection, rather than counting attributes that were part of the specification as part of the problem.
The seal of approval led the user to view the company's processes as flawed; the user would subsequently use that view to evaluate the company and its other products, thereby eliminating the company entirely from their preferred supplier list.
I'm not suggesting that every product should come with a disclaimer saying "use at your own risk", although most software products do seem to come with just such a disclaimer. But if you believe in the quality of your product, and state as much on the box, then offer a money-back guarantee rather than relying on the consumer's right of return. This is one of the Hypnotic Laws of Advertising set out by Joe Vitale.
The user then reacted, as users do, vocally and loudly against their perceived wrong, thereby spreading bad publicity about the product to other potential users.
So what can the producer do about it?

Mitigating Action

Well, if they are a software producer they can offer free upgrades, as many as it takes to get it right. This happens with commercial software products in the real world, but it doesn't tend to happen if the software is written for internal users.
The user probably paid for the internally produced software, and they will probably be expected to pay for any fixes. This is a policy which leads to bad feeling in organisations; it may well be unavoidable, given the manner in which IT departments are funded, in which case the IT organisation has to do that much more to smooth it over.
By treating the users' complaints as justified, by responding to problems within the existing budgets as well as possible, and by doing everything they can to get a happy user, because their future depends on a happy user and that happy user's budget.
And what can our DIY store do about the user's dissatisfaction? Perhaps the internal IT department can learn from them.
The DIY store obviously can't offer a patch or an upgrade. Like an internal IT department, they can't afford to give the money back; the user would simply take it, go to another store, and continue to lambast their poor-quality products. But what if the DIY store convinced the user that it was a one-in-a-million failure, and that they believed in their products so much that they were going to give the user vouchers to spend in their store worth more than double the cost of the initial product? The user can obviously only spend the vouchers in the DIY store, and that at least gets them back in the store, hopefully with a better shopping experience, and possibly adding their own money on top of the vouchers or coming back again.
Obviously internal IT departments can't offer the user two million dollars in free software development credits, but they do have to go out of their way to make the user relationship work. Their job is to ensure that the user's experience of the quality of the software and the software process is second to none; after all, the user is paying for it.
Oh, and by the way, after various trials and tribulations, near death situations and false starts, the space adventurers won.

This essay originally appeared on  on 4th January 2002 as 'journal notes'; this was in the days before I had a 'blog' set up and still had a 'web site'. It seems more like a blog post than an essay, so I've moved it over to