Tuesday, 8 March 2011

Dangerous Test Concepts Exposed

There exist test ‘concepts’ which, while seemingly simple, have a tendency to melt my brain.
  • Black Box/White Box Testing
  • Functional/Non-Functional Testing
  • Positive/Negative Testing
Years spent studying hypnosis and revelling in the ambiguities of communication have left me with an inability to parse language the way I did as a child.
I used to have the ability to read, and use, these terms without blinking. Now I throw a ParseException.
I realise the above concepts may not seem strange to the majority of people reading this blog post. So I will attempt to describe how I respond to reading these terms, in the hope of spreading a contagion against them.

First some generalities

Some generalities apply across all of the ‘concepts’.
I think that I think, and test, in terms of Systems. More specifically in terms of models of systems.
Yes/No models jar my sensibilities. Testers live in the grey areas and find treasures in the shadows. Absolutes do not help me explore the System. I do not want guidance by absolutes, I want guidance towards flexibility.
All of these concepts encourage me to think and communicate in ambiguities, while pretending specificity and authority. If I say White Box testing, you still have no idea what I mean. If I describe the models that I use to guide my testing then you have a better idea of the type of tests I might create, and the type of gaps I may encounter, regardless of the colour that I paint that model.
These concepts all encourage me, as a tester, to ignore the model and think in vague conceptual terms. They categorise techniques out of context and far removed from the model that the techniques apply to.
They encourage debate about irrelevancies. e.g. Should we do White Box, or Black Box testing? Developers do White Box Testing, testers do Black Box Testing? Where does Grey box testing come in?
The amount of time wasted on projects when people ‘debate’ whether a requirement should form part of the ‘functional’ or ‘non-functional’ specification, instead of getting on with communicating what they need, shocks me. (I estimate this in terms of man-years for some projects.)
etc.

Black Box / White Box Testing

Ah, the most pernicious of them all.
How comfortingly I viewed these terms in my early years. I could ignore the relationships between my test models. I remember times when I made no effort to learn the technology or architecture of the system. I could simply focus on inputs and ignore flow, because I did my testing all “Black Box” like.
Then over time I grew uneasy. I started to apply White Box techniques to ‘Black Box’ingly understood systems.
This concept started to seem like the devil on my shoulder whispering “Relax, don’t do the work to view it as a system, treat it as a ‘black box’, see how much simpler it all becomes”.
I started to feel as though the constant repetition of this phrase in the testing literature stopped me hearing the siren call of the technology.
I weaned myself away from these terms by using them as insults. I engaged in a form of aversion therapy.
I labelled all testing I did without understanding the technology, or without having made an effort to observe the architecture, or without building a structural model as ‘Black Box’ testing. I imbued the term with an association of laziness.
I view all my areas of superficial modelling as carrying a greater risk of undiscovered defects.
I associated my use of ‘White Box’ testing with over-confidence. I felt nervous if I only read the code, or only based my testing on a superficial understanding of the structure. For me, the phrase acts as a warning against complacency. I try not to plan for complacency.
This helped me move towards a more Systems-focussed approach to my software testing. I no longer feel a pull towards exclusively the Darkness or the Light.
I try to concentrate on models and identify techniques to apply to models.
I ignore the categories and partitioning of techniques presented by the Canon.

Functional/Non-functional Testing

I periodically write non-functional automated test scripts… Sometimes my scripts simply don't work and they remain non-functional until I fix them to get them functional. <insert canned laughter here>
In the past, I have communicated my testing in terms of “I built tests to cover the ‘Non-Functional Requirements’”. Which in context could have read “Speed Monster Funky Requirements” and I would have just as happily used that phrase in my test communication.
When we mean requirement-based testing, we should say “requirement-based”. Removing the word “requirement” suggested to me that “Herein lies magic”. That the “Functional” tester does something different from the “Non-functional” tester. That we can phase the testing differently and do “Functional” testing first and “Non-functional” testing later.
I removed my unease towards this phrase by always appending/inserting the word requirement.
But … I don’t like the distinction between “Functional” and “Non-Functional” because of the ineffectual processes and discussions that I have seen result from the distinction. Sadly, software development has a history of considering functionality as separate from the other “ilities” that might relate to that functionality. A tradition, or an old charter, or something.
And yet I don’t see why I should let this influence my testing. I’ll do whatever ‘it’ takes, and if that means merging the functional with the non-functional to get more advantages in my testing or tooling then I will jolly well do so.
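As a sketch of what merging the two in tooling might look like: a minimal, hypothetical Python test that checks behaviour and a speed budget in one place. The parse_order function and the 0.05 second threshold are my inventions for illustration, not a prescription:

    import time
    import unittest

    def parse_order(text):
        """Hypothetical system under test: parse 'item:quantity' text."""
        item, quantity = text.split(":")
        return item, int(quantity)

    class MergedRequirementTests(unittest.TestCase):
        def test_parse_order_correct_and_fast(self):
            # The 'functional' requirement: the right answer comes back.
            start = time.perf_counter()
            result = parse_order("widget:3")
            elapsed = time.perf_counter() - start
            self.assertEqual(("widget", 3), result)
            # The 'non-functional' requirement checked in the same test:
            # an invented speed budget of 0.05 seconds.
            self.assertLess(elapsed, 0.05)

    if __name__ == "__main__":
        unittest.main()

One test exercises both ‘types’ of requirement; no separate phase required.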

Positive and Negative Testing

I assume that “positive” and “negative”, as testing terms, must have come from a more objective diagnostic context than I get involved in. From a context where a negative test rules something out completely, and a positive test rules it in.
I classify much of the testing I do as exploratory or experiment based. I do things to try to rule out certain possibilities, and I could arguably count that as diagnostic. But I don’t think my tests demonstrate the certainty that “positive” and “negative” testing suggests to me. I find more value in thinking in terms of a general experimental process rather than a diagnostic process.
But even the above description doesn’t seem to map to how the testers I meet describe positive and negative testing.
Testers seem to describe negative testing in terms of exercising error validation and reporting. And positive as “stuff we think will work”. I tend to view the existence of error handling functionality in applications as a positive thing.
I find it hard to use the terms –ve and +ve testing. When I try to show that the system does not work, I talk in terms of trying to ‘break’ it – clearly I don’t break it, but it works as a term for me. I also view this as a positive thing for me to do.
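To show the ambiguity concretely, a small hypothetical Python sketch; the withdraw function and its error message are my inventions for illustration:

    import unittest

    def withdraw(balance, amount):
        """Hypothetical system under test."""
        if amount > balance:
            raise ValueError("Insufficient funds")
        return balance - amount

    class WithdrawTests(unittest.TestCase):
        def test_within_balance(self):
            # What testers often call a 'positive' test:
            # 'stuff we think will work'.
            self.assertEqual(60, withdraw(100, 40))

        def test_beyond_balance(self):
            # What testers often call a 'negative' test: exercising the
            # error handling. Yet it passes precisely when the error
            # handling works - which I count as a positive result.
            with self.assertRaises(ValueError):
                withdraw(100, 140)

    if __name__ == "__main__":
        unittest.main()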
Yes, I freely admit to misusing words, particularly if the misuse of the word frees my thinking. I try to stop using words that constrain me.

End Notes

For all of the above ‘concepts’ I can read the definitions and almost understand why those names fit those descriptions. But I try to find other, more exact, or more contextual, or more personal, ways of describing the testing that I do.
You may not need to go as far as I did to de-programme their effect.
Younger testers may even have an immunity to them.
I still feel their pull though. My brain still finds a ‘reasonableness’ in all of them while at the same time each of the ‘concepts’ strikes me with dread. Each of them seems to proudly state that we can easily view the System as two simple partitions and we can relax about modelling or understanding the complexity.
If you speak to me using these terms, I will have to ask you what you mean. Do not assume that my testing background will allow me to understand you.
I try to stand firmly within a testing scheme of graduated shades. Sometimes I even imagine colour in my test approach. Someday I will attempt to employ synesthesia.

4 comments:

  1. Alan,

    To me, these are models - views held by someone. Models used to represent and solve some problem. These models per se are innocent. The problem starts when we forget that they are only "models", not actual things -- it is the problem of "stagnation" that you mentioned in your earlier post.

    Thinking of the SUT as a black box or a white box is a model - a means to represent - a provisional one. Models are maps - not the territory.

    Also notice the narrowness in these models - they are "polar opposites": black-white, positive-negative. As it happens, these polar models have a major problem - change your perspective or viewpoint, and it all falls apart.


    You must add to this list

    Manual Testing and Automated Testing

    - Can testing be automated? Or portions of it?

    Shrini



    Thanks Shrini,

    "Thinking of a system as a black box" generates a different set of feelings in me than "Black Box Testing". The first has an open ended-ness. "as a black box" implies etc. ("and also as "). Which "Black Box Testing" never did for me, I found that a very closed concept.

    I find your use of the words "Black Box" acceptable in statements like "Think of the system as a black box"... The "as if" frame of mind works wonders for allowing possibilities and generating new models.

    I agree that "maps are not the territory". I too use that as a guiding principle.

    I could add many other 'Concepts' to the list. I find your suggestion of "Manual/Automated Testing" an interesting one because I have never viewed it as a boolean or polar-opposite concept. So perhaps other people never viewed the concepts I listed as boolean in a similar way. In which case my post may serve as a parable of what happens when I take words and phrases too seriously (Ah, to have the brain of Humpty Dumpty).

    Differing reactions to language can produce differing thought processes.

    And lastly... I certainly automate portions of my testing process.

    "testing" on its own, seems like such a loaded and overly ambiguous word to me :)

  2. Enjoyed reading such an unconventional (I wonder what the evil tester says to "unconventional") articulation of archaic testing definitions. I wish more and more of the testing community would read this. And yes, this line is going to be my status on Ymessenger for now -

    "I periodically write non-functional automated test scripts… Sometimes my scripts simply don't work and they remain non-functional until I fix them to get them functional"


    Thanks for the comment Tarun. I considered it unsaid-by-me-in-public-before. Please let me know when I start spouting conventional wisdom - I would much rather tend towards the uncon.

  3. Hi, Alan...

    With respect to positive and negative testing, you might be very interested in the Klayman and Ha paper on the positive test strategy and other aspects of confirmation bias. See here (http://en.wikipedia.org/wiki/Assimilation_bias) and here (http://en.wikipedia.org/wiki/Assimilation_bias#Klayman_and_Ha.27s_critique) for background. The paper itself is here (http://www.stats.org.uk/statistical-inference/KlaymanHa1987.pdf). Beware: academic writing style takes time to wade through.

    I agree that "positive" and "negative" test seems to be used inconsistently enough to be considered unhelpful. A test in which do something such that we expect an error message to appear: is that a positive test or a negative test?

    Apropos of that, you might also be interested in what James Bach and I concluded (for now) in a transpection session about positive and negative test cases: for us, a positive test is one in which all required assumptions have been fulfilled. A negative test is one in which at least one required assumption has not been fulfilled. That seemed to us to be reasonably clear; maybe it will also be useful, so we're testing that.

    ---Michael B.

    Thanks for the comment and links Michael. Looking for ways of dealing with these concepts might start to seem like a conventional thing to do.

    Alan

  4. Verification / Validation form a similar pair. I find the necessary mental judo causes my mind to stutter.

    They're not so much of a yes/no pair - but I've seen them used as both exclusive and exhaustive. I've heard (I paraphrase) "We don't do [pick your V] - we just do [other V]", and indeed "We're not doing that - it's neither verification nor validation".

    They're certainly ambiguous - except to those who believe in (often their own) unambiguous definitions. For those that rely on the standards, I give you BS 7925-2 (and ISO9000, too - though I rely on references and haven't checked):
    - validation: Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
    - verification: Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.

    So that's clear, then.

    Some of your fine readers may cheerily comment that one is "did we build it right?" and the other "did we build the right thing?". In real life, I'd ask them how often their colleagues get the Vs round the 'right' way. And whether they felt any disquiet over activities that seek only to confirm their expectations. But as this is your blog, I'll leave the questions to wriggle in the minds of those readers.

    Cheers - James

    Thanks for adding to the potential disquiet of the readership James. At the very least, if people see V & V in a testing context now, they might remember that the letters don't stand for Vengeance and Vendetta.

    Alan
