Friday, 17 June 2016

Register for Free Risk/Exploratory/Technical Testing Webinar on 28th June 2016

QASymphony have kindly invited me to present a webinar for them entitled "Risk Mitigation Using Exploratory and Technical Testing".

You can register to watch the webinar live. If you can't make it, then register and you'll be sent details of the free replay.

I've talked about Technical Testing and Exploratory Testing before. This time I want to approach it from the perspective of risk.

The blurb says:

"When we test our systems, we very often use business risk to prioritize and guide our testing. But there are so many more ways of modeling risk. If business risk is our only risk model then we ignore technology risks, and the risks that our processes themselves are adding to our project. Ignoring technical risk means that we don't improve our technical skills to allow us to model, observe and manipulate our systems at deeper levels and we miss finding important non-obvious problems. Too often people mistakenly equate 'technical testing' with automating because they don't model technical risk. In this webinar we'll explain how to model risk and use that to push our testing further. We'll also explain how to avoid some of the pitfalls people fall into while improving their technical testing."

I'm still working out the details, but I think I'll cover the following (and more):

  • What do I mean by risk?
  • Risks other than business risk.
  • How to identify risk?
  • Using risk to improve our process:
    • What risk do our tools introduce?
    • What risks does our process introduce?
  • Risk mitigation
  • Manifestation/Detection
  • What is technical risk?
  • How to use this to drive my testing
    • Risk as a coverage model
    • Risk as a derivation model
  • ...
Clearly I'm not using a statistical model of risk, or attempting to quantify it numerically. But I will be trying to explain how risk underpins the testing I conduct, and the processes I follow.

If I miss out any of the above for some reason, and you register, then you'll be able to ask about the topic in the Q&A section.

Hope to see you there.

Register for the free webinar

Monday, 13 June 2016

Text Adventure Games for Testers

TL;DR Announcing RestMud, a free text adventure game designed to improve your technical testing skills.

I love text adventure games. Playing. Writing. Programming. Love 'em. And now I have created a text adventure game for testers to improve their technical testing skills.
  • I wrote about text adventure games before, in the context of keyword driven automated execution.
  • I studied AI, compilers and interpreters because I wanted to understand Text Adventure Games
  • I've written more text adventure game parsers than I have adventure games. All lost to the mists of time in my cave of lost 'C' programming code.
  • I once wrote a wonderful text layout algorithm for text adventure games that only worked on the Mono screen resolution of the Atari ST - it looked great. 
  • I wrote a fantastic complex sentence handler with verbs, nouns, adverbs, pronouns, more nouns, conjunctions etc. Brilliant parser. But no game.
  • I once, with a friend of mine, started to pitch a point and click western horror sci-fi adventure game to a games publishing company - just as point and click adventure games faded and died as a genre.
  • I wrote small games in "The Quill", GAC, STAC, and others.

And now.

I unleash upon the world... RestMud.

(That was dramatic by the way.)

A Text Adventure for the modern world, and to help improve your testing skills.

You'll have to:
  • explore
  • take things
  • hoard things
  • explore a maze
  • map the world
  • pay attention to clues
  • use the Browser Dev Tools
  • amend URLs to access commands not available from the GUI
  • remember things
  • possibly use REST tools (although for the Single Player Basic Test Game this isn't required)
Wow. I mean wow. Wow. Are you wowed yet? I'm wowed.

I played the test game again this morning and scored 1190 - but I made a mistake. Can you score higher than that?

Try it for yourself. Download it and see. You'll need Java 1.8; instructions are in the zip file.

You can use comments to let me know of any issues, or brag about your success, or whatever else you choose to use comments for. But now, brave adventurer... Go! Do some adventuresome stuff.

Tuesday, 7 June 2016

Some "Dear Evil Tester" Book Reviews

After publishing “Dear Evil Tester” I forgot to set up a Google Alert, but I set one up a few days ago and this morning it picked up a ‘book review’.
As far as I know, there are two book reviews out there in the wild:
I never know quite what I'll find when I click on a book review, and having written a few reviews myself, I know how cutting they can be if the reader didn't get on with the book.
Fortunately both Jason and Mel seemed to enjoy the book.
Jason has a lot of book notes and summaries on his web site. I've subscribed to his RSS feed now, so I'll see what other books and quotes have resonated with him.
Mel wrote her review in a “Dear Evil Tester” letter style, which included kind words about the book:
This book had me laughing at my desk and my coworkers wondering if I had gone round the bend. I was enchanted by the sarcasm and wit, but drawn to the practical advice you had in your answers given with the Evil Tester persona.
Mel also recommended a few other books to readers of her blog. I'll let you jump over to her site and read the review to find out the other books she mentions.
Hopefully I'll find new book reviews popping up occasionally; of course, the sales and marketing department here at EvilTesting Towers might selectively choose which reviews we mention.
Thanks Mel and Jason - and also thanks to those who reviewed on Amazon - Isabel, Lisa and B. Long.

Thursday, 19 May 2016

National Software Testing Conference 2016

On 17th May, 2016 I presented “The Art of Questioning to improve Software Testing, Agile and Automating” at the National Software Testing Conference.
It was only a 20-minute talk, so I had to choose my high-level points very carefully.
You can read the slides over on my Conference Talk Page. I also wrote up the process of pitching the talk, which might provide some value if you are curious about what goes on in the background for an invited talk.

Golden Ticket and Conference Programme

The presentation draws on lessons learned from various forms of fast, brief and systemic psychotherapy, with a few simple points:
  • Why? is a question that targets beliefs
  • How, What, Where, When, Who - all target structure and process
  • We all have models of the world and our questions reflect that model
  • The answers we give reflect our model
  • Responses to answers give information on how well the models of the question asker and the person answering match up
  • Testing can be modelled as a questioning process
  • Improving our ability to ask questions improves our ability to test, manage, and change behaviour.
You can read some early work I did in this area (2004) in my ‘NLP For Testers’ papers.
The conference is aimed at managers, so I thought that psychological tools might be more useful than software technology tools.
I spent a lot of time between talks speaking to people and networking, so I didn’t get a chance to see many talks. But I made some notes on those I did get to see, and I will have to think about them.
A few of the talks overlapped - particularly Paul Gerrard, Daniel Morris, and Geoff Thompson. At least, they overlapped for me.
Paul Gerrard from Gerrard Consulting provided a good overview of how important modelling is for effective testing, and he mentioned his ‘New Model of Software Testing’ to illustrate how the ‘checking’ or ‘asserting’ part of testing is a very small subset of what we do. Paul also described some work he is doing on building tool support for exploratory testing. I’m looking forward to seeing this when Paul releases it.
Daniel Morris overlapped with Paul when he was describing the various social networks and online shopping tools. Daniel was drawing attention to the multiple views that social networks and shopping sites provide. They have a rich underlying model of the products, the customers, the shopping patterns (what people buy when they buy this), the navigation habits of the users, etc. All very much aligned to the content Paul described and the tool support that Paul was building.
Both Daniel and Paul described some of the difficulties in visualising or collating the work of multiple testers, e.g. when testing, how do you see what defects have already been raised in this area? If we were navigating a shopping site we would see it on screen as we navigate, along with ‘*****’ starred reviews of ‘how good is this section of the application’. I found value in this because I’m always trying to find ways to better visualise and explore the models I make of software, and I found interesting parallels here, and obvious gaps in my current tool support.
Geoff Thompson from Experimentus described some ‘silent assassins’ for projects and stressed that companies and outsourced providers seem to be moving to a focus on ‘cost’ rather than ‘quality’. Geoff also provided different views of project progress and cost, again demonstrating that the ‘model’ of a project can be represented in different ways.
I also saw David Rondell provide an overview of various technologies and the rate of change that testing has to deal with. Container based technologies and rapid environment configuration tools like Docker, Mesos, Ansible, Vagrant, Chef, etc. were mentioned in many of the talks. Very often we don’t have time at a management level to really dive deep into these technologies but it was good to see them being discussed at a management level. (There is a good list of associated technologies on the XebiaLabs website)
The Gala in the evening gave us a chance to network further, and I received an excellent masterclass in sales from Peter Shkurko from Parasoft; it is always good to augment book learning with experience from real practitioners. I asked Peter a lot of questions over dinner, and Peter’s experience helped me expand my model of sales with tips I hadn’t picked up from any sales book or training.
For any conference organisers: if you can get the vendors to present not just ‘their tools’ but also their experience of ‘selling those tools’, particularly selling software testing or software tools, I think participants would find that useful.
I tried to pay Peter, and the rest of the table back by contributing testing knowledge and experience in the Gala Quiz. (We got lucky because there was a 4 point value WebDriver question that we were able to ace.)
The result of our combined sales and testing knowledge was that our table won the Gala Quiz and received ‘golden tickets’ which will grant us access to the European Software Testing Awards in November. Because sales, marketing, training, development and testing can all work together.

Friday, 8 April 2016

How to Watch Repositories on Github via a NewsFeed

TL;DR subscribe to master commits on github with /commits/master.atom
There are a lot of ‘lists’ and ‘notes’ on github, not just source code.
I would like to be notified when these lists change.
There are official ways of watching repositories on github:
I primarily consume blogs and updates through news feeds.
The officially documented newsfeeds provide a bit too much information for me.
I really just want to know when new commits are pushed to master.
But the personal news feed functionality requires me to ‘watch’ a repo and then I’ll receive changes for:
  • issues
  • pull request actions
  • branch actions
  • comments
  • and all push commits
All I really care about are push commits to master.
I wanted to know when changes are made to:
The approach I take is to append

  • /commits/master.atom

to the repository URL, and subscribe to the resulting Atom feed.
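As a sketch of what consuming such a commits feed involves: GitHub serves each repository's commit history as an Atom document, which can be parsed with standard XML tooling. The repository name below is a hypothetical placeholder, and the sample XML is a trimmed-down illustration of the feed structure, not a verbatim GitHub response.

```python
# Sketch: extract commit titles from a GitHub commits Atom feed.
# Feed URL pattern: https://github.com/<user>/<repo>/commits/master.atom
# (the repo below is a hypothetical placeholder).
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def commit_titles(atom_xml):
    """Return the <title> text of each <entry> in an Atom document."""
    root = ET.fromstring(atom_xml)
    return [entry.findtext(ATOM_NS + "title")
            for entry in root.iter(ATOM_NS + "entry")]

# A minimal sample of the Atom structure such a feed contains:
sample = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Recent Commits to example-repo:master</title>
  <entry><title>Fix broken link in README</title></entry>
  <entry><title>Add testing resources list</title></entry>
</feed>"""

print(commit_titles(sample))

# In practice you would fetch the feed first, e.g.:
#   import urllib.request
#   atom_xml = urllib.request.urlopen(
#       "https://github.com/<user>/<repo>/commits/master.atom").read()
```

Of course, a feed reader does all of this for you; the point is only that the per-repo `.atom` endpoint carries exactly the push-commit information, and nothing else.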
If you find any good testing resources on github then please let me know, either via a comment or contact me.

Wednesday, 6 April 2016

Behind the Scenes: Tools and workflow for blogging on blogger and writing for other reasons

TL;DR: Write offline. Copy/Paste to online.

This blog is powered by blogger. I still haven’t spent a lot of time creating a template that formats it nicely. Partly because I tend to read all my blog feeds through a feed reader, so I really don’t know what anyone’s blog looks like. I have ‘fix blogger formatting’ on my todo list, but it never seems to rise to the top.

I don’t particularly like the way that blogger uses html for posts: it avoids paragraphs and uses span, div and br.

But, it is easy and performant, so I use it.

What I don’t do, however, is write my posts in the blogger editor.

I thought I’d give a quick overview of my publishing and writing process for this blog, because this is the same process I use when I’m working on Wordpress, most wikis, Jira, etc.

  • write in evernote using markdown
  • copy paste the markdown into an online markdown editor
  • “export as” “HTML”
  • open downloaded .html file
  • view source
  • copy paste everything between <body></body> into the ‘HTML’ view in blogger
  • review in preview in blogger
  • publish
  • review published form
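The markdown→HTML step in the workflow above can be sketched as a toy converter. This is an illustration only (a real export uses a full markdown converter); it handles just headings and paragraphs, which is enough to show the clean HTML that results: paragraphs become plain `<p>` tags rather than blogger's span/div/br soup.

```python
# Toy markdown -> HTML converter: headings and paragraphs only.
# Illustrative sketch of the "export as HTML" step, not a real exporter.
import re

def markdown_to_html(text):
    html_blocks = []
    # Blocks are separated by blank lines.
    for block in re.split(r"\n\s*\n", text.strip()):
        heading = re.match(r"(#{1,6})\s+(.*)", block)
        if heading:
            level = len(heading.group(1))
            html_blocks.append(f"<h{level}>{heading.group(2)}</h{level}>")
        else:
            # Join hard-wrapped lines of a paragraph with spaces.
            html_blocks.append("<p>" + " ".join(block.splitlines()) + "</p>")
    return "\n".join(html_blocks)

draft = """# Behind the Scenes

Write offline.
Copy/Paste to online."""

print(markdown_to_html(draft))
```

The output is the kind of minimal HTML you can paste straight into the ‘HTML’ view in blogger.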


I do this for a few reasons:

  1. Web apps crash when I use them for editing
  2. I have a record of when I wrote the blog post because it is part of my Daily Notes ‘note’
  3. Writing in markdown means I focus on the content rather than the formatting
  4. multiple review points (yes, my writing goes through multiple reviews and still ends up like this!) each one shows a slightly different ‘view’ of it, so I pick up different errors.

Web apps crash when I use them for editing

  • I’ve lost work in Wordpress.
  • I’ve lost defects raised in Jira.
  • I’ve lost edits to wiki pages.

You name the system that allows you to ‘create’ and ‘edit’ the ‘things’ online, and I’ve lost edits to it when:

  • the browser crashed
  • the browser hung
  • the tab froze
  • I accidentally pressed some magic button on the mouse that made everything go mental
  • etc.

I don’t trust online editing, so I do most of my writing offline in evernote or a text editor.

Secondary Gain

Because I’m writing it offline I have a record of when I wrote the blog post because it is part of my Daily Notes ‘note’.

Evernote does seem to be slowing down these days when I write long notes, though. I don’t think it used to do this; I may have to move back to a ‘Day Notes’ txt file by default and import it into Evernote at the end of the day.

Content rather than format

I have no fancy icons and gimmicks to distract me from my writing. Which means you get top quality content and no padding. Actually you probably get first draft text, but at least you know I wasn’t distracted by formatting.

Multiple Review Points

Yes, my writing goes through multiple reviews and still ends up like this!

I first review the text in Evernote. Then in Preview in the blogger editor and then on the page after publishing.

Each stage shows a slightly different ‘view’ of it, so I pick up different errors.

If I do fix formatting, it is usually after publishing, when the post is live.


I write this way for most of the stuff I write.

  • emails
  • tweets
  • client reports
  • birthday card greetings
  • you name it

I also do this for my testing notes and test summary reports.

Which neatly brings us back to the topic of testing.

Happy testing.

Thursday, 31 March 2016

Everyday Browsing to improve your web testing skills - Why?

Who doesn’t like looking at the innards of a web page and fiddling with it?
  • Inspect Element
  • Find the src attribute of an image on a page
  • Edit it to the url of another, different image
You could, as my son enjoys doing, visit your school website and replace images of people with blobfish, and much hilarity doth ensue.
In my Sigist slides you’ll find some ‘tips’ for improving your web technical skills which cover this type of skill.
Asking and investigating:
  • How is the site doing that?
  • What are the risks of doing that?
  • Could you test that?
  • Do you understand it?
Some people ask: Why would I need to learn this stuff? Why would I use this?
I find that interesting. They have a different core set of beliefs underpinning their approach to testing than I do. They test differently, but it means I have to explain ‘why?’ for something that I do ‘because’.
I have studied the testing domain. I’ve read a lot of ‘testing’ books and have a fairly sound grasp of the testing techniques, principles, and the many and varied reasons why we might test.
None of those books described the technology of the system.
Very few of those books used the technology of the system as a way of identifying risk.
Risk tends to be presented as something associated with the business. Business Risk. e.g. "Risk of loss of money if the user can’t do X".
I spend a lot of time on projects understanding the technology, to identify risk in how we are using the technology and putting it together.
  • If we are using multiple databases that replicate information across to stay in synch, then is there a risk that a user might visit the site and see one set of data, then on the next visit see a different set (because now they are connected to a database that hasn’t yet had the information replicated across to it)?
  • Is there a risk that something goes wrong when we visit the site and it is pulling out information from a database that is currently being synched to?
If I ask questions like that and people don’t know the answers then I think we don’t understand the technology well enough and there might be a risk of that happening. I would need to learn the technology more to find out.
If people do know, and we have ‘strategies’ for coping with it (e.g. our load balancer directs the same user to the same database), are there any risks with our implementation of that strategy? Will our test environment have the same implementation? Could we even encounter a manifestation of this risk when we are testing?
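The stale-read risk described above can be sketched in a few lines. This is a deliberately simplified simulation of replication lag, not any real database API: one ‘primary’ takes writes, a ‘secondary’ only catches up when replication runs, and a user whose reads are load-balanced across both can see inconsistent data in between.

```python
# Simulation of the replication-lag risk: a user's reads are balanced
# across two replicas, and writes take time to copy from primary to
# secondary. (Hypothetical sketch, not a real database API.)

class Replica:
    def __init__(self):
        self.data = {}

primary, secondary = Replica(), Replica()

def write(key, value):
    primary.data[key] = value  # writes land on the primary first

def replicate():
    # In a real system this runs asynchronously, some time later.
    secondary.data.update(primary.data)

# User updates their address; replication has not happened yet.
write("address", "2 New Street")

# Two successive visits, routed to different replicas:
reads = [primary.data.get("address"), secondary.data.get("address")]
print(reads)  # the visit routed to the lagging replica sees stale data

replicate()
print(secondary.data.get("address"))  # consistent once replication catches up
```

The question for testing then becomes: can your test environment reproduce that window between the write and the replicate, or does it hide the risk by running everything against a single database?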
As well as knowing the requirements, I want to understand the pieces and how they are put together.
Because I know from putting together plastic models as a child, or flatpack furniture as an adult, that there are risks associated with putting things together.
I have to learn about technology to do this. I then have to interpret that technology with my ‘testing mind’ and in terms of the system of the project I’m working on.
I suspect that is a better ‘why?’ answer for learning the technology, and the associated technical skills, than my more flippant:
  • Q: Why would I use this?
  • A: Well, if you don’t have the skill, you never will. If you learn it right, then you might.
You'll find some simple tasks to help expand on this in my Sigist slides.