Thursday, 19 May 2016

National Software Testing Conference 2016

On 17th May, 2016 I presented “The Art of Questioning to improve Software Testing, Agile and Automating” at the National Software Testing Conference.
It was only a 20-minute talk, so I had to focus on high-level points very carefully.
You can read the slides over on my Conference Talk Page. I also wrote up the process of pitching the talk, which might provide some value if you are curious about the process that goes on in the background for an invited talk.

Golden Ticket and Conference Programme

The presentation draws on lessons learned from various forms of fast, brief and systemic psychotherapy, making a few simple points:
  • Why? is a question that targets beliefs
  • How, What, Where, When, Who - all target structure and process
  • We all have models of the world and our questions reflect that model
  • The answers we give reflect our model
  • Responses to answers give information on how well the models of the asker and the answerer match up
  • Testing can be modelled as a questioning process
  • Improving our ability to ask questions improves our ability to test, manage, and change behaviour.
You can read some early work I did in this area (2004) in my ‘NLP For Testers’ papers.
The conference is aimed at managers, so I thought that psychological tools might be more useful than software technology tools.
I spent a lot of time between talks speaking to people and networking, so I didn’t get a chance to see many talks. But I made some notes on those I did get to see, which I will have to think about.
A few of the talks overlapped - particularly Paul Gerrard, Daniel Morris, and Geoff Thompson. At least, they overlapped for me.
Paul Gerrard from Gerrard Consulting provided a good overview of how important modelling is for effective testing, and he mentioned his ‘New Model of Software Testing’ to illustrate how the ‘checking’ or ‘asserting’ part of testing is a very small subset of what we do. Paul also described some work he is doing on building tool support for exploratory testing. I’m looking forward to seeing this when Paul releases it.
Daniel Morris overlapped with Paul when he described various social networks and online shopping tools. Daniel drew attention to the multiple views that social networks and shopping sites provide. They have a rich underlying model of the products, the customers, the shopping patterns, what people buy when they buy this, the navigation habits of the users, etc. All very much aligned with the content Paul described and the tool support that Paul was building.
Both Daniel and Paul described some of the difficulties in visualising or collating the work of multiple testers, e.g. when testing, how do you see what defects have already been raised in this area? If we were navigating a shopping site, we would see that on screen as we navigate, along with ‘*****’ star ratings of ‘how good is this section of the application’. I found value in this because I’m always trying to find better ways to visualise and explore the models I make of software, and I found interesting parallels here, and obvious gaps in my current tool support.
Geoff Thompson from Experimentus described some ‘silent assassins’ for projects and stressed that companies and outsourced providers seem to be moving to a focus on ‘cost’ rather than ‘quality’. Geoff also provided different views of project progress and cost, again demonstrating that the ‘model’ of a project can be represented in different ways.
I also saw David Rondell provide an overview of various technologies and the rate of change that testing has to deal with. Container based technologies and rapid environment configuration tools like Docker, Mesos, Ansible, Vagrant, Chef, etc. were mentioned in many of the talks. Very often we don’t have time at a management level to really dive deep into these technologies but it was good to see them being discussed at a management level. (There is a good list of associated technologies on the XebiaLabs website)
The Gala in the evening gave us a chance to network further and I received an excellent masterclass in Sales from Peter Shkurko from Parasoft, it is always good to augment book learning with experience from real practitioners. I asked Peter a lot of questions over dinner and Peter’s experience helped me expand my model of sales with tips I hadn’t picked up from any Sales book or training.
For any conference organisers: if you can get the vendors to present not just ‘their tools’ but also their experience of ‘selling those tools’, particularly selling software testing or software tools, I think participants would find that useful.
I tried to pay Peter, and the rest of the table, back by contributing testing knowledge and experience in the Gala Quiz. (We got lucky because there was a 4-point WebDriver question that we were able to ace.)
The result of our combined sales and testing knowledge meant that our table won the Gala Quiz and received ‘golden tickets’ which will grant us access to the European Software Testing Awards in November. Because sales, marketing, training, development and testing can all work together.

Friday, 8 April 2016

How to Watch Repositories on Github via a NewsFeed

TL;DR subscribe to master commits on github with /commits/master.atom
There exist a lot of ‘lists’ and ‘notes’ on github, not just source code.
I would like to be able to be notified when these lists change.
There are official ways of watching repositories on github:
I primarily use news feeds through
The officially documented news feeds provide a bit too much information for me.
I really just want to know when new commits are pushed to master.
But the personal news feed functionality requires me to ‘watch’ a repo and then I’ll receive changes for:
  • issues
  • pull request actions
  • branch actions
  • comments
  • and all push commits
All I really care about are push commits to master.
I wanted to know when changes are made to:
The approach I take is to subscribe to the commits feed on
  • /commits/master.atom
And therefore subscribe to the RSS Atom feed at:
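The pattern above can be sketched in a few lines of Python (the owner and repository names here are hypothetical): build the feed URL from the repository path, and parse the Atom XML it returns with the standard library.

```python
import xml.etree.ElementTree as ET

# Any repository's master-commits feed follows this URL pattern
# (owner/repo here are made up for illustration):
owner, repo = "someuser", "some-testing-notes"
feed_url = f"https://github.com/{owner}/{repo}/commits/master.atom"
print(feed_url)

# A trimmed sample of what such a feed returns; a real feed can be
# fetched with urllib.request.urlopen(feed_url).
sample = """<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><title>Add notes on exploratory testing</title></entry>
  <entry><title>Fix typo in README</title></entry>
</feed>"""

# Atom elements are namespaced, so findall needs the namespace mapping.
ns = {"atom": "http://www.w3.org/2005/Atom"}
titles = [e.text for e in ET.fromstring(sample).findall("atom:entry/atom:title", ns)]
print(titles)
```

Most feed readers will accept the URL directly; the parsing step is only needed if you want to script your own notifications.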
If you find any good testing resources on github then please let me know, either via a comment or contact me.

Wednesday, 6 April 2016

Behind the Scenes: Tools and workflow for blogging on blogger and writing for other reasons

TL;DR: Write offline. Copy/paste to online.

This blog is powered by blogger. I still haven’t spent a lot of time creating a template that formats it nicely. Partly because I tend to read all my blog feeds through a feed reader, so I really don’t know what anyone’s blog looks like. I have ‘fix blogger formatting’ on my todo list, but it never seems to rise to the top.

I don’t particularly like the way that blogger uses html for posts: it avoids paragraphs and uses span, div and br.

But, it is easy and performant, so I use it.

What I don’t do, however, is write my posts in the blogger editor.

I thought I’d give a quick overview of my publishing and writing process for this blog because this is the same process I use when I’m working on WordPress, most wikis, Jira, etc.

  • write in Evernote using markdown
  • copy paste markdown to
  • “export as” “HTML”
  • open downloaded .html file
  • view source
  • copy paste everything between <body></body> into the ‘HTML’ view in blogger
  • review in preview in blogger
  • publish
  • review published form


Why do I write this way?
  1. Web apps crash when I use them for editing
  2. I have a record of when I wrote the blog post because it is part of my Daily Notes ‘note’
  3. Writing in markdown means I focus on the content rather than the formatting
  4. Multiple review points (yes, my writing goes through multiple reviews and still ends up like this!) - each one shows a slightly different ‘view’ of it, so I pick up different errors.

Web apps crash when I use them for editing

  • I’ve lost work in WordPress.
  • I’ve lost defects raised in Jira.
  • I’ve lost edits to wiki pages.

You name the system that allows you to ‘create’ and ‘edit’ the ‘things’ online, and I’ve lost edits to it when:

  • the browser crashed
  • the browser hung
  • the tab froze
  • I accidentally pressed some magic button on the mouse that made everything go mental
  • etc.

I don’t trust online editing, so I do most of my writing offline in Evernote or a text editor.

Secondary Gain

Because I’m writing it offline I have a record of when I wrote the blog post because it is part of my Daily Notes ‘note’.

Although Evernote seems to be slowing down these days when I write long notes. I don’t think it used to do this; I may have to start moving back to a ‘Day Notes’ txt file by default and import into Evernote at the end of the day.

Content rather than format

I have no fancy icons and gimmicks to distract me from my writing. Which means you get top quality content and no padding. Actually you probably get first draft text, but at least you know I wasn’t distracted by formatting.

Multiple Review Points

Yes, my writing goes through multiple reviews and still ends up like this!

I first review the text in Evernote. Then in Preview in the blogger editor and then on the page after publishing.

Each stage shows a slightly different ‘view’ of it, so I pick up different errors.

If I do fix formatting, it is usually after publishing, when it is live.


I write this way for most of the stuff I write.

  • emails
  • tweets
  • client reports
  • birthday card greetings
  • you name it

I also do this for my testing notes and test summary reports.

Which neatly brings us back to the topic of testing.

Happy testing.

Thursday, 31 March 2016

Everyday Browsing to improve your web testing skills - Why?

Who doesn’t like looking at the innards of a web page and fiddling with it?
  • Inspect Element
  • Find the src attribute of an image on a page
  • Edit it to the url of another, different image
You could, as my son enjoys doing, visit your school website and replace images of people with blobfish, and much hilarity doth ensue.
In my Sigist slides you’ll find some ‘tips’ for improving your web technical skills which cover this type of skill.
Asking and investigating:
  • How is the site doing that?
  • What are the risks of doing that?
  • Could you test that?
  • Do you understand it?
Some people ask: Why would I need to learn this stuff? Why would I use this?
I find that interesting. They have a different core set of beliefs underpinning their approach to testing than I do. They test differently, but it means I have to explain ‘why?’ for something that I do ‘because’.
I have studied the testing domain. I’ve read a lot of ‘testing’ books and have a fairly sound grasp of the testing techniques, principles and the many, varied reasons why we might test.
None of those books described the technology of the system.
Very few of those books used the technology of the system as a way of identifying risk.
Risk tends to be presented as something associated with the business. Business Risk. e.g. "Risk of loss of money if the user can’t do X".
I spend a lot of time on projects understanding the technology, to identify risk in how we are using the technology and putting it together.
  • If we are using multiple databases and they are replicating information across to stay in synch, then is there a risk that a user might visit the site and see one set of data, then on the next visit see a different set (because they are now connected to a database that hasn’t had the information replicated across to it yet)?
  • Is there a risk that something goes wrong when we visit the site and it is pulling out information from a database that is currently being synched to?
If I ask questions like that and people don’t know the answers then I think we don’t understand the technology well enough and there might be a risk of that happening. I would need to learn the technology more to find out.
If people do know, and we have ‘strategies’ for coping with it - e.g. our load balancer directs the same user to the same database - are there any risks with our implementation of that strategy? Will our test environment have the same implementation? Could we even encounter a manifestation of this risk when we are testing?
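The replication risk described above can be made concrete with a toy model (all names hypothetical; a sketch, not a real load balancer or database): a naive round-robin balancer lets consecutive reads from the same user hit replicas that disagree until replication catches up.

```python
class Replica:
    """A trivially simple key-value store standing in for one database."""
    def __init__(self):
        self.data = {}

class Cluster:
    """Two replicas with lazy replication and a naive round-robin balancer."""
    def __init__(self):
        self.primary = Replica()
        self.secondary = Replica()
        self.pending = []      # writes not yet copied to the secondary
        self.requests = 0

    def write(self, key, value):
        # Writes go to the primary; replication happens later.
        self.primary.data[key] = value
        self.pending.append((key, value))

    def replicate(self):
        # The 'synch' step: copy outstanding writes to the secondary.
        for key, value in self.pending:
            self.secondary.data[key] = value
        self.pending.clear()

    def read(self, key):
        # Round-robin: alternate replicas on each request, ignoring
        # which user is asking (the risky part).
        self.requests += 1
        replica = self.primary if self.requests % 2 else self.secondary
        return replica.data.get(key)

cluster = Cluster()
cluster.write("basket", "3 items")

first = cluster.read("basket")   # served by the primary
second = cluster.read("basket")  # served by the secondary, before replication

print(first, second)             # the same user sees two different answers
cluster.replicate()              # after replication the replicas agree
```

A sticky-session strategy would change `read` to pin a user to one replica, which is exactly the kind of implementation detail worth probing in a test environment.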
As well as knowing the requirements. I want to understand the pieces, and how they are put together.
Because I know from putting together plastic models as a child, or flatpack furniture as an adult, that there are risks associated with putting things together.
I have to learn about technology to do this. I then have to interpret that technology with my ‘testing mind’ and in terms of the system of the project I’m working on.
I suspect that is a better ‘why?’ answer; for learning the technology, and the associated technical skills, than my more flippant:
  • Q: Why would I use this?
  • A: Well, if you don’t have the skill, you never will. If you learn it right, then you might.
You'll find some simple tasks to help expand this in my Sigist slides 

Monday, 21 March 2016

Behind the Scenes - "Dear Evil Tester" lessons in risk based release management

On Wednesday the 16th a paperback copy of “Dear Evil Tester” arrived through the post. This was my “final” proof copy. Was it ready for release? In this episode we learn that the “Go Live” release decision is always a business risk decision.

Errors found in the staging environment

Upon reading it, I found two tiny errors.
Did I delay publishing?
Heck no.
I’m Agile. I’m lean (with a thin layer of fat for winter).
Did I fix the errors?

Errors found late cost more to fix than early

Heck yes.
What were they?
Two typos.
  • I had written a " instead of a :
  • I missed out one character from the end of a word
I could probably have left the second error. But the first one annoyed me. Especially since I was sure I had fixed it prior to ordering the proof.
I could have hit publish there and then, but I didn’t. I fixed the error.

Integration Testing

But I don’t have any integration tests. How could I check I had fixed it?
I needed to improve my test approach. Previously I had been scanning the changed pages, and flipping through the rest to check for unexpected changes. Then order a proof. Then wait 3 days. Then repeat.
This time I found a tool to help: diff-pdf.
Perhaps, I could generate a ‘print ready pdf’ and ‘diff’ it with the previous one.
It worked. The diff showed me two pages with changes, and both exactly where I expected them.
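diff-pdf compares the rendered pages themselves; the underlying idea - compare the new artefact against the previous one and report only what changed - can be sketched on extracted page text with Python’s difflib (a simplified stand-in, not what diff-pdf does internally; the page texts are hypothetical):

```python
import difflib

# Hypothetical per-page text extracted from two proof PDFs.
previous = ["Dear Evil Tester:", "Page two text", "Page three text"]
current  = ["Dear Evil Tester:", "Page two text, amended", "Page three text"]

# Report which pages changed, 1-indexed like a printed book.
changed_pages = [
    i + 1
    for i, (old, new) in enumerate(zip(previous, current))
    if old != new
]
print(changed_pages)

# A unified diff pinpoints the change within the pages.
for line in difflib.unified_diff(previous, current, lineterm=""):
    print(line)
```

The value of either approach is the same: instead of re-reading the whole book, you review only the pages the diff flags, and check that nothing changed where you expected no change.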
I was pretty sure this was good enough to de-risk the release process.

Release Process

But I still had to go through a release process.
I had to resubmit the print ready proof to the CreateSpace process.
Then wait for the ‘we check your document’ process to complete.
But that would take too long. I had already started the “Coming Soon” hype machine. I couldn’t dial it back into “Coming a little bit less soon than I wanted, but still Coming Soon” mode. Or I could, but I didn’t want to.
So on Thursday morning, after the CreateSpace approval process had finished, I was able to review the digital proof of the book.
I wasn’t so sure this was good enough to de-risk the release process. Because I didn’t gain any information. I didn’t notice any problems. I didn’t think there were any problems. So this just confirmed my world view.
In theory, this should mean that I found no additional risks and issues, and so should be good to go.
As a tester, I’m not used to finding no problems. I must have missed something.
I could order a print copy, wait a few days, then compare the text, and then release. If I did that I probably wouldn’t release the book for another 5 days.
What to do? What to do?

What to do?

On Thursday.
I ordered a print proof.
And I went live.
I thought - if there was a problem then I would receive the proof copy before anyone received their copy.

Friday, Saturday, Sunday

On Friday, people started tweeting that they had received their print copies from Amazon.
HUH! WHAT! My proof still hadn’t arrived!
On Saturday, people tweeted that they had received their print copies from Amazon.
WAIT! But my proof copy still hadn’t arrived!
On Sunday… etc.


And lo! The proof copy arrived.
I read through it and it looked fine.

Lessons learned

  • Release ‘Go Live’ decisions are always risk-based decisions.
  • Sometimes business priorities will take precedence over ‘testing’ risks and concerns.
  • Tools can help.
  • You have to know what you need a tool to do, before you find one.
  • If you self-publish, then try to order your final proof from Amazon rather than CreateSpace. It will arrive faster.

Friday, 18 March 2016

Behind the Scenes - The "Dear Evil Tester" Style Guide and its Impact on my Testing

During the editing stages of "Dear Evil Tester" I started to write and maintain a style guide.
This was to help me track the ‘basic’ errors I found in my writing, and as an attempt to ensure a consistent attitude and formatting in the text.
Most books, and "Dear Evil Tester" is no exception, have a ‘conventions used in this book’ section. This is a mini style guide: a set of small templates that I can use during the writing process. That section doesn’t go so far as to list the weaknesses of the writer, which is one reason for making it public; the weaknesses can be inferred from reading a full style guide, so we normally keep that secret.
I’ve embedded the style guide below.
But first… a quick reflection since I will carry this approach forward into my testing.
I generally make a lot of notes when I’m testing. I do not always summarise them into a concise guide. I will now.
I’ll write down the:
  • handy tips,
  • obvious errors I commit,
  • reminders of high level aims.
Previously I’ve relied on my archive of notes and ‘searching’ through them, which has worked well. But I think that the additional exercise of reviewing, and summarising, will help embed the knowledge in my head which might even cut down on the searching.
It should also make it possible to share with other people.
I know that we have tried to do this on projects through wikis but they often fall into disuse. Partly because they are written to be ‘read’, rather than ‘remind’, so are often wordier than they need to be. Also they tend to be written for ‘others’ rather than for ‘ourselves’.
Having the guide for my testing, might also help ‘others’ spot weaknesses in my approach. For example, if I haven’t listed a shortcut/feature in the tools then I might be missing out on something that might help my testing. The guide might reveal that to someone more expert than myself.
I’ve had it in mind to build these for each of my websites. When I do, it will also have the side-effect of creating a single page that I can quickly review for any obvious CSS or common JavaScript issues.
And now, the style guide…

Evil Tester Writing Style Guide

This is the ‘writing style guide’ to support editors for Dear Evil Tester.
But Alan, that’s you! Yeah, and I forget stuff. OK!

Sublime Tips:

  • ctrl + shift + F - Find across all files
  • F6 - Spellcheck

Publishing Tips

  • Remember to check in to Version Control before starting a preview
  • Remember to deploy to Dropbox before starting a preview
  • Remember to close down PDF viewer, before starting a preview

Editing reminders

If you find an error, “Find across all files” and see if you find it again - if you do, fix it!
When reading from print, to find the file, use “Find across all files”
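The “find it everywhere, fix it everywhere” reminder can also be scripted for editing outside Sublime - a small sketch (file names and the search pattern are hypothetical) that sweeps all manuscript files for a known weakness, here the guide’s own “its vs it’s” rule:

```python
import re
import tempfile
from pathlib import Path

def find_across_files(root, pattern):
    """Yield (file name, line number, line) for every regex match under root."""
    regex = re.compile(pattern)
    for path in sorted(Path(root).rglob("*.md")):
        for number, line in enumerate(path.read_text().splitlines(), start=1):
            if regex.search(line):
                yield path.name, number, line

# Demonstration on temporary files standing in for manuscript chapters.
with tempfile.TemporaryDirectory() as root:
    Path(root, "letter-01.md").write_text("Its a trap.\nAll fine here.\n")
    Path(root, "letter-02.md").write_text("Its another trap.\n")
    hits = list(find_across_files(root, r"\bIts\b"))

for name, number, line in hits:
    print(f"{name}:{number}: {line}")
```

Each hit then gets the same treatment as in the editor: inspect it, and if it is a genuine error, fix every occurrence before the next proof.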

Writing Style

Worry less about the sentence structure. Worry more about the flow. Think. What would Norvell Page write? What would Lester Dent write? Do not model H. P. Lovecraft.
Is it a sentence? Who cares. Just make sure it looks like one. And reads like one.
But we’re not supposed to start with a conjunction!
And who said that? Was it someone who started a sentence with “however”? What would the author of the Bible write? (Hint: And in paragraph two, they started with “And”! If it’s good enough for God…)
  • If you have (brackets) and have punctuation outside the brackets (like this), then it is only outside if it started outside. (Having a full stop outside this would be bad.)
  • Its vs it’s (remember: find across all files) - It’s wrong to write its, unless its thing is being described.
  • Don’t worry too much about capitalisation, but at least make it consistent in each letter.
  • PS. PPS. (Don’t make it a dotty abbreviation.)
  • Capitals for Things,
    • at the very least don’t mix it. “exploratory Testing” would be bad.
  • Chapter and Section Titles use Capitals
    • Letter chapters are written like sentences. Unless they have a Thing in it.
  • For example, we will not write For Example. And we will still write e.g.
But what about the readers that know grammar?
Ha. They’re going to be annoyed however you write it. Write it for the masses.

Recurring Jokes

If you ever get the opportunity to make a joke about repetition then you should take it. Copy and paste a previous section of text again. Preferably the previous sentence or paragraph. Some people will get it. Some people won’t notice. Some will call the grammar police. Either way, much hilarity shall ensue. So if you ever get the opportunity to make a joke about repetition then you should take it.

Thursday, 17 March 2016

Behind the scenes of Dear Evil Tester : Hitting Publish

Today became "Dear Evil Tester" launch day.
I suspect ‘professionals’ create a plan and stick to it. I’m more of a kanban, ship it when it is ready, kind of guy.
On the 16th March, I received a final proof copy, and I reviewed that. Then on the 17th I hit publish.
I found the free pdf comparison tool "diff-pdf" very useful for final leanpub print ready pdf comparison during my final proof phase.
The launch process proceeded as follows:
  • start at leanpub
  • check the book details
  • realise you haven’t added a price
  • add price (make it slightly cheaper than amazon kindle because of the difference in royalty rates)
  • click publish
  • check the page looks OK in incognito mode
  • tweet
  • tweet that a sample is available now so people can try before they buy
  • move on to createspace
  • click publish
  • stare at the message that says “this might take 3-4 days” and wonder how people manage to create a co-ordinated launch across all platforms
  • move on to kdp (kindle direct publishing)
  • click publish
  • again stare aghast at the “this might take 3-4 days” message
  • be amazed as you start responding to peeps’ tweets about buying the book. Thank you all.
  • see that Amazon have listed the book for sale
  • tweet that
  • start editing all your web pages
  • notice that the feedjs server you were relying on for your rss feeds has stopped responding, so create a new endpoint for your existing rss caching to work on different sites
  • notice that the kindle version has been published
  • tweet that
  • respond to more peeps’ tweets. Thank you all again.
  • Write a promotional blog post
I still have more to do in the launch.
  • make sure Amazon notice that the paperback and kindle book are actually the same book and are listed together
  • create a leanpub promo video
  • amend the sales pages
But overall, the ‘publish’ part of ‘self publishing’ has become quite smooth.
Thank you for your support.