Friday, 22 July 2016

We test REST even if it isn't REST

The Microsoft REST guidelines are being critiqued for not being particularly RESTful.
Pretty much every team I’ve worked on has had arguments about their implementation of REST APIs.
  • Versioning?
  • URI formats?
  • Query strings or paths?
  • Which return code?
  • Support OPTIONS?
  • Custom headers?
Those discussions are important for the design, but sometimes people get hung up on disagreements about theory.
The theory is important. We should read it, but at some point we have to make a decision about the implementation that works for us.
And at that point the theory almost doesn’t matter for testing. Testing deals with realities.
Reality - We could identify risks related to not interpreting REST in the same way as others:
  • tools might be harder to use against our service
  • libraries might not work as well against our service
  • we might get hammered online
And as a team we might choose to experiment to mitigate the technical risk, and take it on the chin for the community risks.
Reality:
  • Does it do what we want it to do?
  • Does it do stuff we don’t want it to do?
To begin answering these questions we have to learn how to interact with whatever the heck we have built, and however the heck we have built it.
And for most REST APIs this means working with HTTP: crafting requests, and inspecting the status codes, headers and payloads that come back.
All of this means we have to do stuff. Learn the appropriate techniques, technologies and tools. And practice, practice, practice.
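Whatever theory we settled on, the interaction itself is just HTTP. Here is a minimal sketch of the kind of "doing stuff" I mean - assuming a hypothetical endpoint at http://localhost:8080/api/things, and using nothing more exotic than the JDK's HttpURLConnection:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestProbe {
    public static void main(String[] args) throws Exception {
        // hypothetical endpoint - point this at whatever the heck you built
        URL url = new URL("http://localhost:8080/api/things");
        HttpURLConnection con = (HttpURLConnection) url.openConnection();
        con.setRequestMethod("GET");
        con.setRequestProperty("Accept", "application/json");

        // does it do what we want it to do? e.g. a 200 and a JSON payload
        System.out.println("Status: " + con.getResponseCode());
        System.out.println("Content-Type: " + con.getHeaderField("Content-Type"));

        // dump the body so we can see what it actually does
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Swap the method to POST, fiddle the headers, break the URL - and observe what actually comes back, not what the theory says should come back.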


P.S. Our $10 ’Technical Web Testing 101’ course now includes “An Introduction to Interactive REST API Testing”. This should provide a good introductory, practical set of lectures and exercises to help people learn to interact with REST APIs - regardless of how RESTful they are.

On Workarounds and Fixes

TLDR: fixes are important. When we have work to do, sometimes a workaround is more important.


One of the skills I think we develop when working with software is a search for workarounds.

I was reminded of that this morning when the start button on Windows 10 refused to work.

The start menu would not appear in response to the windows key, nor could I click on it.




The search for a fix

A quick web search revealed that, surprisingly, this is an incredibly common issue.

So common in fact that Microsoft released a troubleshooter for it.

The software basically said: "You have a problem, we have not fixed it"

Sadly the software didn't tell me how to fix it.

Under those circumstances I do what everyone does, and search for the new error message:

`"Microsoft.Windows.ShellExperienceHost" and "Microsoft.Windows.Cortana" applications need to be installed correctly.`

Sadly, no real new information.

So rather than try all the different options mentioned for a fix, I stopped. It seems as though the problem periodically gets fixed after an update, and periodically comes back.

I had already lost about 2 hours to fiddling with this nonsense.

I went off to look for a workaround. I'm leaving the 'fix' with Microsoft.

My first thought was a replacement start menu.

I try not to augment my operating system functionality too much in case a later upgrade breaks it.

But in this case, it almost seems like the 'start menu' isn't core to the operating system and is itself an augmentation which is prone to breakage. I couldn't see how I could make the situation worse.

Short story - I installed a classic menu replacement and had a working start menu in a couple of minutes.

This also has the option to click through to the Windows start menu (which currently doesn't work) so I'll be able to see if a later Windows update fixes the problem.

Thoughts

  • The search for a fix is important.
  • A user shouldn't have to search for a fix.
  • If troubleshooting tools know what the problem is, then they should tell you how to fix it.
  • Searching for a fix can take a long time.
  • Workarounds are necessary to keep the work going.
  • Workarounds can add risk. Evaluate risk and make a decision.
  • Time-box the search for a fix, then split the team: one part works on the fix, one part looks for workarounds. (I'm delegating the fix to Microsoft; I'm finding the workaround.)

I use workarounds a lot in testing. Particularly when I automate the application. I do end up coding around issues (where the fix is taking a long time) to allow the rest of the execution to reach blocked areas of the system and continue to add some value. I sometimes add code in my workaround to detect if the issue I'm working around is fixed and automatically report that by failing execution.
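That 'fail when the bug is fixed' pattern is easy to sketch. A minimal, hypothetical example - assuming JUnit 4, an imaginary defect DEF-1234, and stub helpers standing in for real automation code:

```java
import static org.junit.Assert.fail;

import org.junit.Test;

public class CheckoutJourneyTest {

    @Test
    public void canReachOrderConfirmation() {
        if (loginButtonIsBroken()) {
            // workaround for DEF-1234: navigate directly to the page the
            // broken button should reach, so execution can still add value
            openPage("/orders/confirmation");
        } else {
            // the issue we coded around appears fixed - fail deliberately
            // so the workaround gets noticed, and removed
            fail("DEF-1234 appears fixed - remove this workaround");
        }
        // ... continue testing the previously blocked area
    }

    // hypothetical helpers - in real code these would drive the application
    private boolean loginButtonIsBroken() { return true; }
    private void openPage(String path) { /* e.g. driver.get(baseUrl + path); */ }
}
```

The important part is the else branch: the workaround polices itself, so it doesn't quietly outlive the bug it was coded around.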

I'm fortunate that I had to learn this early, when trying to load games from tape into my ZX Spectrum as a child. We learned to play the volume and treble controls during game loading, based on the audible feedback and the visual feedback of the loading bars on the screen.

I think the search for workarounds is a set of skills that we develop over time.

Like all skills, the more conscious we are that we are using it, the more we can deliberately improve it.

This morning I spent too long in 'fix' mode. I had work to do; I should have prioritised the workaround.

ProTip: I had to follow the "How to Reinstall the Microsoft Edge Browser" instructions here to get the start menu back (and the Edge browser, which I hadn't noticed was missing because I never use it; I wanted to try the Microsoft Edge WebDriver and couldn't figure out why it wasn't even starting a browser, until I noticed that I couldn't start the browser manually either!)

Friday, 15 July 2016

An Open Answer to an Open Letter

TLDR; Condemn? No. Support? I like parts of the paper. Censor? No.
And now I’m doing something I don’t like: writing a blog post in response to ‘something on the internet’. I’m writing a blog post which I don’t think has any practical value. I warn you now. I don’t think you will find much humour herein either. I have added a comedy punchline at the bottom though, if you want to skip ahead.
Chris McMahon has written an open letter to named reviewers of James Bach and Michael Bolton’s paper on “Context Driven Approach to Automation in Testing”. I reviewed the paper.
In the Open Letter Chris McMahon asks the reviewers three things:
  • condemn publicly “both the tone and the substance of that paper”
  • If you do support the paper, I ask you to do so publicly.
  • “regardless of your view, I request that you ask the authors” … “to remove that paper from public view” because it “is an impediment to reasonable discussion and it has no place in the modern discourse about test automation”
I find a few things personally difficult here:
  • I don’t feel strongly about the paper. There are parts I like, parts I agree with, parts I disagree with, parts I don’t like as much. Much like any paper.
  • I don’t like censorship
I’m more annoyed by the “censor this” call to action than by the paper and the surrounding twitter storm.
I view censorship as an absolute last-ditch, we-have-run-out-of-all-other-options action for tackling a most heinous, world-threatening situation. That’s not how I view the paper.
Normally I ignore these public ‘add your voice to my voice’ calls for action. But since I was named, and since I was offended by the notion of censorship I wrote this post.

Background


Chris McMahon has written a review of the paper on his blog. He doesn’t like it.
In the paper James and Michael provide 3 case studies. Case studies 1 and 2 show some exploratory testing with tools being used. I thought these were fine.
Case 3 is an example of how not to automate something. Chris didn’t like it. I didn’t particularly like it either, but I think I had different reasons than Chris did. I think everyone can (and should) write ‘case studies’ about what they tried, what worked, and what failed. I would prefer to have seen more analysis of the reasons for the different steps in the case study, tying them back to the tool selection criteria that James and Michael set out earlier in the paper. And I think some of the conclusions were overly generalised. But it’s not my paper and James and Michael can write what they want.
And…
If I were trying to automate the GUI of the application that James and Michael chose to automate, I would probably have started with AutoIt or AutoHotKey (Chris disagrees and seems to dislike AutoHotKey). I have used both AutoIt and AutoHotKey in the past. I have even written applications in AutoIt, and I wrote a Java library around the AutoIt DLLs.
I would have tried AutoIt or AutoHotKey first because I was creating a short-term hack to get something done.
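For illustration, a minimal sketch of that kind of short-term hack - assuming the autoitx4java wrapper (one Java wrapper around the AutoItX COM DLL) and the JACOB bridge on the classpath, with notepad.exe standing in for the application under test:

```java
import java.io.File;

import com.jacob.com.LibraryLoader;

import autoitx4java.AutoItX;

public class ShortTermGuiHack {
    public static void main(String[] args) {
        // JACOB bridges Java to the AutoItX COM DLL
        // (register AutoItX3.dll with regsvr32 before running)
        File jacobDll = new File("lib", "jacob-1.18-x64.dll");
        System.setProperty(LibraryLoader.JACOB_DLL_PATH,
                jacobDll.getAbsolutePath());

        AutoItX autoIt = new AutoItX();
        autoIt.run("notepad.exe");
        autoIt.winWaitActive("Untitled - Notepad");
        // drive the GUI by sending keystrokes - crude, but fine for a hack
        autoIt.send("a quick automated smoke check");
    }
}
```

Crude, but for a short-term hack, crude is often enough.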
If I were writing code to automate it longer term I would have investigated the Appium desktop drivers (winappdriver or winium), possibly White or Sikuli. I don’t know.
As James and Michael say in their paper, automating “requires learning not only about the application and the tool, but also about how they will interact”. I have not had much success automating Qt apps in the past, so I try to find alternative approaches. I viewed case study 3 as an investigative approach.

Reviewer, not Editor, not Endorser


James and Michael add the names of reviewers to their paper, regardless of whether they found the comments useful or not. Thanks James and Michael for asking me to review the paper and acknowledging the time I spent creating comments (I mean it. Thank you.).
I review a lot of material that is sent to me. I don’t always receive acknowledgment. I don’t seek acknowledgment. I review it as a favour to the person asking. They then decide what they do with the review comments. Sometimes I review things that I’m not asked to comment on, again the person receiving the comments can do what they like with them.
Some people might think that reviewing the paper means Endorsement. It doesn’t.
If I was an editor, and the paper was appearing in my publication, then it might well mean endorsement, and if I didn’t endorse it I would probably have to publish it with a caveat that ‘this paper does not represent the opinions of the publisher and blah de blah blah’. I didn’t edit it, or publish it. The final paper and all the benefits or otherwise that people derive from it are credited to James and Michael. It is their paper. They can write what they want.
You don’t get to see the comments I sent to James and Michael. And there is no reason that you should. If I lambasted the paper and provided 12 pages of hate fueled rhetoric written using cut out letters from newspapers, or had written a perfume scented love letter in my best penmanship using a quill that I handcrafted from a local peacock feather, you would never know. Either way James and Michael would thank me on their paper for the review comments.

My “Open Letter Answer”


I wrote a blog comment twice but the Blogger-hosted blog seems to have a system interactional issue with some comments. So I posted here. And that was the reason I had to write this post. And I still feel sullied for having to do so. Are you enjoying this post so far? You might want to skip to the end for the comedy punchline.


My open letter answer to Chris below:


“I ask you to join me in condemning publicly both the tone and the substance of that paper.”
No.
“Condemn - to express complete disapproval of.”
I don’t think so.
James and Michael asked me to review the paper. I read it. I wrote comments. I sent the comments back to James and Michael. Some of my comments they took on board; some of them they chose not to (or the paper does not exhibit all the types of changes I described). That is their right. They wrote the paper. It is theirs.
I’ll point out some features of the paper that I agree with. The simple fact that I agree with some of it means I can’t condemn it:
  • “There are many wonderful ways tools can be used to help software testing.”
  • Tools applied incorrectly add waste
  • “Let’s break down testing further…” section
  • “Let your context drive your tooling”.
  • Case 1 and Case 2 are better than I remember them
Edit: I wrote more words here, but had to edit them out to fit the max comment length of this blog. Mainly I’m editing out longer examples of things I thought were good, and some I thought were bad, but it doesn’t actually matter what I thought. People can read it and make up their own mind.
All of the above was written to say:
  • I can’t condemn it, because there are parts I agree with
  • The parts I disagree with, I can ignore, and I don’t have to condemn
  • I gave comments to James and Michael; they may have reworked some of the text because of it, some of it they may not have. It’s their paper; that’s their choice.
I don’t seem to take as much offence at the paper as you do. I don’t feel that strongly about it.
“If you do support the paper, I ask you to do so publicly.”
I think there are useful parts in this paper. I think some parts are less useful.
There are parts I liked. Some parts I didn’t.
There are parts I agree with. Some parts I don’t.
Dear blog reader: as with everything you read, read it yourself and make up your own mind. My views of this paper are not important. Only your views of the paper are important. If you feel you are not qualified to ‘pass judgement’ on the paper, then don’t. Read it for value.
“And regardless of your view, I request that you ask the authors of the paper bearing your names to remove that paper from public view as well as to remove the copy that Keith Klain hosts here. For the reasons I pointed out, this paper is an impediment to reasonable discussion and it has no place in the modern discourse about test automation.”
No. Absolutely not.
I will not knowingly engage in, or condone, censorship.
No.
The fact that I was asked this is the reason I responded at length.
“this paper is an impediment to reasonable discussion”.
Hardly.
You pointed out what you thought was wrong with the paper. I see nothing in your comments that demonstrates “this paper is an impediment to reasonable discussion”. On the contrary, having pointed out the flaws, you can then go on to say what you think people can do instead. That is a prompt to reasonable discussion, not an impediment.
“it has no place in the modern discourse about test automation”. People are entitled to describe their experiences of automating software, even if they describe situations where they failed in the process. I might prefer to see some different conclusions drawn, or different generalisations made. But it’s not my paper.

Strong Feelings


Clearly Chris feels strongly about the paper. I don’t feel as strongly about the paper as he does.
When I do encounter ‘things on the internet’ that I feel strongly about, I try to create, or link to, material that offers alternative choices and demonstrates alternative views, which might open up options for myself or my readers.
I don’t think I ‘attack’ as much as I used to. And I think I’ve taken down a lot of the old ‘book reviews’ I wrote which did that, since I don’t think they added value.
If I read a criticism, I like to read what the author thinks I should do instead. Then I can see balance.

People Read Differently


People will read the paper differently.
I read it initially to pass on comments to James and Michael, so I made the points above (in more detail), along with some other points and some very minor points. I don’t want to write them all out here because of ‘reviewer’/‘writer’ confidentiality: that sacrosanct bond between a writer and their reviewer. The comments were for their eyes only. They chose what to act on, and what to discard.
After publication, I saw the paper had changed, and I read it to see what I could take from it. And there were parts of the paper I found value in:
e.g.
  • “There are many wonderful ways tools can be used to help software testing.”
  • Tools applied incorrectly add waste
  • “Let’s break down testing further…” section
  • “Let your context drive your tooling”.
  • Case 1 and Case 2 are better than I remember them
And there are more than that, but you can read the paper for value and see what you take from it.
I did not read it to critique it or pass judgement. The paper didn’t trigger enough of a reaction in me that I thought I should massively promote the paper, or reduce the impact of the paper. I thought it would promote itself given the reputation of James and Michael. And the views in the paper are those of James and Michael. I didn’t see that I should try and enforce my views on top of that. I have blogs and conference talks and articles for that.
Some people read with a pre-existing view of the authors, either positive or negative. Personally, I will happily read a William Shatner novel, regardless of its perceived literary value by others. And there are some authors that I would never read again - regardless of their ability to sell millions and spawn movie after movie.
And I’m sure you can identify more ways of ‘reading’, so given the controversy that seems to have surrounded the paper I suggest you pick a reading approach prior to reading.

Related Reading


I have some papers and conference talks related to this topic.

The Paper


James and Michael have released the paper. For free. For anyone to read.
They continue to edit it. Possibly for typos, possibly to clarify their position in the paper.
You could read it. You might disagree with it. You might find value in it. You might want to send comments to James and Michael to clarify sections. You might want to write a critique yourself; you might come out in favour of the paper, you might not.
Your choice. Your reaction. Your learning.
If you want to build a ‘movement’ against it, that’s your choice. Leave me out of it.

Humour Section - The Punchline


I will quote one item from my review, I hope James and Michael forgive me for breaking the hallowed state of “reviewer/writer” confidentiality.
I said "I don’t really see anything controversial in the paper so you could publish now if you want, and nothing bad will happen."
I honestly thought that if people read the paper, they would read it; concentrate on the parts they liked, and ignore the parts they didn’t.
How naive was I?

Friday, 17 June 2016

Register for Free Risk/Exploratory/Technical Testing Webinar on 28th June 2016

QASymphony have kindly invited me to present a webinar for them entitled "Risk Mitigation Using Exploratory and Technical Testing".

You can register to watch the webinar live. If you can't make it, then register and you'll be sent details of the free replay.

I've talked about Technical Testing and Exploratory Testing before. This time I want to approach it from the perspective of risk.

The blurb says:

"When we test our systems, we very often use business risk to prioritize and guide our testing. But there are so many more ways of modeling risk. If business risk is our only risk model then we ignore technology risks, and the risks that our processes themselves are adding to our project. Ignoring technical risk means that we don't improve our technical skills to allow us to model, observe and manipulate our systems at deeper levels and we miss finding important non-obvious problems. Too often people mistakenly equate 'technical testing' with automating because they don't model technical risk. In this webinar we'll explain how to model risk and use that to push our testing further. We'll also explain how to avoid some of the pitfalls people fall into while improving their technical testing."

I'm still working out the details, but I think I'll cover the following (and more):

  • What do I mean by risk?
  • Risks other than business risk.
  • How to identify risk?
  • Using risk to improve our process:
    • What risk do our tools introduce?
    • What risks does our process introduce?
  • Risk mitigation
  • Manifestation/Detection
  • What is technical risk?
  • How to use this to drive my testing
    • Risk as a coverage model
    • Risk as a derivation model
  • ...
Clearly I'm not using a statistical model of risk, or attempting to quantify it numerically. But I will be trying to explain how risk underpins the testing I conduct, and the processes I follow.

If I miss out any of the above for some reason, and you register, then you'll be able to ask about the topic in the Q&A section.

Hope to see you there.

Register for the free webinar

Monday, 13 June 2016

Text Adventure Games for Testers

TL;DR Announcing RestMud, a free text adventure game designed to improve your technical testing skills.


I love text adventure games. Playing. Writing. Programming. Love 'em. And now I have created a text adventure game for testers to improve their technical testing skills.
  • I wrote about text adventure games before, in the context of keyword driven automated execution.
  • I studied AI, compilers and interpreters because I wanted to understand Text Adventure Games
  • I've written more text adventure game parsers than I have adventure games. All lost to the mists of time in my cave of lost 'C' programming code.
  • I once wrote a wonderful text layout algorithm for text adventure games that only worked on the Mono screen resolution of the Atari ST - it looked great. 
  • I wrote a fantastic complex sentence handler with verbs, nouns, adverbs, pronouns, more nouns, conjunctions etc. Brilliant parser. But no game.
  • I once, with a friend of mine, started to pitch a point and click western horror sci-fi adventure game to a games publishing company - just as point and click adventure games faded and died as a genre.
  • I wrote small games in "The Quill", GAC, STAC, and others.

And now.

I unleash upon the world.

"RestMud"

(That was dramatic by the way.)


A Text Adventure for the modern world, and to help improve your testing skills.

You'll have to:
  • explore
  • take things
  • hoard things
  • explore a maze
  • map the world
  • pay attention to clues
  • use the Browser Dev Tools
  • amend URLs to access commands not available from the GUI
  • remember things
  • possibly use REST tools (although for the Single Player Basic Test Game this isn't required)
Wow. I mean wow. Wow. Are you wowed yet? I'm wowed.

I played the test game again this morning and scored 1190 - but I made a mistake. Can you score higher than that?

Try it for yourself. Download it and see. You'll need Java 1.8, instructions are in the zip file.


You can use comments to let me know of any issues, or brag about your success, or whatever else you choose to use comments for. But now, brave adventurer... Go! Do some adventuresome stuff.

Tuesday, 7 June 2016

Some "Dear Evil Tester" Book Reviews

After publishing “Dear Evil Tester” I forgot to set up a Google Alert, but I set one up a few days ago and this morning it picked up a ‘book review’.
As far as I know, there are two book reviews out there in the wild - one from Jason, and one from Mel.
I never know quite what I'll find when I click on a book review, and having written a few reviews myself, I know how cutting they can be if the reader didn't get on with the book.
Fortunately both Jason and Mel seemed to enjoy the book.
Jason has a lot of book notes and summaries on his web site. I've subscribed to his RSS feed now, so I'll see what other books and quotes have resonated with him.
Mel wrote the review in a “Dear Evil Tester” letter style, which included kind words about the book:
This book had me laughing at my desk and my coworkers wondering if I had gone around the bin. I was enchanted by the sarcasm and wit, but drawn to the practical advice you had in your answers given with the Evil Tester persona.
Mel also recommended a few other books to readers of her blog. I'll let you jump over to her site and read the review to find out the other books she mentions.
Hopefully I'll find new book reviews popping up occasionally; of course, the sales and marketing department here at EvilTesting Towers might selectively choose which reviews we mention.
Thanks Mel and Jason - and also thanks to those that reviewed on Amazon: Isabel, Lisa and B. Long.

Thursday, 19 May 2016

National Software Testing Conference 2016

On 17th May, 2016 I presented “The Art of Questioning to improve Software Testing, Agile and Automating” at the National Software Testing Conference.
It was only a 20 minute talk. So I had to focus on high level points very carefully.
You can read the slides over on my Conference Talk Page. I also wrote up the process of pitching the talk, which might provide some value if you are curious about the process that goes on in the background for an invited talk.

Golden Ticket and Conference Programme


The presentation draws on lessons learned from various forms of fast, brief and systemic psychotherapy, with a few simple points:
  • Why? is a question that targets beliefs
  • How, What, Where, When, Who - all target structure and process
  • We all have models of the world and our questions reflect that model
  • Answers we give, reflect our model
  • Responses to answers give information on how well the models of the question asker, and answering person, match up
  • Testing can be modelled as a questioning process
  • Improving our ability to ask questions improves our ability to test, manage, and change behaviour.
You can read some early work I did in this area (2004) in my ‘NLP For Testers’ papers.
The conference is aimed at managers so I thought that psychological tools might be more useful than Software Technology Tools.
I spent a lot of time between talks speaking to people and networking, so I didn’t get a chance to see many talks. But I made some notes on those I did get to see, and I will have to think about them.
A few of the talks overlapped - particularly Paul Gerrard, Daniel Morris, and Geoff Thompson. At least, they overlapped for me.
Paul Gerrard from Gerrard Consulting provided a good overview of how important modeling is for effective testing, and he mentioned his ‘New Model of Software Testing’ to illustrate how the ‘checking’ or ‘asserting’ part of testing is a very small subset of what we do. Paul also described some work he is doing on building some tool support for supporting exploratory testing. I’m looking forward to seeing this when Paul releases it.
Daniel Morris overlapped with Paul when he was describing various social networks and online shopping tools. Daniel was drawing attention to the multiple views that social networks and shopping sites provide. They have a rich underlying model of the products, the customers, the shopping patterns, what people buy when they buy this, and the navigation habits of the users. All very much aligned to the content Paul described and the tool support that Paul was building.
Both Daniel and Paul described some of the difficulties in visualising or collating the work of multiple testers, e.g. when testing, how do you see what defects have already been raised in this area? If we were navigating a shopping site, we would see that on screen as we navigate, along with ‘*****’ star reviews of ‘how good is this section of the application’. I found value in this because I’m always trying to find out how to better visualise and explore the models I make of software, and I found interesting parallels here, and obvious gaps in my current tool support.
Geoff Thompson from Experimentus described some ‘silent assassins’ for projects and stressed that companies and outsourced providers seem to be moving to a focus on ‘cost’ rather than ‘quality’. Geoff also provided different views of project progress and cost, again demonstrating that the ‘model’ of the project can be represented in different ways.
I also saw David Rondell provide an overview of various technologies and the rate of change that testing has to deal with. Container based technologies and rapid environment configuration tools like Docker, Mesos, Ansible, Vagrant, Chef, etc. were mentioned in many of the talks. Very often we don’t have time at a management level to really dive deep into these technologies but it was good to see them being discussed at a management level. (There is a good list of associated technologies on the XebiaLabs website)
The Gala in the evening gave us a chance to network further and I received an excellent masterclass in Sales from Peter Shkurko from Parasoft, it is always good to augment book learning with experience from real practitioners. I asked Peter a lot of questions over dinner and Peter’s experience helped me expand my model of sales with tips I hadn’t picked up from any Sales book or training.
For any conference organisers - if you can get the vendors to present not just ‘their tools’, but also their experience of ‘selling those tools’ (particularly selling software testing, or selling software tools), I think participants would find that useful.
I tried to pay Peter, and the rest of the table back by contributing testing knowledge and experience in the Gala Quiz. (We got lucky because there was a 4 point value WebDriver question that we were able to ace.)
Our combined sales and testing knowledge meant that our table won the Gala Quiz and received ‘golden tickets’ which will grant us access to the European Software Testing Awards in November. Because sales, marketing, training, development and testing can all work together.