Saturday, 7 November 2015

Notes from the Eurostar Mobile Deep Dive 2015

I visited Eurostar 2015 and popped in to a few sessions, but was mainly there for the Mobile Deep Dive.

Alan Page started with an opening keynote on using instrumentation at the AOSP level to provide usage and defect information automatically for an emulation layer that Microsoft had been working on. He also mentioned that his team aims for “More value. Better Quality. Every week” which sounds like a perfectly splendid goal to remind yourself of every morning, and revisit every evening to reflect on how you are doing.

Julian Harty’s talk built on this with an in-depth exploration of how analytics can be used to help the testing process. I particularly liked the example where a study had examined the ‘phones used by the people most likely to leave a review’ on the app store, and then used those phones as the subset of devices for on-device testing.

Karen Johnson asked (and I’m paraphrasing from memory rather than quoting) "do you analyse this as a human or a machine? Or both?" Only about three of us put our hands up for both. Karen provided a walkthrough of an example thought process for approaching the scenario and condition creation involved in constructing an automated walkthrough of a mobile app she had worked on. Karen also reminded us that systems change, so the automated solutions we create will also change; don't get hung up on the permanence of an artefact, and remove it when it ceases to add value.

I popped in to the Wearables talk by Marc Van’t Veer and saw him using iOS-to-QuickTime streaming to capture the camera of his iOS device and take pictures of testing on his smartwatch. He was also using Mobizen to stream the display of his device onto a desktop for real-time screen capture. Always good to see people using technological workarounds to improve their testing.

Jeff Payne provided the funniest talk of the deep dive conference which was also filled with valuable information. I made notes that I need to learn how to examine the memory of a running mobile phone, and examine the temporary files, cache files and databases on the phone itself.

Jeff’s quote “Test where easiest and most effective” struck a chord since it also relates to knowing where you are going to get the most benefit from testing a risk because you understand where and how you can test it, rather than just testing on every device.

I found this day a useful addition to the Eurostar Conference and was fortunate to have a whole series of good discussions during the day, and in the evening.

Oh, and I presented as well ‘Risks, Issues and Experiences of Technical Testing on Mobile’. [slides]. The basic takeaways were:
  • We currently test out of fear, we could test to address technical risk
  • learn to understand the technicalities of your application or website to target the technical risks of cross platform usage when deciding what to test, rather than testing ‘everything’ on a subset of devices out of fear
  • poor ergonomics add risk to my test process, so I add keyboards; that workaround might itself add risk to the test process. In fact all such workarounds add risk, so we need to technically understand and communicate that risk to manage it in the team
  • the fact that we are testing on device at all, should be managed as a risk, because it means we are building something that we don’t fully understand or trust
  • build an app which uses all the libraries and flows of your main app, but has no ‘GUI’; instead it self-checks the libraries and interactions and reports back to a main server. We could then deploy this ‘test app’ to multiple cloud devices and quickly receive compatibility information without the lag of manual interaction with each device
  • Focus on pain in your process and remove it. e.g. typing on devices is error prone, and sometimes I don't have back and forward cursor keys, so fixing the errors is painful. I could add a custom keyboard to the device, or I could add a physical keyboard. By addressing the 'pain' in my process I introduce a technical workaround that might introduce risk, but it makes my process easier, and that risk is something we can investigate, discuss, assess and mitigate. Pain in the process is anything that gets in the way and stops you being effective - you may be so used to it that you don't even notice it - that's dangerous.
Zeger created a sketchnote during the talk, but I think I made his life difficult by talking fast and cramming in a lot of material.

Tuesday, 14 July 2015

Lessons learned from Black Ops Testing - Testing a JavaScript library


James Lyndsay suggested JS-Sequence-Diagrams as a target for Black Ops Testing.
I'm a big fan of "text to diagram" and "text to document" systems so I was hoping this would result in a good target. And it did.
If you haven't seen a "text to diagram" type system before then it basically takes text like:

Title: Simple Diagram
B-->A: done

And renders it as a sequence diagram image.

I often use Graphviz for this type of process since I tend to create more generic diagrams, but if you want to use Sequence Diagrams then JS-Sequence-Diagrams offers a simple grammar and effective output.
For the Black Ops Testing webinars, the nerve-wracking thing about testing these small systems is:
  • is it rich enough?
  • will we find enough?
  • will it illustrate enough thought processes and approaches?
And this library was, and we did, and it could.
JS-Sequence-Diagrams describes itself as "A simple javascript library to turn text into vector" diagrams. So it presents itself as a library.
Testing a library offers challenges for interactive testing, because you need something to wrap the library to make it usable: code, or a GUI of some form.
Fortunately JS-Sequence-Diagrams has several GUIs to offer us: an online demo page, and a test.html page in the source.
Since the demo page is available, I started my testing with that.

Do look yourself

You might want to have a quick look at the page and see what you think and find before continuing, otherwise you risk my notes impacting your ability to view the library and demo page without bias.

Go look now: JS-Sequence-Diagrams

You wait.
Time Passes...
Gandalf opens the round green door.

You wait.
Time Passes...
Gandalf goes east.

You wait.
Time passes...

You wait.
Time passes...
Thorin waits.

You wait.
Time passes...
Thorin says " Hurry up ".

Initial Thoughts

I looked at the demo page JS-Sequence-Diagrams and made a few notes on what it was built with and what other libraries were in use. My initial thoughts were:
  • Use as a tool interactively, with default page
  • Use as a library
  • Explore configuration scope by using the tool interactively with a different set of library components e.g. Raphael, lodash, no jQuery
  • JS Heavy so risk of JS Compatibility errors
  • Run and review the QUnit to identify coverage gaps and risks
  • Create a new GUI page to control loaded components and minimise risk of custom editor interaction obscuring bugs or impacting testing
  • compare the 'standard' for sequence diagrams against the implementation
You can see that I have identified some risks:
  • JS compatibility
  • Demo page has a JS editor which might conflict
  • Demo page might obscure testing
And I could also try and use the tool as a library.

Initial Interactions

My first few attempts at creating a simple diagram triggered an error because I found that some of the characters I was using were not allowed.
  • "," is not allowed in an actor name
  • "-" is not allowed in an actor name
  • ":" and ">" are not allowed in an actor name
I also found that some of the characters were only rendered in the 'simple' view, and not in the hand-drawn view. i.e.
  • ¦
  • \
So I made a note to my future self to explore character sets and see what would render and what would not.
I also wanted to find a way of minimising the impact of the editor on my testing, i.e. if I found a bug, how would I know that it was a bug in the library and not a bug in the editor, or in the interaction between the library and the editor?

Interacting with the library without any harness

We already have all the code we need in the browser, after the demo site has loaded, to interact with the library directly. As a JavaScript library, it has already been loaded into the page, so if we open up the developer console and type:
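(the call below is the one captured in the lessons-learned outline at the end of this post; it assumes the demo page's markup, with jQuery loaded and a '#demo' element containing a '.diagram' container)

Diagram.parse("A->B:\nB->C:").drawSVG($('#demo').find('.diagram').get(0));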


We will see the diagram visualising the above relationship in the browser. So we really don't need a harness or a framework to create simple interactions with the library. And we know that we can bypass the GUI if we need to.
I don't know how many testers actually use the developer console to interact with the JavaScript in the applications they test, but I find it a useful skill to have. It requires less JavaScript knowledge to do this than it does to code a JavaScript application, so you can get started pretty quickly.

Time Passes

You can see the scope of testing I ran through in the associated notes.
This post describes, at a higher level, some of the lessons learned and approaches taken, rather than dropping down to explain all the issues found and tests executed.
I made notes of the issues I found, but I realised after a while that I should really have a small example demonstrating each issue. This tool is perfect for that: since an issue description is text, I can embed the text of the diagram in the issue itself.
So I revisited all the issues I found and added examples for each.

Cross Browser

I then realised that I could take all the examples I had and check them against different browsers, if only I had a page that I could load into each browser to render all the examples.
So I set about building an html page that I could add each of my examples to, and have them render on loading. I guess this could be called a cross browser testing tool, specific to this particular library.
I wanted something where I could add each of the diagram text files without too much editing, and without any additional coding each time I added an example.
So, despite my JavaScript skills not being the best in the world, I interactively built up a single page where I could add all my examples in the form of:

<textarea class="exampleSource">
Title: Issue: char backslash "\" is represented a "/" in hand drawn\n actor name but backslash "\" in simple
</textarea>

Then I have JavaScript at the bottom of the file which, when the page is loaded, finds all those textarea blocks and, using a template I have in the html file, adds them to the DOM. They are then rendered using both the simple and the hand drawn versions.
I also made the template show the diagram source to make it easy to copy and paste into an interactive GUI if I needed, and act as documentation.
So not only do I have a page that I can load into any browser and check for compatibility, I don't need to do anything special to explore the two rendering options.
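A minimal sketch of that kind of rendering code, assuming the 'simple' and 'hand' theme names that js-sequence-diagrams documents (the page structure here is simplified, not the exact page I built):

// find every example in the page, then render it with both themes
document.querySelectorAll('textarea.exampleSource').forEach(function (example) {
  var source = example.value;
  var container = document.createElement('div');
  document.body.appendChild(container);

  // show the diagram source alongside the render, for easy copy and paste
  var sourceView = document.createElement('pre');
  sourceView.textContent = source;
  container.appendChild(sourceView);

  // render the same source with both rendering options
  ['simple', 'hand'].forEach(function (theme) {
    var target = document.createElement('div');
    container.appendChild(target);
    Diagram.parse(source).drawSVG(target, {theme: theme});
  });
});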

When I viewed my examples on the page I realised that I needed to use the title attribute more effectively to describe if the diagram was an 'Issue' or an 'Example'.
This made the page easier to scan and more self descriptive.
By loading this page I could also see that one of the issues I had found did not appear here, but it did appear when the example was used interactively in the demo GUI.

Title: Issue: can not mix alias and name in same diagram
Participant ActorA as A

I had evidence that the risk I identified earlier, regarding the possible impact of the JS editor on the drawing library, had actually manifested.
Subsequent investigation with Steve and James revealed that, interactively:
  • this error didn't manifest in Firefox
  • this error only manifested in Chrome
  • and only manifested in Chrome on Windows, not on Mac
So, in addition to being an editor interaction issue, it was also a cross browser issue.
When testing, we have to make a call regarding how far we diagnose a problem. I did not diagnose the problem any further than this - so I don't know what causes it. I suspect it is probably a line endings issue, but I will leave additional investigation up to the development team, should they consider it worth fixing.

Interactive Testing

The test.html file in the source allows interactive testing without the risk of the JS editor impacting the library, because it uses a simple textarea as input.
I decided to use this for interacting with the library.
But rather than download all the JS files and store them relative to the test.html file, as the current implementation required, I wanted to pick up the relevant libraries from a CDN. That way I could switch them out easily and I wouldn't need much setup on my local machine.
I discovered that the JavaScript ecosystem has a centralised approach to this, much like Maven in Java, so I used a public CDN as the source for the libraries in the code.
I then incrementally amended the test.html code such that it:
  • rendered both simple and hand drawn graph versions
  • reported any syntax errors on screen (a rough sketch of this follows the list)
  • doesn't require jQuery (since that is documented as optional on the main library page)
  • maintains a history of non-syntax error diagrams in the page to easily see what testing I did and maintain a record - I can save the page as a .html file to retain a 'record' of my testing session
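One way to report parse errors on screen is to wrap the parse call in a try/catch. This is a minimal sketch of the idea, not the exact explorer.html code, and the 'errorDisplay' element is an assumed placeholder:

function renderDiagram(source, target, errorDisplay) {
  try {
    Diagram.parse(source).drawSVG(target, {theme: 'simple'});
    errorDisplay.textContent = '';          // clear any previous error message
    return true;                            // only successful diagrams go into the history
  } catch (e) {
    errorDisplay.textContent = e.message;   // show the syntax error on screen
    return false;
  }
}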
I did not put the effort in to make my code cross browser, so it had incompatibilities on IE that I didn't attempt to fix. It also had bugs on Firefox that I didn't notice until Steve tried to use it in his testing. (Note: I have since amended explorer.html to work on Firefox, IE and Chrome.)

Again, this didn't require great JavaScript skills. I built it incrementally, and cannibalized code from the demo page and the test.html page.
  • What else would you have added if you were interactively testing this library?
  • Feel free to amend the code to add those features and see how you get on.
Essentially, I crafted two tools to help me report on my testing and interact with the library.

Cross Browser Testing

As I was testing the library, a tool came through my newsfeed that I had not encountered or used before: BrowseEmAll.
This claims to be a single GUI where you can run multiple browsers for cross browser testing.
I'm generally suspicious of these types of tools, but I did not want to appear churlish and risk the wrath of Earth Coincidence Control Office.
According to the developers, BrowseEmAll uses the browsers' 'embedded' mode, so it uses the same rendering engines and JavaScript engines as the browsers, although the developer tools are different.
Interestingly they have a WebDriver implementation for it, which is still in its early stages, but it might be a useful add-on to an existing local grid setup for supporting cross-browser testing against earlier versions of the rendering engines, rather than maintaining a grid of lots of old browser versions. I have yet to try this, however.
I haven't used BrowseEmAll enough to convince myself that I could use it in preference to actual browsers, but it did allow me to side-by-side demonstrate that the interactive issue I found earlier was only in Chrome, and not in Firefox.

On my notes

You can see different versions of my notes:
I tend to write most of my notes in markdown format now, so even in Evernote I write in markdown. I can then feed this through a markdown formatter to generate pdf or html.
Markdown is essentially a "text to document" process, where I write in pure text, and then another system parses the format and outputs html or pdf.
In Evernote, I write in markdown, but I also have the ability to add images without thinking about it. This allows me to add 'comments' into my text which don't appear in the final output.
This is also why I like the "text to diagram" systems. I can embed metadata in the form of comments; this information is not rendered, but it is useful for a human reading the text later.
In the past I've used Graphviz on site to document information I've received, and I add comments into the Graphviz file for where I found the information, todos, gotchas, risks etc. None of this appears in the rendered image, but it is very useful for me when building a model of the system I'm working with.
I do the same thing in the examples for js-sequence-diagrams:

Title: This creates a duplicate ActorA
Participant ActorA
# this fails on chrome on windows
# but works with url on firefox
# works in test app in chrome
# works in chrome on mac

The comments above provide metadata about how to replicate the issue. I also use the features of the tool to help me.
And if you find this issue in the Evernote pdf you will also see that I've added a screenshot, which was for my information, rather than the markdown.
I particularly like the "text to X" approach because it allows for:
  • version control
  • diffing between versions
  • meta data and comments in the source which are not in the render
  • instantly at least 2 visualisations of the model (text, render)
  • often multiple visualisations with rendering parameters, which can help you see the model from a slightly different perspective e.g. different layout rules help spot different spatial relationships


Again, a surprising amount of opportunity for reflection on 'how to approach' something that, at a surface level, seems very simple.
I hardly touched my initial set of options about how to approach testing so there still remains a lot that I could continue to pursue with the testing of this library. And if you watch the webinar that this text relates to, you will see how differently James, Steve and Tony approached the topic.
And you can find an outline summary of my lessons learned below:
  • Library execution from console
    • Diagram.parse("A->B:\nB->C:").drawSVG($('#demo').find('.diagram').get(0));
  • Markdown notes writeup
    • Can read as .txt
    • Can easily convert to pdf via
    • Images embedded within text are not rendered (possibly good for adhoc image tracking)
    • like a text to diagram, a text to document allows embedded comments (images, html comments <!-- this won't be seen --> )
  • Tool as documentation
    • since the diags support # as a comment we can use that to our advantage when documenting the testing or raising defects e.g. urls, environment, versions - all stored in the diagram txt
    • Use the title as the issue name or test idea
    • Create minimal source to recreate the issue and embed in defect report
  • Text to diagrams
    • Fast to create, autolayout
    • Can tweak for 'better' layout e.g. \n, aliases and naming
    • Learn the nuances of the tool
    • Version control and compare previous versions
    • Easier to auto generate the diagram source than a diagram equivalent
    • Use the 'comments' functionality for meta data and notes
    • Human readable as text, the visual render is a different 'view'
  • Environment Control important - see the cross browser (Chrome/Firefox Participant issue)
    • What version is the demo site running?
    • Does the editor interfere? Cross-Browser Risk
  • Tester has different view of tool support required from testing
    • compare test.html with "explorer.html"
    • shows both simple/hand graph at same time
    • tracks history of usage
    • minimal libraries to avoid risk of interference
    • (but mine is buggy on IE because I don't know JS as well)
    • display parse errors on screen
  • Cross browser testing
    • testing what? GUI, Rendering, Parsing?
  • Observe, Interrogate, Manipulate
    • Console
    • JS Debugger - harder with minimised code (use pretty print in console)
    • Network Tab

Tuesday, 12 May 2015

Lessons Learned Testing QA Mail for Black Ops Testing

On the Black Ops Testing Webinar of 11th May 2015 we tested QA Mail. You can find a write up and watch the webinar replay over on

This post expands a little on how I approached the testing and what I learned.

A long time ago, on a job far away, my team and I had to test website registrations and emails on various account events.

For our interactive testing this was fairly simple: we could use our company accounts or Gmail accounts, reset the environment, and then re-register again. For the automation we needed a different solution. We didn't want to use Gmail 'dot' and 'plus' addresses because we felt that the number of emails might put us in breach of the Gmail terms and conditions.

We started creating ad hoc mailinator email addresses for the automation, and I created some abstractions to read and delete the emails. But mailinator introduced a captcha part way through our testing and our automation failed.

I created a fairly simple appengine tool which acted as an email sink, and I wrapped a simple API around it for our automation. AppEngine has changed so much since then that the code I wrote will no longer work. But QA Mail does a very similar job to the AppEngine code that I wrote.

It provides a simple GUI and simple API wrapper around a polling daemon which reads a mailbox folder and brings in emails.


I approached the Black Ops Testing Webinar as a learning experience.

  • I didn't know too much about SMTP or email
  • I wanted to experiment with API Automation from a zero to 'working' state as fast as possible
  • I wanted to experiment with sending emails from Java
  • I wanted to know what tool support I would need for interrogating and comparing emails

Automation Abstractions

I started with the automation. And first off I wanted to de-risk it by making sure I could send emails.

I had a quick try of the Javax mail libraries, and quickly decided to find an abstraction library to cut down on the time I required to get up to speed and sending emails fast.

I started using Simple Java Mail

With a few basic emails sent, I started to work on the API abstractions for QA Mail. You can see the various twists and turns I took via the history on github

I created abstractions at a few different levels:

  • A QA Mail Rest API Call abstraction
  • A QA Mail Domain abstraction
  • A Mailbox abstraction

These all work at a similar level so they overlap a little.

This allowed me to create basic automation fairly simply.
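The actual abstractions are written in Java (the github history shows how they evolved), but purely to illustrate the layering idea, here is a rough JavaScript sketch with entirely hypothetical endpoint paths. The point is that the low-level abstraction knows about URLs and verbs, while the mailbox abstraction speaks the domain language:

// low-level abstraction: knows how to make the REST calls (paths are hypothetical)
class QaMailApi {
  constructor(baseUrl) { this.baseUrl = baseUrl; }
  getJson(path) {
    return fetch(this.baseUrl + path).then(function (response) { return response.json(); });
  }
}

// domain-level abstraction: speaks in mailboxes and emails, not in URLs
class Mailbox {
  constructor(api, name) { this.api = api; this.name = name; }
  emails() { return this.api.getJson('/mailbox/' + this.name + '/emails'); }  // hypothetical path
  latestEmail() {
    return this.emails().then(function (emails) { return emails[emails.length - 1]; });
  }
}

// usage: assert against the domain object rather than against raw HTTP
// new Mailbox(new QaMailApi('http://qamail.example.com'), 'testbox').latestEmail()
//   .then(function (email) { console.log(email.subject); });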

They lack a way of twisting the API calls in the automation, i.e.

  • make the REST call with a POST instead of a GET
  • add null parameters
  • use badly named params
  • add new params into the calls
  • re-order the params in the calls
  • etc.

I could achieve the above with direct calls using RestAssured, but since they are fairly common requirements when testing an API, I need to identify a different way of building abstraction layers which support 'testing' and not just 'exercising' the API.

In this particular instance I didn't need that flexibility since the API would have thrown 500 errors on invalid calls. QA Mail was written to meet the needs of a single user and graciously released to open source in the event that it might help others.

I did use the API abstractions to test for different combinations of routings e.g. no 'to', multiple cc, and bcc etc.

Unfortunately we were using an older version of the app which did have a bug in this area, so I didn't pursue this line of enquiry long. The bug has been fixed in the main code branch and on the QA Mail demo server.

Testing Emails

After sending a few emails, it became quickly apparent that I didn't really know what I was looking at in the raw email view.

I'm used to looking at HTTP headers, I'm not used to looking at email headers.

Was the email I was seeing correctly rendered?

How could I know?

A lack of oracles for domains we don't know well can make testing harder in the initial stages of building domain knowledge. One strategy I use is to find alternative renderings of the information via complementary or competing tools.

In this instance I used similar tools: mailinator and temp-mail.

Both of these accept emails to anonymous mailboxes and render the email as raw text so you can see the headers.

I saved these as text files and compared the output through winmerge.

I found differences in the headers and had to go look them up to try and understand them. Oft times, many of the headers are actually added by the routing servers the mail winds its way through, so what I'm seeing is not what I actually created as the email.

So I needed to find a way to observe the emails I was sending out in the first place.

For the automation, I found a debug flag on Simple Java Mail which output the raw email smtp session to stdout so I can see the original message, headers and encoding. I was then able to compare this to the output and see what the routing had added, and what might have been added by QA Mail. In the final analysis, nothing was added by QA Mail, it simply sucks out the message after it has flowed through postfix.

For my interactive testing, I discovered the 'Show Original' menu item in gmail. This lets me see the 'raw' email sent to me, and which I'm sending out.

Very handy - I've actually become addicted to looking at email headers now; for most of the emails I receive, I check the headers. I find it amazing how much information they contain about the machines that the email has passed through in its travels. I encourage you to have a look for yourself. Never again will I send an email directly from my own machine - I'll always try and use a server-based tool to avoid giving away my machine's IP address.

Observing the System Under Test

One of the interesting challenges I faced testing this was isolating where issues were introduced.

QA Mail runs on a server, so I can only observe it via logs.

It provides one log by default: the log for the import daemon that polls the mailbox folder.

I ssh'd into the test environment and could 'tail -f' the log file.

This way, when I started testing different email address formats I could see if they were actually hitting the system.

I found that many were not. Valid emails such as "1234--@" and "()<>[]:,;@\\\"!#$%&'*+-/=?^_`{}| ~.a" were not reaching the mailbox that QA Mail polled. They were being rejected by postfix, making it impossible for me to test how QA Mail would handle extreme email formats.

Identifying where processing occurs in an end to end system is a common challenge, and one we should be aware of when testing. So I recommend trying to understand the architecture of the application under test and adding observation points at as many positions in the chain as you can.


Normally when testing emails, I've been more focused on:

  • was an email sent?
  • Did the email render correctly?

When testing QA Mail I had to focus on:

  • Was it pulling in the appropriate information from the mailbox?
  • Was it routing the emails to the appropriate database tables?

And this forced me to consider new ways of observing and interacting with the system.

Thursday, 2 April 2015

Virtually Live in Romania - Technical Testing Webinar to Tabara De Testare

On 1st April I presented Technical Testing to the Tabara De Testare testing group in Romania.

I presented virtually over Google Hangouts. The Tabara De Testare testing group is spread over four cities in Romania, each of which live streamed the webinar to a room filled with their members. I could see the room via the presentation machine's web cam.

We also had 70+ people watching from the comfort of their own homes and workplaces.

Thanks to Tabara De Testare for organising the webinar.

I have released the slides to the webinar on slideshare:

During the webinar I ran through the slides, then provided a short demo of Browser Dev tools supporting technical testing investigations on the demo application.

Dangerously, I then tried to demo proxy tools to help answer a question from the audience.

Clearly, using a proxy tool while conducting a live webinar through a browser isn't the least dangerous option. And lo, I lost the Q&A chat window as a result. But I think we covered most of the questions during the live Q&A which followed.

If you'd like me to 'virtually' attend a testing group that you organise then please ask, as it's easier for me to fly around the world via webcam than it is to jump on a plane, and it means you don't get stung for travel and accommodation costs.

I will edit and upload the webinar to my Technical Web Testing course shortly.

Monday, 26 January 2015

Some API Testing Basic Introductory Notes and Tools

Some applications provide an API. Some websites provide an API. This post provides some information on API testing, since that appears to have consumed a lot of my time in January 2015. As preparation for our Black Ops Testing Workshop I performed a lot of API testing. And coincidentally, the January Weekend Testing session chose API testing as its topic. There should be enough links in this blog to provide you with the tools I use to test APIs.

API - Application Programmer's Interface

The name suggests something that only programmers might use. And indeed an API makes life easier for software to interact with other software.

Really an API provides one more way of interacting with software:

  • By sending messages in an agreed format, to an agreed interface, and receiving an agreed response format back.

APIs tend to change less frequently, or in a more controlled fashion, than GUIs because when an API changes, all consumers of that API have to change as well.

Software tends not to have the requisite variety that a human user exhibits:

  • If you change the GUI then a human can probably figure out where you moved the button, or what new fields you added to the form that they need to type in. 
  • Software won't do that. Software will likely break, or fail to send the new information and so the interaction will break.
If you read this blog through an RSS reader then the RSS reader has used this blog's API. The API consists of a GET request on a URL to receive a response in XML format (an RSS feed).

You, as a user could GET the same URL and read the XML in the browser, but the API tends not to offer the same user experience, so we don't often do that. Or we use tools, like an RSS Reader, to help us.
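For example, you could issue that kind of GET from the browser's developer console. This is a minimal sketch, and the feed URL is a placeholder:

// fetch an RSS feed (XML) and dump the raw response text to the console
fetch('https://example.com/feeds/posts/default')   // placeholder URL for the blog's feed
  .then(function (response) { return response.text(); })
  .then(function (xml) { console.log(xml); });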

Manually Testing an API

Just because the API calls itself a Programmer's Interface, does not mean that all our interaction with the API has to involve programming.

We can issue requests to an HTTP API with minimal tooling:
  • Use a browser to issue GET requests on an HTTP API
  • Use an HTTP Proxy to issue HTTP requests to an HTTP API
  • Use command line tools such as cURL or WGet to issue HTTP requests to an HTTP API
  • Use specific tools e.g. Postman to issue HTTP requests to an HTTP API
Preparation for Black Ops Testing

When choosing software for Black Ops Testing and Training workshops, I like software that has multiple methods of interaction e.g. GUI, API, Mobile Apps/Sites

This way testing can:
  • compare GUI against API, as well as underlying database
  • use the API to load data to support manual testing
  • check GUI actions by interrogating and manipulating the system through the API
  • test directly through the API
Prior to the Black Ops Testing workshop I had tested a lot of APIs, and I generally did the following:
  • Create confirmatory automation to check the API against its documentation. 
  • Manually testing the API using HTTP Proxy tools
    • to create tools and observe responses
    • edit/replay messages and observe responses
    • use the fuzzing tools on proxies to create messages with predefined data payloads
  • Use the browser for simple querying of the API, with Firebug and FirePath to help me analyse the responses
During the run up to the Black Ops Testing workshop I was re-reading my Moshe Feldenkrais books, and a few quotes stood out for me, the following being the first:

"...learning means having at least another way of doing the same thing."
I decided to increase the variety of responses available to me when testing an API and learn more ways of sending messages to an API and viewing the responses.

So I learned cURL to:

  • send messages from the command line, 
  • feed them through a proxy, 
  • change the headers
  • create some data driven messages from the command line with data sets in a file
I used 3 different proxies to experiment with their features of fuzzing, message construction and message viewing.

I experimented with different REST client tools, and settled on Postman.

I now had multiple ways of doing the same thing so that when I encountered an issue with Postman, I could try and replicate using cURL or Proxies and see if my problem was with the application or my use of Postman.
  • similarly with any of the tools, I could use one or other of the tools to isolate my problem to the app or my use of that specific tool
  • this helped me isolate a problem with my automation, which I initially thought was application related
Weekend Testing

During the Weekend Testing session, we were pointed at the API

I wanted some additional tool assistance to help me analyse the output from the API.

Because while Postman does a very capable job of pretty printing the XML and JSON, I needed a way to reduce the data in the message to something I could read more easily.

So instead of viewing the full XML tree, I used a tool to create simplified XPath queries which rendered a subset of the data. For example, if I wanted to read all the displayNames for events, I could click on each event in the tree and find the displayName attribute, or I could use XPath to show me only the displayNames for events:
  • //event/@displayName
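The same reduction can be done from the browser's developer console. This is a rough sketch, and the XML snippet is an invented example of the shape of the data:

// parse an XML response and print just the displayName attributes of the events
var xmlText = '<events>' +
              '  <event displayName="First Event"/>' +
              '  <event displayName="Second Event"/>' +
              '</events>';                                   // invented example payload
var doc = new DOMParser().parseFromString(xmlText, 'application/xml');
var result = doc.evaluate('//event/@displayName', doc, null,
                          XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
for (var i = 0; i < result.snapshotLength; i++) {
  console.log(result.snapshotItem(i).value);                 // just the names, not the whole tree
}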

Looking back over my Feldenkrais notes, I can see a relevant quote for this:
"... he said something about learning the thing I already know in a different way in order to have free choice. For that I must be able to tell differences. And the differences must be significant. But I can distinguish smaller differences, not by increasing the stimulus, but by reducing the effort. To do this I must improve my organization."

My approach to API testing has changed, because I spent the time increasing the variety of responses I have available for the task of testing an API.

I didn't really mention automation in the above, although I use RestAssured for much of my API automation at the moment.

The above is a subset of what I have learned about API testing. 

I plan to continue to increase the variety of responses I have to testing APIs and increase my experience of testing APIs. I will over time collate this information into other posts and longer material so, do let me know in the comments if there are any questions or topics you would like to see covered in this blog.

Tools & References:

Thursday, 18 December 2014

My search for easy to use, free, local HTTP servers

I have lost count of the number of times I've had to look for a local HTTP server.
  • Experimenting with an open source app
  • Writing some HTML, JavaScript, PHP
  • Testing some flash app for a client
  • Running some internal client code
  • etc. etc.
And since this isn't something I do every day, I forget how to do it each and every time I start.

I forget:
  • Which servers I already have installed
  • Where I installed them
  • Which directory I configured them to use
  • What local names did I give them to make it 'easy' for me to work with them
  • etc. etc.
Now it might just be me that faces this problem.

If so, I expect you have already stopped reading.

So to cut to the chase, my current favourites are Mongoose (Windows, Mac, Linux) and VirtualHostX (Mac)

Other HTTP Stacks

I have used some of the biggies:
And I probably still have them installed

And some of the tinies:
And some others that I can't remember.

All have been useful at the time. Sometimes I tried to install one but couldn't get it working on client machines because of permissions etc. etc.

I started looking around for alternatives that I could use during training courses, webinars etc.

Some I have not used

Prior to writing this post I was aware that Python had the capability to start up a small http server from the command line, but I hadn't used it. After publication, Brian Goad tweeted his usage of Python to do this.

Brian continued:
could be easily used as a function that takes the dir as argument: simple-server(){ cd $1; python -m SimpleHTTPServer; }
just go to localhost:8000 and you're set!
After Brian's reminder I had a quick look to see what other languages can do this:
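Node.js, for example, can do something similar with only its built-in modules, though not as a one-liner. This is a rough sketch, not a hardened server (no MIME types, no path sanitising):

// serve files from the current directory on http://localhost:8000
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
  var file = path.join(process.cwd(), req.url === '/' ? 'index.html' : req.url);
  fs.readFile(file, function (err, data) {
    if (err) { res.writeHead(404); res.end('Not found'); return; }
    res.writeHead(200);
    res.end(data);
  });
}).listen(8000, function () { console.log('Serving on http://localhost:8000'); });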

If you know of any more languages that have this as part of their default then leave a comment and I'll add them here.

Virtual Machine Stacks

One thing I started using was virtual machines that have the software installed already and don't require a web server, e.g.
These are great for getting started quickly, but require a little download overhead - which can be painful over conference internet connections.

Sometimes I set up machines in the cloud, preinstalled:
As an additional backup, I like to have a local version that I can share.

VirtualHostX for the Mac

Since I mainly travel with a Mac Laptop I started using VirtualHostX for that.

VirtualHostX is basically a GUI that helps me work with the existing Mac installed LAMP stack.

I can avoid the Mac and command line config. I can avoid installing everything else, and just use VirtualHostX to configure and start/stop everything.

This saved a massive amount of time for me and I do recommend it. But it is Mac only.

Mongoose for Mac, Windows and Linux

I recently encountered Mongoose. It works on Mac, Windows and Linux. 

I used the free version to quickly experiment with some downloaded open source libraries. 

All you do is download the small executable into the directory, run it, and you get the traditional XAMPP style taskbar tooltip icon and easy to use config. 

You can run multiple versions by having them listen on different ports.

I paid $8 for the Windows dev version which allows me to view the HTTP traffic easily as well. This $8 also gives me access to the Linux Pro version. For an extra $5 I could get access to the MacOS pro version.


I suspect that 'proper' web developers will always prefer an XAMPP installation. But they will also use it more and be completely familiar with it.

For someone like me, who jumps between apps, configs, machines, sites, etc., the lighter-weight options are a better fit.

I suspect that at some point I'll probably jump back to XAMPP due to some future client needs. But for my own work, VirtualHostX and Mongoose are my current easy to use solutions.

What do you use?

Friday, 28 November 2014

Agile Testing Days 2014 - Workshop and Tutorial

At Agile Testing Days 2014, I presented a full day workshop on "Technical Testing" in Agile and was part of the Black Ops Testing Workshop with Steve Green and Tony Bruce.

Note: there are limited spaces left on our Black Ops Testing Full Day tutorial in London in January 2015.

Both of these were hands on events.

In the tutorial I present examples of Technical Testing and how to integrate Technical Testing into Agile; the participants also test a real live application and apply the techniques, mindsets and tools that I describe.

Since it describes Technical Testing in an Agile system, we also spent time discussing the injection points for the technical testing process and thought processes.

The Black Ops Testing workshop took a similar approach but with less 'talking' since it was a much shorter time period with more people.

We started with a 5 minute lightning talk each from myself, Tony, and Steve. During these we emphasized something important that we hoped the participants would focus on during their testing. We then let the participants loose on the system as we mingled: we coached, asked questions and observed. Then during the debrief we extemporized on our observations and asked questions about what we saw, to pull out the insights from the participants. We repeated this process over the session.

Both the Black Ops Testing Workshop and my Tutorial used Redmine as the application under test.

We picked Redmine for a number of reasons, and I'll list some below:

  • Virtual Machines are available which make it easy to install
  • The Virtual Machines can be deployed easily to Amazon cloud servers
  • It is available as an Amazon Cloud Marketplace system, making it easy to install
There are some words in there you might notice: "easy to install", "deployed easily".

Wouldn't it be great if all the apps we had to test were easy to install and configure and gain access to?

Yes it would. And as testers, when we work on projects, we can stress this to the project team, or work on it ourselves so that we don't spend a lot of time messing about with environments.

I used Bitnami and their dashboard to automatically deploy and configure the environment. Tony used the Amazon AWS marketplace and worked with their dashboard. James Lyndsay helped us out, went all old school, and deployed the base install to the machine.

I learned from this, that my exploratory note taking approach has permeated all my work. As I was installing the environment and configuring it, I made notes of 'what' I wanted to do, 'where' I was finding the information I needed, 'how' to do the steps I took, what data I was using (usernames, passwords, environment names, etc.). And when I needed to repeat the installation (I installed to a VM in Windows, on Mac, and in the cloud), I had all my notes from the previous installations.

When something went wrong in the environment, and I didn't have access to the parts of the system I needed, I was able to look through my notes and I could see that there were activities I had not performed. I thought I had, but my notes don't lie to me as much as my memory does.

It should come as no surprise to you then, that I stress note taking in my tutorial, and in the Black Ops Testing Workshop.

You can find the slides for the Black Ops Testing Workshop on slideshare. You can find more details about Black Ops Testing over on our .com site