Tuesday, 14 July 2015

Lessons learned from Black Ops Testing - Testing a JavaScript library

Introduction

James Lyndsay suggested JS-Sequence-Diagrams as a target for Black Ops Testing.
I'm a big fan of "text to diagram" and "text to document" systems so I was hoping this would result in a good target. And it did.
If you haven't seen a "text to diagram" type system before then it basically takes text like:

Title: Simple Diagram
A->B:
B->C:
C-->B:
B-->A: done

And creates a diagram:

I often use Graphviz for this type of process since I tend to create more generic diagrams, but if you want to use Sequence Diagrams then JS-Sequence-Diagrams offers a simple grammar and effective output.
For the Black Ops Testing webinars, the nerve-wracking thing about testing these small systems is:
  • is it rich enough?
  • will we find enough?
  • will it illustrate enough thought processes and approaches?
And this library was, and we did, and it could.
JS-Sequence-Diagrams describes itself as "a simple javascript library to turn text into vector diagrams". So it presents itself as a library.
Testing a library offers challenges for interactive testing, because you need something to wrap the library to make it usable: code, or a GUI of some form.
Fortunately JS-Sequence-Diagrams has several GUIs to offer us: the online demo page, and a test.html page in the source.
Since the demo page is available, I started my testing with that.


Do look yourself

You might want to have a quick look at the page and see what you think and find before continuing, otherwise you risk my notes impacting your ability to view the library and demo page without bias.

Go look now: JS-Sequence-Diagrams

> WAIT
You wait.
Time Passes...
Gandalf opens the round green door.

> WAIT
You wait.
Time Passes...
Gandalf goes east.

> WAIT
You wait.
Time passes...

> WAIT
You wait.
Time passes...
Thorin waits.

> WAIT
You wait.
Time passes...
Thorin says " Hurry up ".

Initial Thoughts

I looked at the demo page JS-Sequence-Diagrams and made a few notes on what it was built with and what other libraries were in use. My initial thoughts were:
  • Use as a tool interactively, with default page
  • Use as a library
  • Explore configuration scope by using tool interactively with a different set of library components e.g. Raphael, lodash, no JQuery
  • JS Heavy so risk of JS Compatibility errors
  • Run and review the QUnit to identify coverage gaps and risks
  • Create a new GUI page to control loaded components and minimise risk of custom editor interaction obscuring bugs or impacting testing
  • compare 'standard' for sequence diagrams against implementation
You can see that I have identified some risks:
  • JS compatibility
  • Demo page has a JS editor which might conflict
  • Demo page might obscure testing
And I could also try and use the tool as a library.

Initial Interactions

My first few attempts at creating a simple diagram triggered an error because I found that some of the characters I was using were not allowed.
NOTE:
  • "," is not allowed in an actor name
  • "-" is not allowed in an actor name
  • ":" and ">" are not allowed in an actor name
I also found that some of the characters were only rendered in the 'simple' view, and not in the hand-drawn view. i.e.
  • ¦
  • \
So I made a note to my future self to explore character sets and see what would render and what would not.
I also wanted to find a way of minimising the impact of the editor in my testing. i.e if I found a bug, how would I know that it was a bug in the library and not a bug in the editor or the interaction between the library and the editor?

Interacting with the library without any harness

We already have all the code we need in the browser, after the demo site has loaded, to interact with the library directly. As a JavaScript library, it has already been loaded into the page, so if we open up the developer console and type:

Diagram.parse("Eris->Bob:").drawSVG($('#demo').find('.diagram').get(0));

We will see the diagram visualising the above relationship in the browser. So we really don't need a harness or a framework to create simple interactions with the library. And we know that we can bypass the GUI if we need to.
I don't know how many testers actually use the developer console to interact with the JavaScript in the applications they test, but I find it a useful skill to have. It requires less JavaScript knowledge than coding a JavaScript application does, so you can get started with this pretty quickly.
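We can also use the console to interrogate the parsed model, not just render it. A minimal sketch (assuming the parsed Diagram object exposes title, actors and signals - property names to verify in your own console session):

var diagram = Diagram.parse("Title: Probe\nEris->Bob: hello");
// inspect the parsed model before anything is drawn
console.log(diagram.title);              // "Probe"
console.log(diagram.actors.length);      // 2 - Eris and Bob
console.log(diagram.signals[0].message); // "hello"

Being able to see the model the parser built, separately from the rendering, is another way of isolating whether a problem lies in parsing or in drawing.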

Time Passes

You can see the scope of testing I ran through in the associated notes.
This post describes, at a higher level, some of the lessons learned and approaches taken, rather than dropping down to explain all the issues found and tests executed.
I made notes of the issues I found, but I realised after a while that I should really have a small example demonstrating each issue. And this tool is perfect for that: since an issue description is text, I can embed the text of the diagram in the issue.
So I revisited all the issues I found and added examples for each.

Cross Browser

I then realised that I could take all the examples I had and check them against different browsers, if only I had a page that I could load into each browser to render the different examples.
So I set about building an html page that I could add each of my examples to, and have them render on loading. I guess this could be called a cross browser testing tool, specific to this particular library.
I wanted something where I could add each of the diagram text files without too much editing, and without any additional coding each time I added an example.
So, despite my JavaScript skill not being the best in the world, I interactively built up a single page where I could add all my examples in the form of:

<textarea class="exampleSource">
Title: Issue: char backslash "\" is represented a "/" in hand drawn\n actor name but backslash "\" in simple
\->A:
</textarea>

Then I have JavaScript at the bottom of the file which, when the page is loaded, finds all those textarea blocks and, using a template I have in the html file, adds them to the DOM. They are then rendered using both the simple and the hand drawn versions.
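The script needs very little code. A minimal sketch of the approach (illustrative rather than the actual code - the class name matches the example above, but the element handling and the theme option values are assumptions to check against the library's documentation):

// on page load, find every example and render it once per theme
window.onload = function () {
    var sources = document.querySelectorAll('textarea.exampleSource');
    for (var i = 0; i < sources.length; i++) {
        ['simple', 'hand'].forEach(function (theme) {
            var target = document.createElement('div');
            document.body.appendChild(target);
            // parse and draw this example with the current theme
            Diagram.parse(sources[i].value).drawSVG(target, { theme: theme });
        });
    }
};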
I also made the template show the diagram source to make it easy to copy and paste into an interactive GUI if I needed, and act as documentation.
So not only do I have a page that I can load into any browser and check for compatibility, I don't need to do anything special to explore the two rendering options.


When I viewed my examples on the page I realised that I needed to use the title attribute more effectively to describe if the diagram was an 'Issue' or an 'Example'.
This made the page easier to scan and more self descriptive.
By loading this page I could also see that one of the issues I had found did not appear here, but it did appear when the example was used interactively in the demo GUI.

Title: Issue: can not mix alias and name in same diagram
Participant ActorA as A
A->ActorB:
ActorA->ActorB:

I had evidence that the risk I identified earlier, that the JS editor might interact with the drawing library, had actually manifested.
Subsequent investigation with Steve and James revealed that, interactively:
  • this error didn't manifest in Firefox
  • this error only manifested in Chrome
  • and only manifested in Chrome on Windows, not on Mac
So, in addition to being an editor interaction issue, it was also a cross browser issue.
When testing, we have to make a call regarding how far we diagnose a problem. I did not diagnose the problem any further than this - so I don't know what causes it. I suspect it is probably a line endings issue, but will leave additional investigation up to the development team, should they consider this worth fixing.

Interactive Testing

The test.html file in the source allows interactive testing without a risk of the JS editor impacting the library, because it uses a simple text area as input.
I decided to use this for interacting with the library.
But rather than download all the js files and store them relative to the test.html file, as the current implementation required, I wanted to pick up the relevant libraries from a CDN. That way I could switch them out easily and I wouldn't need much setup on my local machine.
I discovered that the JavaScript ecosystem has a centralised approach to this - much like maven in Java - via cdnjs.com, so I used this as the source for the libraries in the code.
I then incrementally amended the test.html code (see the sketch after this list) such that it:
  • rendered both simple and hand drawn graph versions
  • reported any syntax errors on screen
  • doesn't require JQuery (since that is documented as optional on the main library page)
  • maintains a history of non-syntax error diagrams in the page to easily see what testing I did and maintain a record - I can save the page as a .html file to retain a 'record' of my testing session
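The heart of that is small. A sketch of the render step (illustrative only - the element ids are assumptions, and it relies on Diagram.parse throwing an error on invalid input, which is what makes the on-screen error reporting straightforward):

// render the current input, report syntax errors, keep valid diagrams as history
function renderInput() {
    var source = document.getElementById('diagramSource').value;
    var errorArea = document.getElementById('parseErrors');
    errorArea.textContent = '';
    try {
        Diagram.parse(source); // throws on a syntax error, before we touch the page
        var entry = document.createElement('div');
        document.getElementById('history').appendChild(entry);
        // draw both versions of the same input, side by side
        Diagram.parse(source).drawSVG(entry, { theme: 'simple' });
        Diagram.parse(source).drawSVG(entry, { theme: 'hand' });
    } catch (err) {
        // invalid input never reaches the history, only the error display
        errorArea.textContent = err.message;
    }
}

Saving the page as .html then saves that history along with it.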
I did not put the effort in to make my own code cross-browser, so it had incompatibilities on IE that I didn't attempt to fix. It also had bugs on Firefox that I didn't realise until Steve tried to use it in his testing. (Note: I have now amended explorer.html to work on Firefox, IE and Chrome.)


Again, this didn't require great JavaScript skills. I built it incrementally, and cannibalized code from the demo page and the test.html page.
  • What else would you have added if you were interactively testing this library?
  • Feel free to amend the code to add those features and see how you get on.
Essentially, I crafted two tools to help me report on my testing and interact with the library.

Cross Browser Testing

As I was testing the library, a tool came through my newsfeed that I had not encountered or used before:
BrowseEmAll
This claims to be a single GUI where you can run multiple browsers for cross browser testing.
I'm generally suspicious of these types of tools, but I did not want to appear churlish and risk the wrath of Earth Coincidence Control Office.
According to the developers, BrowseEmAll uses the browsers' 'embedded' mode, so it uses the same rendering engines and JavaScript engines as the browsers, although the developer tools are different.
Interestingly they have a WebDriver implementation, which is still in its early stages, but might be a useful add-on to an existing local grid setup for supporting cross browser testing with early versions of the rendering engines, rather than maintaining a grid of lots of old versions. I have yet to try this, however.
I haven't used BrowseEmAll enough to convince myself that I could use it in preference to actual browsers, but it did allow me to side-by-side demonstrate that the interactive issue I found earlier was only in Chrome, and not in Firefox.

On my notes

You can see different versions of my notes:
I tend to write most of my notes in markdown format now, so even in Evernote, I write in markdown. I can then feed this through a markdown formatter like dillinger.io to generate pdf or html.
Markdown is essentially a "text to document" process, where I write in pure text, and then another system parses the format and outputs html or pdf.
In Evernote, I write in markdown, but I also have the ability to add images without thinking about it. This allows me to add 'comments' into my text which don't appear in the final output.
This is also why I like the "text to diagram" systems. I can embed meta-data in the form of comments; this information is not rendered, but is useful for a human reading the text later.
In the past I've used Graphviz on site to document information I've received, and I add comments into the Graphviz file for where I found the information, todos, gotchas, risks etc. None of this appears in the rendered image, but it is very useful for me to build a model of the system I'm working with.
I do the same thing in the examples for js-sequence-diagrams:

Title: This creates a duplicate ActorA
Participant ActorA
ActorA->ActorB:
# this fails with  https://bramp.github.io/js-sequence-diagrams/ on chrome on windows
# but works with url on firefox
# works in test app in chrome
# works in chrome on mac

The comments above provide meta data about the issue regarding replication. I also use the features of the tool to help me.
And if you find this issue in the Evernote pdf you will also see that I've added a screenshot, which was for my information, rather than the markdown.
I particularly like the "text to X" approach because it allows for:
  • version control
  • diffing between versions
  • meta data and comments in the source which are not in the render
  • instantly at least 2 visualisations of the model (text, render)
  • often multiple visualisations with rendering parameters, which can help you see the model from a slightly different perspective e.g. different layout rules help spot different spatial relationships

Summary

Again, a surprising amount of opportunity for reflection on 'how to approach' something that, at a surface level, seems very simple.
I hardly touched my initial set of options about how to approach testing so there still remains a lot that I could continue to pursue with the testing of this library. And if you watch the webinar that this text relates to, you will see how differently James, Steve and Tony approached the topic.
And you can find an outline summary of my lessons learned below:
  • Library execution from console
    • Diagram.parse("A->B:\nB->C:").drawSVG($('#demo').find('.diagram').get(0));
  • Markdown notes writeup
    • Can read as .txt
    • Can easily convert to pdf via dillinger.io
    • Images embedded within text are not rendered (possibly good for adhoc image tracking)
    • like a text to diagram, a text to document allows embedded comments (images, html comments <!-- this won't be seen --> )
  • Tool as documentation
    • since the diagrams support # as a comment we can use that to our advantage when documenting the testing or raising defects e.g. urls, environment, versions - all stored in the diagram txt
    • Use the title as the issue name or test idea
    • Create minimal source to recreate the issue and embed in defect report
  • Text to diagrams
    • Fast to create, autolayout
    • Can tweak for 'better' layout e.g. \n, aliases and naming
    • Learn the nuances of the tool
    • Version control and compare previous versions
    • Easier to auto generate the diagram source than a diagram equivalent
    • Use the 'comments' functionality for meta data and notes
    • Human readable as text, visual as a different 'view'
  • Environment Control important - see the cross browser (Chrome/Firefox Participant issue)
    • What version is the demo site running?
    • Does the editor interfere? Cross-Browser Risk
  • Tester has different view of tool support required from testing
    • compare test.html with "explorer.html"
    • shows both simple/hand graph at same time
    • tracks history of usage
    • minimal libraries to avoid risk of interference
    • (but mine is buggy on IE because I don't know JS as well)
    • display parse errors on screen
  • Cross browser testing
    • testing what? GUI, Rendering, Parsing?
  • Observe, Interrogate, Manipulate
    • Console
    • JS Debugger - harder with minimised code (use pretty print in console)
    • Network Tab

Tuesday, 12 May 2015

Lessons Learned Testing QA Mail for Black Ops Testing


On the Black Ops Testing Webinar of 11th May 2015 we tested QA Mail. You can find a write up and watch the webinar replay over on BlackOpsTesting.com

This post expands a little on how I approached the testing and what I learned.

A long time ago, on a job far away, my team and I had to test website registrations and emails on various account events.

For our interactive testing this was fairly simple: we could use our company accounts or gmail accounts, reset the environment, and then re-register again. For the automation we needed a different solution. We didn't want to use gmail 'dot' and 'plus' accounts because we felt that the number of emails might put us against the gmail terms and conditions.

We started using mailinator.com creating adhoc email addresses for the automation, and I created some abstractions to read and delete the emails. But mailinator introduced a captcha part way through our testing and our automation failed.

I created a fairly simple appengine tool which acted as an email sink, and I wrapped a simple API around it for our automation. AppEngine has changed so much since then that the code I wrote will no longer work. But QA Mail does a very similar job to the AppEngine code that I wrote.

It provides a simple GUI and simple API wrapper around a polling daemon which reads a mailbox folder and brings in emails.

Great.

I approached the Black Ops Testing Webinar as a learning experience.


  • I didn't know too much about SMTP or email
  • I wanted to experiment with API Automation from a zero to 'working' state as fast as possible
  • I wanted to experiment with sending emails from Java
  • I wanted to know what tool support I would need for interrogating and comparing emails


Automation Abstractions


I started with the automation. And first off wanted to de-risk it by making sure I could send emails.

I had a quick try of the Javax mail libraries, and quickly decided to find an abstraction library to cut down on the time required to get up to speed and send emails fast.

I started using Simple Java Mail  https://github.com/bbottema/simple-java-mail

With a few basic emails sent, I started to work on the API abstractions for QA Mail. You can see the various twists and turns I took via the history on github

https://github.com/eviltester/qamail_automation

I created abstractions at a few different levels:


  • A QA Mail Rest API Call abstraction
  • A QA Mail Domain abstraction
  • A Mailbox abstraction


These all work at a similar level so they overlap a little.

This allowed me to create basic automation fairly simply.

They lack a way of conducting automation to twist the API calls i.e.


  • make the REST call with a POST instead of a GET
  • add null parameters
  • use badly named params
  • add new params into the calls
  • re-order the params in the calls
  • etc.


I could achieve the above with direct calls using RestAssured, but since they are fairly common requirements when testing an API, I need to identify a different way of building abstraction layers which support 'testing' and not just 'exercising' the API.

In this particular instance I didn't need that flexibility since the API would have thrown 500 errors on invalid calls. QA Mail was written to meet the needs of a single user and graciously released to open source in the event that it might help others.

I did use the API abstractions to test for different combinations of routings e.g. no 'to', multiple cc, and bcc etc.

Unfortunately we were using an older version of the app which did have a bug in this area, so I didn't pursue this line of enquiry long. The bug has been fixed in the main code branch and on the QA Mail demo server.

Testing Emails


After sending a few emails, it became quickly apparent that I didn't really know what I was looking at in the raw email view.

I'm used to looking at HTTP headers, I'm not used to looking at email headers.

Was the email I was seeing correctly rendered?

How could I know?

A lack of oracles for domains we don't know well can make testing harder in the initial stages of building domain knowledge. One strategy I use involves finding alternative renderings of the information via complementary or competing tools.

In this instance I used similar tools: mailinator and temp-mail.

Both of these accept emails to anonymous mailboxes and render the email as raw text so you can see the headers.

I saved these as text files and compared the output through winmerge.

I found differences in the headers and had to go look them up to try and understand them. Oft times, many of the headers are actually added by the routing servers the mail winds its way through, so what I'm seeing is not what I actually created as the email.

So I needed to find a way to observe the emails I was sending out in the first place.

For the automation, I found a debug flag on Simple Java Mail which output the raw email smtp session to stdout so I can see the original message, headers and encoding. I was then able to compare this to the output and see what the routing had added, and what might have been added by QA Mail. In the final analysis, nothing was added by QA Mail, it simply sucks out the message after it has flowed through postfix.

For my interactive testing, I discovered the 'Show Original' menu item in gmail. This lets me see the 'raw' email sent to me, and which I'm sending out.

Very handy - I've actually become addicted to looking at email headers now; for most of the emails I receive, I check the headers. I find it amazing how much information these contain about the machines that the email has passed through in its travels. I encourage you to have a look for yourself. Never again will I send an email direct from my machine - I'll always try and use a server based tool to avoid giving away my machine's ip address.

Observing the System Under Test


One of the interesting challenges I faced testing this was isolating where issues were introduced.

QA Mail runs on a server, so I can only observe it via logs.

It provides one log by default: the log for the import daemon that polls the mailbox folder.

I ssh'd into the test environment and could 'tail -f' the log file.

This way, when I started testing different email address formats I could see if they were actually hitting the system.

I found that many were not. Valid emails such as "1234--@" and "()<>[]:,;@\\\"!#$%&'*+-/=?^_`{}| ~.a" were not reaching the mailbox that QA Mail polled. They were being rejected by postfix, making it impossible for me to test how QA Mail would handle extreme email formats.

Identifying where processing occurs in an end to end system is a common challenge, and one we should be aware of when testing. So I recommend trying to understand the architecture of the application under test, and trying to add observation points at as many positions in the chain as you can.


Summary


Normally when testing emails, I've been more focused on:


  • was an email sent?
  • Did the email render correctly?


When testing QA Mail I had to focus on:


  • Was it pulling in the appropriate information from the mailbox?
  • Was it routing the emails to the appropriate database tables?


And this forced me to consider new ways of observing and interacting with the system.




Thursday, 2 April 2015

Virtually Live in Romania - Technical Testing Webinar to Tabara De Testare

On 1st April I presented Technical Testing to the Tabara De Testare testing group in Romania.

I presented virtually over Google Hangouts. The Tabara De Testare testing group is spread over four cities in Romania, each of which live streamed the webinar to a room filled with their members. I could see the rooms via the presentation machines' web cams.

We also had 70+ people watching from the comfort of their own homes and workplaces.

Thanks to Tabara De Testare for organising the webinar.

I have released the slides to the webinar on slideshare.



During the webinar I ran through the slides, then provided a short demo of Browser Dev tools supporting technical testing investigations on the redmine.org demo application.

Dangerously, I then tried to demo proxy tools to help answer a question from the audience.

Clearly - using a proxy tool, while conducting a live webinar through a browser isn't the least dangerous option. And lo, I lost the Q&A chat window as a result. But I think we covered most of the questions during the live Q&A which followed.

If you'd like me to 'virtually' attend a testing group that you organise then please ask, as it's easier for me to fly around the world via webcam than it is to jump on a plane, and it means you don't get stung for travel and accommodation costs.

I will edit and upload the webinar to my Technical Web Testing course shortly.

Monday, 26 January 2015

Some API Testing Basic Introductory Notes and Tools


Some applications provide an API. Some websites provide an API. This post provides some information on API testing, since that appears to have consumed a lot of my time in January 2015. As preparation for our Black Ops Testing Workshop I performed a lot of API testing. And coincidentally, the January Weekend Testing session chose API testing as its topic. There should be enough links in this blog to provide you with the tools I use to test APIs.

API - Application Programmer's Interface

The name suggests something that only programmers might use. And indeed an API makes life easier for software to interact with other software.

Really an API provides one more way of interacting with software:

  • By sending messages in an agreed format, to an agreed interface, and receiving an agreed response format back.

APIs tend to change less frequently, or in a more controlled fashion, than GUIs because when an API changes, all consumers of that API have to change as well.

Software tends not to have the requisite variety that a human user exhibits:

  • If you change the GUI then a human can probably figure out where you moved the button, or what new fields you added to the form that they need to type in. 
  • Software won't do that. Software will likely break, or fail to send the new information and so the interaction will break.
If you read this blog through an RSS reader then the RSS reader has used this blog's API. The API consists of a GET request on a URL to receive a response in XML format (an RSS feed).

You, as a user could GET the same URL and read the XML in the browser, but the API tends not to offer the same user experience, so we don't often do that. Or we use tools, like an RSS Reader, to help us.
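For example, from the browser's developer console you can issue the same GET an RSS reader makes and look at the raw XML yourself (a sketch - the feed URL here is illustrative):

// fetch the feed and log the raw XML, which is exactly what an RSS reader consumes
fetch('https://blog.example.com/feeds/posts/default?alt=rss')
    .then(function (response) { return response.text(); })
    .then(function (xml) { console.log(xml); });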

Manually Testing an API

Just because the API calls itself a Programmer's Interface does not mean that all our interaction with the API has to involve programming.

We can issue requests to an HTTP API with minimal tooling:
  • Use a browser to issue GET requests on an HTTP API
  • Use an HTTP Proxy to issue HTTP requests to an HTTP API
  • Use command line tools such as cURL or WGet to issue HTTP requests to an HTTP API
  • Use specific tools e.g. Postman to issue HTTP requests to an HTTP API
Preparation for Black Ops Testing

When choosing software for Black Ops Testing and Training workshops, I like software that has multiple methods of interaction e.g. GUI, API, Mobile Apps/Sites

This way testing can:
  • compare GUI against API, as well as underlying database
  • use the API to load data to support manual testing
  • check GUI actions by interrogating and manipulating the system through the API
  • test directly through the API
Prior to the Black Ops Testing workshop I had tested a lot of APIs, and I generally did the following:
  • Create confirmatory automation to check the API against its documentation. 
  • Manually testing the API using HTTP Proxy tools
    • to create messages and observe responses
    • edit/replay messages and observe responses
    • use the fuzzing tools on proxies to create messages with predefined data payloads
  • Use the browser for simple querying of the API and Firebug, with FirePath to help me analyse the responses
During the run up to the Black Ops Testing workshop I was re-reading my Moshe Feldenkrais books, and a few quotes stood out for me, the following being the first:


"...learning means having at least another way of doing the same thing."
I decided to increase the variety of responses available to me when testing an API and learn more ways of sending messages to an API and viewing the responses.

So I learned cURL to:

  • send messages from the command line, 
  • feed them through a proxy, 
  • change the headers
  • create some data driven messages from the command line with data sets in a file
I used 3 different proxies to experiment with their features of fuzzing, message construction and message viewing.

I experimented with different REST client tools, and settled on Postman.

I now had multiple ways of doing the same thing so that when I encountered an issue with Postman, I could try and replicate using cURL or Proxies and see if my problem was with the application or my use of Postman.
  • similarly with any of the tools, I could use one or other of the tools to isolate my problem to the app or my use of that specific tool
  • this helped me isolate a problem with my automation, which I initially thought was application related
Weekend Testing

During the Weekend Testing session, we were pointed at the songkick.com API

I wanted some additional tool assistance to help me analyse the output from the API.

Because while Postman does a very capable job of pretty printing the XML and JSON, I needed a way to reduce the data in the message to something I could read more easily.

So instead of viewing the full XML tree, I used codebeautify.org/Xpath-Tester to create simplified XPath queries which rendered a subset of the data, i.e. if I wanted to read all the displayNames for events, I could click on the events in the tree and find the displayName attribute, or I could use XPath to show me only the displayNames for events:
  • //event/@displayName

Looking back over my Feldenkrais notes, I can see a relevant quote for this:
"... he said something about learning the thing I already know in a different way in order to have free choice. For that I must be able to tell differences. And the differences must be significant. But I can distinguish smaller differences, not by increasing the stimulus, but by reducing the effort. To do this I must improve my organization."

Summary
My approach to API testing has changed, because I spent the time increasing the variety of responses I have to the task of testing an API.

I didn't really mention automation in the above, although I use RestAssured for much of my API automation at the moment.

The above is a subset of what I have learned about API testing. 

I plan to continue to increase the variety of responses I have to testing APIs and increase my experience of testing APIs. I will over time collate this information into other posts and longer material so, do let me know in the comments if there are any questions or topics you would like to see covered in this blog.


Thursday, 18 December 2014

My search for easy to use, free, local HTTP servers


I have lost count of the number of times I've had to look for a local HTTP server.
  • Experimenting with an open source app
  • Writing some HTML, JavaScript, PHP
  • Testing some flash app for a client
  • Running some internal client code
  • etc. etc.
And since this isn't something I do every day, I forget how to do it each and every time I start.

I forget:
  • Which servers I already have installed
  • Where I installed them
  • Which directory I configured them to use
  • What local names did I give them to make it 'easy' for me to work with them
  • etc. etc.
Now it might just be me that faces this problem.

If so, I expect you have already stopped reading.

So to cut to the chase, my current favourites are Mongoose (Windows, Mac, Linux) and VirtualHostX (Mac)


Other HTTP Stacks

I have used some of the biggies, and I probably still have them installed. And some of the tinies, and some others that I can't remember.

All have been useful at the time. Sometimes I tried to install one but couldn't get it working on client machines because of permissions etc. etc.

I started looking around for alternatives that I could use during training courses, webinars etc.

Some I have not used

Prior to writing this post I was aware that Python had the capability to start up a small http server from the command line, but I hadn't used it. After publication, Brian Goad tweeted his usage of Python to do this.

Brian continued:
could be easily used as a function that takes the dir as argument: simple-server(){ cd $1; python -m SimpleHTTPServer; }
just go to localhost:8000 and you're set!
After Brian's reminder I had a quick look to see what other languages can do this.

If you know of any more languages that have this as part of their default then leave a comment and I'll add them here.

Virtual Machine Stacks

One thing I started using was virtual machines that have the software installed already and don't require me to set up a web server.
These are great for getting started quickly, but require a little download overhead - which can be painful over conference internet connections.

Sometimes I set up machines in the cloud, preinstalled.
As an additional backup, I like to have a local version that I can share.

VirtualHostX for the Mac

Since I mainly travel with a Mac Laptop I started using VirtualHostX for that.

VirtualHostX is basically a GUI that helps me work with the existing Mac-installed LAMP stack.

I can avoid the Mac and command line config. I can avoid installing everything else, and just use VirtualHostX to configure and start/stop everything.

This saved a massive amount of time for me and I do recommend it. But it is Mac only.

Mongoose for Mac, Windows and Linux

I recently encountered Mongoose. It works on Mac, Windows and Linux. 

I used the free version to quickly experiment with some downloaded open source libraries. 

All you do is download the small executable into the directory, run it, and you get the traditional XAMPP style taskbar tooltip icon and easy to use config. 

You can run multiple versions by having them listen on different ports.

I paid $8 for the Windows dev version which allows me to view the HTTP traffic easily as well. This $8 also gives me access to the Linux Pro version. For an extra $5 I could get access to the MacOS pro version.

Summary

I suspect that 'proper' web developers will always prefer an XAMPP installation. But they will also use it more and be completely familiar with it.

For someone like me, who jumps between apps, configs, machines, sites, etc., the lighter tools are a better fit.

I suspect that at some point I'll probably jump back to XAMPP due to some future client needs. But for my own work, VirtualHostX and Mongoose are my current easy to use solutions.

What do you use?

Friday, 28 November 2014

Agile Testing Days 2014 - Workshop and Tutorial

At Agile Testing Days 2014, I presented a full day workshop on "Technical Testing" in Agile and was part of the Black Ops Testing Workshop with Steve Green and Tony Bruce.

Note: there are limited spaces left on our Black Ops Testing Full Day tutorial in London in January 2015.

Both of these were hands on events.

In the tutorial I present examples of Technical Testing and how to integrate Technical Testing into Agile; the participants also test a real live application and apply the techniques, mindsets and tools that I describe.

Since it describes Technical Testing in an Agile system, we also spent time discussing the injection points for the technical testing process and thought processes.

The Black Ops Testing workshop took a similar approach but with less 'talking' since it was a much shorter time period with more people.

We started with 5 minute lightning talks from myself, Tony, and Steve. During these we each emphasized something important that we hoped the participants would focus on during their testing. We then let the participants loose on the system as we mingled. We coached, asked questions and observed. Then during the debrief we extemporized on our observations and asked questions about what we saw, to pull out the insights from the participants. We repeated this process over the session.

Both the Black Ops Testing Workshop and my Tutorial used Redmine as the application under test.

We picked Redmine for a number of reasons, and I'll list some below:

  • Virtual Machines are available which make it easy to install
  • The Virtual Machines can be deployed easily to Amazon cloud servers
  • It is available as an Amazon Cloud Marketplace system, making it easy to install
There are some words in there you might notice: "easy to install", "deployed easily".

Wouldn't it be great if all the apps we had to test were easy to install and configure and gain access to?

Yes it would. And as testers, when we work on projects, we can stress this to the project team, or work on it ourselves so that we don't spend a lot of time messing about with environments.

I used bitnami and their dashboard to automatically deploy and configure the environment. Tony used the amazon aws marketplace and worked with their dashboard. James Lyndsay helped us out, and went all old school, and deployed the base install to the machine.

I learned from this, that my exploratory note taking approach has permeated all my work. As I was installing the environment and configuring it, I made notes of 'what' I wanted to do, 'where' I was finding the information I needed, 'how' to do the steps I took, what data I was using (usernames, passwords, environment names, etc.). And when I needed to repeat the installation (I installed to a VM in Windows, on Mac, and in the cloud), I had all my notes from the previous installations.

When something went wrong in the environment, and I didn't have access to the parts of the system I needed, I was able to look through my notes and I could see that there were activities I had not performed. I thought I had, but my notes don't lie to me as much as my memory does.

It should come as no surprise to you then, that I stress note taking in my tutorial, and in the Black Ops Testing Workshop.


You can find the slides for the Black Ops Testing Workshop on slideshare. You can find more details about Black Ops Testing over on our .com site


Agile Testing Days 2014 - Keynote

I presented a keynote at Agile Testing Days 2014, and took part in the Black Ops Testing Workshop, and presented a one day tutorial on "Technical Testing in Agile". This post covers the Keynote.

The keynote was underpinned by the notion that 'Agile' is not a 'thing'; instead 'Agile' provides the context within which our project operates and is therefore part of the Weltanschauung of the project. Or as I referred to it, the "System Of Development".

Because really I concentrate on 'Systems'. I think that I view 'context' as an understanding of the System. And remember that we, the testers, form part of that system and the context as well. Therefore our 'beliefs' about testing become important, as they impact how we interact with the people, and the system, and the process, in place on the project.

As ever, I made a lot of notes before the Keynote, and I present those below.

The slides are available on slideshare. The talk was recorded, but has not yet appeared online. My notes below might help you make sense of the slides.



A few things to note. Many of the keynotes overlapped: talking about role identification and beliefs, communicating from your own experience and models, building models of your process and testing, etc.

And prior to Agile Testing Days, Michael Bolton presented a Eurostar Webinar on 'Agile', which is worth watching, and James Bach presented "Skilled Testing and Agile Development" at Oredev 2014 which is worth watching. I include links to both of those for your delectation because they represent other testers applying Systems Thinking to create a model of 'Testing in Agile', watch them both.


Abstract:

Every Agile project is different, we know this, we don't do things 'by the book' on Agile projects. We learn, we interact, we change, we write the book as we go along. Throughout all of this, testing needs to remain viable, and it needs to add value. Remaining viable in this kind of environment can be hard.

Fortunately, we can learn to add value. In this keynote, Alan will describe some of the approaches and models he has used to help testing remain viable. Helping testers analyze the 'system of development' so the test approach can target process risks. Helping testers harness their own unique skills and approaches. The attitudes that the testing process often needs to have driving it, and the skill sets that teams need to ensure are applied to their testing.

At a simple level, this is just Systems Thinking and Modeling. In practice this can prove highly subversive and deliberately provocative. Because we're not talking about 'fitting in', we're talking about survival.

Notes:

Warren Zevon wrote “Ain’t that pretty at all” in 1982

Warren Zevon wrote a song called “ain’t that pretty at all"

Warren Zevon was one of those singers who, when he comes on, I pretty much have to listen; his voice and songs drag me in, rather than sitting as background music.

In this song, Mr Zevon describes a character who is pretty jaded.

     Well, I've seen all there is to see
     And I've heard all they have to say
     I've done everything I wanted to do . . .
     I've done that too

I know what that feels like. I’ve done management, performance testing, exploratory testing, agile testing, security testing, acceptance testing, UAT, automation, etc.

I’ve worked on Agile, waterfall, heavy weight documentation, lean, yada yada yada. None of it has ever fully worked or been perfect.

Feels like I’ve done everything. The danger is I become jaded or fixed in my ways.

People want Agile to be perfect.

And this Warren Zevon character doesn’t like what he sees.

And Agile isn’t perfect.

Reality doesn’t seem to match up with his expectations, or desires, or wants.

     And it ain't that pretty at all
     Ain’t that pretty at all

Agile can be messy. It doesn’t always match the books or talks. And when you are new to it sometimes you don’t like what you see.

I went to the Grand Canyon; the bus took a couple of hours to get there. When I got out, everyone else seemed to see a wonderful example of mother nature.

I saw some cliffs.

We may not have the strategies to cope

I was back on the bus in 10 minutes. I might be jaded but I can avoid the sunk cost fallacy.

But we might not have the strategies we need to deal with the situation.

    So I'm going to hurl myself against the wall
    'Cause I'd rather feel bad than not feel anything at all

If you come from a waterfall background you might not know how to handle the interactions on an agile project. And if your prep was books and blogs, you might find they described an ideal where the strategies they used are not in place.

Some of our strategies for coping might be self destructive and we might not notice, and other people might not tell us. Because systems are self-healing and they can heal by excluding the toxic thing in the system.

Without the right strategy we make the wrong choice

And when you don’t have a lot of strategies you end up making choices and taking actions that aren’t necessarily the most appropriate for the situation.

Testers telling developers that the project would be better if they just did TDD and paired, or if we all worked on automated acceptance tests together, might not get the outcome that you want.

You might fall back on strategies that worked on other projects. But don’t fit in this one.

    So I'm going to hurl myself against the wall
    'Cause I'd rather feel bad than not feel anything at all

You end up wanting to write lots of documentation up front because you’ve always done it, or you want to test in places where the stories don’t go, or you want to remind everyone that ‘Agile’ isn’t done like that.

Whatever...

So very often, my job….

    I've been to Paris
    And it ain't that pretty at all
    I've been to Rome
    Guess what?

Is to help people, with their beliefs and expectations. To work with the people and system around them.

Because when I walked away from the Grand Canyon it wasn’t a reflection on the grand canyon, it was a reflection of me. My expectations were different from the reality. My belief in being wowed, because of all the things I’d heard about it stepped in the way of seeing what was on the ground in front of me. Or in the ground in front of me.

So they don’t do something stupid

   I'd like to go back to Paris someday and visit the Louvre Museum
   Get a good running start and hurl myself at the wall
   Going to hurl myself against the wall
   'Cause I'd rather feel bad than feel nothing at all
   And it ain't that pretty at all
   Ain't that pretty at all

So they don’t do something stupid on the project that stops them maximising their effectiveness, and puts blockers in the way to working effectively with the team.

I help testers survive in Agile projects

That’s never my role title. Test Manager, Test Consultant, Automation Specialist. Agile Tester. Blah Blah Blah.

But I seem to help testers survive, and help testing survive. So that we don’t fall prey to just automating acceptance criteria, we actually test the product and explore its capabilities.

And I say ’survive’, because that's what I had to learn to do.

We survive when we add value

I think we survive when we add value, learn, and make ourselves a viable part of the project in ways that are unique to us.

In “The Princess Bride”, the hero Westley is caught by The Dread Pirate Roberts, to whom he offers to be a valet for 5 years.

The Dread Pirate Roberts agrees to try it but says he will most probably kill him in the morning. He’s never had a valet before so doesn’t know how a valet would add value to him, or how he could use him.

And when the morning arrives and The Dread Pirate Roberts comes to kill poor Westley. Westley thanks him for the opportunity to have learned so much about the workings of a pirate ship the previous day.

Westley explains to The Dread Pirate Roberts how much he learned about the workings of the ship, how he had helped the cook by pairing with him, and how he had reorganised the items in the cargo hold.

And every day Westley survives his capture by the Dread Pirate Roberts by working and adding value during the day. And every day he demonstrates the value that he can add to the operation of the pirate ship. And every day he learns more about piracy and the skills of pirates.

Until, years later, The Dread Pirate Roberts explains that The Dread Pirate Roberts is a role, not a person, and so they dock in a port, take on a new crew, and Westley adopts the role of Dread Pirate Roberts.

And testers need to act such that people don't ask "What does testing do on Agile?", because they know what their testers do on the project, and they see value in those activities. They know what, specifically, their testers - "Bob and Eris, or Dick and Jane, or Janet and John" - actually do.

This isn’t fluffy people stuff

When I look online and see the type of strategies that people describe for surviving and fitting in on Agile projects, it isn’t quite what I do or recommend, so I know there are alternative paths and routes in.

I’m not here to make you look good

I’m really not here to make you look good.

I mean I’m not here to make you look bad… but I might.

And I’m not here to make the product look bad… but I might.

And I’m not here to make the process, or the testing, or the ‘whatever’ look bad… but I might.

If it ain't that pretty, then we can do something about it when we recognise that, and negative feedback can help.

I’m really here to help raise the bar and improve the work we all do together. If that makes you look good, then that’s great, if that makes you look bad and you improve as a result, then that’s great too. Looking good is a side-effect.

Survival does not mean 'fitting in'

I’m not talking about fitting in. Buying Lunch. Making people look good. yada yada.

I don’t think you get bonus points for doing that.

But if that’s you, don’t stop. I think its great if people want to buy doughnuts for everyone.

It's not me. So there are other ways to survive on projects. I don’t have many strategies for ‘social success’.

Personally I think team work means helping other people contribute the best stuff they can, and the stuff that only they can, and helping to pass that knowledge around the team. And everyone needs to do that.

Testers survive by doing testing stuff

It's pretty much as complicated as that. Learn to test as much as you can, and drop all the bureaucracy and waste that adds no value. Then you’ve covered the basics.

Then you learn how to help others improve by passing on your knowledge and mindsets.

And we are taught how to do ‘testing stuff’ so if we do that well and improve our testing skills then we have a good chance of survival.

Testers are taught to work with the System Under Development

Essentially we are taught how to understand and work with the System Under Development

We learn techniques to analyse the system and model it and then build questions from the model which we ask of the system.

We might build a data domain model, then build questions around its boundaries, and then ask the system those questions by phrasing them in a form the system understands.

We learn how to do that as testers, with techniques and analysis and blah de blah... testing stuff.

We survive when we adapt to the System Of Development

Testing survives when it learns to work with the Systems in place. And two obvious systems are the System under development, and the System of development. We have to adapt to both.

And fortunately testers are often good at analysing systems. Modeling them. Viewing them from different perspectives. Breaking them into chunks. Viewing the flow of data. working out what data becomes information to subsystems. Looking for data transformations. Looking for risk in process and communication. Then figuring out different injection points.

When we work with systems under development we don't just input data at one end and see what happens out the other. Same with Systems of development, we don't just work at the stories that come in, and then check the system that comes out. We learn to look for different injection points and create feedback earlier.

So one of the main things I help testers do, is analyse the System Of Development, and adapt to it.

Many of the testing survival tricks and techniques we learn relate to Waterfall projects.

Annie Edson Taylor thought she would become rich and famous if she could survive a trip over Niagara Falls in a barrel.

She survived. She wasn’t rich and famous.

She survived by padding out her barrel and increasing the air pressure in the barrel.

You can get yourself ready before you hit Agile.

I survived waterfall by removing all padding, and taking responsibility for what I did.

I analysed the system of development and did what this specific implementation needed, so I could survive, and I took responsibility so that I could add value.

So I was in a better place than many people on waterfall projects when we started working on Agile.

Blake wrote "I must create a system..."

I must create a system or be enslaved by another man's. My business is not to reason and compare, my business is to create.

This is one of the first texts I ever read on system design and modelling, and I read it when I was about 19 or 20, and stuff stays with you when you encounter it early.

But this is my meta-model of modelling. And I know I have to own and take responsibility for my view of the world and the systems that I work with. Otherwise I’ll fall prey to ‘other’ people's strategies.

I remember the first time I worked on an ‘Agile’ project

I had to learn the strategies to survive. And that’s how I can recognise the same blockers in other testers or projects new to Agile.

I fell prey to the Agile Hype

I had a whole set of beliefs about what agile was going to be like:

- the pairing
- the TDD
- the ATDD
- the BDD
- the ADBDBTBD - I might have made that one up

But there was a lot of stuff.

And it ain't that pretty at all

Reality wasn’t as pretty as I imagined.

I got stuck.

Remember I knew how to handle Waterfall. I had that down.

I could work the system of development and work around the blocks and annoyances that it threw at me.

But here was a new thing. A new system.

Stuck on...

I knew what we were ‘supposed’ to do. But people weren’t really doing that. and I didn’t know how to do this… hybrid thing.

I realised that people didn’t know what to do.

And I realised I simply didn’t have the basic skills to work in this new system of development.

I was the worst ‘pair’ in the world. You know when deer get caught in headlights. That was me the first time I was given the keyboard in a pairing session.

So I did what I always do...

Try to take over the world.

No - of course not. I’ve tried that before. Trying to impose your model on top of everyone else’s and making it work is hard.

It's easier to treat the system as a unique system and model it. Observe it. Understand it.

Look at the parts, the subsystems, the relationships, the feedback flows, the amplifiers, the attenuators.

See I think this is what I do, I work with systems

That’s why my social strategies are… different.

I see the people systems in front of me. The processes. The politics. etc.

And that led to me changing… me

And much of what I did, I teach testers to do on projects.

I worked on my coding skills so I could pair

Every Agile Project is different. 

So we learn to analyse the system that is in place. The Weltanschauung. The context.

System's have common concepts and elements, in general: entities, relationships, data flows, subsystems etc.

But each specific system, is different, and we can view it in different ways, with different models.