Monday, 18 January 2016

I did not know Firefox could do that: Syntax Error Highlighting in "View Page Source"

I found a feature I didn't know about today in Firefox.

The "View Page Source" view, in Firefox, marks in 'red' some simple errors.

Rendered HTML

e.g. in the source view below you can see I haven't added a doctype, and I have a </h2> where an </h1> should be.
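A minimal page along these lines (a reconstruction for illustration, not the exact source in the screenshot) shows both problems:

<html>
<head>
<title>Example without a doctype</title>
</head>
<body>
<h1>A Heading</h2>
<p>The source view marks the mismatched closing tag in red.</p>
</body>
</html>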

View Page Source Shows Errors in Red


This doesn't catch all malformed HTML; you would need an HTML Validator for that.

This feature should be particularly handy given that the inspect element DOM view is the 'rendered HTML' not the 'loaded HTML'.

Inspect Element DOM view has 'fixed' the error with </h2>


Monday, 11 January 2016

Dear Evil Tester, Who is Stafford Beer and Why should I care?

Question: Who is Stafford Beer and Why should I care?

Vernon asks:
"
Dear Evil Tester,  
I'm reading your post about the word "automation" and why we should stop using it. Cool stuff so thanks for that. 
My question is twofold: 
1) Why do you rate Stafford Beer?
2) Where/how do you find these books? You always have interesting books to share with us and I'm wondering how you wander across them. 
Cheers,
Regards, 
Vernon
"

Answer: Because he was splendid!


Dear Vernon,
Thanks for asking.

1) Why do you rate Stafford Beer?


I like Stafford Beer's sense of humour. How often are management books entertaining? They are also very deep and require study. His books were written prior to the current management book vogue of having a single point and padding it out with 'stories'.

And how many management theorists also incorporate their art and poetry into their work? Buckminster Fuller and Stafford Beer - I can't think of many others.

I like the way we get to see his models and explanations develop over time. Although people seem to think that Cybernetics died off, I think we see a progression and development through Stafford Beer's work, changing as computers grew ever more powerful.

But Stafford Beer's work always seemed to be about how we can better organise systems to support the necessary communication flows within them. And since he had an early, and ongoing, factory/management background, his work is very grounded in how organizations and teams work. So I can generalise his approaches and adopt a similar modelling style for the various places that I work in.

Modelling the System of Development in terms of teams, people within teams, their communication processes, their rituals, the other Systems they communicate to, the form the communications take, the data the communication transfers.

Also, I think it was from Stafford Beer that I learned about Attenuation and Amplification in terms of Human Communication, rather than in an electrical context that we might normally associate with Cybernetics.
  • Attenuation - making quieter and discarding the data
  • Amplification - making louder so the data has more weight and importance
We might standardise reporting to attenuate away detailed information that might distract communication, but we also lose nuances, and if there are no other communication lines in place, those signals might get lost. And over time the standardisation may cease to be fit for purpose, but may not be revisited. Standardised metrics that are no use to anyone. Risk and issue logs that are never reviewed. So compensatory processes like team-based retrospectives can be used to pick up on the signals that would otherwise be lost, so they can be acted upon in a local system rather than feeding through into the surrounding systems.

Certainly I took from Stafford Beer an approach of modelling systems within systems of overlapping systems, and viewing them in different ways to attenuate the overlaps.

When I build and adapt test processes I'm very conscious of attenuation and amplification. And as a manager I make sure that I can amplify any signals that are lost via the standard attenuation communication processes, and try very hard to pick up on important weak signals before they amplify.

Stafford Beer was around when Operational Research was being used in the war, and for statistical control of processes. Stafford Beer worked in industry and so was conscious of customer satisfaction, viability and money. His work always seems practical to me.

I don't always 'get' Stafford Beer on first reading. And I probably still haven't 'got' him now. But I re-read his work and learn more each time.

If I were to start with Stafford Beer, I'd read whichever books I could get cheaply enough, as they are all useful. But in terms of 'least scary to start with' I would specifically try to track down:
  • Designing Freedom
  • Diagnosing the System for Organizations
You can find some Stafford Beer lectures on YouTube if you'd prefer a cheaper and less forbidding start [https://www.google.co.uk/search?q=stafford+beer+youtube]

2) Where/how do you find these books?


I don't remember how I first found Stafford Beer. I think it was in a reference in some other book. The first Stafford Beer book I bought, "How many grapes went into the wine", I picked up in a sale because it looked interesting, but I didn't realise it was a Stafford Beer book until I had tracked down "Designing Freedom" and "Diagnosing the System for Organizations". Two very slim but very deep books.

...I'm wondering how you wander across them.


Much like a character from an H.P. Lovecraft novel, I would hunt down "strange, rare books on forbidden subjects". Books that are hinted at in the marginalia of less strange and less rare books on less forbidden subjects. I would constantly trawl second hand book shop after second hand book shop trusting to the Gods of coincidence and lucky chance. And there I would select random books and perform stichomancy to identify my next area of study.

But nowadays I tend to search on eBay and look up references in archive.org.

I no longer have an eBay search active for Stafford Beer, but when I was trying to source the books I primarily used eBay and AbeBooks. Also sometimes you get lucky with second hand sellers on Amazon.

Adopt A Reading Strategy of 'Go to the source'


One of my study and reading strategies is to 'go to the source'.

When I read a book that someone has recommended to me, or that I found useful, I'll look at what books the writer of that book referred to and try to find out who that writer studied. And then track down the books that they read and used.

That way I'll gain a different perspective on the original work, rather than a single writer's interpretation of it. It also takes me back to books that fewer people have read, which helps me have a different approach. Over time that has led me to many interesting authors that I would probably not otherwise have read.

And many of these books are old, so they are available at archive.org, or at Project Gutenberg, which can be helpful as sellers of second hand copies of some of these books seem to price them higher than I want to pay.

For your delight and delectation, I have attached a picture of my Stafford Beer stash.


  • Just out of frame on the left is his "Management Science" book, a fairly light intro to Cybernetics, then follows the bulk of his writing (although I do not have his book of poetry).
  • "The Fractal Organization" is a book 'about' Stafford Beer's viable System model.
  • Bertalanffy's classic "General System Theory"
  • The light, but enjoyable "Systems Management and Change: A graphic Guide", which I mainly keep to remind me that alternative presentation styles are important
  • I can see a Drucker book in there; he wrote a lot, but I only have a few. The one pictured is "Managing for Results", which I thought highly of
  • Frank George's 'The Brain as a Computer', which is new to the shelf but has popped up in a lot of references so I tracked it down - you can read it online for free at archive.org [https://archive.org/details/brainasacomputer007406mbp]
  • there is a small selection of Herbert Simon shown here - "Sciences of the Artificial" (initially recommended by James Bach), "Reason in Human Affairs", and "Organizations". I think 'Organizations' is one of the best books on organizational change and management dynamics that I've read. I have Simon's "Models of Thought" in my 'read very soon' pile
  • A Milton Erickson book in there, because you can never have enough Milton Erickson on your shelf, "Hypnotic Realities" contains some useful transcript snippets for when you want to start working on your language patterns
  • I can see "The Political Brain" by Drew Westen, which I thought was an excellent study of emotional persuasion and marketing
  • At the right I can see we're bleeding into F Matthias Alexander's work - he of 'The Alexander Technique' fame. His books are all about modelling, feedback, change, testing and measurement. And who could turn their back on a book with the title "Constructive Conscious Control of the Individual"?
  • At the front you can see "Models for Management: The structure of competence". Really!? It has some useful pages but I'm not sure how long it will stay on the shelf. I suspect on next pass through it will probably be for the off.
  • A few obligatory books on experimentation and scientific method
  • And of course the IT person's favourite productivity tome "Getting Things Done"
  • You can also see an e-prime cheat sheet that I have stuck to the shelf for handy reference.

PS: I have only given you so much information because I have already managed to find most of Stafford Beer's books. I don't need any competition hunting for other topics I'm studying, so if you ask another question and don't get a response it might be because you've stumbled across one of the secret and forbidden knowledge areas that I currently have under study, or it might be because I want you to think you've stumbled across a secret and forbidden knowledge area. Or it might be because I'm busy.

Friday, 8 January 2016

Technical Testing with MS Edge and "the user will never do that"

I have yet to update my Windows machine to Windows 10, and therefore haven't experienced the joys of MS Edge. Until now. And crikey! I was surprised.

I downloaded the VMWare Windows 10 virtual machine to gain access to MS Edge.

Tiny right click menu on MS Edge
When I right-click in MS Edge to see the options available to me as a user, I see a tiny menu offering:
  • Open in new tab
  • Open in new window
  • Copy link
  • Inspect Element

Gosh.

Compare that with the 'slightly' larger, more complicated and feature-packed right-click menu in IE11 on Windows 8.1:
Feature packed menu in IE11
  • Open
  • Open in new tab
  • Open in new Window
  • Save target as....
  • blah
  • blah 
  • blah
  • stuff
At least that is what a normal user sees.

They really don't scan down, or if they do they skip the stuff in the middle and might see that they can 'clip it' and 'view properties', and blah blah blah.

Not so with MS Edge.

MS Edge provides a tiny list. The kind of list that makes you go:

"Oooh, I can cope with that. I wonder what that does. I'll just click that."

And lo' a new world of experience opened up to the user through inspect element.

And now everyone can manipulate the DOM, and the cookies, and the local storage, and whatever else we choose to use the client side for.

And, "but the user would never do that" becomes less and less likely.

I did not expect a heavily promoted, consumer-focused browser to be so functionally minimal that 'advanced' user workflows would become so accessible.

But it has. And they have.

Which means that now, more than ever, those of us that test need to manipulate the DOM and ensure that our back end validation can cope with the range of variety that has opened up for 'normal' users to generate.
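For example, any user can now open Inspect Element and type something like this into the console (the field names here are illustrative, not from any particular site):

// remove a client-side length restriction (hypothetical field name)
document.querySelector('input[name="quantity"]').removeAttribute('maxlength');

// change a hidden field that the server might trust (hypothetical field name)
document.querySelector('input[name="price"]').value = '0.01';

// tamper with client-side state
localStorage.setItem('basket', '{"items":9999}');
document.cookie = 'discount=100';

If the back end trusts any of those client-side values, then a 'normal' user with a friendly little right-click menu can now break it.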


Wednesday, 6 January 2016

Use StockFighter to improve your REST API Manipulation Skills

I found a link to StockFighter in my news feed this morning.

It is a free-to-play game with a GUI and a REST API.

I've been working on a game of my own: a MUD with a GUI and REST API, to use when I train people in technical web testing. And it is good to see other 'fun' activities which we can use to improve our skills.

I completed the first two levels using Postman, and I'm going to drop down to some Java for the next level.
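If you want to sanity check your setup before tackling a level, the heartbeat endpoint is an easy first call. A minimal sketch from the browser console (the response shape is as I recall the API docs, so treat it as an assumption):

// ping the StockFighter API to check it is up and reachable
fetch('https://api.stockfighter.io/ob/api/heartbeat')
  .then(function (response) { return response.json(); })
  .then(function (status) {
    // expected shape: { "ok": true, "error": "" }
    console.log('API up:', status.ok);
  });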

One interesting aspect of this game is that, because other people are playing it, they are releasing the abstraction layers they are writing on GitHub. So before you start coding you can read their code and learn from that first. A good way to see differing approaches to the problem, and modelling that you might not have thought of.

The StockFighter forum has a list of API libraries in a variety of languages that you can peruse.

I don't think that my game will end up in as polished a form as the StockFighter game, but I've had a few people playtest it so far, and they had fun trying to steal the treasure around them before the other players did, all the while learning more about DOM and URL manipulation.

I'll be playing around with StockFighter a little more when I have time over the week, which might mean that I won't finish Front Mission Evolved on the Xbox 360 this week.

Questions:
  • Do you know of any other 'fun' activities like this for practicing your REST manipulation skills?
  • Can you get further in the game than I did without resorting to coding?

Saturday, 7 November 2015

Notes from the Eurostar Mobile Deep Dive 2015

I visited Eurostar 2015 and popped in to a few sessions, but was mainly there for the Mobile Deep Dive.

Alan Page started with an opening keynote on using instrumentation at the AOSP level to provide usage and defect information automatically for an emulation layer that Microsoft had been working on. He also mentioned that his team aims for “More value. Better Quality. Every week” which sounds like a perfectly splendid goal to remind yourself of every morning, and revisit every evening to reflect on how you are doing.

Julian Harty's talk built on this with an in-depth exploration of how analytics can be used to help the testing process. I particularly liked the example where a study had examined 'the phones used by the people most likely to leave a review' on the app store, and then used those phones as the subset of devices for on-device testing.

Karen Johnson asked (and I'm paraphrasing from memory rather than a written quote) "do you analyse this as a human or a machine? Or both?" Only about 3 of us put our hands up to both. Karen provided a walkthrough of an example thought process for creating the scenarios and conditions for an automated walkthrough of a mobile app she had worked on. Karen also reminded us that systems change, so the automated solutions we create will also change, and not to get hung up on the permanence of an artefact: remove it when it ceases to add value.

I popped in to the Wearables talk by Marc Van't Veer and saw him using iOS-to-QuickTime streaming to capture the camera of his iOS device, taking pictures of testing on his smart watch. He also used Mobizen to stream the display of his device onto a desktop for realtime screen capture. Always good to see people using technological workarounds to improve their testing.

Jeff Payne provided the funniest talk of the deep dive conference which was also filled with valuable information. I made notes that I need to learn how to examine the memory of a running mobile phone, and examine the temporary files, cache files and databases on the phone itself.

Jeff’s quote “Test where easiest and most effective” struck a chord since it also relates to knowing where you are going to get the most benefit from testing a risk because you understand where and how you can test it, rather than just testing on every device.

I found this day a useful addition to the Eurostar Conference and was fortunate to have a whole series of good discussions during the day, and in the evening.


Oh, and I presented as well ‘Risks, Issues and Experiences of Technical Testing on Mobile’. [slides]. The basic takeaways were:
  • We currently test out of fear, we could test to address technical risk
  • learn to understand the technicalities of your application or website to target the technical risks of cross platform usage when deciding what to test, rather than testing ‘everything’ on a subset of devices out of fear
  • poor ergonomics add risk to my test process, so I add keyboards, and that workaround might add risk to the test process; in fact all such workarounds add risk, so we need to technically understand and communicate that risk to manage it in the team
  • the fact that we are testing on device at all, should be managed as a risk, because it means we are building something that we don’t fully understand or trust
  • build apps which use all the libraries and flows of your main app, but don't have a 'GUI', and will self-check the libraries and interactions and report back to a main server; then we could deploy the 'test app' to multiple cloud devices and quickly receive information on compatibility without having to deal with the lag time of manual interaction with the device for testing
  • Focus on pain in your process and remove the pain. e.g. typing on devices is error prone and sometimes I don't have back and forward cursor keys, so fixing the errors is painful - I could add a custom keyboard to the device, or I could add a physical keyboard. By addressing the 'pain' in my process, I introduce a technical workaround that might introduce risk, but makes my process easier. And the risk is something we can investigate, discuss, assess and mitigate. Pain in the process is anything that gets in the way, stops you being effective - you may be so used to it that you don't even notice it - that's dangerous.
Zeger created a sketchnote during the talk, but I think I made his life difficult by talking fast and cramming in a lot of material.

Tuesday, 14 July 2015

Lessons learned from Black Ops Testing - Testing a JavaScript library

Introduction

James Lyndsay suggested JS-Sequence-Diagrams as a target for Black Ops Testing.
I'm a big fan of "text to diagram" and "text to document" systems so I was hoping this would result in a good target. And it did.
If you haven't seen a "text to diagram" type system before then it basically takes text like:

Title: Simple Diagram
A->B:
B->C:
C-->B:
B-->A: done

And creates a diagram:

I often use Graphviz for this type of process since I tend to create more generic diagrams, but if you want to use Sequence Diagrams then JS-Sequence-Diagrams offers a simple grammar and effective output.
For the Black Ops Testing webinars, the nerve-wracking thing about testing these small systems concerns:
  • is it rich enough?
  • will we find enough?
  • will it illustrate enough thought processes and approaches?
And this library was, and we did, and it could.
JS-Sequence-Diagrams describes itself as "A simple javascript library to turn text into vector diagrams". So it presents itself as a library.
Testing a library offers challenges for interactive testing, because you need something to wrap the library to make it usable: code, or a GUI of some form.
Fortunately JS-Sequence-Diagrams has several GUIs to offer us, including its online demo page.
Since the demo page is available, I started my testing with that.


Do look yourself

You might want to have a quick look at the page and see what you think and find before continuing, otherwise you risk my notes impacting your ability to view the library and demo page without bias.

Go look now: JS-Sequence-Diagrams

> WAIT
You wait.
Time Passes...
Gandalf opens the round green door.

> WAIT
You wait.
Time Passes...
Gandalf goes east.

> WAIT
You wait.
Time passes...

> WAIT
You wait.
Time passes...
Thorin waits.

> WAIT
You wait.
Time passes...
Thorin says " Hurry up ".

Initial Thoughts

I looked at the demo page JS-Sequence-Diagrams and made a few notes on what it was built with and what other libraries were in use. My initial thoughts were:
  • Use as a tool interactively, with default page
  • Use as a library
  • Explore configuration scope by using the tool interactively with a different set of library components e.g. Raphael, lodash, no jQuery
  • JS Heavy so risk of JS Compatibility errors
  • Run and review the QUnit to identify coverage gaps and risks
  • Create a new GUI page to control loaded components and minimise risk of custom editor interaction obscuring bugs or impacting testing
  • compare 'standard' for sequence diagrams against implementation
You can see that I have identified some risks:
  • JS compatibility
  • Demo page has a JS editor which might conflict
  • Demo page might obscure testing
And I could also try and use the tool as a library.

Initial Interactions

My first few attempts at creating a simple diagram triggered errors, because some of the characters I was using were not allowed.
NOTE:
  • "," is not allowed in an actor name
  • "-" is not allowed in an actor name
  • ":" and ">" are not allowed in an actor name
I also found that some of the characters were only rendered in the 'simple' view, and not in the hand-drawn view. i.e.
  • ¦
  • \
So I made a note to my future self to explore character sets and see what would render and what would not.
I also wanted to find a way of minimising the impact of the editor in my testing, i.e. if I found a bug, how would I know that it was a bug in the library and not a bug in the editor, or in the interaction between the library and the editor?

Interacting with the library without any harness

We already have all the code we need in the browser, after the demo site has loaded, to interact with the library directly. As a JavaScript library, it has already been loaded into the page, so if we open up the developer console and type:

Diagram.parse("Eris->Bob:").drawSVG($('#demo').find('.diagram').get(0));

We will see the diagram visualising the above relationship in the browser. So we really don't need a harness or a framework to create simple interactions with the library. And we know that we can bypass the GUI if we need to.
I don't know how many testers actually use the developer console to interact with the JavaScript in the applications they test, but I find it a useful skill to have. And it requires less JavaScript knowledge to do this than it does to code a JavaScript application, so you can get started pretty quickly.

Time Passes

You can see the scope of testing I ran through in the associated notes.
This post describes, at a higher level, some of the lessons learned and approaches taken, rather than dropping down to explain all the issues found and tests executed.
I made notes of the issues I found, but realised after a while that I should really have a small example demonstrating each issue. And this tool is perfect for that: since an issue description will have text, I can embed the text of the diagram in the issue.
So I revisited all the issues I found and added examples for each.

Cross Browser

I then realised that I could take all the examples I had and check them against different browsers, if only I had a page that I could load into each browser that would render the different examples.
So I set about building an html page that I could add each of my examples to, and have them render on loading. I guess this could be called a cross browser testing tool, specific to this particular library.
I wanted something where I could add each of the diagram text files without too much editing, and without any additional coding each time I added an example.
So, despite my JavaScript skill not being the best in the world, I interactively built up a single page where I could add all my examples in the form of:

<textarea class="exampleSource">
Title: Issue: char backslash "\" is represented a "/" in hand drawn\n actor name but backslash "\" in simple
\->A:
</textarea>

Then I have JavaScript at the bottom of the file which, when the page is loaded, finds all those textarea blocks and, using a template I have in the html file, adds them to the DOM. They are then rendered using both the simple and the hand-drawn versions.
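A minimal sketch of that loader, assuming the library's Diagram global and a template element in the page; the ids and class names here are illustrative rather than my actual code:

window.addEventListener('load', function () {
  var sources = document.querySelectorAll('textarea.exampleSource');
  for (var i = 0; i < sources.length; i++) {
    var text = sources[i].value;
    // clone a template block with placeholders for the source text and both renders
    var block = document.getElementById('template').cloneNode(true);
    block.removeAttribute('id');
    block.querySelector('.source').textContent = text;
    document.body.appendChild(block);
    // render the same text with both themes
    Diagram.parse(text).drawSVG(block.querySelector('.simple'), {theme: 'simple'});
    Diagram.parse(text).drawSVG(block.querySelector('.hand'), {theme: 'hand'});
  }
});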
I also made the template show the diagram source to make it easy to copy and paste into an interactive GUI if I needed, and act as documentation.
So not only do I have a page that I can load into any browser and check for compatibility, I don't need to do anything special to explore the two rendering options.


When I viewed my examples on the page I realised that I needed to use the title attribute more effectively to describe if the diagram was an 'Issue' or an 'Example'.
This made the page easier to scan and more self descriptive.
By loading this page I could also see that one of the issues I had found did not appear here, but it did appear when the example was used interactively in the demo GUI.

Title: Issue: can not mix alias and name in same diagram
Participant ActorA as A
A->ActorB:
ActorA->ActorB:

I had evidence that the risk I identified early on, regarding the possible impact of the JS editor on the drawing library, had actually manifested.
Subsequent investigation with Steve and James revealed that, interactively:
  • this error didn't manifest in Firefox
  • this error only manifested in Chrome
  • and only manifested in Chrome on Windows, not on Mac
So, in addition to being an editor interaction issue, it was also a cross browser issue.
When testing, we have to make a call regarding how far we diagnose a problem. I did not diagnose the problem any further than this - so I don't know what causes it. I suspect it is probably a line endings issue, but will leave additional investigation up to the development team, should they consider this worth fixing.

Interactive Testing

The test.html file in the source allows interactive testing without a risk of the JS editor impacting the library, because it uses a simple text area as input.
I decided to use this for interacting with the library.
But rather than download all the JS files and store them relative to the test.html file, as the current implementation required, I wanted to pick up the relevant libraries from a CDN; that way I could switch them out easily and I wouldn't have to have much setup on my local machine.
I discovered that the JavaScript ecosystem has a centralised approach to this - much like Maven in Java - via cdnjs.com, so I used this as the source for the libraries in the code.
I then incrementally amended the test.html code such that it:
  • rendered both simple and hand drawn graph versions
  • reported any syntax errors on screen (see the sketch after this list)
  • doesn't require jQuery (since that is documented as optional on the main library page)
  • maintains a history of non-syntax error diagrams in the page to easily see what testing I did and maintain a record - I can save the page as a .html file to retain a 'record' of my testing session
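For the on-screen syntax error reporting mentioned in the list above, the parser throws on bad input, so a wrapper along these lines is enough (a sketch with illustrative names, not my exact code):

function renderOrReportError(sourceText, target, errorTarget) {
  try {
    // Diagram.parse throws on a syntax error
    Diagram.parse(sourceText).drawSVG(target, {theme: 'simple'});
    errorTarget.textContent = '';
  } catch (err) {
    // show the parse error on screen rather than failing silently
    errorTarget.textContent = 'Syntax error: ' + err.message;
  }
}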
I did not put the effort in to make my code cross browser, so it had incompatibilities on IE that I didn't attempt to fix. It also had bugs on Firefox that I didn't realise until Steve tried to use it in his testing. (Note: I have now amended explorer.html to work on Firefox, IE and Chrome.)


Again, this didn't require great JavaScript skills. I built it incrementally, and cannibalized code from the demo page and the test.html page.
  • What else would you have added if you were interactively testing this library?
  • Feel free to amend the code to add those features and see how you get on.
Essentially, I crafted two tools to help me report on my testing and interact with the library.

Cross Browser Testing

As I was testing the library, a tool came through my newsfeed that I had not encountered or used before:
BrowseEmAll
This claims to be a single GUI where you can run multiple browsers for cross browser testing.
I'm generally suspicious of these types of tools, but I did not want to appear churlish and risk the wrath of Earth Coincidence Control Office.
According to the developers, BrowseEmAll uses the browsers' 'embedded' mode, so it uses the same rendering engines and JavaScript engines as the browsers, although the developer tools are different.
Interestingly they have a WebDriver implementation for it, which is still in its early stages, but it might be a useful add-on to an existing local grid setup for supporting cross browser testing with early versions of the rendering engines, rather than maintaining a grid of lots of old versions. I have yet to try this, however.
I haven't used BrowseEmAll enough to convince myself that I could use it in preference to actual browsers, but it did allow me to side-by-side demonstrate that the interactive issue I found earlier was only in Chrome, and not in Firefox.

On my notes

You can see different versions of my notes:
I tend to write most of my notes in markdown format now, so even in Evernote, I write in markdown. I can then feed this through a markdown formatter like dillinger.io to generate PDF or HTML.
Markdown is essentially a "text to document" process, where I write in pure text, and then another system parses the format and outputs HTML or PDF.
In Evernote, I write in markdown, but I also have the ability to add images without thinking about it. This allows me to add 'comments' into my text which don't appear in the final output.
This is also why I like the "text to diagram" systems. I can embed meta-data in the form of comments, this information is not rendered, but is useful for a human reading the text later.
In the past I've used Graphviz on site to document information I've received, and I add comments into the Graphviz file for where I found the information, todos, gotchas, risks etc. None of this appears in the rendered image, but it is very useful for me to build a model of the system I'm working with.
I do the same thing in the examples for js-sequence-diagrams:

Title: This creates a duplicate ActorA
Participant ActorA
ActorA->ActorB:
# this fails with  https://bramp.github.io/js-sequence-diagrams/ on chrome on windows
# but works with url on firefox
# works in test app in chrome
# works in chrome on mac

The comments above provide metadata about the issue regarding replication. I also use the features of the tool to help me.
And if you find this issue in the Evernote pdf you will also see that I've added a screenshot, which was for my information, rather than the markdown.
I particularly like the "text to X" approach because it allows for:
  • version control
  • diffing between versions
  • meta data and comments in the source which are not in the render
  • instantly at least 2 visualisations of the model (text, render)
  • often multiple visualisations with rendering parameters, which can help you see the model from a slightly different perspective e.g. different layout rules help spot different spatial relationships

Summary

Again, a surprising amount of opportunity for reflection on 'how to approach' something that, at a surface level, seems very simple.
I hardly touched my initial set of options about how to approach testing, so there still remains a lot that I could continue to pursue with the testing of this library. And if you watch the webinar that this text relates to, you will see how differently James, Steve and Tony approached the topic.
And you can find an outline summary of my lessons learned below:
  • Library execution from console
    • Diagram.parse("A->B:\nB->C:").drawSVG($('#demo').find('.diagram').get(0));
  • Markdown notes writeup
    • Can read as .txt
    • Can easily convert to PDF via dillinger.io
    • Images embedded within text are not rendered (possibly good for ad hoc image tracking)
    • like a text to diagram, a text to document allows embedded comments (images, HTML comments <!-- this won't be seen -->)
  • Tool as documentation
    • since the diagrams support # as a comment we can use that to our advantage when documenting the testing or raising defects e.g. urls, environment, versions - all stored in the diagram txt
    • Use the title as the issue name or test idea
    • Create minimal source to recreate the issue and embed in defect report
  • Text to diagrams
    • Fast to create, autolayout
    • Can tweak for 'better' layout e.g. \n, aliases and naming
    • Learn the nuances of the tool
    • Version control and compare previous versions
    • Easier to auto generate the diagram source than a diagram equivalent
    • Use the 'comments' functionality for meta data and notes
    • Human readable as text, visual as a different 'view'
  • Environment Control important - see the cross browser Participant issue (Chrome/Firefox)
    • What version is the demo site running?
    • Does the editor interfere? Cross-Browser Risk
  • Tester has different view of tool support required from testing
    • compare test.html with "explorer.html"
    • shows both simple/hand graph at same time
    • tracks history of usage
    • minimal libraries to avoid risk of interference
    • (but mine is buggy on IE because I don't know JS as well)
    • display parse errors on screen
  • Cross browser testing
    • testing what? GUI, Rendering, Parsing?
  • Observe, Interrogate, Manipulate
    • Console
    • JS Debugger - harder with minified code (use pretty print in console)
    • Network Tab

Tuesday, 12 May 2015

Lessons Learned Testing QA Mail for Black Ops Testing


On the Black Ops Testing Webinar of 11th May 2015 we tested QA Mail. You can find a write up and watch the webinar replay over on BlackOpsTesting.com

This post, expands a little on how I approached the testing and what I learned.

A long time ago, on a job far away, my team and I had to test website registrations and emails on various account events.

For our interactive testing this was fairly simple: we could use our company accounts or Gmail accounts, reset the environment, and then re-register again. For the automation we needed a different solution. We didn't want to use Gmail 'dot' and 'plus' addresses because we felt that the number of emails might put us against the Gmail terms and conditions.

We started using mailinator.com, creating ad hoc email addresses for the automation, and I created some abstractions to read and delete the emails. But Mailinator introduced a CAPTCHA part way through our testing and our automation failed.

I created a fairly simple AppEngine tool which acted as an email sink, and I wrapped a simple API around it for our automation. AppEngine has changed so much since then that the code I wrote will no longer work. But QA Mail does a very similar job to the AppEngine code that I wrote.

It provides a simple GUI and simple API wrapper around a polling daemon which reads a mailbox folder and brings in emails.

Great.

I approached the Black Ops Testing Webinar as a learning experience.


  • I didn't know too much about SMTP or email
  • I wanted to experiment with API Automation from a zero to 'working' state as fast as possible
  • I wanted to experiment with sending emails from Java
  • I wanted to know what tool support I would need for interrogating and comparing emails


Automation Abstractions


I started with the automation, and first off wanted to de-risk it by making sure I could send emails.

I had a quick try of the javax.mail libraries, and quickly decided to find an abstraction library to cut down the time I required to get up to speed and sending emails fast.

I started using Simple Java Mail https://github.com/bbottema/simple-java-mail

With a few basic emails sent, I started to work on the API abstractions for QA Mail. You can see the various twists and turns I took via the history on GitHub:

https://github.com/eviltester/qamail_automation

I created abstractions at a few different levels:


  • A QA Mail Rest API Call abstraction
  • A QA Mail Domain abstraction
  • A Mailbox abstraction


These all work at a similar level so they overlap a little.

This allowed me to create basic automation fairly simply.

They lack a way of twisting the API calls during automation, i.e.


  • make the REST call with a POST instead of a GET
  • add null parameters
  • use badly named params
  • add new params into the calls
  • re-order the params in the calls
  • etc.


I could achieve the above with direct calls using RestAssured, but since they are fairly common requirements when testing an API, I need to identify a different way of building abstraction layers which support 'testing' and not just 'exercising' the API.
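As a sketch of that idea, expressed with JavaScript's fetch rather than the RestAssured calls I actually used (the endpoint and parameter names are placeholders, not QA Mail's real API):

// the happy-path abstraction might issue: GET /api/mailbox?name=inbox
// a 'testing' abstraction should let us deliberately twist that call:
fetch('http://qamail.example.com/api/mailbox', {
  method: 'POST',                  // wrong verb for this call
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({
    name: null,                    // null parameter
    nmae: 'inbox',                 // badly named param
    unexpected: 'extra'            // param the API never defined
  })
}).then(function (response) {
  // what matters is how the API fails: a clear 4xx, or a 500?
  console.log(response.status);
});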

In this particular instance I didn't need that flexibility since the API would have thrown 500 errors on invalid calls. QA Mail was written to meet the needs of a single user and graciously released to open source in the event that it might help others.

I did use the API abstractions to test different combinations of routings, e.g. no 'to', multiple 'cc' and 'bcc', etc.

Unfortunately we were using an older version of the app which did have a bug in this area, so I didn't pursue this line of enquiry long. The bug has been fixed in the main code branch and on the QA Mail demo server.

Testing Emails


After sending a few emails, it quickly became apparent that I didn't really know what I was looking at in the raw email view.

I'm used to looking at HTTP headers, I'm not used to looking at email headers.

Was the email I was seeing correctly rendered?

How could I know?

A lack of oracles for domains we don't know well can make testing harder in the initial stages of building domain knowledge. One strategy I use is to find alternative sources rendering the same information, via complementary or competing renderers.

In this instance I used similar tools: mailinator and temp-mail.

Both of these accept emails to anonymous mailboxes and render the email as raw text so you can see the headers.

I saved these as text files and compared the output through WinMerge.

I found differences in the headers and had to go look them up to try and understand them. Oft times, many of the headers are actually added by the routing servers the mail winds its way through, so what I'm seeing is not what I actually created as the email.

So I needed to find a way to observe the emails I was sending out in the first place.

For the automation, I found a debug flag on Simple Java Mail which outputs the raw SMTP session to stdout, so I could see the original message, headers and encoding. I was then able to compare this to the output and see what the routing had added, and what might have been added by QA Mail. In the final analysis, nothing was added by QA Mail; it simply sucks out the message after it has flowed through postfix.

For my interactive testing, I discovered the 'Show Original' menu item in Gmail. This lets me see the 'raw' email sent to me, and the 'raw' email I'm sending out.

Very handy - I've actually become addicted to looking at email headers now, and I check the headers of most of the emails I receive. I find it amazing how much information they contain about the machines that the email has passed through in its travels. I encourage you to have a look for yourself. Never again will I send an email direct from my machine - I'll always try and use a server-based tool to avoid giving away my machine's IP address.

Observing the System Under Test


One of the interesting challenges I faced testing this was isolating where issues were introduced.

QA Mail runs on a server, so I can only observe it via logs.

It provides one log by default: the log for the import daemon that polls the mailbox folder.

I ssh'd into the test environment and could 'tail -f' the log file.

This way, when I started testing different email address formats I could see if they were actually hitting the system.

I found that many were not. Valid emails such as "1234--@" and "()<>[]:,;@\\\"!#$%&'*+-/=?^_`{}| ~.a" were not reaching the mailbox that QA Mail polled. They were being rejected by postfix, making it impossible for me to test how QA Mail would handle extreme email formats.

Identifying where processing occurs in an end to end system is a common challenge, and one we should be aware of when testing. So I recommend trying to understand the architecture of the application under test and trying to add observation points in as many positions in the chain as you can.


Summary


Normally when testing emails, I've been more focused on:


  • was an email sent?
  • Did the email render correctly?


When testing QA Mail I had to focus on:


  • Was it pulling in the appropriate information from the mailbox?
  • Was it routing the emails to the appropriate database tables?


And this forced me to consider new ways of observing and interacting with the system.