Thursday, 2 April 2015

Virtually Live in Romania - Technical Testing Webinar to Tabara De Testare

On 1st April I presented Technical Testing to the Tabara De Testare testing group in Romania.

I presented virtually over Google Hangouts. The Tabara De Testare testing group is spread over four cities in Romania, each of which live streamed the webinar to a room filled with their members. I could see the room via the presentation machine's webcam.

We also had 70+ people watching from the comfort of their own homes and workplaces.

Thanks to Tabara De Testare for organising the webinar.

I have released the slides from the webinar on Slideshare:



During the webinar I ran through the slides, then provided a short demo of Browser Dev tools supporting technical testing investigations on the redmine.org demo application.

Dangerously, I then tried to demo proxy tools to help answer a question from the audience.

Clearly, using a proxy tool while conducting a live webinar through a browser isn't the least dangerous option. And lo, I lost the Q&A chat window as a result. But I think we covered most of the questions during the live Q&A which followed.

If you'd like me to 'virtually' attend a testing group that you organise then please ask, as it's easier for me to fly around the world via webcam than it is to jump on a plane, and it means you don't get stung for travel and accommodation costs.

I will edit and upload the webinar to my Technical Web Testing course shortly.

Monday, 26 January 2015

Some API Testing Basic Introductory Notes and Tools


Some applications provide an API. Some websites provide an API. This post provides some information on API testing, since that appears to have consumed a lot of my time in January 2015. As preparation for our Black Ops Testing Workshop I performed a lot of API testing. And coincidentally the January Weekend Testing session chose API testing as its topic. There should be enough links in this blog to provide you with the tools I use to test APIs.

API - Application Programmer's Interface

The name suggests something that only programmers might use. And indeed an API makes life easier for software to interact with other software.

Really an API provides one more way of interacting with software:

  • By sending messages in an agreed format, to an agreed interface, and receiving an agreed response format back.

APIs tend to change less frequently, or in a more controlled fashion, than GUIs because when an API changes, all consumers of that API have to change as well.

Software tends not to have the requisite variety that a human user exhibits:

  • If you change the GUI then a human can probably figure out where you moved the button, or what new fields you added to the form that they need to type in. 
  • Software won't do that. Software will likely break, or fail to send the new information and so the interaction will break.
If you read this blog through an RSS reader then the RSS reader has used this blog's API. The API consists of a GET request on a URL to receive a response in XML format (an RSS feed).

You, as a user, could GET the same URL and read the XML in the browser, but the API tends not to offer the same user experience, so we don't often do that. Or we use tools, like an RSS Reader, to help us.
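
For example, with cURL you could issue the same GET request yourself from the command line (the feed URL here is made up, just for illustration):

    # GET the feed and show the response headers as well as the XML body
    curl -s -i "http://example.com/blog/feed.rss"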

Manually Testing an API

Just because the API calls itself a Programmer's Interface does not mean that all our interaction with the API has to involve programming.

We can issue requests to an HTTP API with minimal tooling:
  • Use a browser to issue GET requests on an HTTP API
  • Use an HTTP Proxy to issue HTTP requests to an HTTP API
  • Use command line tools such as cURL or Wget to issue HTTP requests to an HTTP API
  • Use specific tools e.g. Postman to issue HTTP requests to an HTTP API
Preparation for Black Ops Testing

When choosing software for Black Ops Testing and Training workshops, I like software that has multiple methods of interaction, e.g. GUI, API, Mobile Apps/Sites.

This way testing can:
  • compare GUI against API, as well as underlying database
  • use the API to load data to support manual testing
  • check GUI actions by interrogating and manipulating the system through the API
  • test directly through the API
Prior to the Black Ops Testing workshop I had tested a lot of APIs, and I generally did the following:
  • Create confirmatory automation to check the API against its documentation.
  • Manually test the API using HTTP Proxy tools:
    • to create requests and observe responses
    • to edit/replay messages and observe responses
    • to use the fuzzing tools on proxies to create messages with predefined data payloads
  • Use the browser for simple querying of the API, and Firebug with FirePath to help me analyse the responses
During the run up to the Black Ops Testing workshop I was re-reading my Moshe Feldenkrais books, and a few quotes stood out for me, the following being the first:


"...learning means having at least another way of doing the same thing."
I decided to increase the variety of responses available to me when testing an API and learn more ways of sending messages to an API and viewing the responses.

So I learned cURL to (sketched below):

  • send messages from the command line
  • feed them through a proxy
  • change the headers
  • create some data-driven messages from the command line with data sets in a file
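
A rough sketch of the kind of calls I mean - the endpoint, proxy port and data file are made up for illustration:

    # send a message from the command line
    curl -s "http://example.com/api/tasks"

    # feed it through a local proxy listening on port 8080
    curl -s --proxy "http://localhost:8080" "http://example.com/api/tasks"

    # change the headers
    curl -s -H "Accept: application/json" -H "X-Custom-Header: test" "http://example.com/api/tasks"

    # data driven: POST each line of a data file as a message body
    while read payload; do
      curl -s -X POST -H "Content-Type: application/json" -d "$payload" "http://example.com/api/tasks"
    done < payloads.txt
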
I used 3 different proxies to experiment with their features of fuzzing, message construction and message viewing.

I experimented with different REST client tools, and settled on Postman.

I now had multiple ways of doing the same thing, so that when I encountered an issue with Postman, I could try to replicate it using cURL or proxies and see if my problem was with the application or my use of Postman.
  • similarly with any of the tools, I could use one or other of the tools to isolate my problem to the app or my use of that specific tool
  • this helped me isolate a problem with my automation, which I initially thought was application related
Weekend Testing

During the Weekend Testing session, we were pointed at the songkick.com API.

I wanted some additional tool assistance to help me analyse the output from the API.

While Postman does a very capable job of pretty printing the XML and JSON, I needed a way to reduce the data in the message to something I could read more easily.

So instead of viewing the full XML tree, I used codebeautify.org/Xpath-Tester to create simplified XPath queries which rendered a subset of the data. For example, if I wanted to read all the displayNames for events, I could click on the events in the tree and find the displayName attribute, or I could use XPath to show me only the displayNames for events:
  • //event/@displayName
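
The same reduction can be done locally too. For example xmllint (part of libxml2) will run the XPath query from the command line, assuming you have saved the API response to a file first:

    # show only the displayName attributes for events in a saved response
    xmllint --xpath "//event/@displayName" response.xml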

Looking back over my Feldenkrais notes, I can see a relevant quote for this:
"... he said something about learning the thing I already know in a different way in order to have free choice. For that I must be able to tell differences. And the differences must be significant. But I can distinguish smaller differences, not by increasing the stimulus, but by reducing the effort. To do this I must improve my organization."

Summary
My approach to API testing has changed, because I spent the time increasing the variety of responses I have to the task of testing an API.

I didn't really mention automation in the above, although I use RestAssured for much of my API automation at the moment.

The above is a subset of what I have learned about API testing. 

I plan to continue to increase the variety of responses I have to testing APIs and increase my experience of testing APIs. I will over time collate this information into other posts and longer material, so do let me know in the comments if there are any questions or topics you would like to see covered in this blog.


Thursday, 18 December 2014

My search for easy to use, free, local HTTP servers


I have lost count of the number of times I've had to look for a local HTTP server.
  • Experimenting with an open source app
  • Writing some HTML, JavaScript, PHP
  • Testing some flash app for a client
  • Running some internal client code
  • etc. etc.
And since this isn't something I do every day, I forget how to do it each and every time I start.

I forget:
  • Which servers I already have installed
  • Where I installed them
  • Which directory I configured them to use
  • What local names did I give them to make it 'easy' for me to work with them
  • etc. etc.
Now it might just be me that faces this problem.

If so, I expect you have already stopped reading.

So to cut to the chase, my current favourites are Mongoose (Windows, Mac, Linux) and VirtualHostX (Mac).


Other HTTP Stacks

I have used some of the biggies, and I probably still have them installed.

And some of the tinies, and some others that I can't remember.

All have been useful at the time. Sometimes I tried to install one but couldn't get it working on client machines because of permissions etc. etc.

I started looking around for alternatives that I could use during training courses, webinars etc.

Some I have not used

Prior to writing this post I was aware that Python had the capability to start up a small HTTP server from the command line, but I hadn't used it. After publication, Brian Goad tweeted his usage of Python to do this.

Brian continued:
    could be easily used as a function that takes the dir as argument: simple-server(){ cd $1; python -m SimpleHTTPServer; }
    just go to localhost:8000 and you're set!
After Brian's reminder I had a quick look to see what other languages can do this:
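
A few one-liners I'm aware of - each serves the current directory over HTTP on the given port, though exact behaviour varies by language version, so treat these as starting points:

    # Python 2
    python -m SimpleHTTPServer 8000

    # Python 3
    python3 -m http.server 8000

    # PHP 5.4+
    php -S localhost:8000

    # Ruby
    ruby -run -e httpd . -p 8000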

If you know of any more languages that have this as part of their default then leave a comment and I'll add them here.

Virtual Machine Stacks

One thing I started using was virtual machines that have the software installed already and don't require me to set up a web server.
These are great for getting started quickly, but require a little download overhead - which can be painful over conference internet connections.

Sometimes I set up machines in the cloud, preinstalled.
As an additional backup, I like to have a local version that I can share.

VirtualHostX for the Mac

Since I mainly travel with a Mac Laptop I started using VirtualHostX for that.

VirtualHostX is basically a GUI that helps me work with the LAMP stack already installed on the Mac.

I can avoid the Mac command line config. I can avoid installing everything else, and just use VirtualHostX to configure and start/stop everything.

This saved a massive amount of time for me and I do recommend it. But it is Mac only.

Mongoose for Mac, Windows and Linux

I recently encountered Mongoose. It works on Mac, Windows and Linux. 

I used the free version to quickly experiment with some downloaded open source libraries. 

All you do is download the small executable into the directory and run it, and you get the traditional XAMPP-style taskbar icon and easy to use config.

You can run multiple versions by having them listen on different ports.

I paid $8 for the Windows dev version which allows me to view the HTTP traffic easily as well. This $8 also gives me access to the Linux Pro version. For an extra $5 I could get access to the MacOS pro version.

Summary

I suspect that 'proper' web developers will always prefer an XAMPP installation. But they will also use it more and be completely familiar with it.

For someone like me, who jumps between apps, configs, machines, sites, etc., lighter tools are a better fit.

I suspect that at some point I'll probably jump back to XAMPP due to some future client needs. But for my own work, VirtualHostX and Mongoose are my current easy to use solutions.

What do you use?

Friday, 28 November 2014

Agile Testing Days 2014 - Workshop and Tutorial

At Agile Testing Days 2014, I presented a full day workshop on "Technical Testing" in Agile and was part of the Black Ops Testing Workshop with Steve Green and Tony Bruce.

Note: there are limited spaces left on our Black Ops Testing Full Day tutorial in London in January 2015.

Both of these were hands on events.

In the tutorial I present examples of Technical Testing, and how to integrate Technical Testing into Agile; the participants also test a real live application and apply the techniques, mindsets and tools that I describe.

Since it describes Technical Testing in an Agile system, we also spent time discussing the injection points for the technical testing process and thought processes.

The Black Ops Testing workshop took a similar approach but with less 'talking' since it was a much shorter time period with more people.

We started with 5 minute lightning talks from myself, Tony, and Steve. During these we emphasized something important that we hoped the participants would focus on during their testing. We then let the participants loose on the system as we mingled. We coached, asked questions and observed. Then during the debrief we extemporized on our observations and asked questions about what we saw, to pull out the insights from the participants. We repeated this process over the session.

Both the Black Ops Testing Workshop and my Tutorial used Redmine as the application under test.

We picked Redmine for a number of reasons, and I'll list some below:

  • Virtual Machines are available which make it easy to install
  • The Virtual Machines can be deployed easily to Amazon cloud servers
  • It is available as an Amazon Cloud Marketplace system, making it easy to install
There are some words in there you might notice: "easy to install", "deployed easily".

Wouldn't it be great if all the apps we had to test were easy to install and configure and gain access to?

Yes it would. And as testers, when we work on projects, we can stress this to the project team, or work on it ourselves so that we don't spend a lot of time messing about with environments.

I used Bitnami and their dashboard to automatically deploy and configure the environment. Tony used the Amazon AWS Marketplace and worked with their dashboard. James Lyndsay helped us out, went all old school, and deployed the base install to the machine.

I learned from this, that my exploratory note taking approach has permeated all my work. As I was installing the environment and configuring it, I made notes of 'what' I wanted to do, 'where' I was finding the information I needed, 'how' to do the steps I took, what data I was using (usernames, passwords, environment names, etc.). And when I needed to repeat the installation (I installed to a VM in Windows, on Mac, and in the cloud), I had all my notes from the previous installations.

When something went wrong in the environment, and I didn't have access to the parts of the system I needed, I was able to look through my notes and I could see that there were activities I had not performed. I thought I had, but my notes don't lie to me as much as my memory does.

It should come as no surprise to you then, that I stress note taking in my tutorial, and in the Black Ops Testing Workshop.


You can find the slides for the Black Ops Testing Workshop on slideshare. You can find more details about Black Ops Testing over on our .com site


Agile Testing Days 2014 - Keynote

I presented a keynote at Agile Testing Days 2014, and took part in the Black Ops Testing Workshop, and presented a one day tutorial on "Technical Testing in Agile". This post covers the Keynote.

The keynote was underpinned by the notion that 'Agile' is not a 'thing'; instead 'Agile' provides the context within which our project operates and therefore is part of the Weltanschauung of the project. Or as I referred to it, the "System Of Development".

Because really I concentrate on 'Systems'. I think that I view 'context' as an understanding of the System. And remember that we, the testers, form part of that system and the context as well. Therefore our 'beliefs' about testing become important, as they impact how we interact with the people, and the system, and the process, in place on the project.

As ever, I made a lot of notes before the Keynote, and I present those below.

The slides are available on slideshare. The talk was recorded, but has not yet appeared online. My notes below might help you make sense of the slides.



A few things to note. Many of the keynotes overlapped: talking about role identification and beliefs, communicating from your own experience and models, building models of your process and testing, etc.

And prior to Agile Testing Days, Michael Bolton presented a Eurostar webinar on 'Agile', and James Bach presented "Skilled Testing and Agile Development" at Oredev 2014. I include links to both of those for your delectation because they represent other testers applying Systems Thinking to create a model of 'Testing in Agile'. Watch them both.


Abstract:

Every Agile project is different, we know this, we don't do things 'by the book' on Agile projects. We learn, we interact, we change, we write the book as we go along. Throughout all of this, testing needs to remain viable, and it needs to add value. Remaining viable in this kind of environment can be hard.

Fortunately, we can learn to add value. In this keynote, Alan will describe some of the approaches and models he has used to help testing remain viable. Helping testers analyze the 'system of development' so the test approach can target process risks. Helping testers harness their own unique skills and approaches. The attitudes that the testing process often needs to have driving it, and the skill sets that teams need to ensure are applied to their testing.

At a simple level, this is just Systems Thinking and Modeling. In practice this can prove highly subversive and deliberately provocative. Because we're not talking about 'fitting in', we're talking about survival.

Notes:

Warren Zevon wrote "Ain't That Pretty At All" in 1982.

Warren Zevon was one of those singers who, when he comes on, I pretty much have to listen; his voice and songs drag me in, rather than sitting as background music.

In this song, Mr Zevon describes a character who is pretty jaded.

     Well, I've seen all there is to see
     And I've heard all they have to say
     I've done everything I wanted to do . . .
     I've done that too

I know what that feels like. I’ve done management, performance testing, exploratory testing, agile testing, security testing, acceptance testing, UAT, automation, etc.

I’ve worked on Agile, waterfall, heavy weight documentation, lean, yada yada yada. None of it has ever fully worked or been perfect.

Feels like I've done everything. The danger is I become jaded or fixed in my ways.

People want Agile to be perfect.

And this Warren Zevon character doesn't like what he sees.

And Agile isn’t perfect.

Reality doesn’t seem to match up with his expectations, or desires, or wants.

     And it ain't that pretty at all
     Ain’t that pretty at all

Agile can be messy. It doesn’t always match the books or talks. And when you are new to it sometimes you don’t like what you see.

I went to the Grand Canyon; it took a bus a couple of hours to get there. When I got out, everyone else seemed to see a wondrous example of Mother Nature.

I saw some cliffs.

We may not have the strategies to cope

I was back on the bus in 10 minutes. I might be jaded but I can avoid the sunk cost fallacy.

But we might not have the strategies we need to deal with the situation.

    So I'm going to hurl myself against the wall
    'Cause I'd rather feel bad than not feel anything at all

If you come from a waterfall background you might not know how to handle the interactions on an agile project. And if your prep was books and blogs, you might find they described an ideal where the strategies they used are not in place.

Some of our strategies for coping might be self destructive and we might not notice, and other people might not tell us. Because systems are self-healing and they can heal by excluding the toxic thing in the system.

Without the right strategy we make the wrong choice

And when you don’t have a lot of strategies you end up making choices and taking actions that aren’t necessarily the most appropriate for the situation.

Testers telling developers that the project would be better if they just did TDD and paired, or if we all worked on automated acceptance tests together, might not get the outcome that you want.

You might fall back on strategies that worked on other projects. But don’t fit in this one.

    So I'm going to hurl myself against the wall
    'Cause I'd rather feel bad than not feel anything at all

You end up wanting to write lots of documentation up front because you’ve always done it, or you want to test in places where the stories don’t go, or you want to remind everyone that ‘Agile’ isn’t done like that.

Whatever...

So very often, my job….

    I've been to Paris
    And it ain't that pretty at all
    I've been to Rome
    Guess what?

Is to help people, with their beliefs and expectations. To work with the people and system around them.

Because when I walked away from the Grand Canyon it wasn't a reflection on the Grand Canyon, it was a reflection of me. My expectations were different from the reality. My belief in being wowed, because of all the things I'd heard about it, stepped in the way of seeing what was on the ground in front of me. Or in the ground in front of me.

So they don’t do something stupid

   I'd like to go back to Paris someday and visit the Louvre Museum
   Get a good running start and hurl myself at the wall
   Going to hurl myself against the wall
   'Cause I'd rather feel bad than feel nothing at all
   And it ain't that pretty at all
   Ain't that pretty at all

So they don’t do something stupid on the project that stops them maximising their effectiveness, and puts blockers in the way to working effectively with the team.

I help testers survive in Agile projects

That’s never my role title. Test Manager, Test Consultant, Automation Specialist. Agile Tester. Blah Blah Blah.

But I seem to help testers survive, and help testing survive. So that we don’t fall prey to just automating acceptance criteria, we actually test the product and explore its capabilities.

And I say ’survive’, because that's what I had to learn to do.

We survive when we add value

I think we survive when we add value, learn, and make ourselves a viable part of the project in ways that are unique to us.

In "The Princess Bride", the hero Westley is caught by the Dread Pirate Roberts, and offers to be his valet for 5 years.

The Dread Pirate Roberts agrees to try it but says he will most probably kill him in the morning. He’s never had a valet before so doesn’t know how a valet would add value to him, or how he could use him.

And when the morning arrives and the Dread Pirate Roberts comes to kill poor Westley, Westley thanks him for the opportunity to have learned so much about the workings of a pirate ship the previous day.

Westley explains to the Dread Pirate Roberts how much he learned about the workings of the ship, and how he had helped the cook by pairing with him, and how he had reorganised the items in the cargo hold.

And every day Westley survives his capture by the Dread Pirate Roberts by working and adding value during the day. And every day he demonstrates the value that he can add to the operation of the pirate ship. And every day he learns more about piracy and the skills of pirates.

Until, years later, the Dread Pirate Roberts explains that the Dread Pirate Roberts is a role, not a person, and so they dock in a port, take on a new crew, and Westley adopts the role of Dread Pirate Roberts.

And testers need to act such that people don't ask "What does testing do on Agile?", because they know what their testers do on the project, and they see value in those activities. They know what specifically "Bob and Eris, or Dick and Jane, or Janet and John", their testers, actually do.

This isn’t fluffy people stuff

When I look online and see the type of strategies that people describe for surviving and fitting in on Agile projects, it isn’t quite what I do or recommend, so I know there are alternative paths and routes in.

I’m not here to make you look good

I’m really not here to make you look good.

I mean I’m not here to make you look bad… but I might.

And I’m not here to make the product look bad… but I might.

And I’m not here to make the process, or the testing, or the ‘whatever’ look bad… but I might.

If it ain't that pretty, then we can do something about it when we recognise that, and negative feedback can help.

I’m really here to help raise the bar and improve the work we all do together. If that makes you look good, then that’s great, if that makes you look bad and you improve as a result, then that’s great too. Looking good is a side-effect.

Survival does not mean 'fitting in'

I’m not talking about fitting in. Buying Lunch. Making people look good. yada yada.

I don’t think you get bonus points for doing that.

But if that’s you, don’t stop. I think its great if people want to buy doughnuts for everyone.

It's not me. So there are other ways to survive on projects. I don't have many strategies for 'social success'.

Personally I think team work means helping other people contribute the best stuff they can, and the stuff that only they can, and helping pass that knowledge around the team. And everyone needs to do that.

Testers survive by doing testing stuff

It's pretty much as complicated as that. Learn to test as much as you can, and drop all the bureaucracy and waste that adds no value. Then you've covered the basics.

Then you learn how to help others improve by passing on your knowledge and mindsets.

And we are taught how to do ‘testing stuff’ so if we do that well and improve our testing skills then we have a good chance of survival.

Testers are taught to work with the System Under Development

Essentially we are taught how to understand and work with the System Under Development

We learn techniques to analyse the system and model it and then build questions from the model which we ask of the system.

We might build a data domain model, then build questions around its boundaries, and then ask the system those questions by phrasing them in a form the system understands.

We learn how to do that as testers with techniques and analysis and blah de blah.. testing stuff.

We survive when we adapt to the System Of Development

Testing survives when it learns to work with the Systems in place. And two obvious systems are the System under development, and the System of development. We have to adapt to both.

And fortunately testers are often good at analysing systems. Modeling them. Viewing them from different perspectives. Breaking them into chunks. Viewing the flow of data. Working out what data becomes information to subsystems. Looking for data transformations. Looking for risk in process and communication. Then figuring out different injection points.

When we work with systems under development we don't just input data at one end and see what happens out the other. Same with Systems of development, we don't just work at the stories that come in, and then check the system that comes out. We learn to look for different injection points and create feedback earlier.

So one of the main things I help testers do, is analyse the System Of Development, and adapt to it.

Many of the testing survival tricks and techniques we learn relate to Waterfall projects.

Annie Edson Taylor thought she would become rich and famous if she could survive a trip over Niagara Falls in a barrel.

She survived. She wasn’t rich and famous.

She survived by padding out her barrel and increasing the air pressure in the barrel.

You can get yourself ready before you hit Agile.

I survived waterfall by removing all padding, and taking responsibility for what I did.

I analysed the system of development and did what this specific implementation needed, so I could survive, and I took responsibility so that I could add value.

So I was in a better place than many people on waterfall projects when we started working on Agile.

Blake wrote "I must create a system..."

I must create a system or be enslaved by another man's. My business is not to reason and compare, my business is to create.

This is one of the first texts I ever read on system design and modelling, and I read it when I was about 19 or 20, and stuff stays with you when you encounter it early.

But this is my meta-model of modelling. And I know I have to own and take responsibility for my view of the world and the systems that I work with. Otherwise I'll fall prey to 'other' people's strategies.

I remember the first time I worked on an ‘Agile’ project

I had to learn the strategies to survive. And that’s how I can recognise the same blockers in other testers or projects new to Agile.

I fell prey to the Agile Hype

I had a whole set of beliefs about what agile was going to be like:

- the pairing
- the TDD
- the ATDD
- the BDD
- the ADBDBTBD - I might have made that one up

But there was a lot of stuff.

And it ain't that pretty at all

Reality wasn’t as pretty as I imagined.

I got stuck.

Remember I knew how to handle Waterfall. I had that down.

I could work the system of development and work around the blocks and annoyances that it threw at me.

But here was a new thing. A new system.

Stuck on...

I knew what we were 'supposed' to do. But people weren't really doing that, and I didn't know how to do this… hybrid thing.

I realised that people didn’t know what to do.

And I realised I simply didn’t have the basic skills to work in this new system of development.

I was the worst ‘pair’ in the world. You know when deer get caught in headlights. That was me the first time I was given the keyboard in a pairing session.

So I did what I always do...

Try to take over the world.

No - of course not. I've tried that before. Trying to impose your model on top of everyone else's and making it work is hard.

Its easier to treat the system as a unique system and model it. Observe it. Understand it.

Look at the parts, the subsystems, the relationships, the feedback flows, the amplifiers, the attenuators.

See I think this is what I do, I work with systems

That’s why my social strategies are… different.

I see the people systems in front of me. The processes. The politics. etc.

And that led to me changing… me

And much of what I did, I teach testers to do on projects.

I worked on my coding skills so I could pair

Every Agile Project is different. 

So we learn to analyse the system that is in place. The Weltanschauung. The context.

Systems have common concepts and elements, in general: entities, relationships, data flows, subsystems etc.

But each specific system is different, and we can view it in different ways, with different models.

Thursday, 6 November 2014

Confessions of An Accidental Security Tester

At Oredev 2014 I presented "Confessions of an Accidental Security Tester".

The slides are on Slideshare; the video is below and on Vimeo:


"Alan Richardson does not describe himself as a security tester. He's read the books so he knows enough to know he doesn't know or do that stuff properly. But he has found security issues, on projects, and on live sites that he depends on for his business.

You want to know user details? Yup, found those. You want to download the paid-for assets from the site without paying for them? Yup, can do. You want to see the payment details for other people? OK, here they are. All of this, and more, as Alan stumbled, shocked, from one security issue to the next.

In this session Alan describes examples of security issues, and how he found them: the tools he used, why he used them, what he observed and what that triggered in his thought processes.

Perhaps most shocking is not that the issues were live, and relatively easy to find and exploit, but that the companies were so uninterested in them. So this talk also covers how to 'advocate' for these issues. It also warns you not to expect rewards and gratitude. Companies with these types of issues typically do not have bug bounty schemes.

Nowadays, many of the tools you need to find and exploit these issues are built in to the browser. Anyone could find them. But testers have a head start. So in this session Alan shows how you can build on the knowledge and thought processes you already have, to find these types of issues.

This is a talk about pushing your functional testing further, deeper, and with more technical observation, so you too can 'accidentally' discover security issues."

CONFESSIONS OF AN ACCIDENTAL SECURITY TESTER - "I DIDN'T BREAK IN, YOU LEFT THE DOOR OPEN" from Øredev Conference on Vimeo.

Wednesday, 24 September 2014

An exploratory testing example explored: Taskwarrior

or "Why I explored Taskwarrior the way I did".

In a previous post I discussed the tooling environment that I wanted to support my testing of Taskwarrior for the Black Ops Testing webinar of 22nd September 2014.

In this post, I'll discuss the 'actual testing' that I performed, and why I think I performed it the way I did. At the end of the post I have summarised some 'principles' that I have drawn from my notes.

I didn't intend to write such a long post, but I've added sections to break it up. Best of luck if you make it all the way through.


First Some Context

First, some context:
  • 4 of us are involved in the testing for the webinar
  • I want the webinars to have entertainment and educational value
  • I work fast to try and find 'talking points' to support the webinar

The above means that I don't always approach the activity the same way I would a commercial project, but I use the same skills and thought processes. 

Therefore to meet the context:
  • I test to 'find bugs fast' because they are 'talking points' in the webinar
  • I explore different ways of testing because we learn from that and can hopefully talk about things that are 'new' for us, and possibly also new for some people on the webinar

I make these points so that you don't view the testing in the webinar as 'this is how you should do exploratory testing' but you understand 'why' I perform the specific exploration I talk about in the webinar.

And then I started Learning


So what did I do, and what led to what?

(At this point I'm reading my notes from github to refresh my memory of the testing)

After making sure I had a good enough toolset to support my testing...

I spent a quick 20 minutes learning the basics of the application using the 30 second guide. At this point I knew 'all you need to know' to use the application and create tasks, delete tasks, complete tasks and view upcoming tasks.

Because of the tooling I set up, I can now see that there are 4 files used by the application to store data (in version 2.2), and I kept them all in view while testing (sketched after the list):

  • pending.data
  • history
  • undo.data
  • completed.data
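
A sketch of how I watched them with multitail, assuming Taskwarrior's default ~/.task data directory:

    multitail ~/.task/pending.data ~/.task/history ~/.task/undo.data ~/.task/completed.data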

From seeing the files using multitail, I know:
  • the format of the data
  • items move from one file to another
  • undo.data has a 'before' and 'after' state
  • history would normally look 'empty' when viewing the file, but because I'm tailing it, I can see data added and then removed, so it acts as a temporary buffer for data

In the back of my head now, I have a model that:
  • there might be risks moving data from one file to another if a file was locked etc. 
  • text files can be amended, 
    • does the app handle malformed files? truncated files? erroneous input in files?
  • state is represented in the files, so if the app crashed midway then the files might be out of sync
  • undo would act on a task action by task action
  • as a user, the text files make it simpler for me to recover from any application errors and increase the flexibility open to me as a technical user
  • as a tester, I could create test data pretty easily by creating files from scratch or amending the files
  • as a tester, I can put the application into the state I want by amending the files and bypassing the GUI

All of these things I might want to test for if I was doing a commercial exploratory session - I don't pursue these in this session because 'yes they might make good viewing' but 'I suspect there are bugs waiting in the simpler functions'.

After 20 minutes:
  • I have a basic understanding of the application, 
  • I've read minimal documentation, 
  • I've had hands on experience with the application,
  • I have a working model of the application storage mechanism
  • I have a model of some risks to pursue

I take my initial learning and build a 'plan'


I decide to experiment with minimal data for the 'all you need to know' functionality and ask myself a series of questions, which I wrote down as a 'plan' in my notes.

  • What if I do the minimal commands incorrectly?

Which I then expanded as a set of sub questions to explore this question:
  • if I give a wrong command?
  • if I miss out an id?
  • if I repeat a command?
  • if a task id does not exist?
  • if I use a priority that does not exist?
  • if I use an attribute that does not exist?
This is not a complete scope expansion for the commands and attributes I've learned about, but it is a set of questions that I can ask of the application as 'tests' - sketched below as concrete commands.
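
For example (ids and attribute values made up):

    task app 1 extra text       # a wrong command - 'app' instead of 'add'
    task done                   # missing out an id
    task 1 done                 # repeated, to see what a second 'done' does
    task 99 done                # a task id that does not exist
    task add priority:Z fix it  # a priority that does not exist
    task add nosuch:1 fix it    # an attribute that does not exist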

I gain more information about the application and learn stuff, and cover some command and data scope as I do so.

I start to follow the plan


10 minutes later I start 'testing' by asking these questions of the application.

Almost immediately my 'plan' of 'work through the questions and learn stuff' goes awry.

I chose 'app' as a 'wrong command' instead of 'add', but the application prompts me that it will modify all tasks. I didn't expect that. So I have to think:
  • the message by the application suggests that 'app' == 'append'
  • I didn't know there was an append command, I didn't spot that. A quick 'task help | grep append' tells me that there is an append command. I try to grep for 'app' but I don't see that 'app' is a synonym for append. Perhaps commands can be truncated? (A question about truncation to investigate for later)
  • I also didn't expect to see 3 tasks listed as being appended to, since I only have one pending task, one completed task, and one deleted task. (Is that normal? A question to investigate in the future)

And so, with one test I have a whole new set of areas to investigate and research. Some of this would lead to tests. Some of this would lead to reading documentation. Some of this would lead to conversations with developers and analysts and etc. if I was working on a project.

I conduct some more testing using the questions, and add the answers to my notes.

I also learn about the 'modify' command.

Now I have CRUD commands: 'add', 'task', 'modify' | 'done', 'delete'

I can see that I stop my learning session at that point - it coincidentally happens to be 90 mins. Not by design; just that seemed to be the right amount of time for a focused test learning session.

I break and reflect


When I come back from my break, I reflect on what I've done and decide on an approach, asking questions like:
  • how can I have the system 'show' me the data? i.e. deleted data
  • how can I add a task with a 'due' date to explore more functionality?
These questions lead me to read the documentation a little more. And I discover how to show the internal data from the system using the 'info' command. I also learn how to see 'deleted' tasks and a little more about filtering.

Seeing the data returned by 'info' makes me wonder if I can 'modify' the data shown there. 

I start testing modify


I already know that I can delete a task with the delete command. But perhaps I can modify the 'status' and create a 'deleted' task that way?

And I start to explore the modify command using the attributes that are shown to me, by the application. (you can see more detail in the report)

And I can 'delete' a task using modify, but it does not exercise the full workflow, i.e. the task is still in the pending file until I issue a 'report' (command sketch after the list).

My model of the application changes:
  • reporting actually does a data cleanup prior to the report, and moves deleted tasks and hanging actions so that the pending file is 'clean'
  • 'modify' bypasses some of the top level application controls so might be a way of stressing the functionality a little
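
In command terms, the observation looked roughly like this (task id made up):

    task 2 modify status:deleted   # marked deleted, but the record stays in pending.data
    task list                      # running any report triggers the clean up and moves it
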
At this point, I start to observe unexpected side-effects in the files. And I find a bug where I can create a blank undo record that I can't undo (we demonstrate this in the webinar).

This is my first 'bug' and it is a direct result of observing a level lower than the GUI, i.e. at a technical level.

I start modifying calculated fields


I know, from viewing the storage mechanism, that some fields shown on the GUI, i.e. ID and Urgency, do not actually exist. They do not exist in the storage mechanism in the file. They are calculated fields and exist in the internal model of the data, not in the persistent model.

So I wonder if I can modify those?

The system allows me to modify them, and add them to the persistence mechanism, but ignores them for the internal model.

This doesn't seem like a major bug, but I believe it to be a bug, since other attributes which do not exist in the internal model, e.g. 'newAttrib' and 'bob', are not accepted by the modify command, but 'id' and 'urgency' are.
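
A sketch of that asymmetry (ids and values made up):

    task 1 modify urgency:100   # accepted and written to the data file, ignored by the internal model
    task 1 modify id:42         # accepted, same story
    task 1 modify bob:1         # rejected - not a recognised attribute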

I start exploring the 'entry' value


The 'entry' value is the date the task was entered. This is automatically added:

  • Should I be able to amend it?
  • What happens if I can?
So I start to experiment.

I discover that I can amend it, and that I can amend it to be in the future.

I might expect this with a 'due' date. But not a 'task creation date' i.e. I know about this task but it hasn't been created yet.

I then check if this actually matters, i.e. if tasks that didn't officially exist yet are always less urgent than tasks that do, then it probably didn't matter.

But I was able to see that a task that didn't exist yet was more urgent than a task that did.
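
Roughly, the experiment was (date and id made up):

    task 1 modify entry:2020-01-01   # a creation date in the future - accepted
    task list                        # and the not-yet-created task can outrank existing tasks on urgency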

Follow-on questions that I didn't pursue would then relate to other attributes on the task:
  • if the task doesn't exist yet, should priority matter?
  • Can I have a due date before the entry exists?
  • etc.
Since I was following a defect seam, I didn't think about this at the time.

And that's why we review our testing. As I'm doing now. To reflect on what I did, and identify new ways of using the information that I found.

Playtime


I'd been testing for a while, so I wanted a 'lighter' approach.

I decided to see if I could create a recurring task that recurred so quickly that it would generate lots of data for me.

Recurring tasks are typically weekly or daily. But what if a task repeated every second? In a minute I could create 60 and have more data to play with.

But I created a Sorcerer's Apprentice moment. Tasks were spawning every second, but I didn't know how to stop them. The application would not allow me to mark a 'parent' recurring task as done, or delete a 'parent' recurring task. I would have to delete all the children, but they were spawning every second. What could I do?

I could probably amend the text files, but I might introduce a referential integrity or data error. I really wanted to use the system to 'fix' this.

Eventually I turned to the 'modify' command. If I can't mark the task as 'deleted' using the 'delete' command, perhaps I can bypass the controls and modify it to 'status:deleted'.

And I did. So a 'bug' that I identified about bypassing controls was actually useful for my testing. Perhaps the 'modify' command is working as expected and is actually for admins etc.
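
Roughly what this looked like - the recurrence syntax is from memory, so treat it as illustrative:

    task add Spawner due:now recur:1sec   # children spawn every second
    task 2 modify status:deleted          # bypass the blocked 'delete' on the parent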

Immediate Reflection


I decided to finish my day by looking at my logs.

I decided:

  • If only there were a better way of tracking the testing.

This was one of the questions I identified in my 'tool setup session', but I had decided that a 'good enough' approach was fine to start with.

And now, having added some value with testing, and learned even more about what I need the tooling to support, I thought I could justify some time to improve my tooling.

I had a quick search, experimented with the 'script' command, but eventually found that using a different terminal which logged input and output would give me an even better 'good enough' environment.

Release


We held the Black Ops Testing Webinar and discussed our testing, where I learned some approaches from the rest of the team.

I released my test notes to the wild on github.

Later Reflection


This blog post represents my more detailed reflection on what I did.

This reflection was not for the purpose of scope expansion, clearly I could do 'more' testing around the areas I mentioned in my notes.

This reflection was for the purpose of thinking through my thinking, and trying to communicate it to others as well as myself. Because it is all very well seeing what I did, with the notes. And seeing the questions that I wrote allows you to build a model of my test thinking. But a meta reflection on my notes seemed like a useful activity to pursue.

If you notice any principles or approaches in my notes that I didn't document here, then please let me know in the comments.

Principles

  • Questions can drive testing...
    • You don't need the answers, you just need to know how to convert the question into a form the application can understand. The application knows all the answers, and the answers it gives you are always correct. You have to decide if those 'correct' answers were the ones you, or the spec, or the user, or the data, or etc. etc., actually wanted.
  • The answers you receive from your questions will drive your testing
    • Answers lead to more questions. More questions drive more testing.
  • The observations you make as you ask questions and review your answers, will drive your testing.
    • The level to which you can observe, will determine how far you can take this. If you only observe at the GUI layer then you are limited to a surface examination. I was able to observe at the storage layer, so I was able to pursue different approaches than someone working at the GUI.
  • Experiment with internal consistency
    • Entity level with consistency between attributes
    • Referential consistency between entities
    • Consistency between internal storage and persistent storage
    • etc.
  • Debriefs are important to create new plans of attack
    • personal debriefs allow you to learn from yourself, identify gaps and new approaches
    • your notes have to support your debriefs, otherwise you will work from memory and won't give yourself all the tools you need to do a proper debrief
  • Switch between 'serious' testing and 'playful' testing
    • It gives your brain a rest
    • It keeps you energised
    • You'll find different things.