Wednesday, 24 September 2014

An exploratory testing example explored: Taskwarrior

or "Why I explored Taskwarrior the way I did".

In a previous post I discussed the tooling environment that I wanted to support my testing of Taskwarrior for the Black Ops Testing webinar of 22nd September 2014.

In this post, I'll discuss the 'actual testing' that I performed, and why I think I performed it the way I did. At the end of the post I have summarised some 'principles' that I have drawn from my notes.

I didn't intend to write such a long post, but I've added sections to break it up. Best of luck if you make it all the way through.


First Some Context

First, some context:
  • Four of us are involved in the testing for the webinar
  • I want the webinars to have entertainment and educational value
  • I work fast to try and find 'talking points' to support the webinar

The above means that I don't always approach the activity the same way I would a commercial project, but I use the same skills and thought processes. 

Therefore to meet the context:
  • I test to 'find bugs fast' because they are 'talking points' in the webinar
  • I explore different ways of testing because we learn from that and can hopefully talk about things that are 'new' for us, and possibly also new for some people on the webinar

I make these points so that you don't view the testing in the webinar as 'this is how you should do exploratory testing' but you understand 'why' I perform the specific exploration I talk about in the webinar.

And then I started Learning


So what did I do, and what led to what?

(At this point I'm reading my notes from github to refresh my memory of the testing)

After making sure I had a good enough toolset to support my testing...

I spent a quick 20 minutes learning the basics of the application using the 30 second guide. At this point I knew 'all you need to know' to use the application and create tasks, delete tasks, complete tasks and view upcoming tasks.
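The 'all you need to know' basics boil down to a handful of commands. A sketch from memory of the version 2.2 syntax (task [filter] command [args]) - the task descriptions here are just examples, not the ones I used:

```shell
task add Write the webinar notes   # create a task
task list                          # view upcoming/pending tasks
task 1 done                        # complete the task with id 1
task 2 delete                      # delete the task with id 2
```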

Because of the tooling I set up, I can now see that there are 4 files used in the application to store data (in version 2.2)

  • pending.data
  • history
  • undo.data
  • completed.data

From seeing the files using multitail, I know:
  • the format of the data
  • items move from one file to another
  • undo.data has a 'before' and 'after' state
  • history would normally look 'empty' when viewing the file, but because I'm tailing it, I can see data added and then removed, so it acts as a temporary buffer for data

In the back of my head now, I have a model that:
  • there might be risks moving data from one file to another if a file was locked etc. 
  • text files can be amended, 
    • does the app handle malformed files? truncated files? erroneous input in files?
  • state is represented in the files, so if the app crashed midway then the files might be out of sync
  • undo would act on a task action by task action
  • as a user, the text files make it simpler for me to recover from any application errors and increase the flexibility open to me as a technical user
  • as a tester, I could create test data pretty easily by creating files from scratch or amending the files
  • as a tester, I can put the application into the state I want by amending the files and bypassing the GUI
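As a sketch of that test data idea: because the storage is plain text, a task can be created by appending a line to pending.data directly. The field layout below is roughly what I observed in version 2.2 - the uuid and entry timestamp are made-up values:

```shell
# Create a task by writing the data file directly, bypassing the application.
# The line format is a sketch of what I observed in version 2.2;
# the uuid and entry timestamp are invented for illustration.
mkdir -p ~/.task
cat >> ~/.task/pending.data <<'EOF'
[description:"hand-crafted task" entry:"1411380000" status:"pending" uuid:"11111111-2222-3333-4444-555555555555"]
EOF
```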

All of these things I might want to test for if I was doing a commercial exploratory session - I don't pursue these in this session because 'yes they might make good viewing' but 'I suspect there are bugs waiting in the simpler functions'.

After 20 minutes:
  • I have a basic understanding of the application, 
  • I've read minimal documentation, 
  • I've had hands on experience with the application,
  • I have a working model of the application storage mechanism
  • I have a model of some risks to pursue

I take my initial learning and build a 'plan'


I decide to experiment with minimal data for the 'all you need to know' functionality and ask myself a series of questions, which I wrote down as a 'plan' in my notes.

  • What if I do the minimal commands incorrectly?

Which I then expanded as a set of sub questions to explore this question:
  • if I give a wrong command?
  • if I miss out an id?
  • if I repeat a command?
  • if a task id does not exist?
  • if I use a priority that does not exist?
  • if I use an attribute that does not exist?
This is not a complete scope expansion for the command and attributes I've learned about, but this is a set of questions that I can ask of the application as 'tests'.
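Each question translates directly into something to type at the application. These are indicative commands rather than the exact ones from my log:

```shell
task lst                        # a wrong command
task done                       # missing out an id
task 1 done ; task 1 done       # repeating a command
task 999 done                   # a task id that does not exist
task add priority:Z example     # a priority that does not exist
task add bob:1 example          # an attribute that does not exist
```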

I gain more information about the application and learn stuff, and cover some command and data scope as I do so.

I start to follow the plan


10 minutes later I start 'testing' by asking these questions to the application.

Almost immediately my 'plan' of 'work through the questions and learn stuff' goes awry.

I chose 'app' as a 'wrong command' instead of 'add', but the application prompts me that it will modify all tasks. I didn't expect that. So I have to think:
  • the message by the application suggests that 'app' == 'append'
  • I didn't know there was an append command, I didn't spot that. A quick 'task help | grep append' tells me that there is an append command. I try to grep for 'app' but I don't see that 'app' is a synonym for append. Perhaps commands can be truncated? (A question about truncation to investigate for later)
  • I also didn't expect to see 3 tasks listed as being appended to, since I only have one pending task, one completed task, and one deleted task. (Is that normal? A question to investigate in the future)

And so, with one test I have a whole new set of areas to investigate and research. Some of this would lead to tests. Some of this would lead to reading documentation. Some of this would lead to conversations with developers and analysts and etc. if I was working on a project.

I conduct some more testing using the questions, and add the answers to my notes.

I also learn about the 'modify' command.

Now I have CRUD commands: 'add', 'task', 'modify' | 'done', 'delete'

I can see that I stopped my learning session at that point - it coincidentally happened to be 90 mins. Not by design; that just seemed to be the right amount of time for a focused test learning session.

I break and reflect


When I come back from my break, I reflect on what I've done and decide on an approach to ask questions of:
  • how can I have the system 'show' me the data? i.e. deleted data
  • how can I add a task with a 'due' date to explore more functionality?
These questions lead me to read the documentation a little more. And I discover how to show the internal data from the system using the 'info' command. I also learn how to see 'deleted' tasks and a little more about filtering.
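The commands involved look roughly like this (the exact filter syntax may differ slightly between versions):

```shell
task 1 info               # show the internal data held for task 1
task status:deleted all   # the 'all' report plus a status filter shows deleted tasks
```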

Seeing the data returned by 'info' makes me wonder if I can 'modify' the data shown there. 

I start testing modify


I already know that I can delete a task with the delete command. But perhaps I can modify the 'status' and create a 'deleted' task that way?

And I start to explore the modify command using the attributes that are shown to me, by the application. (you can see more detail in the report)

And I can 'delete' a task using 'modify', but it does not exercise the full workflow, i.e. the task is still in the pending file until I issue a 'report'. 

My model of the application changes:
  • reporting - actually does a data clean up, prior to the report and moves deleted tasks and hanging actions so that the pending file is 'clean'
  • 'modify' bypasses some of the top level application controls so might be a way of stressing the functionality a little
At this point, I start to observe unexpected side-effects in the files. And I find a bug where I can create a blank undo record that I can't undo (we demonstrate this in the webinar).
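The shape of that session, with hypothetical task ids:

```shell
task 2 modify status:deleted   # 'delete' a task by bypassing the delete command
cat ~/.task/pending.data       # the task is still sitting in the pending file...
task list                      # ...until a report triggers the clean-up
```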

This is my first 'bug' and it is a direct result of observing a level lower than the GUI, i.e. at a technical level.

I start modifying calculated fields


I know, from viewing the storage mechanism, that some fields shown on the GUI, i.e. ID and Urgency, do not actually exist in the storage mechanism in the file. They are calculated fields: they exist in the internal model of the data, not in the persistent model.

So I wonder if I can modify those?

The system allows me to modify them, and add them to the persistence mechanism, but ignores them for the internal model.

This doesn't seem like a major bug, but I believe it to be a bug, since other attributes I try to modify which do not exist in the internal model, e.g. 'newAttrib' and 'bob', are not accepted by the modify command, but 'id' and 'urgency' are.
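For example (hypothetical values - the point is which attributes the command accepts, not the numbers):

```shell
task 1 modify id:99        # accepted and persisted, but ignored by the internal model
task 1 modify urgency:50   # likewise accepted
task 1 modify bob:1        # rejected - not a recognised attribute
```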

I start exploring the 'entry' value


The 'entry' value is the date the task was entered. This is automatically added:

  • Should I be able to amend it?
  • What happens if I can?
So I start to experiment.

I discover that I can amend it, and that I can amend it to be in the future.

I might expect this with a 'due' date. But not a 'task creation date' i.e. I know about this task but it hasn't been created yet.

I then check if this actually matters, i.e. if tasks that don't officially exist yet are always less urgent than tasks that do, then it probably doesn't matter.

But I was able to see that a task that didn't exist yet was more urgent than a task that did.
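A sketch of that experiment - the date is made up, and 'entry' is normally set automatically by the application:

```shell
task add a task from the future
task 1 modify entry:2015-06-01   # a creation date that hasn't happened yet
task list                        # compare its urgency against tasks that do exist
```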

Follow-on questions that I didn't pursue would then relate to other attributes on the task:
  • if the task doesn't exist yet, should priority matter?
  • Can I have a due date before the entry exists?
  • etc.
Since I was following a defect seam, I didn't think about this at the time.

And that's why we review our testing. As I'm doing now. To reflect on what I did, and identify new ways of using the information that I found.

Playtime


I'd been testing for a while, so I wanted a 'lighter' approach.

I decided to see if I could create a recurring task, that recurred so quickly, that it would generate lots of data for me.

Recurring tasks are typically weekly or daily. But what if a task repeated every second? In a minute I could create 60 and have more data to play with.

But I created a Sorcerer's Apprentice moment. Tasks were spawning every second, but I didn't know how to stop them. The application would not allow me to mark a 'parent' recurring task as done, or delete a 'parent' recurring task. I would have to delete all the children, but they were spawning every second. What could I do?

I could probably amend the text files, but I might introduce a referential integrity or data error. I really wanted to use the system to 'fix' this.

Eventually I turned to the 'modify' command. If I can't mark the task as 'deleted' using the 'delete' command, perhaps I can bypass the controls and modify it to 'status:deleted'.

And I did. So a 'bug' that I identified about bypassing controls was actually useful for my testing. Perhaps the 'modify' command is working as expected and is actually intended for admins etc.
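The runaway and the escape hatch look something like this - the 'recur:1sec' duration literal is from memory, so treat it as an assumption:

```shell
task add tick due:now recur:1sec   # spawns a new child task every second
task 1 delete                      # refused: can't delete a parent recurring task
task 1 modify status:deleted       # the bypass that stopped the spawning
```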

Immediate Reflection


I decided to finish my day by looking at my logs.

I decided:

  • If only there were a better way of tracking the testing.

Which was one of the questions I identified in my 'tool setup session', but I decided I had a 'good enough' approach to start.

And now, having added some value with testing, and having learned even more about what I need the tooling to support, I thought I could justify some time to improve my tooling.

I had a quick search, experimented with the 'script' command, but eventually found that using a different terminal which logged input and output would give me an even better 'good enough' environment.

Release


We held the Black Ops Testing Webinar and discussed our testing, where I learned some approaches from the rest of the team.

I released my test notes to the wild on github.

Later Reflection


This blog post represents my more detailed reflection on what I did.

This reflection was not for the purpose of scope expansion, clearly I could do 'more' testing around the areas I mentioned in my notes.

This reflection was for the purpose of thinking through my thinking, and trying to communicate it to others as well as myself. Because it is all very well seeing what I did, with the notes. And seeing the questions that I wrote allows you to build a model of my test thinking. But a meta reflection on my notes seemed like a useful activity to pursue.

If you notice any principles of approaches in my notes that I didn't document here, then please let me know in the comments.

Principles

  • Questions can drive testing...
    • You don't need the answers, you just need to know how to convert the question into a form the application can understand. The application knows all the answers, and the answers it gives you are always correct. You have to decide if those 'correct' answers were the ones you, or the spec, or the user, or the data, or etc. etc., actually wanted.
  • The answers you receive from your questions will drive your testing
    • Answers lead to more questions. More questions drive more testing.
  • The observations you make as you ask questions and review your answers, will drive your testing.
    • The level to which you can observe, will determine how far you can take this. If you only observe at the GUI layer then you are limited to a surface examination. I was able to observe at the storage layer, so I was able to pursue different approaches than someone working at the GUI.
  • Experiment with internal consistency
    • Entity level with consistency between attributes
    • Referential consistency between entities
    • Consistency between internal storage and persistent storage
    • etc.
  • Debriefs are important to create new plans of attack
    • personal debriefs allow you to learn from yourself, identify gaps and new approaches
    • your notes have to support your debriefs, otherwise you will work from memory and won't give yourself all the tools you need to do a proper debrief.
  • Switch between 'serious' testing and 'playful' testing
    • It gives your brain a rest
    • It keeps you energised
    • You'll find different things.


Tuesday, 23 September 2014

Lessons learned testing Command Line Applications from Black Ops Testing Webinar

For the Black Ops Testing Webinar on 22nd September 2014 we were testing Taskwarrior, a CLI application.

I test a lot of Web software, hence the "Technical Web Testing 101" course, but this was a CLI application so I needed to get my Unix skills up to date again, and figure out what supporting infrastructure I needed.

By the way, you can see the full list of notes I made at github. In this post I'm going to explain the thought process a little more.

Before I started testing I wanted to make sure I could meet my 'technical testing' needs:
  • Observe the System
  • Control the Environment
  • Restore the App to known states
  • Take time stamp logs of my testing
  • Backup my data and logs from the VM to my main machine
And now I'll explain how I met those needs... and the principles the process served.



You will also find here a real life exploratory testing log that I wrote as I tested.

Observe the System

Part of my Technical Testing approach requires me to have the ability to Observe the system that I test.

With the web this is easy, I use proxies and developer tools.

How to do this with a CLI app? Well in this case Taskwarrior is file based. So I hunted around for some file monitoring tools.

I already knew about Multitail, so that was my default.  'Tail' allows you to monitor the changes to the end of a file. Multitail allows me to 'tail' multiple files in the same window.

I looked around for other file monitoring tools, but couldn't really find any.

With Multitail I was able to start it with a single command that was 'monitor all files in this directory'
  • multitail -Q 1 ~/.task/*

I knew that the files would grow larger than tail could display, so I really wanted a way to view the files.

James Lyndsay used Textmate to see the files changing. I didn't have time to look around for an editor that would do that (and I didn't know James was using Textmate because we test in isolation and debrief later so we can all learn from each other and ask questions). 

So I used gedit. The out of the box editing tool on Linux Ubuntu. This will reload the file if it has changed on the disk when I change tabs. And since I was monitoring the files using Multitail, I knew when to change tabs.

OK, so I'm 'good' on the monitoring front.

Control the Environment

Next thing I want the ability to do? Reset my environment to a clean state.

With Taskwarrior that simply involves deleting the data files:
  • rm ~/.task/*

Restore the App to known states

OK. So now I want the ability to backup my data, and restore it.

I found a backup link on the Taskwarrior site. But I wanted to zip up the files rather than tar them, simply because it made it easier to work cross platform.
  • zip -r ~/Dropbox/shared/taskData.zip ~/.task
The above zips up, recursively, a directory. I used recursive add, just in case Taskwarrior changed its file setup as I tested. Because my analysis of how it stored the data was based on an initial 'add a few tasks' usage I could not count on it remaining true for the life of the application usage. Perhaps as I do more complicated tasks it would change? I didn't know, so using a 'recursive add' gave me a safety net.

The same goes for the Multitail command, which automatically adds any new files it finds in the directory - it picked up the history file after I started monitoring.

Backup my data and logs from the VM to my main machine

I would do most of my testing in a VM, and I really wanted the files to exist on my main machine because I have all the tools I need to edit and process them there, without messing about with a version control system or shared folders between the VM and host.

I decided the fastest way of connecting my machines was by using Dropbox. So I backup the data and my logs to Dropbox on the VM and it automatically syncs to all my other machines, including my desktop.

Take time stamp logs of my testing

I had in the back of my mind that I could probably use the 'history' function in Bash to help me track my testing.

Every command you type into the Bash shell is recorded in the history log. And you can see the commands you type if you use the 'history' command. You can even repeat them if you use '!' i.e. '!12' repeats the 12th command in the history.

I wanted to use the log as my 'these are the actual commands I typed in over this period of time' record.
So I had to figure out how to add time stamps to the history log:
  • export HISTTIMEFORMAT='%F %T    '

I looked for a simple way to extract items from the history log to a text file, but couldn't find anything in time, so I eventually settled on a simple redirect, e.g. redirect the output from the history command to a text file (in Dropbox of course, to automatically sync it)
  • history > ~/Dropbox/shared/task20140922_1125_installAndStartMultiTail.txt

And since I did not add the HISTTIMEFORMAT to anything global I tagged it on the front of my history command
  • export HISTTIMEFORMAT='%F %T    '; history > ~/Dropbox/shared/task20140922_1125_installAndStartMultiTail.txt

I added 'comments' to the history using the 'echo' command, which displays the text on screen and appears in the history as an obviously non-testing command.

I also found with history:
  • add a ' ' space before the command and it doesn't appear in the history - great for 'non core' actions
  • you can delete 'oops' actions from the history with 'history -d #number'
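Putting those history tricks together in one place. One caveat worth making explicit: the leading-space trick is not on by default everywhere - it relies on HISTCONTROL containing 'ignorespace', so I set it here. The paths and the deleted entry number are examples:

```shell
# Enable history recording (needed when running non-interactively)
set -o history

# The leading-space trick depends on HISTCONTROL including 'ignorespace'
export HISTCONTROL=ignorespace

# Timestamped entries when the history is written out
export HISTTIMEFORMAT='%F %T    '

echo "=== starting modify command tests ==="   # a 'comment' in the history

 ls /tmp > /dev/null    # leading space: will NOT appear in the history

# Write the session log out (to a Dropbox folder in my setup; /tmp here)
history > /tmp/session_log.txt
```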

And that was my basic test environment setup. If I tracked this as session based testing then this would have been a test environment setup session.

All my notes are in the full testing notes I made.

I could have gone further:
  • ideally I wanted to automatically log the output of commands as well as the command input
  • I could have figured out how to make the HISTTIMEFORMAT stick
  • I could have figured out how to have the dropbox daemon run automatically rather than running in a command window
  • I could have found a tool to automatically refresh a view of the full file (i.e. as James did with textmate)
  • etc.

But I had done enough to meet my basic needs for Observation, Manipulation and Reporting.
Oh, and I decided on a Google doc for my test notes very early on because, again, that would be accessible in real time from all the machines I was working from. I exported a pdf of the docs to the github repo.

And what about the testing?

Yeah, I did some testing as well. You can see the notes I made for the testing in the github repo.

So how did the tools help?


As I was testing, I was able to observe the system as I entered commands.

So I 'task add' a new task and I could see an entry added to the pending.data, and the history.data, and the undo.data.

As I explored, I was able to see 'strange' data in the undo.data file that I would have missed had I not had visibility into the system.

You are able to view the data if you want, because I backed it up as I went along, and it automatically saved to Dropbox, which I later committed to github.

What else?

At the end of the day, I spent a little time on the additional tooling tasks I had noted earlier.

I looked at the 'script' command, which allows you to record your input and output to a file, but since the output had all the display escape characters in it, it wasn't particularly useful for reviewing.

Then I thought I'd see what alternative terminal tools might be useful.

I initially thought of Terminator, since I've used it before. And when I tried it, I noticed the logger plug-in, which would have been perfect, had I had that at the start of my testing since the output is human readable and captures the command input and the system output as you test. You can see an example of the output in the github repo.

I would want to change the bash prompt so that it had the date and time in it, though, so the log was more self-documenting.
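A sketch of that prompt change: \d and \t are the bash prompt escapes for the date and the 24-hour time. Appending it to ~/.bashrc is my assumption about how to make it stick for every new shell:

```shell
# Append a timestamped prompt to ~/.bashrc (\d = date, \t = 24-hour time).
# The heredoc delimiter is quoted so the backslash escapes survive verbatim.
cat >> ~/.bashrc <<'EOF'
export PS1='[\d \t] \u@\h:\w\$ '
EOF
```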

You can see the change in my environment in the two screenshots I captured.

  • The initial setup was just a bunch of terminals 
  • The final setup has Terminator for the execution, with gedit for viewing the changed files, and multitail on the right showing me the changes in the system after each command

Principles

And as I look over the preceding information I can pull out some principles that underpin what I do.
  • Good enough tooling
    • Ideally I'd love a fully featured environment and great tools, but so long as my basic needs are met, I can spend time on testing. There is no point building a great environment and not testing. But similarly, if I don't have the tools, I can't see and do certain things that I need in my testing.
  • Learn to use the default built in tools
    • Gedit, Bash, History - none of that was new. By using the defaults I have knowledge I can transfer between environments. And can try and use the tools in more creative ways e.g. 'echo' statements in the history
  • Automated Logging doesn't trump note-taking
    • Even if I had Terminator at the start of my testing, and a full log. I'd still take notes in my exploratory log. I need to write down my questions, conclusions, thoughts, research links, etc. etc.
    • If I only had one, I'd take my exploratory log above any automated log taking tool.
  • Write the log to be read
    • When you read the exploratory log, I think it is fairly readable (you might disagree), but I did not go back and edit the content. That is all first draft writing.
      • What do I add in the edit? I answer any questions that I spot I asked and then answered. I change TODOs that I did, etc. I try not to change the substance of the text: even if my 'conclusions' about how something works are wrong at the start and change by the end, I leave them as they are, so I have a log of my changing model.
You might identify other principles in this text. If you do, please leave a comment and let me know.

Why didn't you use tool X?

If you have particular tools that you prefer to use when testing CLI applications then please leave a comment so I can learn from your experience.


Wednesday, 10 September 2014

Using Wireshark to observe Mobile HTTP Network traffic

You can find wireshark on line - it is a free tool.

https://www.wireshark.org/

Note that you may not be able to capture the mobile traffic on Windows because of WinPCap limitations. You may need to buy an additional adapter to do this. I'm using Mac to show you this functionality.
http://wiki.wireshark.org/CaptureSetup/WLAN#windows

What is it?

Wireshark is a tool for monitoring network traffic. With an HTTP proxy server, you have to configure your machine to point to the proxy server in order to monitor the traffic. With Wireshark, you tell it to capture traffic from your network card, and it can then capture any traffic going through that network.
So if your mobile device is on the same wifi network as your Wireshark machine's wifi card, you can capture the wifi traffic, filter it, and then monitor the HTTP traffic from your mobile device.

Why would I want to do that?

Because sometimes the mobile app you are testing does not honour the proxy settings of the device and goes direct, so you don't see the traffic.
And because you can start learning more about the network traffic layers being used by your application and your device in general.
(It's also fun to hook into hotel wifi and airport lounge wifi - but don't tell anyone.)
But the serious point is that we know we want to observe the HTTP traffic. If we have the issue that we can't configure the app to point to the proxy, then we need other options. We need to increase our flexibility in approaching the observation. So we have a new option: work at the network traffic level, rather than through a proxy.
Our aim is to keep looking for new ways of achieving our outcomes. Not finding tools, for the sake of tools. But finding new approaches.

Installing Wireshark

On Windows

The Windows install is simple. Just download and run.
https://www.wireshark.org/download.html

On Mac

The Mac install was a little harder for me; it didn't work out of the box, so I had to take extra steps to add the application to XQuartz.
If it doesn't work for you, then you could try starting XQuartz and, in the Applications menu of XQuartz, customizing it and using "Add Item" with the command:
  • "open /Applications/Wireshark.app/Contents/MacOs/Wireshark"
  • or "open wireshark"
  • or "wireshark"
Then try running Wireshark from the Applications menu in XQuartz, or from the application icon directly.

On Linux

I haven't tried the install on linux - I imagine the instructions on the Wireshark website work fine.

First Usage

Wireshark can seem intimidating to work with initially.
It is a complicated tool and there is a lot to learn about it.

Start a Capture

On the main page, select the network card hooked to the wifi network, then click "Capture Options".
In the Capture Options table, check that "Mon. Mode" says enabled for the interface you want to use. If it doesn't, you'll only see your own traffic.
To change "Mon. Mode", double click the item in the table, choose "Capture packets in promiscuous mode" and "Capture packets in monitor mode", and press [OK].
Then [Start] the capture.
If you are on an encrypted network then you might need to decrypt the traffic.
http://wiki.wireshark.org/HowToDecrypt802.11
I sometimes have to fiddle with the IEEE 802.11 preferences: changing them, hitting apply, changing them, etc., until I see the actual HTTP traffic.
I also have to disconnect the android device from the network and then reconnect it, so that it sends the initial network connection and decryption packets. Open networks don't have these types of issues (because they are insecure), so feel free to test on those if you want.

Filter the capture

At this point you're going to start seeing a lot of traffic flowing through your network.
So you want to filter it.
In the filter text box, type "ip.addr eq 192.168.1.143" (or whatever the IP of your device is) to start seeing that traffic.
Then, if you just want to see the HTTP traffic, you can use:
"ip.addr eq 192.168.1.143 and http"
or, to see just the GET requests:
"ip.addr eq 192.168.1.143 and http.request.method eq "GET""
This is a useful tool to have in your toolbox, for those moments where you have less control over the application under test, but still want to observe the traffic in your testing.

You can see examples of Wireshark in action to help test an Android app in our Technical Web Testing 101 online course.

Monday, 8 September 2014

StarEast 2014 Lightning Talk: "A Sense of Readiness"

At StarEast 2014, I presented a Lightning Talk as part of their "Lightning Strikes the Keynotes"
You can watch it here.
I make quite a lot of notes and prep for my talks before I present them, and so in this post I will walk you through some of the notes, and the process I used to get ready for the talk.
And I'll use the medium of the blog to expand on the topic a little with additional lessons learned from pulp authors, relating to test planning and preparation.


I think a lightning talk is 'as hard' as a full talk; in some ways it is harder. Some people present a lightning talk as their 'first talk' because they think it will be easier, but the only 'easy' part is that you are on and off the stage faster. I find I have to work much harder to condense my message into such a small time frame. 
I made notes on a few different topics, but eventually decided upon the theme:
  • "Are you ready to start testing tomorrow?"
with the title 
  • "A sense of urgency"? or "A sense of readiness"?
Because... "Are you ready to start testing tomorrow?" is a question I use when evaluating my strategy and planning process as a test manager, and when I'm a tester. I always want to know that I am ready to test tomorrow (or now) if I have to. And because I don't think everyone else adopts this frame of mind, I wanted to explore it a little.


My first step when preparing a talk is...
  1. to just talk
Given the title, I talk to the wall, record it, and make notes.
For me, this is a purely temporary measure, and I delete these artifacts afterwards.
Then I collate my notes into a small first person script like essay.
Which in this instance came out as a single thread below. By single thread I mean, one topic, one path through, no 'asides' or references to analogous material.

Over the years, I've been on site and people have been talking about a "sense of urgency", 'people' generally means management. And "sense of urgency" generally means "why aren't these people working harder to meet the deadlines that we have arbitrarily imposed upon them".
When I get involved, I don't usually try and solve that problem. What I like to focus on is a sense of readiness.
Because I often see testers - not ready to test the software. They are writing the strategy and the approach and everything else they are asked to, but they aren't getting ready. 
This is really basic, but I ask people "could you start testing the software tomorrow?" If you could, then you're in a pretty good place. If you're not ready, then you're also in a pretty good place, because you have your todo list for getting ready, by asking 'what do we need in order to be ready?'. Everything else - strategy, policy, approach, etc. - is a bonus. Because if you're ready - you can communicate your readiness, and your 'documentation' is a result of taking the time to write it down.
And you need to be ready to test at different levels. functionality, requirements, domain, technology. But all of that is for nothing if you don't yet have the attitude that you could test this thing at the drop of a hat. Mentally building that 'testing' sense of readiness so that you could test it now, if you had to.
So I encourage you all. Build a sense of readiness. Are you ready to test a week from now, tomorrow, an hour from now, can you test it now? If not - work out why not, stick that on your preparation list, and get ready.


This had the basics of what I wanted to cover. And I left it to sit for a while. Because really, I wanted to try multi-threading the talk: adding in some analogous threads, creating and closing open loops as I talked. I hadn't tried this approach for a short talk before, but since this wasn't a 'Lightning Talk' - it was supposed to be a 'Lightning Keynote' - I wanted to add more texture to the presentation.
And during the 'sitting' period I read a Novel called "Silvertip's Search" by Max Brand.
You can see my copy above. The London, Hodder and Stoughton edition, first published in 1948. (Max Brand died in 1944)

And in here, I found a passage that I thought fit my topic. A conversation between the head bad guy and one of his minions. 
Throughout the book, both the head bad guy and the hero are 'ready for anything' and 'at any time'. And in this paragraph, the head bad guy explains his secret.
"Are you laughing at me, chief?" he asked. "You know that nobody in the world can stand up to you."
"Nobody? Ah, ah, the world is larger than we are," said the criminal. "I should never pretend that nobody can stand up against me. All I know is that I keep myself in practice, patiently, every day, working away my hours." He sighed. "A little natural talent, and constant preparation. That's all it needs. You fellows are my equals, every one of you. Taking a little pains is all the difference between us..."
This is on page 70 of the edition I own.


Given this, I thought I could weave into the presentation: the text of the book, and additionally, Max Brand and his writing strategies. 
That would then give me at least three threads. One personal, one fictional, and one cross domain.
So my next set of notes looked like this.

Some managers like to talk about a "Sense of Urgency", which in management-speak means: why are my staff not working hard enough to meet these arbitrary deadlines we've set?
I read a lot of pulp novels. Most written to very tight deadlines. Generally filled with life and death decisions, made quickly, based on minimal information and minimal planning. Urgency in a pulp novel gets you killed, or lets the villain get away. Readiness defines the best heroes.
Max Brand knew a lot about deadlines. His official biography lists over 500 novels and 400 short stories. He was so prolific that new books based on his outlines continue to be written and published after his death.
I see a lot of testers on site being busy, writing stuff like policies, strategies, plans, approaches, etc. They think they are getting 'ready'; most often they are complying with a 'sense of urgency' that says we need a strategy, or we need a plan. They are getting ready. They're getting ready for their next meeting. But they are not getting ready for their testing.
And if you ask them, "Are you ready?", they'll typically tell you about all the things they are waiting for, and they are in a holding pattern.
And that’s not what I mean by readiness.
Readiness works at different levels. Could you test an application that you don't know anything about, but where you understand the technology it is built on? Or where you understand the domain it sits in? There are lots of models around readiness - skills, domain, the app requirements, techniques, technology - and these models all overlap.
And if you were ready, you could test the app from the point of view of any of these models and add value. And gain enough time to develop one of the other models and test from another perspective. Your strategies, and plans and policies become a communication and explanation of your readiness.
A Sense of Readiness leads to a confidence and flexibility that you could test something if it was delivered to you tomorrow, or now.
So back to Max Brand, and specifically his novel "Silvertip’s Search". 
One of the bad guys has betrayed his gang, and he's up before the head bad guy, trying to talk his way out of being killed.
And I'll paraphrase here - Max Brand is a better writer than I'm making him sound.
The bad guy pleading for his life says, "I wouldn't betray you, boss. Nobody can stand up to you."
And the boss disagrees, and Max Brand, or Faust, then has the lead bad guy describe what is really his own approach to writing: "Nope, I don't promise no-one is better than me. I just keep myself in practice, a little every day, constant preparation. We're the same. Taking pains is all the difference between us."
Max Brand describes his prolific approach to writing. And it is also how we go about developing a sense of readiness, because we don't know what is going to come at us. All we can do is work on ourselves so that we have the confidence to tackle it, whether it comes in next week, or tomorrow, or today, or ten minutes from now.


I emboldened the first part of the sentence because that becomes the outline that I commit to memory to inform my talk.


At this point, I discovered the Internet Archive contains a version of the novel. The quoted paragraph is on page 86 of the Internet Archive version. Yes, if you want to read this novel, you can.
So I decided to download the novel to my Kindle and wrap the hardback cover around the Kindle as a 'prop' for the talk. Since I didn't want slides, and I was talking about the novel, having a physical representation of it seemed like a useful stage device. 
And I could possibly build some tension by 'teasing' a reveal early in the talk, then reading from the novel at the end of the talk. And I added the following lines into the outline.
I brought along a pulp novel for you. This is "Silvertip's Search", a western published in 1945, based on his pulp story published in 1933, written by Max Brand - or Frederick Faust, to use his real name. He created the character of Destry and, probably most famously, Dr Kildare.
And hidden in this novel, Max Brand describes the secret of his writing success.

You can watch the talk and see how closely it matches the outline above. I think I missed out some stuff, and I think I added a little more.
And now, in this blog, I can expand a little further, with information I wouldn't include in a lightning talk.
If I were using this in a longer talk, then I might well include the information I'm about to give you below.

Additional reasons I like pulp as an example of readiness... 
The pulp authors worked from small outlines:
  • A Title
  • A Paragraph
  • A Blurb
  • A short plot outline
They did this for a number of reasons:
  • They wrote for money.
    • They had to pitch the story, but didn't want to spend the time writing a full treatment, so they pitched outlines. Sometimes they pitched just titles, to see what grabbed attention.
  • They could expand them, quickly, when needed. 
    • Sometimes they would be asked to contribute a story to a magazine with only a few days' notice, because some other author had let the editor down. And out would come either an earlier story that hadn't sold, or a story expanded from an outline.
"Silvertip's Search" is a good example of this process. The novel was an expanded version of one of Max Brand's short stories. So a published (and previously paid-for) work was expanded into a novel, which was then sold again.
And because pulp authors worked like this, they often left behind lots of outlines, scraps of ideas, blurbs, etc. which hadn't been sold, or used, or expanded. Which is how pulp authors continue to publish work long after they are dead: someone takes their fast preparation and turns it into something else.
 
And so, to relate this back to testing...
A lot of the time in testing we see the idea promoted that you have to prep in advance, and that your advance prep has to have copious detail.
I don't think you have to. My experience of testing tells me otherwise.
I work to be 'ready' as fast as I can. I know there will be gaps in my readiness. But I know I could start, and add value, fast. And each passing moment that allows me more time to prepare increases my readiness, until at some point, I test.
And one external source of validation I use for this, is the work, and approach, of pulp authors.
Pulp authors used "a sense of readiness" to help them. We can too.



Thursday, 14 August 2014

How to convert VirtualBox to VMWare and install the ethernet device drivers

I couldn't work around the recent bug in VirtualBox version 4.3.14. It conflicts with my anti-virus software under Windows.

When I upgraded to version 4.3.15, I no longer experienced the anti-virus crash, but my networking was trashed and I couldn't get it working. So I decided to try and migrate over to VMWare.

I use VMWare Fusion on the Mac to run my Windows and Linux VMs, and VMWare VMs are cross platform. It seems like a sensible move, even though it will cost me some cash to buy the VMWare Player on Windows.

  • Step 1 - convert VirtualBox to an Appliance
  • Step 2 - load the appliance into VMWare
  • Step 3 - uninstall the VirtualBox addons
  • Step 4 - install the VMWare addons
  • Step 5 - edit the .vmx file to change the network settings
  • Step 6 - re-authenticate the Windows license
  • Step 7 - enjoy your ported VM
  • Step 8 - delete the VirtualBox VMs

Step 1 - convert VirtualBox to an Appliance

In VirtualBox I exported the VM as an appliance using the "File \ Export Appliance..." menu.
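As a sketch of an alternative to the GUI menu, the same export can be scripted with VirtualBox's VBoxManage tool (the VM name "WinXP" below is a hypothetical placeholder - use the exact name shown by the list command):

```
# List the VMs registered with VirtualBox, to find the exact name
VBoxManage list vms

# Export the named VM to a single .ova appliance file
VBoxManage export "WinXP" -o WinXP.ova
```

Scripting the export is handy if you have several VMs to migrate in one go.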

Step 2 - load the appliance into VMWare

Using VMWare Player, I open the appliance using "Open a Virtual Machine".

VMWare complains about certificates, but we tell it to retry and continue.

Then it prompts for a location to save the VM.

Voila, the VM is converted - but it doesn't work yet.

Step 3 - uninstall the VirtualBox addons

Start the VMWare machine, and uninstall the VirtualBox addons.

Step 4 - install the VMWare addons

Install the VMWare tools.

At this point, we should be finished, but I wasn't. I had to fix the networking.

It took me a while to find the right online resource for the next step.

Step 5 - edit the .vmx file to change the network settings

I could not get the drivers for the ethernet adapters to work with the VMWare Windows XP VM.

I had to edit the .vmx file that configures the virtual machine, and remove the "e1000" line.

Then, when I restarted the machine, the correct drivers from the VMWare tools were used for the ethernet device.
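For illustration, the kind of entry involved looks something like the hypothetical fragment below - exact keys and values vary per machine, but the line naming the virtual device is the one to remove so that VMWare falls back to its own default driver:

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.connectionType = "bridged"
```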

I also had to make sure that the "Automatic Bridging Settings" in the VM Settings were set to bridge the correct networks, including the "Microsoft Wi-Fi Direct Virtual Adapter" and the "Microsoft Hosted Network Virtual Adapter".

Step 6 - re-authenticate the Windows license

At this point Windows complains that the hardware configuration has changed, and needs to re-authenticate.

For some reason, it wouldn't re-authenticate when I was logged in, so I had to restart the machine, and re-authenticate prior to the login.

Step 7 - enjoy your ported VM

I couldn't face a re-install of Windows and downloading all the service packs, etc. So I wanted to get this conversion working, which is why I persisted, and why you now see my notes as a blog post.

So now I'll use VMWare for my VMs on Windows, as well as on the Mac.

Hopefully, Oracle will fix VirtualBox fully, as I like to have options.

But I think VirtualBox has lost me as a user for my licensed Windows VMs. The conversion from VMWare to VirtualBox is not as easy as the conversion from VirtualBox to VMWare.

Step 8 - delete the VirtualBox VMs

Since I use licensed versions of Windows, in addition to the modern.ie VMs, I had to remember to delete the VirtualBox VMs.


Monday, 17 February 2014

Back to Basics: How to use the Windows Command Line

Those of us who have worked with computers for most of our lives take the command line for granted. We know it exists, we know basically how to use it, and we know how to find the commands we need even if we can't remember them.

But not everyone knows how to use the command line. I've had quite a few questions on the various courses I conduct from people who have no familiarity with it. And the worst part was, I could not find a good resource to send them to in order to learn the command line.

As a result, I created a short 6-minute video that shows how to start the Windows command line, change to a specific directory, run some commands, and find out more information.



Start the command line by:
  • clicking Start \ "Command Prompt"
  • Start \ Run, "cmd"
  • Start \ search for "cmd"
  • Win+R, "cmd"
  • Windows Powertoy "Open Command Window Here"
  • Shift + Right Click - "Open Command Prompt Here"
  • type "cmd" in explorer (Win+e, navigate, "cmd")     
  • Windows 8 command from dashboard
Change to a directory using "cd /d " followed by the absolute path, which you can copy and paste from Windows Explorer.

Basic Commands:
  • dir - show directory listing
  • cd .. - move up a directory
  • cd directoryname  - change to a subdirectory
  • cls - clear the screen
  • title name - retitle a command window
  • help - what commands are available
  • help command - information on the command
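To tie these together, a short hypothetical session might look like this (the username and paths are placeholders, and output is abbreviated):

```
C:\Users\alan> cd /d C:\projects\demo
C:\projects\demo> dir
 ...directory listing...
C:\projects\demo> cd reports
C:\projects\demo\reports> cd ..
C:\projects\demo> title demo-window
C:\projects\demo> help cd
 ...usage information for the cd command...
C:\projects\demo> cls
```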

If anyone wants more videos like this then please either leave comments here, or on YouTube and let me know. Or if you know of any great references to point beginners at then I welcome those comments as well.

Thursday, 13 February 2014

Introducing Virtualbox modern.ie Turnkey Virtual Machines for Web Testing

My install of VirtualBox prompted me to update today. And I realised that I hadn't written much about VirtualBox, and I couldn't find any videos I had created about it.

Which surprised me since I use Virtual Machines. A lot.


No matter, since I created the above video today.

In it, I show the basic install process for VirtualBox. A free Virtualisation platform from Oracle which runs on Windows, Mac and Linux.

Also, Modern.IE, which I know I have mentioned before: the Microsoft site where you can download virtual machines for each version of MS Windows - XP through to Windows 8 - with a variety of IE versions.

Perfect for 'compatibility' testing - the main use case I think Microsoft envisioned for the site - or for creating sandbox environments and running automation against different browsers, which is what I often use it for.

I even mention TurnkeyLinux, where you can find pre-built virtual machines for numerous open source tools.

In fact, the version of RedMine that I used on the Black Ops Testing Workshops, to demonstrate the quick automation I created, was installed via a TurnkeyLinux virtual machine.

Oracle even hosts a set of pre-built virtual machines.

A New Feature in VirtualBox (that I only noticed today)

I noticed some functionality had crept into VirtualBox today.

The cool 'Seamless Mode', which I had previously noticed in Parallels on the Mac (as 'Coherence' mode) and in VMWare Fusion on the Mac (as 'Unity' mode). This allows 'windows' on the virtual machine to run as though they were 'normal' windows on your machine - not constrained within the virtual machine window.

I love this feature. It means I no longer have to keep switching in and out of a VM window, and I can run the virtualised apps alongside native apps. And with shared clipboard and drag and drop, it's easy to forget that the app is running from a VM.

If you haven't tried this yet: download VirtualBox, install the Win XP with IE6 VM, and then run it in 'Seamless' mode, so you have IE6 running on the desktop of your shiny whiz-bang monster machine. Try it. Testing with IE6 becomes a fun thing to do - how often do you hear that?
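If you'd rather script the setup than click through the GUI, the downloaded appliance can also be imported and started with VBoxManage (the .ova filename and VM name below are hypothetical placeholders - use whatever modern.ie gives you):

```
# Import the downloaded appliance into VirtualBox
VBoxManage import "IE6 - WinXP.ova"

# Start the newly registered VM
VBoxManage startvm "IE6 - WinXP"
```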