The scenario you face as a tester:
- You have a main web site www.eviltester.com
- You have a new mobile site m.eviltester.com
- You have a set of redirection rules that take you from www. to m. based on the device
- And the device is identified by the user-agent header string
The first thought for testing?
- We need to get a bunch of devices to test this on.
- We could spoof the user-agent.
- Well, Chrome has the override settings where we could choose a different user-agent.
- We could have our debug proxy change the user-agent for us.
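Whichever tool does the spoofing, the mechanism is the same: override the User-Agent request header. A minimal sketch of that idea, using only the JDK (a local `HttpServer` stands in for the site's device detection, and the user-agent string is an illustrative example, not one from the real data set):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class SpoofUserAgentDemo {
    public static void main(String[] args) throws Exception {
        // A local stand-in for the site: it echoes back the User-Agent it saw
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] seen = exchange.getRequestHeaders()
                    .getFirst("User-Agent").getBytes("UTF-8");
            exchange.sendResponseHeaders(200, seen.length);
            exchange.getResponseBody().write(seen);
            exchange.close();
        });
        server.start();

        // 'Spoofing' is just overriding the header on the request
        String spoofed = "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X)";
        HttpURLConnection conn = (HttpURLConnection) new URL(
                "http://localhost:" + server.getAddress().getPort() + "/")
                .openConnection();
        conn.setRequestProperty("User-Agent", spoofed);

        byte[] body = conn.getInputStream().readAllBytes();
        System.out.println(new String(body, "UTF-8")); // prints the spoofed string
        server.stop(0);
    }
}
```

Chrome's override settings and a debug proxy do exactly this for you, at the browser or proxy layer instead of in code.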
Where will we find the user-agents?
We need an oracle source for our data set of user-agents. Fortunately, there are a few sites out there that track which user-agents are in use:
- useragentstring.com - if you have a preferred alternative, leave a comment and let me know.
So I wrote some code. And yes, I know about all the "testers shouldn't code", "testers don't need to code", "blah blah blah" discussions.
I can code. It increases my ability to respond to the variety of conditions on a project. Requisite Variety. I encourage you to learn how to code. (Hey I'm writing a book about that.)
So I code.
I wrote a simple set of Java code that:
- Uses GhostDriver - the new headless WebDriver implementation for PhantomJS
- Visits useragentstring.com and scrapes off the user-agent strings
- Filters the user-agent strings to those that I consider 'mobile' devices
- Iterates over all those user-agents
- Creates a new GhostDriver with that user-agent and visits the www site
- Checks that I redirect to the mobile site
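The filtering step above is simple keyword matching. A sketch of that logic, under my own assumptions (the `MOBILE_MARKERS` list and the sample user-agent strings are hypothetical illustrations, not the actual scraped data or my actual filter):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class MobileUserAgentFilter {
    // Hypothetical marker list: tokens I might treat as flagging a 'mobile' device
    private static final List<String> MOBILE_MARKERS = Arrays.asList(
            "iphone", "ipad", "android", "blackberry", "windows phone", "opera mini");

    public static boolean looksMobile(String userAgent) {
        String ua = userAgent.toLowerCase();
        for (String marker : MOBILE_MARKERS) {
            if (ua.contains(marker)) return true;
        }
        return false;
    }

    // Keep only the user-agents that look like mobile devices
    public static List<String> filterMobile(List<String> scrapedUserAgents) {
        List<String> mobile = new ArrayList<>();
        for (String ua : scrapedUserAgents) {
            if (looksMobile(ua)) mobile.add(ua);
        }
        return mobile;
    }

    public static void main(String[] args) {
        // Illustrative stand-ins for the scraped strings
        List<String> scraped = Arrays.asList(
                "Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X)",
                "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36",
                "Mozilla/5.0 (Linux; Android 4.0.4; Galaxy Nexus)");
        System.out.println(filterMobile(scraped).size()); // prints 2
    }
}
```

Each surviving user-agent then gets its own GhostDriver instance, and the check is simply whether the browser ends up on the m. host.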
Surely it would be faster to use direct HTTP calls?
- Yes, faster to run, but not necessarily faster to write.
- I can use WebDriver's findElements commands to scrape the page, and not have to remember how to parse XML in Java or download another Java library.
- I can let WebDriver visit the site and handle all the redirection for me, rather than writing redirect-handling code for the Apache HTTP libraries.
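For a sense of what WebDriver saves me from writing: here is a sketch of the kind of manual redirect-following loop the direct-HTTP approach needs. It uses only the JDK, with a local `HttpServer` standing in for the hypothetical www-to-m. redirect rule (the `/www` and `/m` paths are illustrative, not the real site's):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class ManualRedirectDemo {
    // The redirect-handling loop that WebDriver does for me
    static String fetchFollowingRedirects(String start) throws Exception {
        URL url = new URL(start);
        while (true) {
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setInstanceFollowRedirects(false); // handle Location ourselves
            int status = conn.getResponseCode();
            if (status >= 300 && status < 400) {
                // Resolve the Location header (may be relative) and loop
                url = new URL(url, conn.getHeaderField("Location"));
                continue;
            }
            return url.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        // Local stand-in for the www -> m. redirect rule
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/www", exchange -> {
            exchange.getResponseHeaders().add("Location", "/m");
            exchange.sendResponseHeaders(302, -1); // 302 with no body
            exchange.close();
        });
        server.createContext("/m", exchange -> {
            exchange.sendResponseHeaders(200, -1);
            exchange.close();
        });
        server.start();

        int port = server.getAddress().getPort();
        String finalUrl = fetchFollowingRedirects("http://localhost:" + port + "/www");
        System.out.println(finalUrl.endsWith("/m")); // prints true
        server.stop(0);
    }
}
```

With WebDriver, all of that collapses to `driver.get(url)` followed by a check on the current URL.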
I tidied it up a little for release to GitHub so it isn't completely embarrassing, but hey ho, it added value. I'll use it again. It looks pretty nasty, but it works.
Sometimes that's the type of automation I write when I test.
But that wasn't the requirement scope!
- True. It wasn't.
- The requirement scope was small.
- Sometimes we have to explore.
- I look for external oracles, comparative sites, and rules to help me evaluate whether the requirements meet the actual user need.
- In this instance I found a lot of user-agents that the redirect rules didn't cover.
But if it wasn't in the requirements we can't justify the testing!
- I can compare with other sites' handling of the user-agents (e.g. the BBC or TfL).
- I can see if the gaps in the system under test are better or worse than theirs.
- BBC didn't handle 1 user-agent I found,
- TFL didn't handle 3,
- The system under test didn't handle 100+