I’m getting RestMud ready for some workshops later in the year and this means:

- making sure a player can complete the maps I have created
    - I’m OK with some bugs (after all, the games are for testers), but I need the games to be completable, because they are games
- making sure the engine can handle enough players
    - I imagine that we will have a max of 20 or so people playing at a time
- making sure I don’t break existing games
    - with last-minute engine changes
    - with new game maps
    - with new game commands and script commands
- Spark Web Framework
- Games/Maps are written as Java classes
- Games have custom functionality implemented using a set of internal DSL classes
- A ‘Game’ class is the main interface e.g. `processVerbNounForPlayer(verb, noun, player)`
- Spark is my REST API - using JSON
- Spark is my Web Server for the HTML GUI
- Would you exclusively automate it?
- Would you exclusively explore it?
- Would you script extensively and follow scripts?
- Where are the risks?
- What tools would you use?
- I have JUnit @Test methods for game domain classes
- I have JUnit @Test methods for internal DSL classes
- REST API
- Web API
- Integrated Game Class Testing
- Testing the Games themselves
- Multi-user interaction
- I have JUnit @Test methods which instantiate the engine with ‘test’ game snippets to check that the basic engine works and can process verb/noun combinations
But I don’t want to play the whole game to get to that point; I just want to make sure that that specific set of conditions works. And some conditions mean that other conditions can’t trigger (there are multiple paths through the game), so I don’t want to test all of this end to end.
- I have JUnit @Test methods to play the Game in small chunks
- instantiate the game
- setup the player state and game state
- issue verb,noun combinations through the game interface to check that the conditions work
This works for single player, non random games, where:
- commands are deterministic
- I start in the same location
```java
successfullyVisitRoom("1", walkthrough("we start in room 1", "look", ""));
successfully(walkthrough("I always examine signs on walls", "examine", "ahint"));
successfullyVisitRoom("2", walkthrough("north leads into room 2", "go", "n"));
successfully(walkthrough("oh oh, it is dark here", "look", ""));
successfully(walkthrough("amend the url to go back south /go/s", "go", "s"));
successfullyVisitRoom("1", walkthrough("to get back to room 1", "look", ""));
successfullyVisitRoom("3", walkthrough("east leads into room 3 - east room", "go", "e"));
```
I think these are pretty readable, and I use high level methods to create a ‘Test DSL’ for writing these
At the moment I have one ‘walkthrough’ test per game.
This also writes out a CSV file with all the commands that are entered.
I have a REST API test which reads the file and sends the requests to the REST API, this outputs the request and the responses.
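The replay idea can be sketched as: read each `verb,noun` line from the CSV and turn it into a request path like the `/go/s` URLs the game uses. This is a minimal sketch, not the real test; the class and method names are my own stand-ins.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the CSV replay idea: read "verb,noun" lines and build REST
// request paths like /go/n. All names here are hypothetical stand-ins.
public class WalkthroughReplaySketch {

    // turn a "verb,noun" CSV line into a REST request path
    public static String buildRequestPath(String csvLine) {
        String[] parts = csvLine.split(",", -1); // -1 keeps an empty noun
        String verb = parts[0].trim();
        String noun = parts.length > 1 ? parts[1].trim() : "";
        return noun.isEmpty() ? "/" + verb : "/" + verb + "/" + noun;
    }

    public static List<String> replayPaths(List<String> csvLines) {
        List<String> paths = new ArrayList<>();
        for (String line : csvLines) {
            if (!line.isEmpty()) {
                paths.add(buildRequestPath(line));
            }
        }
        return paths;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("look,", "go,n", "examine,ahint");
        // each path would be sent to the server, and the request and
        // response logged for review
        replayPaths(lines).forEach(System.out::println);
    }
}
```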
At the moment I review the output rather than automatically assert against it.
In the future, I will re-use the walkthrough test but the Test DSL will have a backend that uses the REST API rather than the game API.
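The planned backend swap amounts to having the Test DSL depend on a driver interface and injecting either an in-process driver or a REST driver. A minimal sketch of that shape, with entirely hypothetical names (the stub return values stand in for real game calls and HTTP requests):

```java
// Sketch of the planned Dependency Injection: the Test DSL talks to a
// driver interface, and either backend can be injected.
// GameDriver, InProcessDriver and RestApiDriver are hypothetical names.
public class DriverSwapSketch {

    public interface GameDriver {
        String processVerbNoun(String verb, String noun);
    }

    // calls the Game class directly, as the current walkthrough tests do
    public static class InProcessDriver implements GameDriver {
        public String processVerbNoun(String verb, String noun) {
            // a real driver would call processVerbNounForPlayer(...)
            return "game: " + verb + " " + noun;
        }
    }

    // would issue the same command over HTTP to the Spark REST API
    public static class RestApiDriver implements GameDriver {
        public String processVerbNoun(String verb, String noun) {
            // a real driver would send the request and parse the JSON response
            return "GET /" + verb + "/" + noun;
        }
    }

    // the Test DSL depends only on the interface, so the same walkthrough
    // can run against either backend
    public static String walkthroughStep(GameDriver driver, String verb, String noun) {
        return driver.processVerbNoun(verb, noun);
    }

    public static void main(String[] args) {
        System.out.println(walkthroughStep(new InProcessDriver(), "go", "n"));
        System.out.println(walkthroughStep(new RestApiDriver(), "go", "n"));
    }
}
```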
The REST API is very similar to that described in my Tracks REST API Testing Case Study, but I’m using JSoup as my HTTP library and Gson for JSON parsing.
This is semi-automated at the moment with a blast of messages which I review.
I’m doing this because:
- the REST API Walkthrough demonstrates that the game can be completed through the REST API
- I want to ‘test’ other conditions through the REST API
I have also built ‘bots’ that play the game through the REST API. A bot is unaware of the game it is playing, so if there are puzzles it won’t solve them, unless by accident. Each bot is configured with a set of strategies:
```java
myFirstBot.addActionStrategy(new WalkerStrategy().canOpenDoors(false));
myFirstBot.addActionStrategy(new AllDoorCloserStrategy().setWaitingStrategy(
        new RandomWaitStrategy().waitTimeBetween(500, 2000)));
myFirstBot.addActionStrategy(new RandomDoorCloserStrategy());
myFirstBot.addActionStrategy(new RandomDoorOpenerStrategy());
myFirstBot.addActionStrategy(new RandomTakerStrategy());
myFirstBot.addActionStrategy(new RandomExaminerStrategy());
myFirstBot.addActionStrategy(new RandomUseStrategy());
myFirstBot.addWaitingStrategy(new RandomWaitStrategy().waitTimeBetween(0, 100));
```
You can probably guess what the different strategies do.
I plan to add game-specific strategies so that the bots can solve problems and not get ‘stuck’ in one room - which currently happens on one of the maps.
This allows me to simulate a user who doesn’t know what they are doing and wanders around pulling things and taking stuff in a random order.
Bots are a form of Model Based Testing. The collection of strategies is the ‘model’ that the bot uses to interact with the application, and the bot implements a traversal strategy (currently: randomly choose a strategy and execute it).
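The traversal described above can be sketched in a few lines: the model is a list of strategies, and each step randomly picks one and executes it. The strategy names below are stand-ins, not the real strategy classes.

```java
import java.util.List;
import java.util.Random;

// Minimal sketch of the bot's traversal: randomly choose a strategy
// from the model and execute it. All names here are hypothetical.
public class BotTraversalSketch {

    public interface ActionStrategy {
        String act();
    }

    public static String randomAction(List<ActionStrategy> strategies, Random random) {
        // the 'model' is the collection of strategies;
        // the traversal is a random choice over them
        ActionStrategy chosen = strategies.get(random.nextInt(strategies.size()));
        return chosen.act();
    }

    public static void main(String[] args) {
        List<ActionStrategy> model = List.of(
                () -> "go n",         // stand-in for WalkerStrategy
                () -> "examine sign", // stand-in for RandomExaminerStrategy
                () -> "take torch"    // stand-in for RandomTakerStrategy
        );
        Random random = new Random();
        for (int step = 0; step < 5; step++) {
            System.out.println(randomAction(model, random));
        }
    }
}
```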
At the moment I don’t really care what the bots do. I’m checking for: no server errors, and no exceptions in the bot or the server. I also use them for background load when I’m performing exploratory testing through the GUI or the REST API.
Instead I decided that I wanted a more flexible strategy which re-used the existing REST API testing work.
I could probably re-use my API abstractions but decided it would be easier and faster just to make sure that my bots were threadsafe and that I could start up multiple bots.
I introduced a ThreadedBot which I can start, and stop, but it will autonomously ‘play’ the game in its own thread, interacting with the other users on the game.
It is quite annoying to play alongside the bots as they have a habit of closing doors that I have just opened.
But in this way I’ve been able to use the GUI and play the game with 150 bots running in the background using the API every second or so.
I need to introduce a few more strategies and reporting capabilities in the bots but it is a fairly low tech but scalable approach to automated simulation of multiple users.
Sharp-eyed readers might notice a similarity between the ‘strategy’ approach and the ‘Screenplay’ pattern. There are certainly readability lessons for me to learn from the Screenplay pattern, but I’m refactoring my way to better bots through usage.
I also wanted a ‘walkthrough’ of the RestMud public single player game.
I certainly didn’t write 30 pages of Walkthrough - that would be madness given that the game has a tendency (nay, obligation) to change.
Instead I expanded my ‘Test DSL’ to output markdown as it executes, and report some of the output from the game.
```java
dsl.walkthroughStep("\n## Walkthrough\n");
successfullyVisitRoom("1", walkthrough("we start in room 1", "look", ""));
successfully(walkthrough("I always examine signs on walls", "examine", "ahint"));
dsl.walkthroughStep("\n## Room 2 is dark\n");
successfullyVisitRoom("2", walkthrough("north leads into room 2", "go", "n"));
```
This allows me to have an executable walkthrough which:
- checks that a user can complete the game
- outputs a CSV of commands to replay via the REST API
- generates a markdown file which I can process through pandoc to create a PDF
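The markdown-emitting side of the Test DSL can be sketched as a log that each step appends to as the test executes. The class and method names below are hypothetical; the real DSL does more (it also drives the game and writes the CSV).

```java
// Sketch of a Test DSL that emits markdown as it executes.
// MarkdownWalkthroughSketch and its methods are hypothetical names.
public class MarkdownWalkthroughSketch {

    private final StringBuilder markdown = new StringBuilder();

    // raw markdown, e.g. headings between sections of the walkthrough
    public void walkthroughStep(String text) {
        markdown.append(text).append("\n");
    }

    // report a command and the author's comment as a list item
    public void command(String comment, String verb, String noun) {
        String cmd = noun.isEmpty() ? verb : verb + " " + noun;
        walkthroughStep("- " + comment + ": `" + cmd + "`");
    }

    public String asMarkdown() {
        return markdown.toString();
    }

    public static void main(String[] args) {
        MarkdownWalkthroughSketch dsl = new MarkdownWalkthroughSketch();
        dsl.walkthroughStep("## Walkthrough");
        dsl.command("we start in room 1", "look", "");
        dsl.command("north leads into room 2", "go", "n");
        // this output would be fed through pandoc to create the PDF
        System.out.println(dsl.asMarkdown());
    }
}
```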
I explore this system all the time.
When I write the code I use TDD and explore scenarios.
I write games which are designed to explore the game DSL and create different use cases for the game.
I use Postman to interact with the REST API, and I use the game through the GUI in Chrome. (The game HTML is such that I view cross browser rendering and interaction as low risk.)
I still have a lot to do.
- Dependency Injection of a REST API abstraction to re-use the walkthrough @Test instead of the CSV replay
- Increasingly diverse and clever bots
- I chose JSoup because I can also re-use it for headless browser interaction
- I will add WebDriver into the mix as a GUI abstraction so I can switch between JSoup and Browsers
- Technical REST API testing - headers, formatting, malformed requests, etc.
- Bots running from multiple machines
- Bot testing against a cloud based deploy of the game rather than my local machine
- more of all of the above - because while I’m doing well on code coverage metrics I know that my coverage of usage models is low
- Testing the Admin interface
But of course, I do not need to do all of this prior to my workshops.