Wednesday, 15 February 2017

Should I test at the GUI Level or the API Level?

TLDR; Where to test? Can you isolate the functionality? If so, test the isolation most heavily. Then look to see what integrates, and how it integrates. Then test the integration as heavily as the ‘how’ requires.

Question: Is there a rule of thumb when deciding to test at the GUI level or API level? Are there any rules to help decide when to test at one level rather than the other?
Answer: I don’t think I use a simple rule of thumb. But I will try and explore some of the thought processes I use to make the decision.

When I am trying to decide whether to test at the GUI or the API I have to figure out:
  • what am I trying to test?
  • can I isolate the functionality I’m testing to a specific ‘level’?

And I think the word ‘isolate’ might be the ‘rule of thumb’ you are looking for.
  • Test heavily where you can isolate.
  • Test the integration heavily for the ‘isolated’ unique integration functionality
  • Test the integration as heavily as the implementation requires
  • Test the integration for emergent behaviour
  • Test the overlap between systems, during integration, more lightly
e.g. If the system has ‘create user’ functionality:
  • exposed through the GUI via an admin interface
  • exposed via an API REST call
I need to test at both levels because the functionality of ‘create user’ is not isolated to one level.

The questions I have to ask though are ‘how much’ and ‘what’ do I have to test at each level?

I may decide not to test as much through the GUI if my GUI calls the REST API because I can model that as (at least) two communicating systems. A GUI system and an API system.

I can heavily test the parts of functionality that are isolated in the API at the ‘API level’, and then lightly test the overlap of the integration between the GUI and the API.
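As a rough sketch of that weighting, here is a minimal Python example using hypothetical in-memory stand-ins (UserApi, AdminGui) rather than any real system: the API-level checks cover the ‘create user’ rules heavily, while the GUI-level check only exercises the overlap once.

```python
# Hedged sketch: UserApi and AdminGui are hypothetical in-memory stand-ins.
class UserApi:
    def __init__(self):
        self.users = {}

    def create_user(self, name):
        # Validation rules live here, isolated at the 'API level'.
        if not name or name in self.users:
            return {"status": 400}
        self.users[name] = {"name": name}
        return {"status": 201, "user": self.users[name]}


class AdminGui:
    """The GUI delegates to the API, so only the wiring needs a light check."""
    def __init__(self, api):
        self.api = api

    def submit_create_user_form(self, name):
        return self.api.create_user(name)["status"] == 201


# Heavy testing at the API level: happy path, duplicates, edge cases.
api = UserApi()
assert api.create_user("eve")["status"] == 201
assert api.create_user("eve")["status"] == 400   # duplicate rejected
assert api.create_user("")["status"] == 400      # empty name rejected

# Light testing of the GUI->API overlap: one happy-path check of the wiring.
gui = AdminGui(UserApi())
assert gui.submit_create_user_form("adam") is True
```

The point is the ratio, not the code: three-plus checks where the functionality is isolated, one where it merely integrates.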

And I want to test the unique functionality in the GUI. e.g. the GUI may have ajax calls triggered from GUI interaction - I want to test that at the GUI level.

Depending on the development process, we may find that the ‘ajax’ calls have already been covered by JavaScript unit tests. But they haven’t been run integrated in the browser page, or from all the browsers and operating systems the GUI is used in. Therefore, depending on the libraries used, I may want to test that cross-browser and on different operating systems.

If the GUI calls a different backend endpoint than the API, then I need to test both routes through to the backend code until I get to the point in the backend system where I encounter shared code - assuming there is shared code in the system used by both the REST API and the GUI triggered backend flow.

But at this point, I also need to consider how much coverage we have from unit tests which cover the shared code.

I guess my rule of thumb might look like a list:
  • build a model of the system such that you can identify the integration points and the ‘isolated’ shared functionality.
  • Test isolated functionality at the lowest points you can.
  • Work back out to higher (or ‘peer’) levels of integration and abstraction and consider the system in terms of integrating systems.
  • Look for unique functionality at the higher (or ‘peer’) levels of abstraction; you will need to test it there.
  • If you exercise unique functionality in isolation - by mocking out the integrating systems - then you might need to look at this from a technical risk perspective and decide if, or how much, you need to exercise it while integrated.
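One way to exercise unique functionality in isolation by mocking out the integrating system is sketched below in Python. The function `on_username_field_blur` and the `api_client` interface are hypothetical; the stubbing uses the standard library's `unittest.mock`:

```python
from unittest.mock import Mock

# Hypothetical GUI-layer handler whose unique logic we want to exercise
# without a real API behind it.
def on_username_field_blur(api_client, username):
    """Ajax-style handler: ask the API if the name is taken, return a message."""
    response = api_client.get_user(username)
    if response["status"] == 200:
        return "username already taken"
    return "username available"


# Mock out the integrating API system and exercise the GUI logic in isolation.
stub_api = Mock()
stub_api.get_user.return_value = {"status": 404}
assert on_username_field_blur(stub_api, "newname") == "username available"

stub_api.get_user.return_value = {"status": 200}
assert on_username_field_blur(stub_api, "taken") == "username already taken"
```

Having covered the logic this way, the technical-risk question remains: do the same paths also need exercising while genuinely integrated?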
I could also model the above as:
  • create multiple overlapping models of the system
  • consider coverage of the multiple models of the system
  • consider coverage of flows through the different models of the system
  • consider technical risk relating to the implementation and the environments the system runs on
PS. When I explain this, people often map this on to their favourite pyramid model. I don’t use any pyramid models. I prefer to model the system that I’m working with, rather than have a general model. Other people prefer pyramid models.


  1. Good thinking here, thanks.

    I tend to push testing as much as possible to the API level, because those tests tend to be more durable. They are less prone to changes and enhancements. Also, injecting at the UI level tends to require more things that can break (framework, page objects, browser, another network hop, etc.). That said, modeling the system allows you to map the tests to the appropriate level. In the example I use, testing an invoice system, we probably want to verify the "math" at the services level, and the workflow at the UI level.

    1. Thanks John. The interesting thing about the question was that the underlying assumption was about automating. But the wording was 'testing'. When 'testing', durability has less priority; when 'automating', durability clearly has an increased priority. And the notion that 'more things that can break' would suggest more 'testing' rather than more 'automating' :)

      Thanks for the example, it sounds like a good instantiation of what I meant by 'isolate'. Hopefully the 'math' can also be covered pretty thoroughly at the unit level as well. Then the integration with the math functionality at the service level, and then, as you describe, the workflow at the UI level.

      Thanks for instantiating the blog post with a real example.

  2. Good points. Many times the testing is done at unit and GUI level. The GUI is used for integration testing. The team is unaware that they are testing multiple systems at once. It's harder to test, harder to analyse, harder to prepare, but most importantly you need to wait till the GUI, API and the implementation are done. A rule that I follow is that you should not test or automate a feature at GUI level that isn't tested at unit or API level. There should be multiple suites for the isolated systems and the integration steps, and the final step is the GUI test. This way the tests are smaller, faster and more sustainable.

    1. Hi Marc, thanks for the comment. Your comment suggests that you have a stricter set of rules than I do. I test, and automate, as we go along, I don't necessarily wait till the implementation is done. And I do test, and automate, the GUI prior to API level tests - sometimes the risk profile means that we decide to start there but I would try and quickly move to the other levels. API tests do tend to run faster and can take less time to have something up and running regularly.

      Thanks, Alan

  3. Alan, great post. I am curious if it is possible to have the same suite with API and GUI tests together. I mean, in BDD style, to have a step running an API test that goes on to GUI validation?

    1. Hi, yes it is possible. I thought I had an example of that on github, but I can't find one so I'll create something in the future.

      I frequently do this - use the API to setup the conditions, use the GUI to execute the scenario, use the GUI and the API to check the results.
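      That pattern might be sketched like this in Python, with ApiClient and GuiDriver as hypothetical stand-ins sharing an in-memory backend (in a real suite they would wrap an HTTP client and a browser driver):

```python
# Hedged sketch: ApiClient and GuiDriver are hypothetical stand-ins for
# an HTTP client and a browser driver, sharing one in-memory 'backend'.
class ApiClient:
    def __init__(self, store):
        self.store = store

    def create_user(self, name):
        self.store[name] = {"invoices": []}

    def get_user(self, name):
        return self.store.get(name)


class GuiDriver:
    def __init__(self, store):
        self.store = store

    def add_invoice(self, name, amount):
        self.store[name]["invoices"].append(amount)

    def shown_invoice_count(self, name):
        return len(self.store[name]["invoices"])


backend = {}
api, gui = ApiClient(backend), GuiDriver(backend)

api.create_user("eve")                            # setup via the API (fast)
gui.add_invoice("eve", 100)                       # execute scenario via the GUI
assert gui.shown_invoice_count("eve") == 1        # check via the GUI...
assert api.get_user("eve")["invoices"] == [100]   # ...and via the API
```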

      It can also be possible, when the API and the GUI use the same authentication scheme, to authenticate on one and use the same session in the other. e.g. if the authentication is cookie based then it might be possible to login to the GUI, capture the session cookie, and send the API requests with that cookie. Same in reverse - login with the API, capture the cookie sent back, and add it to the browser cookies used by the GUI automation. If the API provides a session header then it might be possible to add that header via a programmable proxy for the GUI tests (although that is a little harder). Sometimes session headers and cookies use the same values so you can share. It depends on the authentication scheme, but I've done this in the past as well, to use the API or HTTP (APP as API) to reduce the browser flow for a scenario.
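      A toy illustration of the cookie-sharing idea, with AuthBackend as a hypothetical stand-in for a shared cookie-based authentication scheme (a real implementation would capture the Set-Cookie header from an HTTP response and inject it into the browser's cookie jar):

```python
import uuid

# Toy stand-in for a cookie-based auth backend shared by the GUI and the API.
class AuthBackend:
    def __init__(self):
        self.sessions = {}

    def login(self, user, password):
        # Issue a session token, returned as a cookie header would be.
        token = uuid.uuid4().hex
        self.sessions[token] = user
        return {"Set-Cookie": f"session={token}"}

    def whoami(self, cookie):
        # Both GUI and API requests resolve the same session store.
        token = cookie.split("=", 1)[1]
        return self.sessions.get(token)


backend = AuthBackend()
# 'GUI' login captures the session cookie...
cookie = backend.login("alan", "secret")["Set-Cookie"]
# ...which the 'API' requests can then reuse, skipping a second login.
assert backend.whoami(cookie) == "alan"
```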