Positive side effects of automated testing

We’ve recently been doing more automated testing in the SSP, and with it have come many of the benefits you might expect: the ability to spot faults as we make changes, and confidence that functionality works as specified. But I’ve also come across a whole bunch of bonus benefits, some related to managing stories and projects, and some personal gains for me.

It’s probably helpful to provide some background first as to how we use tests at the moment. Currently the business analysts in our team provide test steps with their stories, written in Gherkin syntax. We implement these tests as part of development to confirm that they work as expected, sometimes adapting them for consistency or ease of test development.
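As an illustration, the test steps attached to a story might look something like this (the feature and all the wording here are invented for the example, not taken from a real SSP story):

```gherkin
Feature: Assessment mark entry
  # Hypothetical example of the Gherkin steps a business analyst
  # might attach to a story; the wording is illustrative only.

  Scenario: A course administrator records a mark
    Given I am logged in as a course administrator
    And a student is enrolled on one of my courses
    When I enter a mark of 65 for that student
    Then the mark should appear on the student's record
```

During development we turn steps like these into executable tests, which is where the adaptations for consistency mentioned above come in.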

For each story, we write its tests on a separate branch of our central testing repository (following the Gitflow model). This means that, once the story is complete, we can create a merge request to get it into the main trunk of tests. Each request is peer reviewed by at least one other developer before being merged.

I should probably also note that these side effects are from my point of view as a developer, and I’m sure our Business Analyst colleagues could share other benefits from their side.

It helps me think in stories

Before I started writing tests for all my stories, I found it very easy to blur the work for them together. I’d be doing a bit of Story 1 and think “it’s easy enough to fit in Story 2 whilst I’m here, I’ll do that now”. But then my changes would overlap, and it would be hard to track which records were updated for which story, or to back anything out if necessary. In particular, if I hadn’t marked myself as working on Story 2 and a colleague changed the story, I’d be tied up in knots.

Writing tests around each story has given me a clear delineation of my scope and what I should be working on. Fitting in part of Story 2 is much less appealing because it also means writing tests for Story 2, and is a much more concrete diversion from what I’m working on.

Similarly, it stops me wandering off-track without notifying anybody. Where I spot little things that need fixing, I’ve become better at creating documented JIRAs for them, mostly so that I have a branch name to write the test on.

It’s easier to document

At the end of each story, we document how the new functionality holds together technically. Having written and implemented tests by this point, I find I’m much more aware of the wider picture of the story. I have a better understanding of what the functionality is, how it fits into other parts of the system, and where its limitations lie. From there, the documentation is much easier to write.

It helps me and the Business Analyst define the story

Due to the technical complexity of EUCLID, we often need multiple discussions between developers and business analysts about how a story should be specified and implemented. By writing tests from the first discussion and refining them as we go, it’s much easier to see how we previously defined a story, and what we want to change.

It helps highlight constraints and conflicts

Writing the “Given” step of a test often provides an indicator of the story’s constraints. For example: “Given I have an assessment question which is an exam for an APT course and has a failing mark against it”. This may seem like a lot of information for a single step (and in the real world it probably is), but it captures some key information that I could miss in implementation: that exams need to be handled differently, that non-APT courses don’t adhere to these rules, and that this is specifically about failing marks.
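Where a compound step like that is carrying too much, it can be split so each constraint sits on its own line. A sketch of what that might look like, with the “When”/“Then” behaviour invented purely for illustration:

```gherkin
Scenario: Failing exam marks on APT courses
  # Each constraint from the compound step gets its own line,
  # so none of them is hidden during implementation.
  Given I have an assessment question
  And the question is an exam
  And the exam belongs to an APT course
  And the question has a failing mark against it
  # The outcome below is hypothetical, not from a real story.
  When I view the assessment
  Then the failing mark should be flagged for review
```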

In a story which requires a lot of technical set-up, these bits of information can be important to have up-front. Knowing that we need to handle exams differently may lead me to make the assessment type a key part of setup, rather than a random dropdown.

Working with multiple test scenarios across multiple stories can also help bring conflicts to light. Where one story says that “I should see the assessment structure” and another says “For exams, I shouldn’t see the question details”, it’s easy to pick up on the clash early on, have a conversation about it, and feed that back into the stories.
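As a sketch of that clash (the scenario wording here is invented), the two stories might produce a pair like this:

```gherkin
# From one story:
Scenario: Viewing an assessment
  Given I have an assessment containing questions
  When I open the assessment page
  Then I should see the assessment structure

# From another story:
Scenario: Viewing an exam assessment
  Given I have an exam assessment containing questions
  When I open the assessment page
  Then I should not see the question details

# Does "the assessment structure" include question details for exams?
# Seeing both scenarios side by side makes that question hard to miss.
```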

It helps me see what my colleagues are doing

We work in quite discrete teams (and sometimes discreet teams!) in the SSP, and it can be hard to keep track of what your colleagues are working on at any one time. We do have a weekly stand-up for developers and we show work at our team meetings but, as a developer, it’s nice to sometimes see behind-the-scenes a bit more.

In general, a good way to keep a handle on what other developers are working on is through peer review. But due to some technical limitations, we can’t easily do individual code review in EUCLID, and bringing someone in to look at development normally involves a vast introduction to the project. Our automated tests, however, are written in a completely external system which can easily have code reviews (as described above), and that gives us the insight.

By seeing the test scenarios that other developers write, and reviewing them, I am able to see what their project will enable users to do, how it integrates with other parts of EUCLID, and what my colleagues are currently working on.

It helps me understand our system layout

We’re using Cucumber at the moment, which gives us features in subdirectories for free (that is, we can have any folder tree structure without configuring Cucumber differently). This has led us to group our test scenarios into folders by the component of EUCLID they test: we have a folder for Student Hub tests, one for the Assessment Tools and one for Self Service. Where projects span multiple components, their tests touch multiple folders (for example, there are assessment-related tests in all three of the folders listed).
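The resulting layout looks roughly like this (the folder grouping is as described above; the individual feature file names are invented for the example):

```
features/
├── student-hub/
│   └── assessment_results.feature
├── assessment-tools/
│   └── exam_marking.feature
└── self-service/
    └── assessment_feedback.feature
```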

This sort of grouping and structure isn’t possible in EUCLID’s code base, meaning that testing is one of very few places that allows us to see the logical groupings of functionality and tools within the system. On top of that, because we can freely change the subdirectories, we can keep these folders in flux as the core functions of EUCLID change.

Our automated testing future is looking bright at the moment, as we build up a strong corpus of tests in various parts of EUCLID. A lot of the gains we’re making are expected and intentional, but it has also been eye-opening to learn more about our systems and project management through this development.