Automated tests — have we created a monster?

Ron Shoshani
R&D and Stuff
4 min read · Oct 18, 2014


I hope that you all agree that automation is important, but what type of tests should you implement and to what extent?

At my previous company we developed an Application Performance Management (monitoring) tool, which meant we were sensitive to many factors: JVM version, application server type and version, OS type and version, protocols (HTTP, JDBC, SOAP, RMI, …), frameworks, and more. We decided to create an end-to-end system-test environment to verify that our software functioned correctly under all of these scenarios.

We ended up with an amazing architecture that spins up hundreds of VMs (on ESX and OpenStack nodes), installs the latest build, runs the tests, verifies the expected results and sends a report with passed/failed indications. This reduced the manual QA work needed to release a build to practically zero, but it came with many complications:

  1. Run time — running the whole test suite took several hours. You can reduce the runtime by adding more machines, but that means spending more money.
  2. Complexity — each test usually traverses several components, which makes it harder to track down the reason for a failure.
  3. False positive results — following on from the previous point, false positives are the most difficult to track down, and they are quite common in system tests because there are so many moving parts (code, environment, machines, tools, …).
  4. Resources — system tests usually require quite a lot of resources (CPU/RAM). It might not be as extreme as in our case, but the more machines/nodes you have, the more time you invest in maintenance.

Here are my thoughts on how to build a balanced automation suite for your product.

General

The description below relates to a system that combines backend and frontend components. For products that are mainly a mobile app or mostly frontend, I think the mix of tests will be different.

Here is roughly what I think the mix of test types should be:

Unit tests

The majority of your tests should be unit tests. They are quick to run, validate the basic building blocks of your code, are invaluable when you need to restructure that code, and are a great way to document what you’re doing. Many people find them annoying because you need to use stubs and mock objects, and so they claim that unit tests are not worth the effort. I think otherwise.
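To make the stubs-and-mocks point concrete, here is a minimal sketch using Python’s built-in `unittest` and `unittest.mock`. The `SignupService` and its injected mailer are hypothetical examples, not anything from a real product; the idea is simply that the mock replaces a slow external dependency so the test stays fast and focused.

```python
import unittest
from unittest.mock import Mock

# Hypothetical service under test: it notifies a user via an injected mailer,
# so the real SMTP client can be replaced by a mock in tests.
class SignupService:
    def __init__(self, mailer):
        self.mailer = mailer

    def register(self, email):
        if "@" not in email:
            raise ValueError("invalid email")
        self.mailer.send(to=email, subject="Welcome!")
        return True

class SignupServiceTest(unittest.TestCase):
    def test_register_sends_welcome_mail(self):
        mailer = Mock()  # stand-in for the real mail client
        service = SignupService(mailer)
        self.assertTrue(service.register("a@b.com"))
        mailer.send.assert_called_once_with(to="a@b.com", subject="Welcome!")

    def test_register_rejects_bad_email(self):
        service = SignupService(Mock())
        with self.assertRaises(ValueError):
            service.register("not-an-email")
```

Run it with `python -m unittest`; no network, no real mail server, and it finishes in milliseconds — which is exactly why unit tests can form the bulk of the suite.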

Convincing developers to implement unit tests (and, god forbid, write them before coding, aka TDD) might not be a simple task because the value is not always clear right away. Pick a developer who gets it (or even better, do it yourself), and ask him (or her) to add unit tests to the next feature he implements. Let him choose the needed frameworks and tools (it makes the challenge more interesting) and then have him present the results to a bigger audience, for example his team. Find people who like the idea, and use them to broaden the coverage of unit tests.

To improve your chances of success, get support from your team leads. Get them onboard with why it’s important, and have them care about and help with the mission of implementing unit tests.

Integration tests

Integration tests exercise more than one component together. Whether you need them really depends on how your product is implemented; in our case there were too many dependencies to use mock objects, so it made sense to add another component to the mix and test them together.
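A minimal sketch of the difference: instead of mocking the database layer, an integration test wires the component to a real (here, in-memory SQLite) database and exercises both together. The `UserStore` class is a hypothetical example, not code from the product described above.

```python
import sqlite3

# Hypothetical data-access component. An integration test talks to a real
# database engine rather than a mock, so SQL, schema, and driver behavior
# are all covered together.
class UserStore:
    def __init__(self, conn):
        self.conn = conn
        self.conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT UNIQUE)")

    def add(self, name):
        self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))

    def count(self):
        return self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

# Integration-style check: two components (store + real SQLite) in one test.
store = UserStore(sqlite3.connect(":memory:"))
store.add("alice")
store.add("bob")
print(store.count())
```

The trade-off is visible even at this scale: the test catches real SQL mistakes a mock would hide, but it runs slower and depends on more moving parts.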

System tests

These are the end-to-end tests which involve all components of your product. I personally prefer that system tests not invoke the UI directly, but rather one layer below it (a RESTful API, for example), mainly because the UI changes frequently, which means high maintenance in fixing the tests.
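The “one layer below the UI” idea can be sketched like this: the test drives the system through its HTTP API and asserts on the response, never touching the screen. The tiny in-process server and the `/api/health` endpoint below are hypothetical stand-ins — a real system test would target a deployed build of the product — but the shape of the check is the same.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-in for the product's REST layer.
class ApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def check_health(base_url):
    # Drive the system through its API, one layer below the UI.
    with urllib.request.urlopen(base_url + "/api/health") as resp:
        return resp.status == 200 and json.load(resp)["status"] == "ok"

server = HTTPServer(("127.0.0.1", 0), ApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
ok = check_health("http://127.0.0.1:%d" % server.server_port)
server.shutdown()
print("health check passed:", ok)
```

Because the test speaks JSON over HTTP instead of clicking through screens, a UI redesign doesn’t break it — only an actual API change does.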

One thing you should do from time to time is prune tests; otherwise you end up with a bloated suite that takes too long to run. Every now and then, go over the tests and remove the ones that are no longer relevant, either because the product has changed or because another test already covers the same ground.

Another best practice to follow — log everything. As each test invokes many components, it’s sometimes difficult to troubleshoot a failing test. Logs were always my best friend in these cases.
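As a small illustration of “log everything”, giving each component its own named logger means a failed end-to-end run tells you which moving part to look at first. The component names, VM name, and build ID below are made up for the example.

```python
import logging

# One named logger per component, so a failure's log trail identifies
# which part of the pipeline was running.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
)

def provision_vm(name):
    log = logging.getLogger("provisioner")
    log.info("spinning up VM %s", name)
    return {"name": name, "state": "running"}

def install_build(vm, build_id):
    log = logging.getLogger("installer")
    log.info("installing build %s on %s", build_id, vm["name"])
    vm["build"] = build_id
    return vm

# Hypothetical pipeline step names, just to show the log trail.
vm = install_build(provision_vm("esx-node-17"), "1.4.2-rc1")
```

When the suite fails at 3 a.m., grepping that trail by logger name and timestamp is far faster than re-running hours of tests to reproduce the problem.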

UI tests

I’m sure that many people will disagree with what I’m about to say regarding UI tests, but that’s OK, as it very much depends on the type of product you are developing. In general, I think you should have a pretty thin layer of UI tests that gives you the coverage of a smoke test — make sure that everything is intact and nothing major is broken. I usually try to avoid too many UI tests because they break more frequently than others.
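A “thin smoke layer” can be as simple as checking that a handful of critical elements still exist in the rendered page. This sketch uses only the standard-library `html.parser`; the element IDs and the inline page are hypothetical — a real check would fetch the live page (or drive a browser) and look for your own product’s critical elements.

```python
from html.parser import HTMLParser

# Smoke check: collect element ids from rendered HTML and verify that the
# few things nothing should ever break (nav bar, login form) are present.
class SmokeCheck(HTMLParser):
    def __init__(self):
        super().__init__()
        self.seen_ids = set()

    def handle_starttag(self, tag, attrs):
        for key, value in attrs:
            if key == "id":
                self.seen_ids.add(value)

def page_is_intact(html, required_ids=("nav-bar", "login-form")):
    check = SmokeCheck()
    check.feed(html)
    return all(i in check.seen_ids for i in required_ids)

# Hypothetical rendered page; a real suite would fetch it from the app.
page = '<nav id="nav-bar"></nav><form id="login-form"></form>'
print(page_is_intact(page))
```

The point of keeping this layer thin is that it asserts only on stable anchors (IDs of major elements), so routine UI restyling doesn’t break the suite.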

I’m a big believer that the developer who broke a test should be the one to fix it, so I try to “protect” my developers’ time and not burden them with the fuss of UI tests too much. If you can afford to put a full-time person on UI tests, I admit it has great value.

I don’t think there is one right answer to how you create the best automation suite for your product. Feel free to share your thoughts on what you’d do differently.
