5 Best Practices for Automated Browser Testing

Automated testing entails much more than simply creating tests and enabling them. A “set it and forget it” approach won’t get you very far with automated tests, particularly automated browser tests, which interact with the ever-changing frontend of your application or website.

The real work ultimately comes with maintaining and evolving your tests, so it’s extremely important to design them in a logical, maintainable way. Below are five best practices that we suggest when building Ghost Inspector tests, and automated browser tests in general.

1. Keep your tests modular and DRY (Don’t Repeat Yourself) by using our test import feature.

This feature allows you to section off specific sequences into reusable “sub-tests” and include them within other tests. For example, you could have a “Login” test and reuse it anywhere you need to log in. This cuts down on repetition tremendously and ensures that changes only need to be made in one place.
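
Ghost Inspector’s test import is a UI feature, but the underlying pattern maps directly onto code-based browser tests too. As a rough illustration, here’s a minimal sketch of the same idea in Python with Selenium, using hypothetical URLs and selectors: the login sequence lives in one reusable helper that every test calls, so a change to the login flow only needs to be made in one place.

```python
# DRY login "sub-test" in plain Selenium terms. URLs and selectors
# are hypothetical stand-ins for your own application.
from selenium import webdriver
from selenium.webdriver.common.by import By

def login(driver, username, password):
    """Reusable 'sub-test': performs the full login sequence."""
    driver.get("https://staging.example.com/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

def test_update_account():
    driver = webdriver.Chrome()
    try:
        login(driver, "test-user", "s3cret")  # one shared login step
        driver.get("https://staging.example.com/account")
        # ... assertions against the account page go here ...
    finally:
        driver.quit()
```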

2. Reuse your tests across different environments.

If you wish to execute the same tests on different websites/environments, for instance, on both staging and production, there’s no need to duplicate the tests just to change the start URL. Instead, use our API to execute the test or suite and override the start URL on the fly.
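
As a rough sketch of what that looks like, the snippet below uses Python’s standard library to trigger a suite run while overriding the start URL. It assumes the v1 execute endpoint with apiKey and startUrl query parameters (check the current API documentation for exact names); the API key and suite ID are placeholders.

```python
# Sketch: run the same suite against two environments by overriding
# the start URL at execution time. Assumes Ghost Inspector's v1
# execute endpoint with apiKey and startUrl query parameters; the
# key and suite ID below are placeholders.
import json
import urllib.parse
import urllib.request

API_KEY = "your-api-key"
SUITE_ID = "your-suite-id"

def execute_suite(start_url):
    params = urllib.parse.urlencode({
        "apiKey": API_KEY,
        "startUrl": start_url,  # override the suite's default start URL
    })
    url = f"https://api.ghostinspector.com/v1/suites/{SUITE_ID}/execute/?{params}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Same suite, two environments, no duplicated tests.
execute_suite("https://staging.example.com")
execute_suite("https://www.example.com")
```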

3. Keep tests as simple as possible, and aim to test one specific feature with each test.

Avoid the temptation to test a bunch of different things within one test. For example, I wouldn’t suggest a single test that logs into your system, updates an account, adds an item to the shopping cart, and then checks out. That test is going to be brittle, since any issue along that workflow could break it, and when it does break, you won’t know exactly where the issue is without a good deal of review. Try to split that workflow into individual tests that exercise each feature separately. This may require you to repeat specific operations a number of times, like logging in, but suggestion #1 above makes that very easy.
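
To make the split concrete, here’s a sketch continuing the Selenium example from tip #1: the one long workflow becomes three focused tests, each reusing the login helper, so a failure points directly at the feature that broke. The selectors, URLs, and pytest usage are hypothetical.

```python
# Splitting one long workflow into focused, independent tests.
# Reuses the `login` helper from the tip #1 sketch above.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()
    yield d
    d.quit()

def test_update_account(driver):
    login(driver, "test-user", "s3cret")
    driver.get("https://staging.example.com/account")
    # ... assert only the account-update behavior ...

def test_add_to_cart(driver):
    login(driver, "test-user", "s3cret")
    driver.get("https://staging.example.com/products/widget")
    driver.find_element(By.ID, "add-to-cart").click()
    # ... assert only the cart contents ...

def test_checkout(driver):
    login(driver, "test-user", "s3cret")
    # Seed the cart through a shortcut (e.g. an API call or fixture)
    # rather than re-driving the add-to-cart UI, so this test covers
    # checkout and nothing else.
    driver.get("https://staging.example.com/checkout")
    # ... assert only the checkout flow ...
```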

4. Keep tests to a reasonable length.

There’s currently no hard limit on test length (other than 10 minutes of run time). However, in my experience, the shorter your test is, the more durable and effective it will be. If you get beyond 30–40 steps, your test may be getting too complicated, not because Ghost Inspector can’t execute it, but because it’s more likely that something will go wrong and it will be difficult to troubleshoot. If you have a test that’s 100 steps long and step #97 fails, trying to reproduce that scenario in your browser can be painstaking work. Testing specific features may require a number of setup steps to get the test into a certain state, but try to stay conscious of how long your test is becoming.

5. Sometimes you have to ask yourself, “Does an automated test really make sense here?”

Every type of automated testing out there comes with a “maintenance fee”, from unit tests up through end-to-end tests. As the complexity of the test and the volatility of the feature you’re testing increase, so does the required maintenance. The primary motivation for automated testing is to save you time. That happens when the automated “maintenance fee” is cheaper than the cost of manually testing a feature, but realistically, that’s not always going to be the case. There will sometimes be situations where it’s just not practical to maintain a Ghost Inspector test, even if you can technically “make it work”. You’ll need to use your judgment as to whether an automated test makes sense, given the situation.