Ghost Inspector lets you record and manage tests that check specific functionality in your website or application. We store your tests in the cloud and run them continuously, alerting you if anything breaks.
We provide a test recorder for both Chrome and Firefox. When you’re ready to record a test, click on the extension’s toolbar icon and follow the instructions. All your actions, such as clicking elements or filling in forms, will be recorded. Your test can span multiple URLs and even include user logins (though you should read our security section first). You can also specify assertions that must be true for your test to pass. For example, you can assert that certain text must be present on the page, like a username after you log in.
After you’re done recording your actions and assertions, your test is synced to the Ghost Inspector service where it can be run manually, on a scheduled interval, or through our API. When a test is run, it’s the equivalent of a human user performing the actions in a web browser (just as you did during the recording process). Ghost Inspector can notify you with details when your test fails, or when the resulting screenshot changes by a predetermined amount.
Nope. In fact, it doesn’t even have to be your website. You can record a test on any website that can be reached in your web browser. We do encourage you to record and run tests on your own web properties though.
Nope. You can record your tests at whatever URLs you choose. You could use, for example, a staging server — as long as that URL is accessible to us. The starting URL of a test can also be edited or overridden at any time. You could record a test on your live site, then point it to a staging server, or vice versa.
Currently our service offers Google Chrome and Mozilla Firefox options in a Linux environment.
We are planning to expand our browser options in the future. We've focused initially on "smoke" testing applications for bugs and regressions (issues that break your application in general) rather than cross-browser testing for browser-specific bugs. As the service matures, we're beginning to explore the latter and expect additional options, such as Internet Explorer, Edge, and Safari, to become available. We do not yet have ETAs for these browsers.
In the meantime, our system allows you to export your tests for use with Selenium, a popular browser automation tool with plugins for various browsers and operating systems.
By default, our system runs tests in parallel. This means there's no "ordering": the tests all run alongside each other. We do this for a couple of reasons. Keeping your tests independent and focused on a single task prevents dependency failures and makes troubleshooting easier. It also speeds up your testing dramatically, because we can run 30+ tests at a time instead of one at a time.
For those reasons, we discourage folks from running their tests sequentially. That said, we understand there's sometimes a need for it, and we do offer an option for running tests sequentially in a specific order. You'll find it under Settings > Concurrency in your suite. This screen shows an explanation of the feature along with a checkbox to enable it. Check the box to enable the concurrency limitation, then set the slider to "1 test" for sequential runs. Tests will then run one after another in alphabetical order (the same order shown on the suite screen) when you trigger the suite. You can also pass variables from each test run to the next, so that variables created in earlier tests (such as usernames) are available to later tests in the suite run. Finally, there's an option to abort the suite run (and skip the remaining tests) when a test fails, which helps save time in the event of a failure.
You can place numbers in front of your test names if you'd like to control the ordering. We recommend using a "01", "02", "03" type scheme.
Just note that every test still runs in a new session, so if a login is required, it must be performed at the start of each test. Our test modularization feature helps make this easy and keeps those steps in one place.
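Because sequential runs execute in alphabetical order, unpadded numeric prefixes can sort in a surprising way. A quick illustration of why the zero-padded scheme matters:

```python
# Sketch: why zero-padded prefixes matter when tests run in
# alphabetical order (the same order shown on the suite screen).
unpadded = ["1 Login", "10 Checkout", "2 Add to cart"]
padded = ["01 Login", "02 Add to cart", "10 Checkout"]

# Plain numbers sort lexicographically: "10" comes before "2".
print(sorted(unpadded))  # ['1 Login', '10 Checkout', '2 Add to cart']

# Zero-padding preserves the intended sequence.
print(sorted(padded))    # ['01 Login', '02 Add to cart', '10 Checkout']
```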
Unfortunately, neither Flash nor WebGL is currently enabled in our browsers during testing. We've opted to forgo Flash support because we're unable to record or play back actions inside a Flash object, and it's unlikely that we'll add Flash support in the future. We know some folks depend on Flash and that this presents a challenge for them. We apologize for any inconvenience there.
WebGL is a technology that we're still actively exploring and may attempt to support in the future.
No. Every single test run takes place in a clean, independent environment without any cache, cookies, history or session data being carried over from other test runs. This creates a behavior similar to what you would expect to see in your own browser's "incognito" mode.
Not at the moment. All tests are run on a system with the clock set to UTC time.
We have a status page showing our current IP addresses that can be used for whitelisting or exclusion from analytics. Please note that while these rarely change, they are technically subject to change at any time. We do our best to announce any upcoming changes ahead of time.
Yes, currently test runs can last no longer than 10 minutes. If your test is still running after 10 minutes, it will time out and exit.
By default, tests run concurrently in no particular order. Capacity can vary, but in general, you'll see somewhere in the range of 30 to 50 tests running concurrently. You can trigger as many as you'd like, however, and we'll queue them all.
We run tests concurrently for a couple of reasons. Keeping your tests independent and focused on a single task prevents dependency failures and makes troubleshooting easier. It also speeds up your testing dramatically, because we can run 30+ tests at a time instead of one at a time.
If you would prefer to limit concurrency (for performance reasons) or run your tests sequentially, we have an option for that. You'll find it under "Settings" > "Concurrency" in your suite. If you enable this option, the slider can be used to constrain the number of tests that are run concurrently. If you would like the suite to run sequentially, set this value to 1 and the tests will be executed one at a time in alphabetical order.
Yes. We provide "Cancel" buttons in the application next to your active test and suite runs which can be used to immediately cancel the execution. This is not yet available in the API. Please note that tests count toward your monthly test run allotment when they are triggered; a canceled test still counts as a test run for account usage purposes.
Yes, you can trigger tests from a number of geographical locations around the world. This can be set under "Settings" > "Browser Access" in your test, "Settings" > "Test Defaults" in your suite, or overridden through the API when triggering.
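As a rough sketch, overriding the location when triggering through the API might look like the following. The `region` parameter name and value shown here are assumptions for illustration; check the API documentation for the exact parameter names and supported regions.

```python
from urllib.parse import urlencode

# Sketch of triggering a test from a specific geographic location via
# the API. The "region" parameter name and its value are assumptions;
# consult the API reference for the exact names your account supports.
def execute_test_url(test_id: str, api_key: str, region: str = None) -> str:
    params = {"apiKey": api_key}
    if region:
        params["region"] = region
    return (f"https://api.ghostinspector.com/v1/tests/{test_id}"
            f"/execute/?{urlencode(params)}")

# "abc123" and "MY_API_KEY" are placeholders.
url = execute_test_url("abc123", "MY_API_KEY", region="eu-central-1")
print(url)
```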
We currently focus on functional testing, so we do not have proper features for load and performance testing. You can technically trigger your test a number of times at once if you want a very rough stress test. However, due to the way our queueing works, the timing and throughput of the tests can vary. In other words, if you trigger 100 copies of a test, our system will not run all 100 instantly; it will queue them and process them in the 30 to 50 concurrent test range (depending on capacity at the moment).
On the performance side, we do capture test duration, which can serve as a rough metric for performance. However, because we are a functional testing tool, we do not prioritize the speed of the test; we prioritize ensuring that your system is working as intended, which means the test may slow down at certain points waiting for elements, requests, and so on. That makes the duration field an imprecise measurement if you're looking for accurate performance metrics.
The short answer is that we hope to offer more load/performance features in the future, but currently Ghost Inspector is focused on functional testing and not an ideal tool for precisely testing load and performance.
Yes, we provide an API which allows you to execute a test, or suite of tests, remotely. You'd likely want to point these tests at your staging server (which you can easily do). Tests are executed asynchronously, so executing an entire suite of tests shouldn't take much longer than its longest test. The API returns a pass/fail result for the executed test(s) which you can use in your continuous integration process. We have Build & Deployment Integration docs available.
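A minimal sketch of consuming the pass/fail result in a CI step might look like this. The exact response shape (a "data" payload with a boolean "passing" field on each run) is an assumption based on the documented pass/fail result; verify the field names against the current API reference.

```python
import json

# Sketch: decide whether a CI build should pass based on an
# execute-API response body. The "data"/"passing" field names are
# assumptions; check the API reference for the real response shape.
def suite_passed(response_body: str) -> bool:
    result = json.loads(response_body)
    data = result.get("data", [])
    runs = data if isinstance(data, list) else [data]
    return all(run.get("passing") for run in runs)

# Hypothetical sample response with one failing test.
sample = ('{"data": [{"name": "Login", "passing": true}, '
          '{"name": "Checkout", "passing": false}]}')
print(suite_passed(sample))  # False, so the CI step should fail
```

In a CI script you would fetch the response with your HTTP client of choice, call a check like this, and exit nonzero when it returns False.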
Yes, there is a Selenium export option available in the "More" menu of your tests and suites. This is available for all plans.
If you need to exclude traffic from your analytics system or otherwise, there are currently two ways you can detect traffic from our system to your website:

1. Our user agent string, which contains "Ghost Inspector":

   Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0 Safari/537.36 Ghost Inspector

2. Our IP addresses, which are published on our status page.
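If you'd rather filter server-side, a minimal check on the incoming User-Agent header could look like this (the helper name is hypothetical; any framework's request object will expose the header):

```python
# Sketch: detect Ghost Inspector traffic by inspecting the
# User-Agent header, which contains "Ghost Inspector".
def is_ghost_inspector(user_agent: str) -> bool:
    return "Ghost Inspector" in user_agent

ua = ("Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/63.0 Safari/537.36 Ghost Inspector")
print(is_ghost_inspector(ua))  # True
```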
Are you using a dual or multiple monitor setup? There is a known issue in Chrome that causes problems when attempting to interact with a Chrome extension popup when two or more monitors are active. For the time being, we suggest using the extension in a browser window on your primary monitor, or in a single monitor setup.
We're always happy to help troubleshoot or answer other questions you might have, just visit our support section to contact us.