At Ghost Inspector, we pride ourselves on creating tools that simplify the testing process, making it possible for anyone to build a test without coding skills. With our wide-ranging usability options, even our seasoned users sometimes wonder if they’ve fully tapped into Ghost Inspector’s capabilities. So we’ve put together this guide to detail Ghost Inspector’s more nuanced and advanced features.
While we’re excited about our specialized features like visual, email, and accessibility testing, this guide’s focus will be on testing techniques that set a solid foundation for your tests. Because each Ghost Inspector user has unique testing needs, we’ve put together a variety of approaches and options. Our goal is to equip you with diverse strategies and best practices, so you can tailor your testing process to suit any challenge. Let’s get started!
Best practices for recording and reusing tests
When creating a foundational test, like a simple login that can be incorporated into other tests, here are three helpful tips to streamline your process and guarantee smooth test execution:
1. Always conclude your tests with an assertion.
Assertions are key after interactive actions, like form submissions, in order to confirm the expected outcome. For a login test, an assertion confirms both the successful login and the return to the homepage. When recording, assertion mode allows you to select specific elements on the page to validate a test’s success. This step is important to ensure your test accurately checks for the correct outcomes.
2. Manage your tests more easily with the ‘Import Only’ setting.
If a test is designed to be a reusable module (like a login test), Ghost Inspector offers a setting to mark it as Import Only. This setting indicates that the test isn’t intended to be run independently but used as a part of other tests. This simplifies the test interface and provides a clear overview of where the test is being imported. Plus, marking a test as Import Only ensures consistency and saves time when it comes to creating future tests.
3. Keep your interface organized by collapsing imported steps.
To keep your test interface clean, Ghost Inspector allows you to collapse imported steps. This feature is particularly useful when you frequently access a test to modify or troubleshoot steps unrelated to the imported module.
Note: Reusing test steps not only saves time but also ensures consistency across different tests. By adopting these practices, you can boost your testing efficiency across the board!
Running tests across various browsers, screen sizes, and geographical locations
In Settings, there are options for selecting the browser version, screen size, and geolocation for running tests. You can set a test to run on both the latest versions of Chrome and Firefox; choose various screen resolutions like 1280px, 1440px, and mobile screen sizes; and for geolocations, tests can be configured to run in different regions, such as Northern Virginia or London, England.
When you run a test with these settings, the system will execute a combination of all the selected parameters. For example, if you choose:
- two browsers
- two screen sizes
- two geolocations
Then, Ghost Inspector will launch eight instances of the test. Each row in the test summary represents a unique combination of these settings.
This feature is particularly useful for those who want to test a single scenario across diverse configurations. With these multiple selection settings, you can easily apply various configurations to a single test and run them all with just a click.
Additionally, these settings aren’t limited to individual tests. You can apply them at the suite level as well. In the test defaults section of a suite, you can also make multiple selections for the browser version, screen size, and geolocation. This is a handy feature for achieving extensive coverage in terms of screen sizes, browsers, and locations.
Note: Remember that each test instance counts as a separate run. So, if you set up a test with multiple configurations, it will consume more test runs than a single instance test. This is important to consider in terms of your testing quota. But for those seeking extensive coverage across different browsers, screen sizes, and locations, these settings offer great convenience and efficiency.
Using variables to customize your testing environment
Variables remove hard-coded values from various parts of your testing process, including test settings, start URLs, and individual steps. They allow for dynamic substitution, enabling you to easily change these values when needed. For example, when creating a login test, you can use a variable for the email address; that way, you can alter the email address dynamically when running the test or suite. Variables can also be used for element selectors, allowing customization or storage of these selectors for different uses.
There are many useful ways to apply variables in automated testing. Alongside your login variable, you can add them to your start URL at the suite level. Simply replace the recorded website URL with a variable, allowing customization of the start URL. Now you have two variables: one for the login and another for the website URL, each with their default values.
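As a sketch, suppose the two variables are named websiteUrl and loginEmail (the names and default values here are placeholders); the test’s configuration would reference them with Ghost Inspector’s double-curly-brace substitution syntax:

```
Start URL:          {{websiteUrl}}   (default: https://www.example.com)
Login email field:  {{loginEmail}}   (default: qa@example.com)
```

When no override is supplied, each variable falls back to its default, so the test behaves exactly as recorded.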
Running the test with these default settings will result in no apparent change, as the test behaves as originally recorded. However, the true advantage lies in the ability to pass different values for these variables through methods like APIs, custom-run settings, or data sources.
The simplest way to modify the variables is via the Run test with custom settings option, found under the More menu. Here, you can manually enter different values for the variables, such as altering the website URL. Though this method is direct, it may be time-consuming for frequent changes.
Instead, we recommend using the Run Again buttons located next to each test result. Clicking these buttons automatically fills in the variables, making it easy to modify them as needed. This way, you can quickly set a test to run on a staging site by changing the variables accordingly. The custom variables that you assign will then be built in when the test runs. You can also pass these variables through the API for even further flexibility in test execution.
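As a concrete sketch of the API route, the snippet below builds an execute URL that passes custom values as extra query-string parameters. The API key, test ID, and variable names are placeholders for your own; double-check the endpoint shape against the current API documentation.

```shell
# Sketch: trigger a test run with custom variable values via Ghost Inspector's API.
# The API key, test ID, and variable names below are placeholders.
API_KEY="your-api-key"
TEST_ID="your-test-id"

# Custom variables ride along as additional query-string parameters.
EXECUTE_URL="https://api.ghostinspector.com/v1/tests/${TEST_ID}/execute/?apiKey=${API_KEY}&websiteUrl=https://staging.example.com&loginEmail=qa%40example.com"
echo "$EXECUTE_URL"

# Uncomment to actually trigger the run:
# curl -s "$EXECUTE_URL"
```

Swapping `websiteUrl` to your staging URL here is all it takes to point the same recorded test at a different environment.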
Using variables in data-driven testing and environments
When it comes to data-driven testing and the use of environments in automation, an innovative approach we’ve adopted involves using spreadsheets to manage variable values within Ghost Inspector. By storing a spreadsheet with Ghost Inspector, you can have a set of values ready for these variables. In a test, you might have different rows in a spreadsheet containing various email addresses. When you run the test, Ghost Inspector triggers a test run for each row of data, allowing you to test multiple scenarios efficiently. This method is particularly useful for scenarios like testing a signup or contact form with data like invalid email addresses or empty fields. You can turn any part of your test into a variable and then use a spreadsheet to run through different combinations of these variables.
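For example, a data source for a signup form might look like the CSV below, where each column header becomes a variable available to your test steps and each row triggers its own test run (the column names and values are illustrative):

```
loginEmail,expectedMessage
valid.user@example.com,Welcome back
not-an-email,Please enter a valid email address
,Email is required
```

Three rows means three runs: one happy path, one invalid email, and one empty field.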
Another powerful way to use this feature is to support different testing environments. For example, you might have a spreadsheet for a staging environment with its own URL and email, and another for the production environment. By storing these as data sources within Ghost Inspector, you can easily switch between environments without manually uploading spreadsheets each time.
These data sources are managed at the organizational level and can be easily updated or replaced. When running a test, simply select the appropriate environment from your stored data sources, and Ghost Inspector will use the corresponding variables. This allows you to design tests where variables change based on the environment, streamlining the process significantly.
You can also set default data sources for your tests. For example, you might have ‘staging’ as your default, but you can easily override this to run tests in the ‘production’ environment. This flexibility extends to API runs as well, where you can specify which data source to use. By storing and managing these variables and environments within Ghost Inspector, the process becomes more efficient, allowing you to focus on the core aspects of your testing strategy.
Troubleshooting sequential testing
Sequential testing is an important concept in automated testing, especially in scenarios where tasks must be carried out in a specific order. By default, Ghost Inspector runs tests simultaneously, but there are situations, like needing to create an account and then use it in another test, where sequential execution is necessary.
One way people attempt to handle this is by creating a “mega test,” or combining all steps into one extensive test. While this method does ensure sequence, it also creates a lengthy and complex test, which often leads to challenges in troubleshooting and increased flakiness from the sheer number of steps involved. This not only makes it difficult for users when a step fails deep into the test but also complicates support efforts.
So what’s the solution? Instead, create separate tests for each action but instruct Ghost Inspector to run them one after the other, which allows data to be passed between them.
Let’s put it in action: Say you’re creating a sign-up test. You’d record a user signing up, ensuring to capture an assertion. However, when creating a new account, a unique email is often required. To achieve this, simply use a timestamp variable, ensuring a unique email each time. This email variable can then be passed on to subsequent tests.
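Assuming Ghost Inspector’s built-in timestamp variable and a mail setup that accepts plus-addressing, the email field’s value might look like this (the address is illustrative):

```
test.user+{{timestamp}}@example.com
```

Because the timestamp changes on every run, each signup gets a unique address, which later tests in the sequence can then reuse.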
To ensure that tests run in the correct order, use a numbering system in the test names (e.g., 001, 002, 003). Then, configure Ghost Inspector to run these tests sequentially. This is done in the suite settings by setting the concurrency limit to one. This setting not only allows for sequential test execution but also maintains variables between tests and offers the option to abort the suite if any test fails.
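For instance, a sequential suite might be named like this (the test names are illustrative), with the suite’s concurrency limit set to 1 so the tests run top to bottom:

```
001 - Sign up
002 - Log in with new account
003 - Delete account
```

Variables captured in 001, like the unique signup email, carry forward to 002 and 003.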
While it’s generally more efficient to design tests to run independently and in parallel, there are cases where sequential testing is unavoidable. Setting the concurrency to one and passing variables between tests offers a neat solution to manage these scenarios effectively, keeping tests organized and segmented. This approach should be used judiciously, reserved for situations where tests must be executed in a specific order.
Local tunneling options with Ghost Inspector
Local tunneling is important when dealing with sites that are not yet publicly accessible. Ghost Inspector can easily access and test sites reachable through public URLs, like a demo site or a staging site. The challenge arises when you want to test a website running locally on your machine. For example, you might have a simple Next.js site cloned from a repository that you start and run locally. Accessible via localhost, this site runs on your machine as you make changes and test functionality. Unfortunately, if you try to use this localhost URL in Ghost Inspector, it won’t work: localhost is a reference to your own machine, and Ghost Inspector cannot reach it the way it reaches public URLs.
To enable Ghost Inspector to access and test your local site, you need to create a tunnel. A popular solution for this is ngrok, a free tunneling tool that allows you to assign a publicly accessible URL to your local instance. By creating an HTTP tunnel with ngrok on the port your site is running (like port 3000 for a Next.js site), you get URLs that forward to your local site. These URLs make your local environment externally accessible through the tunnel, a useful feature when manually working on your site or testing a branch locally. By generating ngrok URLs, you allow Ghost Inspector to run tests against your local setup. This is also valuable in continuous integration environments like CircleCI or LayerCI, where your app launches inside a container. In such cases, a tunnel is necessary to provide Ghost Inspector with a URL that can access the app inside the container.
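A minimal session might look like the transcript below, assuming a Next.js app on its default port 3000 and ngrok already installed (the forwarding URL is a placeholder; ngrok assigns a random one):

```
$ npm run dev          # start the Next.js app on http://localhost:3000
$ ngrok http 3000      # in a second terminal, open a tunnel to that port
Forwarding   https://abc123.ngrok.io -> http://localhost:3000
```

You would then use the https forwarding URL as the start URL for your Ghost Inspector test.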
However, there are a couple of caveats to consider. First, this approach generally works best with straightforward sites operating through a single port. Complex sites with multiple ports might pose challenges, as traffic could accidentally bypass the tunnel, and if your website is not configured to handle a random domain assigned by ngrok, that could also be problematic. Second, if you’re working within a corporate environment or behind a VPN, using a tool like ngrok may require additional considerations and permissions from your team or system administrator. Keep both in mind when validating that your tunneling solution aligns with your organization’s requirements.
Using Ghost Inspector’s Command Line Interface (CLI) tool
Our Command Line Interface feature is for those who prefer working directly from the command line or need to integrate automated testing into their development workflows. Installing the Ghost Inspector CLI tool is straightforward, especially if you’re familiar with Node.js. Using npm (Node Package Manager), you can install the Ghost Inspector package globally on your system with a single command: npm install -g ghost-inspector. Once installed, the ghost-inspector command becomes available in your command line, unlocking a wide range of functionality.
The CLI tool has extensive capabilities, including:
- Allowing interaction with various elements directly from the command line.
This means you can perform almost any action that the Ghost Inspector API allows, but with the convenience of the command line. Trigger tests, fetch results, and much more.
- Flexibility in handling different options for executing tests.
Specify lists of browsers or screen sizes, turn screenshot comparison on or off, and even pass in custom variables.
- Ngrok tunneling for launching a tunnel directly from the command line to use in testing.
This feature is especially handy for local testing scenarios. To execute a specific test, simply copy its ID from the Ghost Inspector interface and run it directly from the command line. This functionality is particularly useful for setting up cron jobs or integrating testing into continuous integration systems.
Whether it’s running tests locally, on staging environments, or as part of a CI/CD pipeline, the CLI tool adds powerful customization options for testers.
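A typical session might look like the transcript below. The test ID and API key are placeholders, and the exact subcommand and flag names are our assumption here; run ghost-inspector --help after installing to confirm the options available in your version:

```
$ npm install -g ghost-inspector
$ ghost-inspector test execute your-test-id --apiKey your-api-key
```

The command prints the run’s results, which makes it easy to wire into a cron job or CI step that fails the build when the test fails.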
Understanding explicit window targeting
As one of our newest features, explicit window targeting lets users target the specific window or tab that they want to focus on during test steps. Typically, when using a CSS selector, the test runner cycles through every tab; with this feature, users can instead add the “window:” prefix followed by a partial or full tab name or URL. This gives you finer control over your testing targets.
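For example, a step’s window target might look like either of these (the tab title and URL are illustrative):

```
window:Checkout
window:https://checkout.example.com/cart
```

The first matches a tab whose title contains “Checkout”; the second matches by URL.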
While there can be a bit of a learning curve for mastering Ghost Inspector, our codeless testing automation features help provide testing teams with powerful testing options, regardless of coding ability. We hope this guide has given you some insight into our feature capabilities!
If you have any questions or need clarification on any of these topics, we’re happy to help. You can reach out to us 24/7 for assistance.