Improving our UI testing at SingleStore

Joanna Lew

Software Engineer

In this blog post, Joanna Lew (Software Engineer at SingleStore) shares our experience adding Cypress and React Testing Library UI tests to our web platform. We discuss the process of going from no automated UI tests to test coverage for all main features, and the importance of a healthy team culture around testing.

As a SingleStore Software Engineer, I work on our Singlestore Helios UI, which is where you can create, access, resize, destroy, and monitor SingleStore clusters in the Cloud. This UI is called "Customer Portal" and its front-end is built with React and TypeScript. During this project's early development stages, we didn't have any UI tests. However, in the last couple of years, we have added React Testing Library and Cypress for integration tests and end-to-end tests, respectively.

When I started working on Portal, the product had been around for about a year, and the only tests we had were unit tests. Although we didn’t have any integration or end-to-end tests, static typing through TypeScript and linting through ESLint were enough to give us confidence that our code probably didn’t have bugs.

Portal was still a relatively small app at the time, and in our pull requests, we'd mention what we did to manually test the feature, and maybe add a screenshot. (As a side note, our backend and API had much better tests, so our app wasn’t without tests as a whole.)

Pull request with manual test plan

The only tests we had on the front-end were small unit tests for utility functions. This was fine for us because in the early stages of development we were still trying to validate our product, so we prioritized building new features over setting up tests.

The plan was to invest properly in testing once we were more confident in the product.

Setting up end-to-end testing

As the team grew and we added more features, end-to-end testing became a higher priority.

On the one hand, the product became more complex, which meant more potential for bugs (a larger surface area). On the other hand, as the team grew, individual engineers knew less about every area of the system, which made it more likely that we would introduce bugs. Moreover, manually testing new features was starting to become a bottleneck in the development lifecycle, and the engineering team pushed for a better testing story.

With that in mind, we dedicated some time to setting up an infrastructure for end-to-end tests[1], which mimics our production environment and can be spun up in our CI system.

Then, we added our first end-to-end test with Cypress. The test itself was rather small, but with the infrastructure set up, engineers could write their own end-to-end tests for larger features.

We also set up our CI system to automatically run the tests, notify authors of failing commits, and block deployments of those commits.

Since then, we’ve added end-to-end tests for testing major features on Portal, such as ensuring that users are able to create clusters, invite other users to their organization, add payment methods, and more.

Running a test in Cypress locally
/* Test Plan:
 * - Submit "Request POC" form
 * - Verify that the POC message looks right
 * - Submit and wait for successful contact sales submission
 */
it("can go through full request poc flow", function () {
    cy.server();
    cy.route("post", "**/portal-contact-sales*", {}).as("sales");

    const customer = getRandomCredentials();
    const clusterInfo = getClusterDetails();
    const { clusterName } = clusterInfo;

    cy.newCustomerSession(customer);

    cy.visit("/organizations/org-id/clusters");
    cy.contains("Create Cluster").click();

    cy.contains("Create a Database Cluster");
    cy.get('input[name="name"]').clear().type(clusterName);
    cy.contains("Proof of Concept").click();

    cy.get('textarea[name="pocMessage"]').type(
        "I want to use SingleStore!"
    );

    cy.contains("Next").click();

    cy.wait("@sales");
    cy.contains("We’ll get back to you as soon as possible");
    cy.findByTestId("close-modal-btn").click();

    cy.toggleSidebarUserSection(customer);
    cy.logoutCustomer();
});

We were, however, missing coverage on smaller features, such as ensuring that we would display the cluster size correctly or that a button was disabled if the cluster wasn’t in an “active” state.

Adding integration tests

It seemed unnecessary to actually spin up an entire environment to check whether a button should be disabled in a specific circumstance.

Inspired by Kent C. Dodds, we decided to follow his guidelines on how to break up our tests so that most of them are integration tests. We ended up using React Testing Library for these; it fits our use case well because it encourages good testing practices, runs through Jest (so tests are automatically parallelized), and has great TypeScript support.

We use React Testing Library to test smaller interactions in the frontend. For example, in our create cluster form, people can choose to create a free Trial cluster or a paid On-Demand cluster. If the Trial cluster option is selected, we want to show a card telling the user that free Trial credits are being applied to their cluster. If the On-Demand option is selected, we don’t want to show that card, since Trial credits aren’t being applied to their cluster.
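To make that behavior concrete, here is a rough sketch of the kind of conditional rendering these tests exercise; the component, prop names, and copy below are hypothetical simplifications, not our actual code.

import * as React from "react";

type ClusterPlan = "trial" | "on-demand";

// Hypothetical, simplified version of the plan picker in our create cluster form.
export function PlanPicker() {
    const [plan, setPlan] = React.useState<ClusterPlan>("trial");

    return (
        <div>
            <label>
                <input
                    type="radio"
                    name="plan"
                    checked={plan === "trial"}
                    onChange={() => setPlan("trial")}
                />
                Free Trial
            </label>
            <label>
                <input
                    type="radio"
                    name="plan"
                    checked={plan === "on-demand"}
                    onChange={() => setPlan("on-demand")}
                />
                On-Demand
            </label>

            {/* The credits card only appears for Trial clusters. */}
            {plan === "trial" && (
                <p>Free Trial credits are being applied to this cluster.</p>
            )}
        </div>
    );
}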

This type of test gives us a lot of safety even with a mocked backend. Of course, on the backend side, we have API tests and all of our API changes are backwards-compatible (GraphQL makes this easier).

The interaction of choosing Free Trial or On-Demand in our Create Cluster form
test("Check that Credits Remaining Card appears for Trial clusters", async () =>{window.history.pushState({} ,
            "",
            `/organizations/${orgWithTrialSub.orgID} /clusters/create`
        );

        render(<PortalApp />);

        // Wait for the create cluster page to load
        await screen.findByText("Create a Database Cluster");

        // Check that the Trial cluster option is selected
        expect(
            screen.getByLabelText("Free Trial Credits",{exact: false} )
        ).toBeChecked();
        expect(
            screen.getByLabelText("Free Trial Credits",{exact: false} )
        ).toHaveAttribute("value", "CloudTrialV1");

        // Credits remaining card should appear, wait for the billing query
                await screen.findByText("Estimated time at selected size");

        // Click edit size, wait for flyout to open
        userEvent.click(screen.getByText("Edit"));
        await screen.findByRole("button",{name: "Select Size"} );

        // Credits remaining card should appear both in the flyout and in the form
        expect(
            screen.getAllByText("Estimated time at selected size")
        ).toHaveLength(2);} );

Integration tests with React Testing Library are great because we can check that the smaller parts of the UI work and behave as we’d expect.

The tests also run much faster than our Cypress tests: about a minute for all of our current unit and integration tests, as opposed to 12 minutes for all of our end-to-end tests. This speed allows us to build "matrix" tests that iterate through every combination of possible scenarios.
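As an illustration, a matrix test built with Jest’s test.each might look roughly like the sketch below; the scenarios, expectations, and the renderAppWithClusterState helper are hypothetical, and the toBeEnabled/toBeDisabled matchers come from @testing-library/jest-dom.

import { screen } from "@testing-library/react";

// Hypothetical matrix of cluster states and the expected behavior of the
// "Resize" button. renderAppWithClusterState is a made-up helper that seeds
// the mocked API with a cluster in the given state and renders the app.
const scenarios = [
    { clusterState: "Active", resizeEnabled: true },
    { clusterState: "Suspended", resizeEnabled: false },
    { clusterState: "Terminating", resizeEnabled: false },
];

test.each(scenarios)(
    "Resize button when cluster is $clusterState",
    async ({ clusterState, resizeEnabled }) => {
        await renderAppWithClusterState(clusterState);

        const resizeButton = await screen.findByRole("button", { name: "Resize" });

        if (resizeEnabled) {
            expect(resizeButton).toBeEnabled();
        } else {
            expect(resizeButton).toBeDisabled();
        }
    }
);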

Our integration tests usually render the entire app, as shown in the example above where we have render(). Although it’s possible to only render and test a component, we generally favor rendering the entire app and navigating to the page we’re testing instead, because we can ensure we’re actually testing the whole app, including routing, any global state, etc. Our tests are not tied to the implementation, but rather check that we are properly displaying the information we expect the user to see. Theoretically, we could refactor our entire app, update all our existing components, and still have our tests pass.

We mock all backend and third-party requests with Mock Service Worker. Mocking requests allows us to focus on testing the frontend, since the service will never be down, and we can easily simulate errors and edge cases that would normally be difficult with end-to-end tests. For example, we can write tests for API failure scenarios as well as really slow responses from the API to verify our loading states. However, we can also end up mocking something incorrectly, such as accepting invalid query parameters and returning “everything is alright” instead of an error. This may not reflect the actual behavior of our application, and so, it is important to check that our mocks are as accurate as possible.
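For example, a sketch of what overriding handlers for these failure and slow-response cases could look like with Mock Service Worker’s v1 rest API; the endpoint, payloads, and assertions are hypothetical, not our actual mocks.

import { rest } from "msw";
import { setupServer } from "msw/node";

// In a real setup the server is created with default "happy path" handlers;
// the handlers below are hypothetical per-test overrides.
const server = setupServer();

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

test("shows an error state when the clusters query fails", async () => {
    server.use(
        rest.get("*/api/clusters", (req, res, ctx) =>
            res(ctx.status(500), ctx.json({ error: "internal error" }))
        )
    );
    // ...render the app and assert that an error banner is displayed
});

test("shows a loading state while the clusters query is slow", async () => {
    server.use(
        rest.get("*/api/clusters", (req, res, ctx) =>
            res(ctx.delay(3000), ctx.json({ clusters: [] }))
        )
    );
    // ...render the app and assert that the loading state is visible
});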

Another shortcoming is that jsdom (which React Testing Library uses to simulate the browser environment) doesn’t apply any of the styling we do with CSS. jsdom implements the DOM as plain JavaScript objects and performs no layout. This means that if a button is mistakenly covered up by other elements on the page through absolute positioning or z-index, our integration tests won’t notice and will still consider the button adequately rendered on the page and clickable.

In cases where we need to test this kind of thing, we fall back to Cypress, which runs our app in a real (headless) browser, so the UI renders just as it would for a user in Google Chrome.

Since adopting React Testing Library, we've added integration tests for most features we've built, as well as developed a culture of adding tests that reproduce most bugs that we fix.

Pull request after we had integration and end-to-end tests

Building a culture of testing

Developing an engineering culture where tests are valued is a slow and continual process, and I think it starts with addressing difficulties in testing.

Testing should be as easy as possible. Nobody wants to spend an hour adding tests for a tiny change that took a minute. Similarly, nobody likes seeing tests fail, and then spending an hour debugging why.

To try to mitigate those feelings of frustration, we held an internal workshop on how to add tests and how to debug them. We have an internal wiki page with a recording of the workshop, a written step-by-step guide, and some tips and tricks for debugging.

We also get together on calls to figure out why a test is failing and debug it together, because sometimes another set of eyes is just what’s needed. As we add more tests, testing becomes easier, since there are more examples to draw from.

After everyone on the team started feeling more comfortable with tests, we began addressing testing directly in code reviews. If someone changes an existing feature that doesn’t have a test, we ask whether some tests can be added. If the existing feature is very large and the current change is small, we might only add tests for the part we changed. We also updated our tests to run for each pull request, making it easier to spot any issues in code review.

Although we don’t have perfect test coverage, the tests we add ensure that the bug we’re addressing won’t reappear, or that the feature we’re building works as expected. Adding tests incrementally helps us gradually increase coverage without our pull requests feeling bogged down by testing. Ultimately, it’s a balance between ensuring features are well tested where possible and understanding that not every bug that slips through our tests will be a show-stopper.

We're also continuously looking at other testing tools such as TestCafe. However, as a team, what's perhaps more important is developing a healthy testing culture.

This takes a long time and requires effort from everyone to think about how to better structure our tests or how many and what tests to add for new functionality. At the end of the day, the goal of testing is increasing our confidence in the quality of our product.

With proper testing, no one should fear deploying new versions of the UI, and that’s what we strive for.

If you’re an engineer interested in helping us improve our tests, or if you’re passionate about delivering an application with great user experience, join us here at Singlestore. We’re hiring!

[1]: Don't underestimate the complexity of setting up an end-to-end testing infrastructure. If possible, you should build it in a way that reuses the production environment infrastructure configuration, so that these are always in sync. Another aspect to keep in mind is the seeding of your data for tests. Ideally, tests are seeded with a healthy amount of data.

