---
layout: blog
title: Introduction to JavaScript Application testing
published: true
date: 2021-05-07T14:44:57.102Z
tags:
---
An automated testing suite for your application is like a feature requirements document: when the suite passes, the features are delivered correctly as described in the requirements. Automated tests reduce the cost and time of manual testing, and as long as they pass, the features keep working as described. They also help during development itself.

Different language ecosystems offer different sets of tools for writing automated tests. In this article, I am going to describe how various types of tests can be used to cover the most common testing scenarios in web/mobile application development with 3rd party integrations.
## Unit tests
Unit tests are functions that test a single, specific part of the application in small units. A function can be interpreted as such a logical unit. Unit tests verify that the function behaves correctly with different arguments or dependencies. Each test carries a description of the requirement it should satisfy.
```js
describe('Location formatter', () => {
  it('should format location based on a passed primaryLocation', () => {
    const formattedLocation = formatLocation({
      primaryLocation: 'Pennsylvania',
      city: null,
      countryIsoCode: 'US',
    })
    expect(formattedLocation).toBe('Pennsylvania, US')
  })

  it('should format location based on a passed city', () => {
    const formattedLocation = formatLocation({
      primaryLocation: null,
      city: 'Svidník',
      countryIsoCode: 'SK',
    })
    expect(formattedLocation).toBe('Svidník, SK')
  })

  it('should format location based on a passed primaryLocation with city being ignored', () => {
    const formattedLocation = formatLocation({
      primaryLocation: 'Pennsylvania',
      city: 'Košice',
      countryIsoCode: 'US',
    })
    expect(formattedLocation).toBe('Pennsylvania, US')
  })
})
```
These tests can run after any code change, and they consume no resources other than CPU time. They are also well suited for practicing test-driven development, as they execute quickly and give the developer immediate feedback.

If the tested unit depends on an external resource such as time, a browser setting, an API response, or another service, mocking can be used to substitute that resource for the unit. Mocking is especially useful in combination with a technique called dependency injection.

The effect of the unit can then be asserted on these mocks:
```js
const service = {
  fetch: jest.fn(() => responseFixture),
}

// Using dependency injection
const result = fetchAndParseResponse(service, 'path')
// -- snip -- expecting correct result

// It's expected that `fetchAndParseResponse` will call `fetch` only once
expect(service.fetch).toHaveBeenCalledTimes(1)
```
Unit tests can also make use of snapshots.
### Tools that help with writing unit tests

- Jest testing framework
- mocha test runner - Usually paired with an assertion library of choice
- Chai assertion library
- jest-date-mock - For testing functionality that depends on the `Date` object
### Additional resources for unit tests
- Jest mocking guide
- How to Start Unit Testing Your JavaScript Code by Ondrej Polesny @ FreeCodeCamp
- The importance of test driven development by Godswill Okwara @ medium.com
## Snapshots

Snapshot matchers save the input object as an artifact and use that artifact to compare against the output of the next test run.
```js
it('should create a JSON response', () => {
  const feed = createFeedResponse();
  expect(JSON.parse(feed)).toMatchSnapshot('parsed feed');
});
```
On subsequent test runs, Jest will compare the output with the previous snapshot. If they match, the test will pass. If they don't match, either the test runner found a bug in your code that should be fixed, or the implementation has changed and the snapshot needs to be updated.
These snapshots help detect changes in the output of some functionality. They can also be used with a component renderer to detect changes in a component's HTML, DOM, or virtual DOM output. Snapshots are created and updated automatically by the test runner, which lowers the churn of updating tests over time.
Snapshot artifacts should be committed into the code repository alongside the tests, so they can be used as an input for the next test runs. Snapshots can be utilized for many types of tests.
If the expected result includes varying data, property matchers can be used to verify just the type of that data instead.
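For example, Jest's property matchers let fields such as IDs and timestamps be checked only for their type, while the rest of the object is snapshotted as usual (`createClient` is a hypothetical factory here):

```js
it('should create a client', () => {
  const client = createClient() // hypothetical factory under test

  // `id` and `createdAt` differ on every run, so only their types are asserted;
  // all remaining properties are stored in the snapshot verbatim
  expect(client).toMatchSnapshot({
    id: expect.any(Number),
    createdAt: expect.any(Date),
  })
})
```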
### Additional resources on snapshots

## Integration testing
Integration tests are meant to validate the behavior of the individual components and their interaction with different system modules.
Take an API resolver as an example:

- The resolver is given different parameters
- It fetches some data from a database
- It performs an action based on the data
- It returns a response to the client
An integration test should test and validate the functionality of this resolver. It should simulate a client request and assert desired response output. It should also verify the actions that the resolver is making.
```js
// Mock the e-mail service, we don't want to send real emails
// (jest.mock calls are hoisted above imports by the test runner)
import email from '../email.js'
jest.mock('../email.js')

it('should send activation email', async () => {
  const client = await Client.query().findOne({
    firstName: 'testing',
    lastName: 'user',
  })
  const sendActivationEmail = await apolloClient.mutate({
    mutation: SEND_ACTIVATION_EMAIL_MUTATION,
    variables: { clientId: client.id },
  })

  expect(sendActivationEmail.errors).not.toBeDefined()

  // One email should be sent
  expect(email.send).toHaveBeenCalledTimes(1)
  // Verify that the email was sent with correct parameters
  expect(email.send).toHaveBeenCalledWith({
    templateId: EMAIL_TEMPLATE_IDS.CLIENT_EMAIL_ACTIVATION,
    to: client.contactEmail,
    data: {
      client_first_name: client.firstName,
    },
  })
})
```
It might be useful to mock some dependencies, such as a database or a 3rd party API, to ensure that these tests can run repeatedly in a sandboxed environment.
Sometimes it is desirable to have a database set up just for integration tests, to avoid mocking; it also lets the tests validate real queries instead of mocked ones. In this scenario, it is very useful to seed the database. The database can then be truncated and re-seeded between tests, ensuring predictable and stable test results.
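A minimal sketch of such a reset between tests, assuming a hypothetical `testDatabase` module that exposes a knex-style `db` connection and a `seed` function (table names and the PostgreSQL `TRUNCATE` syntax are illustrative):

```js
const { db, seed } = require('./testDatabase') // hypothetical helper module

beforeEach(async () => {
  // Wipe the tables and re-apply the seed data so every test
  // starts from the same predictable state
  await db.raw('TRUNCATE TABLE clients, emails RESTART IDENTITY CASCADE')
  await seed(db)
})

afterAll(async () => {
  // Close the connection pool so the test runner can exit cleanly
  await db.destroy()
})
```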
Integration tests can run with the same test runner as the unit tests, but with a different configuration, which might include a setup and a teardown process. When the tests share a database, they can fail because the same data is modified by multiple tests at once, so they need to run in serial mode.
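As a sketch, a dedicated Jest configuration for integration tests might look like this (file names are illustrative):

```js
// jest.integration.config.js
module.exports = {
  // Pick up integration tests only
  testMatch: ['**/*.integration.test.js'],
  // One-time setup/teardown around the whole run, e.g. starting the database
  globalSetup: './test/globalSetup.js',
  globalTeardown: './test/globalTeardown.js',
  // Run test files serially so they don't modify the same rows concurrently
  maxWorkers: 1,
}
```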
Keep in mind that you should avoid testing 3rd party libraries or code that is not part of the application logic. This code is usually tested by the authors of those libraries.
### Tools that help with writing integration tests
- casual or faker - Fake data generators
- Utilities for testing Apollo Server
- supertest - HTTP request assertion library
- jest-fetch-mock - Request mocking
- SinonJS - Standalone test spies, stubs and mocks
### Additional resources for integration tests
- What is API testing by Archana Choudary @ edureka.com
- GraphQL integration tests with apollo-server-testing, jest-mongodb and nock by Julien Deflaux @ medium.com
## End to End (e2e) testing

e2e tests should guarantee that our users can browse and use the application. Every part of the application, whether front-end, back-end, or a 3rd party integration, gets exercised. We verify the behavior of the application in response to events triggered by the user or by any other side effect. The goal is to test the application from the user's perspective.
### Element selectors
There are different frameworks and approaches for creating DOM elements, and these elements can have different representations of their state in the HTML. Developers should agree on the practice of selecting these elements in tests that suits them best.

You should target elements in a way that keeps the results consistent even after the layout changes that will happen throughout the lifetime of the project. Therefore I'd say that XPath is not the recommended way.

To get selectors that are reliable and unaffected by layout changes, we can use `id` attributes or even more sophisticated `data-*` attributes. With `data-*` attributes we can target multiple nodes with the same identifier and establish a convention that makes it clear these attributes are used by tests, so if they are changed or deleted, the tests have to change as well.
```html
<button data-test-id="close-modal-button">Close</button>
```
You don't have to mark every targeted element with an attribute; you can use CSS selectors as well. Just ensure that stable test cases won't have to change because of new features that are not relevant to the behavior already under test.
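A common convenience is a small custom Cypress command that wraps the `data-test-id` lookup; a sketch could look like this:

```js
// cypress/support/commands.js
Cypress.Commands.add('getByTestId', (testId) => {
  // Centralizes the selector convention in one place
  return cy.get(`[data-test-id="${testId}"]`)
})

// Usage in a test:
// cy.getByTestId('close-modal-button').click()
```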
### Form validation

Forms should be tested for evaluating validation correctly: the errors should appear when validation fails. It also makes for good user experience testing when the tests require validation errors to be displayed only if the fields were previously touched or the form was submitted.
```js
it('should display validation errors', () => {
  // Form is submitted with `enter`
  cy.get('[data-test-id="first-name-input"]').type('{enter}')

  // All validation errors should be visible
  cy.get('[data-test-id="first-name-hint"]')
    .should('exist')
    .and('contain', 'First Name is a required field')
  cy.get('[data-test-id="last-name-hint"]')
    .should('exist')
    .and('contain', 'Last Name is a required field')
  cy.get('[data-test-id="email-hint"]')
    .should('exist')
    .and('contain', 'Email is a required field')
  cy.get('[data-test-id="password-hint"]')
    .should('exist')
    .and('contain', 'Password is a required field')
  cy.get('[data-test-id="terms-accepted-hint"]')
    .should('exist')
    .and('contain', 'You have to accept the terms of use')

  // Email field validation
  cy.get('[data-test-id="email-input"]').type('invalid{enter}')
  cy.get('[data-test-id="email-hint"]')
    .should('exist')
    .and('contain', 'Invalid email')
})
```
Form submission should be tested as well, to verify that the back-end works correctly. Correct error messages should be shown to the user when the submission fails, for example when logging in with a wrong password:
```js
it('should toast wrong password', () => {
  cy.get('[data-test-id="email-input"]').type('user@login.test')
  cy.get('[data-test-id="password-input"]').type('wrongpassword{enter}')
  // -- snip -- wait for submission
  cy.get('[data-test-id="error-message"]')
    .should('exist')
    .and('contain', 'The password is invalid or the user does not have a password.')
})
```
While submission is in progress, the loading state of the form can be asserted, for example by expecting disabled input fields. There should be a way to wait for asynchronous operations to complete, so we can reliably test the loading behavior. Cypress has an intercept method that lets us alias the request, make as many assertions about the loading state as we need while it is in flight, and then wait for it to finish with the wait method.
```js
cy.intercept('**/submit').as('submission')
// -- snip -- fill the form
cy.get('[data-test-id="submit-button"]').click()

// Expecting loading state while the request is in flight
cy.get('[data-test-id="submit-button"]').should('have.class', 'loading').and('be.disabled')

// Wait until the aliased request completes before asserting the post-submission state
cy.wait('@submission')
```
### Authentication

Don't log in through the UI in every test that needs an authenticated user. Test the login flow as a proper test case, but only for that sake. For tests that require a logged-in user, create a separate log-in helper that authenticates the user in the background without any browser interaction. This will speed up the testing process tremendously. Read the Cypress documentation on login for more examples.
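A sketch of such a background login as a Cypress custom command; the endpoint path, payload shape, and token storage are assumptions to adjust to your application:

```js
Cypress.Commands.add('login', (email, password) => {
  // Hit the login endpoint directly instead of driving the UI
  cy.request('POST', '/api/login', { email, password })
    .its('body')
    .then((body) => {
      // Persist the session the same way the application does
      window.localStorage.setItem('authToken', body.token)
    })
})

// Usage in a test:
// cy.login('user@login.test', 'correct-password')
```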
### Dedicated API routes

Sometimes it's beneficial to implement a dedicated testing API route/resolver that helps seed individual tests or perform a clean-up after them. It can also be used to simulate the result of a 3rd party integration without accessing 3rd party resources. Make sure that these routes can only be reached in a non-production environment.
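A sketch with Express; the route path and the `truncateAndSeed` helper are hypothetical:

```js
const express = require('express')
const { truncateAndSeed } = require('./testSeeds') // hypothetical seeding helper

const app = express()
app.use(express.json())

// Register test-only routes outside production environments only
if (process.env.NODE_ENV !== 'production') {
  app.post('/test/seed', async (req, res) => {
    // Reset the database to a named scenario before an e2e test runs
    await truncateAndSeed(req.body.scenario)
    res.sendStatus(204)
  })
}
```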
### Screenshot matching

Many plugins that enable testing for visual regressions can be integrated into the testing environment. Screenshot snapshot matching allows you to:
- Ensure that the layout is not broken at different window sizes
- Check visual changes of every component as part of a review
- Prevent accidental visual regression in components
Screenshots can be taken of the whole viewport or of individual components. Be sure not to include any variable data (dates, IDs, etc.) in the screenshots, as it will easily cause a mismatch.
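A sketch pairing Puppeteer with jest-image-snapshot; the URL and viewport are assumptions:

```js
const puppeteer = require('puppeteer')
const { toMatchImageSnapshot } = require('jest-image-snapshot')

expect.extend({ toMatchImageSnapshot })

it('should match the home page screenshot', async () => {
  const browser = await puppeteer.launch()
  const page = await browser.newPage()
  // A fixed viewport keeps screenshots stable across machines
  await page.setViewport({ width: 1280, height: 720 })
  await page.goto('http://localhost:3000') // assumed local dev server
  const screenshot = await page.screenshot()
  await browser.close()

  // Fails if the image differs from the stored snapshot artifact
  expect(screenshot).toMatchImageSnapshot()
})
```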
### Tools that help with writing e2e tests

- Cypress - Next generation front-end testing framework
- Playwright - A library to automate Chromium, Firefox and WebKit with a single API
- Puppeteer - High-level API to control Chromium
- jest-image-snapshot - Jest matcher for image comparisons
- Percy - Visual review platform for visual regression tests
### Additional resources for e2e tests
- Cypress testing strategies
- BrowserStack e2e testing guide by Shreya Bose
## Component tests

Component tests take individual components and test them as a unit. They can run in the browser or in a DOM simulator such as jsdom.

These tests, just like unit tests, should ensure that the expected output of the component is rendered based on the component's input. They should also test the behavior of the component when an internal state change happens: for example, when you open an accordion, you should expect its children to appear in the output. You can also expect that passed callbacks are called according to event triggers, as in the sketch below.
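A sketch of such a test with React Testing Library, assuming a hypothetical `Accordion` component:

```jsx
import React from 'react'
import { render, screen, fireEvent } from '@testing-library/react'
import Accordion from './Accordion' // hypothetical component under test

it('should reveal its children when opened', () => {
  render(
    <Accordion title="Details">
      <p>Hidden content</p>
    </Accordion>
  )

  // The content is not rendered until the accordion is opened
  expect(screen.queryByText('Hidden content')).toBeNull()

  fireEvent.click(screen.getByText('Details'))

  // After the click, the children are part of the output
  expect(screen.getByText('Hidden content')).toBeTruthy()
})
```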
Most often, these tests rely on snapshots, which reveal changes in the components' output and save a tremendous amount of time and pain in maintaining the tests.
### Tools and resources for component testing

## Code coverage
While being an important metric, code coverage can also be misleading. It is a measure that indicates how much of the source code is executed while testing, and it can identify branches of code that haven't been validated. Having 100% coverage does not mean that you have successfully tested your application, and high coverage does not guarantee bug-free code. Therefore, it's important to test all possible scenarios and validate the code against the requirements.
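Coverage collection is usually a configuration switch in the test runner. With Jest, a sketch might look like this (the thresholds are illustrative):

```js
// jest.config.js
module.exports = {
  collectCoverage: true,
  // Measure application code only, not tests or build output
  collectCoverageFrom: ['src/**/*.js'],
  // Fail the run if coverage drops below these thresholds
  coverageThreshold: {
    global: {
      branches: 80,
      functions: 80,
      lines: 80,
    },
  },
}
```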