An automated testing suite for your application is like a feature requirements document.
When the suite passes, it means the features behave as described in those requirements.
Automated tests reduce the cost and time of manual testing, and as long as they keep passing, the features keep working as expected.
They are also a valuable aid during development itself.
Different language ecosystems offer different sets of tools for developing automated tests.
In this article, I am going to describe how various types of tests can be used to cover the most common testing scenarios for web/mobile application development with 3rd party integrations.
- [Unit tests](#unit-tests)
  - [Tools that help writing unit tests](#tools-that-help-writing-unit-tests)
  - [Additional resources for unit tests](#additional-resources-for-unit-tests)
- [Snapshots](#snapshots)
  - [Additional resources on snapshots](#additional-resources-on-snapshots)
- [Integration testing](#integration-testing)
  - [Tools that help writing integration tests](#tools-that-help-writing-integration-tests)
  - [Additional resources for integration tests](#additional-resources-for-integration-tests)
- [End to End (e2e) testing](#end-to-end-e2e-testing)
  - [Element selectors](#element-selectors)
  - [Form validation](#form-validation)
  - [Authentication](#authentication)
  - [Dedicated API routes](#dedicated-api-routes)
  - [Screenshot matching](#screenshot-matching)
  - [Tools that help writing e2e tests](#tools-that-help-writing-e2e-tests)
  - [Additional resources for e2e tests](#additional-resources-for-e2e-tests)
- [Component tests](#component-tests)
  - [Tools and resources for component testing](#tools-and-resources-for-component-testing)
- [Code coverage](#code-coverage)
  - [Additional resources for code coverage](#additional-resources-for-code-coverage)
## Unit tests
Unit tests verify a single, specific part of the application in isolation.
A _function_ can be interpreted as such a logical unit.
Unit tests check that this _function_ behaves correctly for different arguments and dependencies.
A test consists of a description of the requirement that **should** be satisfied.
These tests can run after any code change, and they consume no resources other than _CPU_.
They are also good for practicing _test-driven development_ as they are fast to execute and give immediate feedback to the developer.
If the tested unit depends on an external resource such as _time_, a browser setting, an API response, or another service, _mocking_ can be used to substitute that resource for the unit.
_Mocking_ is especially useful together with a technique called [_dependency injection_](https://en.wikipedia.org/wiki/Dependency_injection).
The effect of the unit can then be asserted on these _mocks_:
```javascript
// A fixture standing in for a real API response
const responseFixture = { status: 'ok' }

// Mock the service dependency with Jest
const service = {
  fetch: jest.fn(() => responseFixture)
}

// Using dependency injection
const result = fetchAndParseResponse(service, 'path')
// -- snip expecting correct result

// It's expected that `fetchAndParseResponse` will call `fetch` only once
expect(service.fetch).toHaveBeenCalledTimes(1)
```
Unit tests can also make use of [snapshots](#snapshots).
### Tools that help writing unit tests
- [Jest testing framework](https://jestjs.io/)
- [mocha test runner](https://mochajs.org/) - Usually paired with an assertion library of choice
### Additional resources for unit tests
- [How to Start Unit Testing Your JavaScript Code](https://www.freecodecamp.org/news/how-to-start-unit-testing-javascript/) by [Ondrej Polesny @ FreeCodeCamp](https://www.freecodecamp.org/news/author/ondrej/)
- [The importance of test driven development](https://medium.com/@gondy/the-importance-of-test-driven-development-f80b0d02edd8) by [Godswill Okwara @ medium.com](https://medium.com/@gondy)
## Snapshots
Snapshot matchers save the input object as an artifact on the first run and compare subsequent runs against that artifact.
> On subsequent test runs, Jest will compare the output with the previous snapshot. If they match, the test will pass.
> If they don't match, either the test runner found a bug in your code that should be fixed, or the implementation has changed and the snapshot needs to be updated.
These snapshots help detect changes in the output of some functionality.
They can also be used with a component renderer to detect changes in a component's _HTML_, _DOM_, or virtual _DOM_ output.
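In Jest, a snapshot assertion is a single matcher call. As a sketch, with a hypothetical `formatUser` function as the unit under test:

```javascript
// Hypothetical unit under test
const formatUser = (user) => ({
  displayName: `${user.firstName} ${user.lastName}`,
  initials: `${user.firstName[0]}${user.lastName[0]}`,
})

it('formats the user consistently', () => {
  const output = formatUser({ firstName: 'Ada', lastName: 'Lovelace' })
  // On the first run Jest stores the output as an artifact;
  // later runs compare against the stored snapshot
  expect(output).toMatchSnapshot()
})
```

If the output changes intentionally, the stored snapshot can be refreshed with `jest --updateSnapshot`.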
## Integration testing
Integration tests verify that multiple units work together correctly.
It might be useful to **mock** some **dependencies** such as a database or a 3rd party API to ensure that these tests can run repeatedly in a sandboxed environment.
Sometimes it is desirable to have a database that is set up just for integration tests, to avoid mocking.
It can also be used for validating integrations between units.
In this scenario, it is very useful to _seed_ the database; it can then be truncated and re-seeded between tests, ensuring predictable and stable test results.
Integration tests can be run with the same test runner as the [unit tests](#unit-tests), but with a different configuration.
Configuration might include a [setup](https://jestjs.io/docs/configuration#globalsetup-string) and a [teardown](https://jestjs.io/docs/configuration#globalteardown-string) process.
When running tests against a shared database, tests can fail because the same data dependency is being modified by multiple tests at once; such tests therefore need to run in [serial mode](https://jestjs.io/docs/cli#--runinband).
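The truncate-and-seed cycle described above might look like this in Jest; the `db` helper and the fixture file are assumptions, stand-ins for whatever database client the project uses:

```javascript
// Hypothetical helper exposing truncate/seed/disconnect operations
const db = require('./testDb')
const seedData = require('./fixtures/users.json')

// Re-seed before every test so each test starts
// from the same predictable state
beforeEach(async () => {
  await db.truncate()
  await db.seed(seedData)
})

// Close the connection once the whole file has finished
afterAll(async () => {
  await db.disconnect()
})
```

Combined with `--runInBand`, this keeps every test isolated from writes made by its neighbors.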
Keep in mind that you should **avoid testing** 3rd party libraries or **code that is not part** of the application logic.
This code is usually tested by the authors of those libraries.
### Tools that help writing integration tests
- [casual](https://www.npmjs.com/package/casual) or [faker](https://www.npmjs.com/package/faker) - Fake data generators
- [Utilities for testing Apollo Server](https://www.apollographql.com/docs/apollo-server/testing/testing/)
### Additional resources for integration tests
- [What is API testing](https://www.edureka.co/blog/what-is-api-testing) by [Archana Choudary @ edureka.com](https://www.edureka.co/blog/author/archana-cedureka-co/)
- [GraphQL integration tests with apollo-server-testing, jest-mongodb and nock](https://medium.com/@jdeflaux/graphql-integration-tests-with-apollo-server-testing-jest-mongodb-and-nock-af5a82e95954) by [Julien Deflaux @ medium.com](https://medium.com/@jdeflaux)
## End to End (e2e) testing
e2e tests should guarantee that our **users can browse and use the application**.
Every part of the application, whether it is a front-end, back-end, or a 3rd party integration, will be tested.
We verify the behavior of the application in response to events triggered by the user or by other side effects.
The goal is to test the application from the user's perspective.
### Element selectors
There are different frameworks and approaches for creating _DOM_ elements.
These elements can have different representations of their state in the _HTML_.
Developers should agree on a practice for selecting these elements in tests that suits the team best.
You should target elements in a way that keeps the **results consistent** across the layout changes that will happen throughout the lifetime of the project.
For that reason, [XPath](https://developer.mozilla.org/en-US/docs/Web/XPath) is not a recommended approach.
To get selectors that are reliable and unaffected by layout changes, we can use `id` attributes or, even better, dedicated `data-*` attributes.
With `data-*` attributes, we can target multiple nodes with the same identifier and establish a convention that elements carrying these attributes are used in tests, so if they are changed or deleted, the tests have to change as well.
You don't have to mark every targeted element with an `id`; you can use _CSS_ selectors as well.
Ensure that stable test cases will not have to change because of new features that are irrelevant to the behavior already under test.
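As a sketch of this convention in a Cypress test, using a hypothetical `data-cy` attribute (the attribute name, route, and texts are assumptions):

```javascript
// In the application markup:
// <button data-cy="submit-order">Order</button>

describe('Checkout', () => {
  it('submits the order', () => {
    cy.visit('/checkout')
    // Selecting by the dedicated attribute survives layout
    // and styling changes that would break positional selectors
    cy.get('[data-cy="submit-order"]').click()
    cy.contains('Thank you for your order').should('be.visible')
  })
})
```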
### Form validation
Forms should be tested for evaluating validation correctly.
Errors should appear when validation fails.
It also makes for good user-experience testing when the tests require validation errors to be displayed only after the fields have been touched or the form has been submitted.
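A sketch of such a check in Cypress, assuming a hypothetical sign-up route and `data-cy` markers on the field and its error message:

```javascript
describe('Sign-up form validation', () => {
  it('shows an error only after the field has been touched', () => {
    cy.visit('/sign-up')
    // No error should be visible before any interaction
    cy.get('[data-cy="email-error"]').should('not.exist')
    // Touch the field with an invalid value, then leave it
    cy.get('[data-cy="email"]').type('not-an-email').blur()
    // Only now should the validation error appear
    cy.get('[data-cy="email-error"]').should('be.visible')
  })
})
```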
_Cypress_ has an [intercept method](https://docs.cypress.io/api/commands/intercept.html) that lets us **observe or stub a request** and then run as many assertions as needed **once the response has completed**, using the [wait method](https://docs.cypress.io/api/commands/wait.html).
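For example, aliasing a submit request and asserting on its response; the endpoint and selectors are hypothetical:

```javascript
it('submits the form and checks the server response', () => {
  // Alias the request so we can wait for it later
  cy.intercept('POST', '/api/sign-up').as('signUp')
  cy.visit('/sign-up')
  cy.get('[data-cy="email"]').type('user@example.com')
  cy.get('[data-cy="submit"]').click()
  // Assertions run once the aliased request has completed
  cy.wait('@signUp').its('response.statusCode').should('eq', 200)
})
```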
### Authentication
For tests that require a user to be logged in, create a separate log-in helper that authenticates the user in the background without any browser interaction. This speeds up the testing process tremendously.
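One common way to do this in Cypress is a custom command that logs in through the API instead of the UI; the endpoint, payload, and token storage below are assumptions about the application:

```javascript
// Hypothetical background login: authenticate via cy.request
// so no browser interaction with the login form is needed
Cypress.Commands.add('login', (email, password) => {
  cy.request('POST', '/api/login', { email, password })
    .its('body.token')
    .then((token) => {
      // Persist the session the same way the application would
      window.localStorage.setItem('authToken', token)
    })
})

// Usage inside a test:
// cy.login('user@example.com', 'secret')
// cy.visit('/dashboard')
```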
### Dedicated API routes
Sometimes it's beneficial to implement a dedicated testing API route/resolver that helps seed individual tests or perform a clean-up after them.
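A minimal sketch of such a route in Express, guarded by an environment check so it never ships to production; the `seedDatabase` helper and the route path are assumptions:

```javascript
const express = require('express')

const app = express()
app.use(express.json())

// Only expose the seeding endpoint in the test environment
if (process.env.NODE_ENV === 'test') {
  app.post('/test/seed', async (req, res) => {
    // Hypothetical helper that resets and seeds the database
    await seedDatabase(req.body)
    res.sendStatus(204)
  })
}
```

An e2e test can then call this route (for example with `cy.request`) before visiting the page under test.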