Quality Requirements
The strategy for delivering a high-quality solution combines automated and manual quality checks.
Automated Quality Checks
Code Maintenance
- The code is backed by code style checking.
- The code is documented sufficiently.
- We provide general documentation through repository-based README-files, and also annotate the code where the complexity requires additional background information.
- Code style checking (and unit testing) is automated through CI/CD pipelines (a minimal pipeline step is sketched below).
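As an illustration, a CI pipeline step enforcing these checks could look like the following sketch; flake8 and pytest are assumed tools here, not the project's confirmed choices, and the actual pipeline configuration is not prescribed.

```python
"""Minimal sketch of a CI quality-gate step (assumed tooling: flake8, pytest)."""
import subprocess
import sys

# Each command is one quality check the pipeline runs in order.
CHECKS = [
    ["flake8", "."],           # code style checking
    ["pytest", "tests/unit"],  # code-level unit tests
]


def main() -> int:
    for command in CHECKS:
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail the pipeline as soon as one check fails.
            return result.returncode
    return 0


if __name__ == "__main__":
    sys.exit(main())
```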
Functionality Maintenance
- Code-level unit tests for risky but easy-to-test functionality that does not involve external systems (a minimal example follows this list).
- Backend API tests to make sure the API meets the expectations and the underlying integrations work.
- API performance tests with large data volumes to verify that the system can handle demanding workloads and to guard against overload.
- Simple website scanning tests to make sure the links are correct and the user-facing functionality is not broken.
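As an illustration of the first category, a code-level unit test might look like the following sketch, assuming pytest; the validate_dataset_id helper is a hypothetical example of logic that is risky but easy to test without involving external systems.

```python
"""Minimal sketch of a code-level unit test; the helper under test is hypothetical."""
import re

import pytest


def validate_dataset_id(dataset_id: str) -> bool:
    """Hypothetical helper: accept identifiers such as 'DS-2024-0001'."""
    return bool(re.fullmatch(r"DS-\d{4}-\d{4}", dataset_id))


@pytest.mark.parametrize(
    "dataset_id,expected",
    [
        ("DS-2024-0001", True),   # well-formed identifier
        ("ds-2024-0001", False),  # wrong case
        ("DS-24-1", False),       # wrong field lengths
    ],
)
def test_validate_dataset_id(dataset_id, expected):
    assert validate_dataset_id(dataset_id) is expected
```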
Most of the tests are written in Python (just like the software itself), except the website/browser tests, which are based on Cypress.
While the unit tests are included in the code base, the other tests are executed separately after the build-and-deploy process and therefore need separate Docker images. These tests must be configurable (e.g. targeted deployment environment, login user and credentials).
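A configurable backend API test could, for example, read the target environment and credentials from environment variables, as in the following sketch (assuming pytest and requests; the variable names and the /health endpoint are illustrative, not the actual project configuration):

```python
"""Minimal sketch of a configurable backend API test (assumed tooling: pytest, requests)."""
import os

import pytest
import requests

# Target deployment environment and login credentials are injected from the
# environment so the same test image can run against any deployment.
BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8000")
API_USER = os.environ.get("API_USER", "")
API_PASSWORD = os.environ.get("API_PASSWORD", "")


def test_api_is_reachable():
    """The API should answer a basic request in the configured environment."""
    response = requests.get(
        f"{BASE_URL}/health",
        auth=(API_USER, API_PASSWORD),
        timeout=10,
    )
    assert response.status_code == 200
```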
Manual Quality Checks
Before creating a release, the development team goes through a manual testing phase (taking about a week), especially to detect issues that automated tests cannot judge.
Success Criteria
- All tests must finish without a failure (there is no time limit).
- The tests (combined) cover all integration options.
- The tests (combined) cover all user roles and/or permissions (a sketch follows below).
- It is sufficient for the automated website tests to just cover the Chrome browser.
Note that the minimum performance criteria will be specified later based on the performance indicators from software analysis.
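To illustrate how the role/permission criterion could be covered, the following sketch parametrizes a single API test over a set of roles (assuming pytest and requests; the role names, token variables, and the /admin endpoint are illustrative placeholders, not the project's actual roles or API):

```python
"""Minimal sketch of covering all user roles with one parametrized test."""
import os

import pytest
import requests

BASE_URL = os.environ.get("API_BASE_URL", "http://localhost:8000")

# Hypothetical roles and the status each is expected to get on an
# admin-only endpoint: only the admin role should be allowed through.
ROLE_EXPECTATIONS = [
    ("admin", 200),
    ("researcher", 403),
    ("viewer", 403),
]


@pytest.mark.parametrize("role,expected_status", ROLE_EXPECTATIONS)
def test_admin_endpoint_per_role(role, expected_status):
    # Per-role credentials are assumed to be provided via environment
    # variables such as ADMIN_TOKEN, RESEARCHER_TOKEN, VIEWER_TOKEN.
    token = os.environ.get(f"{role.upper()}_TOKEN", "")
    response = requests.get(
        f"{BASE_URL}/admin",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    assert response.status_code == expected_status
```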
Quality Strategy
Since the GDI is foremost a research-and-development type of project, the development team initially focuses on producing working prototypes, which may involve more than one iteration. To avoid slowing down development, initial quality assurance focuses only on the code itself and on manual testing. Automated tests will be added once the chosen prototype has matured enough to be called stable; their purpose is to detect when the functionality breaks.
The documentation is usually produced alongside the development work.