Diffstat (limited to 'topics')
-rw-r--r--  topics/automated-testing.gmi  136
1 file changed, 136 insertions(+), 0 deletions(-)
diff --git a/topics/automated-testing.gmi b/topics/automated-testing.gmi
new file mode 100644
index 0000000..6b22423
--- /dev/null
+++ b/topics/automated-testing.gmi
@@ -0,0 +1,136 @@
+# Automated Testing
+
+## Tags
+
+* keywords: testing, CI, CD
+
+## Introduction
+
+As part of the
+=> ../systems/ci-cd.gmi CI/CD effort
+there is a need for automated tests to ensure that the system works as expected.
+
+This document is meant to track the implementation of the automated tests and possibly the related infrastructure for running the tests.
+
+## Genenetwork 3
+
+### Unit Tests
+
+There is a collection of unit tests under *tests/unit* in the
+=> https://github.com/genenetwork/genenetwork3 Genenetwork 3 repository
+
+### Integration
+
+There is (as of 2022-Feb-10) an
+=> https://github.com/genenetwork/genenetwork3/tree/main/tests/integration integration tests directory
+in the Genenetwork 3 repository.
+
+The tests there, however, are technically unit tests: each one seems to exercise a single logical unit of the system, e.g. correlations, gemma, etc.
+
+There seems to be no test that checks for interactions among the logical units/modules of the system, e.g.
+
+* authorisation <==> file-upload <==> correlations
+* partial-correlations <==> trait-editing <==> gemma analysis
+
+etc.
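+
+Below is a minimal sketch of what such an integration test could look like, assuming pytest, a Flask test client, and hypothetical endpoint paths and payloads (the gn3.app.create_app factory is likewise an assumption, not confirmed against the actual code):
+
+``` python
+# Sketch: an integration test chaining file-upload and correlations
+# through the API. Endpoint paths and payloads are hypothetical.
+import io
+import pytest
+
+from gn3.app import create_app  # assumed application factory
+
+
+@pytest.fixture
+def client():
+    app = create_app()
+    app.config["TESTING"] = True
+    with app.test_client() as test_client:
+        yield test_client
+
+
+def test_upload_then_correlate(client):
+    """Uploading a file and then running correlations on it should
+    work end-to-end, exercising more than one logical unit."""
+    upload = client.post(
+        "/api/upload",  # hypothetical endpoint
+        data={"file": (io.BytesIO(b"trait,value\nT1,1.0\n"), "traits.csv")},
+        content_type="multipart/form-data")
+    assert upload.status_code == 200
+
+    corr = client.post(
+        "/api/correlation/sample",  # hypothetical endpoint
+        json={"dataset": upload.get_json().get("dataset_id")})
+    assert corr.status_code == 200
+```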
+
+### API Tests
+
+There is a need for tests to ensure that all expected endpoints are up and running.
+
+They could perhaps even check that the data returned is correct.
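+
+For illustration, such checks could be as simple as the following sketch, which assumes the requests library, a hypothetical base URL and a hypothetical list of endpoints:
+
+``` python
+# Sketch: every listed endpoint should be up and return a successful
+# status. The base URL and the endpoint list are assumptions.
+import requests
+
+BASE_URL = "https://genenetwork.org/api3"  # assumed base URL
+
+ENDPOINTS = (
+    "/version",
+    "/datasets/list",        # hypothetical
+    "/correlation/methods",  # hypothetical
+)
+
+
+def test_endpoints_are_up():
+    for path in ENDPOINTS:
+        response = requests.get(BASE_URL + path, timeout=30)
+        assert response.status_code == 200, f"{path} is not responding"
+
+
+def test_version_data_looks_sane():
+    # Beyond availability, spot-check that the returned data is sane.
+    data = requests.get(BASE_URL + "/version", timeout=30).json()
+    assert data, "expected a non-empty version response"
+```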
+
+### Performance and Responsiveness
+
+There is a need to ensure that the system does not take an unreasonably long time to run its computations.
+
+There is a single performance tests module in
+=> https://github.com/genenetwork/genenetwork3/tree/main/tests/performance the performance tests directory
+for Genenetwork 3, but it is run manually and mostly tests a very specific query that might or might not be used in the actual code.
+
+The performance tests in GN3 should probably be focussed on checking the following (among others):
+
+* Each API endpoint responds within a specified amount of time
+* Select computation-heavy functions respond within a specified amount of time for given data
+* Database-querying functions used in the system respond within a specified amount of time
+
+etc.
+
+This is relevant since GN3 sits behind Nginx, which enforces a request timeout.
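+
+A sketch of one such check, assuming the requests library, a hypothetical endpoint and an assumed Nginx timeout; the threshold must stay well below whatever timeout Nginx actually enforces:
+
+``` python
+# Sketch: the endpoint must answer well within the Nginx timeout.
+# The URL, the timeout value and the threshold are assumptions.
+import time
+import requests
+
+BASE_URL = "https://genenetwork.org/api3"  # assumed base URL
+NGINX_TIMEOUT = 60                         # assumed, in seconds
+THRESHOLD = 10                             # chosen well below the timeout
+
+
+def test_endpoint_responds_in_time():
+    start = time.monotonic()
+    response = requests.get(BASE_URL + "/version", timeout=NGINX_TIMEOUT)
+    elapsed = time.monotonic() - start
+    assert response.status_code == 200
+    assert elapsed < THRESHOLD, f"took {elapsed:.1f}s, limit is {THRESHOLD}s"
+```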
+
+### Regression Tests
+
+Checks that previously working features are not broken. These can be added as we go along.
+
+## Genenetwork 2
+
+### Unit Tests
+
+Present under the
+=> https://github.com/genenetwork/genenetwork2/tree/testing/wqflask/tests/unit unit tests directory
+in the GN2 repository.
+
+### Integration
+
+Genenetwork 2 has a "Mechanical Rob" testing system, currently under construction, whose purpose (as far as I, fredm, can tell) is to "walk" some common paths that have multiple logical units working together, thus performing a form of integration testing.
+
+The only issue I (fredm) see with it, as it currently stands, is that it will not be able to test the JavaScript interactions that are crucial to some operations in certain flows.
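+
+For illustration, a Mechanical-Rob-style check might look roughly like the sketch below; the URL path, parameters and the expected text are hypothetical, and any JavaScript-driven behaviour on the page would remain untested:
+
+``` python
+# Sketch: fetch a URL path that exercises several units (routing,
+# database access, template rendering) and assert on the returned
+# HTML. The path, parameters and marker text are assumptions.
+import requests
+
+GN2_URL = "https://genenetwork.org"  # assumed base URL
+
+
+def check_trait_page():
+    page = requests.get(
+        GN2_URL + "/show_trait",  # hypothetical path
+        params={"trait_id": "1435464_at", "dataset": "HC_M2_0606_P"},
+        timeout=60)
+    assert page.status_code == 200
+    assert "Trait Data and Analysis" in page.text  # hypothetical marker
+
+
+if __name__ == "__main__":
+    check_trait_page()
+    print("OK")
+```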
+
+### Performance and Responsiveness
+
+Since GN2 is not meant to handle computations itself, the bigger concern here is responsiveness.
+
+There might need to be checks for responsiveness built in.
+
+### Regression Tests
+
+Checks that previously working features are not broken. These can be added as we go along.
+
+### Notes from Email Correspondence
+
+Selenium (and other browser-automation tools) was said to be too complicated, and is to be avoided as much as possible. It is better to use headless Firefox/Chromium and to fetch pages with Mechanical Rob.
+
+Selenium had been previously introduced in GN2 and then swiftly removed.
+
+
+=> https://github.com/genenetwork/gn-docs/blob/master/scripts/screenshot.rb sample script to create screen shots
+
+
+=> https://github.com/genenetwork/gn-deploy-servers/blob/master/scripts/rabbit/monitor_websites.sh tests currently run to monitor GN2 and its endpoints
+
+We should cover the GN2/GN3 endpoints.
+
+
+For GN2 we should write scripts that test:
+
+* [ ] main menu selector
+* [ ] search
+* [ ] global search
+* [ ] search with wild cards
+* [ ] select items and add to shopping basket/collections
+* [ ] mapping page
+* [ ] run R/qtl mapping
+* [ ] run GEMMA mapping
+* [ ] run correlations
+* [ ] run the functions on the shopping basket/collections, such as CTL and network graph
+
+
+> Please don't emulate hitting browser buttons (selenium style). Simply
+> find URL paths that do the job. You may have to set cookies.
+>
+> Mechanical Rob can do that.
+>
+> If using the browser interface is too hard, then create 'back-end'
+> tests that cover the real functionality. I am not too concerned about
+> the browser display - i.e., real Rob catches that quickly enough. I am
+> *really* worried about regressions where search etc. starts giving
+> different results.
+>
+> See what I mean?
+>
+> Pj.
+
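+In line with the note above, a regression check for search (one of the items on the checklist) could hit the search URL path directly, with no browser automation, and compare the results against a stored baseline. A sketch, in which the path, parameters and baseline file are all hypothetical:
+
+``` python
+# Sketch: fetch search results via the URL path and compare them
+# against a stored baseline, so that changes in results are caught.
+# The path, parameters and baseline file are assumptions.
+import json
+import pathlib
+import requests
+
+GN2_URL = "https://genenetwork.org"                   # assumed base URL
+BASELINE = pathlib.Path("baselines/search_shh.json")  # hypothetical file
+
+
+def fetch_search_results():
+    page = requests.get(
+        GN2_URL + "/search",  # hypothetical path
+        params={"species": "mouse", "group": "BXD",
+                "type": "Hippocampus mRNA", "search_terms_or": "shh"},
+        timeout=60)
+    assert page.status_code == 200
+    return page.text
+
+
+def check_search_regression():
+    results = fetch_search_results()
+    baseline = json.loads(BASELINE.read_text())
+    for record in baseline["expected_records"]:
+        assert record in results, f"previously returned record is missing: {record}"
+
+
+if __name__ == "__main__":
+    check_search_regression()
+    print("search results match the baseline")
+```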
+
+## Testing interface
+
+Tests in the different categories should be grouped under different command-line endpoints. For example, unit tests could be run with "python3 setup.py check", integration tests with "python3 setup.py integration-check", performance tests with "python3 setup.py performance-check", and so on. This way, the CI will have to be configured only once, and committers will be able to add new tests without requesting a CI reconfiguration each time. We won't have to wait on others to respond, and less coordination will be required, leading to smoother work for everyone.
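+
+For illustration, such command-line endpoints can be registered in setup.py through cmdclass. Below is a minimal sketch, assuming pytest as the underlying runner and the tests/unit, tests/integration and tests/performance layout described above; note that distutils command names may only contain letters, digits and underscores, so the sketch spells the commands integration_check and performance_check.
+
+``` python
+# Sketch: wiring test categories to setup.py commands, assuming pytest
+# as the underlying runner. Command names use underscores because
+# distutils does not accept hyphens in command names.
+import sys
+from setuptools import Command, setup
+
+
+class PytestCommand(Command):
+    """Base command: run pytest against a single test directory."""
+    user_options = []
+    test_dir = None  # overridden by the subclasses below
+
+    def initialize_options(self):
+        pass
+
+    def finalize_options(self):
+        pass
+
+    def run(self):
+        import pytest
+        sys.exit(pytest.main([self.test_dir]))
+
+
+class IntegrationCheck(PytestCommand):
+    description = "run the integration tests"
+    test_dir = "tests/integration"
+
+
+class PerformanceCheck(PytestCommand):
+    description = "run the performance tests"
+    test_dir = "tests/performance"
+
+
+setup(
+    # ... the usual package metadata would go here ...
+    cmdclass={
+        "integration_check": IntegrationCheck,
+        "performance_check": PerformanceCheck,
+    },
+)
+```
+
+With something like this in place, committers would run "python3 setup.py integration_check" or "python3 setup.py performance_check", and adding a new test would just be a matter of dropping a file into the right directory.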