-rw-r--r-- .gitignore | 1
-rw-r--r-- issues/genenetwork2/broken-collections-features.gmi | 44
-rw-r--r-- issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi | 15
-rw-r--r-- issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip | bin 0 -> 143152 bytes
-rw-r--r-- issues/genenetwork3/ctl-maps-error.gmi | 46
-rw-r--r-- issues/genenetwork3/generate-heatmaps-failing.gmi | 34
-rw-r--r-- issues/genenetwork3/rqtl2-mapping-error.gmi | 42
-rw-r--r-- issues/gn-guile/rendering-images-within-markdown-documents.gmi | 22
-rw-r--r-- issues/gn-guile/rework-hard-dependence-on-github.gmi | 21
-rw-r--r-- issues/gn-uploader/AuthorisationError-gn-uploader.gmi | 66
-rw-r--r-- issues/gn-uploader/replace-redis-with-sqlite3.gmi | 17
-rw-r--r-- issues/gnqa/implement-no-login-requirement-for-gnqa.gmi | 20
-rw-r--r-- issues/production-container-mechanical-rob-failure.gmi | 224
-rw-r--r-- issues/systems/apps.gmi | 22
-rw-r--r-- issues/systems/octoraid-storage.gmi | 18
-rw-r--r-- issues/systems/penguin2-raid5.gmi | 61
-rw-r--r-- issues/systems/tux02-production.gmi | 4
-rw-r--r-- issues/systems/tux04-disk-issues.gmi | 277
-rw-r--r-- issues/systems/tux04-production.gmi | 279
-rw-r--r-- tasks/alexm.gmi | 86
-rw-r--r-- tasks/bonfacem.gmi | 103
-rw-r--r-- tasks/felixl.gmi | 128
-rw-r--r-- tasks/fredm.gmi | 16
-rw-r--r-- tasks/machine-room.gmi | 8
-rw-r--r-- tasks/octopus.gmi | 3
-rw-r--r-- tasks/pjotrp.gmi | 87
-rw-r--r-- tasks/programmer-team/meetings.gmi | 82
-rw-r--r-- tasks/roadmap.gmi | 65
-rw-r--r-- tasks/zachs.gmi | 7
-rw-r--r-- topics/ai/aider.gmi | 6
-rw-r--r-- topics/ai/ontogpt.gmi | 7
-rw-r--r-- topics/database/mariadb-database-architecture.gmi | 66
-rw-r--r-- topics/deploy/genecup.gmi | 69
-rw-r--r-- topics/deploy/installation.gmi | 2
-rw-r--r-- topics/deploy/machines.gmi | 7
-rw-r--r-- topics/deploy/setting-up-or-migrating-production-across-machines.gmi | 58
-rw-r--r-- topics/deploy/uthsc-vpn-with-free-software.gmi | 11
-rw-r--r-- topics/deploy/uthsc-vpn.scm | 2
-rw-r--r-- topics/genenetwork-releases.gmi | 77
-rw-r--r-- topics/genenetwork/starting_gn1.gmi | 4
-rw-r--r-- topics/gn-learning-team/next-steps.gmi | 48
-rw-r--r-- topics/octopus/maintenance.gmi | 98
-rw-r--r-- topics/octopus/recent-rust.gmi | 76
-rw-r--r-- topics/programming/autossh-for-keeping-ssh-tunnels.gmi | 65
-rw-r--r-- topics/systems/backup-drops.gmi | 34
-rw-r--r-- topics/systems/backups-with-borg.gmi | 220
-rw-r--r-- topics/systems/ci-cd.gmi | 48
-rw-r--r-- topics/systems/mariadb/mariadb.gmi | 11
-rw-r--r-- topics/systems/mariadb/precompute-mapping-input-data.gmi | 23
-rw-r--r-- topics/systems/migrate-p2.gmi | 12
-rw-r--r-- topics/systems/screenshot-github-webhook.png | bin 0 -> 177112 bytes
-rw-r--r-- topics/systems/synchronising-the-different-environments.gmi | 68
-rw-r--r-- topics/systems/update-production-checklist.gmi | 182
53 files changed, 2885 insertions, 107 deletions
diff --git a/.gitignore b/.gitignore
index 329cfdc..8a5b167 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,3 +2,4 @@
index.gmi
tracker.gmi
.aider*
+.tissue/**/*
diff --git a/issues/genenetwork2/broken-collections-features.gmi b/issues/genenetwork2/broken-collections-features.gmi
new file mode 100644
index 0000000..4239929
--- /dev/null
+++ b/issues/genenetwork2/broken-collections-features.gmi
@@ -0,0 +1,44 @@
+# Broken Collections Features
+
+## Tags
+
+* type: bug
+* status: open
+* priority: high
+* assigned: zachs, fredm
+* keywords: gn2, genenetwork2, genenetwork 2, collections
+
+## Description
+
+There are some features in the search results page, and/or the collections page that are broken — these are:
+
+* "CTL" feature
+* "MultiMap" feature
+* "Partial Correlations" feature
+* "Generate Heatmap" feature
+
+### Reproduce Issue
+
+* Go to https://genenetwork.org
+* Select "Mouse (Mus musculus, mm10)" for "Species"
+* Select "BXD Family" for "Group"
+* Select "Traits and Cofactors" for "Type"
+* Select "BXD Published Phenotypes" for "Dataset"
+* Type "locomotion" in the "Get Any" field (without the quotes)
+* Click "Search"
+* In the results page, select the traits with the following "Record" values: "BXD_10050", "BXD_10051", "BXD_10088", "BXD_10091", "BXD_10092", "BXD_10455", "BXD_10569", "BXD_10570", "BXD_11316", "BXD_11317"
+* Click the "Add" button and add them to a new collection
+* In the resulting collections page, click the button for any of the listed failing features above
+
+### Failure modes
+
+* The "CTL" and "WGCNA" features have a failure mode that might have been caused by recent changes that make use of AJAX calls, rather than submitting the form directly.
+* The "MultiMap" and "Generate Heatmap" features raise exceptions that need to be investigated and resolved.
+* The "Partial Correlations" feature seems to run forever.
+
+## Break-out Issues
+
+We break out the issues above into separate pages to track the progress of the fixes for each feature separately.
+
+=> /issues/genenetwork3/ctl-maps-error
+=> /issues/genenetwork3/generate-heatmaps-failing
diff --git a/issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi b/issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi
new file mode 100644
index 0000000..0c8e9c8
--- /dev/null
+++ b/issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi
@@ -0,0 +1,15 @@
+# Fix Display for the Time Consumed for Correlations
+
+## Tags
+
+* type: bug
+* status: closed, completed
+* priority: low
+* assigned: @alexm, @bonz
+* keywords: gn2, genenetwork2, genenetwork 2, gn3, genenetwork3, genenetwork 3, correlations, time display
+
+## Description
+
+The breakdown of the time consumed by the correlations computations, displayed at the bottom of the page, is not representative of reality. The time that GeneNetwork3 (or a background process) takes for the computations is not actually represented in the breakdown, leading to wildly inaccurate displays of total time.
+
+This will need to be fixed.
diff --git a/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip b/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip
new file mode 100644
index 0000000..7681b88
--- /dev/null
+++ b/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip
Binary files differ
diff --git a/issues/genenetwork3/ctl-maps-error.gmi b/issues/genenetwork3/ctl-maps-error.gmi
new file mode 100644
index 0000000..6726357
--- /dev/null
+++ b/issues/genenetwork3/ctl-maps-error.gmi
@@ -0,0 +1,46 @@
+# CTL Maps Error
+
+## Tags
+
+* type: bug
+* status: open
+* priority: high
+* assigned: alexm, zachs, fredm
+* keywords: CTL, CTL Maps, gn3, genenetwork3, genenetwork 3
+
+## Description
+
+When trying to run the CTL Maps feature in the collections page, as described in
+=> /issues/genenetwork2/broken-collections-features
+
+we get an error in the results page of the form:
+
+```
+{'error': '{\'code\': 1, \'output\': \'Loading required package: MASS\\nLoading required package: parallel\\nLoading required package: qtl\\nThere were 13 warnings (use warnings() to see them)\\nError in xspline(x, y, shape = 0, lwd = lwd, border = col, lty = lty, : \\n invalid value specified for graphical parameter "lwd"\\nCalls: ctl.lineplot -> draw.spline -> xspline\\nExecution halted\\n\'}'}
+```
+
+On the CLI, the same error is rendered:
+```
+Loading required package: MASS
+Loading required package: parallel
+Loading required package: qtl
+There were 13 warnings (use warnings() to see them)
+Error in xspline(x, y, shape = 0, lwd = lwd, border = col, lty = lty, :
+ invalid value specified for graphical parameter "lwd"
+Calls: ctl.lineplot -> draw.spline -> xspline
+Execution halted
+```
+
+On my local development machine, the command run was:
+```
+Rscript /home/frederick/genenetwork/genenetwork3/scripts/ctl_analysis.R /tmp/01828928-26e6-4cad-bbc8-59fd7a7977de.json
+```
+
+Here is a zipped version of the json file (follow the link and click download):
+=> https://github.com/genenetwork/gn-gemtext-threads/blob/main/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip
+
+After troubleshooting for a while, I suspect
+=> https://github.com/genenetwork/genenetwork3/blob/27d9c9d6ef7f37066fc63af3d6585bf18aeec925/scripts/ctl_analysis.R#L79-L80 this is the offending code.
+
+=> https://cran.r-project.org/web/packages/ctl/ctl.pdf The manual for the ctl library
+indicates that our call above might be okay, which might mean something changed in the dependencies that the ctl library uses.
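The JSON error shown above is how GN3 packages a failing subprocess. A minimal sketch of such a wrapper (hypothetical helper name; the real invocation code lives in genenetwork3's computations module) that reproduces the `{'code': ..., 'output': ...}` structure:

```python
import subprocess
import sys

def run_external_script(command):
    """Run an external analysis command (e.g. Rscript ctl_analysis.R <json>)
    and package a non-zero exit the way the error above shows it."""
    result = subprocess.run(command, capture_output=True, text=True)
    if result.returncode != 0:
        # Mirrors the {'code': ..., 'output': ...} structure in the error above
        return {"error": {"code": result.returncode,
                          "output": result.stdout + result.stderr}}
    return {"output": result.stdout}
```

Running the `Rscript ... ctl_analysis.R ...` command through such a wrapper surfaces the same `Execution halted` output seen on the CLI.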
diff --git a/issues/genenetwork3/generate-heatmaps-failing.gmi b/issues/genenetwork3/generate-heatmaps-failing.gmi
index 03256e6..522dc27 100644
--- a/issues/genenetwork3/generate-heatmaps-failing.gmi
+++ b/issues/genenetwork3/generate-heatmaps-failing.gmi
@@ -28,3 +28,37 @@ On https://gn2-fred.genenetwork.org the heatmaps fails with a note ("ERROR: unde
=> https://github.com/scipy/scipy/issues/19972
This issue should not be present with python-plotly@5.20.0 but since guix-bioinformatics pins the guix version to `b0b988c41c9e0e591274495a1b2d6f27fcdae15a`, we are not able to pull in newer versions of packages from guix.
+
+
+### Update 2025-04-08T10:59CDT
+
+Got the following error when I ran the background command manually:
+
+```
+$ export RUST_BACKTRACE=full
+$ /gnu/store/dp4zq4xiap6rp7h6vslwl1n52bd8gnwm-profile/bin/qtlreaper --geno /home/frederick/genotype_files/genotype/genotype/BXD.geno --n_permutations 1000 --traits /tmp/traits_test_file_n2E7V06Cx7.txt --main_output /tmp/qtlreaper/main_output_NGVW4sfYha.txt --permu_output /tmp/qtlreaper/permu_output_MJnzLbrsrC.txt
+thread 'main' panicked at src/regression.rs:216:25:
+index out of bounds: the len is 20 but the index is 20
+stack backtrace:
+ 0: 0x61399d77d46d - <unknown>
+ 1: 0x61399d7b5e13 - <unknown>
+ 2: 0x61399d78b649 - <unknown>
+ 3: 0x61399d78f26f - <unknown>
+ 4: 0x61399d78ee98 - <unknown>
+ 5: 0x61399d78f815 - <unknown>
+ 6: 0x61399d77d859 - <unknown>
+ 7: 0x61399d77d679 - <unknown>
+ 8: 0x61399d78f3f4 - <unknown>
+ 9: 0x61399d6f4063 - <unknown>
+ 10: 0x61399d6f41f7 - <unknown>
+ 11: 0x61399d708f18 - <unknown>
+ 12: 0x61399d6f6e4e - <unknown>
+ 13: 0x61399d6f9e93 - <unknown>
+ 14: 0x61399d6f9e89 - <unknown>
+ 15: 0x61399d78e505 - <unknown>
+ 16: 0x61399d6f8d55 - <unknown>
+ 17: 0x75ee2b945bf7 - __libc_start_call_main
+ 18: 0x75ee2b945cac - __libc_start_main@GLIBC_2.2.5
+ 19: 0x61399d6f4861 - <unknown>
+ 20: 0x0 - <unknown>
+```
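The panic message (`the len is 20 but the index is 20`) is a classic off-by-one: valid indices into a length-20 array are 0 through 19. A trivial Python illustration of the same bounds rule, purely to show the failing invariant (the actual fix belongs in qtlreaper's `src/regression.rs`):

```python
values = list(range(20))  # len(values) == 20; valid indices are 0..19

try:
    values[20]  # the same out-of-bounds access the Rust panic reports
except IndexError as exc:
    print(exc)
```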
diff --git a/issues/genenetwork3/rqtl2-mapping-error.gmi b/issues/genenetwork3/rqtl2-mapping-error.gmi
new file mode 100644
index 0000000..480c7c6
--- /dev/null
+++ b/issues/genenetwork3/rqtl2-mapping-error.gmi
@@ -0,0 +1,42 @@
+# R/qtl2 Maps Error
+
+## Tags
+
+* type: bug
+* status: open
+* priority: high
+* assigned: alexm, zachs, fredm
+* keywords: R/qtl2, R/qtl2 Maps, gn3, genenetwork3, genenetwork 3
+
+## Reproduce
+
+* Go to https://genenetwork.org/
+* In the "Get Any" field, enter "synap*" and press the "Enter" key
+* In the search results, click on the "1435464_at" trait
+* Expand the "Mapping Tools" accordion section
+* Select the "R/qtl2" option
+* Click "Compute"
+* In the "Computing the Maps" page that results, click on "Display System Log"
+
+### Observed
+
+A traceback is observed, with an error of the following form:
+
+```
+⋮
+FileNotFoundError: [Errno 2] No such file or directory: '/opt/gn/tmp/gn3-tmpdir/JL9PvKm3OyKk.txt'
+```
+
+### Expected
+
+The mapping runs successfully and the results are presented in the form of a mapping chart/graph and a table of values.
+
+### Debug Notes
+
+The directory "/opt/gn/tmp/gn3-tmpdir/" exists, and is successfully used by other mappings (i.e. the "R/qtl" and "Pair Scan" mappings).
+
+This might imply a code issue: Perhaps
+* a path is hardcoded, or
+* the wrong path value is passed
+
+The same error occurs on https://cd.genenetwork.org but does not seem to prevent CD from running the mapping to completion. Maybe something is missing on production — what, though?
diff --git a/issues/gn-guile/rendering-images-within-markdown-documents.gmi b/issues/gn-guile/rendering-images-within-markdown-documents.gmi
new file mode 100644
index 0000000..fe3ed39
--- /dev/null
+++ b/issues/gn-guile/rendering-images-within-markdown-documents.gmi
@@ -0,0 +1,22 @@
+# Rendering Images Linked in Markdown Documents
+
+## Tags
+
+* status: open
+* priority: high
+* type: bug
+* assigned: alexm, bonfacem, fredm
+* keywords: gn-guile, images, markdown
+
+## Description
+
+Rendering images linked within markdown documents does not work as expected — we cannot render images if they have a relative path.
+As an example see the commit below:
+=> https://github.com/genenetwork/gn-docs/commit/783e7d20368e370fb497974f843f985b51606d00
+
+In that commit, we are forced to use the full GitHub URL to get the images to load correctly when rendered via gn-guile. This has two unfortunate consequences:
+
+* It makes editing more difficult, since the user has to remember to find and use the full github URL for their images.
+* It ties the data and code to GitHub.
+
+This needs to be fixed, such that any and all paths relative to the markdown file are resolved at render time automatically.
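The desired behaviour is to resolve relative targets against the source document's own URL at render time. A sketch of the idea (in Python for illustration only, since gn-guile itself is Guile Scheme, and the regex covers only the basic `![alt](target)` form):

```python
import re
from urllib.parse import urljoin

def resolve_image_links(markdown_text, base_url):
    # Rewrite relative image targets against the document's base URL,
    # leaving absolute http(s) and root-relative targets untouched.
    def fix(match):
        alt, target = match.group(1), match.group(2)
        if not target.startswith(("http://", "https://", "/")):
            target = urljoin(base_url, target)
        return f"![{alt}]({target})"
    return re.sub(r"!\[([^\]]*)\]\(([^)]+)\)", fix, markdown_text)
```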
diff --git a/issues/gn-guile/rework-hard-dependence-on-github.gmi b/issues/gn-guile/rework-hard-dependence-on-github.gmi
new file mode 100644
index 0000000..751e9fe
--- /dev/null
+++ b/issues/gn-guile/rework-hard-dependence-on-github.gmi
@@ -0,0 +1,21 @@
+# Rework Hard Dependence on Github
+
+## Tags
+
+* status: open
+* priority: medium
+* type: bug
+* assigned: alexm
+* assigned: bonfacem
+* assigned: fredm
+* keywords: gn-guile, github
+
+## Description
+
+Currently, we have a hard dependence on GitHub for our source repository — you can see this in lines 31, 41, 55 and 59 of the code linked below:
+
+=> https://git.genenetwork.org/gn-guile/tree/web/view/markdown.scm?id=0ebf6926db0c69e4c444a6f95907e0971ae9bf40
+
+The most likely reason is that the "edit online" functionality might not exist in a lot of other popular source forges.
+
+This is rendered moot, however, since we do provide a means to edit the data on GeneNetwork itself. We might as well get rid of this option, only allow the "edit online" feature on GeneNetwork, and stop relying on its presence in the forges we use.
diff --git a/issues/gn-uploader/AuthorisationError-gn-uploader.gmi b/issues/gn-uploader/AuthorisationError-gn-uploader.gmi
new file mode 100644
index 0000000..50a236d
--- /dev/null
+++ b/issues/gn-uploader/AuthorisationError-gn-uploader.gmi
@@ -0,0 +1,66 @@
+# AuthorisationError in gn-uploader
+
+## Tags
+* assigned: fredm
+* status: open
+* priority: critical
+* type: error
+* keywords: authorisation, permission
+
+## Description
+
+While trying to create a population for the Kilifish dataset in the gn-uploader web page,
+the following error was encountered:
+```sh
+Traceback (most recent call last):
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
+ rv = self.dispatch_request()
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
+ return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/authorisation.py", line 23, in __is_session_valid__
+ return session.user_token().either(
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/pymonad/either.py", line 89, in either
+ return right_function(self.value)
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/authorisation.py", line 25, in <lambda>
+ lambda token: function(*args, **kwargs))
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/population/views.py", line 185, in create_population
+ ).either(
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/pymonad/either.py", line 91, in either
+ return left_function(self.monoid[0])
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/monadic_requests.py", line 99, in __fail__
+ raise Exception(_data)
+Exception: {'error': 'AuthorisationError', 'error-trace': 'Traceback (most recent call last):
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
+ rv = self.dispatch_request()
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
+ return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/authlib/integrations/flask_oauth2/resource_protector.py", line 110, in decorated
+ return f(*args, **kwargs)
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/resources/inbredset/views.py", line 95, in create_population_resource
+ ).then(
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/pymonad/monad.py", line 152, in then
+ result = self.map(function)
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/pymonad/either.py", line 106, in map
+ return self.__class__(function(self.value), (None, True))
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/resources/inbredset/views.py", line 98, in <lambda>
+ "resource": create_resource(
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/resources/inbredset/models.py", line 25, in create_resource
+ return _create_resource(cursor,
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/checks.py", line 56, in __authoriser__
+ raise AuthorisationError(error_description)
+gn_auth.auth.errors.AuthorisationError: Insufficient privileges to create a resource
+', 'error_description': 'Insufficient privileges to create a resource'}
+
+```
+The error above resulted from an attempt to create a population in the gn-uploader "Create Population" section.
+The input details were as follows:
+* Full Name: Kilifish F2 Intercross Lines
+* Name: KF2_Lines
+* Population code: KF2
+* Description: Kilifish second generation population
+* Family: Crosses, AIL, HS
+* Mapping Methods: GEMMA, QTLReaper, R/qtl
+* Genetic type: intercross
+
+Pressing the `Create Population` button then led to the error above.
+
diff --git a/issues/gn-uploader/replace-redis-with-sqlite3.gmi b/issues/gn-uploader/replace-redis-with-sqlite3.gmi
new file mode 100644
index 0000000..3e5020a
--- /dev/null
+++ b/issues/gn-uploader/replace-redis-with-sqlite3.gmi
@@ -0,0 +1,17 @@
+# Replace Redis with SQL
+
+## Tags
+
+* status: open
+* priority: low
+* assigned: fredm
+* type: feature, feature-request, feature request
+* keywords: gn-uploader, uploader, redis, sqlite, sqlite3
+
+## Description
+
+We currently (as of 2024-06-27) use Redis for tracking any asynchronous jobs (e.g. QC on uploaded files).
+
+A lot of what we use Redis for we can do in one of the many SQL databases (we'll probably use SQLite3 anyway), which are more standardised and easier to migrate data from and to. This has the added advantage that we can open multiple connections to the database, enabling different processes to update the status and metadata of the same job consistently.
+
+Changes done here can then be migrated to the other systems, i.e. GN2, GN3, and gn-auth, as necessary.
diff --git a/issues/gnqa/implement-no-login-requirement-for-gnqa.gmi b/issues/gnqa/implement-no-login-requirement-for-gnqa.gmi
new file mode 100644
index 0000000..9dcef53
--- /dev/null
+++ b/issues/gnqa/implement-no-login-requirement-for-gnqa.gmi
@@ -0,0 +1,20 @@
+# Implement No-Login Requirement for GNQA
+
+## Tags
+
+* type: feature
+* status: progress
+* priority: medium
+* assigned: alexm
+* keywords: gnqa, user experience, authentication, login, llm
+
+## Description
+This feature will allow usage of LLM/GNQA features without requiring user authentication, while implementing measures to filter out bots.
+
+
+## Tasks
+
+* [x] If logged in: perform AI search with zero penalty
+* [ ] Add caching lifetime to save on token usage
+* [ ] Routes: check for referrer headers — if the previous search was not from the homepage, perform AI search
+* [ ] If global search returns more than *n* results (*n = number*), perform an AI search
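The gating described in the tasks above could be sketched as follows (hypothetical names throughout; `threshold` stands in for the unspecified *n*, and a Referer header alone is spoofable, so real bot filtering would need more signals):

```python
def should_run_ai_search(logged_in, referrer, result_count,
                         homepage="https://genenetwork.org/",
                         threshold=50):
    """Decide whether to run the (token-costing) AI search for a request."""
    if logged_in:
        # Logged-in users always get AI search ("zero penalty").
        return True
    if referrer and not referrer.startswith(homepage):
        # The search did not come from the homepage form.
        return True
    # Fall back to AI search when global search returns too many hits;
    # threshold is a placeholder for the unspecified n above.
    return result_count > threshold
```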
diff --git a/issues/production-container-mechanical-rob-failure.gmi b/issues/production-container-mechanical-rob-failure.gmi
new file mode 100644
index 0000000..ae6bae8
--- /dev/null
+++ b/issues/production-container-mechanical-rob-failure.gmi
@@ -0,0 +1,224 @@
+# Production Container: `mechanical-rob` Failure
+
+## Tags
+
+* status: closed, completed, fixed
+* priority: high
+* type: bug
+* assigned: fredm
+* keywords: genenetwork, production, mechanical-rob
+
+## Description
+
+After deploying the latest code to https://gn2-fred.genenetwork.org on 2025-02-19 (UTC-0600), with the following commits:
+
+* genenetwork2: 2a3df8cfba6b29dddbe40910c69283a1afbc8e51
+* genenetwork3: 99fd5070a84f37f91993f329f9cc8dd82a4b9339
+* gn-auth: 073395ff331042a5c686a46fa124f9cc6e10dd2f
+* gn-libs: 72a95f8ffa5401649f70978e863dd3f21900a611
+
+I had the (not so) bright idea to run the `mechanical-rob` tests against it before pushing it to production proper. Here is where I ran into problems: some of the `mechanical-rob` tests failed, specifically the correlation tests.
+
+Meanwhile, a run of the same tests against https://cd.genenetwork.org with the same commits was successful:
+
+=> https://ci.genenetwork.org/jobs/genenetwork2-mechanical-rob/1531 See this.
+
+This points to a possible problem with the setup of the production container that leads to failures where none should occur. This needs investigation and fixing.
+
+### Update 2025-02-20
+
+The MariaDB server is crashing. To reproduce:
+
+* Go to https://gn2-fred.genenetwork.org/show_trait?trait_id=1435464_at&dataset=HC_M2_0606_P
+* Click on "Calculate Correlations" to expand
+* Click "Compute"
+
+Observe that after a little while, the system fails with the following errors:
+
+* `MySQLdb.OperationalError: (2013, 'Lost connection to MySQL server during query')`
+* `MySQLdb.OperationalError: (2006, 'MySQL server has gone away')`
+
+I attempted to update the MariaDB configuration, setting `max_allowed_packet` to 16M and then 64M, but that did not resolve the problem.
+
+The log files indicate the following:
+
+```
+2025-02-20 7:46:07 0 [Note] Recovering after a crash using /var/lib/mysql/gn0-binary-log
+2025-02-20 7:46:07 0 [Note] Starting crash recovery...
+2025-02-20 7:46:07 0 [Note] Crash recovery finished.
+2025-02-20 7:46:07 0 [Note] Server socket created on IP: '0.0.0.0'.
+2025-02-20 7:46:07 0 [Warning] 'user' entry 'webqtlout@tux01' ignored in --skip-name-resolve mode.
+2025-02-20 7:46:07 0 [Warning] 'db' entry 'db_webqtl webqtlout@tux01' ignored in --skip-name-resolve mode.
+2025-02-20 7:46:07 0 [Note] Reading of all Master_info entries succeeded
+2025-02-20 7:46:07 0 [Note] Added new Master_info '' to hash table
+2025-02-20 7:46:07 0 [Note] /usr/sbin/mariadbd: ready for connections.
+Version: '10.5.23-MariaDB-0+deb11u1-log' socket: '/run/mysqld/mysqld.sock' port: 3306 Debian 11
+2025-02-20 7:46:07 4 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
+2025-02-20 7:46:07 5 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
+2025-02-20 7:46:07 0 [Note] InnoDB: Buffer pool(s) load completed at 250220 7:46:07
+250220 7:50:12 [ERROR] mysqld got signal 11 ;
+Sorry, we probably made a mistake, and this is a bug.
+
+Your assistance in bug reporting will enable us to fix this for the next release.
+To report this bug, see https://mariadb.com/kb/en/reporting-bugs
+
+We will try our best to scrape up some info that will hopefully help
+diagnose the problem, but since we have already crashed,
+something is definitely wrong and this may fail.
+
+Server version: 10.5.23-MariaDB-0+deb11u1-log source revision: 6cfd2ba397b0ca689d8ff1bdb9fc4a4dc516a5eb
+key_buffer_size=10485760
+read_buffer_size=131072
+max_used_connections=1
+max_threads=2050
+thread_count=1
+It is possible that mysqld could use up to
+key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 4523497 K bytes of memory
+Hope that's ok; if not, decrease some variables in the equation.
+
+Thread pointer: 0x7f599c000c58
+Attempting backtrace. You can use the following information to find out
+where mysqld died. If you see no messages after this, something went
+terribly wrong...
+stack_bottom = 0x7f6150282d78 thread_stack 0x49000
+/usr/sbin/mariadbd(my_print_stacktrace+0x2e)[0x55f43330c14e]
+/usr/sbin/mariadbd(handle_fatal_signal+0x475)[0x55f432e013b5]
+sigaction.c:0(__restore_rt)[0x7f615a1cb140]
+/usr/sbin/mariadbd(+0xcbffbe)[0x55f43314efbe]
+/usr/sbin/mariadbd(+0xd730ec)[0x55f4332020ec]
+/usr/sbin/mariadbd(+0xd1b36b)[0x55f4331aa36b]
+/usr/sbin/mariadbd(+0xd1cd8e)[0x55f4331abd8e]
+/usr/sbin/mariadbd(+0xc596f3)[0x55f4330e86f3]
+/usr/sbin/mariadbd(_ZN7handler18ha_index_next_sameEPhPKhj+0x2a5)[0x55f432e092b5]
+/usr/sbin/mariadbd(+0x7b54d1)[0x55f432c444d1]
+/usr/sbin/mariadbd(_Z10sub_selectP4JOINP13st_join_tableb+0x1f8)[0x55f432c37da8]
+/usr/sbin/mariadbd(_ZN10JOIN_CACHE24generate_full_extensionsEPh+0x134)[0x55f432d24224]
+/usr/sbin/mariadbd(_ZN10JOIN_CACHE21join_matching_recordsEb+0x206)[0x55f432d245d6]
+/usr/sbin/mariadbd(_ZN10JOIN_CACHE12join_recordsEb+0x1cf)[0x55f432d23eff]
+/usr/sbin/mariadbd(_Z16sub_select_cacheP4JOINP13st_join_tableb+0x8a)[0x55f432c382fa]
+/usr/sbin/mariadbd(_ZN4JOIN10exec_innerEv+0xd16)[0x55f432c63826]
+/usr/sbin/mariadbd(_ZN4JOIN4execEv+0x35)[0x55f432c63cc5]
+/usr/sbin/mariadbd(_Z12mysql_selectP3THDP10TABLE_LISTR4ListI4ItemEPS4_jP8st_orderS9_S7_S9_yP13select_resultP18st_select_lex_unitP13st_select_lex+0x106)[0x55f432c61c26]
+/usr/sbin/mariadbd(_Z13handle_selectP3THDP3LEXP13select_resultm+0x138)[0x55f432c62698]
+/usr/sbin/mariadbd(+0x762121)[0x55f432bf1121]
+/usr/sbin/mariadbd(_Z21mysql_execute_commandP3THD+0x3d6c)[0x55f432bfdd1c]
+/usr/sbin/mariadbd(_Z11mysql_parseP3THDPcjP12Parser_statebb+0x20b)[0x55f432bff17b]
+/usr/sbin/mariadbd(_Z16dispatch_command19enum_server_commandP3THDPcjbb+0xdb5)[0x55f432c00f55]
+/usr/sbin/mariadbd(_Z10do_commandP3THD+0x120)[0x55f432c02da0]
+/usr/sbin/mariadbd(_Z24do_handle_one_connectionP7CONNECTb+0x2f2)[0x55f432cf8b32]
+/usr/sbin/mariadbd(handle_one_connection+0x5d)[0x55f432cf8dad]
+/usr/sbin/mariadbd(+0xbb4ceb)[0x55f433043ceb]
+nptl/pthread_create.c:478(start_thread)[0x7f615a1bfea7]
+x86_64/clone.S:97(__GI___clone)[0x7f6159dc6acf]
+
+Trying to get some variables.
+Some pointers may be invalid and cause the dump to abort.
+Query (0x7f599c012c50): SELECT ProbeSet.Name,ProbeSet.Chr,ProbeSet.Mb,
+ ProbeSet.Symbol,ProbeSetXRef.mean,
+ CONCAT_WS('; ', ProbeSet.description, ProbeSet.Probe_Target_Description) AS description,
+ ProbeSetXRef.additive,ProbeSetXRef.LRS,Geno.Chr, Geno.Mb
+ FROM ProbeSet INNER JOIN ProbeSetXRef
+ ON ProbeSet.Id=ProbeSetXRef.ProbeSetId
+ INNER JOIN Geno
+ ON ProbeSetXRef.Locus = Geno.Name
+ INNER JOIN Species
+ ON Geno.SpeciesId = Species.Id
+ WHERE ProbeSet.Name in ('1447591_x_at', '1422809_at', '1428917_at', '1438096_a_at', '1416474_at', '1453271_at', '1441725_at', '1452952_at', '1456774_at', '1438413_at', '1431110_at', '1453723_x_at', '1424124_at', '1448706_at', '1448762_at', '1428332_at', '1438389_x_at', '1455508_at', '1455805_x_at', '1433276_at', '1454989_at', '1427467_a_at', '1447448_s_at', '1438695_at', '1456795_at', '1454874_at', '1455189_at', '1448631_a_at', '1422697_s_at', '1423717_at', '1439484_at', '1419123_a_at', '1435286_at', '1439886_at', '1436348_at', '1437475_at', '1447667_x_at', '1421046_a_at', '1448296_x_at', '1460577_at', 'AFFX-GapdhMur/M32599_M_at', '1424393_s_at', '1426190_at', '1434749_at', '1455706_at', '1448584_at', '1434093_at', '1434461_at', '1419401_at', '1433957_at', '1419453_at', '1416500_at', '1439436_x_at', '1451413_at', '1455696_a_at', '1457190_at', '1455521_at', '1434842_s_at', '1442525_at', '1452331_s_at', '1428862_at', '1436463_at', '1438535_at', 'AFFX-GapdhMur/M32599_3_at', '1424012_at', '1440027_at', '1435846_x_at', '1443282_at', '1435567_at', '1450112_a_at', '1428251_at', '1429063_s_at', '1433781_a_at', '1436698_x_at', '1436175_at', '1435668_at', '1424683_at', '1442743_at', '1416944_a_at', '1437511_x_at', '1451254_at', '1423083_at', '1440158_x_at', '1424324_at', '1426382_at', '1420142_s_at', '1434553_at', '1428772_at', '1424094_at', '1435900_at', '1455322_at', '1453283_at', '1428551_at', '1453078_at', '1444602_at', '1443836_x_at', '1435590_at', '1434283_at', '1435240_at', '1434659_at', '1427032_at', '1455278_at', '1448104_at', '1421247_at', 'AFFX-MURINE_b1_at', '1460216_at', '1433969_at', '1419171_at', '1456699_s_at', '1456901_at', '1442139_at', '1421849_at', '1419824_a_at', '1460588_at', '1420131_s_at', '1446138_at', '1435829_at', '1434462_at', '1435059_at', '1415949_at', '1460624_at', '1426707_at', '1417250_at', '1434956_at', '1438018_at', '1454846_at', '1435298_at', '1442077_at', '1424074_at', '1428883_at', '1454149_a_at', '1423925_at', '1457060_at', 
'1433821_at', '1447923_at', '1460670_at', '1434468_at', '1454980_at', '1426913_at', '1456741_s_at', '1449278_at', '1443534_at', '1417941_at', '1433167_at', '1434401_at', '1456516_x_at', '1451360_at', 'AFFX-GapdhMur/M32599_5_at', '1417827_at', '1434161_at', '1448979_at', '1435797_at', '1419807_at', '1418330_at', '1426304_x_at', '1425492_at', '1437873_at', '1435734_x_at', '1420622_a_at', '1456019_at', '1449200_at', '1455314_at', '1428419_at', '1426349_s_at', '1426743_at', '1436073_at', '1452306_at', '1436735_at', '1439529_at', '1459347_at', '1429642_at', '1438930_s_at', '1437380_x_at', '1459861_s_at', '1424243_at', '1430503_at', '1434474_at', '1417962_s_at', '1440187_at', '1446809_at', '1436234_at', '1415906_at', 'AFFX-MURINE_B2_at', '1434836_at', '1426002_a_at', '1448111_at', '1452882_at', '1436597_at', '1455915_at', '1421846_at', '1428693_at', '1422624_at', '1423755_at', '1460367_at', '1433746_at', '1454872_at', '1429194_at', '1424652_at', '1440795_x_at', '1458690_at', '1434355_at', '1456324_at', '1457867_at', '1429698_at', '1423104_at', '1437585_x_at', '1437739_a_at', '1445605_s_at', '1436313_at', '1449738_s_at', '1437525_a_at', '1454937_at', '1429043_at', '1440091_at', '1422820_at', '1437456_x_at', '1427322_at', '1446649_at', '1433568_at', '1441114_at', '1456541_x_at', '1426985_s_at', '1454764_s_at', '1424071_s_at', '1429251_at', '1429155_at', '1433946_at', '1448771_a_at', '1458664_at', '1438320_s_at', '1449616_s_at', '1435445_at', '1433872_at', '1429273_at', '1420880_a_at', '1448645_at', '1449646_s_at', '1428341_at', '1431299_a_at', '1433427_at', '1418530_at', '1436247_at', '1454350_at', '1455860_at', '1417145_at', '1454952_s_at', '1435977_at', '1434807_s_at', '1428715_at', '1418117_at', '1447947_at', '1431781_at', '1428915_at', '1427197_at', '1427208_at', '1455460_at', '1423899_at', '1441944_s_at', '1455429_at', '1452266_at', '1454409_at', '1426384_a_at', '1428725_at', '1419181_at', '1454862_at', '1452907_at', '1433794_at', '1435492_at', '1424839_a_at', 
'1416214_at', '1449312_at', '1436678_at', '1426253_at', '1438859_x_at', '1448189_a_at', '1442557_at', '1446174_at', '1459718_x_at', '1437613_s_at', '1456509_at', '1455267_at', '1440480_at', '1417296_at', '1460050_x_at', '1433585_at', '1436771_x_at', '1424294_at', '1448648_at', '1417753_at', '1436139_at', '1425642_at', '1418553_at', '1415747_s_at', '1445984_at', '1440024_at', '1448720_at', '1429459_at', '1451459_at', '1428853_at', '1433856_at', '1426248_at', '1417765_a_at', '1439459_x_at', '1447023_at', '1426088_at', '1440825_s_at', '1417390_at', '1444744_at', '1435618_at', '1424635_at', '1443727_x_at', '1421096_at', '1427410_at', '1416860_s_at', '1442773_at', '1442030_at', '1452281_at', '1434774_at', '1416891_at', '1447915_x_at', '1429129_at', '1418850_at', '1416308_at', '1422858_at', '1447679_s_at', '1440903_at', '1417321_at', '1452342_at', '1453510_s_at', '1454923_at', '1454611_a_at', '1457532_at', '1438440_at', '1434232_a_at', '1455878_at', '1455571_x_at', '1436401_at', '1453289_at', '1457365_at', '1436708_x_at', '1434494_at', '1419588_at', '1433679_at', '1455159_at', '1428982_at', '1446510_at', '1434131_at', '1418066_at', '1435346_at', '1449415_at', '1455384_x_at', '1418817_at', '1442073_at', '1457265_at', '1447361_at', '1418039_at', '1428467_at', '1452224_at', '1417538_at', '1434529_x_at', '1442149_at', '1437379_x_at', '1416473_a_at', '1432750_at', '1428389_s_at', '1433823_at', '1451889_at', '1438178_x_at', '1441807_s_at', '1416799_at', '1420623_x_at', '1453245_at', '1434037_s_at', '1443012_at', '1443172_at', '1455321_at', '1438396_at', '1440823_x_at', '1436278_at', '1457543_at', '1452908_at', '1417483_at', '1418397_at', '1446589_at', '1450966_at', '1447877_x_at', '1446524_at', '1438592_at', '1455589_at', '1428629_at', '1429585_s_at', '1440020_at', '1417365_a_at', '1426442_at', '1427151_at', '1437377_a_at', '1433995_s_at', '1435464_at', '1417007_a_at', '1429690_at', '1427999_at', '1426819_at', '1454905_at', '1439516_at', '1434509_at', '1428707_at', 
'1416793_at', '1440822_x_at', '1437327_x_at', '1428682_at', '1435004_at', '1434238_at', '1417581_at', '1434699_at', '1455597_at', '1458613_at', '1456485_at', '1435122_x_at', '1452864_at', '1453122_at', '1435254_at', '1451221_at', '1460168_at', '1455336_at', '1427965_at', '1432576_at', '1455425_at', '1428762_at', '1455459_at', '1419317_x_at', '1434691_at', '1437950_at', '1426401_at', '1457261_at', '1433824_x_at', '1435235_at', '1437343_x_at', '1439964_at', '1444280_at', '1455434_a_at', '1424431_at', '1421519_a_at', '1428412_at', '1434010_at', '1419976_s_at', '1418887_a_at', '1428498_at', '1446883_at', '1435675_at', '1422599_s_at', '1457410_at', '1444437_at', '1421050_at', '1437885_at', '1459754_x_at', '1423807_a_at', '1435490_at', '1426760_at', '1449459_s_at', '1432098_a_at', '1437067_at', '1435574_at', '1433999_at', '1431289_at', '1428919_at', '1425678_a_at', '1434924_at', '1421640_a_at', '1440191_s_at', '1460082_at', '1449913_at', '1439830_at', '1425020_at', '1443790_x_at', '1436931_at', '1454214_a_at', '1455854_a_at', '1437061_at', '1436125_at', '1426385_x_at', '1431893_a_at', '1417140_a_at', '1435333_at', '1427907_at', '1434446_at', '1417594_at', '1426518_at', '1437345_a_at', '1420091_s_at', '1450058_at', '1435161_at', '1430348_at', '1455778_at', '1422653_at', '1447942_x_at', '1434843_at', '1454956_at', '1454998_at', '1427384_at', '1439828_at') AND
+ Species.Name = 'mouse' AND
+ ProbeSetXRef.ProbeSetFreezeId IN (
+ SELECT ProbeSetFreeze.Id
+ FROM ProbeSetFreeze WHERE ProbeSetFreeze.Name = 'HC_M2_0606_P')
+
+Connection ID (thread ID): 41
+Status: NOT_KILLED
+
+Optimizer switch: index_merge=on,index_merge_union=on,index_merge_sort_union=on,index_merge_intersection=on,index_merge_sort_intersection=off,engine_condition_pushdown=off,index_condition_pushdown=on,derived_merge=on,derived_with_keys=on,firstmatch=on,loosescan=on,materialization=on,in_to_exists=on,semijoin=on,partial_match_rowid_merge=on,partial_match_table_scan=on,subquery_cache=on,mrr=off,mrr_cost_based=off,mrr_sort_keys=off,outer_join_with_cache=on,semijoin_with_cache=on,join_cache_incremental=on,join_cache_hashed=on,join_cache_bka=on,optimize_join_buffer_size=on,table_elimination=on,extended_keys=on,exists_to_in=on,orderby_uses_equalities=on,condition_pushdown_for_derived=on,split_materialized=on,condition_pushdown_for_subquery=on,rowid_filter=on,condition_pushdown_from_having=on,not_null_range_scan=off
+
+The manual page at https://mariadb.com/kb/en/how-to-produce-a-full-stack-trace-for-mariadbd/ contains
+information that should help you find out what is causing the crash.
+Writing a core file...
+Working directory at /export/mysql/var/lib/mysql
+Resource Limits:
+Limit Soft Limit Hard Limit Units
+Max cpu time unlimited unlimited seconds
+Max file size unlimited unlimited bytes
+Max data size unlimited unlimited bytes
+Max stack size 8388608 unlimited bytes
+Max core file size 0 unlimited bytes
+Max resident set unlimited unlimited bytes
+Max processes 3094157 3094157 processes
+Max open files 64000 64000 files
+Max locked memory 65536 65536 bytes
+Max address space unlimited unlimited bytes
+Max file locks unlimited unlimited locks
+Max pending signals 3094157 3094157 signals
+Max msgqueue size 819200 819200 bytes
+Max nice priority 0 0
+Max realtime priority 0 0
+Max realtime timeout unlimited unlimited us
+Core pattern: core
+
+Kernel version: Linux version 5.10.0-22-amd64 (debian-kernel@lists.debian.org) (gcc-10 (Debian 10.2.1-6) 10.2.1 20210110, GNU ld (GNU Binutils for Debian) 2.35.2) #1 SMP Debian 5.10.178-3 (2023-04-22)
+
+2025-02-20 7:50:17 0 [Note] Starting MariaDB 10.5.23-MariaDB-0+deb11u1-log source revision 6cfd2ba397b0ca689d8ff1bdb9fc4a4dc516a5eb as process 3086167
+2025-02-20 7:50:17 0 [Note] InnoDB: !!! innodb_force_recovery is set to 1 !!!
+2025-02-20 7:50:17 0 [Note] InnoDB: Uses event mutexes
+2025-02-20 7:50:17 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
+2025-02-20 7:50:17 0 [Note] InnoDB: Number of pools: 1
+2025-02-20 7:50:17 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
+2025-02-20 7:50:17 0 [Note] InnoDB: Using Linux native AIO
+2025-02-20 7:50:17 0 [Note] InnoDB: Initializing buffer pool, total size = 17179869184, chunk size = 134217728
+2025-02-20 7:50:17 0 [Note] InnoDB: Completed initialization of buffer pool
+2025-02-20 7:50:17 0 [Note] InnoDB: Starting crash recovery from checkpoint LSN=1537379110991,1537379110991
+2025-02-20 7:50:17 0 [Note] InnoDB: Last binlog file '/var/lib/mysql/gn0-binary-log.000134', position 82843148
+2025-02-20 7:50:17 0 [Note] InnoDB: 128 rollback segments are active.
+2025-02-20 7:50:17 0 [Note] InnoDB: Removed temporary tablespace data file: "ibtmp1"
+2025-02-20 7:50:17 0 [Note] InnoDB: Creating shared tablespace for temporary tables
+2025-02-20 7:50:17 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
+2025-02-20 7:50:17 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
+2025-02-20 7:50:17 0 [Note] InnoDB: 10.5.23 started; log sequence number 1537379111003; transaction id 3459549902
+2025-02-20 7:50:17 0 [Note] Plugin 'FEEDBACK' is disabled.
+2025-02-20 7:50:17 0 [Note] InnoDB: Loading buffer pool(s) from /export/mysql/var/lib/mysql/ib_buffer_pool
+2025-02-20 7:50:17 0 [Note] Loaded 'locales.so' with offset 0x7f9551bc0000
+2025-02-20 7:50:17 0 [Note] Recovering after a crash using /var/lib/mysql/gn0-binary-log
+2025-02-20 7:50:17 0 [Note] Starting crash recovery...
+2025-02-20 7:50:17 0 [Note] Crash recovery finished.
+2025-02-20 7:50:17 0 [Note] Server socket created on IP: '0.0.0.0'.
+2025-02-20 7:50:17 0 [Warning] 'user' entry 'webqtlout@tux01' ignored in --skip-name-resolve mode.
+2025-02-20 7:50:17 0 [Warning] 'db' entry 'db_webqtl webqtlout@tux01' ignored in --skip-name-resolve mode.
+2025-02-20 7:50:17 0 [Note] Reading of all Master_info entries succeeded
+2025-02-20 7:50:17 0 [Note] Added new Master_info '' to hash table
+2025-02-20 7:50:17 0 [Note] /usr/sbin/mariadbd: ready for connections.
+Version: '10.5.23-MariaDB-0+deb11u1-log' socket: '/run/mysqld/mysqld.sock' port: 3306 Debian 11
+2025-02-20 7:50:17 4 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
+2025-02-20 7:50:17 5 [Warning] Access denied for user 'root'@'localhost' (using password: NO)
+2025-02-20 7:50:17 0 [Note] InnoDB: Buffer pool(s) load completed at 250220 7:50:17
+```
+
+A possible issue is the use of the environment variable SQL_URI at this point:
+
+=> https://github.com/genenetwork/genenetwork2/blob/testing/gn2/wqflask/correlation/rust_correlation.py#L34
+
+which is requested
+
+=> https://github.com/genenetwork/genenetwork2/blob/testing/gn2/wqflask/correlation/rust_correlation.py#L7 from here.
+
+I tried setting an environment variable "SQL_URI" with the same value as the config and rebuilt the container. That did not fix the problem.
+
+Running the query directly in the default mysql client also fails with:
+
+```
+ERROR 2013 (HY000): Lost connection to MySQL server during query
+```
+
+Huh, so this was not a code problem.
+
+Configured database to allow upgrade of tables if necessary and restarted mariadbd.
+
+The problem still persists.
+
+Note Pjotr: this is likely a mariadb bug in 10.5.23, the most recent mariadbd we use (both tux01 and tux02 run older versions). The dump shows it balks on creating a new thread: pthread_create.c:478. Looks similar to https://jira.mariadb.org/browse/MDEV-32262
+
+10.5, 10.6 and 10.11 are affected. So running correlations on production crashes mysqld? I am not trying for obvious reasons ;) the threading issues of mariadb look scary - I wonder how deep they go.
+
+We'll test a different version of mariadb, combined with a Debian update, because Debian on tux04 is broken.
diff --git a/issues/systems/apps.gmi b/issues/systems/apps.gmi
index 51c9d24..b9d4155 100644
--- a/issues/systems/apps.gmi
+++ b/issues/systems/apps.gmi
@@ -153,7 +153,7 @@ downloading from http://cran.r-project.org/src/contrib/Archive/KernSmooth/KernSm
- 'configure' phasesha256 hash mismatch for /gnu/store/n05zjfhxl0iqx1jbw8i6vv1174zkj7ja-KernSmooth_2.23-17.tar.gz:
expected hash: 11g6b0q67vasxag6v9m4px33qqxpmnx47c73yv1dninv2pz76g9b
actual hash: 1ciaycyp79l5aj78gpmwsyx164zi5jc60mh84vxxzq4j7vlcdb5p
-hash mismatch for store item '/gnu/store/n05zjfhxl0iqx1jbw8i6vv1174zkj7ja-KernSmooth_2.23-17.tar.gz'
+ hash mismatch for store item '/gnu/store/n05zjfhxl0iqx1jbw8i6vv1174zkj7ja-KernSmooth_2.23-17.tar.gz'
```
Guix checks the hash, and it is not great that CRAN allows changing tarballs without bumping the version number!! Luckily building with a more recent version of Guix just worked (TM). Now we create a root too:
@@ -184,12 +184,24 @@ and it looks like lines like these need to be updated:
=> https://github.com/genenetwork/singleCellRshiny/blob/6b2a344dd0d02f65228ad8c350bac0ced5850d05/app.R#L167
-Let me ask the author Siamak Yousefi.
+Let me ask the author Siamak Yousefi. I think we'll drop it.
+
+## longevity
+
+Package definition is at
+
+=> https://git.genenetwork.org/guix-bioinformatics/tree/gn/packages/mouse-longevity.scm
+
+Container is at
+
+=> https://git.genenetwork.org/guix-bioinformatics/tree/gn/services/bxd-power-container.scm
## jumpshiny
+Jumpshiny is hosted on balg01. Scripts are in tux02 git.
+
```
-balg01:~/gn-machines$ guix system container --network -L . -L ../guix-bioinformatics/ -L ../guix-past/modules/ --substitute-urls='https:
-//ci.guix.gnu.org https://bordeaux.guix.gnu.org https://cuirass.genenetwork.org' test-r-container.scm -L ../guix-forge/guix/
-/gnu/store/xyks73sf6pk78rvrwf45ik181v0zw8rx-run-container
+root@balg01:/home/j*/gn-machines# . /usr/local/guix-profiles/guix-pull/etc/profile
+guix system container --network -L . -L ../guix-forge/guix/ -L ../guix-bioinformatics/ -L ../guix-past/modules/ --substitute-urls='https://ci.guix.gnu.org https://bordeaux.guix.gnu.org https://cuirass.genenetwork.org' test-r-container.scm -L ../guix-forge/guix/
+/gnu/store/xyks73sf6pk78rvrwf45ik181v0zw8rx-run-container
+/gnu/store/6y65x5jk3lxy4yckssnl32yayjx9nwl5-run-container
```
diff --git a/issues/systems/octoraid-storage.gmi b/issues/systems/octoraid-storage.gmi
new file mode 100644
index 0000000..97e0e55
--- /dev/null
+++ b/issues/systems/octoraid-storage.gmi
@@ -0,0 +1,18 @@
+# OctoRAID
+
+We are building machines that can handle cheap drives.
+
+# octoraid01
+
+This is a Jetson with 4 22TB Seagate IronWolf Pro ST22000NT001 enterprise NAS hard drives (7200 rpm).
+
+Unfortunately the stock kernel has no RAID support, so we simply mount the 4 drives (hosted on a USB-SATA bridge).
+
+Stress testing:
+
+```
+cd /export/nfs/lair01
+stress -v -d 1
+```
+
+Running stress on multiple disks at once, the Jetson is holding up well!
diff --git a/issues/systems/penguin2-raid5.gmi b/issues/systems/penguin2-raid5.gmi
new file mode 100644
index 0000000..f03075d
--- /dev/null
+++ b/issues/systems/penguin2-raid5.gmi
@@ -0,0 +1,61 @@
+# Penguin2 RAID 5
+
+# Tags
+
+* assigned: @fredm, @pjotrp
+* status: in progress
+
+# Description
+
+The current RAID contains 3 disks:
+
+```
+root@penguin2:~# cat /proc/mdstat
+md0 : active raid5 sdb1[1] sda1[0] sdg1[4]
+/dev/md0 33T 27T 4.2T 87% /export
+```
+
+using /dev/sda,sdb,sdg
+
+The current root and swap is on
+
+```
+# root
+/dev/sdd1 393G 121G 252G 33% /
+# swap
+/dev/sdd5 partition 976M 76.5M -2
+```
+
+We can therefore add four new disks in slots /dev/sdc,sde,sdf,sdh
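
These four could be assembled into a second RAID5 array; a dry-run sketch (the partition names are assumptions - verify with lsblk first; the echo wrapper prints the commands instead of running them, since mdadm --create destroys data):

```shell
# Dry run: print the commands rather than execute them.
# /dev/sdc1 etc. are assumed partition names -- check with lsblk first!
run() { echo "$@"; }   # change to: run() { "$@"; } to execute for real

run mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sdc1 /dev/sde1 /dev/sdf1 /dev/sdh1
run mkfs.ext4 /dev/md1
run mdadm --detail --scan
```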
+
+penguin2 has no out-of-band management and no serial connector right now. That means any work needs to be done at the physical terminal.
+
+Boot loader menu:
+
+```
+menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-7ff268df-cb90-4cbc-9d76-7fd6677b4964' {
+ load_video
+ insmod gzio
+ if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
+ insmod part_msdos
+ insmod ext2
+ set root='hd2,msdos1'
+ if [ x$feature_platform_search_hint = xy ]; then
+ search --no-floppy --fs-uuid --set=root --hint-bios=hd2,msdos1 --hint-efi=hd2,msdos1 --hint-baremetal=ahci2,msdos1 7ff268df-cb90-4cbc-9d76-7fd6677b4964
+ else
+ search --no-floppy --fs-uuid --set=root 7ff268df-cb90-4cbc-9d76-7fd6677b4964
+ fi
+ echo 'Loading Linux 5.10.0-18-amd64 ...'
+ linux /boot/vmlinuz-5.10.0-18-amd64 root=UUID=7ff268df-cb90-4cbc-9d76-7fd6677b4964 ro quiet
+ echo 'Loading initial ramdisk ...'
+ initrd /boot/initrd.img-5.10.0-18-amd64
+}
+```
+
+Added grub to the sdd MBR:
+
+```
+root@penguin2:~# grub-install /dev/sdd
+Installing for i386-pc platform.
+Installation finished. No error reported.
+```
diff --git a/issues/systems/tux02-production.gmi b/issues/systems/tux02-production.gmi
index 7de911f..d811c5e 100644
--- a/issues/systems/tux02-production.gmi
+++ b/issues/systems/tux02-production.gmi
@@ -14,9 +14,9 @@ We are going to move production to tux02 - tux01 will be the staging machine. Th
* [X] update guix guix-1.3.0-9.f743f20
* [X] set up nginx (Debian)
-* [X] test ipmi console (172.23.30.40)
+* [X] test ipmi console
* [X] test ports (nginx)
-* [?] set up network for external tux02e.uthsc.edu (128.169.4.52)
+* [?] set up network for external tux02
* [X] set up deployment evironment
* [X] sheepdog copy database backup from tux01 on a daily basis using ibackup user
* [X] same for GN2 production environment
diff --git a/issues/systems/tux04-disk-issues.gmi b/issues/systems/tux04-disk-issues.gmi
index 9bba105..bc6e1db 100644
--- a/issues/systems/tux04-disk-issues.gmi
+++ b/issues/systems/tux04-disk-issues.gmi
@@ -101,3 +101,280 @@ and nothing ;). Megacli is actually the tool to use
```
megacli -AdpAllInfo -aAll
```
+
+# Database
+
+During a backup the DB shows this error:
+
+```
+2025-03-02 06:28:33 Database page corruption detected at page 1079428, retrying...
+[01] 2025-03-02 06:29:33 Database page corruption detected at page 1103108, retrying...
+```
+
+
+Interestingly the DB recovered on a second backup.
+
+The database is hosted on a solid-state drive, /dev/sde (Dell Ent NVMe FI). The log says
+
+```
+kernel: I/O error, dev sde, sector 2136655448 op 0x0:(READ) flags 0x80700 phys_seg 40 prio class 2
+```
+
+Suggests:
+
+=> https://stackoverflow.com/questions/50312219/blk-update-request-i-o-error-dev-sda-sector-xxxxxxxxxxx
+
+> The errors that you see are interface errors, they are not coming from the disk itself but rather from the connection to it. It can be the cable or any of the ports in the connection.
+> Since the CRC errors on the drive do not increase I can only assume that the problem is on the receive side of the machine you use. You should check the cable and try a different SATA port on the server.
+
+and someone wrote
+
+> analyzed that most of the reasons are caused by intensive reading and writing. This is a CDN cache node. Type reading NVME temperature is relatively high, if it continues, it will start to throttle and then slowly collapse.
+
+and temperature on that drive has been 70 C.
+
+The mariadb log is showing errors:
+
+```
+2025-03-02 6:54:47 0 [ERROR] InnoDB: Failed to read page 449925 from file './db_webqtl/SnpAll.ibd': Page read from tablespace is corrupted.
+2025-03-02 7:01:43 489015 [ERROR] Got error 180 when reading table './db_webqtl/ProbeSetXRef'
+2025-03-02 8:10:32 489143 [ERROR] Got error 180 when reading table './db_webqtl/ProbeSetXRef'
+```
+
+Let's try and dump those tables when the backup is done.
+
+```
+mariadb-dump -uwebqtlout db_webqtl SnpAll
+mariadb-dump: Error 1030: Got error 1877 "Unknown error 1877" from storage engine InnoDB when dumping table `SnpAll` at row: 0
+mariadb-dump -uwebqtlout db_webqtl ProbeSetXRef > ProbeSetXRef.sql
+```
+
+Eeep:
+
+```
+tux04:/etc$ mariadb-check -uwebqtlout -c db_webqtl ProbeSetXRef
+db_webqtl.ProbeSetXRef
+Warning : InnoDB: Index ProbeSetFreezeId is marked as corrupted
+Warning : InnoDB: Index ProbeSetId is marked as corrupted
+error : Corrupt
+tux04:/etc$ mariadb-check -uwebqtlout -c db_webqtl SnpAll
+db_webqtl.SnpAll
+Warning : InnoDB: Index PRIMARY is marked as corrupted
+Warning : InnoDB: Index SnpName is marked as corrupted
+Warning : InnoDB: Index Rs is marked as corrupted
+Warning : InnoDB: Index Position is marked as corrupted
+Warning : InnoDB: Index Source is marked as corrupted
+error : Corrupt
+```
+
+On tux01 we have a working database, we can test with
+
+```
+mysqldump --no-data --all-databases > table_schema.sql
+mysqldump -uwebqtlout db_webqtl SnpAll > SnpAll.sql
+```
+
+Running the backup with rate limiting; the journal shows:
+
+```
+Mar 02 17:09:59 tux04 sudo[548058]: pam_unix(sudo:session): session opened for user root(uid=0) by wrk(uid=1000)
+Mar 02 17:09:59 tux04 sudo[548058]: wrk : TTY=pts/3 ; PWD=/export3/local/home/wrk/iwrk/deploy/gn-deploy-servers/scripts/tux04 ; USER=roo>
+Mar 02 17:09:55 tux04 sudo[548058]: pam_unix(sudo:auth): authentication failure; logname=wrk uid=1000 euid=0 tty=/dev/pts/3 ruser=wrk rhost= >
+Mar 02 17:04:26 tux04 su[548006]: pam_unix(su:session): session opened for user ibackup(uid=1003) by wrk(uid=0)
+```
+
+Oh oh
+
+Tux04 is showing errors on all disks. We have to bail out. I am copying the potentially corrupted files to tux01 right now. We have backups, so nothing serious I hope. I am only worried about the myisam files we have because they have no strong internal validation:
+
+```
+2025-03-04 8:32:45 502 [ERROR] db_webqtl.ProbeSetData: Record-count is not ok; is 5264578601 Should be: 5264580806
+2025-03-04 8:32:45 502 [Warning] db_webqtl.ProbeSetData: Found 28665 deleted space. Should be 0
+2025-03-04 8:32:45 502 [Warning] db_webqtl.ProbeSetData: Found 2205 deleted blocks Should be: 0
+2025-03-04 8:32:45 502 [ERROR] Got an error from thread_id=502, ./storage/myisam/ha_myisam.cc:1120
+2025-03-04 8:32:45 502 [ERROR] MariaDB thread id 502, OS thread handle 139625162532544, query id 837999 localhost webqtlout Checking table
+CHECK TABLE ProbeSetData
+2025-03-04 8:34:02 79695 [ERROR] mariadbd: Table './db_webqtl/ProbeSetData' is marked as crashed and should be repaired
+```
+
+See also
+
+=> https://dev.mysql.com/doc/refman/8.4/en/myisam-check.html
+
+Tux04 will require open heart 'disk controller' surgery and some severe testing before we move back. We'll also look at tux05-8 to see if they have similar problems.
+
+## Recovery
+
+According to the logs tux04 started showing serious errors on March 2nd - when I introduced sanitizing the mariadb backup:
+
+```
+Mar 02 05:00:42 tux04 kernel: I/O error, dev sde, sector 2071078320 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 2
+Mar 02 05:00:58 tux04 kernel: I/O error, dev sde, sector 2083650928 op 0x0:(READ) flags 0x80700 phys_seg 59 prio class 2
+...
+```
+
+The log started on Feb 23 when we had our last reboot. It is probably a good idea to turn on persistent logging! Anyway, it is likely the files were fine until March 2nd. Similarly, the mariadb logs show
+
+```
+2025-03-02 6:53:52 489007 [ERROR] mariadbd: Index for table './db_webqtl/ProbeSetData.MYI' is corrupt; try to repair it
+2025-03-02 6:53:52 489007 [ERROR] db_webqtl.ProbeSetData: Can't read key from filepos: 2269659136
+```
+
+So, if we can restore a backup from March 1st we should be reasonably confident it is sane.
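
Turning on persistent logging, as mentioned above, is a small configuration change (a sketch; the setting lives in /etc/systemd/journald.conf and systemd-journald needs a restart afterwards):

```
# /etc/systemd/journald.conf
[Journal]
Storage=persistent
```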
+
+First, back up the existing database(!). Next restore the sane DB by changing the DB location (the symlink in /var/lib/mysql, and check /etc/mysql/mariadb.cnf).
+
+When upgrading it is a good idea to switch these on in mariadb.cnf:
+
+```
+# forcing recovery with these two lines:
+innodb_force_recovery=3
+innodb_purge_threads=0
+```
+
+Make sure to disable them again (and restart) once it is up and running!
+
+So the steps are:
+
+* [X] install updated guix version of mariadb in /usr/local/guix-profiles (don't use Debian!!)
+* [X] repair borg backup
+* [X] Stop old mariadb (on new host tux02)
+* [X] backup old mariadb database
+* [X] restore 'sane' version of DB from borg March 1st
+* [X] point to new DB in /var/lib/mysql and cnf file
+* [X] update systemd settings
+* [X] start mariadb new version with recovery setting in cnf
+* [X] check logs
+* [X] once running revert on recovery setting in cnf and restart
+
+OK, looks like we are in business again. In the next phase we need to validate files. Normal files can be checked with
+
+```
+find -type f \( -not -name "md5sum.txt" \) -exec md5sum '{}' \; > md5sum.txt
+```
+
+and compared with another set on a different server with
+
+```
+md5sum -c md5sum.txt
+```
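
A self-contained dry run of this round trip, in a throwaway directory (GNU coreutils/findutils assumed):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
echo hello > a.txt
mkdir -p sub && echo world > sub/b.txt
# generate the checksum list, excluding the list itself
find -type f \( -not -name "md5sum.txt" \) -exec md5sum '{}' \; > md5sum.txt
# on the second server you would copy md5sum.txt next to the files and run:
md5sum -c md5sum.txt
```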
+
+* [X] check genotype file directory - some MAGIC files missing on tux01
+
+gn-docs is a git repo, so that is easily checked
+
+* [X] check gn-docs and sync with master repo
+
+
+## Other servers
+
+```
+journalctl -r|grep -i "I/O error"|less
+# tux05
+Nov 18 02:19:55 tux05 kernel: XFS (sdc2): metadata I/O error in "xfs_da_read_buf+0xd9/0x130 [xfs]" at daddr 0x78 len 8 error 74
+Nov 05 14:36:32 tux05 kernel: blk_update_request: I/O error, dev sdb, sector 1993616 op 0x1:(WRITE) flags
+0x0 phys_seg 35 prio class 0
+Jul 27 11:56:22 tux05 kernel: blk_update_request: I/O error, dev sdc, sector 55676616 op 0x0:(READ) flags
+0x80700 phys_seg 26 prio class 0
+Jul 27 11:56:22 tux05 kernel: blk_update_request: I/O error, dev sdc, sector 55676616 op 0x0:(READ) flags
+0x80700 phys_seg 26 prio class 0
+# tux06
+Apr 15 08:10:57 tux06 kernel: I/O error, dev sda, sector 21740352 op 0x1:(WRITE) flags 0x1000 phys_seg 4 prio class 2
+Dec 13 12:56:14 tux06 kernel: I/O error, dev sdb, sector 3910157327 op 0x9:(WRITE_ZEROES) flags 0x8000000 phys_seg 0 prio class 2
+# tux07
+Mar 27 08:00:11 tux07 mfschunkserver[1927469]: replication error: failed to create chunk (No space left)
+# tux08
+Mar 27 08:12:11 tux08 mfschunkserver[464794]: replication error: failed to create chunk (No space left)
+```
+
+Tux04, 05 and 06 show disk errors. Tux07 and Tux08 are overloaded with a full disk, but no other errors. We need to babysit Lizard more!
+
+```
+stress -v -d 1
+```
+
+Write test:
+
+```
+dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct
+```
+
+Read test:
+
+```
+/sbin/sysctl -w vm.drop_caches=3
+dd if=./test of=/dev/null bs=512k count=2048
+```
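
The same pattern scaled down to a throwaway file, so it can be sanity-checked anywhere (no O_DIRECT; sizes are illustrative):

```shell
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=64k count=16 2>/dev/null   # write test: 1 MiB
dd if="$tmp" of=/dev/null bs=64k 2>/dev/null            # read test
size=$(stat -c %s "$tmp")                               # 16 x 64k = 1048576
rm -f "$tmp"
```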
+
+SMART data can be read through the RAID controller with `smartctl -a /dev/sdd -d megaraid,0`; the controller in SL 3 is a Dell PERC H755N Front.
+
+# The story continues
+
+I don't know what happened but the server gave a hard
+error in the logs:
+
+```
+racadm getsel # get system log
+Record: 340
+Date/Time: 05/31/2025 09:25:17
+Source: system
+Severity: Critical
+Description: A high-severity issue has occurred at the Power-On
+Self-Test (POST) phase which has resulted in the system BIOS to
+abruptly stop functioning.
+```
+
+Woops! I fixed it by resetting idrac and rebooting remotely. Nasty.
+
+Looking around I found this link
+
+=> https://tomaskalabis.com/wordpress/a-high-severity-issue-has-occurred-at-the-power-on-self-test-post-phase-which-has-resulted-in-the-system-bios-to-abruptly-stop-functioning/
+
+suggesting we should upgrade idrac firmware. I am not going to do that
+without backups and a fully up-to-date fallback online. It may fix the
+other hardware issues we have been seeing (who knows?).
+
+Fred, the boot sequence is not perfect yet. It turned out the network
+interfaces do not come up in the right order, and nginx failed because
+of a missing /var/run/nginx directory. The container would not restart
+because, with that directory missing, it could not check the certificates.
+
+## A week later
+
+```
+[SMM] APIC 0x00 S00:C00:T00 > ASSERT [AmdPlatformRasRsSmm] u:\EDK2\MdePkg\Library\BasePciSegmentLibPci\PciSegmentLib.c(766): ((Address) & (0xfffffffff0000000ULL | (3))) == 0 !!!! X64 Exception Type - 03(#BP - Breakpoint) CPU Apic ID - 00000000 !!!!
+RIP - 0000000076DA4343, CS - 0000000000000038, RFLAGS - 0000000000000002
+RAX - 0000000000000010, RCX - 00000000770D5B58, RDX - 00000000000002F8
+RBX - 0000000000000000, RSP - 0000000077773278, RBP - 0000000000000000
+RSI - 0000000000000087, RDI - 00000000777733E0 R8 - 00000000777731F8, R9 - 0000000000000000, R10 - 0000000000000000
+R11 - 00000000000000A0, R12 - 0000000000000000, R13 - 0000000000000000
+R14 - FFFFFFFFA0C1A118, R15 - 000000000005B000
+DS - 0000000000000020, ES - 0000000000000020, FS - 0000000000000020
+GS - 0000000000000020, SS - 0000000000000020
+CR0 - 0000000080010033, CR2 - 0000000015502000, CR3 - 0000000077749000
+CR4 - 0000000000001668, CR8 - 0000000000000001
+DR0 - 0000000000000000, DR1 - 0000000000000000, DR2 - 0000000000000000 DR3 - 0000000000000000, DR6 - 00000000FFFF0FF0, DR7 - 0000000000000400
+GDTR - 000000007773C000 000000000000004F, LDTR - 0000000000000000 IDTR - 0000000077761000 00000000000001FF, TR - 0000000000000040
+FXSAVE_STATE - 0000000077772ED0
+!!!! Find image based on IP(0x76DA4343) u:\Build_Genoa\DellBrazosPkg\DEBUG_MYTOOLS\X64\DellPkgs\DellChipsetPkgs\AmdGenoaModulePkg\Override\AmdCpmPkg\Features\PlatformRas\Rs\Smm\AmdPlatformRasRsSmm\DEBUG\AmdPlatformRasRsSmm.pdb (ImageBase=0000000076D3E000, EntryPoint=0000000076D3E6C0) !!!!
+```
+
+New error in system log:
+
+```
+Record: 341 Date/Time: 06/04/2025 19:47:08
+Source: system
+Severity: Critical Description: A high-severity issue has occurred at the Power-On Self-Test (POST) phase which has resulted in the system BIOS to abruptly stop functioning.
+```
+
+The error appears to relate to AMD 'Brazos', which is probably part of the on-board APU/GPU.
+
+The code where it segfaulted is online at:
+
+=> https://github.com/tianocore/edk2/blame/master/MdePkg/Library/BasePciSegmentLibPci/PciSegmentLib.c
+
+and has to do with PCI registers; that can actually be caused by the new PCIe card we added.
diff --git a/issues/systems/tux04-production.gmi b/issues/systems/tux04-production.gmi
new file mode 100644
index 0000000..58ff8c1
--- /dev/null
+++ b/issues/systems/tux04-production.gmi
@@ -0,0 +1,279 @@
+# Production on tux04
+
+Lately we have been running production on tux04. Unfortunately Debian got broken and I don't see a way to fix it (something with python versions that break apt!). Also mariadb is giving problems:
+
+=> issues/production-container-mechanical-rob-failure.gmi
+
+and that is alarming. We might as well try an upgrade. I created a new partition on /dev/sda4 using debootstrap.
+
+The hardware RAID has proven unreliable on this machine (and perhaps others).
+
+We added a drive on a PCIe raiser outside the RAID. Use this for bulk data copying. We still bootstrap from the RAID.
+
+Luckily not too much is running on this machine and if we mount things again, most should work.
+
+# Tasks
+
+* [X] cleanly shut down mariadb
+* [X] reboot into new partition /dev/sda4
+* [X] git in /etc
+* [X] make sure serial boot works (/etc/default/grub)
+* [X] fix groups and users
+* [X] get guix going
+* [X] get mariadb going
+* [X] fire up GN2 service
+* [X] fire up SPARQL service
+* [X] sheepdog
+* [ ] fix CRON jobs and backups
+* [ ] test full reboots
+
+
+# Boot in new partition
+
+```
+blkid /dev/sda4
+/dev/sda4: UUID="4aca24fe-3ece-485c-b04b-e2451e226bf7" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="2e3d569f-6024-46ea-8ef6-15b26725f811"
+```
+
+After debootstrap there are two things to take care of: the /dev directory and grub. For good measure
+I also capture some state
+
+```
+cd ~
+ps xau > cron.log
+systemctl > systemctl.txt
+cp /etc/network/interfaces .
+cp /boot/grub/grub.cfg .
+```
+
+we should still have access to the old root partition, so I don't need to capture everything.
+
+## /dev
+
+I ran MAKEDEV and that may not be needed with udev.
+
+## grub
+
+We need to tell grub to boot into the new partition. The old root is on
+UUID=8e874576-a167-4fa1-948f-2031e8c3809f /dev/sda2.
+
+Next I ran
+
+```
+tux04:~$ update-grub2 /dev/sda
+Generating grub configuration file ...
+Found linux image: /boot/vmlinuz-5.10.0-32-amd64
+Found initrd image: /boot/initrd.img-5.10.0-32-amd64
+Found linux image: /boot/vmlinuz-5.10.0-22-amd64
+Found initrd image: /boot/initrd.img-5.10.0-22-amd64
+Warning: os-prober will be executed to detect other bootable partitions.
+Its output will be used to detect bootable binaries on them and create new boot entries.
+Found Debian GNU/Linux 12 (bookworm) on /dev/sda4
+Found Windows Boot Manager on /dev/sdd1@/efi/Microsoft/Boot/bootmgfw.efi
+Found Debian GNU/Linux 11 (bullseye) on /dev/sdf2
+```
+
+Very good. A diff on grub.cfg shows it even picked up the serial configuration; the only change is the added menu entries for the new boot. Very nice.
+
+At this point I feel safe to boot as we should be able to get back into the old partition.
+
+# /etc/fstab
+
+The old fstab looked like
+
+```
+UUID=8e874576-a167-4fa1-948f-2031e8c3809f / ext4 errors=remount-ro 0 1
+# /boot/efi was on /dev/sdc1 during installation
+UUID=998E-68AF /boot/efi vfat umask=0077 0 1
+# swap was on /dev/sdc3 during installation
+UUID=cbfcd84e-73f8-4cec-98ee-40cad404735f none swap sw 0 0
+UUID="783e3bd6-5610-47be-be82-ac92fdd8c8b8" /export2 ext4 auto 0 2
+UUID="9e6a9d88-66e7-4a2e-a12c-f80705c16f4f" /export ext4 auto 0 2
+UUID="f006dd4a-2365-454d-a3a2-9a42518d6286" /export3 auto auto 0 2
+/export2/gnu /gnu none defaults,bind 0 0
+# /dev/sdd1: PARTLABEL="bulk" PARTUUID="b1a820fe-cb1f-425e-b984-914ee648097e"
+# /dev/sdb4 /export ext4 auto 0 2
+# /dev/sdd1 /export2 ext4 auto 0 2
+```
+
+# reboot
+
+Next we are going to reboot, and we need a serial connector to the Dell out-of-band using racadm:
+
+```
+ssh IP
+console com2
+racadm getsel
+racadm serveraction powercycle
+racadm serveraction powerstatus
+
+```
+
+The main trick is to hit ESC, wait 2 seconds, then hit 2 when you want the BIOS boot menu. Ctrl-\ escapes the console. Otherwise ESC (wait) ! gets you to the boot menu.
+
+# First boot
+
+It still boots by default into the old root. That gave an error:
+
+[FAILED] Failed to start File Syste…a-2365-454d-a3a2-9a42518d6286
+
+This is /export3. We can fix that later.
+
+When I booted into the proper partition the console clapped out. Also the racadm password did not work on tmux -- I had to switch to a standard console to log in again. Not sure why that is, but next I got:
+
+```
+Give root password for maintenance
+(or press Control-D to continue):
+```
+
+and giving the root password I was in maintenance mode on the correct partition!
+
+To rerun grub I had to add `GRUB_DISABLE_OS_PROBER=false`.
+
+Once booted up it is a matter of mounting partitions and ticking the check boxes above.
+
+The following contained errors:
+
+```
+/dev/sdd1 3.6T 1.8T 1.7T 52% /export2
+```
+
+# Guix
+
+Getting guix going is a bit tricky because we want to keep the store!
+
+```
+cp -vau /mnt/old-root/var/guix/ /var/
+cp -vau /mnt/old-root/usr/local/guix-profiles /usr/local/
+cp -vau /mnt/old-root/usr/local/bin/* /usr/local/bin/
+cp -vau /mnt/old-root/etc/systemd/system/guix-daemon.service* /etc/systemd/system/
+cp -vau /mnt/old-root/etc/systemd/system/gnu-store.mount* /etc/systemd/system/
+```
+
+Also had to add guixbuild users and group by hand.
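
The group and build users can be recreated with the usual loop from the Guix manual; sketched here as a dry run that only prints the commands (drop the echo and run as root to apply):

```shell
echo groupadd --system guixbuild
for i in $(seq -w 1 10); do
  echo useradd -g guixbuild -G guixbuild -d /var/empty -s /usr/sbin/nologin \
    -c "Guix build user $i" --system "guixbuilder$i"
done
```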
+
+# nginx
+
+We use the streaming facility. Check that
+
+```
+nginx -V
+```
+
+lists --with-stream=static, see
+
+=> https://serverfault.com/questions/858067/unknown-directive-stream-in-etc-nginx-nginx-conf86/858074#858074
+
+and load at the start of nginx.conf:
+
+```
+load_module /usr/lib/nginx/modules/ngx_stream_module.so;
+```
+
+and
+
+```
+nginx -t
+```
+
+passes
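
For reference, a stream proxy block looks roughly like this; it lives at the top level of nginx.conf, next to (not inside) the http block. The ports here are hypothetical:

```
stream {
    server {
        listen 3306;                  # port exposed to clients
        proxy_pass 127.0.0.1:13306;   # hypothetical upstream
    }
}
```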
+
+Now the container responds to the browser with `Internal Server Error`.
+
+# container web server
+
+Visit the container with something like
+
+```
+nsenter -at 2838 /run/current-system/profile/bin/bash --login
+```
+
+The nginx log in the container has many
+
+```
+2025/02/22 17:23:48 [error] 136#0: *166916 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: genenetwork.org, request: "GET /gn3/gene/aliases/st%2029:1;o;s HTTP/1.1", upstream: "http://127.0.0.1:9800/gene/aliases/st%2029:1;o;s", host: "genenetwork.org"
+```
+
+that is interesting. Acme/https is working because GN2 is working:
+
+```
+curl https://genenetwork.org/api3/version
+"1.0"
+```
+
+Looking at the logs, the first problem for GN2 appears to be redis.
+
+Fred builds the container with `/home/fredm/opt/guix-production/bin/guix`. Machines are defined in
+
+```
+fredm@tux04:/export3/local/home/fredm/gn-machines
+```
+
+The shared dir for redis is at
+
+```
+--share=/export2/guix-containers/genenetwork/var/lib/redis=/var/lib/redis
+```
+
+with
+
+```
+root@genenetwork-production /var# ls lib/redis/ -l
+-rw-r--r-- 1 redis redis 629328484 Feb 22 17:25 dump.rdb
+```
+
+In production.scm it is defined as
+
+```
+(service redis-service-type
+ (redis-configuration
+ (bind "127.0.0.1")
+ (port 6379)
+ (working-directory "/var/lib/redis")))
+```
+
+These values are the same as the defaults in guix's redis-service-type definition. Not sure why we are duplicating them.
+
+After starting redis by hand I get another error: `500 DatabaseError: The following exception was raised while attempting to access http://auth.genenetwork.org/auth/data/authorisation: database disk image is malformed`. The problem is that it created
+a DB in the wrong place. Meanwhile, the logs in the container say:
+
+```
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:C 23 Feb 2025 14:04:31.040 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:C 23 Feb 2025 14:04:31.040 # Redis version=7.0.12, bits=64, commit=00000000, modified=0, pid=3977, just started
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:C 23 Feb 2025 14:04:31.040 # Configuration loaded
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.041 * Increased maximum number of open files to 10032 (it was originally set to 1024).
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.041 * monotonic clock: POSIX clock_gettime
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.041 * Running mode=standalone, port=6379.
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.042 # Server initialized
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.042 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.042 # Wrong signature trying to load DB from file
+Feb 23 14:04:31 genenetwork-production shepherd[1]: [redis-server] 3977:M 23 Feb 2025 14:04:31.042 # Fatal error loading the DB: Invalid argument. Exiting.
+Feb 23 14:04:31 genenetwork-production shepherd[1]: Service redis (PID 3977) exited with 1.
+```
+
+This error is typically caused by a dump written by a newer version of redis. That is odd, because we are using the same version from the container?!
+
+Actually it turned out the redis DB was corrupted on the SSD! Same for some other databases (ugh).
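
A quick way to spot this kind of corruption: every RDB dump starts with the magic bytes `REDIS` plus a version, which is what the "Wrong signature" error checks. A sketch against a throwaway file (point it at the real /var/lib/redis/dump.rdb on the server); redis also ships `redis-check-rdb` for a deeper scan:

```shell
# Write a minimal well-formed RDB header to a temp file for illustration.
rdb=$(mktemp)
printf 'REDIS0011' > "$rdb"
# The first five bytes of a valid dump must read "REDIS".
if [ "$(head -c 5 "$rdb")" = 'REDIS' ]; then
  rdb_status='RDB signature OK'
else
  rdb_status='wrong signature: corrupt, or written by an incompatible redis'
fi
echo "$rdb_status"
rm -f "$rdb"
```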
+
+Fred copied all data to enterprise-level storage, and we rolled back to some older DBs, so hopefully we'll be OK for now.
+
+# Reinstating backups
+
+In the next step we need to restore backups as described in
+
+=> /topics/systems/backups-with-borg
+
+I already created an ibackup user. Next we test the backup script for mariadb.
+
+One important step is to check the database:
+
+```
+/usr/bin/mariadb-check -c -u user -p* db_webqtl
+```
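
mariadb-check prints one `db.table ... OK` line per healthy table, so a quick filter surfaces only the problem tables. A sketch over sample output (the crashed table is hypothetical; on the server, pipe the real `mariadb-check -c -A` output through the same filter):

```shell
# Sample mariadb-check output, inlined for illustration only.
check_output='db_webqtl.ProbeSetData                             OK
db_webqtl.ProbeSetSE
warning  : Table is marked as crashed'
# Keep only the lines that do not end in " OK".
bad=$(printf '%s\n' "$check_output" | grep -v ' OK$')
printf '%s\n' "$bad"
```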
+
+A successful mariadb backup consists of multiple steps:
+
+```
+2025-02-27 11:48:28 +0000 (ibackup@tux04) SUCCESS 0 <32m43s> mariabackup-dump
+2025-02-27 11:48:29 +0000 (ibackup@tux04) SUCCESS 0 <00m00s> mariabackup-make-consistent
+2025-02-27 12:16:37 +0000 (ibackup@tux04) SUCCESS 0 <28m08s> borg-tux04-sql-backup
+2025-02-27 12:16:46 +0000 (ibackup@tux04) SUCCESS 0 <00m07s> drop-rsync-balg01
+```
diff --git a/tasks/alexm.gmi b/tasks/alexm.gmi
index 88d3927..7ec8e87 100644
--- a/tasks/alexm.gmi
+++ b/tasks/alexm.gmi
@@ -1,4 +1,4 @@
-# Tasks for Fred
+# Tasks for Alex
## Description
@@ -16,11 +16,83 @@ You can refine the search by constraining the checks some more, e.g. to get high
# Tasks
-* [ ] Make GNQA reliable (with @fahamu)
-* [ ] Improve UX for GNQA (with @shelbys)
-* [ ] GNQA add abstracts pubmed (with @shelbys)
+## This week
+
+* [ ] Start application - Pwani
+* - [X] Got all transcripts
+* [+] Correlations - Fred is having issues - Rust updated on Guix
+* - also take a look at long running SQL statement and large LIMIT value (check prod!)
+* [ ] Friend of UTHSC - Pjotr needs to send forms
+* [+] Disable spinner on production (check prod!)
+* [+] Rqtl2 - BXD output work on CD
+* - [ ] should go to production w. fredm
+ Disable for Production
+* - [X] DO mice family file - children are heterozygous - family file contains parents->child
+* - [X] DO GN2 compatible by generating .geno files
+ Test on CD
+* [+] Minor refactorings - Rqtl2 is hacky
+* [ ] Work in development system container and document
+=> https://git.genenetwork.org/gn-machines/commit/?h=gn-local-development-container&id=589dcf32be90f5ec827cb6976d3cb5838d500ac0
+* [+] Create terminal output for external processes on *PRODUCTION* (Rqtl1, Rqtl2, GEMMA, pair-scan are done --- WGCNA as a pilot, with @bonfacem and @pjotrp)
+
+
+## (14/4/25)
+
+* [x] Debug DO results for genenetwork2
+ * [x] inspect results from gn3 and display mapping results
+ * [x] Debug db tunneling connection
+ * [x] Debug rendering huge datatables
+
+## (21/4/25)
+* [x] QTL computation for the DO dataset
+ * [x] Debug rendering large datasets using datatables
+ * [x] fix issue with qtl2 plot for DO dataset
+ * [x] Caching for qtl2 computations
+
+* [ ] Pwani Campus Application
+
+## 28/4/25
+
+* [x] Push changes to CD/Production
+* [x] Enable RQTL2 only for DO/bxd dataset
+* [ ] look at integrating QTL for HS dataset
+* [x] set up local container with bons
+
+## 5/05/25
+
+* [ ] Integrate hsrat dataset for rqtl2 mapping.
+* [ ] Pwani campus application.
+* [ ] Look at caching for genotype probabilities (rqtl2).
+* [ ] Add full logs on the mapping results page.
+* [x] Add test feature flag for rqtl2.
+
+## 2/06/2025
+
+* work on subset for hs dataset; define founder genotype files?
+* script to dump genotypes to db with bons
+* experiment with caching genotype probabilities as RDS objects
+* work on genenetwork LLMs: how to make search work without login
+
+* masters: submit documents
+
+## Next week(s)
+
+* [ ] Accelerate Xapian functionality - needs Aider key from Pjotr
+* Check and fix CTL?
+* [+] Create terminal output for external processes (Rqtl1, Rqtl2, pair-scan are done --- WGCNA as a pilot, with @bonfacem and @pjotrp)
+* [X] GNQA says there are no results, but has them
+* [X] Correlations are slow
+
+## Done
+
+* [X] Rqtl1 - ITP output - 3K individuals - family file
+* [X] When bonz is ready wire up GNQA
+* + balg-qa.genenetwork.org
+* [X] Don't support new PIL - stick to the old one in guix-bioinformatics
+* [X] Make GNQA reliable (with @fahamu)
+* [X] Improve UX for GNQA (with @shelbys) -- Adrian wants to use our AI UX for their setup
+* [X] GNQA add abstracts pubmed (with @shelbys)
=> ../issues/fetch-pubmed-references-to-gnqa
+* [X] Edit markdown/gemtext pages through web UI (with @bonfacem)
+
-* [ ] Edit markdown/gemtext pages through web UI (with @bonfacem)
-* [ ] GNQA add GN metadata with @bonfacem
-* [ ] Create terminal output for external processes (WGCNA as a pilot, with @bonfacem and @pjotrp)
diff --git a/tasks/bonfacem.gmi b/tasks/bonfacem.gmi
index 52f4027..03848f1 100644
--- a/tasks/bonfacem.gmi
+++ b/tasks/bonfacem.gmi
@@ -8,9 +8,62 @@
## Tasks
-* [X] Indexing generif data / Improve Local Search
-* [ ] Add hashes to RDF metadata
-* [-] Brain Data (To be spec'ed further)
+### Note
+- GN-auth dashboard fixes. Follow up with Fred.
+- Case-attributes used in co-variates.
+- Encourage FahamuAI to be open.
+
+### This week
+* [+] Case Attributes (Do a diagnostic and delegate)
+* - Git blame. Add tests.
+* - Error when checking the history.
+* - Reach out to Zach.
+* - Disable diff in the UI.
+* [ ] Distinct admin and dev user.
+* [ ] Adapter to LMDB into a cross object.
+* - Try computations with R/qtl2.
+* - Look at R LMDB libraries.
+* - Look at functions that read the files.
+* - PJ: LMDB adapter in R and cross-type files.
+* [ ] Send Arun an e-mail on how to go about upgrading shepherd.
+* [ ] Dump all genotypes from production to LMDB.
+* - PJ sync tux01 genotypes with tux02/04.
+* [+] Correlations hash.
+* - Add dataset count to RDF.
+* [ ] Spam + LLMs
+* - Rate limiting for RIF editing.
+* - Honeypot approach.
+* [+] Help Alex with SSL certification container error.
+* - Put the changes in the actual scm files.
+* [X] Python Fahamu.
+* [X] Memvid - brief look.
+
+### Later
+* [ ] Dockerise GN container. For Harm.
+* [ ] Send emails when jobs fail.
+* [ ] Look at updating gn-auth/gn-libs to PYTHONPATH for gn2/3.
+* [ ] Sample/individual/strain/genometype counts for PublishData only - ProbeSetData? https://github.com/genenetwork/genenetwork2/blob/testing/scripts/sample_count.py - mirror in RDF and use global search
+* - search for all traits that have more than X samples
+* [ ] Add case attributes to RDF and share with Felix (depends on @felixl)
+* [ ] xapian search, add dataset size keys, as well as GN accession id, trait id, and date/year
+* - Improve xapian markdown docs to show all used fields/keys with examples
+* - genewiki search (link in table? check with Rob)
+* - base line with GN1 search - add tests
+* - Fix missing search term for sh* - both menu search and global search
+* - Use GN1 as a benchmark for search results (mechanical Rob?)
+* - Xapian ranges for markers
+
+### Even later
+
+* [ ] Rest API for precompute output (mapping with GEMMA)
+* [ ] GNQA add GN metadata (to RAG)
+* - Focus on RIF
+* - triple -> plain text
+* - bob :fatherof nancy -> Bob is the father of Nancy.
+
+## Later
+
+* [ ] AI improvements
### On going tasks
@@ -34,3 +87,47 @@ Should something in one of these closed issues be amiss, we can always and shoul
Currently closed issues are:
=> https://issues.genenetwork.org/search?type=closed-issue&query=assigned%3ABonfaceKilz%20AND%20type%3Aissue%20AND%20is%3Aclosed Closed Issues
+
+* [X] Indexing generif data / Improve Local Search
+* [X] lmdb publishdata output and share with Pjotr and Johannes
+
+## Done
+
+* [X] Add lmdb output hashes with index and export LMDB_DATA_DIRECTORY
+* [X] Share small database with @pjotrp and @felixl
+* [X] With Alex get rqtl2 demo going in CD (for BXD)
+* [X] Set up meeting with ILRI
+* - Zasper https://news.ycombinator.com/item?id=42572057 - Alan
+* [X] Migrate fahamuai RAG to VPS and switch tokens to GGI OpenAI account
+* 1. Running AI server using (our) VPS and our tokens
+* + Pjotr gives API key - OpenAI - model?
+* 2. Read the code base - Elixir is plumbing incl. authentication, Python processing text etc.
+* 3. Try ingestion and prompt (REST API) - check out postgres tables
+* 4. Backup state from production Elixir
+* 5. Assess porting it to Guix (don't do any work) - minimum version Elixir
+* 6. Get docs from Shelby/Brian
+* [X] Set up grobid on balg01
+* - guix docker/native
+* - recent breaking changes
+* [X] GeneRIF
+* - Merge recent changes first. Ping Rob.
+* - Brainstorm ideas around log-in.
+* - Unlimited tokens that don't expire.
+* - Sync prod with CD -- sqlite.
+* - Add deletion
+* [X] Describe Generif/wikidata access for Rob in an email with test account on CD
+* 1. Send email to Rob
+* 2. Work on production w. Fred
+* [X] Distinguish CD from production -- banners/buttons/colors.
+* [X] Use aider - give a presentation in the coming weeks
+* [X] gn-auth fixes
+* [X] Assess Brian's repo for deployment.
+* [X] Finish container work
+* - View diffs in BXD: Edit case attributes throws an error.
+* [X] Check small db from: https://files.genenetwork.org/database/
+* [X] Changes to Production + (Alex)
+* [X] File issue with syslog
+* [X] LMDB database.
+* - Simplify (focus on small files). Don't over-rely on Numpy.
+* [X] Assess adding GeneRIF to LLM.
+* [X] Referrer headers -- a way of preventing bots beyond rate-limiting.
diff --git a/tasks/felixl.gmi b/tasks/felixl.gmi
index 209e8c9..347f387 100644
--- a/tasks/felixl.gmi
+++ b/tasks/felixl.gmi
@@ -1,4 +1,4 @@
-# Tasks for Munyoki
+# Tasks for Felix
## Tags
@@ -6,12 +6,134 @@
* assigned: felixl
* status: in progress
-## October
+## Tasks
+### Goals
+
+1. Write papers for PhD
+2. Load data into GN - serve the communities
+3. Get comfortable with programming
+
+#### Previous week(s)
+
+* [x] Restless Legs Syndrome (RLS) - 'Traditional PheWAS' - AI aspect - Johannes
+* [+] Finalize the slide deck - so it can be read on its own
+* [.] Review paper: one-liners for @pjotrp - why is this important for GN and/or thesis
+* - [ ] list of relevant papers with one-liners - the WHY
+=> https://pmc.ncbi.nlm.nih.gov/articles/PMC3294237/
+* [+] Analyse and discuss BXD case attributes with Rob --- both group level and dataset level
+* [ ] Sane representation of case attributes in RDF with @bonfacem
+* [X] Present C.elegans protocol and example mappings with GEMMA/Rqtl
+* [ ] Uploader - setting up code with @fredm
+* - [ ] Concrete improvement to work on
+* - [X] run small database mysql locally
+* - [X] aider with Sonnet + code fixes
+* - [ ] document - add to code base - merge with Fred's tree - share changes with Pjotr & team
+* [ ] Sort @alexm application with Pwani = this week
+
+### This week (07-04-2025 onwards)
+
+* GN2 tasks
+ * [X] Progress on Kilifish
+ - meet with Dennis (send him an email with all the queries needed)
+ - progress to format and upload data to gn2 (to be ready by latest Friday!)
+ * [X] Make a milestone with genotype smoothing
+
+* PhD tasks
+ * [X] Complete and share concept note and timeline to supervisors, have a meeting for progress
+ * [ ] Make a milestone on chapter one manuscript (deep dive into the selected papers){THE BIG PICTURE; a complete draft by early May}
+
+* Programming
+ * [ ] Make a milestone with the uploader (really push and learn!)
+ - documentation (use ai); add to the code base of the uploader
+ - utilise the hurdles to learn programming principles in action
+
+### This week (14-04-2025 onwards)
+
+* gn-uploader programming
+ * [X] - Resolve the config file issue with your local uploader
+ * [ ] - Run the uploader locally, then break the system, see how components connect to each other
+ * [ ] - document your findings
+
+* genotype smoothing
+ * [ ] - resolve errors with plotting, document your findings
+
+### This week (21-04-Onwards)
+
+* genotype smoothing
+ * [ ] - haplotyping tools for smoothing (plink, etc.)
+ - see what it can offer with smoothing. See what others say about this.
+
+* gn-uploader programming
+ * [ ] - Run the uploader locally, then break the system, see how components connect to each other (ask help from Bonz)
+ * [ ] - document your findings
+
+### This week (28-04-Onwards)
+* gn-uploader programming
+ * [X] - Run the uploader locally, then break the system, see how components connect to each other (ask help from Bonz)
+ * [X] - document your findings
+ {Get help from your teammates/AI to jump-start this! Swallow your pride! :(}
+
+* genotype smoothing
+ * [X] Keep refining the following:
+ * [X] filtering power adapted from plink
+ * [X] the xsomes mix up in the plot (probably the phenotype data?)
+ * [X] Update findings and push to github
+
+### This week (05-05-Onwards)
+* programming (gn-uploader)
+ * [ ] - pick one file each day, review it, understand it
+ * [ ] - pair programming with Alex on test runs
+
+* HS rats scripts
+ * [ ] - prepare/refine scripts to quickly process HS rats file
+ * [ ] - assist alex with hs rats cross info
+
+* AOBs
+ * [X] Weekly meetings
+ * [X] follow up with Paul on his progress
+ * [X] follow up on the MSc bioinformatics project
+ * [X] follow up on Alex's application with Pwani
+
+### (12-05-onwards)
+ * [X] - HS genotypes scripting
+
+### (19-05-onwards)
+ * [X] - HS genotypes debugging (memory issue)
+ * [X] - pair programming with Bonz to improve the script
+
+### this week (26-05-onwards)
+ * [X] - process the genotype file for hs rats
+ * [X] - approach by tissues categories
+ * [X] - adipose and liver
+ - test by Xsomes for memory capture
+ - run the working commands
+ * [X] - the rest 10 other tissues (in progress)
+ * [X] - *.bed file vs the updated vcf files from the website?
+
+### this week (02-06-onwards)
+* [X] - process the genotypes for the rest of the 10 tissues for HS rats
+* [X] - document the new findings about smoothing using bcftools and plink
+
+### this week (09-06-onwards)
+* [ ] - identify start and end points for haplotypes in hs genotype files
+* [ ] - upload the final updates to gn2, test and see the results
+* [ ] - gn-uploader/uploader folder, explore
+
+### Later weeks (non-programming tasks)
+
+* [ ] Kilifish into GN
+* [ ] Review paper on genotyping
+* [ ] HS Rat
+* [ ] Prepare others for C.elegans
* [ ] Upload Arabidopsis dataset
* [ ] Upload Medaka dataset
+* [ ] Work on improved DO and Ce genotyping
+
+### Done
+
+
-## Tasks
### On going tasks
=> https://issues.genenetwork.org/search?query=assigned%3Afelixl+AND+is%3Aopen&type=open-issue All in-progress tasks
diff --git a/tasks/fredm.gmi b/tasks/fredm.gmi
index 5e7e71d..1cd3125 100644
--- a/tasks/fredm.gmi
+++ b/tasks/fredm.gmi
@@ -1,5 +1,21 @@
# Tasks for Fred
+# Tags
+
+* kanban: fredm
+* assigned: @fredm
+* status: in progress
+
+# Tasks
+
+* [ ] Add drives to Penguin2, see issues/systems/penguin2-raid5
+* [X] Move production files from sdc to sde
+* [ ] Fix password weakness
+* [ ] Fix gn-docs and editing, e.g. facilities page by gn-guile in container
+* [ ] Unify container dirs
+* [ ] Fix wikidata gene aliases (see mapping page) with @pjotrp
+* [ ] Public SPARQL container?
+
## Description
These are the tasks and issues to be handled by Fred.
diff --git a/tasks/machine-room.gmi b/tasks/machine-room.gmi
index badac82..77f7b8e 100644
--- a/tasks/machine-room.gmi
+++ b/tasks/machine-room.gmi
@@ -11,17 +11,19 @@
## GN
+* [ ] penguin2 has 90TB of space we can use on NFS/backups
+* [ ] Script to replace reaper with GEMMA
* [ ] Transfer nervenet.org to dnsimple
-* [ ] Trait vectors for Johannes
+* [+] Trait vectors for Johannes
+* [X] grub on tux04
* [ ] nft on tux04
* [ ] !!Organize pluto, update Julia and add apps to GN menu Jupyter notebooks
-* [ ] !!Xusheng jumpshiny services
+* [+] !!Xusheng jumpshiny services
* [ ] Fix apps and create system containers for herd services - see issues/systems/apps
* [ ] Slurm+ravanan on production for GEMMA speedup
* [ ] Embed R/qtl2 (Alex)
* [ ] Hoot in GN2 (Andrew)
* [ ] tux02 certbot failing (manual now)
-* [ ] penguin2 has 32TB of space we can use on NFS/backups
## Octopus:
diff --git a/tasks/octopus.gmi b/tasks/octopus.gmi
index 27232ec..61955ec 100644
--- a/tasks/octopus.gmi
+++ b/tasks/octopus.gmi
@@ -2,6 +2,9 @@
In this file we track tasks that need to be done.
+Tuxes still have some 30x 2.5" slots.
+Lambda has 18x 2.5" slots.
+
# Tasks
* [X] get lizardfs and NFS going on tuxes tux06-09
diff --git a/tasks/pjotrp.gmi b/tasks/pjotrp.gmi
index b284c46..57620aa 100644
--- a/tasks/pjotrp.gmi
+++ b/tasks/pjotrp.gmi
@@ -6,35 +6,69 @@
* assigned: pjotrp
* status: in progress
-# Notes
-
-The tasks here should probably be broken out into appropriately tagged issues, where they have not - they can be found and filtered out with tissue (formerly gnbug).
+# Current
-=> https://issues.genenetwork.org
+## 1U01HG013760
-Generally work applies to NIH/R073237482 and other grants.
+* Prefix-Free Parsing Compressed Suffix Tree (PFP) for tokenization
+* Mempang
-# Current
+* [+] create backup server with @fredm
+* [+] RAG with Shelby and Bonz
+* [+] Moni builds 1U01HG013760
+* [+] test framework wfmash - vertebrate tree and HPC compute?
+* - wfmash - wgatools -> PAF + FASTA to VCF
+* - wfmash arch=native build
+* [ ] gbam - data compression with Nick and Hasithak
+* [X] accelerate wfmash with @santiago and team
+* [+] package wfmash and Rust wfa2-lib
+* [ ] add Ceph for distributed network storage 1U01HG013760
+* [ ] Work on pangenome genotyping 1U01HG013760
+* [ ] update freebayes into Debian (version #)
+* - [ ] static build and prepare for conda
+* [ ] update vcflib into Debian (version #)
+* - [ ] static build and prepare for conda
+* [ ] pangenome as a 1st class input for GEMMA
+* kilifish pangenome with Paul and Dario
## Systems
+* [+] jumpshiny
+* [ ] pluto
+* [ ] Backup production databases on Tux04
+* - [+] Dump containers w. databases
+* - [X] Dump mariadb
+* - [ ] backup remote
+* - [ ] borg-borg
+* - [ ] fix root scripts
* [ ] make sure production is up to scratch (see stable below)
-* [ ] backup tux04
-* [ ] add Ceph for distributed network storage 1U01HG013760
+* [ ] synchronize git repos for public, CD, fallback and production using sheepdog and document
* [ ] drop tux02 backups on balg01
-* [ ] drop backups NL
-* [ ] reintroduce borg-borg
+* [X] Small database public
## Ongoing tasks (current/urgent)
-* [ ] Precompute
-* [+] Set up stable GeneNetwork server instance with new hardware (see below)
-=> /topics/systems/fire-up-genenetwork-system-container.gmi
+* [ ] ~Felix, Alex, Rahul as friends of UTHSC
+* [ ] Precompute with GEMMA
+ + [ ] Store N
+ + [ ] Store significance levels
+ + [ ] Check genotype input data
+ + [ ] Imputation
+ + [ ] Do same with bulkLMM
+ + [ ] Generate lmdb output
+ + [ ] Hook into Xapian
+ + [ ] Hook into correlations
+
* [ ] Check email setup tux04
-* [+] Julia as part of GN3 deployment
+* [ ] jbrowse plugin code - https://genenetwork.trop.in/mm10
+* [+] bulklmm Julia as part of GN3 deployment
+ - precompute & Julia
+=> https://github.com/GregFa/TestSysimage
+ Here the repo with BulkLMMSysimage:
+=> https://github.com/GregFa/BulkLMMSysimage
=> /topics/deploy/julia.gmi
-* [ ] Work on pangenome genotyping 1U01HG013760
-* [+] Moni builds 1U01HG013760
+* [X] Set up stable GeneNetwork server instance with new hardware (see below)
+=> /topics/systems/fire-up-genenetwork-system-container.gmi
# Tasks
@@ -51,11 +85,11 @@ Now (X=done +=WIP _=kickoff ?=?)
* [+] Build leadership team
* [+] gBAM
* [ ] p-value global search
-* [+] Xapian search add tags, notmuch style (with @zachs)
+* [+] Xapian search add tags, notmuch style (with @bonfacem and @zachs)
=> ../issues/systems/octopus
-* [ ] Add R/qtl2 and multi-parent support with Karl (DO and Magic populations)
+* [+] Add R/qtl2 and multi-parent support with Karl (DO and Magic populations)
* [+] Fix slow search on Mariadb? Moving to xapian
* [.] GeneNetwork paper
* + [ ] add FAIR statement
@@ -70,7 +104,7 @@ Longer term
Later
* [ ] Mempang25 1U01HG013760
- + [ ] Invites
+ + [X] Invites
+ [ ] Payments
+ [ ] Rooms
+ [ ] Catering
@@ -86,11 +120,7 @@ Later
### Set up stable server instance with new hardware
* [ ] ssh-shell access for git markdown
-* [ ] R/qtl2 with Karl and Alex
-* [+] Set up opensmtpd as a service
- + [ ] Add package dependency
- + [X] Test on open port 25
- + [ ] Add public-inbox (Arun)
+* [+] R/qtl2 with Karl and Alex, see [alex.gmi]
=> ./machine-room.gmi machine room
@@ -118,3 +148,12 @@ Later
* [X] Fix mariadb index search - need to upgrade mariadb to convert final utf8mb4, see
=> ../issues/slow-sql-query-for-xapian-indexing.gmi
* [X] Debian/free software issues incl. vcflib work in Zig and release
+* [X] Set up opensmtpd as a service
+
+# Notes
+
+The tasks here should probably be broken out into appropriately tagged issues, where they have not - they can be found and filtered out with tissue (formerly gnbug).
+
+=> https://issues.genenetwork.org
+
+Generally work applies to NIH/R073237482 and other grants.
diff --git a/tasks/programmer-team/meetings.gmi b/tasks/programmer-team/meetings.gmi
new file mode 100644
index 0000000..d972b3b
--- /dev/null
+++ b/tasks/programmer-team/meetings.gmi
@@ -0,0 +1,82 @@
+# Weekly meetings
+
+In this document we track tasks based on our weekly meetings. This list sets the
+agenda on progress for the next week's meeting.
+
+## 02-10-2024
+## @felixm
+* [ ] Use Aider to contribute to and cover Fred's coding. Share useful prompts.
+* [ ] Feed relevant papers to GPT and find similar summary for other datasets. Start with C-Elegans.
+
+
+## @bonfacem
+* [ ] Share values with PJ.
+* [ ] Assume LMDB files are transient. When hash doesn't exist, generate the hash for that dataset. Use LMDB to store key value pairs of hashes.
+* [ ] Add dump script to gn-guile.
+* [ ] Add Case Attributes in Virtuoso.
+
+## @alex
+* [ ] Push R/QTL2 to production
+* [ ] Have R/QTL2 work for ITP
+
+Nice to have:
+* Think about editing publish data and consequent updates to LMDB.
+
+## @pjotr
+* Kickstart UTHSC VPN access for Felix and Alex.
+
+## 01-20-2024
+### @bonfacem
+
+* [ ] Report: OpenAI on Aider - use AI for programming - discuss with @alexm
+
+=> https://issues.genenetwork.org/topics/ai/aider
+
+* [-] Metadata: Provide list of case attributes for BXD to @flisso
+* [-] Code UI: GeneRIF and GeneWiki should work from the mapping page - encourage people to use
+ - anyone logged in can edit
+ - If RIF does not exist point to GeneWiki
+ - If GeneWiki does not exist provide edit page
+* [ ] Code export: Exporting traits to lmdb PublishData - @alexm helps with SQL
+ - missing data should not be an X
+ - run lmdb design (first code) by @pjotrp
+ - start exporting traits for Johannes (he will need to write a python reader)
+* Later: Improve the work/dev container for @alexm
+
+### @flisso
+
+* [ ] Write: Uploader protocol. NOTES: Finished with C-elegans. Yet to test with other datasets.
+* [ ] Script: Run Reaper
+* [ ] Data: Case attributes - with @bonfacem
+* [ ] Write: Create protocol to upload case attributes
+
+### @alexm
+
+* [ ] Code: Rqtl2 match Rqtl1: match scan changes. Notes: PR out and added tests.
+* [ ] Bug: Fix pair scan. NOTES: Fixed it. But can't test it now since CD is down.
+* Later: AI changes
+
+### @Pjotr
+
+* [ ] Code: Work on precompute with GEMMA (w. Jameson)
+* [ ] Code: Take Bonface's trait files when they become available
+
+
+## 01-27-2024
+
+Last week's error with CD and production downtime:
+* [level 1] Container: Error messages when data not loaded in Virtuoso, Indexing.
+* [level 2] Sheepdog: Check services --- sheepdog. Health checkpoints.
+* [level 3] User feedback: Escalate errors correctly to the users, so they can report them to coders.
+
+### @bonfacem
+* [ ] Troubleshoot CD.
+* [ ] Export files in lmdb. Python read-file example for Johannes.
+* [ ] Metadata: Provide list of case attributes for BXD to @flisso
+* [ ] Aider: See if it can generate some guile and python. Give an example.
+
+### @alexm
+* [ ] UI for R/Qtl2.
+
+### @flisso
+* [ ] Look at Fred Python code for the uploader and report on this.
diff --git a/tasks/roadmap.gmi b/tasks/roadmap.gmi
new file mode 100644
index 0000000..9bed63d
--- /dev/null
+++ b/tasks/roadmap.gmi
@@ -0,0 +1,65 @@
+# GN Road map
+
+GN is a web service for complex traits. The main version is currently deployed in Memphis TN, mostly targeting mouse and rat.
+Here we define a road map to bring GN to more communities by providing federated services.
+The aim is to have plant.genenetwork.org, nematode.genenetwork.org, big.genenetwork.org running in the coming years.
+
+# Getting an instance up (step 1)
+
+## Deploy a new instance
+
+To test things we can use an existing database or a new one. We can deploy that as a (new) Guix service container.
+
+We'll need to run a few services including:
+
+* GN3
+* GN2
+* Auth (if required)
+* Uploader (if required)
+
+## Get database ready
+
+In the first step we have to upload data for the target community. This can be done by updating the databases with some example datasets. Care has to be taken that search etc. works and that we can do the mapping.
+
+* Add traits
+* Add genotype files
+* Add metadata
+
+# Branding and hosting (Step 2)
+
+Once we have a working database with a number of example use cases we can start rebranding the service and, ideally, host it on location.
+
+# Synchronization (Step 3)
+
+## Move traits into lmdb
+
+This is WIP. We need to adapt the GN3 code to work with lmdb when available.
+
+## Move genotypes into lmdb
+
+This is WIP. We need to adapt the GN3 code to work with lmdb when available.
+
+# Federated metadata (Step 4)
+
+## Move all metadata into RDF
+
+This is WIP and happening. We will need to document it.
+
+# LLM Integration (Step 5)
+
+Provide an LLM that integrates well with the GN ecosystem. Goals for the LLM:
+
+* Flexible data ingestion
+* Plug-and-play LLMs (local, OpenAI, Claude etc.)
+
+This is still a WIP.
+
+# Community (Step 6)
+
+## Uploading data examples
+
+## GN3 examples
+
+## UI examples
+
+## Provide programming examples
diff --git a/tasks/zachs.gmi b/tasks/zachs.gmi
new file mode 100644
index 0000000..6ae3df1
--- /dev/null
+++ b/tasks/zachs.gmi
@@ -0,0 +1,7 @@
+# Tasks for Zach
+
+# Tasks
+
+* [ ] Move non-ephemeral data out of redis into sqlite DB - see JSON dump
+* - [ ] Collections
+* - [ ] permanent URIs(?)
diff --git a/topics/ai/aider.gmi b/topics/ai/aider.gmi
index 71dfa9e..aa88e71 100644
--- a/topics/ai/aider.gmi
+++ b/topics/ai/aider.gmi
@@ -1,12 +1,16 @@
# Aider
-https://aider.chat/
+=> https://aider.chat/
+```
python3 -m venv ~/opt/python-aider
~/opt/python-aider/bin/python3 -m pip install aider-install
~/opt/python-aider/bin/aider-install
+```
Installed 1 executable: aider
Executable directory /home/wrk/.local/bin is already in PATH
+```
aider --model gpt-4o --openai-api-key aa...
+```
diff --git a/topics/ai/ontogpt.gmi b/topics/ai/ontogpt.gmi
new file mode 100644
index 0000000..94bd165
--- /dev/null
+++ b/topics/ai/ontogpt.gmi
@@ -0,0 +1,7 @@
+# OntoGPT
+
+```
+python3 -m venv ~/opt/ontogpt
+~/opt/ontogpt/bin/python3 -m pip install ontogpt
+```
+
+```
+runoak set-apikey -e openai
+```
diff --git a/topics/database/mariadb-database-architecture.gmi b/topics/database/mariadb-database-architecture.gmi
index 5c9b0c5..0454d71 100644
--- a/topics/database/mariadb-database-architecture.gmi
+++ b/topics/database/mariadb-database-architecture.gmi
@@ -28,6 +28,12 @@ Naming convention-wise there is a confusing use of id and data-id in particular.
The default install comes with a smaller database which includes a
number of the BXDs and the Human liver dataset (GSE9588).
+It can be downloaded from:
+
+=> https://files.genenetwork.org/database/
+
+Try the latest one first.
+
# GeneNetwork database
Estimated table sizes with metadata comment for the important tables
@@ -536,8 +542,8 @@ select * from ProbeSetSE limit 5;
For the other tables, you may check the GN2/doc/database.org document (the starting point for this document).
-# Contributions regarding data upload to the GeneNetwork webserver
-* Ideas shared by the GeneNetwork team to facilitate the process of uploading data to production
+# Contributions regarding data upload to the GeneNetwork webserver
+* Ideas shared by the GeneNetwork team to facilitate the process of uploading data to production
## Quality check and integrity of the data to be uploaded to gn2
@@ -556,7 +562,7 @@ For the other tables, you may check the GN2/doc/database.org document (the start
* Unique identifiers solve the hurdles that come with having duplicate genes. So, the QA tools in place should ensure the uploaded dataset adheres to the requirements mentioned
* However, newer RNA-seq data sets generated by sequencing do not usually have an official vendor identifier. The identifier is usually based on the NCBI mRNA model (NM_XXXXXX) that was used to evaluate an expression and on the sequence that is involved, usually the start and stop nucleotide positions based on a specific genome assembly or just a suffix to make sure it is unique. In this case, you are looking at mRNA assays for a single transcript, but different parts of the transcript that have different genome coordinates. We now typically use ENSEMBL identifiers.
* The mouse version of the sonic hedgehog gene as an example: `ENSMUST00000002708` or `ENSMUSG00000002633` sources should be fine. The important thing is to know the provenance of the ID—who is in charge of that ID type?
-* When a mRNA assay is super precise (one exon only or a part of the 5' UTR), then we should use exon identifiers from ENSEMBL probably.
+* When an mRNA assay is super precise (one exon only or a part of the 5' UTR), then we should probably use exon identifiers from ENSEMBL.
* Ideally, we should enter the sequence's first and last 100 nt in GeneNetwork for verification and alignment. We did this religiously for arrays, but have started to get lazy now. The sequence is the ultimate identifier
* For methylation arrays and CpG assays, we can use this format `cg14050475` as seen in MBD UTHSC Ben's data
* For metabolites like isoleucine—the ID we have been using is the mass-to-charge (MZ) ratio such as `130.0874220_MZ`
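
The identifier conventions above (Ensembl transcripts and genes, NCBI mRNA models, CpG assays, metabolite MZ ratios) can be checked mechanically. A minimal sketch, where the regexes are assumptions inferred from the examples given here rather than any official specification:

```python
import re

# Pattern table assumed from the examples above; not an official spec.
ID_PATTERNS = [
    ("ensembl-transcript", re.compile(r"^ENS[A-Z]*T\d{11}$")),  # e.g. ENSMUST00000002708
    ("ensembl-gene",       re.compile(r"^ENS[A-Z]*G\d{11}$")),  # e.g. ENSMUSG00000002633
    ("ncbi-mrna-model",    re.compile(r"^NM_\d+")),             # NCBI mRNA model
    ("cpg-assay",          re.compile(r"^cg\d{8}$")),           # e.g. cg14050475
    ("metabolite-mz",      re.compile(r"^\d+(\.\d+)?_MZ$")),    # e.g. 130.0874220_MZ
]

def classify_identifier(ident: str) -> str:
    """Return the (assumed) provenance class of an assay identifier."""
    for name, pattern in ID_PATTERNS:
        if pattern.match(ident):
            return name
    return "unknown"

print(classify_identifier("ENSMUST00000002708"))  # ensembl-transcript
print(classify_identifier("130.0874220_MZ"))      # metabolite-mz
```

A QA step like this answers the provenance question ("who is in charge of that ID type?") before the data hits the database.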
@@ -579,16 +585,16 @@ abcb10_q9ji39_t312
## BXD individuals
-* Basically groups (represented by the InbredSet tables) are primarily defined by their list of samples/strains (represented by the Strain tables). When we create a new group, it's because we have data with a distinct set of samples/strains from any existing groups.
-* So when we receive data for BXD individuals, as far as the database is concerned they are a completely separate group (since the list of samples is new/distinct from any other existing groups). We can choose to also enter it as part of the "generic" BXD group (by converting it to strain means/SEs using the strain of each individual, assuming it's provided like in the files Arthur was showing us).
+* Basically groups (represented by the InbredSet tables) are primarily defined by their list of samples/strains (represented by the Strain tables). When we create a new group, it's because we have data with a distinct set of samples/strains from any existing groups.
+* So when we receive data for BXD individuals, as far as the database is concerned they are a completely separate group (since the list of samples is new/distinct from any other existing groups). We can choose to also enter it as part of the "generic" BXD group (by converting it to strain means/SEs using the strain of each individual, assuming it's provided like in the files Arthur was showing us).
* This same logic could apply to other groups as well - we could choose to make one group the "strain mean" group for another set of groups that contain sample data for individuals. But the database doesn't reflect the relationship between these groups*
* As far as the database is concerned, there is no distinction between strain means and individual sample data - they're all rows in the ProbeSetData/PublishData tables. The only difference is that strain mean data will probably also have an SE value in the ProbeSetSE/PublishSE tables and/or an N (number of individuals per strain) value in the NStrain table
* As for what this means for the uploader - I think it depends on whether Rob/Arthur/etc wants to give users the ability to simultaneously upload both strain mean and individual data. For example, if someone uploads some BXD individuals' data, do we want the uploader to both create a new group for this (or add to an existing BXD individuals group) and calculate the strain means/SE and enter it into the "main" BXD group? My personal feeling is that it's probably best to postpone that for later and only upload the data with the specific set of samples indicated in the file since it would insert some extra complexity to the uploading process that could always be added later (since the user would need to select "the group the strains are from" as a separate option)
* The relationship is sorta captured in the CaseAttribute and CaseAttributeXRefNew tables (which contain sample metadata), but only in the form of the metadata that is sometimes displayed as extra columns in the trait page table - this data isn't used in any queries/analyses currently (outside of some JS filters run on the table itself) and isn't that important as part of the uploading process (or at least can be postponed)
-## Individual Datasets and Derivatives datasets in gn2
-* Individual dataset reflects the actual data provided or submitted by the investigator (user). Derivative datasets include the processed information from the individual dataset, as in the case of the average datasets.
-* An example of an individual dataset would look something like; (MBD dataset)
+## Individual Datasets and Derivatives datasets in gn2
+* Individual dataset reflects the actual data provided or submitted by the investigator (user). Derivative datasets include the processed information from the individual dataset, as in the case of the average datasets.
+* An example of an individual dataset would look something like this (MBD dataset):
```
#+begin_example
sample, strain, Sex, Age,…
@@ -600,13 +606,13 @@ FEB0005,BXD16,F,14,…
#+end_example
```
-* The strain column above has repetitive values. Each value has a one-to-many relationship with values on sample column. From this dataset, there can be several derivatives. For example;
-- Sex-based categories
-- Average data (3 sample values averaged to one strain value)
-- Standard error table computed for the averages
+* The strain column above has repetitive values. Each value has a one-to-many relationship with values in the sample column. From this dataset, there can be several derivatives. For example:
+- Sex-based categories
+- Average data (3 sample values averaged to one strain value)
+- Standard error table computed for the averages
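
A minimal sketch of computing those derivatives (per-strain mean, standard error, and N) from individual samples. The values are made up, and the SE formula (sample standard deviation over the square root of N) is an assumption about how the average tables are produced:

```python
from collections import defaultdict
from math import sqrt
from statistics import mean, stdev

# Individual-sample rows (sample, strain, value), shaped like the MBD example above.
rows = [
    ("FEB0001", "BXD16", 11.2),
    ("FEB0002", "BXD16", 11.6),
    ("FEB0005", "BXD16", 11.0),
    ("FEB0003", "BXD21", 9.8),
    ("FEB0004", "BXD21", 10.2),
]

by_strain = defaultdict(list)
for _sample, strain, value in rows:
    by_strain[strain].append(value)

# Derivatives: strain mean, SE and N, the values that would land in the
# ProbeSetData/PublishData, ProbeSetSE/PublishSE and NStrain tables.
derived = {
    strain: {
        "mean": mean(values),
        "se": stdev(values) / sqrt(len(values)) if len(values) > 1 else None,
        "n": len(values),
    }
    for strain, values in by_strain.items()
}

print(derived["BXD16"]["n"])  # 3 sample values averaged to one strain value
```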
-## Saving data to database
-* Strain table schema
+## Saving data to database
+* Strain table schema
```
#+begin_src sql
MariaDB [db_webqtl]> DESC Strain;
@@ -639,21 +645,21 @@ FEB0005,BXD16,F,14,…
5 rows in set (0.00 sec)
#+end_src
```
-* Where the =InbredSetId= comes from the =InbredSet= table and the =StrainId= comes from the =Strain= table. The *individual data* would be linked to an inbredset group that is for individuals
+* Where the =InbredSetId= comes from the =InbredSet= table and the =StrainId= comes from the =Strain= table. The *individual data* would be linked to an inbredset group that is for individuals
* For the *average data*, the only value to save would be the =strain= field, which would be saved as =Name= in the =Strain= table and linked to an InbredSet group that is for averages
*Question 01*: How do we distinguish the inbredset groups?
*Answer*: The =Family= field is useful for this.
*Question 02*: If you have more derived "datasets", e.g. males-only, females-only, under-10-years, 10-to-25-years, etc. How would the =Strains= table handle all those differences?
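
As an illustration of the linking described above, and of using =Family= to relate the groups, here is a small sketch. Note that sqlite3 stands in for MariaDB, and the cross-reference table name and trimmed-down columns are assumptions for illustration only:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Trimmed-down stand-ins for the production tables; the real schemas have more columns.
cur.execute("CREATE TABLE InbredSet (Id INTEGER PRIMARY KEY, Name TEXT, Family TEXT)")
cur.execute("CREATE TABLE Strain (Id INTEGER PRIMARY KEY, Name TEXT, SpeciesId INTEGER)")
cur.execute("CREATE TABLE StrainXRef (InbredSetId INTEGER, StrainId INTEGER)")

# One group for individuals and one for strain averages; the shared Family
# value is what relates them.
cur.execute("INSERT INTO InbredSet VALUES (1, 'BXD-Individuals', 'BXD Family')")
cur.execute("INSERT INTO InbredSet VALUES (2, 'BXD-Averages', 'BXD Family')")

# For average data only the strain name is saved, linked to the averages group.
cur.execute("INSERT INTO Strain VALUES (10, 'BXD16', 1)")
cur.execute("INSERT INTO StrainXRef VALUES (2, 10)")

cur.execute("""SELECT i.Name FROM InbredSet i
               JOIN StrainXRef x ON x.InbredSetId = i.Id
               WHERE x.StrainId = 10""")
print(cur.fetchone()[0])  # BXD-Averages
```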
-## Metadata
+## Metadata
* The data we looked at had =gene id= and =gene symbol= fields. These fields were used to fetch the *Ensembl ID* and *descriptions* from [[https://www.ncbi.nlm.nih.gov/][NCBI]] and the [[https://useast.ensembl.org/][Ensembl Genome Browser]]
-## Files for mapping
+## Files for mapping
* Files used for mapping need to be in =bimbam= or =.geno= formats. We would need to do conversions to at least one of these formats where necessary
-## Annotation files
-* Consider the following schema of DB tables
+## Annotation files
+* Consider the following schema of DB tables
#+begin_src sql
MariaDB [db_webqtl]> DESC InbredSet;
+-----------------+----------------------+------+-----+---------+----------------+
@@ -718,10 +724,10 @@ FEB0005,BXD16,F,14,…
- The =used_for_mapping= field should be set to ~Y~ unless otherwise informed
- The =PedigreeStatus= field is unknown to us for now: set to ~NULL~
-* Annotation file format
+* Annotation file format
The important fields are:
- =ChipId=: The platform that the data was collected from/with
-Consider the following table;
+Consider the following table;
#+begin_src sql
MariaDB [db_webqtl]> DESC GeneChip;
+---------------+----------------------+------+-----+---------+----------------+
@@ -744,7 +750,7 @@ Consider the following table;
- =Probe_set_Blat_Mb_start=/=Probe_set_Blat_Mb_end=: In Byron's and Beni's data, these correspond to the =geneStart= and =geneEnd= fields respectively. These are the positions, in megabasepairs, that the gene begins and ends at, respectively.
- =Mb=: This is the =geneStart=/=Probe_set_Blat_Mb_start= value divided by *1000000*. (*Note to self*: Maybe the Probe_set_Blat_Mb_* fields above might not be in megabase pairs — please confirm)
- =Strand_Probe= and =Strand_Gene=: These fields' values are simply ~+~ or ~-~. If these values are missing, you can [[https://ftp.ncbi.nih.gov/gene/README][retrieve them from NCBI]], specifically from the =orientation= field of seemingly any text file with the field
- - =Chr=: This is the chromosome on which the gene is found
+ - =Chr=: This is the chromosome on which the gene is found
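
A tiny sketch of deriving the =Mb= field and checking strand values, assuming (per the note to self above) that the source positions really are in basepairs:

```python
def mb_from_basepairs(gene_start_bp: int) -> float:
    """Convert a position in basepairs to megabasepairs, as for the Mb field."""
    return gene_start_bp / 1_000_000

def valid_strand(strand: str) -> bool:
    """Strand_Probe / Strand_Gene values are simply '+' or '-'."""
    return strand in ("+", "-")

# Example with a made-up geneStart position.
print(mb_from_basepairs(28_457_634))  # ~28.457634 Mb
print(valid_strand("+"))              # True
```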
* The final annotation file will have (at minimum) the following fields (or their
analogs):
@@ -765,8 +771,8 @@ analogs):
* =.geno= Files
- The =.geno= files have sample names, not the strain/symbol. The =Locus= field in the =.geno= file corresponds to the **marker**. =.geno= files are used with =QTLReaper=
- The sample names in the ~.geno~ files *MUST* be in the same order as the
-strains/symbols for that species. For example;
-Data format is as follows;
+strains/symbols for that species. For example, the data format is as follows:
```
#+begin_example
SampleName,Strain,…
@@ -779,7 +785,7 @@ BJCWI0005,BXD50,…
#+end_example
```
-and the order of strains is as follows;
+and the order of strains is as follows:
```
#+begin_example
…,BXD33,…,BXD40,…,BXD50,…
@@ -806,9 +812,9 @@ The order of samples that belong to the same strain is irrelevant - they share t
- Treatment
- Sex (Really? Isn't sex an expression of genes?)
- batch
- - Case ID, etc
+ - Case ID, etc
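
The sample-order requirement for ~.geno~ files above can be checked mechanically. A sketch, assuming the sample-to-strain mapping and the species' strain order have already been extracted from the files:

```python
def samples_in_strain_order(sample_strains, strain_order):
    """Check that the strains backing each sample column appear in the
    species' canonical strain order. Duplicates within a strain are fine,
    since samples of one strain share the same position."""
    order = {strain: i for i, strain in enumerate(strain_order)}
    # Unknown strains are skipped here; a real QA pass should report them.
    positions = [order[s] for s in sample_strains if s in order]
    return positions == sorted(positions)

strain_order = ["BXD33", "BXD40", "BXD50"]
print(samples_in_strain_order(["BXD33", "BXD33", "BXD40", "BXD50"], strain_order))  # True
print(samples_in_strain_order(["BXD40", "BXD33", "BXD50"], strain_order))           # False
```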
-* Summary steps to load data to the database
+* Summary steps to load data to the database
- [x] Create *InbredSet* group (think population)
- [x] Load the strains/samples data
- [x] Load the sample cross-reference data to link the samples to their
@@ -821,8 +827,4 @@ The order of samples that belong to the same strain is irrelevant - they share t
- [x] Load the *Log2* data (ProbeSetData and ProbeSetXRef tables)
- [x] Compute means (an SQL query was used — this could be pre-computed in code
and entered along with the data)
-- [x] Run QTLReaper
-
-
-
-
+- [x] Run QTLReaper
diff --git a/topics/deploy/genecup.gmi b/topics/deploy/genecup.gmi
index c5aec17..fc93d07 100644
--- a/topics/deploy/genecup.gmi
+++ b/topics/deploy/genecup.gmi
@@ -53,3 +53,72 @@ and port forward:
ssh -L 4200:127.0.0.1:4200 -f -N server
curl localhost:4200
```
+
+# Troubleshooting
+
+## Moving the PubMed dir
+
+After moving the PubMed dir GeneCup stopped displaying part of the connections. This can be reproduced by running the standard example on the home page - the result should look like the image on the right of the home page.
+
+After fixing the paths and restarting the service there still was no result.
+
+GeneCup is currently managed by the shepherd as user shepherd. Stop the service as that user:
+
+```
+shepherd@tux02:~$ herd stop genecup
+guile: warning: failed to install locale
+Service genecup has been stopped.
+```
+
+Now the service looks stopped, but it is still running and you need to kill it by hand:
+
+```
+shepherd@tux02:~$ ps xau|grep genecup
+shepherd 89524 0.0 0.0 12780 944 pts/42 S+ 00:32 0:00 grep genecup
+shepherd 129334 0.0 0.7 42620944 2089640 ? Sl Mar05 66:30 /gnu/store/1w5v338qk5m8khcazwclprs3znqp6f7f-python-3.10.7/bin/python3 /gnu/store/a6z0mmj6iq6grwynfvkzd0xbbr4zdm0l-genecup-latest-with-tensorflow-native-HEAD-of-master-branch/.server.py-real
+shepherd@tux02:~$ kill -9 129334
+shepherd@tux02:~$ ps xau|grep genecup
+shepherd 89747 0.0 0.0 12780 944 pts/42 S+ 00:32 0:00 grep genecup
+shepherd@tux02:~$
+```
+
+The log file lives in
+
+```
+shepherd@tux02:~/logs$ tail -f genecup.log
+```
+
+We were getting errors on a reload, and I had to fix the environment exports:
+
+```
+shepherd@tux02:~/shepherd-services$ grep export run_genecup.sh
+export EDIRECT_PUBMED_MASTER=/export3/PubMed
+export TMPDIR=/export/ratspub/tmp
+export NLTK_DATA=/export3/PubMed/nltk_data
+```
+
+See
+
+=> https://git.genenetwork.org/gn-shepherd-services/commit/?id=cd4512634ce1407b14b0842b0ef6a9cd35e6d46c
+
+The symlink from /export2 is not honoured by the guix container. Now the service works.
+
+Note we have deprecation warnings that need to be addressed in the future:
+
+```
+2025-04-22 00:40:07 /home/shepherd/services/genecup/guix-past/modules/past/packages/python.scm:740:19: warning: 'texlive-union' is deprecated,
+ use 'texlive-updmap.cfg' instead
+2025-04-22 00:40:07 guix build: warning: 'texlive-latex-base' is deprecated, use 'texlive-latex-bin' instead
+2025-04-22 00:40:15 updating checkout of 'https://git.genenetwork.org/genecup'...
+/gnu/store/9lbn1l04y0xciasv6zzigqrrk1bzz543-tensorflow-native-1.9.0/lib/python3.10/site-packages/tensorflow/python/framewo
+rk/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
+2025-04-22 00:40:38 _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
+2025-04-22 00:40:38 /gnu/store/9lbn1l04y0xciasv6zzigqrrk1bzz543-tensorflow-native-1.9.0/lib/python3.10/site-packages/tensorflow/python/framewo
+rk/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
+2025-04-22 00:40:38 _np_qint32 = np.dtype([("qint32", np.int32, 1)])
+2025-04-22 00:40:38 /gnu/store/9lbn1l04y0xciasv6zzigqrrk1bzz543-tensorflow-native-1.9.0/lib/python3.10/site-packages/tensorflow/python/framewo
+rk/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
+2025-04-22 00:40:38 np_resource = np.dtype([("resource", np.ubyte, 1)])
+2025-04-22 00:40:39 /gnu/store/7sam0mr9kxrd4p7g1hlz9wrwag67a6x6-python-flask-sqlalchemy-2.5.1/lib/python3.10/site-packages/flask_sqlalchemy/__
+init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
+```
diff --git a/topics/deploy/installation.gmi b/topics/deploy/installation.gmi
index 757d848..d6baa79 100644
--- a/topics/deploy/installation.gmi
+++ b/topics/deploy/installation.gmi
@@ -319,7 +319,7 @@ Currently we have two databases for deployment,
from BXD mice and 'db_webqtl_plant' which contains all plant related
material.
-Download one database from
+Download a recent database from
=> https://files.genenetwork.org/database/
diff --git a/topics/deploy/machines.gmi b/topics/deploy/machines.gmi
index 9548e43..a7c197c 100644
--- a/topics/deploy/machines.gmi
+++ b/topics/deploy/machines.gmi
@@ -2,10 +2,11 @@
```
- [ ] bacchus 172.23.17.156 (00:11:32:ba:7f:17) - 1 Gbs
-- [X] lambda01 172.23.18.212 (7c:c2:55:11:9c:ac)
+- [ ] penguin2
+- [X] lambda01 172.23.18.212 (7c:c2:55:11:9c:ac) - currently 172.23.17.41
- [X] tux03i 172.23.17.181 (00:0a:f7:c1:00:8d) - 10 Gbs
[X] tux03 128.169.5.101 (00:0a:f7:c1:00:8b) - 1 Gbs
-- [ ] tux04i 172.23.17.170 (14:23:f2:4f:e6:10)
+- [X] tux04i 172.23.17.170 (14:23:f2:4f:e6:10)
- [X] tux04 128.169.5.119 (14:23:f2:4f:e6:11)
- [X] tux05 172.23.18.129 (14:23:f2:4f:35:00)
- [X] tux06 172.23.17.188 (14:23:f2:4e:29:10)
@@ -26,6 +27,8 @@ c for console or control
```
- [ ] DNS entries no longer visible
+- [X] penguin2-c 172.23.31.83
+- [ ] octolair01 172.23.16.228
- [X] lambda01-c 172.23.17.173 (3c:ec:ef:aa:e5:50)
- [X] tux01-c 172.23.31.85 (58:8A:5A:F9:3A:22)
- [X] tux02-c 172.23.30.40 (58:8A:5A:F0:E6:E4)
diff --git a/topics/deploy/setting-up-or-migrating-production-across-machines.gmi b/topics/deploy/setting-up-or-migrating-production-across-machines.gmi
new file mode 100644
index 0000000..1f35dae
--- /dev/null
+++ b/topics/deploy/setting-up-or-migrating-production-across-machines.gmi
@@ -0,0 +1,58 @@
+# Setting Up or Migrating Production Across Machines
+
+## Tags
+
+* type: documentation, docs, doc
+* status: in-progress
+* assigned: fredm
+* priority: undefined
+* keywords: migration, production, genenetwork
+* interested-parties: pjotrp, zachs
+
+## Introduction
+
+Recent events (late 2024 and early 2025) have led to us needing to move the production system from one machine to another several times, due to machine failures, disk space constraints, security concerns, and the like.
+
+In this respect, a number of tasks rise to the front as necessary for a successful migration. Each of the following sections details one such task.
+
+## Set Up the Database
+
+* Extract: detail this — link to existing document in this repo. Also, probably note that we symlink the extraction back to `/var/lib/mysql`?
+* Configure: detail this — link to existing document in this repo
+
+## Set Up the File System
+
+* TODO: List the necessary directories and describe what purpose each serves. This will be from the perspective of the container — actual paths on the host system are left to the builder's choice, and can vary wildly.
+* TODO: Prefer explicit binding rather than implicit — it makes the shell scripts longer, but no assumptions have to be made; everything is explicitly spelled out.
+
+## Redis
+
+We currently (2025-06-11) use Redis for:
+
+- Tracking user collections (this will be moved to an SQLite database)
+- Tracking background jobs (this is being moved out to SQLite databases)
+- Tracking running-time (not sure what this is about)
+- Others?
+
+We do need to copy over the redis save file whenever we do a migration, at least until the user collections and background jobs features have been moved completely out of Redis.
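
Until then, a migration sketch for that save file; the dump and staging paths are assumptions and will differ per host (check the `dir` setting in redis.conf for the real dump location):

```python
import shutil
from pathlib import Path

def copy_redis_dump(src: Path, dest: Path) -> None:
    """Copy the Redis RDB save file so user collections and background jobs
    survive the migration. Stop Redis (or trigger a save) first so the dump
    on disk is consistent."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest)  # copy2 preserves timestamps for sanity checks

# Assumed locations, for illustration only:
src = Path("/var/lib/redis/dump.rdb")
dest = Path("/export/new-machine-staging/redis/dump.rdb")
```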
+
+## Container Configurations: Secrets
+
+* TODO: Detail how to extract/restore the existing secrets configurations in the new machine
+
+## Build Production Container
+
+* TODO: Add notes on building
+* TODO: Add notes on setting up systemd
+
+## NGINX
+
+* TODO: Add notes on streaming and its configuration
+
+## SSL Certificates
+
+* TODO: Add notes on acquisition and setup of SSL certificates
+
+## DNS
+
+* TODO: Migrate DNS settings
diff --git a/topics/deploy/uthsc-vpn-with-free-software.gmi b/topics/deploy/uthsc-vpn-with-free-software.gmi
index 43f6944..95fd1cd 100644
--- a/topics/deploy/uthsc-vpn-with-free-software.gmi
+++ b/topics/deploy/uthsc-vpn-with-free-software.gmi
@@ -10,6 +10,11 @@ $ openconnect-sso --server uthscvpn1.uthsc.edu --authgroup UTHSC
```
Note that openconnect-sso should be run as a regular user, not as root. After passing Duo authentication, openconnect-sso will try to gain root privileges to set up the network routes. At that point, it will prompt you for your password using sudo.
+## Recommended way
+
+The recommended way is to use Arun's g-expression setup with Guix. See below. It should just work, provided you have the
+chained certificate that you can get from the browser or one of us.
+
## Avoid tunneling all your network traffic through the VPN (aka Split Tunneling)
openconnect, by default, tunnels all your traffic through the VPN. This is not good for your privacy. It is better to tunnel only the traffic destined to the specific hosts that you want to access. This can be done using the vpn-slice script.
@@ -72,6 +77,12 @@ Download it, download the UTHSC TLS certificate chain to uthsc-certificate.pem,
$(guix build -f uthsc-vpn.scm)
```
+To add a route by hand afterwards you can do:
+
+```
+ip route add 172.23.17.156 dev tun0
+```
+
# Troubleshooting
Older versions would not show a proper dialog for sign-in. Try
diff --git a/topics/deploy/uthsc-vpn.scm b/topics/deploy/uthsc-vpn.scm
index 73cb48b..82f67f5 100644
--- a/topics/deploy/uthsc-vpn.scm
+++ b/topics/deploy/uthsc-vpn.scm
@@ -9,7 +9,7 @@
;; Put in the hosts you are interested in here.
(define %hosts
(list "octopus01"
- "tux01.genenetwork.org"))
+ "spacex.uthsc.edu"))
(define (ini-file name scm)
"Return a file-like object representing INI file with @var{name} and
diff --git a/topics/genenetwork-releases.gmi b/topics/genenetwork-releases.gmi
new file mode 100644
index 0000000..e179629
--- /dev/null
+++ b/topics/genenetwork-releases.gmi
@@ -0,0 +1,77 @@
+# GeneNetwork Releases
+
+## Tags
+
+* status: open
+* priority:
+* assigned:
+* type: documentation
+* keywords: documentation, docs, release, releases, genenetwork
+
+## Introduction
+
+The sections that follow note down the commits used for various stable (and stable-ish) releases of genenetwork.
+
+The tagging of the commits will need to distinguish repository-specific tags from overall system tags.
+
+In this document, we only concern ourselves with the overall system tags, that shall have the template:
+
+```
+genenetwork-system-v<major>.<minor>.<patch>[-<commit>]
+```
+
+The portions in angle brackets will be replaced with the actual version numbers.
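
A parsing sketch for this template; the regex is an assumption derived directly from the template above:

```python
import re

# genenetwork-system-v<major>.<minor>.<patch>[-<commit>]
TAG_RE = re.compile(
    r"^genenetwork-system-v(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(?:-(?P<commit>[0-9a-f]+))?$"
)

def parse_system_tag(tag: str):
    """Split an overall system tag into its version components, or None."""
    m = TAG_RE.match(tag)
    return m.groupdict() if m else None

print(parse_system_tag("genenetwork-system-v1.0.0"))
```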
+
+## genenetwork-system-v1.0.0
+
+This is the first, guix-system-container-based, stable release of the entire genenetwork system.
+The commits involved are:
+
+=> https://github.com/genenetwork/genenetwork2/commit/314c6d597a96ac903071fcb6e50df3d9e88935e9 GN2: 314c6d5
+=> https://github.com/genenetwork/genenetwork3/commit/0d902ec267d96b87648669a7a43b699c8a22a3de GN3: 0d902ec
+=> https://git.genenetwork.org/gn-auth/commit/?id=8e64f7f8a392b8743a4f36c497cd2ec339fcfebc: gn-auth: 8e64f7f
+=> https://git.genenetwork.org/gn-libs/commit/?id=72a95f8ffa5401649f70978e863dd3f21900a611: gn-libs: 72a95f8
+
+The guix channels used for deployment of the system above are as follows:
+
+```
+(list (channel
+ (name 'guix-bioinformatics)
+ (url "https://git.genenetwork.org/guix-bioinformatics/")
+ (branch "master")
+ (commit
+ "039a3dd72c32d26b9c5d2cc99986fd7c968a90a5"))
+ (channel
+ (name 'guix-forge)
+ (url "https://git.systemreboot.net/guix-forge/")
+ (branch "main")
+ (commit
+ "bcb3e2353b9f6b5ac7bc89d639e630c12049fc42")
+ (introduction
+ (make-channel-introduction
+ "0432e37b20dd678a02efee21adf0b9525a670310"
+ (openpgp-fingerprint
+ "7F73 0343 F2F0 9F3C 77BF 79D3 2E25 EE8B 6180 2BB3"))))
+ (channel
+ (name 'guix-past)
+ (url "https://gitlab.inria.fr/guix-hpc/guix-past")
+ (branch "master")
+ (commit
+ "5fb77cce01f21a03b8f5a9c873067691cf09d057")
+ (introduction
+ (make-channel-introduction
+ "0c119db2ea86a389769f4d2b9c6f5c41c027e336"
+ (openpgp-fingerprint
+ "3CE4 6455 8A84 FDC6 9DB4 0CFB 090B 1199 3D9A EBB5"))))
+ (channel
+ (name 'guix)
+ (url "https://git.savannah.gnu.org/git/guix.git")
+ (branch "master")
+ (commit
+ "2394a7f5fbf60dd6adc0a870366adb57166b6d8b")
+ (introduction
+ (make-channel-introduction
+ "9edb3f66fd807b096b48283debdcddccfea34bad"
+ (openpgp-fingerprint
+ "BBB0 2DDF 2CEA F6A8 0D1D E643 A2A0 6DF2 A33A 54FA")))))
+```
diff --git a/topics/genenetwork/starting_gn1.gmi b/topics/genenetwork/starting_gn1.gmi
index efbfd0f..e31061f 100644
--- a/topics/genenetwork/starting_gn1.gmi
+++ b/topics/genenetwork/starting_gn1.gmi
@@ -51,9 +51,7 @@ On an update of guix the build may fail. Try
#######################################'
# Environment Variables - private
#########################################
- # sql_host = '[1]tux02.uthsc.edu'
- # sql_host = '128.169.4.67'
- sql_host = '172.23.18.213'
+ sql_host = '170.23.18.213'
SERVERNAME = sql_host
MYSQL_SERVER = sql_host
DB_NAME = 'db_webqtl'
diff --git a/topics/gn-learning-team/next-steps.gmi b/topics/gn-learning-team/next-steps.gmi
new file mode 100644
index 0000000..b427923
--- /dev/null
+++ b/topics/gn-learning-team/next-steps.gmi
@@ -0,0 +1,48 @@
+# Next steps
+
+Wednesday we had a wrap-up meeting of the gn-learning efforts.
+
+## Data uploading
+
+The goal of these meetings was to learn how to upload data into GN. In the process Felix has become the de facto uploader, next to Arthur. A C. elegans dataset was uploaded and Felix is preparing:
+
+* More C. elegans
+* HSRat
+* Killifish
+* Medaka
+
+Updates are here:
+
+=> https://issues.genenetwork.org/tasks/felixl
+
+We'll keep focusing on that work and hopefully we'll get more parties interested in doing some actual work down the line.
+
+## Hosting GN in Wageningen
+
+Harm commented that he thought these meetings were valuable; in particular we learnt a lot about GN's ins and outs. Harm suggests we focus on hosting GN in Wageningen for C. elegans and Arabidopsis.
+Pjotr says that is a priority this year, even if we start on a privately hosted machine in NL. Wageningen requires Docker images and Bonface says that is possible - with some work. So:
+
+* Host GN in NL
+* Make GN specific for C.elegans and Arabidopsis - both trim and add datasets
+* Create Docker container
+* Host Docker container in Wageningen
+* Present to other parties in Wageningen
+
+Having above datasets will help this effort succeed.
+
+## AI
+
+Harm is also very interested in the AI efforts and wants to pursue that in the context of above server - i.e., functionality arrives when it lands in GN.
+
+## Wormbase
+
+Jameson suggests we can work with WormBase and the CaeNDR folks once we have a running system. Interactive data analysis is very powerful and could run in conjunction with those sites.
+
+=> https://caendr.org/
+=> https://wormbase.org/
+
+Other efforts are Flybase and Arabidopsis Magic which we can host, in principle.
+
+## Mapping methods
+
+Jameson will continue with his work on residuals.
diff --git a/topics/octopus/maintenance.gmi b/topics/octopus/maintenance.gmi
new file mode 100644
index 0000000..65ea52e
--- /dev/null
+++ b/topics/octopus/maintenance.gmi
@@ -0,0 +1,98 @@
+# Octopus/Tux maintenance
+
+## To remember
+
+`fdisk -l` to see disk models
+`lsblk -nd` to see mounted disks
+
+## Status
+
+octopus02
+- Devices: 2 3.7T SSDs + 2 894.3G SSDs + 2 4.6T HDDs
+- **Status: Slurm not OK, LizardFS not OK**
+- Notes:
+ - `octopus02 mfsmount[31909]: can't resolve master hostname and/or portname (octopus01:9421)`,
+ - **I don't see 2 drives that are physically mounted**
+
+octopus03
+- Devices: 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: **I don't see 2 drives that are physically mounted**
+
+octopus04
+- Devices: 4 7.3 T SSDs (Neil) + 1 4.6T HDD + 1 3.7T SSD + 2 894.3G SSDs
+- Status: Slurm NO, LizardFS OK (we don't share the HDD)
+- Notes: no
+
+octopus05
+- Devices: 1 7.3 T SSDs (Neil) + 5 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: no
+
+octopus06
+- Devices: 1 7.3 T SSDs (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD)
+- Notes: no
+
+octopus07
+- Devices: 1 7.3 T SSDs (Neil) + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: **I don't see 1 device that is physically mounted**
+
+octopus08
+- Devices: 1 7.3 T SSDs (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD)
+- Notes: no
+
+octopus09
+- Devices: 1 7.3 T SSDs (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD)
+- Notes: no
+
+octopus10
+- Devices: 1 7.3 T SSDs (Neil) + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD)
+- Notes: **I don't see 1 device that is physically mounted**
+
+octopus11
+- Devices: 1 7.3 T SSDs (Neil) + 5 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: no
+
+tux05
+- Devices: 1 3.6T NVMe + 1 1.5T NVMe + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS OK (we don't share anything)
+- Notes: **I don't have a picture to confirm physically mounted devices**
+
+tux06
+- Devices: 2 3.6 T SSDs (1 from Neil) + 1 1.5T NVMe + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS (we don't share anything)
+- Notes:
+ - **Last picture reports 1 7.3 T SSD (Neil) that is missing**
+ - **Disk /dev/sdc: 3.64 TiB (Samsung SSD 990): free and usable for lizardfs**
+ - **Disk /dev/sdd: 3.64 TiB (Samsung SSD 990): free and usable for lizardfs**
+
+tux07
+- Devices: 3 3.6 T SSDs + 1 1.5T NVMe (Neil) + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS
+- Notes:
+ - **Disk /dev/sdb: 3.64 TiB (Samsung SSD 990): free and usable for lizardfs**
+ - **Disk /dev/sdd: 3.64 TiB (Samsung SSD 990): mounted at /mnt/sdb and shared on LIZARDFS: TO CHECK BECAUSE IT HAS NO PARTITIONS**
+
+tux08
+- Devices: 3 3.6 T SSDs + 1 1.5T NVMe (Neil) + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS
+- Notes: no
+
+tux09
+- Devices: 1 3.6 T SSDs + 1 1.5T NVMe + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS
+- Notes: **I don't see 1 device that is physically mounted**
+
+## Neil disks
+- four 8TB SSDs on the right of octopus04
+- one 8TB SSD in the left slot of octopus05
+- six 8TB SSDs, one in the bottom-right slot of each of octopus06,07,08,09,10,11
+- one 4TB NVMe and one 8TB SSDs on tux06, NVME in the bottom-right of the group of 4 on the left, SSD on the bottom-left of the group of 4 on the right
+- one 4TB NVMe on tux07, on the top-left of the group of 4 on the right
+- one 4TB NVMe on tux08, on the top-left of the group of 4 on the right
diff --git a/topics/octopus/recent-rust.gmi b/topics/octopus/recent-rust.gmi
new file mode 100644
index 0000000..7ce8968
--- /dev/null
+++ b/topics/octopus/recent-rust.gmi
@@ -0,0 +1,76 @@
+# Use a recent Rust on Octopus
+
+
+For impg we currently need a rust that is more recent than what we have in Debian
+or Guix. No panic, because Rust has few requirements.
+
+Install latest rust using the script
+
+```
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+Set path
+
+```
+. ~/.cargo/env
+```
+
+Update rust
+
+```
+rustup default stable
+```
+
+Running the update on octopus01 looks like:
+
+```
+octopus01:~/tmp/impg$ . ~/.cargo/env
+octopus01:~/tmp/impg$ rustup default stable
+info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
+info: latest update on 2025-05-15, rust version 1.87.0 (17067e9ac 2025-05-09)
+info: downloading component 'cargo'
+info: downloading component 'clippy'
+info: downloading component 'rust-docs'
+info: downloading component 'rust-std'
+info: downloading component 'rustc'
+(...)
+```
+
+and build the package
+
+```
+octopus01:~/tmp/impg$ cargo build
+```
+
+Since we are not in Guix, the binary links against the local system libraries:
+
+```
+octopus01:~/tmp/impg$ ldd target/debug/impg
+ linux-vdso.so.1 (0x00007ffdb266a000)
+ libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe404001000)
+ librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe403ff7000)
+ libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe403fd6000)
+ libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe403fd1000)
+ libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe403e11000)
+ /lib64/ld-linux-x86-64.so.2 (0x00007fe404682000)
+```
+
+After logging in on another octopus node, say octopus02, you can run impg from this directory:
+
+```
+octopus02:~$ ~/tmp/impg/target/debug/impg
+Command-line tool for querying overlaps in PAF files
+
+Usage: impg <COMMAND>
+
+Commands:
+ index Create an IMPG index
+ partition Partition the alignment
+ query Query overlaps in the alignment
+ stats Print alignment statistics
+
+Options:
+ -h, --help Print help
+ -V, --version Print version
+```
diff --git a/topics/programming/autossh-for-keeping-ssh-tunnels.gmi b/topics/programming/autossh-for-keeping-ssh-tunnels.gmi
new file mode 100644
index 0000000..a977232
--- /dev/null
+++ b/topics/programming/autossh-for-keeping-ssh-tunnels.gmi
@@ -0,0 +1,65 @@
+# Using autossh to Keep SSH Tunnels Alive
+
+## Tags
+* keywords: ssh, autossh, tunnel, alive
+
+
+## TL;DR
+
+```
+guix package -i autossh # Install autossh with Guix
+autossh -M 0 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 5" -L 4000:127.0.0.1:3306 alexander@remoteserver.org
+```
+
+## Introduction
+
+Autossh is a utility for automatically restarting SSH sessions and tunnels if they drop or become inactive. It's particularly useful for long-lived tunnels in unstable network environments.
+
+See official docs:
+
+=> https://www.harding.motd.ca/autossh/
+
+## Installing autossh
+
+Install autossh using Guix:
+
+```
+guix package -i autossh
+```
+
+Basic usage:
+
+```
+autossh [-V] [-M monitor_port[:echo_port]] [-f] [SSH_OPTIONS]
+```
+
+## Examples
+
+### Keep a database tunnel alive with autossh
+
+Forward a remote MySQL port to your local machine:
+
+**Using plain SSH:**
+
+```
+ssh -L 5000:localhost:3306 alexander@remoteserver.org
+```
+
+**Using autossh:**
+
+```
+autossh -L 5000:localhost:3306 alexander@remoteserver.org
+```
+
+### Better option
+
+```
+autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 5000:localhost:3306 alexander@remoteserver.org
+```
+
+#### Option explanations:
+
+- `ServerAliveInterval`: Seconds between sending keepalive packets to the server (default: 0).
+- `ServerAliveCountMax`: Number of unanswered keepalive packets before SSH disconnects (default: 3).
+
+You can also configure these options in your `~/.ssh/config` file to simplify command-line usage.
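+
+A sketch of such a `~/.ssh/config` entry (the host alias is a placeholder; user and ports are from the examples above):
+
+```
+Host dbtunnel
+    HostName remoteserver.org
+    User alexander
+    ServerAliveInterval 30
+    ServerAliveCountMax 3
+    LocalForward 5000 localhost:3306
+```
+
+With that in place `autossh -M 0 dbtunnel` sets up the same tunnel.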
diff --git a/topics/systems/backup-drops.gmi b/topics/systems/backup-drops.gmi
index 191b185..3f81c5a 100644
--- a/topics/systems/backup-drops.gmi
+++ b/topics/systems/backup-drops.gmi
@@ -4,6 +4,10 @@ To make backups we use a combination of sheepdog, borg, sshfs, rsync. sheepdog i
This system proves pretty resilient over time. Only on the synology server I can't get it to work because of some CRON permission issue.
+For doing the actual backups see
+
+=> ./backups-with-borg.gmi
+
# Tags
* assigned: pjotrp
@@ -13,7 +17,7 @@ This system proves pretty resilient over time. Only on the synology server I can
## Borg backups
-It is advised to use a backup password and not store that on the remote.
+Despite our precautions it is advised to use a backup password and *not* store that on the remote.
## Running sheepdog on rabbit
@@ -59,14 +63,14 @@ where remote can be an IP address.
Warning: if you introduce this `AllowUsers` command all users should be listed or people may get locked out of the machine.
-Next create a special key on the backup machine's ibackup user (just hit enter):
+Next create a special password-less key on the backup machine's ibackup user (just hit enter):
```
su ibackup
ssh-keygen -t ecdsa -f $HOME/.ssh/id_ecdsa_backup
```
-and copy the public key into the remote /home/bacchus/.ssh/authorized_keys
+and copy the public key into the remote /home/bacchus/.ssh/authorized_keys.
Now test it from the backup server with
@@ -82,13 +86,20 @@ On the drop server you can track messages by
tail -40 /var/log/auth.log
```
+or on recent linux with systemd
+
+```
+journalctl -r
+```
+
Next
```
ssh -v -i ~/.ssh/id_ecdsa_backup bacchus@dropserver
```
-should give a Broken pipe(!). In auth.log you may see something like
+should give a Broken pipe(!) or -- more recently -- it says `This service allows sftp connections only`.
+When running sshd with a verbose switch you may see something like
fatal: bad ownership or modes for chroot directory component "/export/backup/"
@@ -110,6 +121,19 @@ chown bacchus.bacchus backup/bacchus/drop/
chmod 0700 backup/bacchus/drop/
```
+Another error may be:
+
+```
+fusermount3: mount failed: Operation not permitted
+```
+
+This means you need to set the suid bit on the fusermount3 command. That is a bit nasty to do in Guix, so with Debian packages:
+
+```
+apt-get install fuse(3) sshfs
+chmod 4755 /usr/bin/fusermount3
+```
+
If auth.log says error: /dev/pts/11: No such file or directory on ssh, or received disconnect (...) disconnected by user we are good to go!
Note: at this stage it may pay to track the system log with
@@ -171,3 +195,5 @@ sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,IdentityFile=~/.
The recent scripts can be found at
=> https://github.com/genenetwork/gn-deploy-servers/blob/master/scripts/tux01/backup_drop.sh
+
+# borg-borg
diff --git a/topics/systems/backups-with-borg.gmi b/topics/systems/backups-with-borg.gmi
new file mode 100644
index 0000000..1ad0112
--- /dev/null
+++ b/topics/systems/backups-with-borg.gmi
@@ -0,0 +1,220 @@
+# Borg backups
+
+We use borg for backups. Borg is an amazing tool and after 25+ years of making backups it just feels right.
+With the new tux04 production install we need to organize backups off-site. The first step is to create a
+borg runner using sheepdog -- sheepdog we use for monitoring success/failure.
+Sheepdog essentially wraps a Unix command and sends a report to a local or remote redis instance.
+Sheepdog also includes a web server for output:
+
+=> http://sheepdog.genenetwork.org/sheepdog/status.html
+
+which I run on one of my machines.
+
+# Tags
+
+* assigned: pjotrp
+* keywords: systems, backup, sheepdog, database
+
+# Install borg
+
+Usually I use a version of borg from guix. This should really be done as the borg user (ibackup).
+
+```
+mkdir ~/opt
+guix package -i borg -p ~/opt/borg
+tux04:~$ ~/opt/borg/bin/borg --version
+ 1.2.2
+```
+
+# Create a new backup dir and user
+
+The backup should live on a different disk from the things we backup, so when that disk fails we have another.
+
+The SQL database lives on /export and the containers live on /export2. /export3 is a largish slow drive, so perfect.
+
+By convention I point /export/backup to the real backup dir on /export3/backup/borg/. Another convention is that we use an ibackup user which has the backup passphrase in ~/.borg-pass. As root:
+
+```
+mkdir /export/backup/borg
+chown ibackup:ibackup /export/backup/borg
+chown ibackup:ibackup /home/ibackup/.borg-pass
+su ibackup
+```
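+
+Here ~/.borg-pass is a small file that exports the passphrase, which borg reads from the BORG_PASSPHRASE environment variable (a sketch - the passphrase is obviously a placeholder):
+
+```
+# ~/.borg-pass -- source with `. ~/.borg-pass`
+export BORG_PASSPHRASE='some-long-secret'
+```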
+
+Now you should be able to load the passphrase and create the backup dir
+
+```
+id
+ uid=1003(ibackup)
+. ~/.borg-pass
+cd /export/backup/borg
+~/opt/borg/bin/borg init --encryption=repokey-blake2 genenetwork
+```
+
+Now we can run our first backup. Note that ibackup should be a member of the mysql and gn groups, e.g. in /etc/group:
+
+```
+mysql:x:116:ibackup
+```
+
+# First backup
+
+Run the backup the first time:
+
+```
+id
+ uid=1003(ibackup) groups=1003(ibackup),116(mysql)
+~/opt/borg/bin/borg create --progress --stats genenetwork::first-backup /export/mysql/database/*
+```
+
+You may first need to update permissions to give group access
+
+```
+chmod g+rx -R /var/lib/mysql/*
+```
+
+When that works borg reports:
+
+```
+Archive name: first-backup
+Archive fingerprint: 376d32fda9738daa97078fe4ca6d084c3fa9be8013dc4d359f951f594f24184d
+Time (start): Sat, 2025-02-08 04:46:48
+Time (end): Sat, 2025-02-08 05:30:01
+Duration: 43 minutes 12.87 seconds
+Number of files: 799
+Utilization of max. archive size: 0%
+------------------------------------------------------------------------------
+ Original size Compressed size Deduplicated size
+This archive: 534.24 GB 238.43 GB 237.85 GB
+All archives: 534.24 GB 238.43 GB 238.38 GB
+ Unique chunks Total chunks
+Chunk index: 200049 227228
+------------------------------------------------------------------------------
+```
+
+50% compression is not bad. Borg is incremental, so it will only back up the differences next round.
+
+Once borg works we could run a CRON job. But we should use the sheepdog monitor to make sure backups keep running and failures do not go unnoticed.
+
+# Using the sheepdog
+
+=> https://github.com/pjotrp/deploy sheepdog code
+
+## Clone sheepdog
+
+=> https://github.com/pjotrp/deploy#install sheepdog install
+
+Essentially clone the repo so it shows up in ~/deploy
+
+```
+cd /home/ibackup
+git clone https://github.com/pjotrp/deploy.git
+/export/backup/scripts/tux04/backup-tux04.sh
+```
+
+## Setup redis
+
+All sheepdog messages get pushed to redis. You can run it locally or remotely.
+
+By default we use redis, but syslog and others may also be used. The advantage of redis is that it is not bound to the same host, can cross firewalls using an ssh reverse tunnel, and is easy to query.
+
+=> https://github.com/pjotrp/deploy#install sheepdog install
+
+In our case we use redis on a remote host and the results get displayed by a webserver. Also some people get E-mail updates on failure. The configuration is in
+
+```
+/home/ibackup# cat .config/sheepdog/sheepdog.conf
+{
+ "redis": {
+ "host" : "remote-host",
+ "password": "something"
+ }
+}
+```
+
+If you see localhost with port 6377 it is probably a reverse tunnel setup:
+
+=> https://github.com/pjotrp/deploy#redis-reverse-tunnel
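+
+Such a tunnel forwards a local port to redis on the monitoring host. A hedged ssh config sketch from the backup host's side (the host name is a placeholder, and the real setup per the link above may be built in the reverse direction):
+
+```
+# ~/.ssh/config on the backup host (sketch)
+Host redis-tunnel
+    HostName remote-host
+    LocalForward 6377 127.0.0.1:6379
+    ServerAliveInterval 60
+```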
+
+Update the fields according to what we use. The main thing is that this is the definition of the sheepdog->redis connector. If you also use sheepdog as another user you'll need to add a config for that user too.
+
+Sheepdog should show a warning when you configure redis and it is not connecting.
+
+## Scripts
+
+Typically I run the cron job from root CRON so people can find it. Still it is probably a better idea to use an ibackup CRON. In my version a script is run that also captures output:
+
+```cron root
+0 6 * * * /bin/su ibackup -c /export/backup/scripts/tux04/backup-tux04.sh >> ~/cron.log 2>&1
+```
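+
+The ibackup user's own crontab would look similar, minus the `su` (a sketch):
+
+```cron ibackup
+0 6 * * * /export/backup/scripts/tux04/backup-tux04.sh >> $HOME/cron.log 2>&1
+```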
+
+The script contains something like
+
+```bash
+#! /bin/bash
+if [ "$EUID" -eq 0 ]
+ then echo "Please do not run as root. Run as: su ibackup -c $0"
+ exit
+fi
+rundir=$(dirname "$0")
+# ---- for sheepdog
+source $rundir/sheepdog_env.sh
+cd $rundir
+sheepdog_borg.rb -t borg-tux04-sql --group ibackup -v -b /export/backup/borg/genenetwork /export/mysql/database/*
+```
+
+and the accompanying sheepdog_env.sh
+
+```
+export GEM_PATH=/home/ibackup/opt/deploy/lib/ruby/vendor_ruby
+export PATH=/home/ibackup/opt/deploy/deploy/bin:/home/wrk/opt/deploy/bin:$PATH
+```
+
+If it reports
+
+```
+/export/backup/scripts/tux04/backup-tux04.sh: line 11: /export/backup/scripts/tux04/sheepdog_env.sh: No such file or directory
+```
+
+you need to install sheepdog first.
+
+If all shows green (and takes some time) we made a backup. Check the backup with
+
+```
+ibackup@tux04:/export/backup/borg$ borg list genenetwork/
+first-backup Sat, 2025-02-08 04:39:50 [58715b883c080996ab86630b3ae3db9bedb65e6dd2e83977b72c8a9eaa257cdf]
+borg-tux04-sql-20250209-01:43-Sun Sun, 2025-02-09 01:43:23 [5e9698a032143bd6c625cdfa12ec4462f67218aa3cedc4233c176e8ffb92e16a]
+```
+and you should see the latest. The contents with all files should be visible with
+
+```
+borg list genenetwork::borg-tux04-sql-20250209-01:43-Sun
+```
+
+Make sure you see the actual files and not just a symlink.
+
+# More backups
+
+Our production server runs databases and file stores that need to be backed up too.
+
+# Drop backups
+
+Once backups work it is useful to copy them to a remote server, so when the machine stops functioning we have another chance at recovery. See
+
+=> ./backup-drops.gmi
+
+# Recovery
+
+With tux04 we ran into a problem where all disks were getting corrupted(!), probably due to the RAID controller, but we still need to figure that one out.
+
+Anyway, we have to assume the DB is corrupt, the files are corrupt AND the backups are corrupt. Borg backups have checksums which you can verify with
+
+```
+borg check repo
+```
+
+It also has a --repair switch, which we needed to remove some faults in the backup itself:
+
+```
+borg check --repair repo
+```
diff --git a/topics/systems/ci-cd.gmi b/topics/systems/ci-cd.gmi
index 6aa17f2..a1ff2e3 100644
--- a/topics/systems/ci-cd.gmi
+++ b/topics/systems/ci-cd.gmi
@@ -31,7 +31,7 @@ Arun has figured out the CI part. It runs a suitably configured laminar CI servi
CD hasn't been figured out. Normally, Guix VMs and containers created by `guix system` can only access the store read-only. Since containers don't have write access to the store, you cannot `guix build' from within a container or deploy new containers from within a container. This is a problem for CD. How do you make Guix containers have write access to the store?
-Another alternative for CI/ CID were to have the quick running tests, e.g unit tests, run on each commit to branch "main". Once those are successful, the CI/CD system we choose should automatically pick the latest commit that passed the quick running tests for for further testing and deployment, maybe once an hour or so. Once the next battery of tests is passed, the CI/CD system will create a build/artifact to be deployed to staging and have the next battery of tests runs against it. If that passes, then that artifact could be deployed to production, and details on the commit and
+Another alternative for CI/CD would be to have the quick running tests, e.g. unit tests, run on each commit to branch "main". Once those are successful, the CI/CD system we choose should automatically pick the latest commit that passed the quick running tests for further testing and deployment, maybe once an hour or so. Once the next battery of tests passes, the CI/CD system will create a build/artifact to be deployed to staging and have the next battery of tests run against it. If that passes, then that artifact could be deployed to production, and details on the commit and
#### Possible Steps
@@ -90,3 +90,49 @@ This contains a check-list of things that need to be done:
=> /topics/systems/orchestration Orchestration
=> /issues/broken-cd Broken-cd (Resolved)
+
+## Adding a web-hook
+
+### Github hooks
+
+IIRC actions run artifacts inside github's infrastructure. We use webhooks instead, e.g.:
+
+Update the hook at
+
+=> https://github.com/genenetwork/genenetwork3/settings/hooks
+
+=> ./screenshot-github-webhook.png
+
+To trigger CI manually, run this with the project name:
+
+```
+curl https://ci.genenetwork.org/hooks/example-gn3
+```
+
+For gemtext we have a github hook that adds a forge-project and looks like
+
+```lisp
+(define gn-gemtext-threads-project
+ (forge-project
+ (name "gn-gemtext-threads")
+ (repository "https://github.com/genenetwork/gn-gemtext-threads/")
+ (ci-jobs (list (forge-laminar-job
+ (name "gn-gemtext-threads")
+ (run (with-packages (list nss-certs openssl)
+ (with-imported-modules '((guix build utils))
+ #~(begin
+ (use-modules (guix build utils))
+
+ (setenv "LC_ALL" "en_US.UTF-8")
+ (invoke #$(file-append tissue "/bin/tissue")
+ "pull" "issues.genenetwork.org"))))))))
+ (ci-jobs-trigger 'webhook)))
+```
+
+Guix forge can be found at
+
+=> https://git.systemreboot.net/guix-forge/
+
+### git.genenetwork.org hooks
+
+TBD
diff --git a/topics/systems/mariadb/mariadb.gmi b/topics/systems/mariadb/mariadb.gmi
index ae0ab19..ec8b739 100644
--- a/topics/systems/mariadb/mariadb.gmi
+++ b/topics/systems/mariadb/mariadb.gmi
@@ -16,6 +16,8 @@ To install Mariadb (as a container) see below and
Start the client and:
```
+mysql
+show databases
MariaDB [db_webqtl]> show binary logs;
+-----------------------+-----------+
| Log_name | File_size |
@@ -60,4 +62,11 @@ Stop the running mariadb-guix.service. Restore the latest backup archive and ove
=> https://www.borgbackup.org/ Borg
=> https://borgbackup.readthedocs.io/en/stable/ Borg documentation
-#
+# Upgrade mariadb
+
+It is wise to upgrade mariadb once in a while. In a disaster recovery it is better to move forward in versions too.
+Before upgrading make sure there is a decent backup of the current setup.
+
+See also
+
+=> /issues/systems/tux04-disk-issues.gmi
diff --git a/topics/systems/mariadb/precompute-mapping-input-data.gmi b/topics/systems/mariadb/precompute-mapping-input-data.gmi
index 0c89fe5..977120d 100644
--- a/topics/systems/mariadb/precompute-mapping-input-data.gmi
+++ b/topics/systems/mariadb/precompute-mapping-input-data.gmi
@@ -49,10 +49,29 @@ The original reaper precompute lives in
=> https://github.com/genenetwork/genenetwork2/blob/testing/scripts/maintenance/QTL_Reaper_v6.py
-This script first fetches inbredsets
+More recent incarnations are at v8, including a PublishData version that can be found in
+
+=> https://github.com/genenetwork/genenetwork2/tree/testing/scripts/maintenance
+
+Note that the locations are on space:
+
+```
+cd /mount/space2/lily-clone/acenteno/GN-Data
+ls -l
+python QTL_Reaper_v8_space_good.py 116
+--
+python UPDATE_Mean_MySQL_tab.py
+cd /mount/space2/lily-clone/gnshare/gn/web/webqtl/maintainance
+ls -l
+python QTL_Reaper_cal_lrs.py 7
+```
+
+The first task is to prepare an update script that can run a set at a time and compute GEMMA output (instead of reaper).
+
+The script first fetches inbredsets
```
- select Id,InbredSetId,InbredSetName,Name,SpeciesId,FullName,public,MappingMethodId,GeneticType,Family,FamilyOrder,MenuOrderId,InbredSetCode from InbredSet LIMIT 5;
+select Id,InbredSetId,InbredSetName,Name,SpeciesId,FullName,public,MappingMethodId,GeneticType,Family,FamilyOrder,MenuOrderId,InbredSetCode from InbredSet LIMIT 5;
+----+-------------+-------------------+----------+-----------+-------------------+--------+-----------------+-------------+--------------------------------------------------+-------------+-------------+---------------+
| Id | InbredSetId | InbredSetName | Name | SpeciesId | FullName | public | MappingMethodId | GeneticType | Family | FamilyOrder | MenuOrderId | InbredSetCode |
+----+-------------+-------------------+----------+-----------+-------------------+--------+-----------------+-------------+--------------------------------------------------+-------------+-------------+---------------+
diff --git a/topics/systems/migrate-p2.gmi b/topics/systems/migrate-p2.gmi
deleted file mode 100644
index c7fcb90..0000000
--- a/topics/systems/migrate-p2.gmi
+++ /dev/null
@@ -1,12 +0,0 @@
-* Penguin2 crash
-
-This week the boot partition of P2 crashed. We have a few lessons here, not least having a fallback for all services ;)
-
-* Tasks
-
-- [ ] setup space.uthsc.edu for GN2 development
-- [ ] update DNS to tux02 128.169.4.52 and space 128.169.5.175
-- [ ] move CI/CD to tux02
-
-
-* Notes
diff --git a/topics/systems/screenshot-github-webhook.png b/topics/systems/screenshot-github-webhook.png
new file mode 100644
index 0000000..08feed3
--- /dev/null
+++ b/topics/systems/screenshot-github-webhook.png
Binary files differ
diff --git a/topics/systems/synchronising-the-different-environments.gmi b/topics/systems/synchronising-the-different-environments.gmi
new file mode 100644
index 0000000..207b234
--- /dev/null
+++ b/topics/systems/synchronising-the-different-environments.gmi
@@ -0,0 +1,68 @@
+# Synchronising the Different Environments
+
+## Tags
+
+* status: open
+* priority:
+* type: documentation
+* assigned: fredm
+* keywords: doc, docs, documentation
+
+## Introduction
+
+We have different environments we run for various reasons, e.g.
+
+* Production: This is the user-facing environment. This is what GeneNetwork is about.
+* gn2-fred: production-adjacent. It is meant to test out changes before they get to production. It is **NOT** meant for users.
+* CI/CD: Used for development. The latest commits get auto-deployed here. It's the first place (outside of developer machines) where errors and breakages are caught and/or revealed. This will break a lot. Do not expose to users!
+* staging: Uploader environment. This is where Felix, Fred and Arthur flesh out the upload process, and tasks, and also test out the uploader.
+
+These different environments demand synchronisation, in order to have mostly similar results and failure modes.
+
+## Synchronisation of the Environments
+
+### Main Database: MariaDB
+
+* [ ] TODO: Describe process
+
+=> https://issues.genenetwork.org/topics/systems/restore-backups Extract borg archive
+* Automate? Will probably need some checks for data sanity.
+
+### Authorisation Database
+
+* [ ] TODO: Describe process
+
+* Copy backup from production
+* Update/replace GN2 client configs in database
+* What other things?
+
+### Virtuoso/RDF
+
+* [ ] TODO: Describe process
+
+* Copy TTL (Turtle) files from (where?). Production might not always be latest source of TTL files.
+=> https://issues.genenetwork.org/issues/set-up-virtuoso-on-production Run setup to "activate" database entries
+* Can we automate this? What checks are necessary?
+
+## Genotype Files
+
+* [ ] TODO: Describe process
+
+* Copy from source-of-truth (currently Zach's tux01 and/or production).
+* Rsync?
+
+### gn-docs
+
+* [ ] TODO: Describe process
+
+* Not sure changes from other environments should ever take effect here
+
+### AI Summaries (aka. gnqna)
+
+* [ ] TODO: Describe process
+
+* Update configs (should be once, during container setup)
+
+### Others?
+
+* [ ] TODO: Describe process
diff --git a/topics/systems/update-production-checklist.gmi b/topics/systems/update-production-checklist.gmi
new file mode 100644
index 0000000..b17077b
--- /dev/null
+++ b/topics/systems/update-production-checklist.gmi
@@ -0,0 +1,182 @@
+# Update production checklist
+
+
+# Tasks
+
+* [X] Install underlying Debian
+* [X] Get guix going
+* [ ] Check database
+* [ ] Check gemma working
+* [ ] Check global search
+* [ ] Check authentication
+* [ ] Check sending E-mails
+* [ ] Make sure info.genenetwork.org can reach the DB
+* [ ] Backups
+
+The following are at the system level
+
+* [ ] Make journalctl persistent
+* [ ] Update certificates in CRON
+* [ ] Run trim in CRON
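+
+A sketch for the journal and trim items (paths are the Debian defaults):
+
+```
+# /etc/systemd/journald.conf -- make the journal persistent across reboots
+[Journal]
+Storage=persistent
+
+# root crontab -- weekly TRIM of all mounted filesystems that support it
+0 5 * * 0 /sbin/fstrim -av
+```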
+
+# Install underlying Debian
+
+For our production systems we use Debian as a base install. Once installed:
+
+* [X] set up git in /etc and limit permissions to root user
+* [X] add ttyS0 support for grub and kernel - so out-of-band works
+* [X] start ssh server and configure not to use with passwords
+* [X] start nginx and check external networking
+* [ ] set up E-mail routing
+
+It may help to mount the old root if you have it. Now it is on
+
+```
+mount /dev/sdd2 /mnt/old-root/
+```
+
+# Get Guix going
+
+* [X] Install Guix daemon
+* [X] Move /gnu/store to larger partition
+* [X] Update Guix daemon and setup in systemd
+* [X] Make available in /usr/local/guix-profiles
+* [X] Clean up /etc/profile
+
+We can bootstrap with the Debian guix package. Next move the store to a large partition and hard mount it in /etc/fstab with
+
+```
+/export2/gnu /gnu none defaults,bind 0 0
+```
+
+Run guix pull
+
+```
+wrk@tux04:~$ guix pull -p ~/opt/guix-pull --url=https://codeberg.org/guix/guix-mirror.git
+```
+
+Use that to install guix in /usr/local/guix-profiles
+
+```
+guix package -i guix -p /usr/local/guix-profiles/guix
+```
+
+and update the daemon in systemd accordingly. After that I tend to remove /usr/bin/guix.
+
+The Debian installer configures guix. I tend to remove the profiles from /etc/profile so people have a minimal profile.
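+
+The updated systemd unit then points ExecStart at the new profile. A sketch, modeled on the guix-daemon.service that Guix itself ships:
+
+```
+# /etc/systemd/system/guix-daemon.service (sketch)
+[Unit]
+Description=Build daemon for GNU Guix
+
+[Service]
+ExecStart=/usr/local/guix-profiles/guix/bin/guix-daemon --build-users-group=guixbuild
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
+```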
+
+# Check database
+
+* [X] Install mariadb
+* [ ] Recover database
+* [ ] Test permissions
+* [ ] Mariadb update my.cnf
+
+Basically, recovering the database from a backup and setting permissions is the best start. We usually take the default mariadb unless production is already on a newer version - in that case we move to a guix deployment.
+
+On tux02 mariadb-10.5.8 is running. On Debian it is now 10.11.11-0+deb12u1, so we should be good. On Guix it is 10.10 at this point.
+
+```
+apt-get install mariadb-server
+```
+
+Next unpack the database files and set permissions to the mysql user. And (don't forget) update the /etc/mysql config files.
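+
+The main thing to check in the /etc/mysql config is the data directory. A sketch - the file path is the Debian default and the datadir is an assumption based on the backup location used elsewhere in these notes:
+
+```
+# e.g. /etc/mysql/mariadb.conf.d/50-server.cnf
+[mysqld]
+datadir = /export/mysql/database/mysql
+```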
+
+Restart mysql until you see:
+
+```
+mysql -u webqtlout -p -e "show databases"
++---------------------------+
+| Database |
++---------------------------+
+| 20081110_uthsc_dbdownload |
+| db_GeneOntology |
+| db_webqtl |
+| db_webqtl_s |
+| go |
+| information_schema |
+| kegg |
+| mysql |
+| performance_schema |
+| sys |
++---------------------------+
+```
+
+=> /topics/systems/mariadb/mariadb.gmi
+
+## Recover database
+
+We use borg for backups. First restore the backup on the PCIe drive - also a test for overheating!
+
+
+# Check sending E-mails
+
+The swaks package is quite useful to test for a valid receive host:
+
+```
+swaks --to testing-my-server@gmail.com --server smtp.uthsc.edu
+=== Trying smtp.uthsc.edu:25...
+=== Connected to smtp.uthsc.edu.
+<- 220 mailrouter8.uthsc.edu ESMTP NO UCE
+ -> EHLO tux04.uthsc.edu
+<- 250-mailrouter8.uthsc.edu
+<- 250-PIPELINING
+<- 250-SIZE 26214400
+<- 250-VRFY
+<- 250-ETRN
+<- 250-STARTTLS
+<- 250-ENHANCEDSTATUSCODES
+<- 250-8BITMIME
+<- 250-DSN
+<- 250 SMTPUTF8
+ -> MAIL FROM:<root@tux04.uthsc.edu>
+<- 250 2.1.0 Ok
+ -> RCPT TO:<pjotr2020@thebird.nl>
+<- 250 2.1.5 Ok
+ -> DATA
+<- 354 End data with <CR><LF>.<CR><LF>
+ -> Date: Thu, 06 Mar 2025 08:34:24 +0000
+ -> To: pjotr2020@thebird.nl
+ -> From: root@tux04.uthsc.edu
+ -> Subject: test Thu, 06 Mar 2025 08:34:24 +0000
+ -> Message-Id: <20250306083424.624509@tux04.uthsc.edu>
+ -> X-Mailer: swaks v20201014.0 jetmore.org/john/code/swaks/
+ ->
+ -> This is a test mailing
+ ->
+ ->
+ -> .
+<- 250 2.0.0 Ok: queued as 4157929DD
+ -> QUIT
+<- 221 2.0.0 Bye === Connection closed with remote host
+```
+
+An exim configuration can be
+
+```
+dc_eximconfig_configtype='smarthost'
+dc_other_hostnames='genenetwork.org'
+dc_local_interfaces='127.0.0.1 ; ::1'
+dc_readhost=''
+dc_relay_domains=''
+dc_minimaldns='false'
+dc_relay_nets=''
+dc_smarthost='smtp.uthsc.edu'
+CFILEMODE='644'
+dc_use_split_config='false'
+dc_hide_mailname='false'
+dc_mailname_in_oh='true'
+dc_localdelivery='maildir_home'
+```
+
+And this should work:
+
+```
+swaks --to myemailaddress --from john@uthsc.edu --server localhost
+```
+
+# Backups
+
+* [ ] Create an ibackup user.
+* [ ] Install borg (usually guix version)
+* [ ] Create a borg passphrase