Diffstat (limited to 'topics')
-rw-r--r--  topics/ai/aider.gmi                                                  |   6
-rw-r--r--  topics/ai/ontogpt.gmi                                                |   7
-rw-r--r--  topics/database/mariadb-database-architecture.gmi                    |  66
-rw-r--r--  topics/deploy/genecup.gmi                                            |  69
-rw-r--r--  topics/deploy/installation.gmi                                       |   2
-rw-r--r--  topics/deploy/machines.gmi                                           |   7
-rw-r--r--  topics/deploy/setting-up-or-migrating-production-across-machines.gmi |  58
-rw-r--r--  topics/deploy/uthsc-vpn-with-free-software.gmi                       |  11
-rw-r--r--  topics/deploy/uthsc-vpn.scm                                          |   2
-rw-r--r--  topics/genenetwork-releases.gmi                                      |  77
-rw-r--r--  topics/genenetwork/starting_gn1.gmi                                  |   4
-rw-r--r--  topics/gn-learning-team/next-steps.gmi                               |  48
-rw-r--r--  topics/octopus/maintenance.gmi                                       |  98
-rw-r--r--  topics/octopus/recent-rust.gmi                                       |  76
-rw-r--r--  topics/programming/autossh-for-keeping-ssh-tunnels.gmi               |  65
-rw-r--r--  topics/systems/backup-drops.gmi                                      |  34
-rw-r--r--  topics/systems/backups-with-borg.gmi                                 | 220
-rw-r--r--  topics/systems/ci-cd.gmi                                             |  48
-rw-r--r--  topics/systems/mariadb/mariadb.gmi                                   |  11
-rw-r--r--  topics/systems/mariadb/precompute-mapping-input-data.gmi             |  23
-rw-r--r--  topics/systems/migrate-p2.gmi                                        |  12
-rw-r--r--  topics/systems/screenshot-github-webhook.png                         | bin 0 -> 177112 bytes
-rw-r--r--  topics/systems/synchronising-the-different-environments.gmi          |  68
-rw-r--r--  topics/systems/update-production-checklist.gmi                       | 182
24 files changed, 1134 insertions, 60 deletions
diff --git a/topics/ai/aider.gmi b/topics/ai/aider.gmi
index 71dfa9e..aa88e71 100644
--- a/topics/ai/aider.gmi
+++ b/topics/ai/aider.gmi
@@ -1,12 +1,16 @@
 # Aider
 
-https://aider.chat/
+=> https://aider.chat/
 
+```
 python3 -m venv ~/opt/python-aider
 ~/opt/python-aider/bin/python3 -m pip install aider-install
 ~/opt/python-aider/bin/aider-install
+```
 
 Installed 1 executable: aider
 Executable directory /home/wrk/.local/bin is already in PATH
 
+```
 aider --model gpt-4o --openai-api-key aa...
+```
diff --git a/topics/ai/ontogpt.gmi b/topics/ai/ontogpt.gmi
new file mode 100644
index 0000000..94bd165
--- /dev/null
+++ b/topics/ai/ontogpt.gmi
@@ -0,0 +1,10 @@
+# OntoGPT
+
+```
+python3 -m venv ~/opt/ontogpt
+~/opt/ontogpt/bin/python3 -m pip install ontogpt
+```
+
+```
+runoak set-apikey -e openai
+```
diff --git a/topics/database/mariadb-database-architecture.gmi b/topics/database/mariadb-database-architecture.gmi
index 5c9b0c5..0454d71 100644
--- a/topics/database/mariadb-database-architecture.gmi
+++ b/topics/database/mariadb-database-architecture.gmi
@@ -28,6 +28,12 @@ Naming convention-wise there is a confusing use of id and data-id in particular.
 The default install comes with a smaller database which includes a
 number of the BXDs and the Human liver dataset (GSE9588).
 
+It can be downloaded from:
+
+=> https://files.genenetwork.org/database/
+
+Try the latest one first.
+
 # GeneNetwork database
 
 Estimated table sizes with metadata comment for the important tables
@@ -536,8 +542,8 @@ select * from ProbeSetSE limit 5;
 
 For the other tables, you may check the GN2/doc/database.org document (the starting point for this document).
 
-# Contributions regarding data upload to the GeneNetwork webserver 
-* Ideas shared by the GeneNetwork team to facilitate the process of uploading data to production 
+# Contributions regarding data upload to the GeneNetwork webserver
+* Ideas shared by the GeneNetwork team to facilitate the process of uploading data to production
 
 ## Quality check and integrity of the data to be uploaded to gn2
 
@@ -556,7 +562,7 @@ For the other tables, you may check the GN2/doc/database.org document (the start
 * Unique identifiers solve the hurdles that come with having duplicate genes. So, the QA tools in place should ensure the uploaded dataset adheres to the requirements mentioned
 * However, newer RNA-seq data sets generated by sequencing do not usually have an official vendor identifier. The identifier is usually based on the NCBI mRNA model (NM_XXXXXX) that was used to evaluate an expression and on the sequence that is involved, usually the start and stop nucleotide positions based on a specific genome assembly or just a suffix to make sure it is unique. In this case, you are looking at mRNA assays for a single transcript, but different parts of the transcript that have different genome coordinates. We now typically use ENSEMBL identifiers.
 * The mouse version of the sonic hedgehog gene as an example: `ENSMUST00000002708` or `ENSMUSG00000002633` sources should be fine. The important thing is to know the provenance of the ID—who is in charge of that ID type?
-* When a mRNA assay is super precise (one exon only or a part of the 5' UTR), then we should use exon identifiers from ENSEMBL probably. 
+* When a mRNA assay is super precise (one exon only or a part of the 5' UTR), then we should use exon identifiers from ENSEMBL probably.
 * Ideally, we should enter the sequence's first and last 100 nt in GeneNetwork for verification and  alignment. We did this religiously for arrays, but have started to get lazy now. The sequence is the ultimate identifier
 * For methylation arrays and CpG assays, we can use this format `cg14050475` as seen in MBD UTHSC Ben's data
 * For metabolites like isoleucine—the ID we have been using is the mass-to-charge (MZ) ratio such as `130.0874220_MZ`
@@ -579,16 +585,16 @@ abcb10_q9ji39_t312
 
 ## BXD individuals
 
-* Basically groups (represented by the InbredSet tables) are primarily defined by their list of samples/strains (represented by the Strain tables). When we create a new group, it's because we have data with a distinct set of samples/strains from any existing groups. 
-* So when we receive data for BXD individuals, as far as the database is concerned they are a completely separate group (since the list of samples is new/distinct from any other existing groups). We can choose to also enter it as part of the "generic" BXD group (by converting it to strain means/SEs using the strain of each individual, assuming it's provided like in the files Arthur was showing us). 
+* Basically groups (represented by the InbredSet tables) are primarily defined by their list of samples/strains (represented by the Strain tables). When we create a new group, it's because we have data with a distinct set of samples/strains from any existing groups.
+* So when we receive data for BXD individuals, as far as the database is concerned they are a completely separate group (since the list of samples is new/distinct from any other existing groups). We can choose to also enter it as part of the "generic" BXD group (by converting it to strain means/SEs using the strain of each individual, assuming it's provided like in the files Arthur was showing us).
 * This same logic could apply to other groups as well - we could choose to make one group the "strain mean" group for another set of groups that contain sample data for individuals. But the database doesn't reflect the relationship between these groups*
 * As far as the database is concerned, there is no distinction between strain means and individual sample data - they're all rows in the ProbeSetData/PublishData tables. The only difference is that strain mean data will probably also have an SE value in the ProbeSetSE/PublishSE tables and/or an N (number of individuals per strain) value in the NStrain table
 * As for what this means for the uploader - I think it depends on whether Rob/Arthur/etc wants to give users the ability to simultaneously upload both strain mean and individual data. For example, if someone uploads some BXD individuals' data, do we want the uploader to both create a new group for this (or add to an existing BXD individuals group) and calculate the strain means/SE and enter it into the "main" BXD group? My personal feeling is that it's probably best to postpone that for later and only upload the data with the specific set of samples indicated in the file since it would insert some extra complexity to the uploading process that could always be added later (since the user would need to select "the group the strains are from" as a separate option)
 * The relationship is sorta captured in the CaseAttribute and CaseAttributeXRefNew tables (which contain sample metadata), but only in the form of the metadata that is sometimes displayed as extra columns in the trait page table - this data isn't used in any queries/analyses currently (outside of some JS filters run on the table itself) and isn't that important as part of the uploading process (or at least can be postponed)
 
-## Individual Datasets and Derivatives datasets in gn2 
-* Individual dataset reflects the actual data provided or submitted by the investigator (user). Derivative datasets include the processed information from the individual dataset, as in the case of the average datasets. 
-* An example of an individual dataset would look something like; (MBD dataset) 
+## Individual Datasets and Derivatives datasets in gn2
+* Individual dataset reflects the actual data provided or submitted by the investigator (user). Derivative datasets include the processed information from the individual dataset, as in the case of the average datasets.
+* An example of an individual dataset would look something like; (MBD dataset)
 ```
 #+begin_example
 sample, strain, Sex, Age,…
@@ -600,13 +606,13 @@ FEB0005,BXD16,F,14,…

 #+end_example
 ```
-* The strain column above has repetitive values. Each value has a one-to-many relationship with values on sample column. From this dataset, there can be several derivatives. For example; 
-- Sex-based categories 
-- Average data (3 sample values averaged to one strain value) 
-- Standard error table computed for the averages 
+* The strain column above has repetitive values. Each value has a one-to-many relationship with values on sample column. From this dataset, there can be several derivatives. For example;
+- Sex-based categories
+- Average data (3 sample values averaged to one strain value)
+- Standard error table computed for the averages
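The average and standard-error derivatives described above can be sketched with plain shell tools. This is only an illustration - the CSV layout mimics the MBD example and the values are invented; it is not the query GeneNetwork itself runs:

```shell
# Build a toy individual-level file (sample,strain,value); values are invented.
cat > samples.csv <<'EOF'
sample,strain,value
FEB0001,BXD1,9.2
FEB0002,BXD1,9.6
FEB0003,BXD1,9.4
FEB0004,BXD16,8.1
FEB0005,BXD16,8.5
EOF

# Collapse individuals to per-strain mean and SE (stdev / sqrt(n)).
awk -F, 'NR > 1 { n[$2]++; s[$2] += $3; ss[$2] += $3 * $3 }
END {
  for (k in n) {
    mean = s[k] / n[k]
    var = (ss[k] - n[k] * mean * mean) / (n[k] - 1)   # sample variance
    printf "%s %.3f %.4f\n", k, mean, sqrt(var / n[k])
  }
}' samples.csv | sort
```

This prints one row per strain (strain, mean, SE), i.e. the shape of the average and SE derivative tables; the BXD1 row comes out as `BXD1 9.400 0.1155`.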
 
-## Saving data to database 
-* Strain table schema 
+## Saving data to database
+* Strain table schema
 ```
 #+begin_src sql
   MariaDB [db_webqtl]> DESC Strain;
@@ -639,21 +645,21 @@ FEB0005,BXD16,F,14,…
   5 rows in set (0.00 sec)
 #+end_src
 ```
-* Where the =InbredSetId= comes from the =InbredSet= table and the =StrainId= comes from the =Strain= table. The *individual data* would be linked to an inbredset group that is for individuals 
+* Where the =InbredSetId= comes from the =InbredSet= table and the =StrainId= comes from the =Strain= table. The *individual data* would be linked to an inbredset group that is for individuals
 * For the *average data*, the only value to save would be the =strain= field, which would be saved as =Name= in the =Strain= table and linked to an InbredSet group that is for averages
 *Question 01*: How do we distinguish the inbredset groups?
 *Answer*: The =Family= field is useful for this.
 
 *Question 02*: If you have more derived "datasets", e.g. males-only, females-only, under-10-years, 10-to-25-years, etc. How would the =Strains= table handle all those differences?
 
-## Metadata 
+## Metadata
 * The data we looked at had =gene id= and =gene symbol= fields. These fields were used to fetch the *Ensembl ID* and *descriptions* from [[https://www.ncbi.nlm.nih.gov/][NCBI]] and the [[https://useast.ensembl.org/][Ensembl Genome Browser]]
 
-## Files for mapping 
+## Files for mapping
 * Files used for mapping need to be in =bimbam= or =.geno= formats. We would need to do conversions to at least one of these formats where necessary
 
-## Annotation files 
-* Consider the following schema of DB tables 
+## Annotation files
+* Consider the following schema of DB tables
 #+begin_src sql
   MariaDB [db_webqtl]> DESC InbredSet;
   +-----------------+----------------------+------+-----+---------+----------------+
@@ -718,10 +724,10 @@ FEB0005,BXD16,F,14,…
 - The =used_for_mapping= field should be set to ~Y~ unless otherwise informed
 - The =PedigreeStatus= field is unknown to us for now: set to ~NULL~
 
-* Annotation file format 
+* Annotation file format
 The important fields are:
 - =ChipId=: The platform that the data was collected from/with
-Consider the following table; 
+Consider the following table;
 #+begin_src sql
     MariaDB [db_webqtl]> DESC GeneChip;
     +---------------+----------------------+------+-----+---------+----------------+
@@ -744,7 +750,7 @@ Consider the following table;
  - =Probe_set_Blat_Mb_start=/=Probe_set_Blat_Mb_end=: In Byron's and Beni's data, these correspond to the =geneStart= and =geneEnd= fields respectively. These are the positions, in megabasepairs, that the gene begins and ends at, respectively.
  - =Mb=: This is the =geneStart=/=Probe_set_Blat_Mb_start= value divided by *1000000*. (*Note to self*: Maybe the Probe_set_Blat_Mb_* fields above might not be in megabase pairs — please confirm)
  - =Strand_Probe= and =Strand_Gene=: These fields' values are simply ~+~ or ~-~. If these values are missing, you can [[https://ftp.ncbi.nih.gov/gene/README][retrieve them from NCBI]], specifically from the =orientation= field of seemingly any text file with the field
- - =Chr=: This is the chromosome on which the gene is found 
+ - =Chr=: This is the chromosome on which the gene is found
 
 * The final annotation file will have (at minimum) the following fields (or their
 analogs):
@@ -765,8 +771,8 @@ analogs):
 *  =.geno= Files
 - The =.geno= files have sample names, not the strain/symbol. The =Locus= field in the =.geno= file corresponds to the **marker**. =.geno= files are used with =QTLReaper=
 - The sample names in the ~.geno~ files *MUST* be in the same order as the
-strains/symbols for that species. For example; 
-Data format is as follows; 
+strains/symbols for that species. For example;
+Data format is as follows;
 ```
 #+begin_example
 SampleName,Strain,…
@@ -779,7 +785,7 @@ BJCWI0005,BXD50,…

 #+end_example
 ```
-and the order of strains is as follows; 
+and the order of strains is as follows;
 ```
 #+begin_example
 …,BXD33,…,BXD40,…,BXD50,…
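Since this ordering requirement is easy to get wrong, it can be checked mechanically. A minimal sketch using the strains from the example above (sample names other than BJCWI0005 are made up for illustration):

```shell
# Canonical strain order for the species (as used by the .geno file).
canonical="BXD33 BXD40 BXD50"

# Strain order implied by the sample file: first occurrence of each strain.
derived=$(printf '%s\n' \
    "BJCWI0003,BXD33" \
    "BJCWI0004,BXD40" \
    "BJCWI0005,BXD50" |
  awk -F, '!seen[$2]++ { printf "%s%s", sep, $2; sep = " " }')

if [ "$derived" = "$canonical" ]; then echo "order OK"; else echo "ORDER MISMATCH"; fi
```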
@@ -806,9 +812,9 @@ The order of samples that belong to the same strain is irrelevant - they share t
  - Treatment
  - Sex (Really? Isn't sex an expression of genes?)
  - batch
- - Case ID, etc 
+ - Case ID, etc
 
-* Summary steps to load data to the database 
+* Summary steps to load data to the database
 - [x] Create *InbredSet* group (think population)
 - [x] Load the strains/samples data
 - [x] Load the sample cross-reference data to link the samples to their
@@ -821,8 +827,4 @@ The order of samples that belong to the same strain is irrelevant - they share t
 - [x] Load the *Log2* data (ProbeSetData and ProbeSetXRef tables)
 - [x] Compute means (an SQL query was used — this could be pre-computed in code
   and entered along with the data)
-- [x] Run QTLReaper 
-
-
-
-
+- [x] Run QTLReaper
diff --git a/topics/deploy/genecup.gmi b/topics/deploy/genecup.gmi
index c5aec17..fc93d07 100644
--- a/topics/deploy/genecup.gmi
+++ b/topics/deploy/genecup.gmi
@@ -53,3 +53,72 @@ and port forward:
 ssh -L 4200:127.0.0.1:4200 -f -N server
 curl localhost:4200
 ```
+
+# Troubleshooting
+
+## Moving the PubMed dir
+
+After moving the PubMed dir GeneCup stopped displaying part of the connections. This can be reproduced by running the standard example on the home page - the result should look like the image on the right of the home page.
+
+After fixing the paths and restarting the service there still was no result.
+
+Genecup is currently managed by the shepherd as user shepherd. Stop the service as that user:
+
+```
+shepherd@tux02:~$ herd stop genecup
+guile: warning: failed to install locale
+Service genecup has been stopped.
+```
+
+Now the service looks stopped, but it is still running and you need to kill it by hand:
+
+```
+shepherd@tux02:~$ ps xau|grep genecup
+shepherd  89524  0.0  0.0  12780   944 pts/42   S+   00:32   0:00 grep genecup
+shepherd 129334  0.0  0.7 42620944 2089640 ?    Sl   Mar05  66:30 /gnu/store/1w5v338qk5m8khcazwclprs3znqp6f7f-python-3.10.7/bin/python3 /gnu/store/a6z0mmj6iq6grwynfvkzd0xbbr4zdm0l-genecup-latest-with-tensorflow-native-HEAD-of-master-branch/.server.py-real
+shepherd@tux02:~$ kill -9 129334
+shepherd@tux02:~$ ps xau|grep genecup
+shepherd  89747  0.0  0.0  12780   944 pts/42   S+   00:32   0:00 grep genecup
+shepherd@tux02:~$
+```
+
+The log file lives in
+
+```
+shepherd@tux02:~/logs$ tail -f genecup.log
+```
+
+and we were getting errors on a reload and I had to fix
+
+```
+shepherd@tux02:~/shepherd-services$ grep export run_genecup.sh
+export EDIRECT_PUBMED_MASTER=/export3/PubMed
+export TMPDIR=/export/ratspub/tmp
+export NLTK_DATA=/export3/PubMed/nltk_data
+```
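Since the root cause here was a path that only resolved through a symlink, a quick existence check of the exported paths before restarting the service is cheap insurance. A minimal sketch using the paths from run_genecup.sh above (substitute your own on another setup):

```shell
# Flag any exported path that does not resolve; a symlinked mount that
# the guix container does not honour will show up as MISSING.
for d in /export3/PubMed /export/ratspub/tmp /export3/PubMed/nltk_data; do
  if [ -e "$d" ]; then echo "ok      $d"; else echo "MISSING $d"; fi
done
```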
+
+See
+
+=> https://git.genenetwork.org/gn-shepherd-services/commit/?id=cd4512634ce1407b14b0842b0ef6a9cd35e6d46c
+
+The symlink from /export2 is not honoured by the guix container. Now the service works.
+
+Note we have deprecation warnings that need to be addressed in the future:
+
+```
+2025-04-22 00:40:07 /home/shepherd/services/genecup/guix-past/modules/past/packages/python.scm:740:19: warning: 'texlive-union' is deprecated,
+ use 'texlive-updmap.cfg' instead
+2025-04-22 00:40:07 guix build: warning: 'texlive-latex-base' is deprecated, use 'texlive-latex-bin' instead
+2025-04-22 00:40:15 updating checkout of 'https://git.genenetwork.org/genecup'...
+/gnu/store/9lbn1l04y0xciasv6zzigqrrk1bzz543-tensorflow-native-1.9.0/lib/python3.10/site-packages/tensorflow/python/framewo
+rk/dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
+2025-04-22 00:40:38   _np_quint16 = np.dtype([("quint16", np.uint16, 1)])
+2025-04-22 00:40:38 /gnu/store/9lbn1l04y0xciasv6zzigqrrk1bzz543-tensorflow-native-1.9.0/lib/python3.10/site-packages/tensorflow/python/framewo
+rk/dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
+2025-04-22 00:40:38   _np_qint32 = np.dtype([("qint32", np.int32, 1)])
+2025-04-22 00:40:38 /gnu/store/9lbn1l04y0xciasv6zzigqrrk1bzz543-tensorflow-native-1.9.0/lib/python3.10/site-packages/tensorflow/python/framewo
+rk/dtypes.py:532: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
+2025-04-22 00:40:38   np_resource = np.dtype([("resource", np.ubyte, 1)])
+2025-04-22 00:40:39 /gnu/store/7sam0mr9kxrd4p7g1hlz9wrwag67a6x6-python-flask-sqlalchemy-2.5.1/lib/python3.10/site-packages/flask_sqlalchemy/__
+init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
+```
diff --git a/topics/deploy/installation.gmi b/topics/deploy/installation.gmi
index 757d848..d6baa79 100644
--- a/topics/deploy/installation.gmi
+++ b/topics/deploy/installation.gmi
@@ -319,7 +319,7 @@ Currently we have two databases for deployment,
 from BXD mice and 'db_webqtl_plant' which contains all plant related
 material.
 
-Download one database from
+Download a recent database from
 
 => https://files.genenetwork.org/database/
 
diff --git a/topics/deploy/machines.gmi b/topics/deploy/machines.gmi
index 9548e43..a7c197c 100644
--- a/topics/deploy/machines.gmi
+++ b/topics/deploy/machines.gmi
@@ -2,10 +2,11 @@
 
 ```
 - [ ] bacchus             172.23.17.156   (00:11:32:ba:7f:17) -  1 Gbs
-- [X] lambda01            172.23.18.212   (7c:c2:55:11:9c:ac)
+- [ ] penguin2
+- [X] lambda01            172.23.18.212   (7c:c2:55:11:9c:ac) - currently 172.23.17.41
 - [X] tux03i              172.23.17.181   (00:0a:f7:c1:00:8d) - 10 Gbs
   [X] tux03               128.169.5.101   (00:0a:f7:c1:00:8b) -  1 Gbs
-- [ ] tux04i              172.23.17.170   (14:23:f2:4f:e6:10)
+- [X] tux04i              172.23.17.170   (14:23:f2:4f:e6:10)
 - [X] tux04               128.169.5.119   (14:23:f2:4f:e6:11)
 - [X] tux05               172.23.18.129   (14:23:f2:4f:35:00)
 - [X] tux06               172.23.17.188   (14:23:f2:4e:29:10)
@@ -26,6 +27,8 @@ c for console or control
 
 ```
 - [ ] DNS entries no longer visible
+- [X] penguin2-c        172.23.31.83
+- [ ] octolair01        172.23.16.228
 - [X] lambda01-c        172.23.17.173   (3c:ec:ef:aa:e5:50)
 - [X] tux01-c           172.23.31.85    (58:8A:5A:F9:3A:22)
 - [X] tux02-c           172.23.30.40    (58:8A:5A:F0:E6:E4)
diff --git a/topics/deploy/setting-up-or-migrating-production-across-machines.gmi b/topics/deploy/setting-up-or-migrating-production-across-machines.gmi
new file mode 100644
index 0000000..1f35dae
--- /dev/null
+++ b/topics/deploy/setting-up-or-migrating-production-across-machines.gmi
@@ -0,0 +1,58 @@
+# Setting Up or Migrating Production Across Machines
+
+## Tags
+
+* type: documentation, docs, doc
+* status: in-progress
+* assigned: fredm
+* priority: undefined
+* keywords: migration, production, genenetwork
+* interested-parties: pjotrp, zachs
+
+## Introduction
+
+Recent events (late 2024 and early 2025) have led to us needing to move the production system from one machine to another several times, due to machine failures, disk space constraints, security concerns, and the like.
+
+In this respect, a number of tasks have proven necessary for a successful migration. Each of the following sections details one of them.
+
+## Set Up the Database
+
+* Extract: detail this — link to existing document in this repo. Also, probably note that we symlink the extraction back to `/var/lib/mysql`?
+* Configure: detail this — link to existing document in this repo
+
+## Set Up the File System
+
+* TODO: List the necessary directories and describe what purpose each serves. This will be from the perspective of the container — actual paths on the host system are left to the builder's choice, and can vary wildly.
+* TODO: Prefer explicit binding rather than implicit — makes the shell scripts longer, but no assumptions have to be made, everything is explicitly spelled out.
+
+## Redis
+
+We currently (2025-06-11) use Redis for:
+
+- Tracking user collections (these will be moved to an SQLite database)
+- Tracking background jobs (this is being moved out to SQLite databases)
+- Tracking running-time (not sure what this is about)
+- Others?
+
+We do need to copy over the redis save file whenever we do a migration, at least until the user collections and background jobs features have been moved completely out of Redis.
+
+## Container Configurations: Secrets
+
+* TODO: Detail how to extract/restore the existing secrets configurations in the new machine
+
+## Build Production Container
+
+* TODO: Add notes on building
+* TODO: Add notes on setting up systemd
+
+## NGINX
+
+* TODO: Add notes on streaming and configuration of it thereof
+
+## SSL Certificates
+
+* TODO: Add notes on acquisition and setup of SSL certificates
+
+## DNS
+
+* TODO: Migrate DNS settings
diff --git a/topics/deploy/uthsc-vpn-with-free-software.gmi b/topics/deploy/uthsc-vpn-with-free-software.gmi
index 43f6944..95fd1cd 100644
--- a/topics/deploy/uthsc-vpn-with-free-software.gmi
+++ b/topics/deploy/uthsc-vpn-with-free-software.gmi
@@ -10,6 +10,11 @@ $ openconnect-sso --server uthscvpn1.uthsc.edu --authgroup UTHSC
 ```
 Note that openconnect-sso should be run as a regular user, not as root. After passing Duo authentication, openconnect-sso will try to gain root priviliges to set up the network routes. At that point, it will prompt you for your password using sudo.
 
+## Recommended way
+
+The recommended way is to use Arun's g-expression setup using Guix. See below. It should just work, provided you have the chained certificate, which you can get from the browser or from one of us.
+
 ## Avoid tunneling all your network traffic through the VPN (aka Split Tunneling)
 
 openconnect, by default, tunnels all your traffic through the VPN. This is not good for your privacy. It is better to tunnel only the traffic destined to the specific hosts that you want to access. This can be done using the vpn-slice script.
@@ -72,6 +77,12 @@ Download it, download the UTHSC TLS certificate chain to uthsc-certificate.pem,
 $(guix build -f uthsc-vpn.scm)
 ```
 
+To add a route by hand afterwards, you can do:
+
+```
+ip route add 172.23.17.156 dev tun0
+```
+
 # Troubleshooting
 
 Older versions would not show a proper dialog for sign-in. Try
diff --git a/topics/deploy/uthsc-vpn.scm b/topics/deploy/uthsc-vpn.scm
index 73cb48b..82f67f5 100644
--- a/topics/deploy/uthsc-vpn.scm
+++ b/topics/deploy/uthsc-vpn.scm
@@ -9,7 +9,7 @@
 ;; Put in the hosts you are interested in here.
 (define %hosts
   (list "octopus01"
-        "tux01.genenetwork.org"))
+        "spacex.uthsc.edu"))
 
 (define (ini-file name scm)
   "Return a file-like object representing INI file with @var{name} and
diff --git a/topics/genenetwork-releases.gmi b/topics/genenetwork-releases.gmi
new file mode 100644
index 0000000..e179629
--- /dev/null
+++ b/topics/genenetwork-releases.gmi
@@ -0,0 +1,77 @@
+# GeneNetwork Releases
+
+## Tags
+
+* status: open
+* priority:
+* assigned:
+* type: documentation
+* keywords: documentation, docs, release, releases, genenetwork
+
+## Introduction
+
+The sections that follow note down the commits used for various stable (and stable-ish) releases of GeneNetwork.
+
+The tagging of the commits will need to distinguish repository-specific tags from overall system tags.
+
+In this document, we only concern ourselves with the overall system tags, that shall have the template:
+
+```
+genenetwork-system-v<major>.<minor>.<patch>[-<commit>]
+```
+
+The portions in angle brackets are replaced with the actual version numbers.
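As a concrete instance, the v1.0.0 release below, pinned to GN2 commit 314c6d5, would be tagged like this (a sketch; the shell variables are only for illustration):

```shell
# Instantiate the genenetwork-system tag template.
major=1; minor=0; patch=0; commit=314c6d5
echo "genenetwork-system-v${major}.${minor}.${patch}-${commit}"
# -> genenetwork-system-v1.0.0-314c6d5
```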
+
+## genenetwork-system-v1.0.0
+
+This is the first, guix-system-container-based, stable release of the entire genenetwork system.
+The commits involved are:
+
+=> https://github.com/genenetwork/genenetwork2/commit/314c6d597a96ac903071fcb6e50df3d9e88935e9 GN2: 314c6d5
+=> https://github.com/genenetwork/genenetwork3/commit/0d902ec267d96b87648669a7a43b699c8a22a3de GN3: 0d902ec
+=> https://git.genenetwork.org/gn-auth/commit/?id=8e64f7f8a392b8743a4f36c497cd2ec339fcfebc gn-auth: 8e64f7f
+=> https://git.genenetwork.org/gn-libs/commit/?id=72a95f8ffa5401649f70978e863dd3f21900a611 gn-libs: 72a95f8
+
+The guix channels used for deployment of the system above are as follows:
+
+```
+(list (channel
+       (name 'guix-bioinformatics)
+       (url "https://git.genenetwork.org/guix-bioinformatics/")
+       (branch "master")
+       (commit
+        "039a3dd72c32d26b9c5d2cc99986fd7c968a90a5"))
+      (channel
+       (name 'guix-forge)
+       (url "https://git.systemreboot.net/guix-forge/")
+       (branch "main")
+       (commit
+        "bcb3e2353b9f6b5ac7bc89d639e630c12049fc42")
+       (introduction
+        (make-channel-introduction
+         "0432e37b20dd678a02efee21adf0b9525a670310"
+         (openpgp-fingerprint
+          "7F73 0343 F2F0 9F3C 77BF  79D3 2E25 EE8B 6180 2BB3"))))
+      (channel
+       (name 'guix-past)
+       (url "https://gitlab.inria.fr/guix-hpc/guix-past")
+       (branch "master")
+       (commit
+        "5fb77cce01f21a03b8f5a9c873067691cf09d057")
+       (introduction
+        (make-channel-introduction
+         "0c119db2ea86a389769f4d2b9c6f5c41c027e336"
+         (openpgp-fingerprint
+          "3CE4 6455 8A84 FDC6 9DB4  0CFB 090B 1199 3D9A EBB5"))))
+      (channel
+       (name 'guix)
+       (url "https://git.savannah.gnu.org/git/guix.git")
+       (branch "master")
+       (commit
+        "2394a7f5fbf60dd6adc0a870366adb57166b6d8b")
+       (introduction
+        (make-channel-introduction
+         "9edb3f66fd807b096b48283debdcddccfea34bad"
+         (openpgp-fingerprint
+          "BBB0 2DDF 2CEA F6A8 0D1D  E643 A2A0 6DF2 A33A 54FA")))))
+```
diff --git a/topics/genenetwork/starting_gn1.gmi b/topics/genenetwork/starting_gn1.gmi
index efbfd0f..e31061f 100644
--- a/topics/genenetwork/starting_gn1.gmi
+++ b/topics/genenetwork/starting_gn1.gmi
@@ -51,9 +51,7 @@ On an update of guix the build may fail. Try
    #######################################'
    #      Environment Variables - private
    #########################################
-   # sql_host = '[1]tux02.uthsc.edu'
-   # sql_host = '128.169.4.67'
-   sql_host = '172.23.18.213'
+   sql_host = '170.23.18.213' 
    SERVERNAME = sql_host
    MYSQL_SERVER = sql_host
    DB_NAME = 'db_webqtl'
diff --git a/topics/gn-learning-team/next-steps.gmi b/topics/gn-learning-team/next-steps.gmi
new file mode 100644
index 0000000..b427923
--- /dev/null
+++ b/topics/gn-learning-team/next-steps.gmi
@@ -0,0 +1,48 @@
+# Next steps
+
+Wednesday we had a wrap-up meeting of the gn-learning efforts.
+
+## Data uploading
+
+The goal of these meetings was to learn how to upload data into GN. In the process Felix has become the de facto uploader, next to Arthur. A C. elegans dataset was uploaded and Felix is preparing:
+
+* More C. elegans
+* HSRat
+* Killifish
+* Medaka
+
+Updates are here:
+
+=> https://issues.genenetwork.org/tasks/felixl
+
+We'll keep focussing on that work and hopefully we'll get more parties interested in doing some actual work down the line.
+
+## Hosting GN in Wageningen
+
+Harm commented that he thought these meetings were valuable; in particular, we learnt a lot about GN's ins and outs. Harm suggests we focus on hosting GN in Wageningen for C. elegans and Arabidopsis.
+Pjotr says that is a priority this year, even if we start on a privately hosted machine in NL. Wageningen requires Docker images and Bonface says that is possible - with some work. So:
+
+* Host GN in NL
+* Make GN specific for C.elegans and Arabidopsis - both trim and add datasets
+* Create Docker container
+* Host Docker container in Wageningen
+* Present to other parties in Wageningen
+
+Having the above datasets will help this effort succeed.
+
+## AI
+
+Harm is also very interested in the AI efforts and wants to pursue that in the context of the above server - i.e., functionality arrives when it lands in GN.
+
+## Wormbase
+
+Jameson suggests we can work with WormBase and the CaeNDR folks once we have a running system. Interactive data analysis is very powerful and could run in conjunction with those sites.
+
+=> https://caendr.org/
+=> https://wormbase.org/
+
+Other efforts are Flybase and Arabidopsis Magic which we can host, in principle.
+
+## Mapping methods
+
+Jameson will continue with his work on residuals.
diff --git a/topics/octopus/maintenance.gmi b/topics/octopus/maintenance.gmi
new file mode 100644
index 0000000..65ea52e
--- /dev/null
+++ b/topics/octopus/maintenance.gmi
@@ -0,0 +1,98 @@
+# Octopus/Tux maintenance
+
+## To remember
+
+`fdisk -l` to see disk models
+`lsblk -nd` to see mounted disks
+
+## Status
+
+octopus02
+- Devices: 2 3.7T SSDs + 2 894.3G SSDs + 2 4.6T HDDs
+- **Status: Slurm not OK, LizardFS not OK**
+- Notes:
+  - `octopus02 mfsmount[31909]: can't resolve master hostname and/or portname (octopus01:9421)`, 
+  - **I don't see 2 drives that are physically mounted**
+
+octopus03
+- Devices: 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: **I don't see 2 drives that are physically mounted**
+
+octopus04
+- Devices: 4 7.3 T SSDs (Neil) + 1 4.6T HDD + 1 3.7T SSD + 2 894.3G SSDs
+- Status: Slurm not OK, LizardFS OK (we don't share the HDD)
+- Notes: no
+
+octopus05
+- Devices: 1 7.3 T SSDs (Neil) + 5 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: no
+
+octopus06
+- Devices: 1 7.3 T SSDs (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD) 
+- Notes: no
+
+octopus07
+- Devices: 1 7.3 T SSDs (Neil) + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: **I don't see 1 device that is physically mounted**
+
+octopus08
+- Devices: 1 7.3 T SSDs (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD) 
+- Notes: none
+
+octopus09
+- Devices: 1 7.3 T SSDs (Neil) + 1 4.6T HDD + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD) 
+- Notes: none
+
+octopus10
+- Devices: 1 7.3 T SSDs (Neil) + 4 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK (we don't share the HDD) 
+- Notes: **I don't see 1 device that is physically mounted**
+
+octopus11
+- Devices: 1 7.3 T SSDs (Neil) + 5 3.7T SSDs + 2 894.3G SSDs
+- Status: Slurm OK, LizardFS OK
+- Notes: none
+
+tux05
+- Devices: 1 3.6T NVMe + 1 1.5T NVMe + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS OK (we don't share anything)
+- Notes: **I don't have a picture to confirm physically mounted devices**
+
+tux06
+- Devices: 2 3.6 T SSDs (1 from Neil) + 1 1.5T NVMe + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS OK (we don't share anything)
+- Notes:
+  - **Last picture reports 1 7.3 T SSD (Neil) that is missing**
+  - **Disk /dev/sdc: 3.64 TiB (Samsung SSD 990): free and usable for lizardfs**
+  - **Disk /dev/sdd: 3.64 TiB (Samsung SSD 990): free and usable for lizardfs**
+
+tux07
+- Devices: 3 3.6 T SSDs + 1 1.5T NVMe (Neil) + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS OK
+- Notes:
+  - **Disk /dev/sdb: 3.64 TiB (Samsung SSD 990): free and usable for lizardfs**
+  - **Disk /dev/sdd: 3.64 TiB (Samsung SSD 990): mounted at /mnt/sdb and shared on LIZARDFS: TO CHECK BECAUSE IT HAS NO PARTITIONS**
+
+tux08
+- Devices: 3 3.6 T SSDs + 1 1.5T NVMe (Neil) + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS OK
+- Notes: none
+
+tux09
+- Devices: 1 3.6 T SSDs + 1 1.5T NVMe + 1 894.3G NVMe
+- Status: Slurm OK, LizardFS OK
+- Notes: **I don't see 1 device that is physically mounted**
+
+## Neil disks
+- four 8TB SSDs on the right of octopus04
+- one 8TB SSD in the left slot of octopus05
+- six 8TB SSDs bottom-right slot of octopus06,07,08,09,10,11
+- one 4TB NVMe and one 8TB SSD on tux06, NVMe in the bottom-right of the group of 4 on the left, SSD on the bottom-left of the group of 4 on the right
+- one 4TB NVMe on tux07, on the top-left of the group of 4 on the right
+- one 4TB NVMe on tux08, on the top-left of the group of 4 on the right
diff --git a/topics/octopus/recent-rust.gmi b/topics/octopus/recent-rust.gmi
new file mode 100644
index 0000000..7ce8968
--- /dev/null
+++ b/topics/octopus/recent-rust.gmi
@@ -0,0 +1,76 @@
+# Use a recent Rust on Octopus
+
+
+For impg we currently need a Rust that is more recent than what we have in Debian or Guix. No panic, because Rust has few requirements.
+
+Install the latest Rust using the rustup installer script:
+
+```
+curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
+```
+
+Set the path:
+
+```
+. ~/.cargo/env
+```
+
+Make the stable toolchain the default:
+
+```
+rustup default stable
+```
+
+A full update session looks like this:
+
+```
+octopus01:~/tmp/impg$ . ~/.cargo/env
+octopus01:~/tmp/impg$ rustup default stable
+info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
+info: latest update on 2025-05-15, rust version 1.87.0 (17067e9ac 2025-05-09)
+info: downloading component 'cargo'
+info: downloading component 'clippy'
+info: downloading component 'rust-docs'
+info: downloading component 'rust-std'
+info: downloading component 'rustc'
+(...)
+```
+
+and build the package
+
+```
+octopus01:~/tmp/impg$ cargo build
+```
+
+Since we are not in Guix, the binary links against the local system libraries:
+
+```
+octopus01:~/tmp/impg$ ldd target/debug/impg
+  linux-vdso.so.1 (0x00007ffdb266a000)
+  libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007fe404001000)
+  librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x00007fe403ff7000)
+  libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007fe403fd6000)
+  libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007fe403fd1000)
+  libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007fe403e11000)
+  /lib64/ld-linux-x86-64.so.2 (0x00007fe404682000)
+```
+
+After logging in on another octopus node -- say octopus02 -- you can run impg from the same (shared) directory:
+
+```
+octopus02:~$ ~/tmp/impg/target/debug/impg
+Command-line tool for querying overlaps in PAF files
+
+Usage: impg <COMMAND>
+
+Commands:
+  index      Create an IMPG index
+  partition  Partition the alignment
+  query      Query overlaps in the alignment
+  stats      Print alignment statistics
+
+Options:
+  -h, --help     Print help
+  -V, --version  Print version
+```
diff --git a/topics/programming/autossh-for-keeping-ssh-tunnels.gmi b/topics/programming/autossh-for-keeping-ssh-tunnels.gmi
new file mode 100644
index 0000000..a977232
--- /dev/null
+++ b/topics/programming/autossh-for-keeping-ssh-tunnels.gmi
@@ -0,0 +1,65 @@
+# Using autossh to Keep SSH Tunnels Alive
+
+## Tags
+* keywords: ssh, autossh, tunnel, alive
+
+
+## TL;DR
+
+```
+guix package -i autossh  # Install autossh with Guix
+autossh -M 0 -o "ServerAliveInterval 60" -o "ServerAliveCountMax 5" -L 4000:127.0.0.1:3306 alexander@remoteserver.org
+```
+
+## Introduction
+
+Autossh is a utility for automatically restarting SSH sessions and tunnels if they drop or become inactive. It's particularly useful for long-lived tunnels in unstable network environments.
+
+See official docs:
+
+=> https://www.harding.motd.ca/autossh/
+
+## Installing autossh
+
+Install autossh using Guix:
+
+```
+guix package -i autossh
+```
+
+Basic usage:
+
+```
+autossh [-V] [-M monitor_port[:echo_port]] [-f] [SSH_OPTIONS]
+```
+
+## Examples
+
+### Keep a database tunnel alive with autossh
+
+Forward a remote MySQL port to your local machine:
+
+**Using plain SSH:**
+
+```
+ssh -L 5000:localhost:3306 alexander@remoteserver.org
+```
+
+**Using autossh:**
+
+```
+autossh -L 5000:localhost:3306 alexander@remoteserver.org
+```
+
+### A more robust option
+
+```
+autossh -M 0 -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" -L 5000:localhost:3306 alexander@remoteserver.org
+```
+
+#### Option explanations:
+
+- `-M 0`: disables autossh's separate monitoring channel; liveness checking is left to SSH's own keepalives below.
+- `ServerAliveInterval`: seconds between sending keepalive packets to the server (default: 0, i.e. disabled).
+- `ServerAliveCountMax`: number of unanswered keepalive packets before SSH disconnects (default: 3).
+
+You can also configure these options in your `~/.ssh/config` file to simplify command-line usage.
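+
+A sketch of such a config entry (the `dbtunnel` alias and the host/user names here are placeholders):
+
+```
+Host dbtunnel
+    HostName remoteserver.org
+    User alexander
+    LocalForward 5000 localhost:3306
+    ServerAliveInterval 30
+    ServerAliveCountMax 3
+```
+
+With this in place the command reduces to `autossh -M 0 -f -N dbtunnel` (`-f` backgrounds autossh, `-N` opens the tunnel without running a remote command).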
diff --git a/topics/systems/backup-drops.gmi b/topics/systems/backup-drops.gmi
index 191b185..3f81c5a 100644
--- a/topics/systems/backup-drops.gmi
+++ b/topics/systems/backup-drops.gmi
@@ -4,6 +4,10 @@ To make backups we use a combination of sheepdog, borg, sshfs, rsync. sheepdog i
 
 This system proves pretty resilient over time. Only on the synology server I can't get it to work because of some CRON permission issue.
 
+For doing the actual backups see
+
+=> ./backups-with-borg.gmi
+
 # Tags
 
 * assigned: pjotrp
@@ -13,7 +17,7 @@ This system proves pretty resilient over time. Only on the synology server I can
 
 ## Borg backups
 
-It is advised to use a backup password and not store that on the remote.
+Despite our precautions it is advised to use a backup password and *not* store that on the remote.
 
 ## Running sheepdog on rabbit
 
@@ -59,14 +63,14 @@ where remote can be an IP address.
 
 Warning: if you introduce this `AllowUsers` command all users should be listed or people may get locked out of the machine.
 
-Next create a special key on the backup machine's ibackup user (just hit enter):
+Next create a special password-less key on the backup machine's ibackup user (just hit enter):
 
 ```
 su ibackup
 ssh-keygen -t ecdsa -f $HOME/.ssh/id_ecdsa_backup
 ```
 
-and copy the public key into the remote /home/bacchus/.ssh/authorized_keys
+and copy the public key into the remote /home/bacchus/.ssh/authorized_keys.
 
 Now test it from the backup server with
 
@@ -82,13 +86,20 @@ On the drop server you can track messages by
 tail -40 /var/log/auth.log
 ```
 
+or on recent linux with systemd
+
+```
+journalctl -r
+```
+
 Next
 
 ```
 ssh -v -i ~/.ssh/id_ecdsa_backup bacchus@dropserver
 ```
 
-should give a Broken pipe(!). In auth.log you may see something like
+should give a Broken pipe(!) or -- more recently -- it says `This service allows sftp connections only`.
+When running sshd with a verbose switch you may see something like
 
 fatal: bad ownership or modes for chroot directory component "/export/backup/"
 
@@ -110,6 +121,19 @@ chown bacchus.bacchus backup/bacchus/drop/
 chmod 0700 backup/bacchus/drop/
 ```
 
+Another error may be:
+
+```
+fusermount3: mount failed: Operation not permitted
+```
+
+This means you need to set the setuid bit on the fusermount3 command. That is a bit nasty to do in Guix. On Debian:
+
+```
+apt-get install fuse3 sshfs
+chmod 4755 /usr/bin/fusermount3
+```
+
 If auth.log says error: /dev/pts/11: No such file or directory on ssh, or received disconnect (...) disconnected by user we are good to go!
 
 Note: at this stage it may pay to track the system log with
@@ -171,3 +195,5 @@ sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,IdentityFile=~/.
 The recent scripts can be found at
 
 => https://github.com/genenetwork/gn-deploy-servers/blob/master/scripts/tux01/backup_drop.sh
diff --git a/topics/systems/backups-with-borg.gmi b/topics/systems/backups-with-borg.gmi
new file mode 100644
index 0000000..1ad0112
--- /dev/null
+++ b/topics/systems/backups-with-borg.gmi
@@ -0,0 +1,220 @@
+# Borg backups
+
+We use borg for backups. Borg is an amazing tool and after 25+ years of making backups it just feels right.
+With the new tux04 production install we need to organize backups off-site. The first step is to create a
+borg runner using sheepdog -- sheepdog we use for monitoring success/failure.
+Sheepdog essentially wraps a Unix command and sends a report to a local or remote redis instance.
+Sheepdog also includes a web server for output:
+
+=> http://sheepdog.genenetwork.org/sheepdog/status.html
+
+which I run on one of my machines.
+
+# Tags
+
+* assigned: pjotrp
+* keywords: systems, backup, sheepdog, database
+
+# Install borg
+
+Usually I use a version of borg from guix. This should really be done as the borg user (ibackup).
+
+```
+mkdir ~/opt
+guix package -i borg -p ~/opt/borg
+tux04:~$ ~/opt/borg/bin/borg --version
+  1.2.2
+```
+
+# Create a new backup dir and user
+
+The backup should live on a different disk from the things we backup, so when that disk fails we have another.
+
+The SQL database lives on /export and the containers live on /export2. /export3 is a largish slow drive, so perfect.
+
+By convention I point /export/backup to the real backup dir on /export3/backup/borg/. Another convention is that we use an ibackup user which has the backup passphrase in ~/.borg-pass. As root:
+
+```
+mkdir /export/backup/borg
+chown ibackup:ibackup /export/backup/borg
+chown ibackup:ibackup /home/ibackup/.borg-pass
+su ibackup
+```
+
+Now you should be able to load the passphrase and create the backup dir
+
+```
+id
+  uid=1003(ibackup)
+. ~/.borg-pass
+cd /export/backup/borg
+~/opt/borg/bin/borg init --encryption=repokey-blake2 genenetwork
+```
+
+Now we can run our first backup. Note that ibackup should be a member of the mysql and gn groups
+
+```
+mysql:x:116:ibackup
+```
+
+# First backup
+
+Run the backup the first time:
+
+```
+id
+  uid=1003(ibackup) groups=1003(ibackup),116(mysql)
+~/opt/borg/bin/borg create --progress --stats genenetwork::first-backup /export/mysql/database/*
+```
+
+You may first need to update permissions to give the group read access:
+
+```
+chmod g+rx -R /var/lib/mysql/*
+```
+
+When that works borg reports:
+
+```
+Archive name: first-backup
+Archive fingerprint: 376d32fda9738daa97078fe4ca6d084c3fa9be8013dc4d359f951f594f24184d
+Time (start): Sat, 2025-02-08 04:46:48
+Time (end):   Sat, 2025-02-08 05:30:01
+Duration: 43 minutes 12.87 seconds
+Number of files: 799
+Utilization of max. archive size: 0%
+------------------------------------------------------------------------------
+                       Original size      Compressed size    Deduplicated size
+This archive:              534.24 GB            238.43 GB            237.85 GB
+All archives:              534.24 GB            238.43 GB            238.38 GB
+                       Unique chunks         Total chunks
+Chunk index:                  200049               227228
+------------------------------------------------------------------------------
+```
+
+50% compression is not bad. Borg is incremental, so it will only back up the differences next round.
+
+Once borg works we could run a CRON job. But we should use the sheepdog monitor to make sure backups keep going without failure going unnoticed.
+
+# Using the sheepdog
+
+=> https://github.com/pjotrp/deploy sheepdog code
+
+## Clone sheepdog
+
+=> https://github.com/pjotrp/deploy#install sheepdog install
+
+Essentially clone the repo so it shows up in ~/deploy
+
+```
+cd /home/ibackup
+git clone https://github.com/pjotrp/deploy.git
+/export/backup/scripts/tux04/backup-tux04.sh
+```
+
+## Setup redis
+
+All sheepdog messages get pushed to redis. You can run it locally or remotely.
+
+By default we use redis, but syslog and others may also be used. The advantage of redis is that it is not bound to the same host, can cross firewalls using an ssh reverse tunnel, and is easy to query.
+
+=> https://github.com/pjotrp/deploy#install sheepdog install
+
+In our case we use redis on a remote host and the results get displayed by a webserver. Also some people get E-mail updates on failure. The configuration is in
+
+```
+/home/ibackup# cat .config/sheepdog/sheepdog.conf
+{
+  "redis": {
+    "host"  : "remote-host",
+    "password": "something"
+  }
+}
+```
+
+If you see localhost with port 6377 it is probably a reverse tunnel setup:
+
+=> https://github.com/pjotrp/deploy#redis-reverse-tunnel
+
+Update the fields according to what we use. The main thing is that this defines the sheepdog->redis connector. If you also use sheepdog as another user you'll need to add a config for that user too.
+
+Sheepdog should show a warning when you configure redis and it is not connecting.
+
+## Scripts
+
+Typically I run the cron job from root CRON so people can find it. Still it is probably a better idea to use an ibackup CRON. In my version a script is run that also captures output:
+
+```cron root
+0 6 * * * /bin/su ibackup -c /export/backup/scripts/tux04/backup-tux04.sh >> ~/cron.log 2>&1
+```
+
+The script contains something like
+
+```bash
+#! /bin/bash
+if [ "$EUID" -eq 0 ]
+  then echo "Please do not run as root. Run as: su ibackup -c $0"
+  exit
+fi
+rundir=$(dirname "$0")
+# ---- for sheepdog
+source $rundir/sheepdog_env.sh
+cd $rundir
+sheepdog_borg.rb -t borg-tux04-sql --group ibackup -v -b /export/backup/borg/genenetwork /export/mysql/database/*
+```
+
+and the accompanying sheepdog_env.sh:
+
+```
+export GEM_PATH=/home/ibackup/opt/deploy/lib/ruby/vendor_ruby
+export PATH=/home/ibackup/opt/deploy/deploy/bin:/home/wrk/opt/deploy/bin:$PATH
+```
+
+If it reports
+
+```
+/export/backup/scripts/tux04/backup-tux04.sh: line 11: /export/backup/scripts/tux04/sheepdog_env.sh: No such file or directory
+```
+
+you need to install sheepdog first.
+
+If all shows green (and takes some time) we made a backup. Check the backup with
+
+```
+ibackup@tux04:/export/backup/borg$ borg list genenetwork/
+first-backup                         Sat, 2025-02-08 04:39:50 [58715b883c080996ab86630b3ae3db9bedb65e6dd2e83977b72c8a9eaa257cdf]
+borg-tux04-sql-20250209-01:43-Sun    Sun, 2025-02-09 01:43:23 [5e9698a032143bd6c625cdfa12ec4462f67218aa3cedc4233c176e8ffb92e16a]
+```
+
+and you should see the latest archive. The contents, with all files, should be visible with:
+
+```
+borg list genenetwork::borg-tux04-sql-20250209-01:43-Sun
+```
+
+Make sure you see the actual files and not just a symlink.
+
+# More backups
+
+Our production server runs databases and file stores that need to be backed up too.
+
+# Drop backups
+
+Once backups work it is useful to copy them to a remote server, so when the machine stops functioning we have another chance at recovery. See
+
+=> ./backup-drops.gmi
+
+# Recovery
+
+With tux04 we ran into a problem where all disks were getting corrupted(!) Probably due to the RAID controller, but we still need to figure that one out.
+
+Anyway, we have to assume the DB is corrupt, the files are corrupt AND the backups are corrupt. Borg backups carry checksums, which you can verify with
+
+```
+borg check repo
+```
+
+It also has a `--repair` switch, which we needed to remove some faults in the backup itself:
+
+```
+borg check --repair repo
+```
diff --git a/topics/systems/ci-cd.gmi b/topics/systems/ci-cd.gmi
index 6aa17f2..a1ff2e3 100644
--- a/topics/systems/ci-cd.gmi
+++ b/topics/systems/ci-cd.gmi
@@ -31,7 +31,7 @@ Arun has figured out the CI part. It runs a suitably configured laminar CI servi
 
 CD hasn't been figured out. Normally, Guix VMs and containers created by `guix system` can only access the store read-only. Since containers don't have write access to the store, you cannot `guix build' from within a container or deploy new containers from within a container. This is a problem for CD. How do you make Guix containers have write access to the store?
 
-Another alternative for CI/ CID were to have the quick running tests, e.g unit tests, run on each commit to branch "main". Once those are successful, the CI/CD system we choose should automatically pick the latest commit that passed the quick running tests for for further testing and deployment, maybe once an hour or so. Once the next battery of tests is passed, the CI/CD system will create a build/artifact to be deployed to staging and have the next battery of tests runs against it. If that passes, then that artifact could be deployed to production, and details on the commit and
+Another alternative for CI/CD would be to have the quick-running tests, e.g. unit tests, run on each commit to branch "main". Once those are successful, the CI/CD system we choose should automatically pick the latest commit that passed the quick-running tests for further testing and deployment, maybe once an hour or so. Once the next battery of tests passes, the CI/CD system will create a build/artifact to be deployed to staging and have the next battery of tests run against it. If that passes, then that artifact could be deployed to production, and details on the commit and
 
 #### Possible Steps
 
@@ -90,3 +90,49 @@ This contains a check-list of things that need to be done:
 => /topics/systems/orchestration Orchestration
 
 => /issues/broken-cd  Broken-cd (Resolved)
+
+## Adding a web-hook
+
+### Github hooks
+
+IIRC GitHub Actions run their artifacts inside GitHub's infrastructure. We use webhooks instead: e.g.
+
+Update the hook at
+
+=> https://github.com/genenetwork/genenetwork3/settings/hooks
+
+=> ./screenshot-github-webhook.png
+
+To trigger CI manually, run this with the project name:
+
+```
+curl https://ci.genenetwork.org/hooks/example-gn3
+```
+
+For gemtext we have a github hook that adds a forge-project and looks like
+
+```lisp
+(define gn-gemtext-threads-project
+  (forge-project
+   (name "gn-gemtext-threads")
+   (repository "https://github.com/genenetwork/gn-gemtext-threads/")
+   (ci-jobs (list (forge-laminar-job
+                   (name "gn-gemtext-threads")
+                   (run (with-packages (list nss-certs openssl)
+                          (with-imported-modules '((guix build utils))
+                            #~(begin
+                                (use-modules (guix build utils))
+
+                                (setenv "LC_ALL" "en_US.UTF-8")
+                                (invoke #$(file-append tissue "/bin/tissue")
+                                        "pull" "issues.genenetwork.org"))))))))
+   (ci-jobs-trigger 'webhook)))
+```
+
+Guix forge can be found at
+
+=> https://git.systemreboot.net/guix-forge/
+
+### git.genenetwork.org hooks
+
+TBD
diff --git a/topics/systems/mariadb/mariadb.gmi b/topics/systems/mariadb/mariadb.gmi
index ae0ab19..ec8b739 100644
--- a/topics/systems/mariadb/mariadb.gmi
+++ b/topics/systems/mariadb/mariadb.gmi
@@ -16,6 +16,8 @@ To install Mariadb (as a container) see below and
 Start the client and:
 
 ```
+mysql
+show databases
 MariaDB [db_webqtl]> show binary logs;
 +-----------------------+-----------+
 | Log_name              | File_size |
@@ -60,4 +62,11 @@ Stop the running mariadb-guix.service. Restore the latest backup archive and ove
 => https://www.borgbackup.org/ Borg
 => https://borgbackup.readthedocs.io/en/stable/ Borg documentation
 
-#
+# Upgrade mariadb
+
+It is wise to upgrade mariadb once in a while. In a disaster recovery it is also better to move forward in versions than backward.
+Before upgrading, make sure there is a decent backup of the current setup.
+
+See also
+
+=> issues/systems/tux04-disk-issues.gmi
diff --git a/topics/systems/mariadb/precompute-mapping-input-data.gmi b/topics/systems/mariadb/precompute-mapping-input-data.gmi
index 0c89fe5..977120d 100644
--- a/topics/systems/mariadb/precompute-mapping-input-data.gmi
+++ b/topics/systems/mariadb/precompute-mapping-input-data.gmi
@@ -49,10 +49,29 @@ The original reaper precompute lives in
 
 => https://github.com/genenetwork/genenetwork2/blob/testing/scripts/maintenance/QTL_Reaper_v6.py
 
-This script first fetches inbredsets
+More recent incarnations are at v8, including a PublishData version that can be found in
+
+=> https://github.com/genenetwork/genenetwork2/tree/testing/scripts/maintenance
+
+Note that the locations are on the space server:
+
+```
+cd /mount/space2/lily-clone/acenteno/GN-Data
+ls -l
+python QTL_Reaper_v8_space_good.py 116
+--
+python UPDATE_Mean_MySQL_tab.py
+cd /mount/space2/lily-clone/gnshare/gn/web/webqtl/maintainance
+ls -l
+python QTL_Reaper_cal_lrs.py 7
+```
+
+The first task is to prepare an update script that can run one set at a time and compute GEMMA output (instead of reaper).
+
+The script first fetches inbredsets
 
 ```
- select Id,InbredSetId,InbredSetName,Name,SpeciesId,FullName,public,MappingMethodId,GeneticType,Family,FamilyOrder,MenuOrderId,InbredSetCode from InbredSet LIMIT 5;
+select Id,InbredSetId,InbredSetName,Name,SpeciesId,FullName,public,MappingMethodId,GeneticType,Family,FamilyOrder,MenuOrderId,InbredSetCode from InbredSet LIMIT 5;
 +----+-------------+-------------------+----------+-----------+-------------------+--------+-----------------+-------------+--------------------------------------------------+-------------+-------------+---------------+
 | Id | InbredSetId | InbredSetName     | Name     | SpeciesId | FullName          | public | MappingMethodId | GeneticType | Family                                           | FamilyOrder | MenuOrderId | InbredSetCode |
 +----+-------------+-------------------+----------+-----------+-------------------+--------+-----------------+-------------+--------------------------------------------------+-------------+-------------+---------------+
diff --git a/topics/systems/migrate-p2.gmi b/topics/systems/migrate-p2.gmi
deleted file mode 100644
index c7fcb90..0000000
--- a/topics/systems/migrate-p2.gmi
+++ /dev/null
@@ -1,12 +0,0 @@
-* Penguin2 crash
-
-This week the boot partition of P2 crashed. We have a few lessons here, not least having a fallback for all services ;)
-
-* Tasks
-
-- [ ] setup space.uthsc.edu for GN2 development
-- [ ] update DNS to tux02 128.169.4.52 and space 128.169.5.175
-- [ ] move CI/CD to tux02
-
-
-* Notes
diff --git a/topics/systems/screenshot-github-webhook.png b/topics/systems/screenshot-github-webhook.png
new file mode 100644
index 0000000..08feed3
--- /dev/null
+++ b/topics/systems/screenshot-github-webhook.png
Binary files differdiff --git a/topics/systems/synchronising-the-different-environments.gmi b/topics/systems/synchronising-the-different-environments.gmi
new file mode 100644
index 0000000..207b234
--- /dev/null
+++ b/topics/systems/synchronising-the-different-environments.gmi
@@ -0,0 +1,68 @@
+# Synchronising the Different Environments
+
+## Tags
+
+* status: open
+* priority:
+* type: documentation
+* assigned: fredm
+* keywords: doc, docs, documentation
+
+## Introduction
+
+We have different environments we run for various reasons, e.g.
+
+* Production: This is the user-facing environment. This is what GeneNetwork is about.
+* gn2-fred: production-adjacent. It is meant to test out changes before they get to production. It is **NOT** meant for users.
+* CI/CD: Used for development. The latest commits get auto-deployed here. It's the first place (outside of developer machines) where errors and breakages are caught and/or revealed. This will break a lot. Do not expose to users!
+* staging: The uploader environment. This is where Felix, Fred and Arthur flesh out the upload process and tasks, and also test the uploader.
+
+These different environments demand synchronisation, in order to have mostly similar results and failure modes.
+
+## Synchronisation of the Environments
+
+### Main Database: MariaDB
+
+* [ ] TODO: Describe process
+
+=> https://issues.genenetwork.org/topics/systems/restore-backups Extract borg archive
+* Automate? Will probably need some checks for data sanity.
+
+### Authorisation Database
+
+* [ ] TODO: Describe process
+
+* Copy backup from production
+* Update/replace GN2 client configs in database
+* What other things?
+
+### Virtuoso/RDF
+
+* [ ] TODO: Describe process
+
+* Copy TTL (Turtle) files from (where?). Production might not always be latest source of TTL files.
+=> https://issues.genenetwork.org/issues/set-up-virtuoso-on-production Run setup to "activate" database entries
+* Can we automate this? What checks are necessary?
+
+### Genotype Files
+
+* [ ] TODO: Describe process
+
+* Copy from source-of-truth (currently Zach's tux01 and/or production).
+* Rsync?
+
+### gn-docs
+
+* [ ] TODO: Describe process
+
+* Not sure changes from other environments should ever be taken over here
+
+### AI Summaries (aka. gnqna)
+
+* [ ] TODO: Describe process
+
+* Update configs (should be once, during container setup)
+
+### Others?
+
+* [ ] TODO: Describe process
diff --git a/topics/systems/update-production-checklist.gmi b/topics/systems/update-production-checklist.gmi
new file mode 100644
index 0000000..b17077b
--- /dev/null
+++ b/topics/systems/update-production-checklist.gmi
@@ -0,0 +1,182 @@
+# Update production checklist
+
+
+# Tasks
+
+* [X] Install underlying Debian
+* [X] Get guix going
+* [ ] Check database
+* [ ] Check gemma working
+* [ ] Check global search
+* [ ] Check authentication
+* [ ] Check sending E-mails
+* [ ] Make sure info.genenetwork.org can reach the DB
+* [ ] Backups
+
+The following are at the system level
+
+* [ ] Make journalctl persistent
+* [ ] Update certificates in CRON
+* [ ] Run trim in CRON
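+
+A sketch covering these three items (the schedules, paths and the use of certbot are assumptions here, not the actual production setup):
+
+```
+# persistent journald logs:
+mkdir -p /var/log/journal && systemctl restart systemd-journald
+
+# root crontab entries (sketch):
+0 3 * * 0 /sbin/fstrim -av        # weekly trim of SSDs
+30 3 * * 1 certbot renew --quiet  # refresh TLS certificates
+```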
+
+# Install underlying Debian
+
+For our production systems we use Debian as a base install. Once installed:
+
+* [X] set up git in /etc and limit permissions to root user
+* [X] add ttyS0 support for grub and kernel - so out-of-band works
+* [X] start ssh server and configure not to use with passwords
+* [X] start nginx and check external networking
+* [ ] set up E-mail routing
+
+It may help to mount the old root if you have it. Now it is on
+
+```
+mount /dev/sdd2 /mnt/old-root/
+```
+
+# Get Guix going
+
+* [X] Install Guix daemon
+* [X] Move /gnu/store to larger partition
+* [X] Update Guix daemon and setup in systemd
+* [X] Make available in /usr/local/guix-profiles
+* [X] Clean up /etc/profile
+
+We can bootstrap with the Debian guix package. Next move the store to a large partition and hard mount it in /etc/fstab with
+
+```
+/export2/gnu /gnu none defaults,bind 0 0
+```
+
+Run guix pull
+
+```
+wrk@tux04:~$ guix pull -p ~/opt/guix-pull --url=https://codeberg.org/guix/guix-mirror.git
+```
+
+Use that to install guix in /usr/local/guix-profiles
+
+```
+guix package -i guix -p /usr/local/guix-profiles/guix
+```
+
+and update the daemon in systemd accordingly. After that I tend to remove /usr/bin/guix.
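+
+A minimal systemd unit for the daemon, assuming the profile path above (a sketch -- compare it against the unit that ships with Guix):
+
+```
+# /etc/systemd/system/guix-daemon.service (sketch)
+[Unit]
+Description=Build daemon for GNU Guix
+
+[Service]
+ExecStart=/usr/local/guix-profiles/guix/bin/guix-daemon --build-users-group=guixbuild
+Environment='GUIX_LOCPATH=/var/guix/profiles/per-user/root/guix-profile/lib/locale'
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
+```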
+
+The Debian installer configures guix. I tend to remove the profiles from /etc/profile so people have a minimal profile.
+
+# Check database
+
+* [X] Install mariadb
+* [ ] Recover database
+* [ ] Test permissions
+* [ ] Mariadb update my.cnf
+
+Basically, recovering the database from a backup and setting permissions is the best start. We usually take the default mariadb unless production is already on a newer version -- in that case we move to a Guix deployment.
+
+On tux02 mariadb-10.5.8 is running. On Debian it is now 10.11.11-0+deb12u1, so we should be good. On Guix it is 10.10 at this point.
+
+```
+apt-get install mariadb-server
+```
+
+Next unpack the database files and set their ownership to the mysql user. And (don't forget) update the /etc/mysql config files.
+
+Restart mysql until you see:
+
+```
+mysql -u webqtlout -p -e "show databases"
++---------------------------+
+| Database                  |
++---------------------------+
+| 20081110_uthsc_dbdownload |
+| db_GeneOntology           |
+| db_webqtl                 |
+| db_webqtl_s               |
+| go                        |
+| information_schema        |
+| kegg                      |
+| mysql                     |
+| performance_schema        |
+| sys                       |
++---------------------------+
+```
+
+=> topics/systems/mariadb/mariadb.gmi
+
+## Recover database
+
+We use borg for backups. First restore the backup on the PCIe drive -- also a good test for overheating!
+
+
+# Check sending E-mails
+
+The swaks package is quite useful to test for a valid receiving host:
+
+```
+swaks --to testing-my-server@gmail.com --server smtp.uthsc.edu
+=== Trying smtp.uthsc.edu:25...
+=== Connected to smtp.uthsc.edu.
+<-  220 mailrouter8.uthsc.edu ESMTP NO UCE
+ -> EHLO tux04.uthsc.edu
+<-  250-mailrouter8.uthsc.edu
+<-  250-PIPELINING
+<-  250-SIZE 26214400
+<-  250-VRFY
+<-  250-ETRN
+<-  250-STARTTLS
+<-  250-ENHANCEDSTATUSCODES
+<-  250-8BITMIME
+<-  250-DSN
+<-  250 SMTPUTF8
+ -> MAIL FROM:<root@tux04.uthsc.edu>
+<-  250 2.1.0 Ok
+ -> RCPT TO:<pjotr2020@thebird.nl>
+<-  250 2.1.5 Ok
+ -> DATA
+<-  354 End data with <CR><LF>.<CR><LF>
+ -> Date: Thu, 06 Mar 2025 08:34:24 +0000
+ -> To: pjotr2020@thebird.nl
+ -> From: root@tux04.uthsc.edu
+ -> Subject: test Thu, 06 Mar 2025 08:34:24 +0000
+ -> Message-Id: <20250306083424.624509@tux04.uthsc.edu>
+ -> X-Mailer: swaks v20201014.0 jetmore.org/john/code/swaks/
+ ->
+ -> This is a test mailing
+ ->
+ ->
+ -> .
+<-  250 2.0.0 Ok: queued as 4157929DD
+ -> QUIT
+<-  221 2.0.0 Bye
+=== Connection closed with remote host.
+```
+
+An exim4 smarthost configuration (/etc/exim4/update-exim4.conf.conf on Debian) can be:
+
+```
+dc_eximconfig_configtype='smarthost'
+dc_other_hostnames='genenetwork.org'
+dc_local_interfaces='127.0.0.1 ; ::1'
+dc_readhost=''
+dc_relay_domains=''
+dc_minimaldns='false'
+dc_relay_nets=''
+dc_smarthost='smtp.uthsc.edu'
+CFILEMODE='644'
+dc_use_split_config='false'
+dc_hide_mailname='false'
+dc_mailname_in_oh='true'
+dc_localdelivery='maildir_home'
+```
+
+And this should work:
+
+```
+swaks --to myemailaddress --from john@uthsc.edu --server localhost
+```
+
+# Backups
+
+* [ ] Create an ibackup user.
+* [ ] Install borg (usually guix version)
+* [ ] Create a borg passphrase
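+
+The convention used elsewhere in this document is a ~/.borg-pass file that is sourced before running borg; its contents are simply (the value is of course your own secret):
+
+```
+export BORG_PASSPHRASE='your-long-random-passphrase'
+```
+
+Keep the file owned by ibackup with mode 0600.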