Diffstat (limited to 'topics/systems')
-rw-r--r--  topics/systems/backup-drops.gmi                              |  34
-rw-r--r--  topics/systems/backups-with-borg.gmi                         | 220
-rw-r--r--  topics/systems/ci-cd.gmi                                     |  48
-rw-r--r--  topics/systems/mariadb/mariadb.gmi                           |  11
-rw-r--r--  topics/systems/mariadb/precompute-mapping-input-data.gmi     |  23
-rw-r--r--  topics/systems/migrate-p2.gmi                                |  12
-rw-r--r--  topics/systems/screenshot-github-webhook.png                 | bin 0 -> 177112 bytes
-rw-r--r--  topics/systems/synchronising-the-different-environments.gmi  |  68
-rw-r--r--  topics/systems/update-production-checklist.gmi               | 182
9 files changed, 578 insertions, 20 deletions
diff --git a/topics/systems/backup-drops.gmi b/topics/systems/backup-drops.gmi
index 191b185..3f81c5a 100644
--- a/topics/systems/backup-drops.gmi
+++ b/topics/systems/backup-drops.gmi
@@ -4,6 +4,10 @@ To make backups we use a combination of sheepdog, borg, sshfs, rsync. sheepdog i
This system proves pretty resilient over time. Only on the synology server I can't get it to work because of some CRON permission issue.
+For doing the actual backups see
+
+=> ./backups-with-borg.gmi
+
# Tags
* assigned: pjotrp
@@ -13,7 +17,7 @@ This system proves pretty resilient over time. Only on the synology server I can
## Borg backups
-It is advised to use a backup password and not store that on the remote.
+Despite our precautions, it is advised to use a backup password and *not* store it on the remote.
## Running sheepdog on rabbit
@@ -59,14 +63,14 @@ where remote can be an IP address.
Warning: if you introduce this `AllowUsers` command all users should be listed or people may get locked out of the machine.
-Next create a special key on the backup machine's ibackup user (just hit enter):
+Next create a special password-less key on the backup machine's ibackup user (just hit enter):
```
su ibackup
ssh-keygen -t ecdsa -f $HOME/.ssh/id_ecdsa_backup
```
-and copy the public key into the remote /home/bacchus/.ssh/authorized_keys
+and copy the public key into the remote /home/bacchus/.ssh/authorized_keys.
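+
+For example (a sketch; on a chrooted sftp-only account you may have to append the key by hand on the drop server):
+
+```
+cat ~/.ssh/id_ecdsa_backup.pub
+# paste the printed line into /home/bacchus/.ssh/authorized_keys on the remote
+```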
Now test it from the backup server with
@@ -82,13 +86,20 @@ On the drop server you can track messages by
tail -40 /var/log/auth.log
```
+or on a recent Linux with systemd
+
+```
+journalctl -r
+```
+
Next
```
ssh -v -i ~/.ssh/id_ecdsa_backup bacchus@dropserver
```
-should give a Broken pipe(!). In auth.log you may see something like
+should give a Broken pipe(!) or -- more recently -- it says `This service allows sftp connections only`.
+When running sshd with a verbose switch you may see something like
fatal: bad ownership or modes for chroot directory component "/export/backup/"
@@ -110,6 +121,19 @@ chown bacchus.bacchus backup/bacchus/drop/
chmod 0700 backup/bacchus/drop/
```
+Another error may be:
+
+```
+fusermount3: mount failed: Operation not permitted
+```
+
+This means you need to set the setuid bit on the fusermount3 command (a bit nasty to do in Guix). On Debian:
+
+```
+apt-get install fuse3 sshfs   # the package is 'fuse' on older Debian releases
+chmod 4755 /usr/bin/fusermount3   # or /usr/bin/fusermount, depending on the package
+```
+
If auth.log says error: /dev/pts/11: No such file or directory on ssh, or received disconnect (...) disconnected by user we are good to go!
Note: at this stage it may pay to track the system log with
@@ -171,3 +195,5 @@ sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,IdentityFile=~/.
The recent scripts can be found at
=> https://github.com/genenetwork/gn-deploy-servers/blob/master/scripts/tux01/backup_drop.sh
+
+# borg-borg
diff --git a/topics/systems/backups-with-borg.gmi b/topics/systems/backups-with-borg.gmi
new file mode 100644
index 0000000..1ad0112
--- /dev/null
+++ b/topics/systems/backups-with-borg.gmi
@@ -0,0 +1,220 @@
+# Borg backups
+
+We use borg for backups. Borg is an amazing tool and after 25+ years of making backups it just feels right.
+With the new tux04 production install we need to organize backups off-site. The first step is to create a
+borg runner using sheepdog -- we use sheepdog for monitoring success/failure.
+Sheepdog essentially wraps a Unix command and sends a report to a local or remote redis instance.
+Sheepdog also includes a web server for output:
+
+=> http://sheepdog.genenetwork.org/sheepdog/status.html
+
+which I run on one of my machines.
+
+# Tags
+
+* assigned: pjotrp
+* keywords: systems, backup, sheepdog, database
+
+# Install borg
+
+Usually I use a version of borg from guix. This should really be done as the borg user (ibackup).
+
+```
+mkdir ~/opt
+guix package -i borg -p ~/opt/borg
+tux04:~$ ~/opt/borg/bin/borg --version
+ 1.2.2
+```
+
+# Create a new backup dir and user
+
+The backup should live on a different disk from the things we back up, so when that disk fails we have another.
+
+The SQL database lives on /export and the containers live on /export2. /export3 is a largish slow drive, so perfect.
+
+By convention I point /export/backup to the real backup dir on /export3/backup/borg/. Another convention is that we use an ibackup user which has the backup passphrase in ~/.borg-pass. As root:
+
+```
+# /export/backup is assumed to point at the real backup location on /export3 (see above)
+mkdir /export/backup/borg
+chown ibackup:ibackup /export/backup/borg
+chown ibackup:ibackup /home/ibackup/.borg-pass
+su ibackup
+```
+
+Now you should be able to load the passphrase and create the backup dir
+
+```
+id
+ uid=1003(ibackup)
+. ~/.borg-pass
+cd /export/backup/borg
+~/opt/borg/bin/borg init --encryption=repokey-blake2 genenetwork
+```
+
+Now we can run our first backup. Note that ibackup should be a member of the mysql and gn groups, i.e. in /etc/group:
+
+```
+mysql:x:116:ibackup
+```
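+
+If that membership is missing it can be added with something like (a sketch):
+
+```
+usermod -a -G mysql,gn ibackup   # log in again afterwards for the new groups to apply
+```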
+
+# First backup
+
+Run the backup the first time:
+
+```
+id
+ uid=1003(ibackup) groups=1003(ibackup),116(mysql)
+~/opt/borg/bin/borg create --progress --stats genenetwork::first-backup /export/mysql/database/*
+```
+
+You may first need to update permissions to give group access
+
+```
+chmod g+rx -R /var/lib/mysql/*
+```
+
+When that works borg reports:
+
+```
+Archive name: first-backup
+Archive fingerprint: 376d32fda9738daa97078fe4ca6d084c3fa9be8013dc4d359f951f594f24184d
+Time (start): Sat, 2025-02-08 04:46:48
+Time (end): Sat, 2025-02-08 05:30:01
+Duration: 43 minutes 12.87 seconds
+Number of files: 799
+Utilization of max. archive size: 0%
+------------------------------------------------------------------------------
+ Original size Compressed size Deduplicated size
+This archive: 534.24 GB 238.43 GB 237.85 GB
+All archives: 534.24 GB 238.43 GB 238.38 GB
+ Unique chunks Total chunks
+Chunk index: 200049 227228
+------------------------------------------------------------------------------
+```
+
+50% compression is not bad. borg is incremental so it will only back up differences the next round.
+
+Once borg works we could run a CRON job. But we should use the sheepdog monitor to make sure backups keep running and failures do not go unnoticed.
+
+# Using the sheepdog
+
+=> https://github.com/pjotrp/deploy sheepdog code
+
+## Clone sheepdog
+
+=> https://github.com/pjotrp/deploy#install sheepdog install
+
+Essentially clone the repo so it shows up in ~/deploy
+
+```
+cd /home/ibackup
+git clone https://github.com/pjotrp/deploy.git
+/export/backup/scripts/tux04/backup-tux04.sh
+```
+
+## Setup redis
+
+All sheepdog messages get pushed to redis. You can run it locally or remotely.
+
+By default we use redis, but syslog and others may also be used. The advantage of redis is that it is not bound to the same host, can cross firewalls using an ssh reverse tunnel, and is easy to query.
+
+=> https://github.com/pjotrp/deploy#install sheepdog install
+
+In our case we use redis on a remote host and the results get displayed by a webserver. Also some people get E-mail updates on failure. The configuration is in
+
+```
+/home/ibackup# cat .config/sheepdog/sheepdog.conf
+{
+ "redis": {
+ "host" : "remote-host",
+ "password": "something"
+ }
+}
+```
+
+If you see localhost with port 6377 it is probably a reverse tunnel setup:
+
+=> https://github.com/pjotrp/deploy#redis-reverse-tunnel
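+
+Such a tunnel can be set up along these lines (a sketch, run on the machine hosting redis; see the deploy README for the actual recipe):
+
+```
+# expose the local redis (6379) as port 6377 on the backup host
+ssh -N -R 6377:localhost:6379 ibackup@tux04
+```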
+
+Update the fields according to what we use. The main thing is that this file defines the sheepdog->redis connector. If you also use sheepdog as another user you'll need to add a config for that user too.
+
+Sheepdog should show a warning when you configure redis and it is not connecting.
+
+## Scripts
+
+Typically I run the cron job from root CRON so people can find it. Still it is probably a better idea to use an ibackup CRON. In my version a script is run that also captures output:
+
+```cron root
+0 6 * * * /bin/su ibackup -c /export/backup/scripts/tux04/backup-tux04.sh >> ~/cron.log 2>&1
+```
+
+The script contains something like
+
+```bash
+#! /bin/bash
+# refuse to run as root; the backup must run as the ibackup user
+if [ "$EUID" -eq 0 ]; then
+  echo "Please do not run as root. Run as: su ibackup -c $0"
+  exit 1
+fi
+rundir=$(dirname "$0")
+# ---- for sheepdog
+source "$rundir/sheepdog_env.sh"
+cd "$rundir"
+sheepdog_borg.rb -t borg-tux04-sql --group ibackup -v -b /export/backup/borg/genenetwork /export/mysql/database/*
+```
+
+and the accompanying sheepdog_env.sh
+
+```
+export GEM_PATH=/home/ibackup/opt/deploy/lib/ruby/vendor_ruby
+export PATH=/home/ibackup/opt/deploy/deploy/bin:/home/wrk/opt/deploy/bin:$PATH
+```
+
+If it reports
+
+```
+/export/backup/scripts/tux04/backup-tux04.sh: line 11: /export/backup/scripts/tux04/sheepdog_env.sh: No such file or directory
+```
+
+you need to install sheepdog first.
+
+If everything shows green (and the run takes some time) we made a backup. Check the backup with
+
+```
+ibackup@tux04:/export/backup/borg$ borg list genenetwork/
+first-backup Sat, 2025-02-08 04:39:50 [58715b883c080996ab86630b3ae3db9bedb65e6dd2e83977b72c8a9eaa257cdf]
+borg-tux04-sql-20250209-01:43-Sun Sun, 2025-02-09 01:43:23 [5e9698a032143bd6c625cdfa12ec4462f67218aa3cedc4233c176e8ffb92e16a]
+```
+and you should see the latest. The contents with all files should be visible with
+
+```
+borg list genenetwork::borg-tux04-sql-20250209-01:43-Sun
+```
+
+Make sure you see the actual files and not just a symlink.
+
+# More backups
+
+Our production server runs databases and file stores that need to be backed up too.
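+
+Each of these can get its own sheepdog_borg.rb line in the backup script, e.g. (a sketch; the file-store path and tag here are placeholders):
+
+```
+sheepdog_borg.rb -t borg-tux04-files --group ibackup -v -b /export/backup/borg/genenetwork-files /export/some-file-store
+```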
+
+# Drop backups
+
+Once backups work it is useful to copy them to a remote server, so when the machine stops functioning we have another chance at recovery. See
+
+=> ./backup-drops.gmi
+
+# Recovery
+
+With tux04 we ran into a problem where all disks were getting corrupted(!) Probably due to the RAID controller, but we still need to figure that one out.
+
+Anyway, we have to assume the DB is corrupt. Files are corrupt AND the backups are corrupt. Borg backup has checksums which you can verify with
+
+```
+borg check repo
+```
+
+It also has a --repair switch, which we needed to remove some faults in the backup itself:
+
+```
+borg check --repair repo
+```
diff --git a/topics/systems/ci-cd.gmi b/topics/systems/ci-cd.gmi
index 6aa17f2..a1ff2e3 100644
--- a/topics/systems/ci-cd.gmi
+++ b/topics/systems/ci-cd.gmi
@@ -31,7 +31,7 @@ Arun has figured out the CI part. It runs a suitably configured laminar CI servi
CD hasn't been figured out. Normally, Guix VMs and containers created by `guix system` can only access the store read-only. Since containers don't have write access to the store, you cannot `guix build' from within a container or deploy new containers from within a container. This is a problem for CD. How do you make Guix containers have write access to the store?
-Another alternative for CI/ CID were to have the quick running tests, e.g unit tests, run on each commit to branch "main". Once those are successful, the CI/CD system we choose should automatically pick the latest commit that passed the quick running tests for for further testing and deployment, maybe once an hour or so. Once the next battery of tests is passed, the CI/CD system will create a build/artifact to be deployed to staging and have the next battery of tests runs against it. If that passes, then that artifact could be deployed to production, and details on the commit and
+Another alternative for CI/CD would be to have the quick running tests, e.g. unit tests, run on each commit to branch "main". Once those are successful, the CI/CD system we choose should automatically pick the latest commit that passed the quick running tests for further testing and deployment, maybe once an hour or so. Once the next battery of tests passes, the CI/CD system will create a build/artifact to be deployed to staging and have the next battery of tests run against it. If that passes, then that artifact could be deployed to production, and details on the commit and
#### Possible Steps
@@ -90,3 +90,49 @@ This contains a check-list of things that need to be done:
=> /topics/systems/orchestration Orchestration
=> /issues/broken-cd Broken-cd (Resolved)
+
+## Adding a web-hook
+
+### Github hooks
+
+IIRC GitHub Actions run artifacts inside GitHub's infrastructure. We use webhooks instead, e.g.
+
+Update the hook at
+
+=> https://github.com/genenetwork/genenetwork3/settings/hooks
+
+=> ./screenshot-github-webhook.png
+
+To trigger CI manually, run this with the project name:
+
+```
+curl https://ci.genenetwork.org/hooks/example-gn3
+```
+
+For gemtext we have a GitHub hook that adds a forge-project; it looks like
+
+```lisp
+(define gn-gemtext-threads-project
+ (forge-project
+ (name "gn-gemtext-threads")
+ (repository "https://github.com/genenetwork/gn-gemtext-threads/")
+ (ci-jobs (list (forge-laminar-job
+ (name "gn-gemtext-threads")
+ (run (with-packages (list nss-certs openssl)
+ (with-imported-modules '((guix build utils))
+ #~(begin
+ (use-modules (guix build utils))
+
+ (setenv "LC_ALL" "en_US.UTF-8")
+ (invoke #$(file-append tissue "/bin/tissue")
+ "pull" "issues.genenetwork.org"))))))))
+ (ci-jobs-trigger 'webhook)))
+```
+
+Guix forge can be found at
+
+=> https://git.systemreboot.net/guix-forge/
+
+### git.genenetwork.org hooks
+
+TBD
diff --git a/topics/systems/mariadb/mariadb.gmi b/topics/systems/mariadb/mariadb.gmi
index ae0ab19..ec8b739 100644
--- a/topics/systems/mariadb/mariadb.gmi
+++ b/topics/systems/mariadb/mariadb.gmi
@@ -16,6 +16,8 @@ To install Mariadb (as a container) see below and
Start the client and:
```
+mysql
+show databases;
MariaDB [db_webqtl]> show binary logs;
+-----------------------+-----------+
| Log_name | File_size |
@@ -60,4 +62,11 @@ Stop the running mariadb-guix.service. Restore the latest backup archive and ove
=> https://www.borgbackup.org/ Borg
=> https://borgbackup.readthedocs.io/en/stable/ Borg documentation
-#
+# Upgrade mariadb
+
+It is wise to upgrade mariadb once in a while. In a disaster recovery it is better to move forward in versions too.
+Before upgrading make sure there is a decent backup of the current setup.
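+
+On a Debian-packaged install the upgrade itself is roughly (a sketch; a Guix container is upgraded by rebuilding the container instead):
+
+```
+apt-get update
+apt-get install --only-upgrade mariadb-server
+mariadb-upgrade     # older installs ship this as mysql_upgrade
+```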
+
+See also
+
+=> issues/systems/tux04-disk-issues.gmi
diff --git a/topics/systems/mariadb/precompute-mapping-input-data.gmi b/topics/systems/mariadb/precompute-mapping-input-data.gmi
index 0c89fe5..977120d 100644
--- a/topics/systems/mariadb/precompute-mapping-input-data.gmi
+++ b/topics/systems/mariadb/precompute-mapping-input-data.gmi
@@ -49,10 +49,29 @@ The original reaper precompute lives in
=> https://github.com/genenetwork/genenetwork2/blob/testing/scripts/maintenance/QTL_Reaper_v6.py
-This script first fetches inbredsets
+More recent incarnations are at v8, including a PublishData version that can be found in
+
+=> https://github.com/genenetwork/genenetwork2/tree/testing/scripts/maintenance
+
+Note that these live on the space server:
+
+```
+cd /mount/space2/lily-clone/acenteno/GN-Data
+ls -l
+python QTL_Reaper_v8_space_good.py 116
+--
+python UPDATE_Mean_MySQL_tab.py
+cd /mount/space2/lily-clone/gnshare/gn/web/webqtl/maintainance
+ls -l
+python QTL_Reaper_cal_lrs.py 7
+```
+
+The first task is to prepare an update script that can run a set at a time and compute GEMMA output (instead of reaper).
+
+The script first fetches inbredsets
```
- select Id,InbredSetId,InbredSetName,Name,SpeciesId,FullName,public,MappingMethodId,GeneticType,Family,FamilyOrder,MenuOrderId,InbredSetCode from InbredSet LIMIT 5;
+select Id,InbredSetId,InbredSetName,Name,SpeciesId,FullName,public,MappingMethodId,GeneticType,Family,FamilyOrder,MenuOrderId,InbredSetCode from InbredSet LIMIT 5;
+----+-------------+-------------------+----------+-----------+-------------------+--------+-----------------+-------------+--------------------------------------------------+-------------+-------------+---------------+
| Id | InbredSetId | InbredSetName | Name | SpeciesId | FullName | public | MappingMethodId | GeneticType | Family | FamilyOrder | MenuOrderId | InbredSetCode |
+----+-------------+-------------------+----------+-----------+-------------------+--------+-----------------+-------------+--------------------------------------------------+-------------+-------------+---------------+
diff --git a/topics/systems/migrate-p2.gmi b/topics/systems/migrate-p2.gmi
deleted file mode 100644
index c7fcb90..0000000
--- a/topics/systems/migrate-p2.gmi
+++ /dev/null
@@ -1,12 +0,0 @@
-* Penguin2 crash
-
-This week the boot partition of P2 crashed. We have a few lessons here, not least having a fallback for all services ;)
-
-* Tasks
-
-- [ ] setup space.uthsc.edu for GN2 development
-- [ ] update DNS to tux02 128.169.4.52 and space 128.169.5.175
-- [ ] move CI/CD to tux02
-
-
-* Notes
diff --git a/topics/systems/screenshot-github-webhook.png b/topics/systems/screenshot-github-webhook.png
new file mode 100644
index 0000000..08feed3
--- /dev/null
+++ b/topics/systems/screenshot-github-webhook.png
Binary files differ
diff --git a/topics/systems/synchronising-the-different-environments.gmi b/topics/systems/synchronising-the-different-environments.gmi
new file mode 100644
index 0000000..207b234
--- /dev/null
+++ b/topics/systems/synchronising-the-different-environments.gmi
@@ -0,0 +1,68 @@
+# Synchronising the Different Environments
+
+## Tags
+
+* status: open
+* priority:
+* type: documentation
+* assigned: fredm
+* keywords: doc, docs, documentation
+
+## Introduction
+
+We have different environments we run for various reasons, e.g.
+
+* Production: This is the user-facing environment. This is what GeneNetwork is about.
+* gn2-fred: production-adjacent. It is meant to test out changes before they get to production. It is **NOT** meant for users.
+* CI/CD: Used for development. The latest commits get auto-deployed here. It's the first place (outside of developer machines) where errors and breakages are caught and/or revealed. This will break a lot. Do not expose to users!
+* staging: Uploader environment. This is where Felix, Fred and Arthur flesh out the upload process and tasks, and also test out the uploader.
+
+These different environments demand synchronisation, in order to have mostly similar results and failure modes.
+
+## Synchronisation of the Environments
+
+### Main Database: MariaDB
+
+* [ ] TODO: Describe process
+
+=> https://issues.genenetwork.org/topics/systems/restore-backups Extract borg archive
+* Automate? Will probably need some checks for data sanity.
+
+### Authorisation Database
+
+* [ ] TODO: Describe process
+
+* Copy backup from production
+* Update/replace GN2 client configs in database
+* What other things?
+
+### Virtuoso/RDF
+
+* [ ] TODO: Describe process
+
+* Copy TTL (Turtle) files from (where?). Production might not always be the latest source of TTL files.
+=> https://issues.genenetwork.org/issues/set-up-virtuoso-on-production Run setup to "activate" database entries
+* Can we automate this? What checks are necessary?
+
+### Genotype Files
+
+* [ ] TODO: Describe process
+
+* Copy from source-of-truth (currently Zach's tux01 and/or production).
+* Rsync?
+
+### gn-docs
+
+* [ ] TODO: Describe process
+
+* Not sure changes from other environments should ever take effect here
+
+### AI Summaries (aka. gnqna)
+
+* [ ] TODO: Describe process
+
+* Update configs (should be once, during container setup)
+
+### Others?
+
+* [ ] TODO: Describe process
diff --git a/topics/systems/update-production-checklist.gmi b/topics/systems/update-production-checklist.gmi
new file mode 100644
index 0000000..b17077b
--- /dev/null
+++ b/topics/systems/update-production-checklist.gmi
@@ -0,0 +1,182 @@
+# Update production checklist
+
+
+# Tasks
+
+* [X] Install underlying Debian
+* [X] Get guix going
+* [ ] Check database
+* [ ] Check gemma working
+* [ ] Check global search
+* [ ] Check authentication
+* [ ] Check sending E-mails
+* [ ] Make sure info.genenetwork.org can reach the DB
+* [ ] Backups
+
+The following are at the system level (minimal sketches follow the list):
+
+* [ ] Make journalctl persistent
+* [ ] Update certificates in CRON
+* [ ] Run trim in CRON
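+
+These sketches assume the stock systemd journald, certbot and util-linux tools; adjust to what is actually installed:
+
+```
+# persistent journal: also set Storage=persistent in /etc/systemd/journald.conf
+mkdir -p /var/log/journal
+systemctl restart systemd-journald
+
+# certificate renewal and SSD trim from root's crontab (or use the certbot/fstrim systemd timers)
+# 0 4 * * * certbot renew --quiet
+# 0 5 * * 0 fstrim -av
+```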
+
+# Install underlying Debian
+
+For our production systems we use Debian as a base install. Once installed:
+
+* [X] set up git in /etc and limit permissions to root user
+* [X] add ttyS0 support for grub and kernel, so out-of-band access works (see the sketch after this list)
+* [X] start ssh server and configure not to use with passwords
+* [X] start nginx and check external networking
+* [ ] set up E-mail routing
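+
+For the serial console item above, the grub side looks roughly like this (a sketch; baud rate and serial unit depend on the out-of-band controller):
+
+```
+# /etc/default/grub
+GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200n8"
+GRUB_TERMINAL="serial console"
+GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
+# then run: update-grub
+```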
+
+It may help to mount the old root if you have it. Now it is on
+
+```
+mount /dev/sdd2 /mnt/old-root/
+```
+
+# Get Guix going
+
+* [X] Install Guix daemon
+* [X] Move /gnu/store to larger partition
+* [X] Update Guix daemon and setup in systemd
+* [X] Make available in /usr/local/guix-profiles
+* [X] Clean up /etc/profile
+
+We can bootstrap with the Debian guix package. Next move the store to a large partition and hard mount it in /etc/fstab with
+
+```
+/export2/gnu /gnu none defaults,bind 0 0
+```
+
+Run guix pull
+
+```
+wrk@tux04:~$ guix pull -p ~/opt/guix-pull --url=https://codeberg.org/guix/guix-mirror.git
+```
+
+Use that to install guix in /usr/local/guix-profiles
+
+```
+guix package -i guix -p /usr/local/guix-profiles/guix
+```
+
+and update the daemon in systemd accordingly. After that I tend to remove /usr/bin/guix.
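+
+Updating the daemon boils down to something like this (a sketch; the unit name and the --build-users-group flag come from the stock guix-daemon service):
+
+```
+systemctl edit --full guix-daemon
+#   ExecStart=/usr/local/guix-profiles/guix/bin/guix-daemon --build-users-group=guixbuild
+systemctl daemon-reload
+systemctl restart guix-daemon
+```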
+
+The Debian installer configures guix. I tend to remove the profiles from /etc/profile so people have a minimal profile.
+
+# Check database
+
+* [X] Install mariadb
+* [ ] Recover database
+* [ ] Test permissions
+* [ ] Mariadb update my.cnf
+
+Basically, recovering the database from a backup and setting permissions is the best start. We usually take the default mariadb unless production is already on a newer version -- in that case we move to a guix deployment.
+
+On tux02 mariadb-10.5.8 is running. On Debian it is now 10.11.11-0+deb12u1, so we should be good. On Guix it is 10.10 at this point.
+
+```
+apt-get install mariadb-server
+```
+
+Next unpack the database files and set their ownership to the mysql user. And (don't forget) update the /etc/mysql config files.
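+
+That step looks roughly like this (a sketch; the source path depends on where the borg archive was extracted and on the datadir set in /etc/mysql):
+
+```
+rsync -a /export/restore/export/mysql/database/ /var/lib/mysql/
+chown -R mysql:mysql /var/lib/mysql
+systemctl restart mariadb
+```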
+
+Restart mysql until you see:
+
+```
+mysql -u webqtlout -p -e "show databases"
++---------------------------+
+| Database |
++---------------------------+
+| 20081110_uthsc_dbdownload |
+| db_GeneOntology |
+| db_webqtl |
+| db_webqtl_s |
+| go |
+| information_schema |
+| kegg |
+| mysql |
+| performance_schema |
+| sys |
++---------------------------+
+```
+
+=> topics/systems/mariadb/mariadb.gmi
+
+## Recover database
+
+We use borg for backups. First restore the backup on the PCIe drive -- that also doubles as a test for overheating!
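+
+Restoring is a plain borg extract of the SQL archive, e.g. (a sketch; the archive name is the one from the backup doc and the target directory is a placeholder):
+
+```
+. ~/.borg-pass
+cd /export/restore
+borg extract /export/backup/borg/genenetwork::borg-tux04-sql-20250209-01:43-Sun
+```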
+
+
+# Check sending E-mails
+
+The swaks package is quite useful to test for a valid receive host:
+
+```
+swaks --to testing-my-server@gmail.com --server smtp.uthsc.edu
+=== Trying smtp.uthsc.edu:25...
+=== Connected to smtp.uthsc.edu.
+<- 220 mailrouter8.uthsc.edu ESMTP NO UCE
+ -> EHLO tux04.uthsc.edu
+<- 250-mailrouter8.uthsc.edu
+<- 250-PIPELINING
+<- 250-SIZE 26214400
+<- 250-VRFY
+<- 250-ETRN
+<- 250-STARTTLS
+<- 250-ENHANCEDSTATUSCODES
+<- 250-8BITMIME
+<- 250-DSN
+<- 250 SMTPUTF8
+ -> MAIL FROM:<root@tux04.uthsc.edu>
+<- 250 2.1.0 Ok
+ -> RCPT TO:<pjotr2020@thebird.nl>
+<- 250 2.1.5 Ok
+ -> DATA
+<- 354 End data with <CR><LF>.<CR><LF>
+ -> Date: Thu, 06 Mar 2025 08:34:24 +0000
+ -> To: pjotr2020@thebird.nl
+ -> From: root@tux04.uthsc.edu
+ -> Subject: test Thu, 06 Mar 2025 08:34:24 +0000
+ -> Message-Id: <20250306083424.624509@tux04.uthsc.edu>
+ -> X-Mailer: swaks v20201014.0 jetmore.org/john/code/swaks/
+ ->
+ -> This is a test mailing
+ ->
+ ->
+ -> .
+<- 250 2.0.0 Ok: queued as 4157929DD
+ -> QUIT
+<- 221 2.0.0 Bye === Connection closed with remote host
+```
+
+An exim configuration can be
+
+```
+dc_eximconfig_configtype='smarthost'
+dc_other_hostnames='genenetwork.org'
+dc_local_interfaces='127.0.0.1 ; ::1'
+dc_readhost=''
+dc_relay_domains=''
+dc_minimaldns='false'
+dc_relay_nets=''
+dc_smarthost='smtp.uthsc.edu'
+CFILEMODE='644'
+dc_use_split_config='false'
+dc_hide_mailname='false'
+dc_mailname_in_oh='true'
+dc_localdelivery='maildir_home'
+```
+
+And this should work:
+
+```
+swaks --to myemailaddress --from john@uthsc.edu --server localhost
+```
+
+# Backups
+
+* [ ] Create an ibackup user.
+* [ ] Install borg (usually guix version)
+* [ ] Create a borg passphrase (see the sketch below)
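+
+The passphrase lives in a file that the backup scripts source (see backups-with-borg.gmi), e.g. (a sketch):
+
+```
+# as root, once; borg reads BORG_PASSPHRASE from the environment
+install -o ibackup -g ibackup -m 600 /dev/null /home/ibackup/.borg-pass
+echo "export BORG_PASSPHRASE='replace-me'" >> /home/ibackup/.borg-pass
+```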