Diffstat (limited to 'issues'):

 issues/genenetwork2/broken-collections-features.gmi                    |  44
 issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi |  15
 issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip      | bin 0 -> 143152 bytes
 issues/genenetwork3/ctl-maps-error.gmi                                 |  46
 issues/genenetwork3/generate-heatmaps-failing.gmi                      |  34
 issues/genenetwork3/rqtl2-mapping-error.gmi                            |  42
 issues/gn-uploader/AuthorisationError-gn-uploader.gmi                  |  66
 issues/gn-uploader/replace-redis-with-sqlite3.gmi                      |  17
 issues/gnqa/implement-no-login-requirement-for-gnqa.gmi                |  20
 issues/production-container-mechanical-rob-failure.gmi                 |   2
 issues/systems/apps.gmi                                                |  22
 issues/systems/octoraid-storage.gmi                                    |  18
 issues/systems/penguin2-raid5.gmi                                      |  61
 issues/systems/tux02-production.gmi                                    |   4
 issues/systems/tux04-disk-issues.gmi                                   | 277
 issues/systems/tux04-production.gmi                                    |   4

 16 files changed, 664 insertions(+), 8 deletions(-)
diff --git a/issues/genenetwork2/broken-collections-features.gmi b/issues/genenetwork2/broken-collections-features.gmi
new file mode 100644
index 0000000..4239929
--- /dev/null
+++ b/issues/genenetwork2/broken-collections-features.gmi
@@ -0,0 +1,44 @@
+# Broken Collections Features
+
+## Tags
+
+* type: bug
+* status: open
+* priority: high
+* assigned: zachs, fredm
+* keywords: gn2, genenetwork2, genenetwork 2, collections
+
+## Description
+
+Some features on the search results page and/or the collections page are broken. These are:
+
+* "CTL" feature
+* "MultiMap" feature
+* "Partial Correlations" feature
+* "Generate Heatmap" feature
+
+### Reproduce Issue
+
+* Go to https://genenetwork.org
+* Select "Mouse (Mus musculus, mm10) for "Species"
+* Select "BXD Family" for "Group"
+* Select "Traits and Cofactors" for "Type"
+* Select "BXD Published Phenotypes" for "Dataset"
+* Type "locomotion" in the "Get Any" field (without the quotes)
+* Click "Search"
+* In the results page, select the traits with the following "Record" values: "BXD_10050", "BXD_10051", "BXD_10088", "BXD_10091", "BXD_10092", "BXD_10455", "BXD_10569", "BXD_10570", "BXD_11316", "BXD_11317"
+* Click the "Add" button and add them to a new collection
+* In the resulting collections page, click the button for any of the listed failing features above
+
+### Failure modes
+
+* The "CTL" and "WCGNA" features have a failure mode that might have been caused by recent changes making use of AJAX calls, rather than submitting the form manually.
+* The "MultiMap" and "Generate Heatmap" features raise exceptions that need to be investigated and resolved
+* The "Partial Correlations" feature seems to run forever
+
+## Break-out Issues
+
+We break out the issues above into separate pages to track the progress of the fixes for each feature separately.
+
+=> /issues/genenetwork3/ctl-maps-error
+=> /issues/genenetwork3/generate-heatmaps-failing
diff --git a/issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi b/issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi
new file mode 100644
index 0000000..0c8e9c8
--- /dev/null
+++ b/issues/genenetwork2/fix-display-for-time-consumed-for-correlations.gmi
@@ -0,0 +1,15 @@
+# Fix Display for the Time Consumed for Correlations
+
+## Tags
+
+* type: bug
+* status: closed, completed
+* priority: low
+* assigned: @alexm, @bonz
+* keywords: gn2, genenetwork2, genenetwork 2, gn3, genenetwork3, genenetwork 3, correlations, time display
+
+## Description
+
+The breakdown of the time consumed by the correlations computations, displayed at the bottom of the page, is not representative of reality. The time that GeneNetwork3 (or the background process) takes for the computations is not actually represented in the breakdown, leading to wildly inaccurate displays of the total time.
+
+This will need to be fixed.
diff --git a/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip b/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip
new file mode 100644
index 0000000..7681b88
--- /dev/null
+++ b/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip
Binary files differ
diff --git a/issues/genenetwork3/ctl-maps-error.gmi b/issues/genenetwork3/ctl-maps-error.gmi
new file mode 100644
index 0000000..6726357
--- /dev/null
+++ b/issues/genenetwork3/ctl-maps-error.gmi
@@ -0,0 +1,46 @@
+# CTL Maps Error
+
+## Tags
+
+* type: bug
+* status: open
+* priority: high
+* assigned: alexm, zachs, fredm
+* keywords: CTL, CTL Maps, gn3, genenetwork3, genenetwork 3
+
+## Description
+
+When trying to run the CTL Maps feature on the collections page, as described in
+=> /issues/genenetwork2/broken-collections-features
+
+we get an error on the results page of the form:
+
+```
+{'error': '{\'code\': 1, \'output\': \'Loading required package: MASS\\nLoading required package: parallel\\nLoading required package: qtl\\nThere were 13 warnings (use warnings() to see them)\\nError in xspline(x, y, shape = 0, lwd = lwd, border = col, lty = lty, : \\n invalid value specified for graphical parameter "lwd"\\nCalls: ctl.lineplot -> draw.spline -> xspline\\nExecution halted\\n\'}'}
+```
+
+On the CLI, the same error is rendered:
+```
+Loading required package: MASS
+Loading required package: parallel
+Loading required package: qtl
+There were 13 warnings (use warnings() to see them)
+Error in xspline(x, y, shape = 0, lwd = lwd, border = col, lty = lty, :
+ invalid value specified for graphical parameter "lwd"
+Calls: ctl.lineplot -> draw.spline -> xspline
+Execution halted
+```
+
+On my local development machine, the command run was
+```
+Rscript /home/frederick/genenetwork/genenetwork3/scripts/ctl_analysis.R /tmp/01828928-26e6-4cad-bbc8-59fd7a7977de.json
+```
+
+Here is a zipped version of the json file (follow the link and click download):
+=> https://github.com/genenetwork/gn-gemtext-threads/blob/main/issues/genenetwork3/01828928-26e6-4cad-bbc8-59fd7a7977de.json.zip
+
+After troubleshooting for a while, I suspect this is the offending code:
+=> https://github.com/genenetwork/genenetwork3/blob/27d9c9d6ef7f37066fc63af3d6585bf18aeec925/scripts/ctl_analysis.R#L79-L80
+
+=> https://cran.r-project.org/web/packages/ctl/ctl.pdf The manual for the ctl library
+
+The manual indicates that our call above might be okay, which might mean something changed in the dependencies that the ctl library uses.
diff --git a/issues/genenetwork3/generate-heatmaps-failing.gmi b/issues/genenetwork3/generate-heatmaps-failing.gmi
index 03256e6..522dc27 100644
--- a/issues/genenetwork3/generate-heatmaps-failing.gmi
+++ b/issues/genenetwork3/generate-heatmaps-failing.gmi
@@ -28,3 +28,37 @@ On https://gn2-fred.genenetwork.org the heatmaps fails with a note ("ERROR: unde
=> https://github.com/scipy/scipy/issues/19972
This issue should not be present with python-plotly@5.20.0 but since guix-bioinformatics pins the guix version to `b0b988c41c9e0e591274495a1b2d6f27fcdae15a`, we are not able to pull in newer versions of packages from guix.
+
+
+### Update 2025-04-08T10:59CDT
+
+Got the following error when I ran the background command manually:
+
+```
+$ export RUST_BACKTRACE=full
+$ /gnu/store/dp4zq4xiap6rp7h6vslwl1n52bd8gnwm-profile/bin/qtlreaper --geno /home/frederick/genotype_files/genotype/genotype/BXD.geno --n_permutations 1000 --traits /tmp/traits_test_file_n2E7V06Cx7.txt --main_output /tmp/qtlreaper/main_output_NGVW4sfYha.txt --permu_output /tmp/qtlreaper/permu_output_MJnzLbrsrC.txt
+thread 'main' panicked at src/regression.rs:216:25:
+index out of bounds: the len is 20 but the index is 20
+stack backtrace:
+ 0: 0x61399d77d46d - <unknown>
+ 1: 0x61399d7b5e13 - <unknown>
+ 2: 0x61399d78b649 - <unknown>
+ 3: 0x61399d78f26f - <unknown>
+ 4: 0x61399d78ee98 - <unknown>
+ 5: 0x61399d78f815 - <unknown>
+ 6: 0x61399d77d859 - <unknown>
+ 7: 0x61399d77d679 - <unknown>
+ 8: 0x61399d78f3f4 - <unknown>
+ 9: 0x61399d6f4063 - <unknown>
+ 10: 0x61399d6f41f7 - <unknown>
+ 11: 0x61399d708f18 - <unknown>
+ 12: 0x61399d6f6e4e - <unknown>
+ 13: 0x61399d6f9e93 - <unknown>
+ 14: 0x61399d6f9e89 - <unknown>
+ 15: 0x61399d78e505 - <unknown>
+ 16: 0x61399d6f8d55 - <unknown>
+ 17: 0x75ee2b945bf7 - __libc_start_call_main
+ 18: 0x75ee2b945cac - __libc_start_main@GLIBC_2.2.5
+ 19: 0x61399d6f4861 - <unknown>
+ 20: 0x0 - <unknown>
+```
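+
+The panic at regression.rs:216 is an index one past the end (len 20, index 20), which suggests an off-by-one between the number of individuals in the genotype file and in the traits file. A rough cross-check of the input dimensions (a sketch only: the column layouts assumed below must be verified against the actual files):
+
+```
+# individuals named in the .geno header row
+# (assumes the leading columns are Chr, Locus, cM, Mb)
+grep -m1 '^Chr' /home/frederick/genotype_files/genotype/genotype/BXD.geno \
+  | awk '{print NF - 4}'
+# value columns per trait (assumes one leading id column)
+awk 'NR == 1 {print NF - 1}' /tmp/traits_test_file_n2E7V06Cx7.txt
+```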
diff --git a/issues/genenetwork3/rqtl2-mapping-error.gmi b/issues/genenetwork3/rqtl2-mapping-error.gmi
new file mode 100644
index 0000000..480c7c6
--- /dev/null
+++ b/issues/genenetwork3/rqtl2-mapping-error.gmi
@@ -0,0 +1,42 @@
+# R/qtl2 Maps Error
+
+## Tags
+
+* type: bug
+* status: open
+* priority: high
+* assigned: alexm, zachs, fredm
+* keywords: R/qtl2, R/qtl2 Maps, gn3, genenetwork3, genenetwork 3
+
+## Reproduce
+
+* Go to https://genenetwork.org/
+* In the "Get Any" field, enter "synap*" and press the "Enter" key
+* In the search results, click on the "1435464_at" trait
+* Expand the "Mapping Tools" accordion section
+* Select the "R/qtl2" option
+* Click "Compute"
+* In the "Computing the Maps" page that results, click on "Display System Log"
+
+### Observed
+
+A traceback is observed, with an error of the following form:
+
+```
+⋮
+FileNotFoundError: [Errno 2] No such file or directory: '/opt/gn/tmp/gn3-tmpdir/JL9PvKm3OyKk.txt'
+```
+
+### Expected
+
+The mapping runs successfully and the results are presented in the form of a mapping chart/graph and a table of values.
+
+### Debug Notes
+
+The directory "/opt/gn/tmp/gn3-tmpdir/" exists, and is actually used successfully by other mappings (i.e. the "R/qtl" and "Pair Scan" mappings).
+
+This might imply a code issue: Perhaps
+* a path is hardcoded, or
+* the wrong path value is passed
+
+The same error occurs on https://cd.genenetwork.org but does not seem to prevent CD from running the mapping to completion. Maybe something is missing on production — what, though?
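+
+One way to check the hardcoded-path hypothesis is to search the code and the runtime environment (assuming a local checkout of genenetwork3):
+
+```
+grep -rn "gn3-tmpdir" --include='*.py' .
+env | grep -i tmp
+```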
diff --git a/issues/gn-uploader/AuthorisationError-gn-uploader.gmi b/issues/gn-uploader/AuthorisationError-gn-uploader.gmi
new file mode 100644
index 0000000..50a236d
--- /dev/null
+++ b/issues/gn-uploader/AuthorisationError-gn-uploader.gmi
@@ -0,0 +1,66 @@
+# AuthorisationError in gn uploader
+
+## Tags
+* assigned: fredm
+* status: open
+* priority: critical
+* type: error
+* keywords: authorisation, permission
+
+## Description
+
+While trying to create a population for the Kilifish dataset on the gn-uploader web page, the following error was encountered:
+```sh
+Traceback (most recent call last):
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
+ rv = self.dispatch_request()
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
+ return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/authorisation.py", line 23, in __is_session_valid__
+ return session.user_token().either(
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/pymonad/either.py", line 89, in either
+ return right_function(self.value)
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/authorisation.py", line 25, in <lambda>
+ lambda token: function(*args, **kwargs))
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/population/views.py", line 185, in create_population
+ ).either(
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/pymonad/either.py", line 91, in either
+ return left_function(self.monoid[0])
+ File "/gnu/store/wxb6rqf7125sb6xqd4kng44zf9yzsm5p-profile/lib/python3.10/site-packages/uploader/monadic_requests.py", line 99, in __fail__
+ raise Exception(_data)
+Exception: {'error': 'AuthorisationError', 'error-trace': 'Traceback (most recent call last):
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/flask/app.py", line 917, in full_dispatch_request
+ rv = self.dispatch_request()
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/flask/app.py", line 902, in dispatch_request
+ return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args) # type: ignore[no-any-return]
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/authlib/integrations/flask_oauth2/resource_protector.py", line 110, in decorated
+ return f(*args, **kwargs)
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/resources/inbredset/views.py", line 95, in create_population_resource
+ ).then(
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/pymonad/monad.py", line 152, in then
+ result = self.map(function)
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/pymonad/either.py", line 106, in map
+ return self.__class__(function(self.value), (None, True))
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/resources/inbredset/views.py", line 98, in <lambda>
+ "resource": create_resource(
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/resources/inbredset/models.py", line 25, in create_resource
+ return _create_resource(cursor,
+ File "/gnu/store/38iayxz7dgm86f2x76kfaa6gwicnnjg4-profile/lib/python3.10/site-packages/gn_auth/auth/authorisation/checks.py", line 56, in __authoriser__
+ raise AuthorisationError(error_description)
+gn_auth.auth.errors.AuthorisationError: Insufficient privileges to create a resource
+', 'error_description': 'Insufficient privileges to create a resource'}
+
+```
+The error above resulted from an attempt to submit the following information in the gn-uploader "create population" section:
+
+* Full Name: Kilifish F2 Intercross Lines
+* Name: KF2_Lines
+* Population code: KF2
+* Description: Kilifish second generation population
+* Family: Crosses, AIL, HS
+* Mapping Methods: GEMMA, QTLReaper, R/qtl
+* Genetic type: intercross
+
+Pressing the `Create Population` button led to the error above.
+
diff --git a/issues/gn-uploader/replace-redis-with-sqlite3.gmi b/issues/gn-uploader/replace-redis-with-sqlite3.gmi
new file mode 100644
index 0000000..3e5020a
--- /dev/null
+++ b/issues/gn-uploader/replace-redis-with-sqlite3.gmi
@@ -0,0 +1,17 @@
+# Replace Redis with SQL
+
+## Tags
+
+* status: open
+* priority: low
+* assigned: fredm
+* type: feature, feature-request, feature request
+* keywords: gn-uploader, uploader, redis, sqlite, sqlite3
+
+## Description
+
+We currently (as of 2024-06-27) use Redis for tracking any asynchronous jobs (e.g. QC on uploaded files).
+
+A lot of what we use Redis for can be done in one of the many SQL databases (we'll probably use SQLite3 anyway), which are more standardised and easier to migrate data from and to. This has the added advantage that we can open multiple connections to the database, enabling different processes to update the status and metadata of the same job consistently.
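+
+As a minimal sketch of the idea (a hypothetical schema, not the actual uploader code):
+
+```
+sqlite3 jobs.db <<'EOF'
+CREATE TABLE IF NOT EXISTS jobs (
+  job_id TEXT PRIMARY KEY,
+  status TEXT NOT NULL DEFAULT 'queued',
+  metadata TEXT
+);
+-- any process with its own connection can update a job's status
+UPDATE jobs SET status = 'running' WHERE job_id = 'example-job';
+EOF
+```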
+
+Changes done here can then be migrated to the other systems, i.e. GN2, GN3, and gn-auth, as necessary.
diff --git a/issues/gnqa/implement-no-login-requirement-for-gnqa.gmi b/issues/gnqa/implement-no-login-requirement-for-gnqa.gmi
new file mode 100644
index 0000000..9dcef53
--- /dev/null
+++ b/issues/gnqa/implement-no-login-requirement-for-gnqa.gmi
@@ -0,0 +1,20 @@
+# Implement No-Login Requirement for GNQA
+
+## Tags
+
+* type: feature
+* status: in progress
+* priority: medium
+* assigned: alexm
+* keywords: gnqa, user experience, authentication, login, llm
+
+## Description
+This feature will allow usage of LLM/GNQA features without requiring user authentication, while implementing measures to filter out bots.
+
+
+## Tasks
+
+* [x] If logged in: perform AI search with zero penalty
+* [ ] Add caching lifetime to save on token usage
+* [ ] Routes: check the referrer headers; if the previous search was not from the homepage, perform an AI search
+* [ ] If global search returns more than *n* results (*n = number*), perform an AI search
diff --git a/issues/production-container-mechanical-rob-failure.gmi b/issues/production-container-mechanical-rob-failure.gmi
index a32194e..ae6bae8 100644
--- a/issues/production-container-mechanical-rob-failure.gmi
+++ b/issues/production-container-mechanical-rob-failure.gmi
@@ -2,7 +2,7 @@
## Tags
-* status: open
+* status: closed, completed, fixed
* priority: high
* type: bug
* assigned: fredm
diff --git a/issues/systems/apps.gmi b/issues/systems/apps.gmi
index 51c9d24..b9d4155 100644
--- a/issues/systems/apps.gmi
+++ b/issues/systems/apps.gmi
@@ -153,7 +153,7 @@ downloading from http://cran.r-project.org/src/contrib/Archive/KernSmooth/KernSm
- 'configure' phasesha256 hash mismatch for /gnu/store/n05zjfhxl0iqx1jbw8i6vv1174zkj7ja-KernSmooth_2.23-17.tar.gz:
expected hash: 11g6b0q67vasxag6v9m4px33qqxpmnx47c73yv1dninv2pz76g9b
actual hash: 1ciaycyp79l5aj78gpmwsyx164zi5jc60mh84vxxzq4j7vlcdb5p
-hash mismatch for store item '/gnu/store/n05zjfhxl0iqx1jbw8i6vv1174zkj7ja-KernSmooth_2.23-17.tar.gz'
+ hash mismatch for store item '/gnu/store/n05zjfhxl0iqx1jbw8i6vv1174zkj7ja-KernSmooth_2.23-17.tar.gz'
```
Guix checks hashes, and it is not great that CRAN allows changing tarballs while keeping the same version number!! Luckily building with a more recent version of Guix just worked (TM). Now we create a root too:
@@ -184,12 +184,24 @@ and it looks like lines like these need to be updated:
=> https://github.com/genenetwork/singleCellRshiny/blob/6b2a344dd0d02f65228ad8c350bac0ced5850d05/app.R#L167
-Let me ask the author Siamak Yousefi.
+Let me ask the author Siamak Yousefi. I think we'll drop it.
+
+## longevity
+
+Package definition is at
+
+=> https://git.genenetwork.org/guix-bioinformatics/tree/gn/packages/mouse-longevity.scm
+
+Container is at
+
+=> https://git.genenetwork.org/guix-bioinformatics/tree/gn/services/bxd-power-container.scm
## jumpshiny
+Jumpshiny is hosted on balg01. Scripts are in tux02 git.
+
```
-balg01:~/gn-machines$ guix system container --network -L . -L ../guix-bioinformatics/ -L ../guix-past/modules/ --substitute-urls='https:
-//ci.guix.gnu.org https://bordeaux.guix.gnu.org https://cuirass.genenetwork.org' test-r-container.scm -L ../guix-forge/guix/
-/gnu/store/xyks73sf6pk78rvrwf45ik181v0zw8rx-run-container
+root@balg01:/home/j*/gn-machines# . /usr/local/guix-profiles/guix-pull/etc/profile
+guix system container --network -L . -L ../guix-forge/guix/ -L ../guix-bioinformatics/ -L ../guix-past/modules/ --substitute-urls='https://ci.guix.gnu.org https://bordeaux.guix.gnu.org https://cuirass.genenetwork.org' test-r-container.scm
+/gnu/store/6y65x5jk3lxy4yckssnl32yayjx9nwl5-run-container
```
diff --git a/issues/systems/octoraid-storage.gmi b/issues/systems/octoraid-storage.gmi
new file mode 100644
index 0000000..97e0e55
--- /dev/null
+++ b/issues/systems/octoraid-storage.gmi
@@ -0,0 +1,18 @@
+# OctoRAID
+
+We are building machines that can handle cheap drives.
+
+## octoraid01
+
+This is a Jetson with four 22 TB Seagate IronWolf Pro (ST22000NT001) enterprise NAS hard drives (7200 rpm).
+
+Unfortunately the stock kernel has no RAID support, so we simply mount the 4 drives individually (hosted on a USB-SATA bridge).
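+
+Something along these lines (device names, filesystem and mount point are assumptions):
+
+```
+mkfs.ext4 /dev/sda1
+mkdir -p /export/nfs/lair01
+mount /dev/sda1 /export/nfs/lair01
+```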
+
+Stress testing:
+
+```
+cd /export/nfs/lair01
+stress -v -d 1
+```
+
+Running on multiple disks, the Jetson is holding up well!
diff --git a/issues/systems/penguin2-raid5.gmi b/issues/systems/penguin2-raid5.gmi
new file mode 100644
index 0000000..f03075d
--- /dev/null
+++ b/issues/systems/penguin2-raid5.gmi
@@ -0,0 +1,61 @@
+# Penguin2 RAID 5
+
+## Tags
+
+* assigned: @fredm, @pjotrp
+* status: in progress
+
+## Description
+
+The current RAID contains 3 disks:
+
+```
+root@penguin2:~# cat /proc/mdstat
+md0 : active raid5 sdb1[1] sda1[0] sdg1[4]
+/dev/md0 33T 27T 4.2T 87% /export
+```
+
+using /dev/sda, /dev/sdb and /dev/sdg.
+
+The current root and swap is on
+
+```
+# root
+/dev/sdd1 393G 121G 252G 33% /
+# swap
+/dev/sdd5 partition 976M 76.5M -2
+```
+
+We can therefore add four new disks in slots /dev/sdc, /dev/sde, /dev/sdf and /dev/sdh.
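+
+A sketch of how the new RAID5 array could be created once the disks are partitioned (device names and /dev/md1 are assumptions; verify against lsblk before running):
+
+```
+mdadm --create /dev/md1 --level=5 --raid-devices=4 \
+  /dev/sdc1 /dev/sde1 /dev/sdf1 /dev/sdh1
+mkfs.ext4 /dev/md1
+mdadm --detail --scan >> /etc/mdadm/mdadm.conf
+```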
+
+penguin2 has no out-of-band management and no serial connector right now. That means any work needs to be done at the terminal.
+
+Boot loader menu:
+
+```
+menuentry 'Debian GNU/Linux' --class debian --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-7ff268df-cb90-4cbc-9d76-7fd6677b4964' {
+ load_video
+ insmod gzio
+ if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
+ insmod part_msdos
+ insmod ext2
+ set root='hd2,msdos1'
+ if [ x$feature_platform_search_hint = xy ]; then
+ search --no-floppy --fs-uuid --set=root --hint-bios=hd2,msdos1 --hint-efi=hd2,msdos1 --hint-baremetal=ahci2,msdos1 7ff268df-cb90-4cbc-9d76-7fd6677b4964
+ else
+ search --no-floppy --fs-uuid --set=root 7ff268df-cb90-4cbc-9d76-7fd6677b4964
+ fi
+ echo 'Loading Linux 5.10.0-18-amd64 ...'
+ linux /boot/vmlinuz-5.10.0-18-amd64 root=UUID=7ff268df-cb90-4cbc-9d76-7fd6677b4964 ro quiet
+ echo 'Loading initial ramdisk ...'
+ initrd /boot/initrd.img-5.10.0-18-amd64
+}
+```
+
+Added GRUB to the sdd MBR:
+
+```
+root@penguin2:~# grub-install /dev/sdd
+Installing for i386-pc platform.
+Installation finished. No error reported.
+```
diff --git a/issues/systems/tux02-production.gmi b/issues/systems/tux02-production.gmi
index 7de911f..d811c5e 100644
--- a/issues/systems/tux02-production.gmi
+++ b/issues/systems/tux02-production.gmi
@@ -14,9 +14,9 @@ We are going to move production to tux02 - tux01 will be the staging machine. Th
* [X] update guix guix-1.3.0-9.f743f20
* [X] set up nginx (Debian)
-* [X] test ipmi console (172.23.30.40)
+* [X] test ipmi console
* [X] test ports (nginx)
-* [?] set up network for external tux02e.uthsc.edu (128.169.4.52)
+* [?] set up network for external tux02
* [X] set up deployment evironment
* [X] sheepdog copy database backup from tux01 on a daily basis using ibackup user
* [X] same for GN2 production environment
diff --git a/issues/systems/tux04-disk-issues.gmi b/issues/systems/tux04-disk-issues.gmi
index 9bba105..bc6e1db 100644
--- a/issues/systems/tux04-disk-issues.gmi
+++ b/issues/systems/tux04-disk-issues.gmi
@@ -101,3 +101,280 @@ and nothing ;). Megacli is actually the tool to use
```
megacli -AdpAllInfo -aAll
```
+
+# Database
+
+During a backup the DB shows this error:
+
+```
+2025-03-02 06:28:33 Database page corruption detected at page 1079428, retrying...\n[01] 2025-03-02 06:29:33 Database page corruption detected at page 1103108, retrying...
+```
+
+Interestingly, the DB recovered on a second backup.
+
+The database is hosted on /dev/sde, a solid-state Dell Ent NVMe drive. The log says
+
+```
+kernel: I/O error, dev sde, sector 2136655448 op 0x0:(READ) flags 0x80700 phys_seg 40 prio class 2
+```
+
+Suggests:
+
+=> https://stackoverflow.com/questions/50312219/blk-update-request-i-o-error-dev-sda-sector-xxxxxxxxxxx
+
+> The errors that you see are interface errors, they are not coming from the disk itself but rather from the connection to it. It can be the cable or any of the ports in the connection.
+> Since the CRC errors on the drive do not increase I can only assume that the problem is on the receive side of the machine you use. You should check the cable and try a different SATA port on the server.
+
+and someone wrote
+
+> analyzed that most of the reasons are caused by intensive reading and writing. This is a CDN cache node. Type reading NVME temperature is relatively high, if it continues, it will start to throttle and then slowly collapse.
+
+and the temperature on that drive has been 70°C.
+
+The MariaDB log is showing errors:
+
+```
+2025-03-02 6:54:47 0 [ERROR] InnoDB: Failed to read page 449925 from file './db_webqtl/SnpAll.ibd': Page read from tablespace is corrupted.
+2025-03-02 7:01:43 489015 [ERROR] Got error 180 when reading table './db_webqtl/ProbeSetXRef'
+2025-03-02 8:10:32 489143 [ERROR] Got error 180 when reading table './db_webqtl/ProbeSetXRef'
+```
+
+Let's try and dump those tables when the backup is done.
+
+```
+mariadb-dump -uwebqtlout db_webqtl SnpAll
+mariadb-dump: Error 1030: Got error 1877 "Unknown error 1877" from storage engine InnoDB when dumping table `SnpAll` at row: 0
+mariadb-dump -uwebqtlout db_webqtl ProbeSetXRef > ProbeSetXRef.sql
+```
+
+Eeep:
+
+```
+tux04:/etc$ mariadb-check -uwebqtlout -c db_webqtl ProbeSetXRef
+db_webqtl.ProbeSetXRef
+Warning : InnoDB: Index ProbeSetFreezeId is marked as corrupted
+Warning : InnoDB: Index ProbeSetId is marked as corrupted
+error : Corrupt
+tux04:/etc$ mariadb-check -uwebqtlout -c db_webqtl SnpAll
+db_webqtl.SnpAll
+Warning : InnoDB: Index PRIMARY is marked as corrupted
+Warning : InnoDB: Index SnpName is marked as corrupted
+Warning : InnoDB: Index Rs is marked as corrupted
+Warning : InnoDB: Index Position is marked as corrupted
+Warning : InnoDB: Index Source is marked as corrupted
+error : Corrupt
+```
+
+On tux01 we have a working database, we can test with
+
+```
+mysqldump --no-data --all-databases > table_schema.sql
+mysqldump -uwebqtlout db_webqtl SnpAll > SnpAll.sql
+```
+
+Running the backup with rate limiting. From the journal:
+
+```
+Mar 02 17:09:59 tux04 sudo[548058]: pam_unix(sudo:session): session opened for user root(uid=0) by wrk(uid=1000)
+Mar 02 17:09:59 tux04 sudo[548058]: wrk : TTY=pts/3 ; PWD=/export3/local/home/wrk/iwrk/deploy/gn-deploy-servers/scripts/tux04 ; USER=roo>
+Mar 02 17:09:55 tux04 sudo[548058]: pam_unix(sudo:auth): authentication failure; logname=wrk uid=1000 euid=0 tty=/dev/pts/3 ruser=wrk rhost= >
+Mar 02 17:04:26 tux04 su[548006]: pam_unix(su:session): session opened for user ibackup(uid=1003) by wrk(uid=0)
+```
+
+Oh oh
+
+Tux04 is showing errors on all disks. We have to bail out. I am copying the potentially corrupted files to tux01 right now. We have backups, so nothing serious I hope. I am only worried about the MyISAM files we have, because they have no strong internal validation:
+
+```
+2025-03-04 8:32:45 502 [ERROR] db_webqtl.ProbeSetData: Record-count is not ok; is 5264578601 Should be: 5264580806
+2025-03-04 8:32:45 502 [Warning] db_webqtl.ProbeSetData: Found 28665 deleted space. Should be 0
+2025-03-04 8:32:45 502 [Warning] db_webqtl.ProbeSetData: Found 2205 deleted blocks Should be: 0
+2025-03-04 8:32:45 502 [ERROR] Got an error from thread_id=502, ./storage/myisam/ha_myisam.cc:1120
+2025-03-04 8:32:45 502 [ERROR] MariaDB thread id 502, OS thread handle 139625162532544, query id 837999 localhost webqtlout Checking table
+CHECK TABLE ProbeSetData
+2025-03-04 8:34:02 79695 [ERROR] mariadbd: Table './db_webqtl/ProbeSetData' is marked as crashed and should be repaired
+```
+
+See also
+
+=> https://dev.mysql.com/doc/refman/8.4/en/myisam-check.html
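+
+For the crashed MyISAM table a repair would typically look like this (run while mariadbd is stopped; the data directory path is an assumption):
+
+```
+myisamchk --check /var/lib/mysql/db_webqtl/ProbeSetData.MYI
+myisamchk --recover /var/lib/mysql/db_webqtl/ProbeSetData.MYI
+```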
+
+Tux04 will require open heart 'disk controller' surgery and some severe testing before we move back. We'll also look at tux05-8 to see if they have similar problems.
+
+## Recovery
+
+According to the logs, tux04 started showing serious errors on March 2nd, when I introduced sanitizing the mariadb backup:
+
+```
+Mar 02 05:00:42 tux04 kernel: I/O error, dev sde, sector 2071078320 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 2
+Mar 02 05:00:58 tux04 kernel: I/O error, dev sde, sector 2083650928 op 0x0:(READ) flags 0x80700 phys_seg 59 prio class 2
+...
+```
+
+The log started on Feb 23 when we had our last reboot. It probably is a good idea to turn on persistent logging!
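+
+Persistent journald logging only needs the log directory to exist (journald's default Storage=auto then keeps logs across reboots):
+
+```
+mkdir -p /var/log/journal
+systemctl restart systemd-journald
+```
+
+Anyway, it is likely the files were fine until March 2nd. Similarly, the mariadb logs also show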
+
+```
+2025-03-02 6:53:52 489007 [ERROR] mariadbd: Index for table './db_webqtl/ProbeSetData.MYI' is corrupt; try to repair it
+2025-03-02 6:53:52 489007 [ERROR] db_webqtl.ProbeSetData: Can't read key from filepos: 2269659136
+```
+
+So, if we can restore a backup from March 1st we should be reasonably confident it is sane.
+
+First, back up the existing database(!). Next, restore the new DB by changing the DB location (the symlink in /var/lib/mysql, as well as checking /etc/mysql/mariadb.cnf).
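+
+Sketched out (the borg repository location and archive name are illustrative, not the actual paths):
+
+```
+# keep the corrupted tree around for forensics
+mv /var/lib/mysql /var/lib/mysql.corrupt-2025-03-02
+# restore the March 1st archive and point /var/lib/mysql at it
+cd /export/restore && borg extract /export/backup/borg-mariadb::tux04-20250301 var/lib/mysql
+ln -s /export/restore/var/lib/mysql /var/lib/mysql
+```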
+
+When upgrading, it is a good idea to switch on these two settings in mariadb.cnf:
+
+```
+# forcing recovery with these two lines:
+innodb_force_recovery=3
+innodb_purge_threads=0
+```
+
+Make sure to disable these (and restart) once it is up and running!
+
+So the steps are:
+
+* [X] install updated guix version of mariadb in /usr/local/guix-profiles (don't use Debian!!)
+* [X] repair borg backup
+* [X] Stop old mariadb (on new host tux02)
+* [X] backup old mariadb database
+* [X] restore 'sane' version of DB from borg March 1st
+* [X] point to new DB in /var/lib/mysql and cnf file
+* [X] update systemd settings
+* [X] start mariadb new version with recovery setting in cnf
+* [X] check logs
+* [X] once running revert on recovery setting in cnf and restart
+
+OK, looks like we are in business again. In the next phase we need to validate files. Normal files can be checked with
+
+```
+find -type f \( -not -name "md5sum.txt" \) -exec md5sum '{}' \; > md5sum.txt
+```
+
+and compared with another set on a different server with
+
+```
+md5sum -c md5sum.txt
+```
+
+* [X] check genotype file directory - some MAGIC files missing on tux01
+
+gn-docs is a git repo, so that is easily checked
+
+* [X] check gn-docs and sync with master repo
+
+
+## Other servers
+
+```
+journalctl -r|grep -i "I/O error"|less
+# tux05
+Nov 18 02:19:55 tux05 kernel: XFS (sdc2): metadata I/O error in "xfs_da_read_buf+0xd9/0x130 [xfs]" at daddr 0x78 len 8 error 74
+Nov 05 14:36:32 tux05 kernel: blk_update_request: I/O error, dev sdb, sector 1993616 op 0x1:(WRITE) flags 0x0 phys_seg 35 prio class 0
+Jul 27 11:56:22 tux05 kernel: blk_update_request: I/O error, dev sdc, sector 55676616 op 0x0:(READ) flags 0x80700 phys_seg 26 prio class 0
+# tux06
+Apr 15 08:10:57 tux06 kernel: I/O error, dev sda, sector 21740352 op 0x1:(WRITE) flags 0x1000 phys_seg 4 prio class 2
+Dec 13 12:56:14 tux06 kernel: I/O error, dev sdb, sector 3910157327 op 0x9:(WRITE_ZEROES) flags 0x8000000 phys_seg 0 prio class 2
+# tux07
+Mar 27 08:00:11 tux07 mfschunkserver[1927469]: replication error: failed to create chunk (No space left)
+# tux08
+Mar 27 08:12:11 tux08 mfschunkserver[464794]: replication error: failed to create chunk (No space left)
+```
+
+Tux04, 05 and 06 show disk errors. Tux07 and Tux08 are overloaded with a full disk, but no other errors. We need to babysit Lizard more!
+
+```
+stress -v -d 1
+```
+
+Write test:
+
+```
+dd if=/dev/zero of=./test bs=512k count=2048 oflag=direct
+```
+
+Read test:
+
+```
+/sbin/sysctl -w vm.drop_caches=3
+dd if=./test of=/dev/zero bs=512k count=2048
+```
+
+Check SMART data behind the RAID controller (RAID Controller in SL 3: Dell PERC H755N Front):
+
+```
+smartctl -a /dev/sdd -d megaraid,0
+```
+
+# The story continues
+
+I don't know what happened, but the server gave a hard error in the logs:
+
+```
+racadm getsel # get system log
+Record: 340
+Date/Time: 05/31/2025 09:25:17
+Source: system
+Severity: Critical
+Description: A high-severity issue has occurred at the Power-On
+Self-Test (POST) phase which has resulted in the system BIOS to
+abruptly stop functioning.
+```
+
+Woops! I fixed it by resetting idrac and rebooting remotely. Nasty.
+
+Looking around I found this link
+
+=> https://tomaskalabis.com/wordpress/a-high-severity-issue-has-occurred-at-the-power-on-self-test-post-phase-which-has-resulted-in-the-system-bios-to-abruptly-stop-functioning/
+
+suggesting we should upgrade idrac firmware. I am not going to do that
+without backups and a fully up-to-date fallback online. It may fix the
+other hardware issues we have been seeing (who knows?).
+
+Fred, the boot sequence is not perfect yet. It turned out the network interfaces do not come up in the right order, and nginx failed because of a missing /var/run/nginx. The container would not restart because, with that directory missing, it could not check the certificates.
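+
+The missing /var/run/nginx can be handled with a tmpfiles.d entry, so it is recreated on every boot (ownership is an assumption):
+
+```
+echo 'd /run/nginx 0755 www-data www-data -' > /etc/tmpfiles.d/nginx.conf
+systemd-tmpfiles --create /etc/tmpfiles.d/nginx.conf
+```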
+
+## A week later
+
+```
+[SMM] APIC 0x00 S00:C00:T00 > ASSERT [AmdPlatformRasRsSmm] u:\EDK2\MdePkg\Library\BasePciSegmentLibPci\PciSegmentLib.c(766): ((Address) & (0xfffffffff0000000ULL | (3))) == 0 !!!! X64 Exception Type - 03(#BP - Breakpoint) CPU Apic ID - 00000000 !!!!
+RIP - 0000000076DA4343, CS - 0000000000000038, RFLAGS - 0000000000000002
+RAX - 0000000000000010, RCX - 00000000770D5B58, RDX - 00000000000002F8
+RBX - 0000000000000000, RSP - 0000000077773278, RBP - 0000000000000000
+RSI - 0000000000000087, RDI - 00000000777733E0 R8 - 00000000777731F8, R9 - 0000000000000000, R10 - 0000000000000000
+R11 - 00000000000000A0, R12 - 0000000000000000, R13 - 0000000000000000
+R14 - FFFFFFFFA0C1A118, R15 - 000000000005B000
+DS - 0000000000000020, ES - 0000000000000020, FS - 0000000000000020
+GS - 0000000000000020, SS - 0000000000000020
+CR0 - 0000000080010033, CR2 - 0000000015502000, CR3 - 0000000077749000
+CR4 - 0000000000001668, CR8 - 0000000000000001
+DR0 - 0000000000000000, DR1 - 0000000000000000, DR2 - 0000000000000000 DR3 - 0000000000000000, DR6 - 00000000FFFF0FF0, DR7 - 0000000000000400
+GDTR - 000000007773C000 000000000000004F, LDTR - 0000000000000000 IDTR - 0000000077761000 00000000000001FF, TR - 0000000000000040
+FXSAVE_STATE - 0000000077772ED0
+!!!! Find image based on IP(0x76DA4343) u:\Build_Genoa\DellBrazosPkg\DEBUG_MYTOOLS\X64\DellPkgs\DellChipsetPkgs\AmdGenoaModulePkg\Override\AmdCpmPkg\Features\PlatformRas\Rs\Smm\AmdPlatformRasRsSmm\DEBUG\AmdPlatformRasRsSmm.pdb (ImageBase=0000000076D3E000, EntryPoint=0000000076D3E6C0) !!!!
+```
+
+New error in system log:
+
+```
+Record: 341
+Date/Time: 06/04/2025 19:47:08
+Source: system
+Severity: Critical
+Description: A high-severity issue has occurred at the Power-On Self-Test (POST) phase which has resulted in the system BIOS to abruptly stop functioning.
+```
+
+The error appears to relate to AMD Brazos, which is probably part of the onboard APU/GPU.
+
+The code where it segfaulted is online at:
+
+=> https://github.com/tianocore/edk2/blame/master/MdePkg/Library/BasePciSegmentLibPci/PciSegmentLib.c
+
+and has to do with PCI registers; the fault may actually have been caused by the new PCIe card we added.
diff --git a/issues/systems/tux04-production.gmi b/issues/systems/tux04-production.gmi
index 01e1638..58ff8c1 100644
--- a/issues/systems/tux04-production.gmi
+++ b/issues/systems/tux04-production.gmi
@@ -6,6 +6,10 @@ Lately we have been running production on tux04. Unfortunately Debian got broken
and that is alarming. We might as well try an upgrade. I created a new partition on /dev/sda4 using debootstrap.
+The hardware RAID has proven unreliable on this machine (and perhaps others).
+
+We added a drive on a PCIe riser outside the RAID. Use this for bulk data copying. We still bootstrap from the RAID.
+
Luckily not too much is running on this machine and if we mount things again, most should work.
# Tasks