authorJohn Nduli2024-06-21 19:11:54 +0300
committerBonfaceKilz2024-06-21 19:51:24 +0300
commit67ac02bbcc78cdcb4f3695918b35ccac58fa36f7 (patch)
treea819b356395ea62e9e3e22c8529696e9b841335d
parent84b69e5d2fc88b8ff899678724db0b8aa8c2241a (diff)
downloadgn-gemtext-67ac02bbcc78cdcb4f3695918b35ccac58fa36f7.tar.gz
docs: add section for sigterm and python processes
-rw-r--r-- topics/meetings/jnduli_bmunyoki.gmi | 3
1 file changed, 3 insertions(+), 0 deletions(-)
diff --git a/topics/meetings/jnduli_bmunyoki.gmi b/topics/meetings/jnduli_bmunyoki.gmi
index 382bd7e..5af7221 100644
--- a/topics/meetings/jnduli_bmunyoki.gmi
+++ b/topics/meetings/jnduli_bmunyoki.gmi
@@ -27,6 +27,9 @@ How do we prevent something similar from happening in the future?
* Reduce the number of gunicorn processes to reduce their memory footprint. How did we end up with the number of processes we currently have? What impact will reducing this have on our users?
* Attempt to get an estimated memory footprint for `index-genenetwork` and use this to determine when it is safe to run the script. This could even be integrated into the cron job.
* Create an alerting mechanism with sane thresholds that sends to a common channel/framework, e.g. when CPU usage > 90%, memory usage > 90%, etc. This allows someone to be on the lookout in case drastic action needs to be taken.
+* Python doesn't kill child processes when a SIGTERM is received. This means that during testing we were creating more and more orphaned processes. Investigate how to propagate the SIGTERM signal to all child processes.
+
+=> https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Process.terminate
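The propagation idea above could be sketched roughly as follows. This is a minimal illustration, not code from the repository: the `worker` function, the `run_with_propagation` helper, and the handler are all hypothetical. It relies on the documented behaviour that, on POSIX systems, `multiprocessing.Process.terminate()` sends SIGTERM to the child.

```python
import multiprocessing
import signal
import sys
import time


def worker():
    # Hypothetical child workload: sleep until terminated.
    while True:
        time.sleep(1)


def run_with_propagation(num_children=3):
    children = [multiprocessing.Process(target=worker) for _ in range(num_children)]
    for child in children:
        child.start()

    def forward_sigterm(signum, frame):
        # Forward termination to every child before exiting the parent,
        # so no child is left orphaned when the parent is killed.
        for child in children:
            child.terminate()  # delivers SIGTERM to the child on POSIX
        for child in children:
            child.join()
        sys.exit(0)

    signal.signal(signal.SIGTERM, forward_sigterm)
    for child in children:
        child.join()


if __name__ == "__main__":
    run_with_propagation()
```

An alternative is to mark children as daemonic (`Process(target=worker, daemon=True)`), which makes the parent terminate them on exit, though that only helps when the parent shuts down cleanly rather than being force-killed.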