author     Pjotr Prins  2023-06-14 21:03:20 -0500
committer  Pjotr Prins  2023-06-16 13:55:22 -0500
commit     c910f591854970edad235dfb1315d708412edee9 (patch)
tree       d68fd1b75c603fffe9c5bf1c42df4984701e75e7
parent     d4c388b4c287f10c5be47493a572e3237733d8ac (diff)
download   gn-gemtext-c910f591854970edad235dfb1315d708412edee9.tar.gz
Started design of split front-end
-rw-r--r--  issues/design/branding-front.gmi                           28
-rw-r--r--  tasks/pjotrp.gmi                                            1
-rw-r--r--  topics/systems/mariadb/precompute-mapping-input-data.gmi    4
3 files changed, 31 insertions, 2 deletions
diff --git a/issues/design/branding-front.gmi b/issues/design/branding-front.gmi
new file mode 100644
index 0000000..0131b62
--- /dev/null
+++ b/issues/design/branding-front.gmi
@@ -0,0 +1,28 @@
+# Branding and front-end-services
+
+The original GN2 is a monolithic Flask server. In the next phase we will allow for rebranding and for running multiple servers of GN2 services. A community website can be created in this way with its own 'portal', dedicated search and interactive GN tooling.
+To make it even more fun we will also allow for using different platforms.
+I.e., a portal can be written in any language, with dedicated GN services injected as htmx, divs or similar. Does this make GN more complicated? I argue not. It will be fun to work in non-Python languages even though core functionality will be Python. The other way around should work too - if we write further htmx it may come from other sources.
+
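+As a sketch of this injection idea - the route name and host below are made up for illustration and are not existing GN2 endpoints - a portal written in any language only needs to emit an htmx attribute pointing at a GN fragment service; the service answers with a ready-made HTML snippet.
+
+```python
+# Hypothetical GN fragment service plus the htmx snippet a portal would embed.
+# The /fragment/search route and gn.example.org host are placeholders.
+from flask import Flask
+
+app = Flask(__name__)
+
+# A portal page (written in any language) would embed something like:
+#   <script src="https://unpkg.com/htmx.org"></script>
+#   <div hx-get="https://gn.example.org/fragment/search"
+#        hx-trigger="load" hx-swap="innerHTML"></div>
+
+@app.route("/fragment/search")
+def search_fragment():
+    # Return a self-contained snippet the portal drops straight into its page.
+    return """
+      <form action="https://gn.example.org/search" method="get">
+        <input name="terms" placeholder="Search GeneNetwork"/>
+        <button type="submit">Search</button>
+      </form>
+    """
+
+if __name__ == "__main__":
+    app.run(port=8081)
+```
+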
+The first step is to replace headers and footers with something branded. This means some of the current header and footer content is specific to the main service and can be generalised with a 'src=default' or 'src=portal-gn4msk' flag - or similar.
+Based on this we can split the templates so that they read specific headers/footers and feed those in from other endpoints. Flask is already path-based, so we need another service to generate the necessary HTML. Note that these servers may be single threaded in principle. Flask kicks off backend work and needs to be multithreaded for that, but simple HTML services will be fine on a single thread and are easier to debug that way too. GN does not have thousands of users operating at the same time; its main challenge is running long operations while still being able to serve others.
+
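+A minimal sketch of such a single-threaded fragment server follows; the /fragment/header and /fragment/footer routes, the 'src' handling and the branding strings are placeholders for illustration, not existing GN2 code.
+
+```python
+# Hypothetical branded header/footer fragment server (single threaded).
+# Route names, the 'src' parameter and the BRANDS table are placeholders.
+from flask import Flask, request
+
+app = Flask(__name__)
+
+BRANDS = {
+    "default": "<header><h1>GeneNetwork</h1></header>",
+    "portal-gn4msk": "<header><h1>GN4MSK community portal</h1></header>",
+}
+
+@app.route("/fragment/header")
+def header():
+    # 'src' selects the branding, falling back to the main GN2 look.
+    src = request.args.get("src", "default")
+    return BRANDS.get(src, BRANDS["default"])
+
+@app.route("/fragment/footer")
+def footer():
+    return "<footer>Powered by GeneNetwork</footer>"
+
+if __name__ == "__main__":
+    # A single thread is enough for serving small HTML fragments.
+    app.run(port=8082, threaded=False)
+```
+
+The GN2 templates (or client-side htmx) could then pull these fragments in per request instead of hard-coding the current header and footer.
+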
+The ultimate goal is increased flexibility in creating front-ends and portals. Typically these GN services will be running off a single machine.
+
+# Tasks
+
+* [ ] Create community server for GN4MSK
+* + [ ] Fire up parallel page server
+* + [ ] Handle headers and footers
+* + [ ] Create portal page
+* + [ ] Show specific datasets
+* + [ ] Show specific publications
+* + [ ] Show specific Twitter feed
+
+# Tags
+
+* assigned: pjotrp
+* type: feature
+* priority: high
+* status: in progress
+* keywords: website, branding
diff --git a/tasks/pjotrp.gmi b/tasks/pjotrp.gmi
index b946fdf..4313873 100644
--- a/tasks/pjotrp.gmi
+++ b/tasks/pjotrp.gmi
@@ -19,6 +19,7 @@ The tasks here should probably be broken out into appropriately tagged issues, w
Now
* [ ] Set up stable GeneNetwork server instance with new hardware (see below)
+* [ ] GN4MSK rebranding - see issue
* [ ] Drive for stability of GN tools (particularly GEMMA OOMP)
* [ ] GEMMA batch run and precompute
* [ ] GEMMA/bulklmm speedups
diff --git a/topics/systems/mariadb/precompute-mapping-input-data.gmi b/topics/systems/mariadb/precompute-mapping-input-data.gmi
index d26a97a..6329667 100644
--- a/topics/systems/mariadb/precompute-mapping-input-data.gmi
+++ b/topics/systems/mariadb/precompute-mapping-input-data.gmi
@@ -15,12 +15,12 @@ GN relies on precomputed mapping scores for search and other functionality. Here
* [ ] Start using GEMMA for precomputed values as a background pipeline on a different machine
* [ ] Update the table values using GEMMA output (single highest score)
-Above is the quick win for plugging in GEMMA value. We will make sure not to recompute the values that are already up to date.
+Above is the quick win for plugging in GEMMA values. We will make sure not to recompute the values that are already up to date.
Next:
-* [ ] Track metadata of computed datasets in RDF
* [ ] Store all GEMMA values efficiently
+* [ ] Track metadata of computed datasets (in RDF?)
* [ ] Compute significance with GEMMA or other LMM (bulkLMM?)
* [ ] Store significance and significant values for processing
* [ ] Update search & correlations to use these
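+
+As a rough sketch of the "single highest score" step above (assuming GEMMA's univariate LMM *.assoc.txt output with rs and p_wald columns; the file path and the final table update are placeholders):
+
+```python
+# Hypothetical helper: pick the single strongest association from a GEMMA run.
+# Assumes the tab-separated *.assoc.txt written by 'gemma -lmm'.
+import csv
+import math
+
+def highest_score(assoc_path):
+    best = None  # (marker, -log10(p)) of the strongest association
+    with open(assoc_path) as fh:
+        for row in csv.DictReader(fh, delimiter="\t"):
+            p = float(row["p_wald"])
+            if p > 0:
+                score = -math.log10(p)
+                if best is None or score > best[1]:
+                    best = (row["rs"], score)
+    return best
+
+# marker, score = highest_score("output/some_trait.assoc.txt")  # path is illustrative
+# ...then write 'score' back to the relevant database table (placeholder step).
+```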