Diffstat (limited to 'gnqa/paper2_eval/README.md')
-rw-r--r-- gnqa/paper2_eval/README.md | 44
1 file changed, 43 insertions(+), 1 deletion(-)
diff --git a/gnqa/paper2_eval/README.md b/gnqa/paper2_eval/README.md
index 13cb1130..d5c7e4e6 100644
--- a/gnqa/paper2_eval/README.md
+++ b/gnqa/paper2_eval/README.md
@@ -1,6 +1,48 @@
-# Paper 2 Evaluation
+# Study Evaluation
This directory contains the code created to evaluate questions submitted to GNQA.
Unlike the evaluation in paper 1, this work uses different LLMs and a different RAG engine.
RAGAS is still used to evaluate the queries.
+
+The RAG engine being used is [R2R](https://github.com/SciPhi-AI/R2R). It is open source and performs similarly to the engine we used for our first GNQA paper.
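+
+As a rough sketch, querying R2R from Python looks something like the following. The client method names and response shape are assumptions based on R2R's Python SDK and may differ between versions; consult the R2R documentation for the exact API.
+
+```python
+from r2r import R2RClient
+
+# Connect to a locally running R2R server (default port per R2R docs).
+client = R2RClient("http://localhost:7272")
+
+# Ask a single question through the RAG pipeline and inspect the answer.
+# Method name is an assumption; newer SDKs may expose it differently.
+response = client.rag(query="What is known about the genetics of aging?")
+print(response)
+```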
+
+The evaluation workflow is organized around questions that are classified along two sets of categories: category 1, who asked the question, and category 2, the field to which the question belongs.
+In our initial work, category 1 consists of citizen scientists and domain experts,
+while category 2 consists of three fields or specializations: GeneNetwork.org systems genetics, the genetics of diabetes, and the genetics of aging.
+
+We will make the code more configurable by pulling the categories out of the source code and keeping them strictly in settings files.
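+
+For example, the categories could live in a small settings file that the scripts read at startup. The file name and schema below are illustrative, not the repo's current layout:
+
+```python
+import json
+
+# Hypothetical settings.json (schema is an assumption):
+# {
+#   "category_1": ["citizen_scientist", "domain_expert"],
+#   "category_2": ["systems_genetics", "diabetes", "aging"]
+# }
+with open("settings.json") as fh:
+    settings = json.load(fh)
+
+who_asked = settings["category_1"]   # category 1: who asked the questions
+fields = settings["category_2"]      # category 2: the question's field
+```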
+
+It is best to define a structure for your different types of data: question lists, responses, datasets, and scores.
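+
+The commands in the sections below assume a `data/` tree roughly like the one sketched here (the directory names are taken from those commands, which are not fully consistent about singular vs. plural, e.g. `list` vs. `lists`):
+
+```
+data/
+├── list/       # question lists
+├── responses/  # raw RAG responses
+├── dataset/    # RAGAS-ready datasets (with an intermediate_files/ subdirectory)
+└── scores/     # evaluation scores
+```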
+
+## Tasks
+
+1. Create list(s) of questions (not automated)
+2. Run question list through RAG (automated)
+3. Save responses (automated)
+4. Create datasets from responses (automated)
+5. Run datasets through evaluator to get scores (not automated)
+6. Create plots of scores (not automated)
+
+## Covering the tasks
+
+*ID refers to the task number from the previous section*
+
+| ID | File Operator | From directory | To directory | Command |
+|:--|:---:|:---:|:---:|:--|
+| 2 | run_questions | list | responses | `python run_questions.py ../data/list/catA_question_list.json ../data/responses/resp_catA_catB.json` |
+| 3 | parse_r2r_result | responses | dataset | `python parse_r2r_result.py ../data/responses/resp_catA_catB.json ../data/dataset/intermediate_files/catA_catB_.json` |
+| 4 | create_dataset | list | dataset | `python create_dataset.py ../data/lists/list_catA_catB.json ../data/dataset/catA_catB.json` |
+| 5 | ragas_eval | dataset | scores | `python3 ragas_eval.py ../data/datasets/catA/catB_1.json ../data/scores/catA/catB_1.json 3` (the trailing `3` runs the evaluation 3 times) |
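+
+For task 5, the dataset handed to RAGAS needs the usual question/answer/contexts/ground-truth fields. A minimal sketch of one evaluation pass, assuming a recent RAGAS release (older versions named the last column `ground_truths` and took a list of strings):
+
+```python
+from datasets import Dataset
+from ragas import evaluate
+from ragas.metrics import answer_relevancy, context_precision, context_recall, faithfulness
+
+# Toy single-row dataset; real rows come from the files under ../data/dataset/.
+rows = {
+    "question": ["What is GeneNetwork.org?"],
+    "answer": ["GeneNetwork.org is a web resource for systems genetics."],
+    "contexts": [["GeneNetwork.org hosts genotype and phenotype data for many species."]],
+    "ground_truth": ["GeneNetwork.org is a systems genetics web service."],
+}
+dataset = Dataset.from_dict(rows)
+
+# One evaluation pass; ragas_eval.py repeats this for the requested number of runs.
+scores = evaluate(dataset, metrics=[faithfulness, answer_relevancy, context_precision, context_recall])
+print(scores)
+```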
+ \ No newline at end of file