# Implementing Efficient Database Caching for Query Responses in the GN-LLM System


## Tags:

* assigned: alexm,shelby
* keywords: llm,caching,database,query,response
* type: enhancement

* status: closed, done, completed


## Description:

This implementation task aims to enhance the performance and responsiveness of our GN-LLM (Large Language Model) system by incorporating a robust database caching mechanism. The focus is on using a database to store queries together with their corresponding answers and references, so that repeated queries can be served quickly and the computational load is reduced. Users can also revisit earlier searches and see the results they were given at that time.
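
As a rough illustration of the idea (not the actual GN3 implementation; the table, column, and function names below are assumptions), the cache could be a small SQLite table keyed on a hash of the query, storing the answer and its references as JSON:

```python
# Hypothetical sketch of a query-response cache backed by SQLite.
# Table and function names are illustrative, not the genenetwork3 code.
import hashlib
import json
import sqlite3
import time

SCHEMA = """
CREATE TABLE IF NOT EXISTS llm_cache (
    query_hash TEXT PRIMARY KEY,
    user_id    TEXT NOT NULL,
    query      TEXT NOT NULL,
    answer     TEXT NOT NULL,
    refs       TEXT NOT NULL,   -- JSON-encoded list of references
    created_at REAL NOT NULL
)
"""

def init_cache(conn: sqlite3.Connection) -> None:
    """Create the cache table if it does not exist yet."""
    conn.execute(SCHEMA)

def cache_key(query: str) -> str:
    """Normalise the query and hash it so lookups are exact-match."""
    return hashlib.sha256(query.strip().lower().encode()).hexdigest()

def save_response(conn: sqlite3.Connection, user_id: str, query: str,
                  answer: str, refs: list) -> None:
    """Store a query together with its answer and references."""
    conn.execute(
        "INSERT OR REPLACE INTO llm_cache VALUES (?, ?, ?, ?, ?, ?)",
        (cache_key(query), user_id, query, answer,
         json.dumps(refs), time.time()))
    conn.commit()

def lookup_response(conn: sqlite3.Connection, query: str):
    """Return (answer, refs) if the query was seen before, else None."""
    row = conn.execute(
        "SELECT answer, refs FROM llm_cache WHERE query_hash = ?",
        (cache_key(query),)).fetchone()
    return (row[0], json.loads(row[1])) if row else None
```

Hashing the normalised query keeps lookups exact-match and cheap; a real implementation would also decide on an expiry or invalidation policy.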




## Task

* [x] Implement an endpoint for caching user queries and responses

* [x] Implement a UI for GNQA search

* [x] Add more customization features, e.g. allowing users to clear their search history (see the sketch below)
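
As a rough sketch of the last two items (the routes, blueprint, and helper names are assumptions, not the actual genenetwork3 API), a per-user history endpoint built on the llm_cache table sketched above might look like:

```python
# Hypothetical Flask blueprint for per-user search history; route and
# helper names are assumptions, not the actual genenetwork3 endpoints.
import json
import sqlite3
from flask import Blueprint, jsonify

DB_PATH = "gnqa_cache.db"  # illustrative path

history = Blueprint("llm_history", __name__)

def get_db_connection() -> sqlite3.Connection:
    return sqlite3.connect(DB_PATH)

@history.route("/api/llm/history/<user_id>", methods=["GET"])
def get_history(user_id):
    """Return all cached query/answer pairs for a user, newest first."""
    conn = get_db_connection()
    rows = conn.execute(
        "SELECT query, answer, refs, created_at FROM llm_cache "
        "WHERE user_id = ? ORDER BY created_at DESC", (user_id,)).fetchall()
    return jsonify([{"query": q, "answer": a,
                     "references": json.loads(r), "created_at": t}
                    for q, a, r, t in rows])

@history.route("/api/llm/history/<user_id>", methods=["DELETE"])
def clear_history(user_id):
    """Let a user clear their own cached search history."""
    conn = get_db_connection()
    conn.execute("DELETE FROM llm_cache WHERE user_id = ?", (user_id,))
    conn.commit()
    return jsonify({"status": "cleared"})
```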

See main issue:

=> topics/lmms/llm-metadata

## Commits

For the related commits, see:

=> https://github.com/genenetwork/genenetwork3/pull/165