path: root/issues/gn_llm_db_cache_integration.gmi
# Implementing Efficient Database Caching for Query Responses in the GN-LLM System


## Tags:

* assigned: alexm,shelby
* keywords: llm,caching,database,query,response
* type: enhancement

* status: in progress


## Description:

This task aims to improve the performance and responsiveness of the GN-LLM (Large Language Model) system by adding a robust database caching layer. Queries will be stored together with their corresponding answers and references, so repeated questions can be answered from the cache rather than by re-running the model, reducing both latency and computational load. The stored history also lets users revisit past searches and see the results they received at the time.
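
A minimal sketch of such a cache, using SQLite; the table name, column names, normalisation step, and database path are assumptions for illustration, not the actual GN schema:

```python
import hashlib
import json
import sqlite3
import time

DB_PATH = "llm_cache.db"  # hypothetical location of the cache database

def _connect():
    # Open the cache database, creating the table on first use.
    con = sqlite3.connect(DB_PATH)
    con.execute(
        """CREATE TABLE IF NOT EXISTS query_cache (
               query_hash TEXT PRIMARY KEY,
               user_id    TEXT,
               query      TEXT,
               answer     TEXT,
               refs       TEXT,   -- JSON-encoded list of references
               created_at REAL
           )"""
    )
    return con

def _key(query):
    # Normalise the query so trivially different phrasings hit the same row.
    return hashlib.sha256(query.strip().lower().encode()).hexdigest()

def cache_answer(user_id, query, answer, references):
    """Store a query together with its answer and references."""
    with _connect() as con:
        con.execute(
            "INSERT OR REPLACE INTO query_cache VALUES (?, ?, ?, ?, ?, ?)",
            (_key(query), user_id, query, answer,
             json.dumps(references), time.time()),
        )

def lookup_answer(query):
    """Return (answer, references) if the query was seen before, else None."""
    with _connect() as con:
        row = con.execute(
            "SELECT answer, refs FROM query_cache WHERE query_hash = ?",
            (_key(query),),
        ).fetchone()
    return (row[0], json.loads(row[1])) if row else None
```

Keeping `user_id` and `created_at` alongside each row is what makes the per-user, time-stamped search history described above possible.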




## Task

* [x] implement endpoint for user caching

* [x] implement UI for GNQA search

* [ ] more customization features, e.g. letting users clear their history
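
The remaining history-clearing item could amount to a simple per-user deletion against the cache table; the schema and names below are illustrative, not the actual implementation:

```python
import sqlite3

def clear_history(con, user_id):
    """Delete all cached queries belonging to one user; returns rows removed."""
    cur = con.execute("DELETE FROM query_cache WHERE user_id = ?", (user_id,))
    con.commit()
    return cur.rowcount

# Example with an in-memory database and a toy schema:
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE query_cache (user_id TEXT, query TEXT, answer TEXT)")
con.execute("INSERT INTO query_cache VALUES ('alice', 'q1', 'a1')")
con.execute("INSERT INTO query_cache VALUES ('alice', 'q2', 'a2')")
con.execute("INSERT INTO query_cache VALUES ('bob', 'q3', 'a3')")
```

Exposing this behind an authenticated endpoint would keep one user's deletion from touching another user's history.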

See main issue:

=> topics/lmms/llm-metadata