diff --git a/.venv/lib/python3.12/site-packages/litellm/batch_completion/Readme.md b/.venv/lib/python3.12/site-packages/litellm/batch_completion/Readme.md
new file mode 100644
index 00000000..23cc8712
--- /dev/null
+++ b/.venv/lib/python3.12/site-packages/litellm/batch_completion/Readme.md
@@ -0,0 +1,11 @@
+# Implementation of `litellm.batch_completion`, `litellm.batch_completion_models`, `litellm.batch_completion_models_all_responses`
+
+Doc: https://docs.litellm.ai/docs/completion/batching
+
+
+The LiteLLM Python SDK allows you to:
+1. `litellm.batch_completion`: send a batch of `litellm.completion` requests to a single model.
+2. `litellm.batch_completion_models`: send one request to multiple language models concurrently
+   and return the response as soon as one of the models responds.
+3. `litellm.batch_completion_models_all_responses`: send one request to multiple language models
+   concurrently and return a list of responses from all of them. \ No newline at end of file
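
The two concurrent helpers above reduce to a "first response wins" versus "gather all responses" pattern. A minimal stdlib sketch of that pattern, using a stand-in `call_model` function rather than the real LiteLLM API (model names and return values here are illustrative assumptions):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def call_model(model: str, prompt: str) -> str:
    # Stand-in for a litellm.completion call; just echoes the inputs.
    return f"{model}: {prompt}"

def first_response(models: list[str], prompt: str) -> str:
    # batch_completion_models-style: fan out to every model concurrently,
    # return the first result that completes.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(call_model, m, prompt) for m in models]
        return next(as_completed(futures)).result()

def all_responses(models: list[str], prompt: str) -> list[str]:
    # batch_completion_models_all_responses-style: fan out concurrently,
    # wait for every model and collect all results in submission order.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(call_model, m, prompt) for m in models]
        return [f.result() for f in futures]
```

This is only a sketch of the concurrency shape; the real helpers accept `messages` lists and return LiteLLM response objects, as described in the linked doc.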