4 LLM Backends

 (require llm/base) package: llm-lib

Internal tools for developing new backends.

procedure

(base-send-prompt! uri    
  headers    
  json    
  cost-base-info    
  inference-cost-maker)  void?
  uri : string?
  headers : headers/c
  json : jsexpr?
  cost-base-info : model-cost-info?
  inference-cost-maker : (-> response? inference-cost-info?)
A wrapper around post used to send a prompt to the backend at uri, using the provided headers and json payload.

The wrapper records some debug and cost-logging information. inference-cost-maker receives the response and should build an inference-cost-info for use in the cost-log-entry for this prompt.
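
For illustration, an inference-cost-maker is just a one-argument procedure over the response. Here is a hedged sketch: response-json is from net/http-easy, but the constructor arity and field names for inference-cost-info below are assumptions, not the documented API.

  ; A hypothetical inference-cost-maker (field names are assumed):
  (define (my-inference-cost-maker resp)
    (define body (response-json resp)) ; parse the response body as JSON
    (inference-cost-info
     (hash-ref body 'prompt_tokens 0)       ; tokens consumed by the prompt
     (hash-ref body 'completion_tokens 0))) ; tokens in the completion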

procedure

(cached-send-prompt! uri    
  headers    
  json    
  cost-base-info    
  inference-cost-maker    
  prompt-key)  void?
  uri : string?
  headers : headers/c
  json : jsexpr?
  cost-base-info : model-cost-info?
  inference-cost-maker : (-> response? inference-cost-info?)
  prompt-key : string?
A wrapper around base-send-prompt! that uses with-cache to cache the response. New backends should use this by default. The prompt-key should uniquely identify the prompt and its response; using the prompt text directly works. This key, together with other values relevant to the query, is used to compute the cache key for the response. All other parameters are the same as for base-send-prompt!.
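
As a sketch, a new backend might wrap cached-send-prompt! as below. The endpoint, the header and payload shapes, and my-model-cost-info are assumptions for illustration (my-inference-cost-maker is the sketch above); base-send-prompt! takes the same arguments, minus the final prompt-key.

  (require llm/base)

  ; A hypothetical backend built on cached-send-prompt!:
  (define (my-model-send-prompt! prompt)
    (cached-send-prompt!
     "http://localhost:8080/api/generate"      ; hypothetical endpoint
     (hasheq 'content-type "application/json") ; request headers (assumed shape)
     (hasheq 'prompt prompt)                   ; jsexpr? payload
     my-model-cost-info                        ; a model-cost-info? for this model
     my-inference-cost-maker                   ; builds the inference-cost-info
     prompt))                                  ; prompt text doubles as the cache key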

4.1 OpenAI

Some OpenAI models are supported in the collection llm/openai. Not much of the API is supported at this time.

 (require llm/openai/config) package: llm-lib

parameter

(OPENAI_API_KEY)  string?

(OPENAI_API_KEY key)  void?
  key : string?
 = (getenv "OPENAI_API_KEY")
A parameter defining the API key used when calling the OpenAI API. By default, the key is read from the OPENAI_API_KEY environment variable.
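
For example, the key can be set for a dynamic extent with parameterize instead of through the environment (the key below is a placeholder):

  (require llm/openai/config)

  (parameterize ([OPENAI_API_KEY "sk-placeholder"])
    ; calls to OpenAI backends in this extent use the overridden key
    (displayln (OPENAI_API_KEY)))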

4.1.1 GPT 3.5 Turbo

 (require llm/openai/gpt3-5) package: llm-lib

Sets the current-send-prompt! to gpt3-5-send-prompt! when visited.

procedure

(gpt3-5-send-prompt! prompt)  string?

  prompt : string?
Sends the prompt prompt to the OpenAI API using the GPT 3.5 Turbo model, and returns the first choice in the response messages.
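
A usage sketch; the reply text depends on the model, and OPENAI_API_KEY must be set:

  (require llm/openai/gpt3-5)

  ; Returns the model's reply as a string.
  (gpt3-5-send-prompt! "Write one sentence about Racket.")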

4.1.2 GPT 4o Mini

 (require llm/openai/gpt4o-mini) package: llm-lib

Sets the current-send-prompt! to gpt4o-mini-send-prompt! when visited.

procedure

(gpt4-add-image! type base64-data)  void?

  type : (or/c 'png 'jpeg)
  base64-data : string?
Prepends a base64-encoded image, either a PNG or JPEG, to the list of images sent with the next prompt. The prompt may refer to the images and rely on the order in which they were added.

Images can make prompts very expensive.

procedure

(gpt4o-mini-send-prompt! prompt)  string?

  prompt : string?
Sends the prompt prompt to the OpenAI API using the GPT 4o Mini model, and returns the first choice in the response messages.
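
A sketch combining gpt4-add-image! with gpt4o-mini-send-prompt!. The file name is hypothetical; base64-encode is from net/base64, and passing #"" suppresses line breaks in the encoded output:

  (require llm/openai/gpt4o-mini net/base64 racket/file)

  ; Attach a local PNG, then ask about it.
  (gpt4-add-image!
   'png
   (bytes->string/utf-8
    (base64-encode (file->bytes "diagram.png") #"")))
  (gpt4o-mini-send-prompt! "Describe the attached image.")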

4.2 Ollama

Ollama is a platform for distributing, building, and running models locally. Several Ollama models are supported in the collection llm/ollama.

Its API is documented at https://github.com/ollama/ollama/blob/main/docs/api.md. Not much of the API is supported at this time.

4.2.1 Phi3

 (require llm/ollama/phi3) package: llm-lib

Sets the current-send-prompt! to phi3-send-prompt! when visited.

procedure

(phi3-send-prompt! prompt)  string?

  prompt : string?
Sends the prompt prompt to the Ollama API using the Phi3 model, and returns the response. Assumes Ollama is running on localhost at port 11434.
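
A usage sketch, assuming a local Ollama server with the phi3 model pulled (for example, via ollama pull phi3):

  (require llm/ollama/phi3)

  ; Talks to Ollama at localhost:11434; returns the reply as a string.
  (phi3-send-prompt! "Explain tail calls in one sentence.")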

4.2.2 Llava

 (require llm/ollama/llava) package: llm-lib

Sets the current-send-prompt! to llava-send-prompt! when visited.

procedure

(llava-add-image! base64-data)  void?

  base64-data : string?
Prepends a base64-encoded image, either a PNG or JPEG, to the list of images sent with the next prompt. The prompt may refer to the images and rely on the order in which they were added.

procedure

(llava-send-prompt! prompt)  string?

  prompt : string?
Sends the prompt prompt to the Ollama API using the Llava model, and returns the response. Assumes Ollama is running on localhost at port 11434.
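
A sketch pairing llava-add-image! with llava-send-prompt!, assuming the llava model is pulled locally; the file name is hypothetical:

  (require llm/ollama/llava net/base64 racket/file)

  ; Attach a local JPEG, then ask about it.
  (llava-add-image!
   (bytes->string/utf-8
    (base64-encode (file->bytes "photo.jpg") #"")))
  (llava-send-prompt! "What is in this photo?")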