
How to cache chat model responses

Prerequisites

This guide assumes familiarity with the following concepts:

  • Chat models
  • LLMs

LangChain provides an optional caching layer for chat models. This is useful for two main reasons:

  • It can save you money by reducing the number of duplicate completion requests you make to your LLM provider, especially if you are often requesting the same completion multiple times. This is especially useful during application development.
  • It can speed up your application by reducing the number of API calls you make to the LLM provider.

This guide will walk you through how to enable this in your apps.

pip install -qU "langchain[openai]"
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gpt-4o-mini", model_provider="openai")
from langchain_core.globals import set_llm_cache
API Reference: set_llm_cache
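
Note that set_llm_cache configures the cache globally, for every model in the process. If you want a single model to skip it, a minimal sketch (assuming init_chat_model forwards the standard cache field from BaseChatModel, which can be set to False to opt that model out of the global cache):

from langchain.chat_models import init_chat_model

# Assumption: `cache=False` is passed through to the underlying chat model,
# where the BaseChatModel `cache` field disables caching for this model only
uncached_llm = init_chat_model("gpt-4o-mini", model_provider="openai", cache=False)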

In Memory Cache

This is an ephemeral cache that stores model calls in memory. It will be wiped when your environment restarts, and is not shared across processes.

%%time
from langchain_core.caches import InMemoryCache

set_llm_cache(InMemoryCache())

# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
API Reference: InMemoryCache
CPU times: user 645 ms, sys: 214 ms, total: 859 ms
Wall time: 829 ms
AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 822 µs, sys: 288 µs, total: 1.11 ms
Wall time: 1.06 ms
AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", response_metadata={'token_usage': {'completion_tokens': 13, 'prompt_tokens': 11, 'total_tokens': 24}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-b6836bdd-8c30-436b-828f-0ac5fc9ab50e-0')
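
The cache is keyed on the exact prompt and model configuration, so even a slightly different prompt is a cache miss and goes back to the API:

%%time
# Not in the cache yet -- a different prompt triggers a fresh API call
llm.invoke("Tell me a different joke")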

SQLite Cache

This cache implementation uses a SQLite database to store responses, and will persist across process restarts.

!rm .langchain.db
# We can do the same thing with a SQLite cache
from langchain_community.cache import SQLiteCache

set_llm_cache(SQLiteCache(database_path=".langchain.db"))
API Reference: SQLiteCache
%%time
# The first time, it is not yet in cache, so it should take longer
llm.invoke("Tell me a joke")
CPU times: user 9.91 ms, sys: 7.68 ms, total: 17.6 ms
Wall time: 657 ms
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', response_metadata={'token_usage': {'completion_tokens': 17, 'prompt_tokens': 11, 'total_tokens': 28}, 'model_name': 'gpt-3.5-turbo', 'system_fingerprint': 'fp_c2295e73ad', 'finish_reason': 'stop', 'logprobs': None}, id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')
%%time
# The second time it is, so it goes faster
llm.invoke("Tell me a joke")
CPU times: user 52.2 ms, sys: 60.5 ms, total: 113 ms
Wall time: 127 ms
AIMessage(content='Why did the scarecrow win an award? Because he was outstanding in his field!', id='run-39d9e1e8-7766-4970-b1d8-f50213fd94c5-0')
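
Because responses are stored on disk, a new process that points SQLiteCache at the same database file is served from the cache without making an API call. A minimal sketch of such a fresh session (assuming the .langchain.db file from the run above still exists):

from langchain.chat_models import init_chat_model
from langchain_community.cache import SQLiteCache
from langchain_core.globals import set_llm_cache

# Reuse the database file written by the earlier process
set_llm_cache(SQLiteCache(database_path=".langchain.db"))

llm = init_chat_model("gpt-4o-mini", model_provider="openai")

# Same prompt and model configuration as before, so this is a cache hit
llm.invoke("Tell me a joke")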

Next steps

You've now learned how to cache model responses to save time and money.

Next, check out the other how-to guides on chat models in this section, like how to get a model to return structured output or how to create your own custom chat model.