Migrating off ConversationBufferWindowMemory or ConversationTokenBufferMemory
Follow this guide if you're trying to migrate off one of the old memory classes listed below:

| Memory Type | Description |
|---|---|
| ConversationBufferWindowMemory | Keeps the last n messages of the conversation. Drops the oldest messages when there are more than n messages. |
| ConversationTokenBufferMemory | Keeps only the most recent messages in the conversation, under the constraint that the total number of tokens in the conversation does not exceed a certain limit. |
ConversationBufferWindowMemory and ConversationTokenBufferMemory apply additional processing on top of the raw conversation history, trimming it to a size that fits inside a chat model's context window.
This processing can be replicated using LangChain's built-in trim_messages function.
We'll begin by exploring a straightforward approach that applies the processing logic to the entire conversation history.
While easy to implement, this approach has a downside: as the conversation grows, so does latency, since the logic is re-applied on every turn to all previous exchanges in the conversation.
More advanced strategies focus on updating the conversation history incrementally to avoid redundant processing.
For example, the langgraph how-to guide on summarization demonstrates how to maintain a running summary of the conversation while discarding older messages, ensuring they aren't re-processed on later turns.
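As a framework-agnostic sketch of that incremental idea (the `summarize` helper below is a hypothetical stand-in for an LLM summarization call, not a LangChain API), each overflow message is folded into a running summary exactly once and then dropped, so it is never re-processed on a later turn:

```python
def summarize(summary: str, dropped: list[str]) -> str:
    """Hypothetical stand-in for an LLM call that folds dropped messages into the summary."""
    return (summary + " " + " ".join(dropped)).strip()


def update_history(summary: str, messages: list[str], max_messages: int = 4) -> tuple[str, list[str]]:
    """Fold the oldest messages into the summary once the history grows too long."""
    if len(messages) <= max_messages:
        return summary, messages
    dropped, kept = messages[:-max_messages], messages[-max_messages:]
    return summarize(summary, dropped), kept


summary, history = "", []
for turn in ["hi", "a", "b", "c", "d", "e"]:
    history.append(turn)
    # Each turn only touches the overflow, never the whole history.
    summary, history = update_history(summary, history)
```

Per-turn cost here depends only on the overflow size, not the full conversation length; the langgraph summarization guide implements the same pattern with real messages and an actual model call.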
Setup
%%capture --no-stderr
%pip install --upgrade --quiet langchain-openai langchain
import os
from getpass import getpass
if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass()
Legacy usage with LLMChain / ConversationChain
Details
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain_core.messages import SystemMessage
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.prompts.chat import (
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_openai import ChatOpenAI
prompt = ChatPromptTemplate(
    [
        SystemMessage(content="You are a helpful assistant."),
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)
memory = ConversationBufferWindowMemory(memory_key="chat_history", return_messages=True)
legacy_chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    memory=memory,
)
legacy_result = legacy_chain.invoke({"text": "my name is bob"})
print(legacy_result)
legacy_result = legacy_chain.invoke({"text": "what was my name"})
print(legacy_result)
{'text': 'Nice to meet you, Bob! How can I assist you today?', 'chat_history': []}
{'text': 'Your name is Bob. How can I assist you further, Bob?', 'chat_history': [HumanMessage(content='my name is bob', additional_kwargs={}, response_metadata={}), AIMessage(content='Nice to meet you, Bob! How can I assist you today?', additional_kwargs={}, response_metadata={})]}
Reimplementing ConversationBufferWindowMemory logic
Let's first create appropriate logic for processing the conversation history, and then see how to integrate it into an application. You can later replace this basic setup with more advanced logic tailored to your specific needs.
We'll use trim_messages to implement logic that keeps the last n messages of the conversation, dropping the oldest messages once their number exceeds n.
In addition, we'll also keep the system message if it's present: when present, it's the first message in the conversation and contains the instructions for the chat model.
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)
from langchain_openai import ChatOpenAI
messages = [
    SystemMessage("you're a good assistant, you always respond with a joke."),
    HumanMessage("i wonder why it's called langchain"),
    AIMessage(
        'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'
    ),
    HumanMessage("and who is harrison chasing anyways"),
    AIMessage(
        "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
    ),
    HumanMessage("why is 42 always the answer?"),
    AIMessage(
        "Because it’s the only number that’s constantly right, even when it doesn’t add up!"
    ),
    HumanMessage("What did the cow say?"),
]
from langchain_core.messages import trim_messages
selected_messages = trim_messages(
    messages,
    token_counter=len,  # <-- len will simply count the number of messages rather than tokens
    max_tokens=5,  # <-- allow up to 5 messages.
    strategy="last",
    # Most chat models expect that chat history starts with either:
    # (1) a HumanMessage or
    # (2) a SystemMessage followed by a HumanMessage
    # start_on="human" makes sure we produce a valid chat history
    start_on="human",
    # Usually, we want to keep the SystemMessage
    # if it's present in the original history.
    # The SystemMessage has special instructions for the model.
    include_system=True,
    allow_partial=False,
)
for msg in selected_messages:
    msg.pretty_print()
================================ System Message ================================
you're a good assistant, you always respond with a joke.
================================== Ai Message ==================================
Hmmm let me think.
Why, he's probably chasing after the last cup of coffee in the office!
================================ Human Message =================================
why is 42 always the answer?
================================== Ai Message ==================================
Because it’s the only number that’s constantly right, even when it doesn’t add up!
================================ Human Message =================================
What did the cow say?
Reimplementing ConversationTokenBufferMemory logic
Here, we'll use trim_messages to keep the system message and the most recent messages in the conversation, while making sure that the total token count of the conversation does not exceed a certain limit.
from langchain_core.messages import trim_messages
selected_messages = trim_messages(
    messages,
    # Please see API reference for trim_messages for other ways to specify a token counter.
    token_counter=ChatOpenAI(model="gpt-4o"),
    max_tokens=80,  # <-- token limit
    # Most chat models expect that chat history starts with either:
    # (1) a HumanMessage or
    # (2) a SystemMessage followed by a HumanMessage
    # start_on="human" makes sure we produce a valid chat history
    start_on="human",
    # Usually, we want to keep the SystemMessage
    # if it's present in the original history.
    # The SystemMessage has special instructions for the model.
    include_system=True,
    strategy="last",
)
for msg in selected_messages:
    msg.pretty_print()
================================ System Message ================================
you're a good assistant, you always respond with a joke.
================================ Human Message =================================
why is 42 always the answer?
================================== Ai Message ==================================
Because it’s the only number that’s constantly right, even when it doesn’t add up!
================================ Human Message =================================
What did the cow say?
Modern usage with LangGraph
The example below shows how to use LangGraph to add simple conversation pre-processing logic.
If you want to avoid running the computation on the entire conversation history each time, you can follow the how-to guide on summarization, which demonstrates how to discard older messages, ensuring they aren't re-processed during later turns.
Details
import uuid
from IPython.display import Image, display
from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph
# Define a new graph
workflow = StateGraph(state_schema=MessagesState)
# Define a chat model
model = ChatOpenAI()
# Define the function that calls the model
def call_model(state: MessagesState):
    selected_messages = trim_messages(
        state["messages"],
        token_counter=len,  # <-- len will simply count the number of messages rather than tokens
        max_tokens=5,  # <-- allow up to 5 messages.
        strategy="last",
        # Most chat models expect that chat history starts with either:
        # (1) a HumanMessage or
        # (2) a SystemMessage followed by a HumanMessage
        # start_on="human" makes sure we produce a valid chat history
        start_on="human",
        # Usually, we want to keep the SystemMessage
        # if it's present in the original history.
        # The SystemMessage has special instructions for the model.
        include_system=True,
        allow_partial=False,
    )
    response = model.invoke(selected_messages)
    # We return a list, because this will get added to the existing list
    return {"messages": response}
# Define the node and edge of the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)
# Adding memory is straightforward in langgraph!
memory = MemorySaver()
app = workflow.compile(checkpointer=memory)
# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
# Here, let's confirm that the AI remembers our name!
config = {"configurable": {"thread_id": thread_id}}
input_message = HumanMessage(content="what was my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================
hi! I'm bob
================================== Ai Message ==================================
Hello Bob! How can I assist you today?
================================ Human Message =================================
what was my name?
================================== Ai Message ==================================
Your name is Bob. How can I help you, Bob?
Usage with a pre-built langgraph agent
This example shows usage of an agent executor with a pre-built agent constructed using the create_tool_calling_agent function.
If you are using one of the old LangChain pre-built agents, you should be able to replace that code with the new LangGraph pre-built agent, which leverages the native tool calling capabilities of chat models and will likely work better out of the box.
Details
import uuid
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent
@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"
memory = MemorySaver()
model = ChatOpenAI()
def prompt(state) -> list[BaseMessage]:
    """Given the agent state, return a list of messages for the chat model."""
    # We're using the message processor defined above.
    return trim_messages(
        state["messages"],
        token_counter=len,  # <-- len will simply count the number of messages rather than tokens
        max_tokens=5,  # <-- allow up to 5 messages.
        strategy="last",
        # Most chat models expect that chat history starts with either:
        # (1) a HumanMessage or
        # (2) a SystemMessage followed by a HumanMessage
        # start_on="human" makes sure we produce a valid chat history
        start_on="human",
        # Usually, we want to keep the SystemMessage
        # if it's present in the original history.
        # The SystemMessage has special instructions for the model.
        include_system=True,
        allow_partial=False,
    )
app = create_react_agent(
    model,
    tools=[get_user_age],
    checkpointer=memory,
    prompt=prompt,
)
# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}
# Tell the AI that our name is Bob, and ask it to use a tool to confirm
# that it's capable of working like an agent.
input_message = HumanMessage(content="hi! I'm bob. What is my age?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
# Confirm that the chat bot has access to previous conversation
# and can respond to the user saying that the user's name is Bob.
input_message = HumanMessage(content="do you remember my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================
hi! I'm bob. What is my age?
================================== Ai Message ==================================
Tool Calls:
  get_user_age (call_jsMvoIFv970DhqqLCJDzPKsp)
  Call ID: call_jsMvoIFv970DhqqLCJDzPKsp
  Args:
    name: bob
================================= Tool Message =================================
Name: get_user_age
42 years old
================================== Ai Message ==================================
Bob, you are 42 years old.
================================ Human Message =================================
do you remember my name?
================================== Ai Message ==================================
Yes, your name is Bob.
LCEL: adding a preprocessing step
The simplest way to add complex conversation management is by introducing a pre-processing step in front of the chat model and passing the full conversation history to it.
This approach is conceptually simple and will work in many situations; for example, if you're using RunnableWithMessageHistory, wrap the chat model with the pre-processor rather than using the bare chat model.
The obvious downside of this approach is that latency starts to increase as the conversation history grows, for two reasons:
- As the conversation gets longer, more data may need to be fetched from whatever store you're using to store the conversation history (if it's not stored in memory).
- The pre-processing logic will end up doing a lot of redundant computation, repeating computation from earlier steps of the conversation.
If you want to use a chat model's tool calling capabilities, remember to bind the tools to the model before adding the history pre-processing step to it!
Details
from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    HumanMessage,
    SystemMessage,
    trim_messages,
)
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
model = ChatOpenAI()
@tool
def what_did_the_cow_say() -> str:
    """Check to see what the cow said."""
    return "foo"
message_processor = trim_messages(  # Returns a Runnable if no messages are provided
    token_counter=len,  # <-- len will simply count the number of messages rather than tokens
    max_tokens=5,  # <-- allow up to 5 messages.
    strategy="last",
    # The start_on is specified
    # to make sure we do not generate a sequence where
    # a ToolMessage that contains the result of a tool invocation
    # appears before the AIMessage that requested a tool invocation
    # as this will cause some chat models to raise an error.
    start_on=("human", "ai"),
    include_system=True,  # <-- Keep the system message
    allow_partial=False,
)
# Note that we bind tools to the model first!
model_with_tools = model.bind_tools([what_did_the_cow_say])
model_with_preprocessor = message_processor | model_with_tools
full_history = [
    SystemMessage("you're a good assistant, you always respond with a joke."),
    HumanMessage("i wonder why it's called langchain"),
    AIMessage(
        'Well, I guess they thought "WordRope" and "SentenceString" just didn\'t have the same ring to it!'
    ),
    HumanMessage("and who is harrison chasing anyways"),
    AIMessage(
        "Hmmm let me think.\n\nWhy, he's probably chasing after the last cup of coffee in the office!"
    ),
    HumanMessage("why is 42 always the answer?"),
    AIMessage(
        "Because it’s the only number that’s constantly right, even when it doesn’t add up!"
    ),
    HumanMessage("What did the cow say?"),
]
# We pass it explicitly to the model_with_preprocessor for illustrative purposes.
# If you're using `RunnableWithMessageHistory` the history will be automatically
# read from the source that you configure.
model_with_preprocessor.invoke(full_history).pretty_print()
================================== Ai Message ==================================
Tool Calls:
  what_did_the_cow_say (call_urHTB5CShhcKz37QiVzNBlIS)
  Call ID: call_urHTB5CShhcKz37QiVzNBlIS
  Args:
If you need to implement more efficient logic and want to use RunnableWithMessageHistory for the time being, you can achieve this by subclassing BaseChatMessageHistory and defining appropriate logic for add_messages (one that, rather than simply appending to the history, re-writes it instead).
Unless you have a good reason to implement this solution, you should instead use LangGraph.
Next steps
Explore persistence with LangGraph:
Add persistence with simple LCEL (favor LangGraph for more complex use cases):
Work with message history: