
Migrating off ConversationBufferMemory or ConversationStringBufferMemory

ConversationBufferMemory and ConversationStringBufferMemory were used to keep track of a conversation between a human and an AI assistant without any additional processing.

Note

ConversationStringBufferMemory is equivalent to ConversationBufferMemory, but it targets LLMs that are not chat models.
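
The practical difference is only in how the history is surfaced. Below is a minimal sketch of the two legacy classes side by side (assuming the legacy langchain package is installed; the outputs shown in comments are illustrative):

from langchain.memory import ConversationBufferMemory, ConversationStringBufferMemory

# ConversationBufferMemory can surface the history as message objects (for chat models).
chat_memory = ConversationBufferMemory(return_messages=True)
chat_memory.save_context({"input": "hi"}, {"output": "hello"})
print(chat_memory.load_memory_variables({}))
# {'history': [HumanMessage(content='hi'), AIMessage(content='hello')]}

# ConversationStringBufferMemory surfaces it as one formatted string
# (for string-in/string-out LLMs).
string_memory = ConversationStringBufferMemory()
string_memory.save_context({"input": "hi"}, {"output": "hello"})
print(string_memory.load_memory_variables({}))
# {'history': 'Human: hi\nAI: hello'}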

The methods for handling conversation history using existing modern primitives are:

  1. Using LangGraph persistence along with appropriate processing of the message history
  2. Using LCEL with RunnableWithMessageHistory combined with appropriate processing of the message history

Most users will find LangGraph persistence both easier to use and configure than the equivalent LCEL, especially for more complex use cases.
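
In both approaches, "appropriate processing" means deciding how much of the raw message history to pass to the model on each turn. As a minimal sketch, recent versions of langchain-core ship a trim_messages helper that can cap the history (the budget below is illustrative):

from langchain_core.messages import AIMessage, HumanMessage, SystemMessage, trim_messages

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="hi! I'm bob"),
    AIMessage(content="Hello Bob!"),
    HumanMessage(content="what was my name?"),
]

# Keep only the most recent messages that fit the budget,
# always retaining the system message.
trimmed = trim_messages(
    messages,
    strategy="last",
    token_counter=len,  # count each message as one "token" for simplicity
    max_tokens=3,
    start_on="human",
    include_system=True,
)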

Setup

%%capture --no-stderr
%pip install --upgrade --quiet langchain-openai langchain
import os
from getpass import getpass

if "OPENAI_API_KEY" not in os.environ:
os.environ["OPENAI_API_KEY"] = getpass()

Usage with LLMChain / ConversationChain

This section shows how to migrate away from an LLMChain or ConversationChain used together with ConversationBufferMemory or ConversationStringBufferMemory.

Legacy

Below is example usage of ConversationBufferMemory with an LLMChain or the equivalent ConversationChain.

from langchain.chains import LLMChain
from langchain.memory import ConversationBufferMemory
from langchain_core.messages import SystemMessage
from langchain_core.prompts import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    MessagesPlaceholder,
)
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate(
    [
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{text}"),
    ]
)

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

legacy_chain = LLMChain(
    llm=ChatOpenAI(),
    prompt=prompt,
    memory=memory,
)

legacy_result = legacy_chain.invoke({"text": "my name is bob"})
print(legacy_result)

legacy_result = legacy_chain.invoke({"text": "what was my name"})
{'text': 'Hello Bob! How can I assist you today?', 'chat_history': [HumanMessage(content='my name is bob', additional_kwargs={}, response_metadata={}), AIMessage(content='Hello Bob! How can I assist you today?', additional_kwargs={}, response_metadata={})]}
legacy_result["text"]
'Your name is Bob. How can I assist you today, Bob?'
Note

Note that there is no support for separating conversation threads in a single memory object.
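
For example, turns from two unrelated conversations written to the same memory object end up interleaved into one history. A minimal sketch using the legacy API:

from langchain.memory import ConversationBufferMemory

memory = ConversationBufferMemory(return_messages=True)

# Turns from two unrelated conversations...
memory.save_context({"input": "hi, I'm alice"}, {"output": "Hello Alice!"})
memory.save_context({"input": "hi, I'm bob"}, {"output": "Hello Bob!"})

# ...are returned as a single, merged history.
print(memory.load_memory_variables({}))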

LangGraph

The example below shows how to use LangGraph to implement a ConversationChain or LLMChain with ConversationBufferMemory.

This example assumes that you're already somewhat familiar with LangGraph. If you're not, please see the LangGraph Quickstart Guide for more details.

LangGraph offers a lot of additional functionality (e.g., time-travel and interrupts) and will work well for other, more complex (and realistic) architectures.

import uuid

from langchain_core.messages import HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import START, MessagesState, StateGraph

# Define a new graph
workflow = StateGraph(state_schema=MessagesState)

# Define a chat model
model = ChatOpenAI()


# Define the function that calls the model
def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    # We return a list, because this will get added to the existing list
    return {"messages": response}


# Define the (single) node in the graph
workflow.add_edge(START, "model")
workflow.add_node("model", call_model)


# Adding memory is straightforward in langgraph!
memory = MemorySaver()

app = workflow.compile(
    checkpointer=memory
)


# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}


input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Here, let's confirm that the AI remembers our name!
input_message = HumanMessage(content="what was my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! How can I assist you today?
================================ Human Message =================================

what was my name?
================================== Ai Message ==================================

Your name is Bob. How can I help you today, Bob?
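
Because the conversation is checkpointed, the persisted state for a thread can also be inspected directly. A minimal sketch using the compiled graph's get_state method with the config from above:

# Retrieve the latest checkpoint for this thread and print the stored messages.
snapshot = app.get_state(config)
for message in snapshot.values["messages"]:
    message.pretty_print()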

LCEL: RunnableWithMessageHistory

Alternatively, if you have a simple chain, you can wrap the chain's chat model inside a RunnableWithMessageHistory.

Please refer to the migration guide for more information.
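
A minimal sketch of this approach, wrapping a chat model directly (the in-memory store and the get_session_history helper below are illustrative):

from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# Illustrative in-memory store mapping session ids to chat histories.
store = {}


def get_session_history(session_id: str) -> InMemoryChatMessageHistory:
    if session_id not in store:
        store[session_id] = InMemoryChatMessageHistory()
    return store[session_id]


chain = RunnableWithMessageHistory(ChatOpenAI(), get_session_history)

# Each session id gets its own independent history.
chain.invoke(
    [HumanMessage(content="hi! I'm bob")],
    config={"configurable": {"session_id": "1"}},
)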

Usage with a pre-built agent

This example shows usage of an AgentExecutor with a pre-built agent constructed using the create_tool_calling_agent function.

If you are using one of the old LangChain pre-built agents, you should be able to replace that code with the new langgraph pre-built agent, which leverages the native tool-calling capabilities of chat models and will likely work better out of the box.

Legacy usage

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.memory import ConversationBufferMemory
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI

model = ChatOpenAI(temperature=0)


@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"


tools = [get_user_age]

prompt = ChatPromptTemplate.from_messages(
    [
        ("placeholder", "{chat_history}"),
        ("human", "{input}"),
        ("placeholder", "{agent_scratchpad}"),
    ]
)

# Construct the tool-calling agent
agent = create_tool_calling_agent(model, tools, prompt)

# Instantiate memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Create the agent executor
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,  # Pass the memory to the executor
)

# Verify that the agent can use tools
print(agent_executor.invoke({"input": "hi! my name is bob what is my age?"}))
print()
# Verify that the agent has access to conversation history.
# The agent should be able to answer that the user's name is bob.
print(agent_executor.invoke({"input": "do you remember my name?"}))
{'input': 'hi! my name is bob what is my age?', 'chat_history': [HumanMessage(content='hi! my name is bob what is my age?', additional_kwargs={}, response_metadata={}), AIMessage(content='Bob, you are 42 years old.', additional_kwargs={}, response_metadata={})], 'output': 'Bob, you are 42 years old.'}

{'input': 'do you remember my name?', 'chat_history': [HumanMessage(content='hi! my name is bob what is my age?', additional_kwargs={}, response_metadata={}), AIMessage(content='Bob, you are 42 years old.', additional_kwargs={}, response_metadata={}), HumanMessage(content='do you remember my name?', additional_kwargs={}, response_metadata={}), AIMessage(content='Yes, your name is Bob.', additional_kwargs={}, response_metadata={})], 'output': 'Yes, your name is Bob.'}

LangGraph

You can follow the standard LangChain tutorial for building an agent for an in-depth explanation of how this works.

This example is shown here explicitly to make it easier for users to compare the legacy implementation with the corresponding langgraph implementation.

This example shows how to add memory to the pre-built react agent in langgraph.

For more details, please see the how to add memory to the prebuilt ReAct agent guide in langgraph.

import uuid

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent


@tool
def get_user_age(name: str) -> str:
    """Use this tool to find the user's age."""
    # This is a placeholder for the actual implementation
    if "bob" in name.lower():
        return "42 years old"
    return "41 years old"


memory = MemorySaver()
model = ChatOpenAI()
app = create_react_agent(
    model,
    tools=[get_user_age],
    checkpointer=memory,
)

# The thread id is a unique key that identifies
# this particular conversation.
# We'll just generate a random uuid here.
# This enables a single application to manage conversations among multiple users.
thread_id = uuid.uuid4()
config = {"configurable": {"thread_id": thread_id}}

# Tell the AI that our name is Bob, and ask it to use a tool to confirm
# that it's capable of working like an agent.
input_message = HumanMessage(content="hi! I'm bob. What is my age?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

# Confirm that the chat bot has access to previous conversation
# and can respond to the user saying that the user's name is Bob.
input_message = HumanMessage(content="do you remember my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob. What is my age?
================================== Ai Message ==================================
Tool Calls:
  get_user_age (call_oEDwEbIDNdokwqhAV6Azn47c)
 Call ID: call_oEDwEbIDNdokwqhAV6Azn47c
  Args:
    name: bob
================================= Tool Message =================================
Name: get_user_age

42 years old
================================== Ai Message ==================================

Bob, you are 42 years old! If you need any more assistance or information, feel free to ask.
================================ Human Message =================================

do you remember my name?
================================== Ai Message ==================================

Yes, your name is Bob. If you have any other questions or need assistance, feel free to ask!

If we use a different thread ID, it'll start a new conversation and the bot will not know our name!

config = {"configurable": {"thread_id": "123456789"}}

input_message = HumanMessage(content="hi! do you remember my name?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! do you remember my name?
================================== Ai Message ==================================

Hello! Yes, I remember your name. It's great to see you again! How can I assist you today?
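
Switching back to the original thread_id resumes the first conversation, since its checkpoints are still stored in the MemorySaver. A minimal sketch reusing the config from earlier:

# Resume the first conversation by reusing its thread id.
config = {"configurable": {"thread_id": thread_id}}
input_message = HumanMessage(content="what was my name again?")

for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()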

Next steps

Explore persistence with LangGraph:

Add persistence with simple LCEL (favor langgraph for more complex use cases):

Working with message histories: