How to parse text from message objects
Prerequisites
This guide assumes familiarity with core LangChain concepts such as chat models, messages, and output parsers.
LangChain message objects support content in several formats, including text, multimodal data, and lists of content-block dictionaries.
The format of a chat model's response content can vary by provider. For example, Anthropic's chat models return plain string content for a typical string input:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(model="claude-3-5-haiku-latest")
response = llm.invoke("Hello")
response.content
API Reference: ChatAnthropic
'Hi there! How are you doing today? Is there anything I can help you with?'
But when tool calls are generated, the response content is structured into content blocks that convey the model's reasoning:
from langchain_core.tools import tool
@tool
def get_weather(location: str) -> str:
    """Get the weather from a location."""
    return "Sunny."
llm_with_tools = llm.bind_tools([get_weather])
response = llm_with_tools.invoke("What's the weather in San Francisco, CA?")
response.content
API Reference: tool
[{'text': "I'll help you get the current weather for San Francisco, California. Let me check that for you right away.",
'type': 'text'},
{'id': 'toolu_015PwwcKxWYctKfY3pruHFyy',
'input': {'location': 'San Francisco, CA'},
'name': 'get_weather',
'type': 'tool_use'}]
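Before reaching for a parser, it can help to see what extracting the text by hand would involve. The sketch below uses plain Python (no LangChain imports) to filter the `text` blocks out of a content list shaped like the one above; the helper name `extract_text` is illustrative, not part of the library:

```python
def extract_text(content) -> str:
    """Collect the text from message content, which may be a plain
    string or a list of content-block dicts."""
    if isinstance(content, str):
        return content
    # Keep only blocks of type "text"; skip tool_use and other block types.
    return "".join(
        block["text"]
        for block in content
        if isinstance(block, dict) and block.get("type") == "text"
    )

content_blocks = [
    {"text": "I'll help you get the current weather.", "type": "text"},
    {
        "id": "toolu_015PwwcKxWYctKfY3pruHFyy",
        "input": {"location": "San Francisco, CA"},
        "name": "get_weather",
        "type": "tool_use",
    },
]
print(extract_text(content_blocks))  # → I'll help you get the current weather.
```

StrOutputParser handles this dispatch for us, which is why composing it after the model is the more convenient route.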
To automatically extract the text from a message object, regardless of the format of its underlying content, we can use StrOutputParser. It can be composed with a chat model as follows:
from langchain_core.output_parsers import StrOutputParser
chain = llm_with_tools | StrOutputParser()
API Reference: StrOutputParser
StrOutputParser simplifies extracting text from message objects:
response = chain.invoke("What's the weather in San Francisco, CA?")
print(response)
I'll help you check the weather in San Francisco, CA right away.
This is particularly useful in streaming contexts:
for chunk in chain.stream("What's the weather in San Francisco, CA?"):
    print(chunk, end="|")
|I'll| help| you get| the current| weather for| San Francisco, California|. Let| me retrieve| that| information for you.||||||||||
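Because the parsed chunks are plain strings, reassembling the full reply is simple concatenation. A minimal illustration with a stand-in generator (`fake_stream` is a placeholder for the real `chain.stream(...)` call):

```python
def fake_stream():
    """Stand-in for chain.stream(...): yields plain string chunks."""
    yield from ["I'll ", "help ", "you."]

pieces = []
for chunk in fake_stream():  # in practice: chain.stream(prompt)
    pieces.append(chunk)
full_text = "".join(pieces)
print(full_text)  # → I'll help you.
```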
For more information, see the API reference.