
How to handle long text when doing extraction

When working with files, like PDFs, you're likely to encounter text that exceeds your language model's context window. To process this text, consider these strategies:

  1. Change LLM: choose a different LLM that supports a larger context window.
  2. Brute force: chunk the document, and extract content from each chunk.
  3. RAG: chunk the document, index the chunks, and only extract content from a subset of the chunks that look "relevant".

Keep in mind that these strategies have different trade-offs, and the best strategy likely depends on the application that you're designing!

This guide demonstrates how to implement strategies 2 and 3.

Setup

First we'll install the dependencies needed for this guide:

%pip install -qU langchain-community lxml faiss-cpu langchain-openai
Note: you may need to restart the kernel to use updated packages.

Now we need some example data! Let's download an article about cars from Wikipedia and load it as a LangChain Document.

import re

import requests
from langchain_community.document_loaders import BSHTMLLoader

# Download the content
response = requests.get("https://en.wikipedia.org/wiki/Car")
# Write it to a file
with open("car.html", "w", encoding="utf-8") as f:
    f.write(response.text)
# Load it with an HTML parser
loader = BSHTMLLoader("car.html")
document = loader.load()[0]
# Clean up code
# Replace consecutive new lines with a single new line
document.page_content = re.sub("\n\n+", "\n", document.page_content)
API Reference: BSHTMLLoader
print(len(document.page_content))
78865

Define the schema

Following the extraction tutorial, we will use Pydantic to define the schema of the information we wish to extract. In this case, we will extract a list of "key developments" (e.g., important historical events) that include a year and a description.

Note that we also include an evidence key and instruct the model to provide, in verbatim, the relevant sentences of text from the article. This allows us to compare the extraction results against (the model's reconstruction of) text from the original document.

from typing import List, Optional

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from pydantic import BaseModel, Field


class KeyDevelopment(BaseModel):
    """Information about a development in the history of cars."""

    year: int = Field(
        ..., description="The year when there was an important historic development."
    )
    description: str = Field(
        ..., description="What happened in this year? What was the development?"
    )
    evidence: str = Field(
        ...,
        description="Repeat in verbatim the sentence(s) from which the year and description information were extracted",
    )


class ExtractionData(BaseModel):
    """Extracted information about key developments in the history of cars."""

    key_developments: List[KeyDevelopment]


# Define a custom prompt to provide instructions and any additional context.
# 1) You can add examples into the prompt template to improve extraction quality
# 2) Introduce additional parameters to take context into account (e.g., include metadata
# about the document from which the text was extracted.)
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are an expert at identifying key historic development in text. "
            "Only extract important historic developments. Extract nothing if no important information can be found in the text.",
        ),
        ("human", "{text}"),
    ]
)
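Because the evidence field is meant to be quoted verbatim, one simple sanity check is whether each extracted evidence string actually occurs in the source text. A minimal sketch, using plain strings rather than real model output (the `evidence_is_verbatim` helper name is our own):

```python
def evidence_is_verbatim(evidence: str, source_text: str) -> bool:
    """Return True if the extracted evidence appears word-for-word in the source."""
    return evidence in source_text


source = (
    "The modern car was invented in 1886, when the German inventor "
    "Carl Benz patented his Benz Patent-Motorwagen."
)

print(evidence_is_verbatim("invented in 1886", source))  # True
print(evidence_is_verbatim("invented in 1885", source))  # False
```

In practice you may want to normalize whitespace on both sides before comparing, since the cleanup step above already rewrites newlines in the document.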

Create an extractor

Let's select an LLM. Because we are using tool-calling, we will need a model that supports a tool-calling feature. See this table for available LLMs.

pip install -qU "langchain[openai]"
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gpt-4o", model_provider="openai", temperature=0)
extractor = prompt | llm.with_structured_output(
    schema=ExtractionData,
    include_raw=False,
)

Brute force approach

Split the document into chunks such that each chunk fits into the context window of the LLM.

from langchain_text_splitters import TokenTextSplitter

text_splitter = TokenTextSplitter(
    # Controls the size of each chunk
    chunk_size=2000,
    # Controls overlap between chunks
    chunk_overlap=20,
)

texts = text_splitter.split_text(document.page_content)
API Reference: TokenTextSplitter

Use the batch functionality to run the extraction in parallel across the chunks!

Tip

You can often use .batch() to parallelize the extractions! .batch uses a threadpool under the hood to help you parallelize workloads.

If your model is exposed via an API, this will likely speed up your extraction flow!

# Limit just to the first 3 chunks
# so the code can be re-run quickly
first_few = texts[:3]

extractions = extractor.batch(
    [{"text": text} for text in first_few],
    {"max_concurrency": 5},  # limit the concurrency by passing max concurrency!
)

Merge results

After extracting data from across the chunks, we'll want to merge the extractions together.

key_developments = []

for extraction in extractions:
    key_developments.extend(extraction.key_developments)

key_developments[:10]
[KeyDevelopment(year=1769, description='Nicolas-Joseph Cugnot built the first steam-powered road vehicle.', evidence='The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769, while the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808.'),
KeyDevelopment(year=1808, description='François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile.', evidence='The French inventor Nicolas-Joseph Cugnot built the first steam-powered road vehicle in 1769, while the Swiss inventor François Isaac de Rivaz designed and constructed the first internal combustion-powered automobile in 1808.'),
KeyDevelopment(year=1886, description='Carl Benz invented the modern car, a practical, marketable automobile for everyday use, and patented his Benz Patent-Motorwagen.', evidence='The modern car—a practical, marketable automobile for everyday use—was invented in 1886, when the German inventor Carl Benz patented his Benz Patent-Motorwagen.'),
KeyDevelopment(year=1901, description='The Oldsmobile Curved Dash became the first mass-produced car.', evidence='The 1901 Oldsmobile Curved Dash and the 1908 Ford Model T, both American cars, are widely considered the first mass-produced[3][4] and mass-affordable[5][6][7] cars, respectively.'),
KeyDevelopment(year=1908, description='The Ford Model T became the first mass-affordable car.', evidence='The 1901 Oldsmobile Curved Dash and the 1908 Ford Model T, both American cars, are widely considered the first mass-produced[3][4] and mass-affordable[5][6][7] cars, respectively.'),
KeyDevelopment(year=1885, description='Carl Benz built the original Benz Patent-Motorwagen, the first modern car.', evidence='The original Benz Patent-Motorwagen, the first modern car, built in 1885 and awarded the patent for the concept'),
KeyDevelopment(year=1881, description='Gustave Trouvé demonstrated a three-wheeled car powered by electricity.', evidence='In November 1881, French inventor Gustave Trouvé demonstrated a three-wheeled car powered by electricity at the International Exposition of Electricity.'),
KeyDevelopment(year=1888, description="Bertha Benz undertook the first road trip by car to prove the road-worthiness of her husband's invention.", evidence="In August 1888, Bertha Benz, the wife and business partner of Carl Benz, undertook the first road trip by car, to prove the road-worthiness of her husband's invention."),
KeyDevelopment(year=1896, description='Benz designed and patented the first internal-combustion flat engine, called boxermotor.', evidence='In 1896, Benz designed and patented the first internal-combustion flat engine, called boxermotor.'),
KeyDevelopment(year=1897, description='The first motor car in central Europe and one of the first factory-made cars in the world was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra), the Präsident automobil.', evidence='The first motor car in central Europe and one of the first factory-made cars in the world, was produced by Czech company Nesselsdorfer Wagenbau (later renamed to Tatra) in 1897, the Präsident automobil.')]
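Because the chunks are processed independently, the merged list is not guaranteed to be in chronological order. A minimal sketch of sorting it by year, using a plain dataclass as a stand-in for the Pydantic model above:

```python
from dataclasses import dataclass


@dataclass
class KeyDevelopment:
    year: int
    description: str


merged = [
    KeyDevelopment(1886, "Benz patented the Patent-Motorwagen"),
    KeyDevelopment(1769, "Cugnot built the first steam-powered road vehicle"),
    KeyDevelopment(1908, "Ford introduced the Model T"),
]

# Sort the merged developments chronologically before presenting them.
merged.sort(key=lambda kd: kd.year)
print([kd.year for kd in merged])  # [1769, 1886, 1908]
```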

RAG based approach

Another simple idea is to chunk up the text, but instead of extracting information from every chunk, just focus on the most relevant chunks.

Caution

It can be difficult to identify which chunks are relevant.

For example, in the car article we're using here, most of the article contains key development information. So by using RAG, we'll likely be throwing out a lot of relevant information.

We suggest experimenting with your use case and determining whether this approach works or not.

To implement the RAG based approach:

  1. Chunk up your document(s) and index the chunks (e.g., in a vectorstore);
  2. Prepend the extractor chain with a retrieval step using the vectorstore.

Here's a simple example that relies on the FAISS vectorstore.

from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document
from langchain_core.runnables import RunnableLambda
from langchain_openai import OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter

texts = text_splitter.split_text(document.page_content)
vectorstore = FAISS.from_texts(texts, embedding=OpenAIEmbeddings())

retriever = vectorstore.as_retriever(
    search_kwargs={"k": 1}
)  # Only extract from first document

In this case the RAG extractor is only looking at the top-ranked document.

rag_extractor = {
    "text": retriever | (lambda docs: docs[0].page_content)  # fetch content of top doc
} | extractor
results = rag_extractor.invoke("Key developments associated with cars")
for key_development in results.key_developments:
    print(key_development)
year=2006 description='Car-sharing services in the US experienced double-digit growth in revenue and membership.' evidence='in the US, some car-sharing services have experienced double-digit growth in revenue and membership growth between 2006 and 2007.'
year=2020 description='56 million cars were manufactured worldwide, with China producing the most.' evidence='In 2020, there were 56 million cars manufactured worldwide, down from 67 million the previous year. The automotive industry in China produces by far the most (20 million in 2020).'
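The retriever above keeps only the single top-ranked chunk (k=1). If you suspect relevant information spans several chunks, one option is to retrieve more chunks and join their contents before extraction. This is a sketch under the assumption that retrieved documents expose a page_content attribute, as LangChain Documents do; the `Doc` stand-in and `join_top_k` helper below are ours:

```python
class Doc:
    """Minimal stand-in for a retrieved LangChain Document."""

    def __init__(self, page_content: str):
        self.page_content = page_content


def join_top_k(docs):
    """Concatenate the contents of the retrieved chunks into one text."""
    return "\n\n".join(doc.page_content for doc in docs)


docs = [Doc("chunk one"), Doc("chunk two"), Doc("chunk three")]
print(join_top_k(docs) == "chunk one\n\nchunk two\n\nchunk three")  # True
```

In the chain above, you would raise k (e.g., vectorstore.as_retriever(search_kwargs={"k": 3})) and pipe retriever | join_top_k into the extractor, trading a larger prompt for better recall.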

Common issues

Different methods have different pros and cons related to cost, speed, and accuracy.

Watch out for these issues:

  • Chunking content means that the LLM can fail to extract information if the information is spread across multiple chunks.
  • Big chunk overlap may cause the same information to be extracted twice, so be prepared to de-duplicate!
  • LLMs can make up data. If looking for a single fact across a huge text and using a brute force approach, you may end up getting more made up data.
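As a sketch of the de-duplication point above (again using a plain dataclass stand-in rather than the Pydantic model), duplicates produced by overlapping chunks can be collapsed by keying on the (year, description) pair:

```python
from dataclasses import dataclass


@dataclass
class KeyDevelopment:
    year: int
    description: str


# Overlapping chunks extracted the 1886 development twice.
extracted = [
    KeyDevelopment(1886, "Benz patented the Patent-Motorwagen"),
    KeyDevelopment(1886, "Benz patented the Patent-Motorwagen"),
    KeyDevelopment(1908, "Ford introduced the Model T"),
]

# De-duplicate while preserving first-seen order.
seen = set()
deduped = []
for kd in extracted:
    key = (kd.year, kd.description)
    if key not in seen:
        seen.add(key)
        deduped.append(kd)

print(len(deduped))  # 2
```

Note that near-duplicates with slightly different wording will not be caught by exact matching; fuzzier matching (e.g., on the year alone, or on normalized descriptions) may be worth experimenting with.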