
CogneeRetriever

This guide will help you get started with the Cognee retriever. For detailed documentation of all CogneeRetriever features and configurations, head to the API reference.

Integration details

Bring your own data (i.e., index and search a custom corpus of documents):

Retriever: CogneeRetriever
Package: langchain-cognee

Setup

For Cognee's default setup, you only need your OpenAI API key.

If you want automatic tracing of individual queries, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"
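Rather than hard-coding the OpenAI key later in the notebook, it can be read from the environment with a prompt fallback. A minimal sketch (the `ensure_api_key` helper name is ours, not part of any package):

```python
import getpass
import os


def ensure_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Return the key stored in `var`, prompting once if it is unset."""
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"Enter value for {var}: ")
    return os.environ[var]
```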

Installation

This retriever lives in the langchain-cognee package. Cognee performs its work asynchronously, so nest_asyncio.apply() is used below to allow that inside an already-running event loop (for example, in a Jupyter notebook):

%pip install -qU langchain-cognee
import nest_asyncio

nest_asyncio.apply()

Instantiation

Now we can instantiate our retriever:

from langchain_cognee import CogneeRetriever

retriever = CogneeRetriever(
    llm_api_key="sk-",  # OpenAI API Key
    dataset_name="my_dataset",
    k=3,
)

Usage

Add some documents, process them, and then run a query. Cognee retrieves knowledge relevant to your query and uses it to generate a final answer.

# Example of adding and processing documents
from langchain_core.documents import Document

docs = [
    Document(page_content="Elon Musk is the CEO of SpaceX."),
    Document(page_content="SpaceX focuses on rockets and space travel."),
]

retriever.add_documents(docs)
retriever.process_data()

# Now let's query the retriever
query = "Tell me about Elon Musk"
results = retriever.invoke(query)

for idx, doc in enumerate(results, start=1):
    print(f"Doc {idx}: {doc.page_content}")
API Reference: Document
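Retrieved results are plain `Document` objects, so downstream code only needs their `page_content`. A stand-in sketch (no Cognee required; the `Doc` class below is a substitute for `langchain_core.documents.Document`) of collapsing results into one numbered context block, the way a RAG prompt typically consumes them:

```python
from dataclasses import dataclass


@dataclass
class Doc:
    """Stand-in for langchain_core.documents.Document."""
    page_content: str


def to_context(results):
    """Join retrieved passages into a single numbered context block."""
    return "\n".join(
        f"Doc {i}: {d.page_content}" for i, d in enumerate(results, start=1)
    )


results = [
    Doc("Elon Musk is the CEO of SpaceX."),
    Doc("SpaceX focuses on rockets and space travel."),
]
print(to_context(results))
```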

Use within a chain

Like other retrievers, CogneeRetriever can be incorporated into LLM applications via chains.

We will need an LLM or chat model:

pip install -qU "langchain[openai]"
import getpass
import os

if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter API key for OpenAI: ")

from langchain.chat_models import init_chat_model

llm = init_chat_model("gpt-4o-mini", model_provider="openai")
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
API Reference: ChatOpenAI
from langchain_cognee import CogneeRetriever
from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

# Instantiate the retriever with your Cognee config
retriever = CogneeRetriever(llm_api_key="sk-", dataset_name="my_dataset", k=3)

# Optionally, prune/reset the dataset for a clean slate
retriever.prune()

# Add some documents
docs = [
    Document(page_content="Elon Musk is the CEO of SpaceX."),
    Document(page_content="SpaceX focuses on space travel."),
]
retriever.add_documents(docs)
retriever.process_data()


prompt = ChatPromptTemplate.from_template(
    """Answer the question based only on the context provided.

Context: {context}

Question: {question}"""
)


def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)


chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
answer = chain.invoke("What companies does Elon Musk own?")

print("\nFinal chain answer:\n", answer)
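The dict at the top of the chain fans the incoming question out to two branches: one through the retriever and `format_docs` to build the context, and one passed through unchanged as the question. A plain-Python sketch of that wiring per call (the `fake_retriever` and `render_prompt` names are ours; no LangChain required):

```python
def fake_retriever(question):
    """Stand-in for `retriever`: returns already-extracted passage texts."""
    return ["Elon Musk is the CEO of SpaceX.", "SpaceX focuses on space travel."]


def format_docs(texts):
    return "\n\n".join(texts)


def render_prompt(question):
    """Roughly what the {"context": ..., "question": ...} | prompt step produces."""
    context = format_docs(fake_retriever(question))
    return (
        "Answer the question based only on the context provided.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}"
    )


print(render_prompt("What companies does Elon Musk own?"))
```

The LLM and `StrOutputParser` stages then turn this rendered prompt into the final string answer.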

API reference

TODO: add API reference link.