IPEX-LLM: Local BGE Embeddings on Intel CPU

IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency.

This example goes over how to use LangChain to conduct embedding tasks with ipex-llm optimizations on Intel CPU. This would be helpful in applications such as RAG, document QA, etc.

Setup

%pip install -qU langchain langchain-community

Install IPEX-LLM for optimizations on Intel CPU, as well as sentence-transformers:

%pip install --pre --upgrade ipex-llm[all] --extra-index-url https://download.pytorch.org/whl/cpu
%pip install sentence-transformers

Note

For Windows users, --extra-index-url https://download.pytorch.org/whl/cpu is not required when installing ipex-llm.
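
After installation, you can optionally verify that both packages import cleanly. This quick sanity check is our own addition, not part of the original guide:

# Optional sanity check (our addition): confirm both packages import cleanly.
import ipex_llm
import sentence_transformers

print("ipex-llm and sentence-transformers installed successfully")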

Basic Usage

from langchain_community.embeddings import IpexLLMBgeEmbeddings

embedding_model = IpexLLMBgeEmbeddings(
    model_name="BAAI/bge-large-en-v1.5",
    model_kwargs={},
    encode_kwargs={"normalize_embeddings": True},
)

API Reference: IpexLLMBgeEmbeddings
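
For the RAG-style use cases mentioned above, the embedding model can be passed to any LangChain vector store. The snippet below is a minimal sketch of our own (not part of the original example) and assumes faiss-cpu is installed:

from langchain_community.vectorstores import FAISS

# Hypothetical example documents for illustration.
texts = [
    "IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU.",
    "LangChain is a framework for developing applications powered by LLMs.",
]
vectorstore = FAISS.from_texts(texts, embedding_model)
docs = vectorstore.similarity_search("What is IPEX-LLM?", k=1)
print(docs[0].page_content)

You can also call the embedding interface directly: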

sentence = "IPEX-LLM is a PyTorch library for running LLM on Intel CPU and GPU (e.g., local PC with iGPU, discrete GPU such as Arc, Flex and Max) with very low latency."
query = "What is IPEX-LLM?"

text_embeddings = embedding_model.embed_documents([sentence, query])
print(f"text_embeddings[0][:10]: {text_embeddings[0][:10]}")
print(f"text_embeddings[1][:10]: {text_embeddings[1][:10]}")

query_embedding = embedding_model.embed_query(query)
print(f"query_embedding[:10]: {query_embedding[:10]}")