LlamaEdge
LlamaEdge allows you to chat with LLMs in GGUF format both locally and via a chat service.
- LlamaEdgeChatService provides developers with an OpenAI API-compatible service to chat with LLMs via HTTP requests.
- LlamaEdgeChatLocal enables developers to chat with LLMs locally (coming soon).
Both LlamaEdgeChatService and LlamaEdgeChatLocal run on infrastructure driven by WasmEdge Runtime, which provides a lightweight and portable WebAssembly container environment for LLM inference tasks.
Chat via API Service
LlamaEdgeChatService works with llama-api-server. Following the steps in the llama-api-server quick-start, you can host your own API service and chat with any model you like on any device, as long as an internet connection is available.
from langchain_community.chat_models.llama_edge import LlamaEdgeChatService
from langchain_core.messages import HumanMessage, SystemMessage
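Before sending chat messages, it can help to confirm that the hosted service is reachable. Below is a minimal sketch, assuming the server exposes the OpenAI-compatible /v1/models endpoint (this endpoint is an assumption based on the service's OpenAI API compatibility):
import requests

# service url (same endpoint used in the examples below)
service_url = "https://b008-54-186-154-209.ngrok-free.app"

# query the assumed OpenAI-compatible model listing endpoint
resp = requests.get(f"{service_url}/v1/models")
print(resp.status_code)  # 200 indicates the service is up
print(resp.json())       # lists the models hosted by the service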
Chat with LLMs in non-streaming mode
# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"
# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url)
# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of France?")
messages = [system_message, user_message]
# chat with wasm-chat service
response = chat.invoke(messages)
print(f"[Bot] {response.content}")
[Bot] Hello! The capital of France is Paris.
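The response is a regular LangChain AIMessage, so a multi-turn conversation can be built by appending it to the message sequence together with the next question. A minimal sketch (the follow-up question is illustrative):
# continue the conversation: keep the assistant's reply in the history
messages.append(response)
messages.append(HumanMessage(content="And what is its population?"))

# the model now answers with the earlier turns as context
follow_up = chat.invoke(messages)
print(f"[Bot] {follow_up.content}")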
Chat with LLMs in streaming mode
# service url
service_url = "https://b008-54-186-154-209.ngrok-free.app"
# create wasm-chat service instance
chat = LlamaEdgeChatService(service_url=service_url, streaming=True)
# create message sequence
system_message = SystemMessage(content="You are an AI assistant")
user_message = HumanMessage(content="What is the capital of Norway?")
messages = [
system_message,
user_message,
]
output = ""
for chunk in chat.stream(messages):
# print(chunk.content, end="", flush=True)
output += chunk.content
print(f"[Bot] {output}")
[Bot] Hello! I'm happy to help you with your question. The capital of Norway is Oslo.
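Instead of concatenating strings, the streamed chunks can also be merged into a single message, since LangChain message chunks support the + operator. A minimal sketch:
# accumulate AIMessageChunk objects into one complete message
full = None
for chunk in chat.stream(messages):
    full = chunk if full is None else full + chunk
print(f"[Bot] {full.content}")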