
IBM watsonx.ai

WatsonxLLM is a wrapper for IBM watsonx.ai foundation models.

This example shows how to communicate with watsonx.ai models using LangChain.

Overview

Integration details

Class: WatsonxLLM · Package: langchain-ibm (PyPI - Downloads · PyPI - Version)
Local: ❌ · Serializable: ❌ · JS support: ✅

Setup

To access IBM watsonx.ai models, you'll need to create an IBM watsonx.ai account, get an API key, and install the langchain-ibm integration package.

Credentials

The cell below defines the credentials required to work with watsonx Foundation Model inferencing.

Action: Provide the IBM Cloud user API key. For details, see Managing user API keys.

import os
from getpass import getpass

watsonx_api_key = getpass()
os.environ["WATSONX_APIKEY"] = watsonx_api_key

Additionally, you can pass other secrets as environment variables.

import os

os.environ["WATSONX_URL"] = "your service instance url"
os.environ["WATSONX_TOKEN"] = "your token for accessing the CPD cluster"
os.environ["WATSONX_PASSWORD"] = "your password for accessing the CPD cluster"
os.environ["WATSONX_USERNAME"] = "your username for accessing the CPD cluster"
os.environ["WATSONX_INSTANCE_ID"] = "your instance_id for accessing the CPD cluster"

Installation

The LangChain IBM integration lives in the langchain-ibm package:

!pip install -qU langchain-ibm

Instantiation

You might need to adjust model parameters for different models or tasks. For details, refer to the documentation.

parameters = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.5,
    "top_k": 50,
    "top_p": 1,
}
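To make the effect of these settings concrete, here is a sketch contrasting two parameter sets built with the same keys used above: greedy decoding always picks the most likely next token (so temperature, top_k, and top_p are irrelevant and omitted), while sampling draws from the truncated token distribution. The specific values below are illustrative assumptions, not recommendations.

```python
# Deterministic output: greedy decoding picks the single most likely token,
# so no sampling knobs are needed.
greedy_params = {
    "decoding_method": "greedy",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
}

# More varied output: sample from the distribution, restricted to the
# top 20 tokens (top_k) within 90% cumulative probability (top_p),
# with a low temperature for mild randomness.
sampling_params = {
    "decoding_method": "sample",
    "max_new_tokens": 100,
    "min_new_tokens": 1,
    "temperature": 0.2,
    "top_k": 20,
    "top_p": 0.9,
}
```

Either dict can be passed as `params` when constructing the LLM, exactly like `parameters` above.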

Initialize the WatsonxLLM class with the previously set parameters.

Note:

  • You must provide a project_id or space_id to give context for the API call. For more information, see the documentation.
  • Depending on the region of your provisioned service instance, use one of the URLs described here.

In this example, we'll use the project_id and the Dallas URL.

You need to specify the model_id that will be used for inferencing. You can find all available models in the documentation.

from langchain_ibm import WatsonxLLM

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)
API Reference: WatsonxLLM

Alternatively, you can use Cloud Pak for Data credentials. For details, see the documentation.

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    url="PASTE YOUR URL HERE",
    username="PASTE YOUR USERNAME HERE",
    password="PASTE YOUR PASSWORD HERE",
    instance_id="openshift",
    version="4.8",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

Instead of model_id, you can also pass the deployment_id of a previously tuned model. The entire model tuning workflow is described in Working with TuneExperiment and PromptTuner.

watsonx_llm = WatsonxLLM(
    deployment_id="PASTE YOUR DEPLOYMENT_ID HERE",
    url="https://us-south.ml.cloud.ibm.com",
    project_id="PASTE YOUR PROJECT_ID HERE",
    params=parameters,
)

For certain use cases, you can pass IBM's APIClient object into the WatsonxLLM class.

from ibm_watsonx_ai import APIClient

api_client = APIClient(...)

watsonx_llm = WatsonxLLM(
    model_id="ibm/granite-13b-instruct-v2",
    watsonx_client=api_client,
)

You can also pass IBM's ModelInference object into the WatsonxLLM class.

from ibm_watsonx_ai.foundation_models import ModelInference

model = ModelInference(...)

watsonx_llm = WatsonxLLM(watsonx_model=model)

Invocation

To obtain completions, you can call the model directly using a string prompt.

# Calling a single prompt

watsonx_llm.invoke("Who is man's best friend?")
"Man's best friend is his dog. Dogs are man's best friend because they are always there for you, they never judge you, and they love you unconditionally. Dogs are also great companions and can help reduce stress levels. "
# Calling multiple prompts

watsonx_llm.generate(
    [
        "The fastest dog in the world?",
        "Describe your chosen dog breed",
    ]
)
LLMResult(generations=[[Generation(text='The fastest dog in the world is the greyhound. Greyhounds can run up to 45 mph, which is about the same speed as a Usain Bolt.', generation_info={'finish_reason': 'eos_token'})], [Generation(text='The Labrador Retriever is a breed of retriever that was bred for hunting. They are a very smart breed and are very easy to train. They are also very loyal and will make great companions. ', generation_info={'finish_reason': 'eos_token'})]], llm_output={'token_usage': {'generated_token_count': 82, 'input_token_count': 13}, 'model_id': 'ibm/granite-13b-instruct-v2', 'deployment_id': None}, run=[RunInfo(run_id=UUID('750b8a0f-8846-456d-93d0-e039e95b1276')), RunInfo(run_id=UUID('aa4c2a1c-5b08-4fcf-87aa-50228de46db5'))], type='LLMResult')

Streaming the Model output

You can stream the model output.

for chunk in watsonx_llm.stream(
    "Describe your favorite breed of dog and why it is your favorite."
):
    print(chunk, end="")
My favorite breed of dog is a Labrador Retriever. They are my favorite breed because they are my favorite color, yellow. They are also very smart and easy to train.
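A common pattern is to display chunks as they arrive while also accumulating them into the full completion. Since the real `watsonx_llm.stream(...)` call needs live credentials, the sketch below uses a hypothetical stand-in generator that yields string chunks the same way:

```python
# Hypothetical stand-in for watsonx_llm.stream(...); the real call yields
# string chunks but requires live watsonx.ai credentials.
def fake_stream():
    for chunk in ["My favorite breed ", "of dog is ", "a Labrador Retriever."]:
        yield chunk

# Print each chunk as it arrives while collecting it for later use.
pieces = []
for chunk in fake_stream():
    print(chunk, end="")
    pieces.append(chunk)

full_response = "".join(pieces)
```

With the real model, you would replace `fake_stream()` with `watsonx_llm.stream(prompt)` and keep the loop body unchanged.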

Chaining

Create a PromptTemplate object which will be responsible for creating a random question.

from langchain_core.prompts import PromptTemplate

template = "Generate a random question about {topic}: Question: "

prompt = PromptTemplate.from_template(template)
API Reference: PromptTemplate

Provide a topic and run the chain.

llm_chain = prompt | watsonx_llm

topic = "dog"

llm_chain.invoke(topic)
'What is the origin of the name "Pomeranian"?'
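Before the model is called, the prompt side of the chain simply substitutes the topic into the `{topic}` placeholder. The same substitution can be illustrated with plain `str.format`, which is what the filled prompt sent to the model looks like (the model call itself is omitted here):

```python
# Illustration only: this mirrors the substitution the PromptTemplate performs
# on the template string before the text reaches the model.
template = "Generate a random question about {topic}: Question: "
filled = template.format(topic="dog")
print(filled)
```

The model then completes this filled prompt, producing an answer such as the question shown above.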

API reference

For detailed documentation of all WatsonxLLM features and configurations, head to the API reference.