
Layerup Security

The Layerup Security integration for the LangChain LLM framework allows you to secure calls to any LangChain LLM, LLM chain, or LLM agent. The LayerupSecurity object wraps any existing LLM object, providing a security layer between your users and your LLMs.

Note that while the Layerup Security object is designed as an LLM, it is not itself an LLM; it simply wraps an underlying LLM, which allows it to adapt to the same functionality as that underlying LLM.

Setup

First, you'll need a Layerup Security account from the Layerup website.

Next, create a project via the dashboard and copy your API key. We recommend putting your API key in your project's environment variables.
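For example, you can read the key from an environment variable at runtime instead of hard-coding it in source. This is a minimal sketch; the variable name `LAYERUP_API_KEY` is a convention assumed here, not something the SDK requires:

```python
import os

# Placeholder value for illustration only; in practice, export the real
# key in your shell or deployment environment instead.
os.environ.setdefault("LAYERUP_API_KEY", "demo-key")

# Read the Layerup API key from the environment at runtime
layerup_api_key = os.environ["LAYERUP_API_KEY"]
```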

Install the Layerup Security SDK:

pip install LayerupSecurity

And install LangChain Community:

pip install langchain-community

And now you're ready to start protecting your LLM calls with Layerup Security!

from datetime import datetime

from langchain_community.llms.layerup_security import LayerupSecurity
from langchain_openai import OpenAI

# Create an instance of your favorite LLM
openai = OpenAI(
    model_name="gpt-3.5-turbo",
    openai_api_key="OPENAI_API_KEY",
)

# Configure Layerup Security
layerup_security = LayerupSecurity(
    # Specify an LLM that Layerup Security will wrap around
    llm=openai,

    # Layerup API key, from the Layerup dashboard
    layerup_api_key="LAYERUP_API_KEY",

    # Custom base URL, if self-hosting
    layerup_api_base_url="https://api.uselayerup.com/v1",

    # List of guardrails to run on prompts before the LLM is invoked
    prompt_guardrails=[],

    # List of guardrails to run on responses from the LLM
    response_guardrails=["layerup.hallucination"],

    # Whether or not to mask the prompt for PII & sensitive data before it is sent to the LLM
    mask=False,

    # Metadata for abuse tracking, customer tracking, and scope tracking
    metadata={"customer": "example@uselayerup.com"},

    # Handler for guardrail violations on the prompt guardrails
    handle_prompt_guardrail_violation=(
        lambda violation: {
            "role": "assistant",
            "content": (
                "There was sensitive data! I cannot respond. "
                "Here's a dynamic canned response. Current date: {}"
            ).format(datetime.now())
        }
        if violation["offending_guardrail"] == "layerup.sensitive_data"
        else None
    ),

    # Handler for guardrail violations on the response guardrails
    handle_response_guardrail_violation=(
        lambda violation: {
            "role": "assistant",
            "content": (
                "Custom canned response with dynamic data! "
                "The violation rule was {}."
            ).format(violation["offending_guardrail"])
        }
    ),
)

response = layerup_security.invoke(
    "Summarize this message: my name is Bob Dylan. My SSN is 123-45-6789."
)
API Reference: LayerupSecurity | OpenAI