Momento
Momento Cache is the world's first truly serverless caching service, offering instant elasticity, scale-to-zero capability, and blazing-fast performance.
Momento Vector Index stands out as the most productive, easiest-to-use, fully serverless vector index.
For both services, simply grab the SDK, obtain an API key, add a few lines to your code, and you're ready to go. Together, they provide a comprehensive solution for your LLM data needs.
This page covers how to use the Momento ecosystem within LangChain.
Installation and Setup
- Sign up for a free account here to get an API key
- Install the Momento Python SDK with pip install momento
Cache
Use Momento as a serverless, distributed, low-latency cache for LLM prompts and responses. The standard cache is the primary use case for Momento users in any environment.
To integrate Momento caching into your application:
from langchain.cache import MomentoCache
API 参考:MomentoCache
Then, set it up with the following code:
from datetime import timedelta
from momento import CacheClient, Configurations, CredentialProvider
from langchain.globals import set_llm_cache
# Instantiate the Momento client
cache_client = CacheClient(
Configurations.Laptop.v1(),
CredentialProvider.from_environment_variable("MOMENTO_API_KEY"),
default_ttl=timedelta(days=1))
# Choose a Momento cache name
cache_name = "langchain"
# Instantiate the LLM cache
set_llm_cache(MomentoCache(cache_client, cache_name))
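Once the cache is set, every LangChain LLM call is cached transparently. A minimal sketch, assuming the langchain-openai package is installed and OPENAI_API_KEY (in addition to MOMENTO_API_KEY) is set in the environment:

```python
from langchain_openai import OpenAI

# Any LLM works here; OpenAI is used purely for illustration.
llm = OpenAI(model_name="gpt-3.5-turbo-instruct")

# The first call goes to the LLM provider; an identical second call
# within the configured TTL (1 day above) is served from Momento.
llm.invoke("Tell me a joke")
llm.invoke("Tell me a joke")  # cache hit — no second provider round-trip
```

Because set_llm_cache installs the cache globally, no per-model configuration is needed.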
API 参考:set_llm_cache
Memory
Momento can be used as a distributed memory store for LLMs.
See this notebook for a walkthrough of how to use Momento as a memory store for chat message history.
from langchain.memory import MomentoChatMessageHistory
API 参考:MomentoChatMessageHistory
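A minimal sketch of storing chat history in Momento. The session ID and cache name below are illustrative; this assumes MOMENTO_API_KEY is set in the environment:

```python
from datetime import timedelta

from langchain.memory import MomentoChatMessageHistory

# Each session ID maps to its own message history inside the cache.
history = MomentoChatMessageHistory.from_client_params(
    "my-session-id",        # hypothetical session identifier
    "langchain",            # cache name
    timedelta(days=1),      # TTL for stored messages
)

history.add_user_message("Hi there!")
history.add_ai_message("Hello! How can I help?")
```

Histories for different sessions are isolated by session ID, so a single cache can back many concurrent conversations.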
Vector Store
Momento Vector Index (MVI) can be used as a vector store.
See this notebook for a walkthrough of how to use MVI as a vector store.
from langchain_community.vectorstores import MomentoVectorIndex
API 参考:MomentoVectorIndex
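A minimal sketch of indexing and querying with MVI. The index name and embedding model are illustrative assumptions; this requires MOMENTO_API_KEY and OPENAI_API_KEY in the environment:

```python
from langchain_community.vectorstores import MomentoVectorIndex
from langchain_openai import OpenAIEmbeddings

# Index a few texts; MVI creates the index on first use.
vector_store = MomentoVectorIndex.from_texts(
    texts=[
        "Momento Cache is a serverless cache",
        "Momento Vector Index is a serverless vector index",
    ],
    embedding=OpenAIEmbeddings(),
    index_name="langchain-demo",  # hypothetical index name
)

# Query the index for the most similar document.
docs = vector_store.similarity_search("What is MVI?", k=1)
```

Being serverless, MVI needs no capacity planning: indexes scale with usage and cost nothing when idle.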