Spark
Apache Spark is a unified analytics engine for large-scale data processing. It provides high-level APIs in Scala, Java, Python, and R, and an optimized engine that supports general computation graphs for data analysis. It also supports a rich set of higher-level tools including
Spark SQL for SQL and DataFrames, pandas API on Spark for pandas workloads, MLlib for machine learning, GraphX for graph processing, and Structured Streaming for stream processing.
Document loaders
PySpark
It loads data from a PySpark DataFrame.

See a usage example.
from langchain_community.document_loaders import PySparkDataFrameLoader
API Reference: PySparkDataFrameLoader
Tools/Toolkits
Spark SQL toolkit

Toolkit for interacting with Spark SQL.

See a usage example.
from langchain_community.agent_toolkits import SparkSQLToolkit, create_spark_sql_agent
from langchain_community.utilities.spark_sql import SparkSQL
Spark SQL individual tools

You can use individual tools from the Spark SQL toolkit:

- InfoSparkSQLTool: tool for getting metadata about Spark SQL
- ListSparkSQLTool: tool for getting table names
- QueryCheckerTool: tool that uses an LLM to check whether a query is correct
- QuerySparkSQLTool: tool for querying Spark SQL
from langchain_community.tools.spark_sql.tool import InfoSparkSQLTool
from langchain_community.tools.spark_sql.tool import ListSparkSQLTool
from langchain_community.tools.spark_sql.tool import QueryCheckerTool
from langchain_community.tools.spark_sql.tool import QuerySparkSQLTool