This class will be removed in 0.3.0. Use the LangChain Expression Language (LCEL) instead. See the example below for how to replace the LLMChain class with LCEL:
Chain to run queries against LLMs.
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a {adjective} joke");
const llm = new ChatOpenAI();
const chain = prompt.pipe(llm);
const response = await chain.invoke({ adjective: "funny" });
LLM Wrapper to use
Key to use for output, defaults to text
text
Prompt object to use
Optional
Kwargs to pass to LLM
OutputParser to use
Use .batch() instead. Will be removed in 0.2.0.
Calls the chain on each input in the list and returns the list of outputs.
Run the core logic of this chain and add to output if desired.
Wraps _call and handles memory.
Invokes the chain with the provided input and returns the output.
Input values for the chain run.
Promise that resolves with the output of the chain run.
Format prompt with values and pass to LLM
keys to pass to prompt template
CallbackManager to use
Completion from LLM.
llm.predict({ adjective: "funny" })
Use .invoke() instead. Will be removed in 0.2.0.
Static
Load a chain from a json-like object describing it.
Deprecated
Example