# RAG Applications with LangChain: Building AI Search
Retrieval-Augmented Generation (RAG) combines document retrieval with LLM generation: relevant chunks are retrieved from your documents and passed to the model as context, so answers are grounded in your own data rather than the model's training set.
## Setup

```bash
npm install langchain @langchain/openai @langchain/community
```
## Document Loading and Splitting

```javascript
import { TextLoader } from 'langchain/document_loaders/fs/text';
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter';

const loader = new TextLoader('documents/data.txt');
const docs = await loader.load();

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 1000,
  chunkOverlap: 200,
});
const splitDocs = await splitter.splitDocuments(docs);
```
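To build intuition for `chunkSize` and `chunkOverlap`, here is a simplified sketch of fixed-size chunking with overlap. It is an illustration only, not LangChain's actual algorithm, which recursively prefers paragraph and sentence boundaries before falling back to character counts:

```javascript
// Simplified sketch: slice text into fixed-size chunks, stepping
// forward by (chunkSize - chunkOverlap) so consecutive chunks share
// an overlapping region and context isn't lost at chunk boundaries.
function chunkText(text, chunkSize, chunkOverlap) {
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - chunkOverlap;
  }
  return chunks;
}

// 2500 characters with size 1000 / overlap 200 yields three chunks.
const chunks = chunkText('a'.repeat(2500), 1000, 200);
console.log(chunks.length);
```

The overlap means a sentence cut off at the end of one chunk reappears at the start of the next, so the retriever can still match it in full.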
## Vector Store

```javascript
import { OpenAIEmbeddings } from '@langchain/openai';
import { MemoryVectorStore } from 'langchain/vectorstores/memory';

const vectorStore = await MemoryVectorStore.fromDocuments(
  splitDocs,
  new OpenAIEmbeddings()
);
```
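Conceptually, the vector store embeds each chunk as a numeric vector and, at query time, ranks chunks by similarity to the embedded query. The toy sketch below uses made-up 3-dimensional vectors to show the idea (real OpenAI embeddings have over a thousand dimensions):

```javascript
// Cosine similarity: dot product of the vectors divided by the
// product of their magnitudes. Closer to 1 means more similar.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "store": each chunk paired with a pretend embedding.
const store = [
  { text: 'chunk about cats', vector: [1, 0, 0] },
  { text: 'chunk about dogs', vector: [0, 1, 0] },
];

// Pretend embedding of a query about cats; rank chunks by similarity.
const queryVector = [0.9, 0.1, 0];
const ranked = [...store].sort(
  (x, y) =>
    cosineSimilarity(y.vector, queryVector) -
    cosineSimilarity(x.vector, queryVector)
);
console.log(ranked[0].text); // the cat chunk ranks first
```

`MemoryVectorStore` does this ranking in memory; for production workloads you would swap in a persistent store, which LangChain exposes behind the same interface.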
## Retrieval Chain

```javascript
import { createRetrievalChain } from 'langchain/chains/retrieval';
import { createStuffDocumentsChain } from 'langchain/chains/combine_documents';
import { ChatOpenAI } from '@langchain/openai';
import { PromptTemplate } from '@langchain/core/prompts';

const llm = new ChatOpenAI({ model: 'gpt-4' });

const prompt = PromptTemplate.fromTemplate(`
Answer based on context:
{context}

Question: {input}
`);

const combineDocsChain = await createStuffDocumentsChain({
  llm,
  prompt,
});

const retriever = vectorStore.asRetriever();
const retrievalChain = await createRetrievalChain({
  retriever,
  combineDocsChain,
});
```
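The "stuff" strategy simply concatenates ("stuffs") all retrieved chunks into the prompt's `{context}` slot before calling the LLM. The sketch below illustrates that step with a hypothetical `stuffPrompt` helper; it mimics the template syntax above but is not LangChain's internal implementation:

```javascript
// Illustration of the "stuff" combine strategy: join the retrieved
// documents' text and substitute it into the template, along with
// the user's question. (stuffPrompt is a hypothetical helper.)
function stuffPrompt(template, docs, input) {
  const context = docs.map((d) => d.pageContent).join('\n\n');
  return template.replace('{context}', context).replace('{input}', input);
}

const retrievedDocs = [
  { pageContent: 'LangChain is a framework for LLM apps.' },
  { pageContent: 'RAG grounds answers in retrieved documents.' },
];

const filled = stuffPrompt(
  'Answer based on context:\n{context}\nQuestion: {input}',
  retrievedDocs,
  'What is LangChain?'
);
console.log(filled);
```

Stuffing is simple and keeps all evidence in one call, but it only works while the retrieved chunks fit in the model's context window; for larger result sets LangChain offers other combine strategies.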
## Query

```javascript
const response = await retrievalChain.invoke({
  input: 'What is the main topic?',
});

console.log(response.answer);
// response.context holds the retrieved documents, useful for
// showing sources alongside the answer.
```
## Conclusion
RAG enables building intelligent Q&A systems that can search and reason over your documents.