
Knowledge Bases for Amazon Bedrock

Overview

This guide will help you get started with the AmazonKnowledgeBaseRetriever. For detailed documentation of all AmazonKnowledgeBaseRetriever features and configurations, head to the API reference.

Knowledge Bases for Amazon Bedrock is a fully managed service from Amazon Web Services (AWS) that supports the end-to-end RAG workflow. It handles the entire ingestion workflow of converting your documents into embeddings (vectors) and storing those embeddings in a specialized vector database. Knowledge Bases for Amazon Bedrock supports popular databases for vector storage, including the vector engine for Amazon OpenSearch Serverless, Pinecone, Redis Enterprise Cloud, Amazon Aurora (coming soon), and MongoDB (coming soon).

Integration details

| Retriever | Self-host | Cloud offering | Package | Py support |
| --- | --- | --- | --- | --- |
| AmazonKnowledgeBaseRetriever | 🟠 (see details below) | ✅ | @langchain/aws | ✅ |

The AWS Knowledge Base Retriever can be "self-hosted" in the sense that you can run it against your own AWS infrastructure. However, it is not possible to run it on another cloud provider or on-premises.

Setup

In order to use the AmazonKnowledgeBaseRetriever, you need an AWS account where you can manage your indexes and documents. Once you've set up your account, set the following environment variables:

process.env.AWS_KNOWLEDGE_BASE_ID=your-knowledge-base-id
process.env.AWS_ACCESS_KEY_ID=your-access-key-id
process.env.AWS_SECRET_ACCESS_KEY=your-secret-access-key

If you want to get automated tracing from individual queries, you can also set your LangSmith API key by uncommenting the lines below:

// process.env.LANGSMITH_API_KEY = "<YOUR API KEY HERE>";
// process.env.LANGSMITH_TRACING = "true";

Installation

This retriever lives in the @langchain/aws package:

yarn add @langchain/aws

Instantiation

Now we can instantiate our retriever:

import { AmazonKnowledgeBaseRetriever } from "@langchain/aws";

const retriever = new AmazonKnowledgeBaseRetriever({
  topK: 10,
  knowledgeBaseId: process.env.AWS_KNOWLEDGE_BASE_ID,
  region: "us-east-2",
  clientOptions: {
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID,
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
    },
  },
});

Usage

const query = "...";

await retriever.invoke(query);
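
The invoke call returns an array of Document objects, with each retrieved chunk in pageContent and retrieval details in metadata. Below is a minimal sketch of post-processing those results, using plain objects in place of a live call (assumption: a real invocation requires AWS credentials and a provisioned knowledge base, and the exact metadata keys depend on your data source):

```typescript
// Simplified shape of the documents the retriever returns (assumption).
type RetrievedDoc = { pageContent: string; metadata: Record<string, unknown> };

// Mock results standing in for `await retriever.invoke(query)`.
const results: RetrievedDoc[] = [
  { pageContent: "First retrieved chunk", metadata: { score: 0.92 } },
  { pageContent: "Second retrieved chunk", metadata: { score: 0.87 } },
];

// Join the chunks into a single context string, as a RAG prompt would use.
const context = results.map((doc) => doc.pageContent).join("\n\n");
console.log(context);
```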

Use within a chain

Like other retrievers, AmazonKnowledgeBaseRetriever can be incorporated into LLM applications via chains.

We will need an LLM or chat model:

Pick your chat model:

Install dependencies

yarn add @langchain/openai 

Add environment variables

OPENAI_API_KEY=your-api-key

Instantiate the model

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});
import { ChatPromptTemplate } from "@langchain/core/prompts";
import {
  RunnablePassthrough,
  RunnableSequence,
} from "@langchain/core/runnables";
import { StringOutputParser } from "@langchain/core/output_parsers";

import type { Document } from "@langchain/core/documents";

const prompt = ChatPromptTemplate.fromTemplate(`
Answer the question based only on the context provided.

Context: {context}

Question: {question}`);

const formatDocs = (docs: Document[]) => {
  return docs.map((doc) => doc.pageContent).join("\n\n");
};

// See https://js.langchain.com/v0.2/docs/tutorials/rag
const ragChain = RunnableSequence.from([
  {
    context: retriever.pipe(formatDocs),
    question: new RunnablePassthrough(),
  },
  prompt,
  llm,
  new StringOutputParser(),
]);
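
The object step at the start of the sequence runs its values on the same input: the question flows through the retriever and formatDocs to become the context, and passes through unchanged as the question. Below is a plain-TypeScript sketch of that fan-out, with a stubbed retrieval step standing in for the live retriever (assumption: no AWS call is made and the chunk contents are invented for illustration):

```typescript
type Doc = { pageContent: string };

// Stub standing in for `retriever.invoke(question)` (no AWS call).
const fakeRetrieve = (_question: string): Doc[] => [
  { pageContent: "chunk A" },
  { pageContent: "chunk B" },
];

// Same formatting step as `formatDocs` in the chain above.
const formatDocsSketch = (docs: Doc[]): string =>
  docs.map((doc) => doc.pageContent).join("\n\n");

// The leading object in the RunnableSequence produces this prompt input:
const question = "What does the knowledge base say?";
const promptInput = {
  context: formatDocsSketch(fakeRetrieve(question)),
  question,
};
console.log(promptInput);
```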
tip

See our RAG tutorial for more information and examples on RunnableSequences like the one above.

await ragChain.invoke("...");

API reference

For detailed documentation of all AmazonKnowledgeBaseRetriever features and configurations, head to the API reference.
