Customizing AI Agents for LangChain with Redis - OpenGPTs
OpenAI recently launched GPTs, a no-code "app store" for building customized AI agents, and LangChain has since released a similar open-source tool called OpenGPTs.
OpenGPTs is a low-code, open-source framework for building customized AI agents. LangChain chose Redis as the default vector database for OpenGPTs because of its speed and stability.
So how do you build an intelligent AI agent using OpenGPTs and Redis?
LangChain founder Harrison Chase said: "We use Redis in OpenGPTs to store all of our long-term content, including vector storage for use in retrieval and a database for storing message and agent configurations. The fact that Redis integrates all of this functionality into a single database is very appealing!"
I. Introduction to OpenGPTs
With a few configuration steps, we can use OpenGPTs to build a Redis-savvy bot called "RedisGuru".
For this example, we select an LLM, a system message, and a set of tools, including DuckDuckGo search, Wikipedia search, and public press releases accessed through Kay.ai.
After configuring and deploying RedisGuru, we can test how the agent behaves, for example by asking it questions about Redis data structures. Because chat history is persisted, RedisGuru also remembers details from earlier in the conversation, such as the user's name.
The RedisGuru agent we built is just a sample application. You can also build applications such as email copy editors, intelligent research assistants, code reviewers, and more.
II. Role of Redis in OpenGPTs
Behind this OpenGPTs demo, Redis provides a powerful and high-performance data layer that is an integral part of the OpenGPTs technology stack.
Redis persists user chat sessions (threads), agent configurations, and embedded document chunks, and serves as the vector database for retrieval.
- User Chat Sessions: To maintain "state" during a conversation, Redis provides OpenGPTs with persistent chat threads between users and AI agents. These chat sessions are also passed to the LLM to provide context about the current state of the conversation.
- Agent Configuration: To support a multi-tenant agent architecture, Redis provides a remote, low-latency storage layer for OpenGPTs. When the application starts, it reads the specified agent configuration from Redis and begins processing requests.
- Vector database for RAG: To keep conversations grounded, OpenGPTs lets us upload sources of "knowledge" for the LLM to draw on when generating answers. Through a process called Retrieval-Augmented Generation (RAG), OpenGPTs stores the uploaded document chunks in Redis and performs real-time vector search to retrieve the context most relevant to the current question (see the sketch after this list).
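To make the first and third of these roles concrete, here is a minimal sketch using LangChain's Redis integrations. This is an illustration, not OpenGPTs' actual source code; it assumes a Redis instance reachable at redis://localhost:6379, an OPENAI_API_KEY in the environment, and the langchain-community, langchain-openai, and redis Python packages. The session ID and index name are hypothetical.

```python
import os

from langchain_community.chat_message_histories import RedisChatMessageHistory
from langchain_community.vectorstores.redis import Redis as RedisVectorStore
from langchain_openai import OpenAIEmbeddings

REDIS_URL = os.getenv("REDIS_URL", "redis://localhost:6379")

# 1) Persistent chat threads: messages are written to Redis under a session key,
# so a conversation survives restarts and can be replayed to the LLM as context.
history = RedisChatMessageHistory(session_id="thread-42", url=REDIS_URL)
history.add_user_message("Hi, my name is Sam.")
history.add_ai_message("Nice to meet you, Sam!")
print(history.messages)  # [HumanMessage(...), AIMessage(...)]

# 2) RAG: uploaded document chunks are embedded and indexed in Redis, then
# retrieved by vector similarity when the user asks a question.
vectorstore = RedisVectorStore.from_texts(
    texts=["Redis supports strings, hashes, lists, sets, and sorted sets."],
    embedding=OpenAIEmbeddings(),
    redis_url=REDIS_URL,
    index_name="redisguru-docs",
)
docs = vectorstore.similarity_search("What data structures does Redis have?", k=1)
print(docs[0].page_content)
```

In an agent like RedisGuru, the retrieved chunks and the stored chat history are both injected into the LLM prompt, which is exactly the division of labor described above.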
These features are part of the Redis platform and are brought into the application through the LangChain + Redis integration.
Redis' integration with LangChain's OpenGPTs brings increased flexibility, scalability, and real-time processing and search capabilities. The ability to handle diverse data structures is what makes Redis the preferred solution for OpenGPTs memory capabilities.
III. Using OpenGPTs locally
For a hands-on experience with OpenGPTs, you can find a more detailed guide in the project's [README](https://github.com/langchain-ai/opengpts/blob/main/README.md). Here is a quick overview:
1. Install backend dependencies: Clone the project repository, then navigate to the backend directory and install the necessary Python dependencies.
git clone https://github.com/langchain-ai/opengpts.git
cd opengpts/backend
pip install -r requirements.txt
2. Connect Redis and OpenAI: OpenGPTs uses Redis as the memory layer for LLMs and OpenAI for LLM generation and embeddings. Set the environment variables REDIS_URL and OPENAI_API_KEY to connect to your Redis instance and OpenAI account.
export OPENAI_API_KEY=your-openai-api-key
export REDIS_URL=redis://your-redis-url
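Before starting the server, it can be worth confirming that both connections actually work. Here is an optional, minimal check (not part of the official README), assuming the redis and openai Python packages are installed:

```python
import os

import redis
from openai import OpenAI

# Verify the Redis connection: PING should print True.
r = redis.Redis.from_url(os.environ["REDIS_URL"])
print(r.ping())

# Verify the OpenAI key by listing the available models.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
print(client.models.list().data[0].id)
```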
3. Start the backend server: Run the LangChain server on your local machine to serve the application.
langchain serve --port=8100
4. Start the frontend: In the frontend directory, use yarn to install dependencies and start the development server.
cd frontend
yarn
yarn dev
5. Open http://localhost:5173/ in your browser to interact with your local OpenGPTs deployment.
IV. Using OpenGPTs in the Cloud
If you wish to use OpenGPTs without a local setup, you can try deploying on Google Cloud. Alternatively, you can access a hosted preview deployment powered by LangChain, LangServe, and Redis. This deployment demonstrates the customizability and ease of use of OpenGPTs.
V. Redis and LangChain Enabling Innovation
Redis Enterprise is an enterprise-grade, low-latency vector database that is uniquely suited to supporting generative AI projects. It provides not only vector search but also versatile data structures that effectively meet the application-state requirements of LLM apps. This combination of scalability and performance has made Redis an important tool in the generative AI space.