Discover the powerful combination of LangChain and LangGraph for building stateful AI applications and unlock the benefits of using a managed-database service like NetApp® Instaclustr® backed by Azure NetApp Files for seamless data persistence and scalability.
Table of Contents
- Abstract
- Introduction
- Prerequisites
- Workstation setup
- Instaclustr PostgreSQL deployment
- Database configuration
- Azure OpenAI deployment
- Using the chatbot
- Summary
- Additional Information
Abstract
This article explores LangChain and LangGraph for developing AI applications and highlights the benefits of using a managed-database service like NetApp® Instaclustr® backed by Azure NetApp Files. LangChain and LangGraph provide a robust framework for building stateful, multi-actor applications capable of handling complex conversational flows. By integrating with Instaclustr, developers can achieve seamless data persistence, scalability, and reliable performance for their AI applications. This article covers the advantages of LangChain and LangGraph, the benefits of using a managed-database service, and the integration process. It also delves into the setup and configuration of the database, showcasing the advantages of persistent storage for Azure OpenAI chat models. By the end, readers will have a comprehensive understanding of how these technologies enable the development of robust and scalable AI applications in production environments.
Co-authors:
- Michael Haigh, Senior Technical Marketing Engineer
- Kyle Radder, Azure NetApp Files Technical Marketing Engineer
Introduction
LangChain and LangGraph offer numerous advantages for AI application development. LangChain provides a framework for developing applications powered by large language models (LLMs), while LangGraph enhances LangChain by enabling the creation of cyclical graphs, which are particularly useful for agent runtimes. These frameworks offer the flexibility to integrate with various chat models and provide the necessary tools and abstractions for managing state, flow, and persistence.
When running AI applications in production, using a managed-database service like Instaclustr backed by Azure NetApp Files brings several benefits. It offloads the burden of managing and maintaining the database infrastructure, allowing developers to focus on their core applications and data management. Instaclustr provides provisioning, monitoring, scaling, and expert support for open-source databases like PostgreSQL, delivering high availability and reliability.
Azure NetApp Files, an Azure native, first-party high-performance file storage service, enhances the performance and scalability of the Instaclustr PostgreSQL instance. It offers low-latency access to data, enabling efficient and responsive interactions with the database. By leveraging a managed-database service like Instaclustr backed by Azure NetApp Files, developers can achieve seamless data persistence, data integrity, and scalability for their AI/ML applications. This combination means that applications can handle large volumes of data, provide reliable responses, and scale effortlessly to meet growing demands.
This article delves into the process of adding persistence to a LangChain and LangGraph chatbot by using an Instaclustr PostgreSQL instance backed by Azure NetApp Files. It explores the setup and configuration of the database, the integration of LangGraph with the managed-database service, and the benefits of using persistent storage for Azure OpenAI chat models. By the end of the article, readers will have a clear understanding of how LangChain, LangGraph, and managed-database services like Instaclustr and Azure NetApp Files can empower them to build robust and scalable AI applications.
Prerequisites
To follow along step by step, ensure the following items are available:
- An Azure account with the ability to create Azure OpenAI and Azure NetApp Files resources
- An Instaclustr account with the ability to create databases
- A local workstation with psql, Python (3.8 or later), and Git installed
Workstation setup
This GitHub repository contains the basic code and setup scripts needed to run our persistent chatbot. Run the following command to clone the repository to the local workstation (or alternatively fork the repo if making changes is expected) and change into the new directory:
git clone https://github.com/MichaelHaigh/langgraph-instaclustr-anf.git
cd langgraph-instaclustr-anf
Configure the Python virtual environment and install the necessary packages for the chatbot:
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
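Optionally, confirm that the key packages installed cleanly before moving on. The following is a minimal sketch; the package names are assumptions based on the imports used later in this article, not a listing of requirements.txt:

# Hedged sanity check: report the installed versions of the core packages
from importlib.metadata import version

for pkg in ("langchain-openai", "langgraph", "langgraph-checkpoint-postgres"):
    print(pkg, version(pkg))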
With the workstation configured, begin deploying the infrastructure.
Instaclustr PostgreSQL deployment
Instaclustr enables businesses to focus on their core applications and data management by handling the provisioning, monitoring, scaling, and support for open-source database technologies. Instaclustr provides a unified management console, backup and recovery features, security measures, and 24/7 expert support.
Log in to the Instaclustr console and click Create Cluster to start deploying the PostgreSQL database, which will back the persistent chatbot.
(!) Note: Throughout this deployment, make selections appropriate for demonstration purposes, not production. If running in production, NetApp recommends the production SLA tier, synchronous replication, and a second data center.
On the first page of the deployment wizard, give the database cluster a descriptive name and select the desired SLA tier (this example uses langgraph-postgres and non-production, respectively). Next, select the PostgreSQL tile from the application selection and then scroll down.
In Provider Selection, select Microsoft Azure. Under Enterprise Features, leave Private Network Cluster unselected unless the workstation can access the Azure virtual network through private IPs. Click Next.
On the PostgreSQL Setup page of the deployment wizard, choose the PostgreSQL Version (this example uses 16.4), select the checkbox to add the current IP to the cluster firewall (unless the Private Network Cluster option was selected on the previous page), leave the Client to Cluster Encryption checkbox unselected, and choose the replication mode (this example uses asynchronous).
Scrolling down, optionally enable the available extensions and add-ons (left unselected in this example), and click Next.
On the Data Centre Options page of the deployment wizard, select the Provider Account (this example uses its own Azure account, though running in the Instaclustr account also works) and Data Centre (East US in this example). Optionally modify the data center Custom Name and Cluster Network (this example leaves the default for both).
If selecting your own account from the Provider dropdown, choose the resource group that corresponds to the Data Centre (region) dropdown. If running in the Instaclustr account, this field is not present. Scroll down.
Click the Change Node Size button to modify the size and data disk type of the PostgreSQL instance.
Under the Production section, expand the Standard_E8s_v4 with Azure NetApp Files Premium NFSv3 section, and then select the PGS-PRD-Standard_E8s_v4-ANF-2048 instance (or optionally select a larger Azure NetApp Files-based instance, depending on the application requirements).
Optionally modify the Storage Subnet for the Azure NetApp Files resources (left as the default in this example) and click Next.
If running a production workload, NetApp recommends provisioning a secondary data center and running multiple PostgreSQL nodes for high availability. However, for this demonstration use case, leave the Provision a Secondary Data Centre checkbox unselected. Make the appropriate selections for your use case, scroll down, and click Next.
On the confirmation page, validate that all fields look as expected, read the Instaclustr Terms of Service and select the checkbox, and click Create Cluster.
You will be redirected to the details page of the langgraph-postgres cluster, where the provisioning status can be monitored. The exact provisioning time varies based on several factors, but it took about 10 minutes in this environment.
If deploying the PostgreSQL instance in your own Azure account, head over to the Azure Portal and select the resource group specified on the third page of the deployment wizard. A NetApp Account and the corresponding Azure NetApp Files capacity pool and volume for the instance should be visible.
Back in the Instaclustr console, wait for the database to reach a Running state. Once it does, the database and chatbot application are ready for configuration.
Database configuration
When Instaclustr deploys a new PostgreSQL instance, a superuser (icpostgresql) is created. However, NetApp does not recommend using this superuser for the chatbot application. Instead, use setup_db.sh to create a dedicated user and password.
View the contents of this script:
$ cat setup_db.sh
#!/bin/bash
psql "user=icpostgresql host=$POSTGRES_HOST port=5432 dbname=postgres target_session_attrs=read-write" <\set pass `echo $POSTGRES_PASS`
CREATE DATABASE langgraph;
CREATE USER langgraph WITH PASSWORD :'pass';
GRANT ALL PRIVILEGES ON DATABASE "langgraph" to langgraph;
\connect langgraph
GRANT ALL ON SCHEMA public TO langgraph;
SQL
This script uses psql to connect to the database (via an environment variable that’s about to be set) and executes a SQL script via standard input. This script creates a langgraph user and database, and then grants necessary database privileges to the user.
Back in the Instaclustr console, navigate to the Connection Info page of your PostgreSQL instance.
Copy the public IP address, then head to the terminal and export the IP as an environment variable:
export POSTGRES_HOST="copied-IP"
While still in the terminal, run the following command to create a random password for the new langgraph PostgreSQL user:
export POSTGRES_PASS=$(openssl rand -base64 14)
Scroll down and copy the icpostgresql password. (It’s also best practice to change this password via the PostgreSQL Users page.)
Back in the terminal, run the setup shell script with the following command:
sh setup_db.sh
At the password prompt, paste in the copied password from the clipboard. If everything has been done correctly, confirmation messages about a database, role, and two grants being created should appear:
$ sh setup_db.sh
Password for user icpostgresql:
CREATE DATABASE
CREATE ROLE
GRANT
You are now connected to database "langgraph" as user "icpostgresql".
GRANT
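Optionally, verify that the new user can connect before moving on. The following is a minimal sketch using the psycopg (v3) driver that the chatbot's PostgreSQL checkpointer also relies on; the expected output is an assumption based on the grants above:

import os
import psycopg

# Hedged sanity check: connect as the new langgraph user
with psycopg.connect(
    host=os.environ["POSTGRES_HOST"],
    port=5432,
    dbname="langgraph",
    user="langgraph",
    password=os.environ["POSTGRES_PASS"],
) as conn:
    # Should print ('langgraph', 'langgraph') if the setup script succeeded
    print(conn.execute("SELECT current_user, current_database()").fetchone())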
Now that the database and user have been created, deploy the Azure OpenAI environment.
Azure OpenAI deployment
LangChain integrates with a wide array of chat models to provide flexibility and choice to developers, leverage existing ecosystems and expertise, and keep up with rapid model advancements. This example uses AzureChatOpenAI, but feel free to choose another model; simply update the LLM section of the chatbot, as detailed in the next section.
Navigate to the Azure Portal, enter OpenAI in the search bar, and select the Azure OpenAI service. Click the Create button to create an OpenAI instance.
On the Basics page of the wizard, specify the resource group to deploy the OpenAI instance into, the region (this example uses the same value as the PostgreSQL instance, East US), the name (which must be unique across all deployments within Azure), and the pricing tier (Standard S0 in this case). Click Next.
On the Network page of the wizard, choose how the OpenAI instance can be accessed. Allowing all resources (0.0.0.0/0) is fine for demonstration purposes, but for production use cases NetApp strongly suggests choosing one of the other options.
On the Tags page of the wizard, optionally add any necessary tags and then click Next. On the final page, review the fields and then click Create.
You're redirected to a deployment screen, and after about a minute the instance should be deployed. Click Go To Resource, and then on the next page click Go to Azure OpenAI Studio.
The Azure OpenAI Studio opens in a new tab. Use this interface to deploy the large language model. But first, copy the API key displayed on the home page.
Head back into the terminal and set an environment variable named AZURE_OPENAI_API_KEY to the copied value:
export AZURE_OPENAI_API_KEY="copied-api-key"
Back in the OpenAI Studio, click Deployments under Shared Resources in the left column, then Deploy Model, and finally Deploy Base Model.
Choose any of the models with an inference task of Chat Completion. This example uses gpt-35-turbo, because it's relatively lightweight, cost-effective, and capable. Click Confirm.
Optionally modify the Deployment Name or Type (leaving the defaults) and click Deploy.
The model takes only a few seconds to deploy, at which point its deployment information will be visible. Feel free to investigate. When ready to move on, click Open in Playground.
In the playground, click View Code, and then in the pop-up, click Key Authentication. There are several values contained in this code block that need to be set as environment variables. The values in lines 5, 6, and 13 (as shown in the following image) need to be set as ENDPOINT_URL, DEPLOYMENT_NAME, and AZURE_OPENAI_API_VERSION, respectively.
Depending on the selections that were made during your deployment, the following commands need to be updated to match your values:
export ENDPOINT_URL="https://langgraph-openai.openai.azure.com/"
export DEPLOYMENT_NAME="gpt-35-turbo"
export AZURE_OPENAI_API_VERSION="2024-05-01-preview"
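With the key and these three variables exported, an optional smoke test can confirm that the deployment is reachable. This is a minimal sketch using the same AzureChatOpenAI class the chatbot uses; the class picks up AZURE_OPENAI_API_KEY from the environment automatically:

import os

from langchain_openai import AzureChatOpenAI

# Hedged smoke test: one round trip to the deployed chat model
llm = AzureChatOpenAI(
    azure_endpoint=os.getenv("ENDPOINT_URL"),
    azure_deployment=os.getenv("DEPLOYMENT_NAME"),
    openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
)
print(llm.invoke("Reply with a one-sentence greeting.").content)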
We’re now ready to start using the chatbot!
Using the chatbot
All the necessary infrastructure to run stateful, multi-actor applications that support multiple conversational turns with LangChain and LangGraph has now been deployed. However, before diving in, it’s important to explore some more rudimentary LangChain chatbots to learn how they’re constructed. If you’re already familiar with LangChain and LangGraph and just want to get to the PostgreSQL section, feel free to skip ahead.
Basic question and answer chatbot
In the Git repository, there's a chatbot-basic.py file that defines a basic Q and A chatbot. Feel free to inspect the file, but most of the code will be covered here.
The very beginning contains the imports, which should be self-explanatory. The following section of code defines the large language model (as mentioned earlier it’s easy to use a different large language model), pulling in the environment variables that were defined in the previous section:
LLM = AzureChatOpenAI(
    azure_endpoint=os.getenv("ENDPOINT_URL"),
    azure_deployment=os.getenv("DEPLOYMENT_NAME"),
    openai_api_version=os.getenv("AZURE_OPENAI_API_VERSION"),
)
At the core of LangGraph is the StateGraph, which is a class that represents the graph structure in its framework. The StateGraph is responsible for managing the state and the flow of information through the graph.
When creating the StateGraph, it's initialized by passing a State definition. The State definition represents a central State object that is updated over time as the graph executes. This State object typically contains attributes that store relevant information for the application (in this case a list of messages that are added to over time):
class State(TypedDict):
    messages: Annotated[list, add_messages]
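To make the reducer's role concrete, here's a hedged illustration of calling add_messages outside a graph; it appends the update to the existing list (matching by message ID on updates) rather than overwriting it:

from langgraph.graph.message import add_messages

# A sketch of the reducer's behavior: the update is appended, not substituted
existing = [("user", "Hi there")]
update = [("assistant", "Hello! How can I help?")]
merged = add_messages(existing, update)
print(len(merged))  # 2: both messages, normalized to LangChain message objects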
Nodes in a StateGraph refer to the individual components within the graph that perform specific operations and contribute to the overall functionality of the application. They accept the State object as input and output a dictionary with keys representing the State attributes to update (only messages in this simple example). The chatbot function defines a node that invokes the LLM defined earlier:
def chatbot(state: State) -> dict:
    return {"messages": [LLM.invoke(state["messages"])]}
The previous two sections of code defined the State object and the single node of the StateGraph. This section puts everything together by defining graph_builder, which is an instantiated StateGraph object. It then adds the chatbot node and two edges, which instruct the graph where to start and end its work when it is executed. Finally, this function returns the compiled StateGraph object:
def build_graph() -> CompiledStateGraph:
    graph_builder = StateGraph(State)
    graph_builder.add_node("chatbot", chatbot)
    graph_builder.add_edge(START, "chatbot")
    graph_builder.add_edge("chatbot", END)
    return graph_builder.compile()
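As a quick structural check, the compiled graph can describe its own topology. This is a hedged aside that assumes the Mermaid renderer exposed through get_graph():

# Hedged aside: render the graph's topology as a Mermaid diagram
graph = build_graph()
print(graph.get_graph().draw_mermaid())  # shows START -> chatbot -> END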
The get_ai_response function takes the compiled StateGraph object and the user’s input as parameters, and invokes the graph (and therefore the underlying LLM) through streaming. (Streaming makes AI applications feel more responsive to end users by returning intermediate progress.) The function then iterates through the response and prints out the content of the latest response from the large language model:
def get_ai_response(graph: CompiledStateGraph, user_input: str) -> None:
    for event in graph.stream({"messages": [("user", user_input)]}, stream_mode="values"):
        for value in event.values():
            if isinstance(value[-1], AIMessage):
                print("Assistant:", value[-1].content)
The final section of the chatbot is the main function. It builds and compiles the graph and then loops until the user specifies an exit word, while calling the get_ai_response function for all other entries:
if __name__ == "__main__":
    graph = build_graph()
    print("=== CHATBOT ===")

    while True:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        get_ai_response(graph, user_input)
Now that the basic chatbot code has been covered, try it out. Feel free to play around in your environment as well.
$ python chatbot-basic.py
=== CHATBOT ===
User: What is the capital of North Carolina?
Assistant: The capital of North Carolina is Raleigh.
User: Who won the Super Bowl for the 2010-2011 NFL season?
Assistant: The Green Bay Packers won the Super Bowl for the 2010-2011 NFL season.
User: Was it a good game?
Assistant: As an AI language model, I am not capable of playing games or having personal opinions. Could you please specify which game you are referring to?
User: q
Goodbye!
The chatbot successfully queries the Azure OpenAI deployment and returns the correct answer to basic questions. However, as shown with the final question, the chatbot doesn’t understand that “it” refers to the Super Bowl played in 2011. Explore how to expand this chatbot from only answering questions to supporting multiple conversational turns.
In-memory persistent chatbot
Take a closer look at chatbot-memory.py, which adds in-memory persistence to our chatbot. Rather than displaying the entire contents of the file, run a diff of the two chatbot files, since only a handful of lines are different. Run the following command in your terminal for color-coded output:
diff -u chatbot-basic.py chatbot-memory.py
The first section shows the import of a new module called MemorySaver, which provides in-memory persistence for our graph:
--- chatbot-basic.py	2024-10-20 21:08:47
+++ chatbot-memory.py	2024-10-20 20:54:56
@@ -8,6 +8,7 @@
 from langgraph.graph import StateGraph, START, END
 from langgraph.graph.message import add_messages
 from langgraph.graph.state import CompiledStateGraph
+from langgraph.checkpoint.memory import MemorySaver
The second section shows the addition of a MemorySaver object as an argument to the build_graph function, and then providing that object when compiling the graph:
-def build_graph() -> CompiledStateGraph:
+def build_graph(checkpointer: MemorySaver) -> CompiledStateGraph:
     graph_builder = StateGraph(State)
     graph_builder.add_node("chatbot", chatbot)
     graph_builder.add_edge(START, "chatbot")
     graph_builder.add_edge("chatbot", END)
-    return graph_builder.compile()
+    return graph_builder.compile(checkpointer=checkpointer)
This section adds another parameter to the get_ai_response function, a config dictionary, which is also specified when invoking the LLM through streaming. As seen in the next section, this config dictionary provides a unique ID to support multiple concurrent users:
-def get_ai_response(graph: CompiledStateGraph, user_input: str) -> None:
-    for event in graph.stream({"messages": [("user", user_input)]}, stream_mode="values"):
+def get_ai_response(graph: CompiledStateGraph, config: dict, user_input: str) -> None:
+    for event in graph.stream({"messages": [("user", user_input)]}, config, stream_mode="values"):
         for value in event.values():
             if isinstance(value[-1], AIMessage):
                 print("Assistant:", value[-1].content)
The final section should be self-explanatory: an instantiated MemorySaver object is passed to the build_graph function, the config dictionary with its thread ID is created, and this dictionary is passed to the get_ai_response function:
 if __name__ == "__main__":
-    graph = build_graph()
+    graph = build_graph(MemorySaver())
+    config = {"configurable": {"thread_id": "1"}}
     print("=== CHATBOT ===")

     while True:
@@ -49,4 +51,4 @@
         if user_input.lower() in ["quit", "exit", "q"]:
             print("Goodbye!")
             break
-        get_ai_response(graph, user_input)
+        get_ai_response(graph, config, user_input)
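A hedged aside before trying it out: with a checkpointer attached, the compiled graph can also report the saved conversation for a given thread through LangGraph's get_state API, which is handy for debugging persistence:

# Hedged sketch: inspect the conversation checkpointed for thread "1"
snapshot = graph.get_state({"configurable": {"thread_id": "1"}})
for message in snapshot.values.get("messages", []):
    print(type(message).__name__ + ":", message.content)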
Explore how adding persistence to the chatbot improves the overall experience. As before, feel free to play around in your own environment.
$ python chatbot-memory.py
=== CHATBOT ===
User: Hey there, my name is Michael.
Assistant: Hi Michael, it's nice to meet you. How can I assist you today?
User: Who won the Super Bowl for the 2011-2012 NFL season?
Assistant: The New York Giants won the Super Bowl for the 2011-2012 NFL season. They defeated the New England Patriots in Super Bowl XLVI with a score of 21-17.
User: Was it a good game?
Assistant: Yes, it was a very exciting game. The New York Giants were able to come back from a 9-0 deficit in the first half and went on to win the game with a late touchdown in the fourth quarter. The game was well-played by both teams and had some memorable moments, such as Mario Manningham's incredible catch along the sideline in the fourth quarter. Overall, it was a great Super Bowl.
User: What's my name?
Assistant: Your name is Michael.
User: q
Goodbye!
The chatbot now supports multiple conversational turns. It recognizes that “it” refers to the Super Bowl it was just asked about and remembers the name of the user throughout the duration of the chat.
This is quite an improvement, and for many AI chatbots, it might be good enough. However, enter another chat session to see its main drawback:
python chatbot-memory.py
=== CHATBOT ===
User: What's my name?
Assistant: I'm sorry, as an AI language model, I do not have access to personal information such as your name.
User: q
Goodbye!
As mentioned earlier, this chatbot uses in-memory persistence, so there’s no ability to “remember” conversations from previous chat sessions. Finally, make use of the Instaclustr PostgreSQL instance and provide persistence across sessions with the final chatbot.
PostgreSQL persistent chatbot
Investigate chatbot-postgres.py, which integrates the chatbot with the Instaclustr PostgreSQL database instance. Feel free to view the entire file, but again use a diff between it and the in-memory chatbot with the following command:
diff -u chatbot-memory.py chatbot-postgres.py
In the first section, import the PostgresSaver instead of the in-memory checkpoint module to tie into the PostgreSQL instance:
--- chatbot-memory.py	2024-10-20 20:54:56
+++ chatbot-postgres.py	2024-10-21 11:51:12
@@ -8,7 +8,7 @@
 from langgraph.graph import StateGraph, START, END
 from langgraph.graph.message import add_messages
 from langgraph.graph.state import CompiledStateGraph
-from langgraph.checkpoint.memory import MemorySaver
+from langgraph.checkpoint.postgres import PostgresSaver
In this section, pull in the environment variables that were set during the PostgreSQL deployment section. Then, specify the connection URI, which uses standard PostgreSQL syntax to connect to the langgraph database:
+POSTGRES_PASS = os.getenv("POSTGRES_PASS")
+POSTGRES_HOST = os.getenv("POSTGRES_HOST")
+DB_URI = f"postgresql://langgraph:{POSTGRES_PASS}@{POSTGRES_HOST}:5432/langgraph"
+
+
 class State(TypedDict):
     messages: Annotated[list, add_messages]
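One hedged caveat about this URI: the password generated earlier with openssl rand -base64 can contain characters such as + and / that are reserved in connection URIs. If the connection fails to parse, percent-encoding the password with the standard library is a reasonable fix:

# Hedged sketch: percent-encode the password before embedding it in the URI
from urllib.parse import quote_plus

DB_URI = f"postgresql://langgraph:{quote_plus(POSTGRES_PASS)}@{POSTGRES_HOST}:5432/langgraph"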
This section only changes the checkpointer type hint from MemorySaver to PostgresSaver; all other logic within the build_graph function is identical:
-def build_graph(checkpointer: MemorySaver) -> CompiledStateGraph:
+def build_graph(checkpointer: PostgresSaver) -> CompiledStateGraph:
     graph_builder = StateGraph(State)
     graph_builder.add_node("chatbot", chatbot)
     graph_builder.add_edge(START, "chatbot")
This section is entirely new. It adds a simple function that gathers a username that is later used as the unique identifier for the config dictionary. In a production chatbot, this function would be replaced by logic that gathers the user’s actual username after proper authentication and authorization.
+def get_username() -> str:
+    return input("Please enter your username to log in: ")
In the main function, the updated chatbot uses a connection string (based on the DB_URI mentioned earlier) to create a connection to the PostgreSQL instance, named checkpointer. It then calls LangGraph's setup function (which must be invoked before checkpointer is first used), builds the graph with checkpointer, and sets the username gathered by the get_username function as a value in the config dictionary.
(!) Note: There are several different ways to connect to the PostgreSQL instance, depending on your use case.
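For instance, a long-lived connection pool avoids reopening a connection for every session. The following is only a sketch, assuming the psycopg_pool package and PostgresSaver's documented support for pooled connections; it is not the approach this article's code uses:

# Hedged sketch: a pooled alternative to the context-managed connection below
from psycopg_pool import ConnectionPool

from langgraph.checkpoint.postgres import PostgresSaver

pool = ConnectionPool(conninfo=DB_URI, max_size=10)
checkpointer = PostgresSaver(pool)
checkpointer.setup()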
 if __name__ == "__main__":
-    graph = build_graph(MemorySaver())
-    config = {"configurable": {"thread_id": "1"}}
-    print("=== CHATBOT ===")
+    with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
+        checkpointer.setup()
+        graph = build_graph(checkpointer)
+        config = {"configurable": {"thread_id": get_username()}}
+        print("=== CHATBOT ===")
The final section doesn’t have any actual differences other than indentation:
-    while True:
-        user_input = input("User: ")
-        if user_input.lower() in ["quit", "exit", "q"]:
-            print("Goodbye!")
-            break
-        get_ai_response(graph, config, user_input)
+        while True:
+            user_input = input("User: ")
+            if user_input.lower() in ["quit", "exit", "q"]:
+                print("Goodbye!")
+                break
+            get_ai_response(graph, config, user_input)
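Because every conversational turn is checkpointed to PostgreSQL, the stored threads can also be inspected directly. The following is a hedged sketch; the checkpoints table name is an assumption based on the schema that checkpointer.setup() creates:

import os
import psycopg

# Hedged sketch: list the chatbot threads persisted in PostgreSQL
with psycopg.connect(
    host=os.environ["POSTGRES_HOST"],
    dbname="langgraph",
    user="langgraph",
    password=os.environ["POSTGRES_PASS"],
) as conn:
    for (thread_id,) in conn.execute("SELECT DISTINCT thread_id FROM checkpoints"):
        print(thread_id)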
Let’s see how the final, PostgreSQL-integrated chatbot performs:
$ python chatbot-postgres.py
Please enter your username to log in: michael
=== CHATBOT ===
User: Hey there, I'm Michael.
Assistant: Hello Michael, I'm an AI language model. How can I assist you today?
User: Who won the Super Bowl for the 2012-2013 NFL season?
Assistant: The Baltimore Ravens won the Super Bowl for the 2012-2013 NFL season. They defeated the San Francisco 49ers with a score of 34-31.
User: Was it a good game?
Assistant: Yes, the 2012-2013 Super Bowl was considered to be a very exciting and competitive game. The game was close throughout, with both teams trading the lead multiple times. In addition, there was a 34-minute power outage in the third quarter, which added to the drama and intrigue of the game. Ultimately, the Ravens were able to hold off a late comeback attempt by the 49ers to secure the victory.
User: q
Goodbye!
The chatbot first asks for our username, which is used as the unique identifier. It then moves on to the normal chat flow seen in the other examples, showing that it still supports multiple conversational turns.
Start a new session to see the key difference between PostgreSQL and in-memory persistence:
$ python chatbot-postgres.py
Please enter your username to log in: michael
=== CHATBOT ===
User: What's my name?
Assistant: Your name is Michael.
User: q
Goodbye!
The PostgreSQL checkpointer now persists state across chatbot sessions! Specifying a different username still results in a fresh session:
$ python chatbot-postgres.py
Please enter your username to log in: bill
=== CHATBOT ===
User: What's my name?
Assistant: As an AI language model, I am not capable of knowing your name unless you tell me.
User: Sorry about that, my name is Bill.
Assistant: Nice to meet you, Bill! How can I assist you today?
User: q
Goodbye!
Specify bill again as the username, and see that the chatbot remembers:
$ python chatbot-postgres.py
Please enter your username to log in: bill
=== CHATBOT ===
User: What's my name?
Assistant: Your name is Bill, as you mentioned earlier. How can I help you today, Bill?
User: q
Goodbye!
Finally, switching back to the original username at any point still works:
$ python chatbot-postgres.py
Please enter your username to log in: michael
=== CHATBOT ===
User: What's my name?
Assistant: Your name is Michael.
User: q
Goodbye!
Summary
The combination of LangChain and LangGraph gives developers a powerful framework for building stateful and multi-actor AI applications. By integrating with a managed-database service like Instaclustr backed by Azure NetApp Files, developers can achieve seamless data persistence, scalability, and reliable performance for their AI applications.
This article has walked through deploying an Instaclustr PostgreSQL instance backed by Azure NetApp Files, deploying an Azure OpenAI instance and associated chat model, and the process of adding persistence to LangGraph. By leveraging these technologies, developers can unlock the full potential of their AI applications and deliver robust, scalable, and efficient solutions.