Human in the Loop: Integrating AI Agents with Human Oversight
A hands-on guide to implementing an autonomous AI agent with a human in the loop
Intro to AI Agents:
👉 Article 1: Simple ReAct Agent from Scratch
👉 Article 2: ReAct Agent in LangGraph
👉 Article 3: Persistence (memory) and streaming in LangGraph
👉 (This) Article 4: Human in the loop
[Inspired by] the DeepLearning.AI course
A human in the loop is often essential to ensure that an AI agent is progressing as expected. In this article, the goal is to interrupt the agent before every tool call, so a human can inspect and approve each action before it runs.
Setting Up Interruptions in LangGraph
LangGraph offers several ways to interrupt execution. The simplest is to specify the interrupt points directly in the agent declaration, when the graph is compiled.
Example: Interrupt Before Action Calls
If we want to interrupt before taking any action call, we can set up the interrupt like this:
self.graph = graph.compile(
    checkpointer=checkpointer,
    interrupt_before=["action"]
)
When the graph is about to enter the action node, execution is interrupted and the agent waits for a signal to proceed.
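For context, here is a minimal, self-contained sketch of how such a graph might be wired up. The node names llm and action, the stub node functions, and the MemorySaver checkpointer are illustrative assumptions (in this series the compiled graph lives on an agent object, abot.graph); the only point being shown is interrupt_before=["action"]:

import operator
from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, AnyMessage, HumanMessage
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], operator.add]

def call_llm(state: AgentState):
    # Stand-in for a real model call that decides to use a tool
    return {"messages": [AIMessage(content="", tool_calls=[
        {"name": "search", "args": {"query": "SF weather"}, "id": "call_1"}])]}

def take_action(state: AgentState):
    # Stand-in for real tool execution
    return {"messages": []}

def exists_action(state: AgentState):
    # Route to "action" whenever the last message requests a tool call
    return len(state["messages"][-1].tool_calls) > 0

graph = StateGraph(AgentState)
graph.add_node("llm", call_llm)
graph.add_node("action", take_action)
graph.add_conditional_edges("llm", exists_action, {True: "action", False: END})
graph.add_edge("action", "llm")
graph.set_entry_point("llm")

# The interrupt is declared at compile time: pause before every "action" node
app = graph.compile(checkpointer=MemorySaver(), interrupt_before=["action"])

thread = {"configurable": {"thread_id": "1"}}
for event in app.stream({"messages": [HumanMessage(content="What is the weather in SF?")]}, thread):
    print(event)  # streaming stops once the graph reaches the interrupt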
abot.graph.get_state(thread)       # Gets the current state of the AI agent for this thread
abot.graph.get_state(thread).next  # Returns the next node the agent is about to execute
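With the interrupt configured as above, .next typically reports the pending node, for example ('action',), confirming that the agent is paused just before its tool call; once the run finishes, it is empty.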
Continuing After Interruption
To continue after an interrupt, we stream again with None as the input; this tells LangGraph to resume from the saved checkpoint rather than start a new run:
for event in abot.graph.stream(None, thread):
    for v in event.values():
        print(v)
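In practice, this can be wrapped in a small approval loop. Here is a sketch, assuming the abot agent and thread objects from the snippets above:

# A simple human-approval loop: keep resuming while a node is pending
while abot.graph.get_state(thread).next:
    print("About to run:", abot.graph.get_state(thread).next)
    if input("Proceed? (y/n): ").strip().lower() != "y":
        print("Stopped by the human.")
        break
    # Passing None as the input resumes from the saved checkpoint
    for event in abot.graph.stream(None, thread):
        for v in event.values():
            print(v)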
Agent State Setup
While most of the agent state setup remains the same, human-in-the-loop agents need to be able to replace existing messages in the state, rather than only appending them with operator.add. This requires declaring a custom reduce_message function and using it as the reducer for the messages field.
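Here is a minimal sketch of such a reducer, modeled on the DeepLearning.AI course material this series follows. The exact merge-by-id logic is an assumption; LangGraph only requires a function that combines the existing and incoming message lists:

from uuid import uuid4
from typing import Annotated, TypedDict

from langchain_core.messages import AnyMessage

def reduce_message(left: list[AnyMessage], right: list[AnyMessage]) -> list[AnyMessage]:
    # Give incoming messages an id so they can be matched later
    for message in right:
        if not message.id:
            message.id = str(uuid4())
    merged = left.copy()
    for message in right:
        for i, existing in enumerate(merged):
            if existing.id == message.id:
                # Same id: replace the existing message in place
                merged[i] = message
                break
        else:
            # New id: append as before
            merged.append(message)
    return merged

class AgentState(TypedDict):
    messages: Annotated[list[AnyMessage], reduce_message]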
Advanced Techniques with Human in the Loop
LangGraph allows programmatic state modification, enabling precise control over the agent's progression. Under the hood, LangGraph keeps a state history: every update creates a new checkpoint from a copy of the current state rather than mutating it. This makes it possible to debug, to resume execution from any historical state, and even to add new messages or mock a tool's results.
When the agent continues with stream or invoke, it uses the last modified state, resuming execution seamlessly.
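A sketch of state inspection and modification, again assuming the abot agent and thread from above, and a tool whose arguments include a query field (an illustrative assumption; the real argument names depend on the tool's schema):

# Inspect the checkpoint history for this thread (most recent first)
for state in abot.graph.get_state_history(thread):
    print(state.next, len(state.values["messages"]))

# Edit the pending tool call before approving it
current = abot.graph.get_state(thread)
last_message = current.values["messages"][-1]
last_message.tool_calls[0]["args"]["query"] = "current weather in SF"
abot.graph.update_state(thread, {"messages": [last_message]})

# Because reduce_message replaces messages that share an id, this edit
# overwrites the original message instead of appending a duplicate.
# To replay from an older checkpoint instead, pass that snapshot's config:
# for event in abot.graph.stream(None, some_past_state.config): ...
for event in abot.graph.stream(None, thread):  # resumes from the modified state
    for v in event.values():
        print(v)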
Conclusion
Incorporating human oversight into AI systems is not just a safety measure; it is a strategic advantage that lets AI adapt to real-world complexity with greater reliability. With humans in the loop, we ensure that AI actions align with human values and adjust dynamically to unforeseen challenges. This collaborative approach builds trust in AI technologies and amplifies human decision-making and innovation. As we continue to explore AI capabilities, maintaining this balance will be crucial to widespread, ethical, and impactful AI adoption.
Recap
This article is the fourth installment in our series on building AI agents with LangGraph:
Article 1: Explored building a reasoning and acting pattern from scratch.
Article 2: Delved into re-implementing the ReAct pattern using LangGraph terminology.
Article 3: Introduced concepts of persistence (memory) and streaming within AI agents.
Article 4: Focused on integrating human oversight with AI agents for improved control and reliability.