Knowledge Base Article

The fantastic duo: How to build your modern APIs

Integrating FastAPI and MCP with Streamlit for seamless dual access

 

Intro

Picture this: You're developing AI-powered applications and need to test different conversation contexts rapidly. Maybe you want to see how your AI behaves as a technical support agent versus a creative writing assistant. Traditionally, this meant:

  • Spinning up separate applications for each scenario
  • Manually managing different system prompts
  • Juggling multiple terminal windows
  • Losing context when switching between use cases

We found ourselves constantly context-switching between different AI personas, each requiring its own setup, configuration, and mental overhead. There had to be a better way. The Chat Playground System turned out to be our solution—a web-based tool that organizes and manages multiple scenario-specific chat sessions, each with its own unique personality, context, and settings. It's like a "chat session manager" where every conversation operates independently but can be overseen from one central dashboard.

Key Capabilities We Achieved:

  • Scenario-Aware Sessions: One-click launch for pre-configured chat contexts (Technical Support, Creative Writing, Code Review, Document Analysis)
  • Session Management: Web interface to create, monitor, and terminate chat sessions
  • Dual Access Patterns: Both REST API for web applications AND MCP protocol for AI tool integration
  • Protocol Bridge: Smart wrapper that makes Streamlit sessions accessible via Model Context Protocol
  • Automatic Resource Management: Smart port allocation, process monitoring, and cleanup
  • Context Passing: Seamless data flow between orchestrator and individual chat instances
  • Zero-Friction Launch: From idea to running chat session in under 30 seconds

The Architecture That Emerged

After several iterations, we settled on a dual-access architecture that serves both traditional web applications and modern AI tooling protocols.

The Dual Access Innovation

What makes this architecture unique is the duality of access patterns we support:

  • FastAPI for Traditional Web Apps: RESTful endpoints and a web UI, perfect for browser-based interactions and traditional application integration.
  • MCP (Model Context Protocol) for AI Tooling: Direct integration with AI development tools like Claude Desktop, IDEs, and other MCP-compatible systems.
  • Smart MCP-to-Streamlit Bridge: The technical innovation that wraps the MCP protocol around Streamlit sessions, allowing AI tools to interact with chat sessions as if they were native MCP resources.

Why Both?

  • REST API: Familiar, web-friendly, great for dashboards and traditional applications
  • MCP Protocol: Native AI tool integration, enables seamless workflow within AI development environments
  • Streamlit Wrapping: Allows existing Streamlit apps to participate in MCP ecosystems without rewriting

This dual-access pattern means developers can choose their integration style: build traditional web interfaces using REST, or integrate directly into AI toolchains using MCP.
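As a rough sketch of this pattern (all names here are illustrative, not the project's actual API), both access layers can stay thin by delegating to one shared session manager; in the real system the first adapter would sit behind a FastAPI route and the second would be registered as an MCP tool:

```python
# Dual-access sketch: one session manager, two thin protocol adapters.
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SessionManager:
    """Single source of truth that both access layers delegate to."""
    sessions: Dict[str, dict] = field(default_factory=dict)

    def launch(self, scenario: str) -> dict:
        session_id = f"{scenario}-{len(self.sessions) + 1}"
        info = {"id": session_id, "scenario": scenario, "status": "running"}
        self.sessions[session_id] = info
        return info


manager = SessionManager()


def rest_create_session(scenario: str) -> dict:
    # In the real system this body sits behind e.g. @app.post("/sessions").
    return {"data": manager.launch(scenario)}


def mcp_tool_launch_session(scenario: str) -> str:
    # In the real system this is exposed as an MCP tool; it reuses the
    # exact same manager, so both protocols see the same sessions.
    info = manager.launch(scenario)
    return f"Launched session {info['id']} ({info['scenario']})"
```

Because both adapters share one manager, a session started over REST is immediately visible to MCP clients and vice versa.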

The Challenges We Faced

Challenge #1: Bridging Two Worlds - MCP and Streamlit

The Problem: How do you make Streamlit applications (designed for web interaction) work seamlessly with the Model Context Protocol (designed for AI tool integration)?

Our Innovation: We developed a smart MCP-to-Streamlit wrapper that:

  • Translates MCP protocol calls into Streamlit session operations
  • Maintains session state across protocol boundaries
  • Enables AI tools to treat Streamlit chats as native MCP resources
  • Preserves the real-time interactivity that makes Streamlit powerful

This bridge was non-trivial because MCP expects stateless, resource-oriented interactions, while Streamlit is inherently stateful and session-based.

Key Learning: Protocol translation layers are where the real innovation happens. The wrapper became our secret sauce for unified access patterns.
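The stateless-to-stateful translation described above can be sketched as follows. This is a hypothetical minimal version, not the actual wrapper: each MCP-style call carries a resource URI, and the bridge maps that URI onto a long-lived, stateful session object.

```python
# Hypothetical bridge sketch: MCP calls are stateless and address a resource
# by URI (e.g. chat://s1); the bridge keeps the stateful session behind it.
from typing import Dict, List


class StreamlitSession:
    """Stands in for a live, stateful Streamlit chat session."""

    def __init__(self) -> None:
        self.history: List[str] = []

    def send(self, message: str) -> str:
        self.history.append(message)
        return f"echo: {message}"  # placeholder for the real chat reply


class McpBridge:
    """Maps resource-style URIs onto stateful sessions."""

    def __init__(self) -> None:
        self._sessions: Dict[str, StreamlitSession] = {}

    def call_tool(self, uri: str, message: str) -> str:
        # Every call carries the full address; continuity lives in the bridge,
        # which is what lets a stateless protocol drive a stateful app.
        session_id = uri.removeprefix("chat://")
        session = self._sessions.setdefault(session_id, StreamlitSession())
        return session.send(message)

    def read_resource(self, uri: str) -> List[str]:
        # Expose the session transcript as a readable resource.
        session_id = uri.removeprefix("chat://")
        return list(self._sessions.get(session_id, StreamlitSession()).history)
```

The key design choice is that session state never crosses the protocol boundary: the MCP side only ever sees URIs and return values.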

Challenge #2: Process Management at Scale

The Problem: How do you reliably start, monitor, and stop multiple Streamlit processes without creating zombie processes or port conflicts?

Our Solution: We built an async process manager that:

  • Tracks process IDs and health status
  • Implements automatic port discovery (starts at 8501, increments until finding available)
  • Provides graceful shutdown with cleanup routines
  • Monitors process health and removes dead sessions

Key Learning: Process lifecycle management is harder than it looks. We learned to always plan for failure scenarios and implement proper cleanup.
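The automatic port discovery bullet above can be sketched as a standalone helper: try to bind each candidate port starting at 8501 and return the first one that is free. This is a minimal version; the real manager also tracks process health.

```python
# Automatic port discovery: bind-test each port starting at 8501.
import socket


def find_free_port(start: int = 8501, max_tries: int = 50) -> int:
    """Return the first port >= start that we can bind to."""
    for port in range(start, start + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("127.0.0.1", port))
            except OSError:
                continue  # port in use, try the next one
            return port  # bind succeeded, so the port is free
    raise RuntimeError(f"no free port in range {start}-{start + max_tries - 1}")
```

Note there is an inherent race: the port could be taken between this check and the actual launch, so the launcher should still handle a bind failure.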

Challenge #3: Context Passing Between Processes

The Problem: How do you pass rich context (system prompts, user data, configuration) from the web interface to individual Streamlit processes?

Our Solution: A hybrid approach using:

  • Environment variables for session identification
  • Temporary JSON files for complex context data
  • Cleanup routines to prevent file accumulation
# Context passing pattern we developed (module-level imports shown for completeness)
import json
import logging
import os
import subprocess
import tempfile
from typing import Any, Dict, Optional

logger = logging.getLogger(__name__)

async def launch_streamlit_session(
    self,
    session_id: str,
    port: int,
    context: Dict[str, Any],
) -> tuple[bool, Optional[subprocess.Popen]]:
    """Launch a Streamlit session with the given context."""
    try:
        # Create session context file
        context_file = os.path.join(
            tempfile.gettempdir(), f"chat_session_{session_id}.json"
        )
        logger.info(f"Creating context file: {context_file}")
        with open(context_file, "w") as f:
            json.dump(context, f, indent=2)

        # Prepare environment
        env = os.environ.copy()
        env.update({
            "STREAMLIT_SESSION_CONTEXT": context_file,
            "STREAMLIT_SESSION_ID": session_id,
            "STREAMLIT_BROWSER_GATHER_USAGE_STATS": "false",
        })

        # Start the Streamlit process on the allocated port
        process = subprocess.Popen(
            ["streamlit", "run", self.app_path,  # app entry script (not shown in this excerpt)
             "--server.port", str(port),
             "--server.headless", "true"],
            env=env,
        )
        return True, process
    except Exception:
        logger.exception(f"Failed to launch session {session_id}")
        return False, None


Key Learning: Sometimes the simplest solutions (temp files) work better than complex IPC mechanisms.
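The receiving side of this pattern is just as simple. The following sketch assumes the environment variables set by the launcher above; the variable names mirror the launcher, but the function itself is illustrative. The Streamlit app reads its context file once at startup:

```python
# Receiving side of the temp-file context pattern: the launched Streamlit
# app reads the JSON file the orchestrator wrote for this session.
import json
import os
from typing import Any, Dict


def load_session_context() -> Dict[str, Any]:
    """Read the context JSON the orchestrator wrote for this session."""
    context_file = os.environ.get("STREAMLIT_SESSION_CONTEXT")
    if not context_file or not os.path.exists(context_file):
        return {}  # no context file: fall back to a default, context-free session
    with open(context_file) as f:
        return json.load(f)
```

Falling back to an empty dict means the same app still runs standalone, outside the orchestrator, which keeps local debugging easy.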

Challenge #4: User Experience vs. Technical Complexity

The Problem: How do you hide the complexity of multi-process management behind a simple, intuitive interface?

Our Solution:

  • One-click scenario buttons that hide all the technical setup
  • Real-time session status updates
  • Progressive disclosure (simple interface, advanced options available)
  • Immediate feedback on all operations

Key Learning: The best technical solutions are invisible to the end user. Success is measured by how natural the system feels, not by technical sophistication.

Lessons Learned

1. Protocol Bridging is Where Innovation Lives

Building the MCP-to-Streamlit wrapper taught us that the most valuable technical work often happens at the boundaries between systems. The wrapper became our differentiator—allowing AI tools to seamlessly interact with web-based chat sessions opened up entirely new use cases.

2. Dual Access Patterns Multiply Adoption

By supporting both REST and MCP, we didn't force users to choose sides. Web developers could use familiar REST endpoints while AI tooling enthusiasts could integrate via MCP. This dual approach significantly expanded our potential user base.

3. Start Simple, Then Scale

Our first version was a basic script that launched a single Streamlit app. We gradually added session management, web interface, MCP protocol support, and multi-scenario capabilities. This incremental approach helped us validate each piece before adding complexity.

4. Process Management is a First-Class Concern

Don't treat process lifecycle as an afterthought. We learned to:

  • Always track process health
  • Implement proper cleanup routines
  • Plan for unexpected terminations
  • Use async operations to prevent blocking
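One way to implement the "plan for unexpected terminations" and cleanup points above is a terminate-then-kill escalation: ask the process to exit, wait briefly, and only force-kill if it hangs. This is a sketch; the timeout value is illustrative.

```python
# Graceful shutdown with escalation: SIGTERM first, SIGKILL as a last resort.
import subprocess


def stop_session(process: subprocess.Popen, timeout: float = 5.0) -> None:
    """Terminate a session process, escalating to kill if it hangs."""
    if process.poll() is not None:
        return  # already exited; nothing to clean up
    process.terminate()  # polite request to shut down
    try:
        process.wait(timeout=timeout)  # reap it so no zombie is left behind
    except subprocess.TimeoutExpired:
        process.kill()  # unresponsive: force it down
        process.wait()
```

Always calling `wait()` after `kill()` is what prevents zombie processes, since the exit status gets reaped either way.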

5. Context is Everything

The real value wasn't in launching multiple chat sessions—it was in making each session contextually aware. Each scenario needed its own:

  • System prompts optimized for the use case
  • UI elements relevant to the domain
  • Default configurations that "just work"
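The three per-scenario concerns above naturally collapse into a scenario registry. The entries below are illustrative placeholders, not the system's actual prompts or options:

```python
# Illustrative scenario registry: prompt, UI flags, and defaults per use case.
SCENARIOS = {
    "technical_support": {
        "system_prompt": "You are a patient technical support agent.",
        "ui": {"show_log_upload": True},
        "defaults": {"temperature": 0.2},  # precise, repeatable answers
    },
    "creative_writing": {
        "system_prompt": "You are an imaginative writing partner.",
        "ui": {"show_log_upload": False},
        "defaults": {"temperature": 0.9},  # looser, more varied output
    },
}


def context_for(scenario: str) -> dict:
    """Build the context payload the orchestrator passes to a session."""
    config = SCENARIOS[scenario]
    return {"scenario": scenario, **config}
```

Keeping all three concerns in one place per scenario is what makes the "one-click launch" possible: the button only needs a scenario name.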

6. Developer Experience Multiplies Value

By focusing on eliminating friction (one command to start, one click to launch scenarios), we made the system something people actually wanted to use. The technical architecture mattered less than the experience.

Behind the Scenes

Here's what happens when a scenario launches:

  1. Agent Client starts FastAPI server on port 8080
  2. Web Interface provides scenario buttons and session management
  3. Session Launch: finds an available port, writes the context file, and starts the Streamlit process
  4. Context Loading: the Streamlit app reads its context file and configures itself
  5. Session Management: the agent tracks all processes and provides a control interface

The Impact

This system transformed our AI development workflow:

  • Faster Iteration: Testing new scenarios went from minutes to seconds
  • Better Collaboration: Team members could easily share and test different AI contexts
  • Reduced Cognitive Load: No more mental overhead managing multiple environments
  • Improved Testing: Each scenario could be thoroughly tested in isolation

What's Next

The architecture we built is inherently extensible. Future directions include:

  • Authentication & Multi-tenancy: Supporting multiple users with isolated sessions
  • Persistence: Saving conversation history and context across sessions
  • External Integrations: Connecting to databases, APIs, and other tools
  • Advanced Orchestration: Workflow management between different AI agents

Key Takeaways

  1. Architecture for Change: Build systems that can evolve. Our modular design let us add features without breaking existing functionality.
  2. Developer Experience is Product: The most technically impressive solution that's hard to use will lose to a simpler system that feels natural.
  3. Process Management is Hard: Don't underestimate the complexity of reliably managing multiple processes. Plan for failures from day one.
  4. Context is King: The real value in AI systems often comes from context management, not just the AI itself.
  5. Start with User Needs: We could have built a much more complex system, but starting with "I want to test different AI scenarios quickly" led us to the right solution.

This Chat Playground System might seem like a simple orchestration tool, but it represents something more: a shift toward treating AI development tooling with the same care we give to production systems. When development tools are delightful to use, better products emerge.

Updated Sep 08, 2025
Version 2.0