Forum Discussion

neptunedesert
Copper Contributor
Jun 16, 2025

Prompt management?

Is anyone writing agent or LLM API call apps in Python or C# that *avoid* inline prompts?

What's your approach? Loading from files, pulling from blob storage, or using other solutions?
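
Right now the least-bad thing I've come up with is keeping prompts as plain template files next to the code and reading them at startup, something like this (the prompts/ folder and helper name are just my own convention):

```python
from pathlib import Path

# prompts/ sits next to the app code; each prompt is a plain text template
PROMPT_DIR = Path(__file__).parent / "prompts"

def read_prompt(name: str) -> str:
    """Read a prompt template from prompts/<name>.txt instead of inlining it."""
    return (PROMPT_DIR / f"{name}.txt").read_text(encoding="utf-8")

# fill placeholders with str.format before sending to the model
summarize = read_prompt("summarize").format(language="English", text="...")
```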

Any experience with the following, or with comparable Azure AI approaches:

- LangChain / LangSmith load_prompt and the prompt client's push/pull to their Hub (see the sketch after this list)
- Amazon Bedrock Converse API
- PromptLayer
- Other?
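
For the LangChain route specifically, what I've tried looks roughly like this (the file path and Hub handle are made up, and the import paths move around between langchain versions):

```python
from langchain.prompts import load_prompt  # file-based templates (YAML/JSON)
from langchain import hub                  # pull from the LangSmith / LangChain Hub

# load a template that's checked into the repo
local_prompt = load_prompt("prompts/summarize.yaml")

# or pull a versioned prompt from the Hub (handle is hypothetical)
hub_prompt = hub.pull("my-org/summarize")

# assuming the local template declares a {text} input variable
print(local_prompt.format(text="..."))
```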

It doesn't seem like there are good project or folder conventions for AI agents, etc. Code samples are inline prompt spaghetti. It's like web apps before MVC frameworks.

Who should write and own prompts in an enterprise? Versioning, maybe signing?
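
The closest I've gotten to "signing" is pinning a content hash per prompt file and refusing to load anything that has drifted, so a prompt can't silently change under an agent. Sketch only, and the lock-file format is my own invention:

```python
import hashlib
import json
from pathlib import Path

# prompts.lock.json maps prompt name -> expected sha256 (hypothetical manifest)
MANIFEST = json.loads(Path("prompts.lock.json").read_text(encoding="utf-8"))

def load_pinned_prompt(name: str) -> str:
    text = (Path("prompts") / f"{name}.txt").read_text(encoding="utf-8")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest != MANIFEST[name]:
        raise ValueError(f"Prompt {name!r} doesn't match its pinned hash")
    return text
```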

I see that Azure AI has prompt and evaluation tools, but I'm not seeing a way to get at these through an API or SDK. Also, GitHub Models just released something, but it has some limits right now.

And MCP is taking off with its approach to Prompts and Roots.
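
The prompt primitive in the MCP Python SDK is the cleanest version of "prompts live outside the app code" I've seen so far; roughly like this, per the SDK's README (server name is arbitrary):

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("prompt-server")

@mcp.prompt()
def review_code(code: str) -> str:
    """A prompt any MCP client can discover and pull, rather than an inline string."""
    return f"Please review this code for correctness and style:\n\n{code}"

if __name__ == "__main__":
    mcp.run()
```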

