Why build your AI apps and agents with Microsoft
Customer demand for AI solutions is accelerating rapidly: Microsoft has seen 2x growth in customers purchasing AI products. Organizations are looking for trusted, enterprise-ready platforms to support their innovation. Microsoft's AI-native ecosystem, backed by industry-leading security, Responsible AI principles, and a rapidly growing catalog of AI apps and agents, provides a strong foundation for building scalable, compliant, and high-impact AI solutions. Explore how developers and software companies can take advantage of Microsoft's integrated tools, streamlined publishing experience, and expansive Marketplace reach to deliver AI solutions that meet customers where they are. Learn more and read the full article: Discover why to build AI apps and agents with Microsoft and sell through Marketplace | Microsoft Community Hub

SharePoint List Web Part - major caching issues
Hi All, I've spent a lot of time building a List-based company calendar in our SharePoint Intranet Portal. The calendar itself is working great, however I'm having no end of headaches with the List Web Part (not just for this calendar, but for all list web parts for that matter).

The calendar has a Start and an End field - both are Date and Time fields. A custom view is created for use with the List Web Part to embed an "upcoming week" view. This is based on a filter which checks both Start and End dates to see whether an event exists within, or spans, the date range from today to 7 days in the future, so it is a rolling 7-day window. The filter criteria include [Today] and [Today]+6. The view also has a sort criterion on the Start (Date/Time) field.

This all works fine when previewing the view within the list itself - at midnight the results update to include events from the day that is now 7 days in the future which were previously excluded. So the view itself is working fine.

However, the same view in a List Web Part on another page suffers from a ridiculous amount of browser-side (?) caching, to the point that it is basically broken and unusable. When I open the page the next day in a browser (even if the browser was closed), one of three things happens, somewhat at random:

1. The events 7 days in the future (which just came into filter scope today) just don't appear until a forced page reload is done.
2. The events 7 days in the future do appear, but they are sorted incorrectly, appearing at the TOP when the date sort order should show them at the BOTTOM.
3. Sometimes 2 happens, but the event is shown with the Start and End Date/Time fields empty - so not only is it at the wrong end of the list, it doesn't even show a date at all until the page is refreshed.

Here is a picture showing the sort order being incorrect as in case 2:

When these various problems happen, a full CTRL-F5 browser refresh always updates the list to be complete, up to date and sorted correctly. However, if after that you click away from the page and follow a link to return to it, OR press a regular F5 refresh, it goes back to being incorrect! It takes many page reloads, or a lot of time to pass (hours), before it finally settles down and gives the correct results every time. Then the next day the same caching problems happen again. If you go to a new browser or PC the same problems happen again, suggesting this is browser-side caching, not something at the server.

While the actual content of the list items updates in real time if you edit the list content in another page (which is pretty cool), the "result set" of list items (which items should or should not be seen) is heavily cached, and the sorting is unreliable.

Has anyone else found a solution to this? I have already done things like disabling offline mode for the list (this only seems to affect caching of the actual data in the list items, not caching of filter results), etc., and I cannot find a solution. The only thing I know of which would probably work, as I have had to use this approach on another list, is to extend the date range of the filter criteria for the view further into the future, then filter out the extra days using JSON in "Format view" - however, AFAIK you can only selectively hide rows like this if you use a custom rowFormatter, which means you have to fully re-implement the standard view, including hard-coding all the columns you want, and it still won't look quite the same.
This is a lot of work and maintenance overhead in the future to work around a caching problem that shouldn't exist in the first place. Any thoughts appreciated as a lot of time and effort has gone into building an entire calendar system around a SharePoint List, only to find that the list web part just doesn't work properly.
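(For reference, the row-hiding workaround described above would look roughly like this in the view's "Format view" JSON - a minimal, untested sketch: the Start field's internal name is assumed, the standard $schema line is omitted, and every column you want to show still has to be re-built as children of the row element.)

{
  "hideSelection": false,
  "hideColumnHeader": false,
  "rowFormatter": {
    "elmType": "div",
    "style": {
      "display": "=if([$Start] <= addDays(@now, 7), '', 'none')"
    },
    "children": [
      { "elmType": "span", "txtContent": "[$Title]" },
      { "elmType": "span", "txtContent": "=toLocaleString([$Start])" }
    ]
  }
}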
A Visual Introduction To Azure Fundamentals
Are you a visual learner? Do you like to see "the big picture" before you dive into details? Does seeing visual notes or metaphors help you understand new concepts better, and retain or recall them more effectively? Then this is for you. A Visual Introduction To Azure Fundamentals - the first in a series of visualized modules that I hope will be helpful for anyone exploring Azure Fundamentals, or preparing for the AZ-900 exam! Want to learn more? Check out this accompanying article at A Cloud Guru! Have questions, or want to see other modules visualized similarly? Leave me a comment on this post!

Classic SharePoint features appearing on a Modern Site
I have a modern SharePoint site, with a modern list that has never been associated with classic SharePoint features. But today, two users have experienced intermittent instances of the list appearing in the classic interface. They have not accessed this site before now, so I do not believe there should be a cache issue. Does anyone know why this is happening, and how it's possible when this site has no association with Classic features? They are using the same browser and have the same permissions across the site, but sometimes the list opens in Classic mode and sometimes it opens in Modern. It is also happening in incognito mode. They are unable to use the site while in the Classic interface at all - the people pickers deny them access ("Error: Sorry, you do not have permission to query for users") and they cannot edit anything because of this error.

Visual Studio Enterprise Monthly Azure Credits
Please confirm that all developers of MS partners are losing the individual Azure credits ($150 monthly) under the new MAICPP benefits. We just realized that our team will lose the ability to test Marketplace solutions, and now, at a time when we are trying to encourage usage and explore the possibilities of AI technology, it will set us back a lot. Also, the increased bulk credits do not add up: we have 35 developers using these credits, totaling $63k per year, as opposed to the $16k increase in bulk credits (we have 4 areas) - a significant loss. Why has Microsoft decided to stop supporting development? Is there any way to change these new benefits? Or is there a way to keep these Azure credits, maybe with an Enterprise agreement?
Rethinking Documentation Translation: Treating Translations as Versioned Software Assets

This article is written from the perspective of maintaining large, open-source documentation repositories in the Microsoft ecosystem. I am the maintainer of Co-op Translator, an open-source tool for automating multilingual documentation translation, used across multiple large documentation repositories, including Microsoft's For Beginners series.

In large documentation repositories, translation problems rarely fail loudly. They fail quietly, and they accumulate over time. Recently, we made a fundamental design decision in how Co-op Translator handles translations: translations are treated as versioned software assets, not static outputs. This article explains why we reached that conclusion, and what this perspective enables for teams maintaining large, fast-moving documentation repositories.

When translations quietly become a liability

In most documentation projects, translations are treated as finished outputs. Once a file is translated, it is assumed to remain valid until someone explicitly notices a problem. But documentation rarely stands still. Text changes. Code examples evolve. Screenshots are replaced. Notebooks are updated to reflect new behavior. The problem is that these changes are often invisible in translated content. A translation may still read fluently, while the information it contains is already out of date. At that point, the issue is no longer about translation quality. It becomes a maintenance problem.

Reframing the question

Most translation workflows implicitly ask: Is this translation correct? In practice, maintainers struggle with a different question: Is this translation still synchronized with the current source? This distinction matters. A translation can be correct and still be out of sync. Once we acknowledged this, it became clear that treating translations as static content was no longer sufficient.

The design decision: translations as versioned assets

Starting with Co-op Translator 0.16.2, we made a deliberate design decision: translations are treated as versioned software assets. This applies not only to Markdown files, but also to images, notebooks, and any other translated artifacts. Translated content is not just text. It is an artifact generated from a specific version of a source. To make this abstraction operational rather than theoretical, we did not invent a new mechanism. Instead, we looked to systems that already solve a similar problem: pip, poetry, and npm. These tools are designed to track artifacts as their sources evolve. We applied the same thinking to translated content.

Closer to dependency management than translation jobs

The closest analogy is software dependency management. When a dependency becomes outdated, it is not suddenly "wrong"; it is simply no longer aligned with the current version. Translations behave the same way. When the source document changes, the translated file does not immediately become incorrect; it becomes out of sync with its source version. This framing shifts the problem away from translation output and toward state and synchronization.

Why file-level versioning matters

Many translation systems operate at the string or segment level. That model works well for UI text and relatively stable resources. Documentation is different. A Markdown file is an artifact. A screenshot is an artifact. A notebook is an artifact. They are consumed as units, not as isolated strings.
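To make the file-level model concrete, here is a minimal sketch of what a per-file state record and drift check could look like. The field names and layout are illustrative only, not Co-op Translator's actual state format:

import hashlib
import json
from pathlib import Path

def file_hash(path: Path) -> str:
    # The content hash stands in for the "version" of a source artifact.
    return hashlib.sha256(path.read_bytes()).hexdigest()

def record_translation(source: Path, translated: Path, state_file: Path) -> None:
    # Record that `translated` was generated from the current version of `source`.
    state = json.loads(state_file.read_text()) if state_file.exists() else {}
    state[str(translated)] = {
        "source": str(source),
        "source_hash": file_hash(source),
    }
    state_file.write_text(json.dumps(state, indent=2))

def is_out_of_sync(translated: Path, state_file: Path) -> bool:
    # A translation has drifted when the source on disk no longer matches
    # the version the translation was generated from.
    entry = json.loads(state_file.read_text())[str(translated)]
    return file_hash(Path(entry["source"])) != entry["source_hash"]

The same check applies unchanged whether the artifact is a Markdown file, an image, or a notebook, which is the point of working at the file level.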
Managing translation state at the file level allows maintainers to reason about translations using the same mental model they already apply to other repository assets.

What changed in practice: from embedded markers to explicit state

Previously, translation metadata lived inside translated files as embedded comments or markers. This approach had clear limitations: translation state was fragmented, difficult to inspect globally, and easy to miss as repositories grew. We moved to language-scoped JSON state files that explicitly track the source version, the translated artifact, and its synchronization status. Translation state is no longer hidden inside content. It is a first-class, inspectable part of the repository.

Extending the model to images and notebooks

The same model now applies consistently to translated images, localized notebooks, and other non-text artifacts. If an image changes in the source language, the translated image becomes out of sync. If a notebook is updated, its translated versions are evaluated against the new source version. The format does not matter. The lifecycle does. Once translations are treated as versioned assets, the system remains consistent across all content types.

What this enables

This design enables:
- Explicit drift detection: see which translations are out of sync without guessing.
- Consistent maintenance signals: text, images, and notebooks follow the same rules.
- Clear responsibility boundaries: the system reports state; humans decide action.
- Scalability for fast-moving repositories: translation maintenance becomes observable, not reactive.

In large documentation sets, this difference determines whether translation maintenance is sustainable at all.

What this is not

This system does not judge translation quality, determine semantic correctness, or auto-approve content. It answers one question only: Is this translated artifact synchronized with its source version?

Who this is for

This approach is designed for teams that maintain multilingual documentation, update content frequently, and need confidence in what is actually up to date. When documentation evolves faster than translations, treating translations as versioned assets becomes a necessity, not an optimization.

Closing thought

Once translations are modeled as software assets, long-standing ambiguities disappear. State becomes visible. Maintenance becomes manageable. And translations fit naturally into existing software workflows. At that point, the question is no longer whether translation drift exists, but: Can you see it?

Reference: Co-op Translator repository - https://github.com/Azure/co-op-translator

Microsoft Project Service Core Jan 2026 Update locked Plans table
We recently noted that Microsoft Project Service Core was updated to 1.0.161.1772. This update has locked our Plans table: no changes are allowed to the forms, and creating new columns is greyed out. We cannot make use of Planner in our model-driven app (Planner Premium, non-default environment). After a new Plan is created, the user receives an error message when opening the Plan. Two days ago we were on the previous version (1.0.160.2874) and everything was working as expected. Any guidance would be much appreciated, as we are not sure if this is a permanent lock or whether, once the update cycle is completed, we will be able to go back to editing Plan forms and columns. Thank you in advance.

Help needed with IF and COUNTIFS Formulas
Is anyone able to advise on the following formula?

=COUNTIFS($B$5:$B$15,$R$4,$C5:$C15,"<=" & V3,$D5:$D15, ">" & V3)-COUNTIFS($B$5:$B$15,"="&$R$4,$G5:$G15,"<=" & V3,$H5:$H15, ">" & V3)-COUNTIFS($B$5:$B$15,"="&$R$4,$K5:$K15,"<=" & V3,$L5:$L15, ">" & V3)-COUNTIFS($B$5:$B$15,"="&$R$4,$O5:$O15,"<=" & V3,$P5:$P15, ">" & V3)

Is there a way to simplify this? Is there a way to make it more accurate? Cells in columns G & H, I & J, O & P are using the following format: =IF(C6="","",C6+E6)

Cells in U4:CC4 are using the following format: =COUNTIFS($B$5:$B$15,$R$4,$C5:$C15,"<=" & U3,$D5:$D15, ">" & U3)-COUNTIFS($B$5:$B$15,"="&$R$4,$G5:$G15,"<=" & U3,$H5:$H15, ">" & U3)-COUNTIFS($B$5:$B$15,"="&$R$4,$K5:$K15,"<=" & U3,$L5:$L15, ">" & U3)-COUNTIFS($B$5:$B$15,"="&$R$4,$O5:$O15,"<=" & U3,$P5:$P15, ">" & U3)

Cells in U5:CC15 are using the following format: =IF(U$4>=$T5,1,"")

My first issue is that when I put in the three break times, the mid break comes out at a shorter time. My other issue is that when I put in the times in rows 5, 6 and 11, the data comes up as combined data in rows 5, 6 and 7 on page two. Just for reference, "page two" is the same spreadsheet.

What I need to happen is that I enter the shift start time and finish time. This then populates through to Breaks 1, 2 and 3. The Time entry is the time the break starts, i.e. 1 hour after the start of the shift, 1 hour after coming back from a break, etc. The Break entry is the duration of the break taken, i.e. 30 minutes. Once all the info is put in, the relevant "Time Block" on "Page 2" shows a 1. What is happening at the moment is that when I enter all the time data, the time blocks are not populating correctly in accordance with the entry. Basically, if I have numerous people on shift, I need the time blocks to show where I have shortfalls in shift cover, without having too many people on break at the same time.

Link to live copy: https://www.dropbox.com/scl/fi/eur1j526htu1j8a4d4290/Staff-Breaks.xlsx?rlkey=r4tm9xts4tonofpa2th2cusfw&st=nueyk0d7&dl=0

Any ideas would be greatly appreciated.
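(For readability, the same calculation can be written with LET in Excel 365 / Excel 2021 or later, naming each COUNTIFS term once - identical logic and ranges to the formula above, shown only as a restructuring sketch, not a fix for the break-time behaviour:)

=LET(
  onShift, COUNTIFS($B$5:$B$15,$R$4,$C5:$C15,"<="&V3,$D5:$D15,">"&V3),
  break1,  COUNTIFS($B$5:$B$15,$R$4,$G5:$G15,"<="&V3,$H5:$H15,">"&V3),
  break2,  COUNTIFS($B$5:$B$15,$R$4,$K5:$K15,"<="&V3,$L5:$L15,">"&V3),
  break3,  COUNTIFS($B$5:$B$15,$R$4,$O5:$O15,"<="&V3,$P5:$P15,">"&V3),
  onShift - break1 - break2 - break3)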
How do you actually unlock growth from Microsoft Teams Marketplace?
Hey folks 👋 Looking for some real-world advice from people who’ve been through this. Context: We’ve been listed as a Microsoft Teams app for several years now. The app is stable, actively used, and well-maintained - but for a long time, the Teams Marketplace wasn’t a meaningful acquisition channel for us. Things changed a bit last year. We started seeing organic growth without running any dedicated campaigns, plus more mid-market and enterprise teams installing the app, running trials, and even using it in production. That was encouraging - but it also raised a bigger question: how do you actually systematize this and get real, repeatable benefits from the Teams Marketplace?

I know there are Microsoft Partner programs, co-sell motions, marketplace benefits, etc. - but honestly, it’s been very hard to figure out:
- where exactly to start
- what applies to ISVs building Teams apps
- how to apply correctly
- and what actually moves the needle vs. what’s just “nice to have”

On top of that, it’s unclear how (or if) you can interact directly with the Teams/Marketplace team. From our perspective, this should be a win-win: we invest heavily in the platform, build for Teams users, and want to make that experience better.

Questions to the community:
- If you’re a Teams app developer: what actually worked for you in terms of marketplace growth?
- Which Partner programs or motions are worth the effort, and which can be safely ignored early on?
- Is there a realistic way to engage with the Teams Marketplace team (feedback loops, programs, office hours, etc.)?
- How do you go from “organic installs happen” to a structured channel?

Would really appreciate any practical advice, lessons learned, or even “what not to do” stories 🙏 Thanks in advance!

How to Build Safe Natural Language-Driven APIs
TL;DR

Building production natural language APIs requires separating semantic parsing from execution. Use LLMs to translate user text into canonical structured requests (via schemas), then execute those requests deterministically. Key patterns: schema completion for clarification, confidence gates to prevent silent failures, code-based ontologies for normalization, and an orchestration layer. This keeps language as input, not as your API contract.

Introduction

APIs that accept natural language as input are quickly becoming the norm in the age of agentic AI apps and LLMs. From search and recommendations to workflows and automation, users increasingly expect to "just ask" and get results. But treating natural language as an API contract introduces serious risks in production systems:
- Nondeterministic behavior
- Prompt-driven business logic
- Difficult debugging and replay
- Silent failures that are hard to detect

In this post, I'll describe a production-grade architecture for building safe, natural language-driven APIs: one that embraces LLMs for intent discovery and entity extraction while preserving the determinism, observability, and reliability that backend systems require. This approach is based on building real systems using Azure OpenAI and LangGraph, and on lessons learned the hard way.

The Core Problem with Natural Language APIs

Natural language is an excellent interface for humans. It is a poor interface for systems. When APIs accept raw text directly and execute logic based on it, several problems emerge:
- The API contract becomes implicit and unversioned
- Small prompt changes cause behavioral changes
- Business logic quietly migrates into prompts

In short: language becomes the contract, and that's fragile. The solution is not to avoid natural language, but to contain it.

A Key Principle: Natural Language Is Input, Not a Contract

So how do we contain it? The answer lies in treating natural language fundamentally differently than we treat traditional API inputs. The most important design decision we made was this: natural language should be translated into structure, not executed directly. That single principle drives the entire architecture. Instead of building "chatty APIs," we split responsibilities clearly:
- Natural language is used for intent discovery and entity extraction
- Structured data is used for execution

Two Explicit API Layers

This principle translates into a concrete architecture with two distinct API layers, each with a single, clear responsibility.

1. Semantic Parse API (Natural Language → Structure)

This API:
- Accepts user text
- Extracts intent and entities using LLMs
- Completes a predefined schema
- Asks clarifying questions when required
- Returns a canonical, structured request
- Does not execute business logic

Think of this as a compiler, not an engine.

2. Structured Execution API (Structure → Action)

This API:
- Accepts only structured input
- Calls downstream systems to process the request and get results
- Is deterministic and versioned
- Contains no natural language handling
- Is fully testable and replayable

This is where execution happens.

Why This Separation Matters

Separating these layers gives you:
- A stable, versionable API contract
- Freedom to improve NLP without breaking clients
- Clear ownership boundaries
- Deterministic execution paths

Most importantly, it prevents LLM behavior from leaking into core business logic.
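To make the split concrete, here is a minimal sketch of the two layers as plain Python functions. It is an illustration under assumptions - the names, types, and the single hard-coded intent are illustrative, not the article's actual system:

from dataclasses import dataclass

@dataclass
class ParseResult:
    # Output of the semantic parse layer: structure only, no side effects.
    status: str                       # "complete" or "needs_clarification"
    canonical_request: dict | None = None
    question: str | None = None
    state: dict | None = None

def semantic_parse(user_text: str, state: dict | None = None) -> ParseResult:
    # Layer 1: uses an LLM to classify intent and extract entities,
    # merges them into the schema state, and either completes or asks a question.
    raise NotImplementedError("LLM-backed parsing goes here")

def execute(canonical_request: dict) -> dict:
    # Layer 2: deterministic, versioned, fully testable.
    # Only validated canonical requests arrive here; no raw text.
    intent = canonical_request["intent"]
    entities = canonical_request["entities"]
    if intent == "recommend_similar":
        # Call the recommendation backend with structured parameters (placeholder result).
        return {"intent": intent, "params": entities, "results": []}
    raise ValueError(f"Unsupported intent: {intent}")

Anything that reaches execute() has already passed schema validation; the only nondeterministic component lives behind semantic_parse().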
Canonical Schemas Are the Backbone

Now that we've established the two-layer architecture, let's dive into what makes it work: canonical schemas. Each supported intent is defined by a canonical schema that lives in code.

Example (simplified): this schema is used when a user is looking for similar product recommendations. The entities capture which product to use as the reference and how to bias the recommendations toward price or quality.

{
  "intent": "recommend_similar",
  "entities": {
    "reference_product_id": "string",
    "price_bias": "number (-1 to 1)",
    "quality_bias": "number (-1 to 1)"
  }
}

Schemas define:
- Required vs optional fields
- Allowed ranges and types
- Validation rules

They are the contract, not the prompt. When a user says "show me products like the blue backpack but cheaper", the LLM extracts:
- Intent: recommend_similar
- reference_product_id: "blue_backpack_123"
- price_bias: -0.8 (strongly prefer cheaper)
- quality_bias: 0.0 (neutral)

The schema ensures that even if the user phrased it as "find alternatives to item 123 with better pricing" or "cheaper versions of that blue bag", the output is always the same structure. The natural language variation is absorbed at the semantic layer. The execution layer receives a consistent, validated request every time. This decoupling is what makes the system maintainable.

Schema Completion, Not Free-Form Chat

But what happens when the user's input doesn't contain all the information needed to complete the schema? This is where structured clarification comes in. A common misconception is that clarification means "chatting until it feels right." In production systems, clarification is schema completion. If required fields are missing or ambiguous, the semantic API responds with:
- What information is missing
- A targeted clarification question
- The current schema state

Example response:

{
  "status": "needs_clarification",
  "missing_fields": ["reference_product_id"],
  "question": "Which product should I compare against?",
  "state": {
    "intent": "recommend_similar",
    "entities": {
      "reference_product_id": null,
      "price_bias": -0.3,
      "quality_bias": 0.4
    }
  }
}

The state object is the memory. The API itself remains stateless.

A Complete Conversation Flow

To illustrate how schema completion works in practice, here's a full conversation flow where the user's initial request is missing required information.

Initial request - User: "Show me cheaper alternatives with good quality"

API response (needs clarification):

{
  "status": "needs_clarification",
  "missing_fields": ["reference_product_id"],
  "question": "Which product should I compare against?",
  "state": {
    "intent": "recommend_similar",
    "entities": {
      "reference_product_id": null,
      "price_bias": -0.3,
      "quality_bias": 0.4
    }
  }
}

Follow-up request - User: "The blue backpack"

Client sends:

{
  "user_input": "The blue backpack",
  "state": {
    "intent": "recommend_similar",
    "entities": {
      "reference_product_id": null,
      "price_bias": -0.3,
      "quality_bias": 0.4
    }
  }
}

API response (complete):

{
  "status": "complete",
  "canonical_request": {
    "intent": "recommend_similar",
    "entities": {
      "reference_product_id": "blue_backpack_123",
      "price_bias": -0.3,
      "quality_bias": 0.4
    }
  }
}

The client passes the state back with each clarification. The API remains stateless, while the client manages the conversation context. Once complete, the canonical_request can be sent directly to the execution API.
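Server-side, the completion step itself does not need an LLM: it is a merge of newly extracted entities into the saved state plus a required-field check. A minimal sketch, assuming a per-intent required-fields table (names are illustrative, not from the original system):

REQUIRED_FIELDS = {
    "recommend_similar": ["reference_product_id"],
}

def complete_schema(state: dict, extracted_entities: dict) -> dict:
    # Merge newly extracted entities into the schema state, then decide
    # whether the request is complete or still needs clarification.
    entities = dict(state.get("entities", {}))
    # Only overwrite fields the LLM actually produced a value for.
    entities.update({k: v for k, v in extracted_entities.items() if v is not None})
    state = {**state, "entities": entities}

    missing = [f for f in REQUIRED_FIELDS[state["intent"]] if entities.get(f) is None]
    if missing:
        return {
            "status": "needs_clarification",
            "missing_fields": missing,
            "question": f"Please provide: {', '.join(missing)}",
            "state": state,
        }
    return {"status": "complete", "canonical_request": state}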
Why LangGraph Fits This Problem Perfectly

With schemas and clarification flows defined, we need a way to orchestrate the semantic parsing workflow reliably. This is where LangGraph becomes valuable. LangGraph allows semantic parsing to be modeled as a structured, deterministic workflow with explicit decision points:
- Classify intent: determine what the user wants to do from a predefined set of supported actions
- Extract candidate entities: pull out relevant parameters from the natural language input using the LLM
- Merge into schema state: map the extracted values into the canonical schema structure
- Validate required fields: check that all mandatory fields are present and values are within acceptable ranges
- Either complete or request clarification: return the canonical request if complete, or ask a targeted question if information is missing

Each node has a single responsibility. Validation and routing are done in code, not by the LLM. LangGraph provides:
- Explicit state transitions
- Deterministic routing
- Observable execution
- Safe retries

Used this way, it becomes a powerful orchestration tool, not a conversational agent.

Confidence Gates Prevent Silent Failures

Structured workflows handle the process, but there's another critical safety mechanism we need: knowing when the LLM isn't confident about its extraction. Even when outputs are structurally valid, they may not be reliable. We require the semantic layer to emit a confidence score. If confidence falls below a threshold, execution is blocked and clarification is requested. This simple rule eliminates an entire class of silent misinterpretations that are otherwise very hard to detect.

Example: when a user says "Show me items similar to the bag", the LLM might extract:

{
  "intent": "recommend_similar",
  "confidence": 0.55,
  "entities": {
    "reference_product_id": "generic_bag_001",
    "confidence_scores": {
      "reference_product_id": 0.4
    }
  }
}

The overall confidence is low (0.55), and the entity confidence for reference_product_id is very low (0.4) because "the bag" is ambiguous. There might be hundreds of bags in the catalog. Instead of proceeding with a potentially wrong guess, the API responds:

{
  "status": "needs_clarification",
  "reason": "low_confidence",
  "question": "I found multiple bags. Did you mean the blue backpack, the leather tote, or the travel duffel?",
  "confidence": 0.55
}

This prevents the system from silently executing the wrong recommendation and provides a better user experience.

Lightweight Ontologies (Keep Them in Code)

Beyond confidence scoring, we need a way to normalize the variety of terms users might use into consistent canonical values. We also introduced lightweight, code-level ontologies:
- Allowed intents
- Required entities per intent
- Synonym-to-canonical mappings
- Cross-field validation rules

These live in code and configuration, not in prompts. LLMs propose values. Code enforces meaning.

Example: consider these user inputs that all mean the same thing: "Show me cheaper options", "Find budget-friendly alternatives", "I want something more affordable", "Give me lower-priced items". The LLM might extract different values: "cheaper", "budget-friendly", "affordable", "lower-priced". The ontology maps all of these to a canonical value:

PRICE_BIAS_SYNONYMS = {
    "cheaper": -0.7,
    "budget-friendly": -0.7,
    "affordable": -0.7,
    "lower-priced": -0.7,
    "expensive": 0.7,
    "premium": 0.7,
    "high-end": 0.7
}

When the LLM extracts "budget-friendly", the code normalizes it to -0.7 for the price_bias field.
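In code, that normalization step stays tiny. A minimal sketch built on the mapping above - the helper name, the clamping, and the fallback-to-neutral behaviour are illustrative choices, not taken from the original system:

def normalize_price_bias(raw_value, default: float = 0.0) -> float:
    # Map an LLM-proposed term onto the canonical numeric bias.
    # Numbers already in range pass through; unknown terms fall back to neutral.
    if isinstance(raw_value, (int, float)):
        return max(-1.0, min(1.0, float(raw_value)))  # clamp to the schema's allowed range
    return PRICE_BIAS_SYNONYMS.get(str(raw_value).strip().lower(), default)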
Similarly, cross-field validation catches logical inconsistencies:

if entities["price_bias"] < -0.5 and entities["quality_bias"] > 0.5:
    return clarification("You want cheaper items with higher quality. This might be difficult. Should I prioritize price or quality?")

The LLM proposes. The ontology normalizes. The validation enforces business rules.

What About Latency?

A common concern with multi-step semantic parsing is performance. In practice, we observed:
- Intent classification: ~40 ms
- Entity extraction: ~200 ms
- Validation and routing: ~1 ms

Total overhead: ~250–300 ms. For chat-driven user experiences, this is well within acceptable bounds and far cheaper than incorrect or inconsistent execution.

Key Takeaways

Let's bring it all together. If you're building APIs that accept natural language in production:
- Do not make language your API contract
- Translate language into canonical structure
- Own schema completion server-side
- Use LLMs for discovery and extraction, not execution
- Treat safety and determinism as first-class requirements

Natural language is an input format. Structure is the contract.

Closing Thoughts

LLMs make it easy to build impressive demos. Building safe, reliable systems with them requires discipline. By separating semantic interpretation from execution, and by using tools like Azure OpenAI and LangGraph thoughtfully, you can build natural language-driven APIs that scale, evolve, and behave predictably in production. Hopefully, this architecture saves you a few painful iterations.