Tracking transaction to payout in Partner Center reports
We’ve captured the why and how behind the data in an on‑demand video based on the recent Partner Center reporting office hours—plus a blog that breaks down how these insights support marketplace growth and operations. Have a video topic you want next? Drop it in the comments 👇
▶️ Watch the 7‑minute video
📖 Read the blog: Unlocking the power of Partner Center reporting: Why these insights matter for Marketplace success | Microsoft Community Hub
Microsoft Partner Center account structure: Best practices for long-term success
About the author: David Starr is the founder and CEO of Cumulus26, where he focuses on accelerating customers' Azure Marketplace journeys from onboarding to business success. He is a former Principal Architect at Microsoft who worked on Azure Marketplace and a 6-time Microsoft MVP in Developer Tooling.

Why account structure matters in Partner Center

When first creating a Partner Center account, many Software Development Company (SDC) partners I’ve worked with dive straight into creating their transactable offers without first considering how their accounts are structured. This often leads to confusion about account setup, the creation of multiple “orphan” accounts, and support incidents that can delay the publication of your software to the Microsoft Marketplace--or even result in losing access to Partner Center. This article examines Partner Center account structures and the primary decisions to make when setting up your company’s accounts in the portal. We’ll cover the following:

- Initial considerations: individual accounts.
- Understanding organizational account structures and configurations.
- Working with important identifiers used in account management and support scenarios.
- Setting up for long-term successful management of your accounts.

By the end of this article, you’ll know how to structure your Microsoft Partner Center account so that it supports your organization’s needs today and can scale with you as you grow.

Understanding Partner Center account management

After initially creating your account, it’s tempting to skip user management and move on to other tasks in the portal. This can lead to the common mistake of failing to assign multiple account administrators right away. The predictable outcome is that if an account administrator leaves your organization, your staff could lose the ability to administer--or even access--Partner Center. This may sound intuitive upon reading it, so why mention it?
It’s because I have worked with many publishers who failed to do this and were later unable to get the access they needed. This leads to time spent resolving support incidents, which can delay publishing your solution.

Before diving into setting up an account, it’s helpful to understand that there are three different accounts involved: Microsoft accounts, Microsoft Entra ID accounts, and Partner Center accounts. The Microsoft account is essentially an extension of the Microsoft Entra ID account; in short, you must have a Microsoft Entra ID account to have a Partner Center account. These account types are shown in the image below. Each has its own features and capabilities. It is worth noting that while you do need an Entra ID account, you do not need an Azure subscription, which is what allows creation of services like databases or virtual machines. This can be an important point for Azure administrators who provide accounts strictly for use with Partner Center.

Setting up an Azure tenant in Partner Center

Azure accounts for your organization are stored in tenants, which provide identity, security, and account management through Microsoft Entra ID. At least one tenant must be associated with Partner Center to manage the portal’s accounts. This allows those with accounts in the tenant to also have accounts in Partner Center. You may associate a pre-existing Entra ID tenant with Partner Center, or you may create one if needed. Regardless of which technique you use, you can manage users and permissions for Partner Center after configuring your tenant.

User accounts

After configuring your tenant, head over to the user management screen: open Account settings, then select User management in the left side menu. As mentioned earlier, the next account you’ll want to configure is another Global Administrator. If you created the Azure tenant you are working with, you already have Global Administrator permissions in Partner Center.
Otherwise, you may need to contact your Azure administrator to get the permissions you need. This is why it’s common (and good) practice for organizations with pre-existing Azure tenants to have an Azure administrator initially set up Partner Center.

Adding another Partner Center administrator

For this next step, there are three options for adding that new person to Partner Center:

- Create new user – use this if there are no other user accounts in the tenant.
- Add existing user – use this if there are existing user accounts in the tenant.
- Invite outside user – use this to invite someone from outside your organization to manage Partner Center for you.

Regardless of which method you choose, since you are adding a second Global Administrator, give them that role during account setup. This is the first role listed in the account setup process as shown here.

Configuring partner global and location accounts

Now that you have at least two Global Administrators, you can turn your attention to setting up your organizational accounts. There are two types in Partner Center:

- Partner global account (PGA)
- Partner location account (PLA)

Structuring your accounts

There is one PGA per SDC and one or more PLAs. A PGA is an overarching account containing contact and other information for your organization. Each PLA represents a different location for the organization. A single PLA is created when you first create a Partner Center account. This may be enough for some organizations, but for many SDCs it’s a good idea to consider how you will organize the company and its products in the future. See the image below for a typical example of PGA and PLA structures, the information associated with them, and their roles. Some organizations may want multiple PLAs to represent different sales centers or divisions within the SDC. It’s also a good idea for smaller SDCs to consider future growth at this stage. Think about how and where your company may eventually do business.
However, you do not need multiple PLAs to sell your solution in multiple countries--you can sell worldwide even if you have only one PLA. Both PGAs and PLAs have unique identifiers, examples of which are shown in the image below. You may need to access these when working with Microsoft. To do so, go to: Account Settings > Identifiers > Microsoft AI Cloud Partner Program

Managing publisher accounts and identifiers

Each PLA has one or more publisher accounts, which are established when enrolling in the Microsoft Marketplace program. Each publisher also receives its own set of identifiers, and it’s common to be asked for these in customer support scenarios. When creating a new publisher, you get to specify your publisher account’s primary ID, but a second Seller ID is automatically assigned for you. To access publisher IDs, visit: Account settings > Identifiers > Publisher

Tax and payment profiles--used by Microsoft to bill on your behalf and to pay you for customer purchases--are associated with publisher accounts. Publisher accounts are sometimes used by different billing departments or to organize products into logical groups. See the image below for a typical example. As you can see, the account structure is straightforward. If you consider it in advance of setting up Partner Center, you will be more likely to avoid configuration mistakes and be set up well for future growth.

Organizing offers and plans for marketplace publishing

We’ve seen how to structure user and organizational accounts to ensure a great Partner Center experience. When it’s time to set up your products to sell in the marketplace, there are two more entities involved: offers and plans. Offers represent your base software product, and plans are used to sell one or more SKUs of the product. For example, Cumulus26’s AMPup solution for marketplace publishers may be the offer, with different plans for team, professional, and enterprise versions.
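The containment relationships described in this article, PGA down to plan, can be pictured as a simple data model. The sketch below is purely illustrative: the class and field names are invented for this post and are not a Partner Center API.

```python
from dataclasses import dataclass, field

# Illustrative model of the Partner Center hierarchy described above.
# All names here are invented for the sketch, not real API types.

@dataclass
class Plan:
    plan_id: str                  # publisher-chosen ID, e.g. "ampup-team"
    name: str
    markets: list = field(default_factory=list)   # selling markets, e.g. ["US", "CA"]

@dataclass
class Offer:
    offer_id: str                 # publisher-chosen ID
    name: str
    plans: list = field(default_factory=list)

@dataclass
class Publisher:
    publisher_id: str             # specified at creation
    seller_id: str                # assigned automatically
    offers: list = field(default_factory=list)

@dataclass
class PartnerLocationAccount:
    pla_id: str
    publishers: list = field(default_factory=list)

@dataclass
class PartnerGlobalAccount:
    pga_id: str                   # one PGA per organization
    locations: list = field(default_factory=list)
```

A single-location seller would then be one PGA containing one PLA, one publisher, and one offer with its plans, which mirrors the default setup Partner Center creates for a new account.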
To support global software sales, each plan is associated with one or more global markets. For example, a US-based publisher may sell software in Canada, the UK, and Germany. Selling markets are designated for each plan. Of course, each offer and plan receives its own ID. You must specify the ID as you create each entity, and I recommend planning a logical naming convention for these IDs, as you may need to navigate marketplace features using them at some point. Now you have a complete picture of Partner Center structures, from PGAs all the way to plans, as shown in the image below, which represents a single-region seller. This turns out to be the most common Partner Center account configuration due to its simplicity and the needs of most SDCs.

Conclusion: Building for scalability and support

There is a strong relationship between Microsoft Entra ID and Partner Center accounts. For many SDCs, the simplest path to successful user management is to start with an Entra ID Global Administrator setting up your initial Partner Center account. Don’t forget the important first step of adding a second Partner Center account administrator. You are now ready to model your organization and products in Partner Center, from PGAs and PLAs to offers and plans. You also understand the ID structures of each entity. You can refer to this article for help on where to find them when needed. With a solid understanding of Partner Center user and organizational account structures, you are ready to begin configuring your users and organization in Partner Center. To learn more and ask questions, attend the How to structure your Microsoft Partner Center account for long term success | Microsoft Community Hub session on November 4th. If you are unable to attend, the session will be recorded for later viewing.

Securing AI apps and agents on Microsoft Marketplace
Why security must be designed in—not validated later

AI apps and agents expand the security surface beyond that of traditional applications. Prompt inputs, agent reasoning, tool execution, and downstream integrations introduce opportunities for misuse or unintended behavior when security assumptions are implicit. These risks surface quickly in production environments where AI systems interact with real users and data. Deferring security decisions until late in the lifecycle often exposes architectural limitations that restrict where controls can be enforced. Retrofitting security after deployment is costly and can force tradeoffs that affect reliability, performance, or customer trust. Designing security early establishes clear boundaries, enables consistent enforcement, and reduces friction during Marketplace review, onboarding, and long‑term operation. In the Marketplace context, security is a foundational requirement for trust and scale.

You can always get curated step-by-step guidance through building, publishing, and selling apps for Marketplace through App Advisor. This post is part of a series on building and publishing well-architected AI apps and agents in Microsoft Marketplace. The series focuses on AI apps and agents that are architected, hosted, and operated on Azure, with guidance aligned to building and selling solutions through Microsoft Marketplace.

How AI apps and agents expand the attack surface

AI apps and agents introduce security risks that extend beyond those of traditional applications. AI systems accept open‑ended prompts, reason dynamically, and often act autonomously across systems and data sources. Without a clear view of where trust boundaries exist and how behavior propagates across systems, security controls risk being applied too narrowly or too late.
These interaction patterns expand the attack surface in several important ways:

- New trust boundaries introduced by prompts and inputs, where unstructured user input can influence reasoning and downstream actions
- Autonomous behavior, which increases the blast radius when authentication or authorization gaps exist
- Tool and integration execution, where agents interact with external APIs, plugins, and services across security domains
- Dynamic model responses, which can unintentionally expose sensitive data or amplify errors if guardrails are incomplete

Each API, plugin, or external dependency becomes a security choke point where identity validation, audit logging, and data handling must be enforced consistently—especially when AI systems span tenants, subscriptions, or ownership boundaries.

Using OWASP GenAI Top 10 as a threat lens

The OWASP GenAI Top 10 provides a practical, industry‑recognized lens for identifying and categorizing AI‑specific security threats that extend beyond traditional application risks. Rather than serving as a checklist, the OWASP GenAI Top 10 helps teams ask the right questions early in the design process. It highlights where assumptions about trust, input handling, autonomy, and data access can break down in AI‑driven systems—often in ways that are difficult to detect after deployment. Common risk categories highlighted by OWASP include:

- Prompt injection and manipulation, where malicious input influences agent behavior or downstream actions
- Sensitive data exposure, including leakage through prompts, responses, logs, or tool outputs
- Excessive agency, where agents are granted broader permissions or action scope than intended
- Insecure integrations, where tools, plugins, or external systems become unintended attack paths

Highly regulated industries, sensitive data domains, or mission‑critical workloads may require additional risk assessment and security considerations that extend beyond the OWASP categories.
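To make one of these categories concrete, consider prompt injection. A first line of defense is often a simple input screen that flags obviously hostile instructions for stricter handling. The patterns below are invented for this sketch; real deployments layer model-based classifiers and runtime policy checks on top of anything this simple.

```python
import re

# Hypothetical pattern list for this sketch only; production systems
# use layered, model-assisted detection rather than regexes alone.
SUSPICIOUS_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"reveal (the |your )?system prompt",
    r"disable (the |your )?(guardrails?|safety)",
]

def flag_prompt(prompt: str) -> bool:
    """Return True when a prompt matches a known-hostile pattern."""
    lowered = prompt.lower()
    return any(re.search(p, lowered) for p in SUSPICIOUS_PATTERNS)
```

A flagged prompt might be rejected outright, routed to a stricter model configuration, or logged for review; the point is that the trust boundary at the prompt input is checked explicitly rather than assumed.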
The OWASP GenAI Top 10 lets teams connect high‑level risks to architectural decisions by creating a shared vocabulary that sets the foundation for designing guardrails that are enforceable both at design time and at runtime.

Designing security guardrails into the architecture

Security guardrails must be designed into the architecture, shaping where and how policies are enforced, evaluated, and monitored throughout the solution lifecycle. Guardrails operate at two complementary layers:

- Design time, where architectural decisions determine what is possible, permitted, or blocked by default
- Runtime, where controls actively govern behavior as the AI app or agent interacts with users, data, and systems

When architectural boundaries are not defined early, teams often discover that critical controls—such as input validation, authorization checks, or action constraints—cannot be applied consistently without redesign. The key boundaries to define include:

- Tenancy boundaries, defining how isolation is enforced between customers, environments, or subscriptions
- Identity boundaries, governing how users, agents, and services authenticate and what actions they can perform
- Environment separation, limiting the blast radius of experimentation, updates, or failures
- Control planes, where configuration, policy, and behavior can be adjusted without redeploying core logic
- Data planes, controlling how data is accessed, processed, and moved across trust boundaries

Designing security guardrails into the architecture transforms security from reactive to preventative, while also reducing friction later in the Marketplace journey. Clear enforcement boundaries simplify review, clarify risk ownership, and enable AI apps and agents to evolve safely as capabilities and integrations expand.

Identity as a security boundary for AI apps and agents

Identity defines who can access the system, what actions can be taken, and which resources an AI app or agent is permitted to interact with across tenants, subscriptions, and environments.
Agents often act on behalf of users, invoke tools, and access downstream systems autonomously. Without clear identity boundaries, these actions can unintentionally bypass least‑privilege controls or expand access beyond what users or customers expect. Strong identity design shapes security in several key ways:

- Authentication and authorization determine how users, agents, and services establish trust and what operations they are allowed to perform
- Delegated access constrains agents to act with permissions tied to user intent and context
- Service‑to‑service trust ensures that all interactions between components are explicitly authenticated and authorized
- Auditability traces actions taken by agents back to identities, roles, and decisions

A zero-trust approach is essential in this context. Every request—whether initiated by a user, an agent, or a backend service—should be treated as untrusted until proven otherwise. Identity becomes the primary control plane for enforcing least privilege, limiting blast radius, and reducing downstream integration risk. This foundation not only improves security posture, but also supports compliance, simplifies Marketplace review, and enables AI apps and agents to scale safely as integrations and capabilities evolve.

Protecting data across boundaries

Data may reside in customer‑owned tenants, subscriptions, or external systems, while the AI app or agent runs in a publisher‑managed environment or a separate customer environment. Protecting data across boundaries therefore requires teams to reason about more than storage location.
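The zero-trust posture described above can be reduced to a per-request authorization decision that evaluates identity, delegated scope, and tenant boundary together, on every call. Everything in this sketch, the tenant IDs, scope names, and rules, is invented for illustration; a real solution would delegate these checks to its identity platform and policy engine.

```python
from dataclasses import dataclass

@dataclass
class RequestContext:
    caller_id: str         # user, agent, or service identity
    tenant_id: str         # tenant the caller authenticated against
    delegated_scopes: frozenset  # scopes granted for this interaction
    resource_tenant: str   # tenant that owns the requested data

def authorize(ctx: RequestContext, required_scope: str) -> bool:
    """Zero-trust check evaluated on every request, not once per session.

    Illustrative rules only: deny by default, require an explicit scope,
    and block cross-tenant access unless a dedicated scope was granted.
    """
    if required_scope not in ctx.delegated_scopes:
        return False
    if ctx.tenant_id != ctx.resource_tenant:
        return "cross_tenant.read" in ctx.delegated_scopes
    return True
```

Because the decision runs per request, revoking a scope or changing a tenant boundary takes effect immediately, which is what makes access both auditable and revocable in practice.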
Several factors shape the security posture:

- Data ownership, including whether data is owned and controlled by the customer, the publisher, or a third party
- Boundary crossings, such as cross‑tenant, cross‑subscription, or cross‑environment access patterns
- Data sensitivity, particularly for regulated, proprietary, or personally identifiable information
- Access duration and scope, ensuring data access is limited to the minimum required context and time

When these factors are implicit, AI systems can unintentionally broaden access through prompts, retrieval‑augmented generation, or agent‑initiated actions. This risk increases when agents autonomously select data sources or chain actions across multiple systems. To mitigate these risks, access patterns must be explicit, auditable, and revocable. Data access should be treated as a continuous security decision, evaluated on every interaction rather than trusted by default once a connection exists. This approach aligns with zero-trust principles, where no data access is implicitly trusted and every request is validated based on identity, context, and intent.

Runtime protections and monitoring

For AI apps and agents, security does not end at deployment. In customer environments, these systems interact continuously with users, data, and external services, making runtime visibility and control essential to a strong security posture. AI behavior is also dynamic: the same prompt, context, or integration can produce different outcomes over time as models, data sources, and agent logic evolve, so monitoring must extend beyond infrastructure health to include behavioral signals that indicate misuse, drift, or unintended actions.
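A behavioral signal can be as simple as tracking tool-call volume per agent within a monitoring window and flagging outliers. The event shape and fixed threshold below are invented for this sketch; a real deployment would baseline per-agent behavior over time and feed these signals into its security operations tooling.

```python
from collections import Counter

# Illustrative fixed threshold; real systems baseline normal behavior
# per agent and alert on deviation rather than using one constant.
MAX_CALLS_PER_WINDOW = 20

def detect_anomalies(tool_calls):
    """Flag agents whose tool-call volume in one window exceeds the limit.

    `tool_calls` is a list of (agent_id, tool_name) events collected
    during a single monitoring window.
    """
    per_agent = Counter(agent for agent, _tool in tool_calls)
    return {agent for agent, n in per_agent.items() if n > MAX_CALLS_PER_WINDOW}
```

The same structure extends to other signals named in this section, such as unusual access paths or unexpected tool usage, by counting over different event dimensions.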
Effective runtime protections focus on five core capabilities:

- Vulnerability management, including regular scanning of the full solution to identify missing patches, insecure interfaces, and exposure points
- Observability, so agent decisions, actions, and outcomes can be traced and understood in production
- Behavioral monitoring, to detect abnormal patterns such as unexpected tool usage, unusual access paths, or excessive action frequency
- Containment and response, enabling rapid intervention when risky or unauthorized behavior is detected
- Forensics readiness, ensuring system-state replicability and chain of custody are retained to investigate what happened, why it happened, and what was impacted

Monitoring that only tracks availability or performance is insufficient. Runtime signals must provide enough context to explain not just what happened, but why an AI app or agent behaved the way it did, and which identities, data sources, or integrations were involved. Equally important is integration with broader security event and incident management workflows. Runtime insights should flow into existing security operations so AI-related incidents can be triaged, investigated, and resolved alongside other enterprise security events—otherwise AI solutions risk becoming blind spots in a customer’s operating environment.

Preparing for incidents and abuse scenarios

No AI app or agent operates in a perfectly controlled environment. Once deployed, these systems are exposed to real users, unpredictable inputs, evolving data, and changing integrations. Preparing for incidents and abuse scenarios is therefore a core security requirement, not a contingency plan. AI apps and agents introduce unique incident patterns compared to traditional software. In addition to infrastructure failures, teams must be prepared for prompt abuse, unintended agent actions, data exposure, and misuse of delegated access.
Because agents may act autonomously or continuously, incidents can propagate quickly if safeguards and response paths are unclear. Effective incident readiness starts with acknowledging that:

- Abuse is not always malicious: misuse can stem from ambiguous prompts, unexpected context, or misunderstood capabilities
- Agent autonomy may increase impact, especially when actions span multiple systems or data sources
- Security incidents may be behavioral, not just technical, requiring interpretation of intent and outcomes

Preparing for these scenarios requires clearly defined response strategies that account for how AI systems behave in production. AI solutions should be designed to support pausing, constraining, or revoking agent capabilities when risk is detected, and to do so without destabilizing the broader system or customer environment. Incident response must also align with customer expectations and regulatory obligations. Customers need confidence that AI‑related issues will be handled transparently, proportionately, and in accordance with applicable security and privacy standards. Clear boundaries around responsibility, communication, and remediation help preserve trust when issues arise.

How security decisions shape Marketplace readiness

From initial review to customer adoption and long‑term operation, security posture is a visible and consequential signal of readiness. AI apps and agents with clear boundaries—around identity, data access, autonomy, and runtime behavior—are easier to evaluate, onboard, and trust. When security assumptions are explicit, Marketplace review becomes more predictable, customer expectations are clearer, and operational risk is reduced. Ambiguous trust boundaries, implicit data access, or uncontrolled agent actions can introduce friction during review, delay onboarding, or undermine customer confidence after deployment. Marketplace‑ready security is therefore not about meeting a minimum bar. It is about enabling scale.
Well-designed security allows AI apps and agents to integrate into enterprise environments, align with customer governance models, and evolve safely as capabilities expand. When security is treated as a first‑class architectural concern, it becomes an enabler rather than a blocker—supporting faster time to market, stronger customer trust, and sustainable growth through Microsoft Marketplace.

What’s next in the journey

Security for AI apps and agents is not a one‑time decision, but an ongoing design discipline that evolves as systems, data, and customer expectations change. By establishing clear boundaries, embedding guardrails into the architecture, and preparing for real‑world operation, publishers create a foundation that supports safe iteration, predictable behavior, and long‑term trust. This mindset enables AI apps and agents to scale confidently within enterprise environments while meeting the expectations of customers adopting solutions through Microsoft Marketplace. See the next post in the series: Designing AI guardrails for apps and agents in Marketplace | Microsoft Community Hub.

Key resources

- See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
- Quick-Start Development Toolkit
- Microsoft AI Envisioning Day Events
- How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

Governing AI apps and agents for Marketplace
Governance is what turns powerful AI functionality into a solution that enterprises can confidently adopt, operate, and scale. It establishes clear responsibility for actions taken by the system, defines explicit boundaries for acceptable behavior, and creates mechanisms to review, explain, and correct outcomes over time. Without this structure, AI systems can become difficult to manage as they grow more connected and autonomous. For publishers, governance is how trust is earned — and sustained — in enterprise environments. It signals that AI behavior is intentional, accountable, and aligned with customer expectations, not left to inference or assumption. As AI apps and agents operate across users, data, and systems, risk shifts away from what a model can generate and toward how its behavior is governed in real‑world conditions. Marketplace readiness reflects this shift. It is defined less by raw capability and more by control, accountability, and trust.

You can always get curated step-by-step guidance through building, publishing, and selling apps for Marketplace through App Advisor. This post is part of a series on building and publishing well-architected AI apps and agents in Microsoft Marketplace. The series focuses on AI apps and agents that are architected, hosted, and operated on Azure, with guidance aligned to building and selling solutions through Microsoft Marketplace.

What governance means for AI apps and agents

Governance in AI systems is operational and continuous. It is not limited to documentation, checklists, or periodic reviews — it shapes how an AI app or agent behaves while it is running in real customer environments. For AI apps and agents, governance spans three closely connected dimensions:

- Policy: what the system is allowed to do, what data it is allowed to access, what is restricted, and what is explicitly prohibited.
- Enforcement: how those policies are applied consistently in production, even as context, inputs, and conditions change.
- Evidence: how decisions and actions are traced, reviewed, and audited over time.

Governance works when intent, behavior, and proof move together — turning expectations into outcomes that can be trusted and examined. These dimensions are interdependent: policy without enforcement is aspiration, and enforcement without evidence is unverifiable.

Governance in action

Governance becomes real when responsibility is explicit. For AI apps and agents, this starts with clarity around who is responsible for what:

- Who the agent acts for — and how its use protects business value: ensuring the agent is used for its intended purpose, produces measurable value, and is not misused, over‑extended, or operating outside approved business contexts.
- Who owns data access and data quality decisions: governing how the agent consumes and produces data, whether access is appropriate, and whether the data used or generated is reliable, accurate, and aligned with business and integrity expectations.
- Who is accountable for outcomes when behavior deviates: defining responsibility when the agent’s behavior creates risk, degrades value, or produces unexpected outcomes — so corrective action is timely, intentional, and owned.

When governance is left vague or undefined, accountability gaps surface and agent actions become difficult to justify and explain across the publisher, the customer, and the solution itself. In this model, responsibility is shared but distinct. The publisher is responsible for designing and implementing the governance capabilities within the solution — defining boundaries, enforcement points, and evidence mechanisms that protect business value by default. Marketplace customers expect to understand who is accountable before they adopt an AI solution, not after an incident forces the question.
The customer is responsible for configuring, operating, and applying those capabilities within their own environment, aligning them to internal policies, risk tolerance, and day‑to‑day use. Governance works when both roles are clear: the publisher provides the structure, and the customer brings it to life in practice.

Data governance for AI: beyond storage and access

For Marketplace‑ready AI apps and agents, data governance must account for where data moves, not just where it resides. Understanding how data flows across systems, tools, and tenants is essential to maintaining trust as solutions scale. Data governance for AI apps and agents extends beyond where data is stored. These systems introduce new artifacts that influence behavior and outcomes, including prompts and responses, retrieval context and embeddings, and agent‑initiated actions and tool outputs. Each of these elements can carry sensitive information and shape downstream decisions. Effective data governance for AI apps and agents requires clear structure:

- Explicit data ownership — defining who owns the data and under what conditions it can be accessed or used
- Access boundaries and context‑aware authorization — ensuring access decisions reflect identity, intent, and environment, not just static permissions
- Retention, auditability, and deletion strategies — so data use remains traceable and aligned with customer expectations over time

Relying on prompts or inferred intent to determine access is a governance gap, not a shortcut. Without explicit controls, data exposure becomes difficult to predict or explain.

Runtime policy enforcement in production

Policies are stress-tested when the agent is responding to real prompts, touching real data, and taking actions that carry real consequences.
For software companies building AI apps and agents for Microsoft Marketplace, runtime enforcement is also how you keep the system fit for purpose: aligned to its intended use, supported by evidence, and constrained when conditions change. At runtime, governance becomes enforceable through three clear lanes of behavior:

- Decisions that require human approval. Use approval gates for higher‑impact steps (for example: executing a write operation, sending an external request, or performing an irreversible workflow). This protects the business value of the agent by preventing “helpful” behavior from turning into misuse.
- Actions that can proceed automatically — within defined limits. Automation is earned through clarity: define the agent’s intended uses and keep tool access, data access, and action scope anchored to those uses. Fit‑for‑purpose isn’t a feeling — it’s something you support with defined performance metrics, known error types, and release criteria that you measure and re‑measure as the system runs.
- Behaviors that are never permitted — regardless of context or intent. Block classes of behavior that violate policy (including jailbreak attempts that try to override instructions, expand tool scope, or access disallowed data). When an intended use is not supported by evidence — or new evidence shows it no longer holds — treat that as a governance trigger: remove or revise the intended use in customer‑facing materials, notify customers as appropriate, and close the gap or discontinue the capability.

To keep runtime enforcement meaningful over time, pair it with ongoing evaluation: document how you’ll measure performance and error patterns, run those evaluations pre‑release and continuously, and decide how often re‑evaluation is needed as models, prompts, tools, and data shift. This is what keeps autonomy intentional.
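The three lanes described above map naturally onto a small policy decision function: deny, require approval, or allow. The action names and hard-coded sets in this sketch are invented for illustration; a real enforcement layer would be driven by externalized, customer-configurable policy rather than constants in code.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"                        # proceed automatically, within limits
    REQUIRE_APPROVAL = "require_approval"  # gate on a human decision
    DENY = "deny"                          # never permitted, regardless of intent

# Illustrative classification of agent actions into the three lanes.
NEVER_PERMITTED = {"exfiltrate_data", "modify_own_policy"}
NEEDS_APPROVAL = {"external_request", "write_record", "delete_record"}

def evaluate(action: str) -> Verdict:
    """Route an agent action through the deny / approve / allow lanes.

    Checked on every action: the deny lane is evaluated first so that
    prohibited behavior can never be reached via the other two lanes.
    """
    if action in NEVER_PERMITTED:
        return Verdict.DENY
    if action in NEEDS_APPROVAL:
        return Verdict.REQUIRE_APPROVAL
    return Verdict.ALLOW
```

Evaluating the deny lane first is the design choice that matters here: no combination of context or prompt content can promote a prohibited action into the approval or automatic lanes.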
It allows AI apps and agents to operate usefully and confidently, while ensuring behavior remains aligned with defined expectations — and backed by evidence — as systems evolve and scale.

Auditability, explainability, and evidence

Guardrails are the points in the system where governance becomes observable: where decisions are evaluated, actions are constrained, and outcomes are recorded. As described in Designing AI guardrails for apps and agents in Marketplace, guardrails shape how AI systems reason, access data, and take action — consistently and by default. Guardrails may be embedded within the agent itself or implemented as a separate supervisory layer — another agent or policy service — that evaluates actions before they proceed.

Guardrail responses exist on a spectrum. Some enforce in the moment — blocking an action or requiring approval before it proceeds — while others generate evidence for post‑hoc review. Marketplace‑ready AI apps and agents could implement both, with the response mode matched to the severity, reversibility, and business impact of the action in question. These expectations align with the governance and evidence requirements outlined in the Microsoft Responsible AI Standard v2 General Requirements.

In practice, guardrails support auditability and explainability by:

- Constraining behavior at design time — establishing clear defaults around what the system can and cannot do, so intended use is enforced before the system ever reaches production.
- Evaluating actions at runtime — making decisions visible as they happen: which tools were invoked, which data was accessed, and why an action was allowed to proceed or blocked.

When governance is unclear, even strong guardrails lose their effectiveness. Controls may exist, but without clear intent they become difficult to justify, unevenly applied across environments, or disconnected from customer expectations.
Over time, teams lose confidence not because the system failed, but because they can’t clearly explain why it behaved the way it did. When governance and guardrails are aligned, the result is different. Behavior is intentional. Decisions are traceable. Outcomes can be explained without guesswork. Auditability stops being a reporting exercise and becomes a natural byproduct of how the system operates day to day.

Aligning governance with Marketplace expectations

Governance for AI apps and agents must operate continuously, across all in‑scope environments — in both the publisher’s and the customer’s tenants. Marketplace solutions don’t live in a single boundary, and governance cannot stop at deployment or certification. Runtime enforcement is what keeps governance active as systems run and evolve. In practice, this means:

- Blocking or constraining actions that violate policy — such as stopping jailbreak attempts that try to override system instructions, escalate tool access, or bypass safety constraints through crafted prompts
- Adapting controls based on identity, environment, and risk — applying stricter limits when an agent acts across tenants, accesses sensitive data, or operates with elevated permissions
- Aligning agent behavior with enterprise expectations in real time — ensuring actions taken on behalf of users remain within approved roles, scopes, and approval paths

These controls matter because AI behavior is dynamic. The same agent may behave differently depending on context, inputs, and downstream integrations. Governance must be able to respond to those shifts as they happen. Runtime enforcement is distinct from monitoring. Enforcement determines what is allowed to continue. Monitoring explains what happened once it’s already done. Marketplace‑ready AI solutions need both, but governance depends on enforcement to keep behavior aligned while it matters most.
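As a minimal sketch, the three enforcement lanes (human-approved, automatic within limits, never permitted) plus the default-deny posture described above might look like the following. The action names and policy tables are hypothetical illustrations; a real system would load them from governed configuration and adapt them to identity, environment, and risk:

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"                        # proceeds automatically, within scope
    REQUIRE_APPROVAL = "require_approval"  # gated on a human decision
    DENY = "deny"                          # never permitted, regardless of intent

# Hypothetical policy tables (illustrative only).
BLOCKED_ACTIONS = {"disable_guardrails", "exfiltrate_data"}        # never permitted
HIGH_IMPACT_ACTIONS = {"write_record", "send_external_request"}    # need a human gate
APPROVED_SCOPE = {"read_record", "summarize",
                  "write_record", "send_external_request"}         # intended uses

def enforce(action: str) -> Verdict:
    """Classify a proposed agent action before it executes."""
    if action in BLOCKED_ACTIONS:
        return Verdict.DENY                # policy violations are blocked outright
    if action not in APPROVED_SCOPE:
        return Verdict.DENY                # outside intended use: deny by default
    if action in HIGH_IMPACT_ACTIONS:
        return Verdict.REQUIRE_APPROVAL    # higher-impact steps go through a gate
    return Verdict.ALLOW
```

The design choice to note is that enforcement runs before the action, not after: monitoring can only explain what already happened, while a check like this determines what is allowed to continue.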
Operational health through auditability and traceability

Operational health is the combination of traceability (what happened) and intelligibility (how to use it responsibly). When both are present, governance becomes a quality signal customers can feel day to day — not because you promised it, but because the system consistently behaves in ways they can understand and trust. Healthy AI apps and agents are not only traceable — they are intelligible in the moments that matter. For Marketplace customers, operational trust comes from being able to understand what the system is intended to do, interpret its behavior well enough to make decisions, and avoid over‑relying on outputs simply because they are produced confidently.

A practical way to ground this is to be explicit about who needs to understand the system:

- Decision makers — the people using agent outputs to choose an action or approve a step
- Impacted users — the people or teams affected by decisions informed by the system’s outputs

Once those stakeholders are clear, governance shows up as three operational promises you can actually support:

- Clarity of intended use. Customers can see what the agent is designed to do (and what it is not designed to do), so outputs are used in the right contexts.
- Interpretability of behavior. When an agent produces an output or recommendation, stakeholders can interpret it effectively — not perfectly, but reasonably well — with the context they need to make informed decisions.
- Protection against automation bias. Your UX, guidance, and operational cues help customers stay aware of the natural tendency to over‑trust AI output, especially in high‑tempo workflows.

This is where auditability and traceability become more than logs.
Well‑governed AI systems should still answer:

- Who initiated an action — a user, an agent acting on their behalf, or an automated workflow
- What data was accessed — under which identity, scope, and context
- What decision was made, and why — especially when downstream systems or people are affected

The logs should show evidence that stakeholders can interpret those outputs in realistic conditions — and there should be a method to evaluate this, with clear criteria for release and ongoing evaluation as the solution evolves. Explainability still needs balance. Customers deserve transparency into intended use, behavior boundaries, and how to interpret outcomes — without requiring you to expose proprietary prompts, internal logic, or implementation details. For more information on securing your AI apps and agents, visit Securing AI apps and agents on Microsoft Marketplace | Microsoft Community Hub.

What's next in the journey

Governance creates the conditions for AI apps and agents to operate with confidence over time. With clear policies, enforcement, and evidence in place, publishers are better prepared to focus on operational maturity — how solutions are observed, maintained, and evolved safely in production. The next post explores what it takes to keep AI apps and agents healthy as they run, change, and scale in real customer environments. See the next post in the series: Quality and evaluation framework for successful AI apps and agents in Microsoft Marketplace | Microsoft Community Hub.
Key resources

- See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
- Quick-Start Development Toolkit can connect you with code templates for AI solution patterns
- Microsoft AI Envisioning Day Events
- How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

Meet customer business needs with flexible billing schedules in the marketplace
Co-authored by Trevor_Yeats

Today, we’re announcing that Microsoft now offers flexible billing schedules through private offers to better align with customer needs. Flexible billing schedules are available globally in all marketplace-supported currencies. Watch these demo videos to learn more about flexible billing schedules.

The value for your customers and for you

With flexible billing schedules, customers can buy with confidence knowing that private offers can be customized for virtually any contract value and billing timeline to align with their requirements. Partners can tailor customer private offers and multiparty private offers to meet those requirements. This streamlines sales and accelerates deal velocity. With over 100 partners in our private preview, we’re excited to make this capability publicly available. Many of these partners have achieved remarkable success, closing deals worth millions of dollars.

“Flexible billing in Microsoft marketplace has significantly improved how our sales teams engage with customers. It allows them to meet each organization's and customer's procurement needs, whether it's aligning with fiscal year budgets or accelerating project timelines from evaluations to implementations. Additionally, it helps with managing cloud commitment benefits. This flexibility has made it easier for our customers to purchase and deploy solutions faster, without waiting for specific budgets to become available. We can now set up flexible billing schedules to accommodate their needs.”
Brett Ferancy, Global Alliance Leader, Abnormal AI

Example use cases

See below for some real-world examples of how partners and customers are leveraging flexible billing.

Variable pricing with specific dates. In this example, the customer pays a setup fee at the start of billing, followed by variable pricing throughout the contract to match consumption patterns and budget cycles.
| 3-year deal | $80M total | Notes |
|---|---|---|
| Immediate charge when billing starts | $2M | Setup fee – 1st month |
| 01 Jan 2025 | $5M | Year 1 installment #2 |
| 15 Jul 2025 | $3M | Year 1 installment #3 |
| 01 Jan 2026 | $10M | Year 2 installment #1 |
| 15 Jul 2026 | $10M | Year 2 installment #2 |
| 01 Jan 2027 | $20M | Year 3 installment #1 |
| 15 Jul 2027 | $30M | Year 3 installment #2 |

Variable quarterly billing. In this example, the customer pays a setup fee at the start of billing, followed by variable pricing each quarter.

| 1-year deal | $10M total | Notes |
|---|---|---|
| Immediate when billing starts | $2M | Q1 |
| 01 Jun 2025 | $3M | Q2 |
| 01 Sep 2025 | $2M | Q3 |
| 01 Dec 2025 | $3M | Q4 |

Delayed start for billing. In this example, the customer gets the first two months free, followed by varied payments throughout the contract to match budget cycles.

| 2-year deal | $25M total | Notes |
|---|---|---|
| Immediate when billing starts | $0M | Free – 2 months |
| 01 Mar 2024 | $10M | Year 1 fee |
| 15 Jan 2025 | $5M | Year 2 installment 1 |
| 01 Jul 2025 | $10M | Year 2 installment 2 |

How it works

To start using flexible billing for private offers:

1. The software partner creates a private offer in the marketplace. Currently, flexible billing supports SaaS flat rate offers, VM software reservations, and professional services. The partner must choose “Customize SaaS plans and Professional Services” or “Customize VM software reservations” when creating a new private offer. On the configure pricing page, under “billing frequency,” the partner selects “flexible schedule” when the contract duration is 1 year or greater.
2. The software partner creates the billing schedule with up to 70 installments, up to $100,000,000 USD (or any of the currencies supported by marketplace), over the length of the deal. There is also an option to book an immediate charge when billing starts or delay the first charge to a date in the future. Private offers can have up to ten included product plans. Each plan has its own billing frequency and may include a unique flexible schedule.
A flexible schedule does not automatically apply to all plans included in the private offer; it must be set up independently for each plan. The software partner can also create a schedule in their customer’s local billing currency using the market pricing template. The customer then accepts and purchases the private offer with the flexible billing schedule.

For a multiparty private offer, the process is the same, except:

- The software partner sends the private offer to the channel partner.
- The channel partner adds their price adjustment percentage, aligned to the flexible billing schedule, and passes it to the customer.

Eligibility

Any company that is part of the Microsoft AI Cloud Partner Program can sell on the marketplace through private offers with flexible billing. Details are provided in our documentation, but at a high level:

- Be a member of the Microsoft AI Cloud Partner Program (it’s free to join)
- Sign the marketplace publisher agreement
- Publish your offer
- Sell private offers with flexible billing

In addition, we have many support resources for partners depending on where they are on their marketplace journey. For example, software development companies can join ISV Success for tools and resources that help them publish their solution and maximize its reach on the marketplace.

Get started with flexible billing on marketplace

We invite you to start leveraging these new improvements to flexible billing today. Learn more by visiting aka.ms/flexbill-docs.

Issues with Account Verification and Revoked Partner Status
Hello everyone,

Since July 30th, I have been struggling to resolve an issue with account verification in Microsoft Partner Center and to restore my partner status. Here's a summary of the problem:

- In April, I successfully completed the verification process by submitting all the required documents to confirm my domain ownership. The documents met all the criteria, including the 12-month validity requirement.
- On July 30th, my developer status was revoked without any explanation. I have requested clarification several times, but each time I only received requests to resubmit the same documents I had already provided.
- I resubmitted the documents on August 8th, but I still haven't received any constructive response, just automated messages.
- Recently, I received a message saying that several attempts to verify my information were unsuccessful, and the process was closed. This could result in my relationship with Microsoft being terminated within 30 days.

My team has spent a year learning the Graph API and developing our Outlook application, and now it seems we won’t be able to publish it due to unclear verification issues. Could someone please advise if there is a way to expedite the resolution of this problem? I have submitted all the required documents and met the necessary criteria, yet I continue to face rejections.

Here are my support ticket numbers for reference:

- 2404250040003907
- 2409030040003632
- 2408210040001904
- 2409060040005012

I would greatly appreciate any help or advice on how to proceed. Thank you!

Explore Curated Marketplace Resources
Good morning, afternoon, evening community! I wanted to introduce myself; I am Stephanie, a pug and plant lover, and a leader in Marketplace FastTrack at Microsoft. I am here to help you streamline your Marketplace journey through curated self-serve resources to help you go further, faster. Reach out if you have any questions, feedback, or need help to publish an offer or win an imminent deal.

📘 Looking for a guide on publishing and transacting on the Azure Marketplace? Explore the Marketplace Playbook.
⚙️ Need to simplify SaaS offer creation? Use the open-source code on GitHub for SaaS Accelerator.
🎥 Want to learn how flexible billing can unlock new deal structures? Learn more about Flexible Billing. Use Mastering the Marketplace videos and labs to upskill end-to-end.
💸 Bringing an off-Marketplace renewal or renewing a Marketplace offer? Don’t forget the agency fee discount.
🤖 Have you tried the App Advisor or Partner Center AI Assistant? We’d love your feedback!

🚀 Now available: Flexible billing schedules in Microsoft marketplace
Microsoft is making it easier than ever to meet customer procurement needs with flexible billing schedules for private offers—available globally across all marketplace-supported currencies.

✅ Align deals with virtually any contract value and billing timeframe
✅ Streamline sales and accelerate deal velocity
✅ Supports SaaS flat rate offers, VM software reservations, and professional services
✅ Available for customer private offers and multiparty private offers

Over 100 partners in preview are already seeing success—now it’s your turn.

📖 Read more about how it works and explore real-world use cases:
👉 Meet customer business needs with flexible billing schedules in the marketplace

What changed for co-sell after QRP retirement — what ISVs should update now
Microsoft retired QRP last week and consolidated referrals into Partner Center. What most ISVs haven't caught yet is that the new AI-based matching reads your solution descriptions and industry tags differently than QRP did. If your listing hasn't been updated since early 2025, you may be getting matched to lower-quality leads or missed entirely.

Three things worth reviewing now: your solution area tags, your customer segment fields, and your co-sell 1-pager format. Happy to share more detail on what I've seen reviewing several listings. Drop your questions below.

How do you actually unlock growth from Microsoft Teams Marketplace?
Hey folks 👋 Looking for some real-world advice from people who’ve been through this.

Context: We’ve been listed as a Microsoft Teams app for several years now. The app is stable, actively used, and well-maintained - but for a long time, Teams Marketplace wasn’t a meaningful acquisition channel for us. Things changed a bit last year. We started seeing organic growth without running any dedicated campaigns, plus more mid-market and enterprise teams installing the app, running trials, and even using it in production. That was encouraging - but it also raised a bigger question. How do you actually systematize this and get real, repeatable benefits from the Teams Marketplace?

I know there are Microsoft Partner programs, co-sell motions, marketplace benefits, etc. - but honestly, it’s been very hard to figure out:

- where exactly to start
- what applies to ISVs building Teams apps
- how to apply correctly
- and what actually moves the needle vs. what’s just “nice to have”

On top of that, it’s unclear how (or if) you can interact directly with the Teams/Marketplace team. From our perspective, this should be a win-win: we invest heavily into the platform, build for Teams users, and want to make that experience better.

Questions to the community:

- If you’re a Teams app developer: what actually worked for you in terms of marketplace growth?
- Which Partner programs or motions are worth the effort, and which can be safely ignored early on?
- Is there a realistic way to engage with the Teams Marketplace team (feedback loops, programs, office hours, etc.)?
- How do you go from “organic installs happen” to a structured channel?

Would really appreciate any practical advice, lessons learned, or even “what not to do” stories 🙏 Thanks in advance!