Access fixes released in Version 2604 (Build 16.0.19929.20090)
The following bugs were fixed in this release:

- Values display in the wrong control when using a form as a sublist: When a form was used as a sublist (subdatasheet), field values could display in the wrong control, showing data in incorrect positions. Values now display in the correct controls.
- Applications that use the Access Database Engine (ACEOLEDB) terminate unexpectedly on exit: Third-party applications using the Access Database Engine (ACEOLEDB) provider could terminate unexpectedly when closing. The shutdown sequence has been corrected.
- Long Text field corrupted when a query updates a record while a user is editing it: When a query updated a Long Text field on a record that was simultaneously open for editing, the field data could become corrupted. The record update now correctly handles concurrent access to Long Text fields.
- Rendering errors with Aptos (Detail) font: Controls using the Aptos (Detail) font variant could render incorrectly, with characters appearing misaligned or garbled. The font rendering has been corrected.
- Standard colors in Access didn't match other Office apps: The standard color palette in Access used different color values than other Office applications such as Word and Excel. The color palette has been updated to match the rest of Office.
- Option Group with Vertical Anchor Bottom: option buttons show incorrect visual state after clicking: When an option group control had its Vertical Anchor property set to Bottom, clicking an option button would not correctly update the visual state of the buttons. The visual state now updates correctly regardless of the anchor setting.
- Query Design: Insert/Delete Columns don't work when ribbon is set to Show Tabs Only: In Query Design view, the Insert Columns and Delete Columns commands on the ribbon did not work when the ribbon display option was set to "Show Tabs Only." The commands now work correctly regardless of ribbon display mode.
- SQL View: Ctrl+K should toggle pretty formatting off/on: In the Monaco SQL editor, the Ctrl+K keyboard shortcut did not toggle SQL formatting. Ctrl+K now correctly toggles pretty formatting on and off.
- Monaco editor incorrectly converts Unicode characters in SQL view: When switching between Design View and SQL View, the Monaco SQL editor could incorrectly convert certain Unicode characters, corrupting the SQL text. Unicode characters are now preserved correctly.
- Importing text files with Unicode characters in the filename fails: Attempting to import a text file whose filename contained certain Unicode characters would fail. File imports now handle Unicode filenames correctly.
- Added VarP and StDevP to the Totals query aggregate dropdown: The VarP (population variance) and StDevP (population standard deviation) aggregate functions were missing from the Totals row dropdown in Query Design view. They have been added alongside the existing Var and StDev options.
- Added VarP and StDevP to the datasheet totals row dropdown: The VarP and StDevP aggregate functions were missing from the Totals row dropdown in Datasheet view. They have been added to match the options available in Query Design view.
- Access hangs at shutdown when VBA holds temporary DAO field references: Access could hang during shutdown when VBA code created temporary DAO field references. The shutdown process now correctly cleans up temporary field references.
- Full Screen Mode ribbon display option does nothing in Access: Selecting "Full Screen Mode" from the ribbon display options had no effect in Access. This option now works correctly, hiding the ribbon to maximize the available workspace.

Dev Containers for .NET in VS Code: A Beginner-Friendly Guide That Actually Works
What Dev Containers are really about

At a high level, Dev Containers let you use a Docker container as your development environment inside VS Code. But the real idea is not "Docker for development". The real idea is this: move all environment complexity out of your laptop and into version-controlled configuration.

With Dev Containers:

- Your laptop becomes just a VS Code client
- Your tools, SDKs, runtimes, and dependencies live inside the container
- Your project defines its own development environment, not your machine

This means:

- You can switch projects without breaking anything
- You can delete and recreate your environment safely
- New developers get the same setup without tribal knowledge

Why Dev Containers are so useful for .NET projects

.NET development often looks simple at first, until it doesn't. Common pain points:

- Different developers using different .NET SDK versions
- One project needs .NET 6, another needs .NET 8
- Native dependencies work on one machine but not another
- CI runs on Linux but developers run on Windows

Dev Containers solve this by:

- Locking the SDK version and OS used for development
- Running everything in a Linux container (close to CI and production)
- Keeping developer machines clean and stable
- Making onboarding almost instant: clone → reopen in container → run

Once the .devcontainer folder is committed to the repo, the environment becomes part of the codebase, not a wiki page.

How Dev Containers work in VS Code

You don't need deep Docker knowledge to use Dev Containers. Here's the mental model that helped me:

1. Your repository contains a .devcontainer folder
2. Inside it, devcontainer.json describes the development environment
3. VS Code reads that file and starts a container
4. VS Code connects to the container and runs extensions inside it

Your source code stays on your machine, but:

- the terminal runs inside the container
- the debugger runs inside the container
- the SDKs live inside the container

If something breaks, you rebuild the container, not your laptop.
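The mental model above fits in a surprisingly small configuration. As a minimal sketch (the image tag and extension ID are illustrative choices, not requirements), a single-container devcontainer.json for a .NET project can be as short as this:

```json
{
  "name": "dotnet-minimal",
  "image": "mcr.microsoft.com/devcontainers/dotnet:1-8.0",
  "customizations": {
    "vscode": {
      "extensions": ["ms-dotnettools.csdevkit"]
    }
  },
  "postCreateCommand": "dotnet --info"
}
```

VS Code pulls the image, starts the container, installs the listed extensions inside it, and runs the postCreateCommand once after creation. The multi-service example later in this guide swaps "image" for "dockerComposeFile" to add a database alongside the app.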
When Dev Containers are a great choice (and when they're not)

Dev Containers are a great fit when:

- You work on multiple projects with different requirements
- Your team struggles with environment consistency
- You want Linux parity for CI and containerized deployments
- You value reproducibility over ad-hoc local setup

They may not be ideal when:

- You're working on very small throwaway scripts
- You rely heavily on Windows-only tooling
- You cannot use Docker at all in your environment

For most professional .NET teams, the benefits far outweigh the cost.

Docker on Windows: a choice you must make early

When starting with Dev Containers on Windows, one of the first decisions you must make is how Docker runs on your machine. Both Docker Desktop and Docker Engine inside WSL work well with Dev Containers, but they serve slightly different needs.

Using Docker Desktop

Docker Desktop is the easiest and most beginner-friendly way to get started with Dev Containers.

Pros:

- Very quick setup with minimal configuration
- Comes with a graphical dashboard for containers, images, and logs
- Integrates smoothly with VS Code and WSL2
- Easier to troubleshoot when you're learning

Cons:

- Uses more system resources in the background
- Runs additional services even when you're not actively developing
- May be restricted or licensed differently in some enterprise environments

When to use Docker Desktop:

- You are new to Docker or Dev Containers
- You want the simplest and fastest setup
- You value ease of use over fine-grained control
- You are working on personal projects or in environments where Docker Desktop is allowed

For most developers starting out with Dev Containers, Docker Desktop is the recommended entry point.

Using Docker Engine inside WSL

This approach installs Docker Engine directly inside a Linux distribution (like Ubuntu) running on WSL2, without Docker Desktop.
Pros:

- Lower resource usage compared to Docker Desktop
- Linux-native behavior (closer to CI and production)
- No dependency on Docker Desktop
- Often preferred in enterprise or restricted environments

Cons:

- Requires manual installation and configuration
- Needs basic Linux and WSL knowledge
- No graphical UI; everything is CLI-based

When to use Docker Engine in WSL:

- Docker Desktop is not allowed or restricted
- You want a leaner, Linux-first workflow
- You already work mostly inside WSL
- You want tighter control over your Docker setup

This approach is ideal once you are comfortable with Docker and WSL.

Note: Do not mix Docker Desktop and Docker Engine inside WSL. Pick one approach and stick with it. Running both at the same time often leads to Docker context confusion and Dev Containers failing in unpredictable ways, even when your configuration looks correct.

A performance tip that makes a huge difference

If you're using Linux containers with WSL, store your code inside the WSL filesystem.

Recommended: /home/<user>/projects/your-repo
Avoid: /mnt/c/Users/<user>/your-repo

Linux containers accessing Windows files are slower and cause file-watching issues. Moving the repo into WSL made my Dev Containers feel almost native.

First-time setup: the simplest way to start

If you're trying Dev Containers for the first time, follow this exact order:

1. Install Visual Studio Code
2. Install the Dev Containers extension
3. Install Docker Desktop (or Docker Engine in WSL)
4. Clone your repo inside the WSL filesystem
5. Open the folder in VS Code
6. Run "Dev Containers: Reopen in Container"

That's it. VS Code handles the rest.
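A quick sanity check for the performance tip above: from a WSL terminal in your repo directory, this small sketch inspects only the working-directory path and warns if you are on the slow Windows mount.

```shell
# Warn if the current directory lives on the Windows mount (/mnt/*),
# where Linux containers see slow file I/O and flaky file watching.
case "$PWD" in
  /mnt/*) echo "WARNING: repo is on the Windows mount; expect slow builds" ;;
  *)      echo "OK: repo is on the Linux filesystem" ;;
esac
```

If you see the warning, move the checkout under /home/<user>/ before opening it in a Dev Container.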
Your first .NET Dev Container (hands-on example)

Tech Stack

- .NET 8 Web API
- PostgreSQL 16
- Entity Framework Core + Npgsql
- VS Code Dev Containers
- Docker Compose

Project Structure

```
my-blog-api/
├─ .devcontainer/
│  └─ devcontainer.json
├─ docker-compose.yml
└─ src/
   └─ BlogApi/
      ├─ Program.cs
      ├─ BlogApi.csproj
      ├─ appsettings.json
      ├─ Models/
      └─ Data/
```

Step 1: Create the Web API

```shell
mkdir my-blog-api
cd my-blog-api
mkdir src && cd src
dotnet new webapi -n BlogApi
cd BlogApi
```

Step 2: Add EF Core + PostgreSQL packages

```shell
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package Microsoft.EntityFrameworkCore.Design
```

Step 3: Docker Compose (API + PostgreSQL)

Create docker-compose.yml at the repo root:

```yaml
version: "3.8"
services:
  app:
    image: mcr.microsoft.com/devcontainers/dotnet:1-8.0
    volumes:
      - .:/workspace:cached
    working_dir: /workspace
    command: sleep infinity
    ports:
      - "5000:5000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: devuser
      POSTGRES_PASSWORD: devpwd
      POSTGRES_DB: devdb
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
    ports:
      - "5050:80"
    depends_on:
      - db
volumes:
  pgdata:
```

Step 4: Dev Container configuration

Create .devcontainer/devcontainer.json:

```json
{
  "name": "dotnet-postgres-devcontainer",
  "dockerComposeFile": "../docker-compose.yml",
  "service": "app",
  "workspaceFolder": "/workspace",
  "shutdownAction": "stopCompose",
  "customizations": {
    "vscode": {
      "extensions": [
        "ms-dotnettools.csdevkit",
        "ms-dotnettools.csharp",
        "ms-azuretools.vscode-docker"
      ]
    }
  },
  "postCreateCommand": "dotnet restore"
}
```

Open the folder in VS Code and run: Dev Containers: Reopen in Container

Step 5: Connection string (container-to-container)

Update appsettings.json:

```json
{
  "ConnectionStrings": {
    "DefaultConnection": "Host=db;Port=5432;Database=devdb;Username=devuser;Password=devpwd"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*"
}
```

Host=db works because Docker Compose provides internal DNS between services.

Step 6: EF Core Model & DbContext

Post entity, Models/Post.cs:

```csharp
namespace BlogApi.Models;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; } = string.Empty;
    public string Content { get; set; } = string.Empty;
    public DateTime CreatedUtc { get; set; } = DateTime.UtcNow;
}
```

DbContext, Data/BlogDbContext.cs:

```csharp
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

namespace BlogApi.Data;

public class BlogDbContext : DbContext
{
    public BlogDbContext(DbContextOptions<BlogDbContext> options) : base(options) { }

    public DbSet<Post> Posts => Set<Post>();
}
```

Step 7: Program.cs (Minimal CRUD)

Replace Program.cs with:

```csharp
using BlogApi.Data;
using BlogApi.Models;
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddDbContext<BlogDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("DefaultConnection")));
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

// Apply migrations on startup (dev-only convenience)
using (var scope = app.Services.CreateScope())
{
    var db = scope.ServiceProvider.GetRequiredService<BlogDbContext>();
    db.Database.Migrate();
}

app.UseSwagger();
app.UseSwaggerUI();

app.MapGet("/posts", async (BlogDbContext db) =>
    await db.Posts.OrderByDescending(p => p.CreatedUtc).ToListAsync());

app.MapPost("/posts", async (Post post, BlogDbContext db) =>
{
    db.Posts.Add(post);
    await db.SaveChangesAsync();
    return Results.Created($"/posts/{post.Id}", post);
});

app.MapPut("/posts/{id:int}", async (int id, Post input, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();
    post.Title = input.Title;
    post.Content = input.Content;
    await db.SaveChangesAsync();
    return Results.Ok(post);
});

app.MapDelete("/posts/{id:int}", async (int id, BlogDbContext db) =>
{
    var post = await db.Posts.FindAsync(id);
    if (post is null) return Results.NotFound();
    db.Posts.Remove(post);
    await db.SaveChangesAsync();
    return Results.NoContent();
});

app.Run("http://0.0.0.0:5000");
```

Step 8: Run migrations (inside the Dev Container)

```shell
cd src/BlogApi
dotnet tool install --global dotnet-ef
export PATH="$PATH:/home/vscode/.dotnet/tools"
dotnet ef migrations add InitialCreate
dotnet ef database update
```

Step 9: Run the API

```shell
dotnet run
```

🔗 Open:

- Swagger → http://localhost:5000/swagger
- Posts API → http://localhost:5000/posts

Common mistakes and quick fixes

- Mixing Docker models: random failures. Fix: use only one Docker approach.
- Code under /mnt/c: slow builds. Fix: move the repo to the WSL filesystem.
- Docker not running: the container won't start. Fix: check docker info.
- Pruning first: issues return. Fix: repair the daemon/context first.

Common Challenges Faced

- Multiple Docker engines active simultaneously: Docker Desktop and Docker Engine inside WSL were both present, causing conflicts.
- Unstable Docker CLI context: the Docker CLI intermittently pointed to different or broken Docker endpoints.
- Docker daemon appeared running but was unusable: Docker commands failed with API errors despite the daemon seeming active.
- systemd dependency issues inside WSL: Docker Engine depended on systemd, which was not consistently active after WSL restarts.
- Dev Containers failing during setup: VS Code Dev Containers surfaced failures during feature installation and builds.
- Misleading Docker error messages: errors pointed to API or version issues, masking the real root cause.
- Cache cleanup ineffective: pruning images and containers did not resolve underlying daemon issues.
- Container observability confusion: PostgreSQL and pgAdmin worked, but container health, volumes, and data locations were unclear.

Solutions & Maintainable Settings

Enforce a single Docker model. Use either Docker Desktop or native Docker Engine inside WSL, never both.
```shell
docker version
docker info
```

- Verify only one server is shown
- There should be no references to dockerDesktopLinuxEngine when using native WSL Docker

Explicitly lock the Docker CLI context. Always verify and set the Docker context before running Compose or Dev Containers.

```shell
docker context ls
docker context show
docker context use default
```

- The context must point to the intended daemon (WSL or Desktop)

Validate Docker daemon health before starting the project. Confirm Docker is reachable before using Dev Containers or Compose.

```shell
docker info
docker ps
```

- Both must return without API / 500 / version errors
- Do not proceed if they fail

Ensure systemd is enabled in WSL. Docker Engine inside WSL depends on systemd.

```shell
cat /etc/wsl.conf
```

Expected:

```
[boot]
systemd=true
```

If not, apply the change, restart WSL, and re-verify:

```shell
wsl --shutdown
systemctl status docker
```

Start Docker explicitly after a WSL restart. WSL restarts silently stop services.

```shell
sudo systemctl start docker
sudo systemctl enable docker
```

Verify:

```shell
docker ps
```

Use only the WSL-native filesystem for projects. Keep the project under /home/<user>/... and avoid /mnt/c/... paths. The path should start with /home/.

Treat Dev Containers as a consumer, not the fix. Fix Docker issues outside Dev Containers first.

Pre-check commands:

```shell
docker compose config
docker compose up -d
```

- Compose must work before opening the Dev Container

Keep Dev Container features minimal on the first run. Start with the base image and required services only; add features after baseline stability.

```shell
docker images
docker ps
```

Verify container observability explicitly. Confirm containers are healthy, ports are mapped, and volumes are mounted.

```shell
docker ps
docker inspect <container_name>
docker logs <container_name>
```

Port check:

```shell
ss -lntp | grep <port>
```

Avoid cache cleanup as a first fix. Do not rely on prune to fix daemon issues. Only run it after the daemon is healthy:

```shell
docker system prune -f
docker volume prune -f
```

Establish a "known-good" baseline checklist. Validate this sequence before development starts.

Baseline flow:

```shell
wsl --shutdown
# reopen WSL
sudo systemctl start docker
docker context show
docker info
docker compose up -d
code .   # only then run "Reopen in Container"
```

If something breaks after starting the Dev Container, you can stop and remove the containers and rebuild:

```shell
docker ps
docker stop $(docker ps -q)
docker rm -f $(docker ps -aq)
docker ps
```

Closing Thoughts

Dev Containers shift local development from fragile, machine-specific setups to reproducible, version-controlled environments. With Dev Containers, Docker Compose, PostgreSQL, and pgAdmin, your entire .NET development stack lives inside containers, not on your laptop. SDKs, databases, and tools are isolated, consistent, and easy to rebuild. When something breaks, you rebuild containers, not machines.

This approach removes onboarding friction, improves Linux parity with CI, and eliminates the classic "works on my machine" problem. Once Docker is stable, Dev Containers become one of the most reliable ways to build modern .NET applications.

Key Takeaways

- Dev Containers treat the development environment as code
- .NET, PostgreSQL, and pgAdmin run fully isolated in containers
- pgAdmin provides clear visibility into database state and migrations
- Docker stability is a prerequisite; Dev Containers are not a Docker fix
- Onboarding becomes simple: clone → reopen in container → run
- Rebuild containers, not laptops

How to choose the right Marketplace offer type for your AI app or agent
Selecting the right Microsoft Marketplace offer type is one of the most important, and often most complex, decisions when bringing AI apps and agents to market. In this latest Marketplace blog article, you'll learn how different offer types align to AI delivery models and why this choice directly impacts architecture, security boundaries, customer experience, and monetization strategy.

The article breaks down key considerations across SaaS, Azure Managed Applications, containers, and virtual machines, helping software development companies understand how to balance control, scalability, and operational ownership. It also highlights how offer type decisions influence where AI workloads run, how data is managed, and how customers deploy and interact with your solution.

If you're building or publishing AI solutions in Microsoft Marketplace, this guidance will help you make informed decisions early, before development, security, and go-to-market plans are locked in.

Read the full article: Marketplace Offer Types for AI Apps and agents: SaaS vs Managed App vs Containers

Web Notifications API from Personal Tab app doesn't work
I have a client-side web application that we're trying to get to run as a Teams personal tab app. I have the app working as a Teams app, as long as we "open in new window" so that the app doesn't get put to sleep, since it needs a permanently open session with something else.

Our app uses the Web Notifications API (https://developer.mozilla.org/en-US/docs/Web/API/Notification) to create desktop notifications, but this does not seem to work when running as a Teams tab app popped out into a new window. No notifications are displayed. Permissions have been requested correctly and granted, and Windows is set to allow notifications, but these web notifications never make it out of Teams to the desktop. There are no errors in the browser console to say that web notifications are not supported in Teams.

Is this not supported as a Teams app? The inbuilt activity feed is useless for our purpose (for a few reasons), so please don't suggest I use Graph to make use of that instead.

Securing and governing AI agents before deployment
April 30 | 2:00-3:00 PM (GMT+10)

Join this live webinar to learn how to secure and govern AI agents before they go live. Explore how to provision agents with Entra Agent ID, manage identities and credentials, enforce least-privilege access, and prevent risks like Shadow AI and agent sprawl. Join to gain practical guidance on governing AI agents across their full lifecycle, so you can deploy with confidence.

To view the session live, register here: Securing and Governing AI Agents Before They Go Live

You can view previous Security for Software Development Company series sessions on demand here: Security for Software Development Company Series: Securing the Agentic Era

ad-hoc call transcripts via graph api endpoint
I have an app with the below permissions. I can successfully get the list of transcripts and transcript content for online meetings. But when I try to get the list for ad-hoc calls, I get a 400 Bad Request error. Any suggestions on how to get ad-hoc call transcripts?

Permissions

Error:

AI apps and agents: choosing your Marketplace offer type
Choosing your Marketplace offer type is one of the earliest, and most consequential, decisions you'll make when preparing an AI app for Microsoft Marketplace. It's also one of the hardest to change later.

This post is the second in our Marketplace-ready AI app series. Its goal is not to push you toward a specific option, but to help you understand how Marketplace offer types map to different AI delivery models, so you can make an informed decision before architecture, security, and publishing work begins. You can always get curated step-by-step guidance through building, publishing, and selling apps for Marketplace through App Advisor.

This post is part of a series on building and publishing well-architected AI apps and agents in Microsoft Marketplace. The series focuses on AI apps and agents that are architected, hosted, and operated on Azure, with guidance aligned to building and selling solutions through Microsoft Marketplace.

Why offer type is an important Marketplace decision

Offer type is more than a packaging choice. It defines the operating model of your AI app on Marketplace:

- How customers acquire your solution
- Where the AI runtime executes
- The security and business boundaries for the AI solution and associated contextual data
- Who operates and updates the system
- How transactions and billing are handled

Once an offer type is selected, it cannot be changed without creating a new offer. Teams that choose too quickly often discover later that the decision creates friction across architecture, security boundaries, or publishing requirements.

Microsoft's Publishing guide by offer type explains the structural differences between offer types and why this decision must be made up front.
How Marketplace offer types map to AI delivery models

AI apps differ from traditional software in a few critical ways:

- Contextual data may need to remain in a specific tenant or geography
- Agents may operate autonomously and continuously
- Control over infrastructure often determines trust and compliance
- How the solution is charged and monetized matters, including whether pricing is usage-based, metered, or subscription-driven (for example, billing per inference, per workflow execution, or as a flat monthly fee)
- The buyer's technical capability matters, including the level of engineering expertise required to deploy and operate the solution (for example, SaaS is generally easier to consume, while container-based and managed application offers often require stronger cloud engineering and DevOps skills)

Marketplace offer types don't describe features. They define responsibility boundaries: who controls the AI runtime, who owns the infrastructure, and where customer data is processed.

At a high level, Marketplace supports four primary delivery models for AI solutions:

- SaaS
- Azure Managed Application
- Azure Container
- Virtual Machine

Each represents a different balance between publisher control and customer control. The sections below unpack what each model means in practice. Check out the interactive offer selection wizard in App Advisor for decision support.

SaaS offers for AI apps

SaaS is the most common model for AI apps and agents on Marketplace, and often the default starting point. With a SaaS offer, the AI service runs in the publisher's Azure environment and is accessed by customers through a centralized endpoint.

This model works well for:

- Multi-tenant AI platforms and agents
- Continuous model and prompt updates
- Rapid experimentation and iteration
- Usage-based or subscription billing

Because the service is centrally hosted, publishers retain full control over deployment, updates, and operational behavior.
For multi-tenant AI apps, this also means making early decisions about Microsoft Entra ID configuration, such as how customers are onboarded, whether access is granted through tenant-level consent or external identities, and how user identities, roles, and data are isolated across tenants to prevent cross-tenant access or data leakage.

For official guidance, see the SaaS section of the Marketplace publishing guide and the AI agent overview, which describes SaaS-based agent deployments. See also: Plan a SaaS offer for Microsoft Marketplace.

Azure Managed Applications for AI solutions

In this model, the solution is deployed into the customer's Azure subscription, not the publisher's. There are two variants:

- Managed applications, where the publisher retains permissions to operate and update the deployed resources
- Solution templates, where the customer fully manages the deployment after installation

This model is a strong fit when AI workloads must run inside customer-controlled environments, such as:

- Regulated or sensitive data scenarios
- Customer-owned networking and identity boundaries
- Infrastructure-heavy AI solutions that can't be centralized
- Situations where the customer or their IT team needs, or wants, to tailor the app to their specific environment

Managed Applications sit between SaaS and fully customer-run deployments. They offer more customer control than SaaS, while still allowing publishers to manage lifecycle aspects when appropriate. Marketplace guidance for Azure Applications is covered in the publishing guide. For more information, see: Plan an Azure managed application for an Azure application offer.

Azure Container offers for AI workloads

With container offers, the customer runs the AI workload, typically on AKS, using container images supplied by the publisher.
This model is best suited for scenarios that require:

- Strict data residency
- Air-gapped or tightly controlled environments
- Customer-managed Kubernetes infrastructure

The publisher delivers the container artifacts, but deployment, scaling, and runtime operations occur in the customer's environment. This shifts operational responsibility, risk, and compute costs away from the publisher and toward the customer. Container offer requirements are covered in the Marketplace publishing guide. See: Plan a Microsoft Marketplace Container offer.

Virtual Machine offers for AI solutions

Virtual Machine offers still play a role, particularly for specialized or legacy AI solutions. VM offers package a pre-configured AI environment that customers deploy into their Azure subscription. Compared to other models:

- Updates and scaling are more tightly scoped
- Iteration cycles tend to be longer
- The solution is more closely aligned with specific OS or hardware requirements

They are most commonly used for:

- Legacy AI stacks
- Fixed-function AI appliances
- Solutions with specialized hardware or driver dependencies

VM publishing requirements are also documented in the Marketplace publishing guide. See: Plan a virtual machine offer for Microsoft Marketplace.

Comparing offer types across AI-specific decision dimensions

Rather than asking "which offer type is best," it's more useful to ask what trade-offs you're making.
Key lenses to consider include:

- Who operates the AI runtime day-to-day
- Where customer data and AI prompt inputs and outputs are processed and stored
- How quickly models, prompts, and logic can evolve
- The balance between publisher control and customer control
- How Marketplace transactions and billing align with runtime behavior

SaaS

- What it is: a fully managed, externally hosted app integrated with Marketplace for billing and entitlement
- Control plane: publisher-owned
- Operational model: centralized operations, updates, and scaling
- Good-fit scenarios: multi-tenant AI apps serving many customers; fast onboarding and trials; frequent model or feature updates; the publisher has full runtime control
- Avoid when: customers require deployment into their own subscription; strict data residency mandates customer control; offline or air-gapped environments are required; tight integration with customer Azure resources is needed
- Typical AI usage pattern: centralized inference and orchestration across tenants

Container (AKS / ACI)

- What it is: a containerized app deployed into customer-managed Azure container environments
- Control plane: customer-owned
- Operational model: the customer operates the infrastructure; the publisher provides containers
- Good-fit scenarios: AI apps or agents built as microservices; customers standardize on Kubernetes
- Avoid when: the solution has custom OS or driver dependencies
- Typical AI usage pattern: portability across environments is important

Virtual Machine (VM)

- What it is: a VM image deployed directly into the customer's Azure subscription
- Control plane: customer-owned
- Operational model: the customer operates the infrastructure; the publisher provides the VM image
- Good-fit scenarios: legacy or lift-and-shift AI workloads
- Avoid when: customers standardize on Kubernetes
- Typical AI usage pattern: specialized runtime requirements

Azure Managed Application

- What it is: an Azure-native solution deployed into the customer's subscription, managed by the publisher
- Control plane: customer-owned (with publisher access)
- Operational model: per-customer deployment and lifecycle
- Good-fit scenarios: enterprise AI solutions requiring customer-owned infrastructure
- Avoid when: the solution primarily delivers VMs or containerized workloads (see the note below)
- Typical AI usage pattern: strong compliance and governance needs

Different AI solutions land in different places across these dimensions.
The right choice is the one that matches your operational reality, not just your product vision.

Note: If your solution primarily delivers virtual machines or containerized workloads, use a Virtual Machine or Container offer instead of an Azure Managed Application.

Supported sales models and pricing options by Marketplace offer type

Marketplace offer types don't just define how an AI app or agent is deployed; they also determine how it can be sold, transacted, and expanded through Microsoft Marketplace. Understanding the supported sales models early helps avoid misalignment between architecture, pricing, and go-to-market strategy.

Supported sales models

All four offer types support transactable listings, public listings, private offers, multiparty private offers, and Azure IP co-sell eligibility. Resale-enabled offers are supported for SaaS and Virtual Machine offers, but not for Container or Azure Managed Application offers.

What these sales models mean

- Transactable listing: a Marketplace listing that allows customers to purchase the solution directly through Microsoft Marketplace, with billing handled through Microsoft.
- Public listing: a listing that is discoverable by any customer browsing Microsoft Marketplace and available for self-service acquisition.
- Private offers: customer-specific offers created by the publisher with negotiated pricing, terms, or configurations, purchased through Marketplace.
- Resale enabled: with resale-enabled offers, software companies can authorize their channel partners to sell their existing Marketplace offers on their behalf. After authorization, channel partners can independently create and sell private offers without direct involvement from the software company.
- Multiparty private offers: private offers that involve one or more Microsoft partners (such as resellers or system integrators) as part of the transaction.
- Azure IP Co-sell eligible: When all requirements are met, your offers can contribute toward customers' Microsoft Azure Consumption Commitments (MACC).

Pricing options

Marketplace offer types determine the pricing models available. Make sure you build toward a Marketplace offer type that aligns with how you want to deploy and price your solution.

- SaaS: Subscription or flat-rate pricing, per-user pricing, and usage-based (metered) pricing.
- Container: Kubernetes-based offers support multiple Marketplace-transactable pricing models aligned to how the workload runs in the customer's environment, including per core, per core in cluster, per node, per node in cluster, per pod, or per cluster pricing, all billed on a usage basis. Container offers can also support custom metered dimensions for application-specific usage. Alternatively, publishers may offer Bring Your Own License (BYOL) plans, where customers deploy through Marketplace but bring an existing software license.
- Virtual Machine: Usage-based hourly pricing (flat rate, per vCPU, or per vCPU size), with optional 1-year or 3-year reservation discounts. Publishers may also offer BYOL plans, where customers bring an existing software license and are billed only for Azure infrastructure.
- Azure Managed Application: A monthly management or service fee charged by the publisher; Azure infrastructure consumption is billed separately to the customer.

Note: Azure Managed Applications are designed to charge for management and operational services, not for SaaS-style application usage or underlying infrastructure consumption.
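Usage-based (metered) pricing for SaaS and Container offers is reported by the application itself through the Marketplace Metering Service. As an illustration only, the sketch below assembles a single usage event; the helper name `build_usage_event`, the `tokens_processed` dimension, and the `premium` plan are hypothetical, while the endpoint and `api-version` reflect the publicly documented Metering Service values (verify against current Microsoft docs before relying on them).

```python
import json
from datetime import datetime, timezone

# Marketplace Metering Service endpoint for usage events. Endpoint and
# api-version are the documented public values; confirm before use.
METERING_URL = "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31"

def build_usage_event(resource_id, dimension, quantity, plan_id, access_token):
    """Assemble the request pieces for one metered-usage event.

    `dimension` must match a custom meter dimension defined on the plan
    in Partner Center; `resource_id` identifies the customer's purchased
    subscription. Nothing is sent here -- the caller posts the result.
    """
    body = {
        "resourceId": resource_id,
        "quantity": quantity,                # units consumed this period
        "dimension": dimension,              # hypothetical per-token meter
        "effectiveStartTime": datetime.now(timezone.utc).isoformat(),
        "planId": plan_id,
    }
    headers = {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
    }
    return {"url": METERING_URL, "headers": headers, "body": json.dumps(body)}

# Hypothetical example: report 1,000 processed tokens against a "premium" plan.
event = build_usage_event("subscription-guid", "tokens_processed", 1000, "premium", "<token>")
print(event["url"])
```

Keeping payload construction separate from transport like this makes the metering logic easy to unit-test without network access, which matters because over- or under-reported usage directly affects customer bills.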
Buyer-side assumptions to be aware of

For customers to purchase AI apps and agents through these sales models:

- The customer must be able to purchase through Microsoft Marketplace using their existing Microsoft procurement setup.
- Marketplace purchases align with enterprise buying and governance controls, rather than ad-hoc vendor contracts.
- For private and multiparty private offers, the customer must be willing to engage in a negotiated Marketplace transaction rather than pure self-service checkout.

Important clarification

Supported sales models are largely consistent across Marketplace offer types (resale enablement is the main exception). What varies most by offer type is how the solution is provisioned, billed, operated, and updated. Sales flexibility alone should not drive offer-type selection; it must align with the architecture and operating model of the AI app and agent.

How this decision impacts everything that follows

Offer type decisions ripple through the rest of the Marketplace journey. They directly shape:

- Architecture design choices
- Security and compliance boundaries
- Fulfillment APIs and billing integration
- Publishing and certification requirements
- Cost, scalability, and operational responsibility

Follow the series for updates on new posts.

What's next in the journey

With the offer type decision in place, the focus shifts to turning that choice into a production-ready solution. This includes designing an architecture that aligns with your delivery model, establishing clear security and compliance boundaries, and preparing the operational foundations required to run, update, and scale your AI app or agent confidently in customer environments. Getting these elements right early reduces rework and sets the stage for a smoother Marketplace journey.

See the next post in the series: Designing Production-Ready AI App and Agent Architectures for Microsoft Marketplace.
Key resources

- App Advisor: curated, step-by-step guidance to help you build, publish, or sell your app or agent, no matter where you start
- Quick-Start Development Toolkit: code templates for AI solution patterns
- Microsoft AI Envisioning Day events
- How to build and publish AI apps and agents for Microsoft Marketplace
- ISV Success: over $126K USD in benefits and technical consultations to help you replicate and publish your app

Success with AI apps and agents on Marketplace: the end-to-end
Preparing an AI app or agent for Microsoft Marketplace requires solving a broader set of problems, ones that extend beyond the model and into architecture, security, compliance, operations, and commerce. These requirements often surface late, when teams are already moving toward launch.

Teams often reach the same milestone: the AI works, the demo is compelling, and early customers are interested. But when it's time to publish, transact, and operate that solution through Marketplace, gaps emerge around security, compliance, reliability, operations, or commerce integration.

Whether you are demo-ready or starting with a great AI idea, this series is designed to address those challenges through a connected, end-to-end journey. It brings together the decisions and requirements needed to build AI apps and agents that are not only functional, but Marketplace-ready from day one. You can always get curated, step-by-step guidance on building, publishing, and selling apps for Marketplace through App Advisor.

This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace. The series focuses on AI apps and agents that are architected, hosted, and operated on Azure, with guidance aligned to building and selling solutions through Microsoft Marketplace.

Why an end-to-end journey matters

A working AI app does not automatically mean a Marketplace-ready AI app. Marketplace readiness spans far more than model selection or prompt engineering. It requires a holistic approach across:

- Architecture and hosting design
- Security and AI guardrails
- Compliance and governance
- Operational maturity
- Commerce, billing, and lifecycle integration

While guidance exists across each of these areas, it is often fragmented. This series connects those pieces into a single, reusable mental model that software companies can use to design, build, publish, and operate AI apps and agents with confidence.

This first post frames the journey.
Each subsequent post goes deep into one stage.

The Marketplace-ready AI app and agent lifecycle

At a high level, Marketplace-ready AI apps and agents follow this lifecycle:

1. Define how the AI app and agent will be delivered
2. Identify industry compliance and regulatory requirements
3. Design a production-ready AI architecture
4. Embed security and AI guardrails into the design
5. Validate compliance and governance
6. Build and test an MVP with potential customers
7. Build for quality, reliability, and scale
8. Integrate with Marketplace commerce
9. Prepare for publishing and go-live
10. Operate, monitor, and evolve safely
11. Promote your AI app and agent to close initial sales

This lifecycle is intentionally introduced once, at a high level. Decisions made early will shape everything that follows. Throughout the series, this lifecycle serves as a shared reference point.

Step 1: Decide how your AI app and agent will be packaged and delivered

The first decision is how the AI app and agent will be delivered through Marketplace. Offer types, such as SaaS, Azure Managed Applications, Containers, and Virtual Machines, are not just listing formats. They are delivery models that directly impact:

- Identity and authentication
- Billing and metering
- Deployment responsibilities
- Operational ownership
- Customer onboarding experience
- Supported sales models

Choosing an offer type early helps avoid costly redesigns later.

Step 2: Design a production-ready AI architecture

Marketplace AI apps and agents are expected to meet enterprise customer expectations for performance, reliability, and security. Architecture decisions must account for:

- Customer business, compliance, and security needs
- Offer-specific hosting best practices

For example, SaaS offers typically require:

- Tenant isolation
- Environment separation
- Strong identity boundaries

Architecture must also support both AI behavior and Marketplace lifecycle events, such as provisioning, subscription changes, and entitlement checks.
Step 3: Secure the AI app and agent and define guardrails

Security cannot be treated as a certification checklist at the end of the process. AI introduces new risks beyond traditional applications, including expanded attack surfaces through prompts and inputs. Frameworks such as the OWASP GenAI Top 10 provide a useful lens for identifying these risks.

Guardrails must be enforced:

- At design time, through architecture and policy decisions
- At runtime, through monitoring, enforcement, and response

AI-specific incident response must also factor in privacy regulations and customer trust.

Step 4: Treat AI agents as compliance-governed systems

AI agents and their data are first-class compliance subjects. This includes:

- Prompts and responses
- Contextual and training data
- Actions taken by the agent

These artifacts must be auditable and governed inline, not retroactively. At the same time, publishers must balance compliance with intellectual property protection by enabling explainability and transparency without exposing proprietary logic.

Step 5: Build for quality, reliability, and scale

Marketplace buyers expect predictable behavior. AI apps and agents should formalize:

- Quality and evaluation frameworks
- Reliability and performance targets
- Scaling and cost optimization strategies

Quality, reliability, and performance directly influence customer trust and satisfaction.

Step 6: Integrate with Marketplace commerce and lifecycle APIs

Marketplace is not "just a listing." For transactable offers, which help you sell globally, direct to customers or through channel, and allow customers to count purchases of your app against their cloud commitments, Marketplace becomes an operational contract. Subscription state, entitlements, billing, and metering are runtime responsibilities of the application. For SaaS offers, the SaaS Fulfillment APIs define the source of truth for subscription lifecycle events.
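To make the "operational contract" concrete: a SaaS offer's landing page receives an opaque purchase token, resolves it into a subscription via the SaaS Fulfillment API v2, and then activates that subscription so billing can begin. The sketch below only builds the request shapes for those two documented calls; the helper names `resolve_request` and `activate_request` are my own, and the endpoint and `api-version` should be verified against current Microsoft documentation.

```python
from urllib.parse import quote

# SaaS Fulfillment API v2 base. These are the documented public values;
# confirm against current Microsoft docs before relying on them.
API_BASE = "https://marketplaceapi.microsoft.com/api/saas"
API_VERSION = "2018-08-31"

def resolve_request(purchase_token, access_token):
    """Request shape for resolving the opaque marketplace token passed
    to the landing page; the response identifies the subscription,
    plan, and offer for the purchase."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/subscriptions/resolve?api-version={API_VERSION}",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "x-ms-marketplace-token": purchase_token,
        },
    }

def activate_request(subscription_id, plan_id, access_token):
    """Request shape for activating a resolved subscription. The
    customer's billing starts only after activation succeeds."""
    return {
        "method": "POST",
        "url": f"{API_BASE}/subscriptions/{quote(subscription_id)}/activate?api-version={API_VERSION}",
        "headers": {
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        "body": {"planId": plan_id},
    }

req = resolve_request("<token-from-landing-page>", "<aad-access-token>")
print(req["url"])
```

The same resolve-then-activate sequence is what makes the subscription state in Marketplace the source of truth: until your app activates, the customer is not billed and should not be provisioned as a paying tenant.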
Integrate Marketplace lead flows with your CRM using the Marketplace lead connector for CRM.

Step 7: Prepare for publishing, certification, and go-live

Publishing requires more than code completion. Marketplace certification validates:

- Security posture
- Customer experience
- Operational readiness

Using templates, checklists, and tooling such as the Quick Start Development Toolkit, Marketplace Rewards resources, and App Advisor reduces friction and rework.

Step 8: Operate and evolve safely after go-live

Launch is not the end of the journey. AI apps and agents evolve continuously, making the following essential for protecting both customers and publishers:

- Safe deployment strategies
- CI/CD discipline
- Rollback and monitoring practices

Operational maturity also includes maintaining Marketplace offer assets (store images) as the product evolves. Use this framework to help you build a production-ready AI app and agent: well-architected, secure, reliable, scalable, and integrated with the Microsoft Marketplace global commerce engine.

Step 9: Promote your AI app and agent

Becoming Marketplace-ready does not end at publication. AI app and agent success also depends on how effectively the solution is discovered, evaluated, and trusted by customers within Microsoft Marketplace and the broader Microsoft ecosystem.

Promotion in Microsoft Marketplace is tightly integrated with how customers discover and purchase solutions. AI apps and agents are surfaced through Marketplace search, categories, and in-product experiences. Once your AI app or agent becomes Azure IP co-sell eligible, the purchase of your offer can count toward your customers' Microsoft Azure Consumption Commitments (MACC), motivating customers to buy your offer. This reduces buying friction and accelerates evaluation-to-purchase transitions.
Top activities to grow your sales:

- Optimize your listing once you publish your app by getting an agentic review of your published listing in seconds, based on Marketplace listing best practices and expert Microsoft editorial guidance.
- Promote your Marketplace offer and track your engagement following best practices.
- Manage and nurture leads from trials to purchase, and from purchase to higher-level SKUs.
- Use private offers, which allow publishers to create customer-specific or negotiated offers directly through Marketplace, including multiparty private offers involving Microsoft channel partners.
- Sell through channel: use resale-enabled offers so resellers and channel partners can sell your app to their customers.
- Pursue co-sell motions, where eligible AI apps and agents are sold jointly with Microsoft sellers and count toward customer cloud consumption commitments.

Effective customer engagement depends on alignment between how the AI app and agent is positioned and how it is delivered. Clear descriptions, accurate architectural boundaries, and transparent operational expectations help customers move confidently from discovery to production adoption.

For publishers, programs such as ISV Success provide guidance and tooling to help align technical readiness, Marketplace requirements, and go-to-market execution as AI apps and agents scale through Microsoft Marketplace.

Sales don't happen by accident; it's essential to engage in promoting your offer. When promotion is treated as a first-class step in the lifecycle, it reinforces trust, accelerates evaluation, and increases the likelihood that an AI app and agent transitions from initial interest to sustained use.
How to use this series

This series is designed to be used in two ways:

- Read sequentially to understand the full Marketplace-ready journey
- Use individual posts alongside Microsoft Learn content, App Advisor, and Quick Start resources for deeper implementation guidance

This series provides a structured, end-to-end view of what it takes to move from a working AI app and agent to a solution that customers can trust, deploy, and buy through Marketplace. It is designed to complement hands-on implementation guidance, including Microsoft Learn resources such as Publishing AI agents to the Microsoft marketplace, and to help software companies navigate Marketplace readiness with fewer surprises and less rework.

The upcoming post covers choosing your Marketplace offer type, which defines the operating model of your AI app or agent on Marketplace and influences key architectural decisions.

Key resources

- App Advisor: curated, step-by-step guidance to help you build, publish, or sell your app or agent, no matter where you start
- Quick-Start Development Toolkit
- Microsoft AI Envisioning Day events
- How to build and publish AI apps and agents for Microsoft Marketplace
- ISV Success: over $126K USD in benefits and technical consultations to help you replicate and publish your app

From access to adoption: Unlock the power of Microsoft 365
Change is the only constant, but adapting to change isn't always easy. Given the speed at which technology is evolving, keeping up with every trending tool can be daunting. An ecosystem like Microsoft 365, designed to simplify work by bringing many tools into one connected workspace, can also feel overwhelming. That is why you are invited to join us at the Microsoft 365 Community Conference, where you can connect with Microsoft leaders, product experts, and peers to drive adoption forward and deliver business results.

Rolling out Microsoft 365 is only the beginning. Real success happens when people understand the value, adopt new ways of working, and feel confident using the tools every day. That's why Adoption and Change Management sessions are a cornerstone of the Microsoft 365 Community Conference. These sessions focus on the human side of transformation, helping organizations move from deployment to sustained impact through practical strategies, real-world lessons, and community-tested approaches.

Whether you're leading adoption for Microsoft 365 Copilot, driving collaboration change with Teams and SharePoint, or building Champions across your organization, these sessions are designed to meet you where you are. We know from our research that organizations with a strong Champions program and an investment in peer learning see accelerated business results.

At the Microsoft 365 Community Conference, adoption sessions go beyond theory. They're built around:

- What actually works in the real world
- Lessons learned from failed or stalled rollouts
- How to scale change across large, distributed organizations
- How community-led approaches drive long-term success

You'll hear directly from practitioners, Microsoft experts, and community leaders who have led adoption efforts inside enterprises, public sector organizations, and fast-moving teams.
Check out these can't-miss adoption and change management sessions:

- Adoption power skills – Building a Champion Program, with Karuana Gatimu (Director, Customer Advocacy - AI & Collaboration) and Tiffany Lee (Customer Experience Product Manager, Microsoft)
- Collaboration 2026 – The next generation of Teams experiences, with Nicole Enders (Microsoft 365 MVP, Managing Consultant - Microsoft Solutions, CONET Solutions GmbH)
- Community in the age of AI – Humans at the center of Copilot adoption, with Allison Michels (Sr. Program Manager, Viva Engage, Microsoft), Sarah Lundy (Sr. Customer Experience Program Manager, Viva Engage, Microsoft), and Alex Snyder (Product Manager, Copilot Adoption, Footlocker)
- Demystifying Copilot and AI experiences on Windows, with Anupam Pattnaik (Microsoft)
- How Microsoft does IT: Driving adoption of M365 Copilot and agents across Microsoft, with Cadie Kneip (Senior Business Program Director and Copilot Champ Community Lead, Microsoft) and Stephan Kerametlian (Senior Director, Microsoft Digital, Microsoft)
- From pilot to copilot: Building a scalable AI adoption framework, with Tiffany Songvilay (AI Workforce Lead, Avanade)
- Leading workforce transformation: The art and science of skilling your people, with Karuana Gatimu (Director, Customer Advocacy - AI & Collaboration) and Jessie Hwang (Customer Experience PM, Microsoft 365 Customer Advocacy Group, AI & Collaboration, Microsoft)
- OneDrive, SharePoint, Viva Engage, and Teams… Oh my!
Understanding the many collaboration solutions, with David Drever (Microsoft 365 MVP - Compliance, Data Protection, and M365 Specialist, Protiviti)
- Start your adoption journey with adoption.microsoft.com, with Jessie Hwang (Customer Experience PM, Microsoft 365 Customer Advocacy Group, AI & Collaboration, Microsoft)
- Tools and best practices for accelerating Copilot & agent adoption, with Jojo Wright (AI Business Solutions Architect, Microsoft) and Karuana Gatimu (Director, Customer Advocacy - AI & Collaboration)

Be sure to check the Whova app for other meetups and experiences around Adoption and Microsoft 365 Champions.

Master the art of adoption at the Microsoft 365 Community Conference

Boost your adoption strategy by joining subject matter experts in Orlando, FL, for the Microsoft 365 Community Conference on April 21–23, 2026. Discover how Microsoft is shaping the future of work by empowering teams to achieve more than ever. When the world is moving quickly, with the right technology, we want you to lead the way!

SharePoint List Migration to new Tenant
Hi All,

I am preparing for a tenant-to-tenant migration of 60+ SharePoint lists that function as the back end for various PowerApps. Since we are doing a staggered cutover, I need to perform an initial migration now and then run delta syncs over the next few weeks to catch new records, updates, and deletes.

My primary challenge is that SharePoint's native ID column is not preserved during manual migrations (PowerShell/CSV), which will break our app logic and lookups.

How have others handled cross-tenant list synchronization at this scale? Specifically:

- How do you maintain record relationships and deep links when the system IDs change?
- What is the most efficient way to handle deltas across 60 lists without buying expensive 3rd-party migration tools?

thanks,
Jake