Lenovo scales AI adoption worldwide with Copilot Chat training
Discover how Lenovo helped 30,000 employees build practical AI skills with scenario-driven training to streamline daily tasks, strengthen communication, and achieve meaningful, measurable outcomes.

Listed by Forbes as the world’s largest PC company, Lenovo is taking AI out of the abstract and delivering its power to people around the world through a full-stack portfolio of AI-enabled devices, infrastructure, and software. Lenovo’s vision of delivering smarter technology for all includes its own employees and processes. This goal informed its decision to deploy and adopt Microsoft 365 Copilot Chat and Microsoft Copilot Studio across the entire company.

Empowering innovation and efficiency

Lenovo aimed to establish enterprise leadership in AI, using it to enhance productivity and accelerate business processes across key functions, including research and development (R&D), marketing, and human resources (HR). The company did this by deploying Copilot Chat to its global workforce of over 75,000 employees. The initial onboarding target was more manageable but still ambitious: 30,000 employees in China—about 40% of its workforce.

The primary challenge was not just the scale of deployment but also the need to ensure adoption. It was vital that employees could move beyond basic usage to effectively integrate these advanced AI tools into their specific, daily work scenarios to drive tangible business value. Lenovo recognized that success hinged on helping employees build confidence, understand real-world applications, and adopt an AI-first mindset.

“Copilot Chat has fundamentally changed how we manage our daily workflow. What used to take hours now happens in minutes. It not only saves us time but also enhances the quality of our communication and strategic thinking.”
—Dan Zhao, Operations Management Professional, Lenovo China

Laying the groundwork

To achieve such massive upskilling at scale, Lenovo worked with Digital China Cloud Technology Limited, a leading digital transformation partner whose many services include helping enterprise-level clients establish future-oriented core capabilities. Together they created a training strategy to help employees move from casual interaction with AI tools to meaningful, scenario-driven usage. The two-phased training combined targeted learning pathways, interactive workshops, and department-specific problem-solving. By embedding training into employees’ existing digital workflows, Lenovo helped to ensure minimal friction and maximum relevance.

Phase one: Foundation building

The first phase focused on delivering broad-based AI literacy across the targeted workforce so that every employee, regardless of technical background, could start implementing AI tools to streamline their work. Lenovo introduced an “Introduction to Prompt Engineering” course designed to teach employees how to interact effectively with AI systems. This core training provided:
- Essential AI concepts
- Practical prompting skills
- Confidence-building exercises
- Foundational habits for high-quality AI interaction

Phase two: Role-based integration

The second phase moved beyond generic training to deep, scenario-specific workshops for key departments. It focused on using Copilot Chat for complex querying and analysis and on working with Copilot Studio, a platform for designing and building custom agents to automate and execute specific business processes. “The key innovation,” Digital China Microsoft Certified Trainer (MCT) Yan Yan explains, “was our scenario-first and agent-centric learning methodology. Instead of teaching features, we focused on real-world business problems.” This mapped training directly to real needs rather than abstract, hypothetical use cases.
Phase two included:
- Customized agent development workshops tailored to unique departmental challenges, like automating HR inquiries and generating marketing content.
- Workflow-embedded scenario training designed around actual daily tasks, like meeting summaries, data analysis, strategic document drafting, and more.

Measurable business impact

Lenovo achieved significant results from its program, including:
- Accelerated core business processes. Teams across R&D, marketing, and HR saw faster project cycles—from accelerated code development and quicker market analysis to more streamlined employee onboarding.
- Higher productivity and efficiency. Employees showed major gains in handling complex tasks, such as data synthesis and content creation, reducing time spent on routine work.
- A more engaged workforce. The program boosted employee confidence and helped cultivate a culture of innovation and AI-first thinking.
- Transformational impact on customer-facing teams. Sales teams experienced dramatic improvements in communication quality, speed, and strategic preparation, reducing timelines from hours to minutes.

Realizing significant gains with a practical approach

Lenovo’s AI adoption journey demonstrates what’s possible when an organization pairs visionary goals with a practical, scenario-driven implementation strategy. By empowering employees with foundational skills and then guiding them through real-world applications, Lenovo achieved widespread adoption, substantial efficiency gains, and a more empowered global workforce. This project not only accelerated Lenovo’s internal transformation but also positioned the company as a leader in enterprise-wide AI enablement.

Explore AI skilling opportunities for all your teams—technical and non-technical alike—on Microsoft Learn for Organizations.
For more inspiration, read AI and human potential: Advancing skills, innovation, and outcomes.

Akkodis and Microsoft share field-tested tips for your AI skilling program
Generative AI technology is creating unprecedented opportunities to improve your business efficiency, fuel innovation, and gain a competitive edge. Given these benefits, equipping your workforce with the skills to effectively use AI-powered tools is a high priority. At the same time, the rapid evolution of AI technology, along with its potential impacts on your established processes, means that matching people with the right skills requires a deliberate approach.

Akkodis and Microsoft partner to bring AI skilling to customers

Across the Asia-Pacific region, global engineering and digital solutions company Akkodis helps organizations design, build, and operate technology-driven solutions. Its Akkodis Academy integrates Microsoft technology and training, including industry-recognized Microsoft Credentials (role-based Microsoft Certifications and scenario-based Microsoft Applied Skills) that cover AI, into its customized learning and consulting programs. These programs help customers build learning cultures that keep up with the increasing pace of technological change.

Building an AI skilling strategy that works

Based on its extensive experience, Akkodis has found five strategies that can help you build (or sustain) a successful AI skilling program.

1. Start small, learn fast, iterate

AI capabilities (and your business needs) evolve quickly. With iterative training cycles, you don’t need a perfect blueprint to get started. Instead, short, focused training sprints let you try ideas, collect feedback, and quickly improve. This lowers the intimidation factor for newcomers and helps build confidence and momentum with staff and stakeholders.

2. Tie skilling directly to your business goals

The most successful skilling programs align training with technology and business goals. They also secure sponsorship from company leaders to help establish priorities and reinforce adoption efforts.
Anchor training efforts to concrete business outcomes, like productivity, time-to-market, operational costs, or other real-world business metrics.

3. Establish a culture of continuous learning

The traditional skilling model of “take a course, and you’re done” doesn’t work well for such rapidly evolving technology. Establishing an always-on learning culture using webinars, tech talks, and collaborative community learning can move your teams from a “know-it-all” to a “learn-it-all” mindset, keeping your teams’ skills fresh and their curiosity growing.

4. Combine technical skills with business and governance knowledge

Organizations that realistically evaluate the business utility of AI tools tend to adopt the technology more effectively and efficiently. This means AI skills need to be complemented by business and operational knowledge. Alongside technical instruction, your teams should understand and apply your organization’s governance policies related to data privacy, ethical AI use, and other regulatory requirements.

5. Provide practical, applied learning

Focus on real-world skills that show tangible application—not only what AI is but also how to effectively use it. Bootcamps, role-based labs, and an emphasis on practical scenarios can help bridge the gap between theory and real-world use and can directly correlate with productivity gains and other key outcomes.

Real-world AI skilling success stories

Explore how AI skilling strategies have led to practical business gains:
- Commonwealth Bank of Australia invested heavily in AI skilling, equipping employees to effectively adopt AI tools. As a result, 84% of its 10,000 Microsoft 365 Copilot users report that they wouldn’t go back to working without it, and developers are adopting ~30% of GitHub Copilot code suggestions.
- Adecco Group’s AI skilling strategy has increased productivity for recruiters by 63%.
Plus, the company’s AI-driven CV Maker generated 200,000 résumés, and 35,000 employees completed responsible AI training, driving better client interactions.

Next steps

An effective AI skilling program is about more than technology—it requires a workforce that can adapt and thrive as AI reshapes the business world around them. Successfully building AI fluency across your organization can accelerate technology adoption, create improved business outcomes, and lead to tangible competitive advantages.

Ready to grow your team’s AI skills? Read Create an AI Learning Culture.

Shopify Campaign Buy X Get Y fulfillment
Hi,

I understand from an article that Buy X Get Y campaign items in Shopify must currently be fulfilled manually. But I’m wondering if anyone has found a workaround for this limitation.

Specifically:
- Is it possible to adjust the Shopify Connector so that the fulfillment process can be automated?
- Alternatively, can we fulfill the order in Business Central and then have that fulfillment successfully sync back to Shopify?

Here’s what I’ve found so far:
- There is a synchronization function in Business Central: Sync Shipments to Shopify.
- The Sales Shipment Line (Table 111) in BC has a row number that matches the Shopify Order Line (Table 30119).
- However, the Shopify Fulfillment Order Line (Table 30144) does not have a row number that links back to the Shopify Order Line (Table 30119).

This creates a problem: if the order contains both “normal” items and campaign items, the normal items are fulfilled as expected, but the campaign items are ignored during the fulfillment process because the Shopify Product IDs and Variant IDs are identical.

Has anyone found a way to handle this? Maybe through a modification of the connector? Or another method to make sure campaign items also get fulfilled automatically when syncing shipments from BC to Shopify?

Any tips, experiences, or suggestions would be greatly appreciated!

UBS unlocks advanced AI techniques with PostgreSQL on Azure
This blog was authored by Jay Yang, Executive Director, and Orhun Oezbek, GenAI Architect, UBS RiskLab.

UBS Group AG is a multinational investment bank and world-leading asset manager that manages $5.7 trillion in assets across 15 different markets. We continue to evolve our tools to suit the needs of data scientists and to integrate the use of AI. Our UBS RiskLab data science platform helps over 1,200 UBS data scientists expedite development and deployment of their analytics and AI solutions, which support functions such as risk, compliance, and finance, as well as front-office divisions such as investment banking and wealth management.

RiskLab and UBS GOTO (Group Operations and Technology Office) have a long-term AI strategy to provide a scalable and easy-to-use AI platform. This strategy aims to remove friction and pain points for users, such as developers and data scientists, by introducing DevOps automation, centralized governance, and AI service simplification. These efforts have significantly democratized AI development for our business users.

This blog walks through how we created two RiskLab products using Azure services. We also explain how we’re using Azure Database for PostgreSQL to power advanced Retrieval-Augmented Generation (RAG) techniques—such as new vector search algorithms, parameter tuning, hybrid search, semantic ranking, and a GraphRAG approach—to further the work of our financial generative AI use cases.
The RiskLab AI Common Ecosystem (AICE) provides fully governed and simplified generative AI platform services, including:
- Governed production data access for AI development
- Managed large language model (LLM) endpoint access control
- Tenanted RAG environments
- Enhanced document insight AI processing
- Streamlined AI agent standardization, development, registration, and deployment solutions
- End-to-end machine learning (ML) model continuous integration, training, deployment, and monitoring processes

The AICE Vector Embedding Governance Application (VEGA) is a fully governed, multi-tenant vector store built on top of Azure Database for PostgreSQL that provides self-service vector store lifecycle management and advanced indexing and retrieval techniques for financial RAG use cases.

A focus on best practices like AIOps and MLOps

As generative AI gained traction in 2023, we noticed the need for a platform that simplified the process for our data scientists to build, test, and deploy generative AI applications. In this age of AI, the focus should be on data science best practices—GenAIOps and MLOps. Most of our data scientists aren’t fully trained on MLOps, GenAIOps, and setting up complex pipelines, so AICE was designed to provide automated, self-serve DevOps provisioning of the Azure resources they need, as well as simplified MLOps and AIOps pipeline libraries. This removes operational complexities from their workflows.

The second reason for AICE was to make sure our data scientists were working in fully governed environments that comply with data privacy regulations from the multiple countries in which UBS operates. To meet that need, AICE provides a set of generative AI libraries that fully manages data governance and reduces complexity. Overall, AICE greatly simplifies the work for our data scientists.
For instance, the platform provides managed Azure LLM endpoints, MLflow for generative AI evaluation, and AI agent deployment pipelines along with their corresponding Python libraries. Without going into the nitty-gritty of setting up a new Azure subscription, managing MLflow instances, and navigating Azure Kubernetes Service (AKS) deployments, data scientists can just write three lines of code to obtain a fully governed and secure generative AI ecosystem to manage their entire application lifecycle. And, because it is a governed, secure lab environment, they can also develop and prototype ML models and generative AI applications in the production tier. We found that providing production read-only datasets to build these models significantly expedites our AI development. In fact, the process for developing an ML model, building a pipeline for model training, and putting it into production has dropped from six months to just one month.

Azure Database for PostgreSQL and pgvector: The best of both worlds for relational and vector databases

Once AICE adoption ramped up, our next step was to develop a comprehensive, flexible vector store that would simplify vector store resource provisioning while supporting hundreds of RAG use cases and tenants across both lab and production environments. Essentially, we needed to create RAG as a Service (RaaS) so our data scientists could build custom AI solutions in a self-service manner. When we started building VEGA and this vector store, we anticipated that effective RAG would require a diverse range of search capabilities covering not only vector searches but also more traditional document searches or even relational queries. Therefore, we needed a database that could pivot easily. We were looking for a really flexible relational database and decided on Azure Database for PostgreSQL.
For a while, Azure Database for PostgreSQL has been our go-to database at RiskLab for our structured data use cases because it’s like the Swiss Army knife of databases. It’s very compact and flexible, and we have all the tools we need in a single package. Azure Database for PostgreSQL offers excellent relational queries and JSONB document search. When used in conjunction with the pgvector extension for vector search, we created some very powerful hybrid search and hierarchical search RAG functionalities for our end users.

The relational nature of Azure Database for PostgreSQL also allowed us to build a highly regulated authorization and authentication mechanism that makes it easy and secure for data scientists to share their embeddings. This involved meeting very stringent access control policies so that users’ access to vector stores is on a need-to-know basis. Integrations with the Azure Graph API help us manage those identities and ensure that the environment is fully secure. Using VEGA, data scientists can just click a button to add a user or group and provide access to all their embeddings and documents. It’s very easy, but it’s also governed and highly regulated.

Speeding vector store initialization from days to seconds

With VEGA, the time it takes to provision a vector store has dropped from days to less than 30 seconds. Instead of waiting days on a request for new instances of Azure Database for PostgreSQL, pgvector, and Azure AI Search, data scientists can now simply write five lines of code to stand up virtual, fully governed, and secure collections. And the same is true for agentic deployment frameworks. This speed is critical for lab work that involves fast iterations and experiments. And because we built on Azure Database for PostgreSQL, a single instance of VEGA can support thousands of vector stores. It’s cost-effective and scales seamlessly.
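To make the idea of "virtual" collections concrete, here is a hypothetical, heavily simplified Python sketch of the pattern: many logical vector stores living inside one physical database instance, each gated by a need-to-know access list. The class and method names are invented for illustration and are not VEGA's actual API.

```python
# Toy registry of logical vector-store collections with per-collection
# access control. Illustrative only -- not VEGA or any real service.
class VectorStoreRegistry:
    def __init__(self):
        self.collections = {}   # collection name -> list of (vector, payload) rows
        self.acl = {}           # collection name -> set of principals with access

    def create_collection(self, name, owner):
        # Creating a collection is a metadata operation, which is why it can
        # take seconds instead of the days needed to stand up a new instance.
        self.collections[name] = []
        self.acl[name] = {owner}

    def grant(self, name, principal):
        # Need-to-know sharing: access is granted explicitly, per principal.
        self.acl[name].add(principal)

    def insert(self, name, principal, vector, payload):
        self._check(name, principal)
        self.collections[name].append((vector, payload))

    def _check(self, name, principal):
        if principal not in self.acl.get(name, set()):
            raise PermissionError(f"{principal} has no access to {name}")

registry = VectorStoreRegistry()
registry.create_collection("credit-risk-docs", owner="alice")
registry.grant("credit-risk-docs", "bob")
registry.insert("credit-risk-docs", "bob", [0.1, 0.2], {"doc": "limits.pdf"})
```

In a real deployment the registry rows would live in ordinary relational tables, which is what lets stringent access policies be enforced with standard SQL grants rather than separate infrastructure per tenant.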
Creating a hybrid search to analyze thousands of documents

Since launching VEGA, one of the top hybrid search use cases has been Augmented Indexing Search (AIR Search), which allows data scientists to comb through financial documents and pinpoint the correct sections and text. This search uses LLMs as agents that first filter based on metadata stored in JSONB columns of Azure Database for PostgreSQL, then apply vector similarity retrieval. Our thousands of well-structured financial documents are built with hierarchical headers that act as metadata, providing a filtering mechanism for agents and allowing them to retrieve sections in our documents to find precisely what they’re looking for. Because these agents are autonomous, they can decide on the best tool to use for the situation—either metadata filtering or vector similarity search. As a hybrid search, this approach also minimizes AI hallucinations because it gives the agents more context to work with.

To enable this search, we used ChatGPT and Azure OpenAI. But because most of our financial documents are saved as PDFs, the challenge was retaining hierarchical information from headers, which was lost when simply dumping in text from PDFs. We also had to determine how to make sure ChatGPT understood the meaning behind elements like tables and figures. As a solution, we created PNG images of PDF pages and told ChatGPT to semantically chunk documents by titles and headers. If it came across a table, we asked it to provide a YAML or JSON representation of it. We also asked ChatGPT to interpret figures to extract information, which is an important step because many of our documents contain financial graphs and charts. We’re now using Azure AI Document Intelligence for layout detection and section detection as the first step, which has simplified our document ingestion pipelines significantly.
Forecasting economic implications with the PostgreSQL Graph Extension

Since creating AICE and VEGA using Azure services, we’ve significantly enhanced our data science workflows. We’ve made it faster and easier to develop generative AI applications thanks to the speed and flexibility of Azure Database for PostgreSQL. Making advanced AI features accessible to our data scientists has accelerated innovation in RiskLab and ultimately allowed UBS to deliver exceptional value to our customers.

Looking ahead, we plan to use the Apache AGE graph extension in Azure Database for PostgreSQL for macroeconomics knowledge retention capabilities. Specifically, we’re considering Azure tooling such as GraphRAG to equip UBS economists and portfolio managers with advanced RAG capabilities. This will allow them to retrieve more coherent RAG search results for use cases such as economic scenario generation and impact analysis, as well as investment forecasting and decision-making. For instance, a UBS business user will be able to ask an AI agent: if a country’s interest rate increases by a certain percentage, what are the implications for my client’s investment portfolio? The agent can perform a graph search to obtain all the other connected economic entity nodes that might be affected by the interest-rate entity node in the graph. We anticipate that AI-assisted graph knowledge will gain significant traction in the financial industry.

Learn more

For a deeper dive into how we created AICE and VEGA, check out this on-demand session from Ignite. We talk through our use of Azure Database for PostgreSQL and pgvector, plus we show a demo of our GraphRAG capabilities.

About Azure Database for PostgreSQL

Azure Database for PostgreSQL is a fully managed, scalable, and secure relational database service that supports open-source PostgreSQL.
It enables organizations to build and manage mission-critical applications with high availability, built-in security, and automated maintenance.

Viva Goals Retirement
Hi,

Microsoft just announced at https://learn.microsoft.com/en-us/viva/goals/goals-retirement that Viva Goals is being retired in December 2025. How do you feel about this announcement? It was one of the top business use cases with significant business value in the Viva Suite after the Viva Topics deprecation.

OKR Dashboard Bugs
Hello everybody,

We have been experiencing some issues with the dashboards lately (for example, some widgets are not updating or displaying properly, and changes to the dashboards are not consistent). Do you know if there is something going on in the backend? Thank you for your help.

Welcome to the Viva Community!
Welcome to the Viva Community! We are so happy this space for professionals invested in improving employee experience and performance has finally launched. I want to make sure all our community members are aware of Viva Summit, our live, free, virtual event happening on Thursday, April 20, with experts like Josh Bersin and customers from organizations like the LEGO Group, Merck, and PayPal sharing stories and advice. Learn more about our speakers and sessions, and get the link to register, here: https://techcommunity.microsoft.com/t5/microsoft-viva-blog/everything-you-need-to-know-about-the-microsoft-viva-summit/ba-p/3789710.

Healthcare IT Perspectives – Patrick McGill, MD on the CIO Podcast
Recently, the Healthcare IT CIO podcast welcomed Patrick McGill, MD, EVP and Chief Transformation Officer at Community Health Network, for a discussion about the organization's projects and technologies, and where it sees itself going in the future. The interview covered a variety of topics, and I've highlighted a few below that our team discusses with many of our provider customers. (The full interview is available following the list.)