Purview Webinars
REGISTER FOR ALL WEBINARS HERE

Upcoming Microsoft Purview Webinars

JULY 15 (8:00 AM) Microsoft Purview | How to Improve Copilot Responses Using Microsoft Purview Data Lifecycle Management
Join our non-technical webinar and hear the unique, real-life case study of how a large global energy company successfully implemented Microsoft automated retention and deletion across the entire M365 landscape. You will learn how the company used Microsoft Purview Data Lifecycle Management to achieve a step up in information governance and retention management across a complex matrix organization, paving the way for the safe introduction of Gen AI tools such as Microsoft Copilot.

2025 Past Recordings

JUNE 10 Unlock the Power of Data Security Investigations with Microsoft Purview
MAY 8 Data Security - Insider Threats: Are They Real?
MAY 7 Data Security - What's New in DLP?
MAY 6 What's New in MIP?
APR 22 eDiscovery New User Experience and Retirement of Classic
MAR 19 Unlocking the Power of Microsoft Purview for ChatGPT Enterprise
MAR 18 Inheriting Sensitivity Labels from Shared Files to Teams Meetings
MAR 12 Microsoft Purview AMA - Data Security, Compliance, and Governance
JAN 8 Microsoft Purview AMA | Blog Post

📺 Subscribe to our Microsoft Security Community YouTube channel for ALL Microsoft Security webinar recordings, and more!

What's new in SQL Server 2025
Add deep AI integration with built-in vector search and DiskANN optimizations, plus native support for large-object JSON and new Change Event Streaming for live data updates. Join and analyze data faster with Lakehouse shortcuts in Microsoft Fabric that unify multiple databases, across different SQL Server versions, clouds, and on-prem, into a single, logical schema without moving data. Build intelligent apps, automate workflows, and unlock rich insights with Copilot and the unified Microsoft data platform, including seamless Microsoft Fabric integration, all while leveraging your existing SQL skills and infrastructure. Bob Ward, lead SQL engineer, joins Jeremy Chapman to share how the latest SQL Server 2025 innovations simplify building complex, high-performance workloads with less effort.

Run natural language semantic search directly in SQL Server 2025. Vector search and DiskANN work efficiently on modest hardware, no GPU needed. Get started.
Run NoSQL in SQL. Store and manage large JSON documents directly in SQL Server 2025. Insert, update, and query JSON data with native tools. Check it out.
Avoid delays. Reduce database locking without code changes to keep your apps running smoothly. See the new Optimized Locking in SQL Server 2025.

QUICK LINKS:
00:00 — Updates to SQL Server 2025
00:58 — Search and AI
03:55 — Native JSON Support
06:41 — Real-Time Change Event Streaming
08:40 — Optimized Locking for Better Concurrency
10:33 — Join SQL Server data with Fabric
13:53 — Wrap up

Link References
Start using SQL Server 2025 at https://aka.ms/GetSQLServer2025

Unfamiliar with Microsoft Mechanics? As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast

Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:
- Today we’ll look at the AI integration, developer updates, and performance improvements that make SQL Server 2025 a major upgrade. We’ve got a lot to unpack here, so we’re going to waste no time and get straight into this with lead SQL engineer, Bob Ward. Welcome back to the show.
- So great to be back.
- So SQL Server 2025, it’s brand new. It’s in public preview right now. So what’s behind the release and what’s new?
- There are three major areas of updates that we focus on in this release. First, we have deep AI integration. For example, we now have built-in vector search support for more accurate and efficient data retrieval, with some under-the-hood optimizations using DiskANN. Second, if you’re a developer, this is the most significant release of SQL in the last decade. Some of the highlights are native support for JSON and new change event streaming capabilities for real-time updates. And the third area is improved analytics, where we’re going to make it easy to mirror your SQL Servers into Microsoft Fabric without moving the data.
- And all of these are very significant updates. So why don’t we start with what’s new in search and AI?
- Great, let’s get going. As I mentioned, we’ve integrated AI directly into the database engine to give you smarter, intelligent searching. With vector search capabilities built in, you can do semantic search over your data to find matches based on similarity rather than keywords. For example, here I have a database with a table called ProductDescription, and I want to run SQL queries against the Description column for intelligent search. Typically, you’d use full-text search for this. Now I’ve built this out, but what about these natural language phrases? Will they work? They don’t. And even when I use LIKE clauses, as you can see here, or CONTAINS, or even FREETEXT, none of these methods returns what I’m looking for. Instead, this is where natural language with vector search in SQL Server 2025 shines. As a developer, I can get started even locally on my laptop, no GPU required. I’m using the popular framework Ollama to host a free open-source embeddings model from Hugging Face. This will convert our data into vectors, including query prompts, and I declare it using this CREATE EXTERNAL MODEL statement. Then I’m able to go in and build a table using the new built-in vector type to store what’s called embeddings in a binary format. My table has keys pointing back to my description data, and then I can use a built-in T-SQL function to generate embeddings based on Ollama and store them. For vector search to work, I need to create a vector index, which is performance-optimized using Disk Approximate Nearest Neighbor, or DiskANN, a new way to offload what you’d normally run completely in memory to an index stored on disk. I have a stored procedure to convert query prompts into embeddings so they can be used to find matching embeddings in the vector index. So now I have everything running locally on my laptop running SQL. Let’s see how it works. I’ll try this natural language prompt, like I showed earlier. And it worked. I get a rich set of results, with matching information based on my search to find products in the database. And I can even use Copilot from here to explore more about SQL data. I’ll prompt it to look for my new table. And you can see the response here, finding our new table. And I can ask it to pull up a few embedding values with product names and descriptions. As you saw, the results using our open-source embeddings came back in a few languages. And the good news is that if your data contains multiple languages, it’s easy to use different embedding models. For example, here I’ve wired up Azure OpenAI’s ADA 2 embeddings model, optimized for multiple languages, without even changing my code. And now I can even search using Mandarin Chinese and get back matching results.
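For reference, here is a minimal T-SQL sketch of the flow Bob just walked through, based on the SQL Server 2025 preview syntax shown in the demo. The endpoint, model name, table names, and 768-dimension vector size are illustrative assumptions, not the exact demo code:

-- Register a locally hosted Ollama embeddings model (endpoint and model are placeholders)
CREATE EXTERNAL MODEL ollama_embeddings
WITH (
    LOCATION = 'https://localhost:11434/api/embed',
    API_FORMAT = 'Ollama',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'all-minilm'
);

-- Store embeddings next to the source rows using the new vector type
CREATE TABLE dbo.ProductDescriptionEmbeddings
(
    ProductDescriptionID int PRIMARY KEY,
    Embedding vector(768) NOT NULL
);

-- Generate and store embeddings for existing descriptions with the built-in function
INSERT INTO dbo.ProductDescriptionEmbeddings (ProductDescriptionID, Embedding)
SELECT ProductDescriptionID,
       AI_GENERATE_EMBEDDINGS(Description USE MODEL ollama_embeddings)
FROM dbo.ProductDescription;

-- DiskANN-backed vector index, so approximate search doesn't have to live entirely in memory
CREATE VECTOR INDEX vix_ProductDescription
ON dbo.ProductDescriptionEmbeddings (Embedding)
WITH (METRIC = 'cosine', TYPE = 'diskann');

-- Semantic search: embed the prompt, then rank rows by cosine distance
DECLARE @prompt vector(768) =
    AI_GENERATE_EMBEDDINGS(N'a lightweight bike for mountain trails' USE MODEL ollama_embeddings);

SELECT TOP (10)
       p.ProductDescriptionID,
       p.Description,
       VECTOR_DISTANCE('cosine', e.Embedding, @prompt) AS distance
FROM dbo.ProductDescriptionEmbeddings AS e
JOIN dbo.ProductDescription AS p
    ON p.ProductDescriptionID = e.ProductDescriptionID
ORDER BY distance;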
- And DiskANN and vector-based search are both massive updates that really go hand in hand to enable better natural language querying on modest hardware. So what about all the developer updates?
- With these updates, things get so much more efficient for developers. With the JSON data type, you can bring NoSQL into your SQL relational database. Let me show you how. I’ve created a database called Orders and a table called Orders. Notice here the new JSON data type, which can store up to a massive two gigabytes of JSON document in this native data type. Now let’s look at a couple of examples. First, I can easily insert JSON documents in their native format directly into the table, and I’ll show you some of the JSON functions you can use to process this new JSON type. The JSON value function will pull a particular value out of a JSON document and bring it back in result-set format. And I can just dump out all the JSON values, so each document will appear as a separate row in its native JSON format. But instead of just doing that, I have aggregate functions. This takes all the rows of JSON types in the table and produces a single array, a single JSON document, with all the rows in the native JSON type. Key-value pairs are also popular in JSON, and I can use the new object aggregate function to take the order ID key and the JSON document and produce a set of key-value pairs. And I can modify the JSON type directly from here too. Notice, for order_id 1, the quantity is also 1. I’ll run this update to modify the value. And when it’s finished, the quantity for order_id 1 has been updated to 2 directly in the JSON. Now that’s a good example of using the JSON type, so let me show you how this works with a JSON index. I’ve got a different database for contacts, along with a table for contacts that uses a JSON document as one of the properties of the contacts table. I can create a JSON index on top of that JSON document, like this. Now I’ve got some sample data that are JSON documents, and in a second I’m going to push those into our database. And as I scroll, you’ll see that this has nested tags as properties in the JSON document. Now I’ll run the query to insert these rows with the names of each tag. Let’s go look at the output. I’m using JSON value for the name, but I’m using JSON query because the tags are nested. Now I’ll show you an example of searching with the JSON index. I’m using the new JSON contains function to find tags called fitness that are deeply nested in the JSON document. And I can run that and find the right tags, and in the execution plan you can see that we’re using the new JSON index to help find that information.
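To make that walkthrough concrete, here is a minimal sketch of the json type and the aggregate functions described above, assuming SQL Server 2025 preview behavior. The table and document shapes are illustrative, and the JSON index and JSON contains steps from the demo are omitted since their preview syntax may change:

-- Native json column (the new type stores documents up to 2 GB)
CREATE TABLE dbo.Orders
(
    order_id   int IDENTITY PRIMARY KEY,
    order_info json NOT NULL
);

-- Insert documents in their native format
INSERT INTO dbo.Orders (order_info)
VALUES (N'{"customer":"Aria","item":"gravel bike","quantity":1}'),
       (N'{"customer":"Marcus","item":"helmet","quantity":2}');

-- Pull a single scalar value out of each document
SELECT order_id, JSON_VALUE(order_info, '$.customer') AS customer
FROM dbo.Orders;

-- Aggregate all rows into one JSON array...
SELECT JSON_ARRAYAGG(order_info) AS all_orders FROM dbo.Orders;

-- ...or into key-value pairs keyed by order_id
SELECT JSON_OBJECTAGG(CAST(order_id AS varchar(10)) : order_info) AS orders_by_id
FROM dbo.Orders;

-- Modify a value inside the document in place
UPDATE dbo.Orders
SET order_info = JSON_MODIFY(order_info, '$.quantity', 2)
WHERE order_id = 1;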
- That’s a big deal. And like you said, there’s a lot happening natively in JSON, and now you’ve got the benefits of SQL for joins, security, and a lot more.
- You know, and for developers who use change data capture, things become a lot easier with change event streaming. Here, we’re reducing I/O overhead and sending transaction log changes directly to your application. To get started with change event streaming for our orders database, I’ll run the stored procedure to enable streaming for the database. You can see the table we’re going to use to track changes is a typical orders table. Here I’ve created what’s called an event stream group. This is where I’ve configured event streaming to tell it the location of our Azure event hub to stream our data, and I’ve added my credentials. Then I’ve configured the orders table to be part of the event stream group. I’ve run these procedures to make sure that my configuration is correct. So let’s do something interesting. I’m going to automate a workflow using agents to listen for changes as they come in and try to resolve any issues. First, I’ve created an Azure function app, and using my function app, I have an agent running in the Azure AI service called ContosoShippingAgent. It’s built to take shipment information, analyze it, and decide whether something can be done to help, for example, resolving a shipping delay. I’ve started my Azure function. This function is waiting for events to be sent to Azure Event Hub in order to process them. Now, in SQL, I’ll insert a new order. Going back over to my Azure function, you’ll see how the event is processed. In the code, first, we’re dumping out the raw cloud event that I showed earlier. Notice the operation is an insert. It’s going to dump out some of the different fields we’ve parsed out of the data: the column names, the metadata, and then the row itself. Notice that because the ship date is 75 days later than our sales date, it will call our agent. The agent then comes back with a response. It looked at the tracking details and determined that it can change the shipping provider to expedite our delayed shipment, and it contacted the customer directly with the updated shipping info.
- And everybody likes faster shipping. So speaking of things that are getting faster, it’s kind of a tradition on Mechanics that we cover the speed-ups for SQL Server. So what are the speed-ups and performance optimizations for ‘25?
- Well, there’s a lot, but my favorite one improves application concurrency. We’ve improved the internals of how locking works, without application code changes. And I’ve got an example of this running. I have a lock escalation problem that I need to resolve. I’m going to update about 2,500 rows in this table just to show what happens, then how we’ve solved for it. Running this query against that dynamic management view, or DMV, shows the locks that have accumulated: about 2,500 key locks here and 111 page locks. So what happens if I run enough updates against the table to cause a lock escalation? Here, I’ll update 10,000 rows in the system. You can see from the locks that this has been escalated to an object lock. It’s not updating the entire table, but it’s going to cause a problem, because I’ve got a query over here that updates the maximum value in just one row, and it’s going to get blocked when it shouldn’t have to be. You can see here from the blocking query that’s running that it’s blocked on that original session, and I’m not actually updating a row that’s affected by the first one. This is the problem with lock escalation. Now let’s look at a new option called optimized locking in SQL Server 2025. Okay, let’s go back to where I updated 10,000 rows and look at the locks. Notice how in this particular case I have a transaction lock: an intent exclusive lock for the table, but only a transaction lock for that update. If I use this query to update the max, you’ll see that we are not blocked. And by looking at the locks, each item has specific transaction locks, so we’re not blocking each other. And related to this, we’ve also solved another problem where two unrelated updates can get blocked. We call this lock after qualification.
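For anyone who wants to try the locking experiment themselves, here is a small sketch. The lock inspection query uses the standard sys.dm_tran_locks DMV; enabling optimized locking is a database-level option that also requires accelerated database recovery, per the preview documentation:

-- Optimized locking is a database-level option and depends on accelerated database recovery
ALTER DATABASE CURRENT SET ACCELERATED_DATABASE_RECOVERY = ON;
ALTER DATABASE CURRENT SET OPTIMIZED_LOCKING = ON;

-- Inspect the locks held by the current session, grouped by resource type and mode
SELECT resource_type, request_mode, COUNT(*) AS lock_count
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
GROUP BY resource_type, request_mode
ORDER BY lock_count DESC;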
- Okay, so it’s pinpointing the exact lock type, so you’ll get fewer locks in the end. So why don’t we move on from locks to joins?
- Sure. With Microsoft Fabric, it’s amazing. You can pull multiple databases, multiple data types, into a unified data platform. Imagine you have two different SQL Servers in different clouds and on-prem, and you just want to join this data together in an easy way without migrating it. With Fabric, you can. I have a SQL Server 2022 instance with a database, and we’ve already mirrored the product tables from that database into Fabric. I’ll show you the mirroring configuration process for a SQL Server 2025 instance with different, but related, tables. These are similar to the steps for mirroring any SQL Server. I’ve created a database connection for SQL Server 2025. Now I’ll pick all the tables in our database and connect. I’ll leave the name as is, AdventureWorks, and we’re ready to mirror our database. You can see now that the replication process has started for all the tables. All the rows have been replicated for all the columns on all the tables in my database, and they’ve been mirrored into Fabric. Now let’s query the data using the SQL analytics endpoint. And you can see that the tables we previously had in our database in SQL Server are now mirrored into OneLake. Let’s run a query, and I’ll use Copilot to do that. Here’s the Copilot code with explanations. Now I’ll run it. And as it completes, there are our top customers by sales. Now what if we wanted to do a join across the other SQL Server? It’s possible, but normally there are a lot of manual pieces to do this. Fabric can make that easier using a lakehouse. So let’s create a new lakehouse. I just need to give it a name, AdventureWorks, and confirm. Now notice there are no tables in this lakehouse yet, so let’s add some. And for that, I’ll use a shortcut. A shortcut uses items in OneLake, like the SQL Server databases we just mirrored. So I’ll add the AdventureWorks database. And scrolling down, I’ll pick all the tables I want. Now I’ll create it. And we’re not storing the data separately in the lakehouse. It’s just a shortcut, like an active read link to the source data, which is our mirrored database, and therefore something that already exists in OneLake. And now you can see I’ve got these objects here. This icon means these are shortcuts from another table. So now, let’s get data from the other warehouse, the SQL Server 2022 instance, which was ADW_products. Again, I’ll pick the tables that I want and click Create. That’s it. So I can go and look at Product to make sure I’ve got my product data. Now, let’s try to query this as one database and use another analytics endpoint directly against the lakehouse itself. So basically it treats all the tables as part of one unified schema now. Let’s open up Copilot and write a prompt to pull my top customers by products and sales. And it will be able to work directly against all of these connected databases because they are now in the same schema. And there you go. I have a list of all the data I need in one logical database.
- And this is really great. And I know now that everything’s in OneLake, there’s also a lot more you can do with that data.
- With the lakehouse, the sky’s the limit. You can use Power BI, or any of the services in the unified data platform, Microsoft Fabric.
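As a sketch of what that unified schema makes possible, a single query against the lakehouse's SQL analytics endpoint can join tables that originated in different SQL Servers. All table and column names below are illustrative placeholders, not the demo's actual schema:

-- Customer and order tables mirrored from the SQL Server 2025 instance,
-- Product is a shortcut from the SQL Server 2022 mirror; one logical schema here
SELECT TOP (10)
       c.CustomerName,
       p.ProductName,
       SUM(od.LineTotal) AS TotalSales
FROM dbo.Customer AS c
JOIN dbo.SalesOrderDetail AS od
    ON od.CustomerID = c.CustomerID
JOIN dbo.Product AS p
    ON p.ProductID = od.ProductID
GROUP BY c.CustomerName, p.ProductName
ORDER BY TotalSales DESC;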
- Okay, so now we’ve seen all the updates with SQL Server 2025. To everyone watching, what’s the best thing they can do to get started?
- Well, the first thing is to start using it. SQL Server 2025 is ready for you to download and install on the platform of your choice. You’ll find it at aka.ms/GetSQLServer2025.
- So thanks so much for sharing all the updates, Bob, and thank you for joining us today. Be sure to subscribe for more, and we’ll see you again soon.

How to filter JSON data to a specific result?

Hi all, I am trying to get JSON data from the IEX API and have used the WEBSERVICE() function, but I can't seem to filter it down to the values I need. For example, if I want the "total cash" from the first-year financials of Apple, I have tried this formula:

=VALUE(WEBSERVICE("https://api.iextrading.com/1.0/stock/AAPL/financials?period=annual/0/totalcash"))

which gives an error. I think the incorrect part is the "/0/" but I don't see how to specify which year I want. Any help is really appreciated. Thanks,

Importing JSON as a list in SharePoint Online via pnp
I'm trying to import a JSON list file to my SharePoint Online site, but it seems impossible to import it directly. Does anyone know a way to do this via PnP? I saw a method in PnP named "fromJson", but I don't know how to use it to import my files to my SharePoint site. Thanks in advance!

SharePoint List Calendar View Conditional Formatting
Hi all, I've been spinning my wheels on this one. I have created a calendar view from a SP list and want to color code calendar items based on the event type (field = "Event_Type"). The selection list is formatted as radio buttons. I've written the following JSON and input it into the Advanced View under Conditional Formatting. I've tried everything I can think of to debug, and the code seems correct, yet it isn't working:

{
  "$schema": "https://developer.microsoft.com/json-schemas/sp/v2/row-formatting.schema.json",
  "additionalEventClass": {
    "operator": ":",
    "operands": [
      { "operator": "==", "operands": [ "[$Event_Type]", "Vacation/Out of Office" ] },
      "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgRed' , 'sp-css-backgroundColor-BgDustRose sp-css-color-DustRoseFont')+' sp-field-fontSizeSmall'",
      {
        "operator": ":",
        "operands": [
          { "operator": "==", "operands": [ "[$Event_Type]", "Training" ] },
          "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgGreen' , 'sp-css-backgroundColor-BgMintGreen sp-css-color-MintGreenFont')+' sp-field-fontSizeSmall'",
          {
            "operator": ":",
            "operands": [
              { "operator": "==", "operands": [ "[$Event_Type]", "Onboarding" ] },
              "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgBrown' , 'sp-css-backgroundColor-BgGold sp-css-color-GoldFont')+' sp-field-fontSizeSmall'",
              {
                "operator": ":",
                "operands": [
                  { "operator": "==", "operands": [ "[$Event_Type]", "Holiday" ] },
                  "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgTeal' , 'sp-css-backgroundColor-BgCyan sp-css-color-CyanFont')+' sp-field-fontSizeSmall'",
                  {
                    "operator": ":",
                    "operands": [
                      { "operator": "==", "operands": [ "[$Event_Type]", "Meeting" ] },
                      "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgDarkPurple' , 'sp-css-backgroundColor-BgViolet sp-css-color-VioletFont')+' sp-field-fontSizeSmall'",
                      {
                        "operator": ":",
                        "operands": [
                          { "operator": "==", "operands": [ "[$Event_Type]", "Event" ] },
                          "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgBlue' , 'sp-css-backgroundColor-BgCornflowerBlue sp-css-color-CornflowerBlueFont')+' sp-field-fontSizeSmall'",
                          {
                            "operator": ":",
                            "operands": [
                              { "operator": "==", "operands": [ "[$Event_Type]", "Board" ] },
                              "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgGray' , 'sp-css-backgroundColor-BgLightGray sp-css-color-LightGrayFont')+' sp-field-fontSizeSmall'",
                              {
                                "operator": ":",
                                "operands": [
                                  { "operator": "==", "operands": [ "[$Event_Type]", "Wellness" ] },
                                  "=if(@isSelected == true, 'sp-css-color-WhiteFont sp-css-backgroundColor-BgDarkGreen' , 'sp-css-backgroundColor-BgSage sp-css-color-SageFont')+' sp-field-fontSizeSmall'",
                                  ""
                                ]
                              }
                            ]
                          }
                        ]
                      }
                    ]
                  }
                ]
              }
            ]
          }
        ]
      }
    ]
  }
}

Set Up Endpoint DLP Evidence Collection on your Azure Blob Storage
Endpoint Data Loss Prevention (Endpoint DLP) is part of the Microsoft Purview Data Loss Prevention (DLP) suite of features you can use to discover and protect sensitive items across Microsoft 365 services. Microsoft Endpoint DLP allows you to detect and protect sensitive content across onboarded Windows 10, Windows 11 and macOS devices. Learn more about all of Microsoft's DLP offerings. Before you start setting up the storage, you should review Get started with collecting files that match data loss prevention policies from devices | Microsoft Learn to understand the licensing, permissions, device onboarding, and your requirements.

Prerequisites

Before you begin, ensure the following prerequisites are met:
- You have an active Azure subscription.
- You have the necessary permissions to create and configure resources in Azure.
- You have set up an endpoint Data Loss Prevention policy on your devices.

Configure the Azure Blob Storage

You can follow these steps to create an Azure Blob Storage account using the Azure portal. For other methods, refer to Create a storage account - Azure Storage | Microsoft Learn.
1. Sign in to the Azure Storage Accounts with your account credentials.
2. Click on + Create.
3. On the Basics tab, provide the essential information for your storage account. After you complete the Basics tab, you can choose to further customize your new storage account, or accept the default options and proceed. Learn more about Azure storage account properties.
4. Once you have provided all the information, click on the Networking tab. In network access, select Enable public access from all networks while creating the storage account.
5. Click on Review + create to validate the settings. Once the validation passes, click on Create to create the storage account.
6. Wait for deployment of the resource to be completed and then click on Go to resource.
7. Once the newly created Blob Storage is opened, in the left panel click on Data Storage -> Containers.
8. Click on + Container. Provide the name and other details and then click on Create.
9. Once your container is successfully created, click on it.

Assign relevant permissions to the Azure Blob Storage

Once the container is created, using Microsoft Entra authorization, you must configure two sets of permissions (role groups) on it:
- One for the administrators and investigators, so they can view and manage evidence
- One for users who need to upload items to Azure from their devices

Best practice is to enforce least privilege for all users, regardless of role. By enforcing least privilege, you ensure that user permissions are limited to only those permissions necessary for their role. We will use the portal to create these custom roles. Learn more about custom roles in Azure RBAC.
1. Open the container and in the left panel click on Access Control (IAM).
2. Click on the Roles tab. It will open a list of all available roles.
3. Open the context menu of the Owner role using the ellipsis button (…) and click on Clone. Now you can create a custom role. Click on Start from scratch.
4. We have to create two new custom roles. Based on the role you are creating, enter basic details like name and description, and then click on the JSON tab. The JSON tab gives you the details of the custom role, including the permissions added to that role. For the Owner role, the JSON looks like this:
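(The original screenshot is not reproduced here; as a rough sketch, the JSON view of the built-in Owner role looks like the following, where the role definition ID and assignableScopes vary with your subscription and scope:)

{
  "id": "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635",
  "properties": {
    "roleName": "Owner",
    "description": "Grants full access to manage all resources, including the ability to assign roles in Azure RBAC.",
    "assignableScopes": [ "/" ],
    "permissions": [
      {
        "actions": [ "*" ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}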
Now edit these permissions and replace them with the permissions required for the role:
- Investigator Role: Copy the permissions available at Permissions on Azure blob for administrators and investigators and paste them into the JSON section.
- User Role: Copy the permissions available at Permissions on Azure blob for users and paste them into the JSON section.

Once you have created these two new roles, assign them to the relevant users.
1. Click on the Role Assignments tab, then on Add + and on Add role assignment.
2. Search for the role and click on it. Then click on the Members tab.
3. Click on + Select Members. Add the users or user groups you want to add for that role and click on Select.
   - Investigator role – Assign this role to users who are administrators and investigators, so they can view and manage evidence.
   - User role – Assign this role to users who will be under the scope of the DLP policy and from whose devices items will be uploaded to the storage.
4. Once you have added the users, click on Review + Assign to save the changes.

Now we can add this storage to the DLP policy. For more information on configuring Azure Blob Storage access, refer to these articles:
- How to authorize access to blob data in the Azure portal
- Assign share-level permissions

Configure storage in your DLP policy

Once you have configured the required permissions on the Azure Blob Storage, we will add the storage to the DLP endpoint settings. Learn more about configuring DLP policy.
1. Open the storage account you want to use. In the left panel click on Data Storage -> Containers. Then select the container you want to add to the DLP settings.
2. Click on the context menu (… button) and then Container Properties. Copy the URL.
3. Open the Data Loss Prevention Settings. Click on Endpoint Settings and then on Setup evidence collection for file activities on devices.
4. Select the Customer Managed Storage option and then click on Add Storage.
5. Give the storage a name and paste the container URL you copied. Then click on Save. The storage will be added to the list for use in policy configuration. You can add up to 10 URLs.
6. Now open the DLP endpoint policy configuration for which you want to collect the evidence. Configure your policy using these settings:
   - Make sure that Devices is selected in the location.
   - In Incident reports, toggle Send an alert to admins when a rule match occurs to On.
   - In Incident reports, select Collect original file as evidence for all selected file activities on Endpoint.
   - Select the storage account you want to collect the evidence in for that rule using the dropdown menu. The dropdown menu shows the list of storages configured in the endpoint DLP settings.
   - Select the activities for which you want to copy matched items to Azure storage.
7. Save the changes.

Please reach out to the support team if you face any issues. We hope this guide is helpful and we look forward to your feedback. Thank you, Microsoft Purview Data Loss Prevention Team

Sharepoint online - Guided tours, feature tour, walk throughs
TLDR; Is there a way to create in-app walkthroughs / guided tours / feature tours using open source packages like intro.js, chardin.js, joyride, bootstrap, HopScotch, or Crumble with the modern SharePoint Online environment? I couldn't find any tutorials for this, and since I am not a web developer I could not ascertain whether this can work with modern SharePoint Online site pages at all.

Long version
I am working on intranet design, using SharePoint Online. This intranet serves as a knowledge base, a process workflow reference, and a collaborative tool. I am trying to put together a tutorial so that people can understand what the various web parts are. I have tried using ppt, video, Sway, and Word docs, but nothing seems easy because the user needs to open two different windows and walk through both of them simultaneously. That doesn't work at all for the mobile environment. I found that websites like walkme, whatfix, inline etc. provide subscription services for this; I am looking for open source, free-to-use alternatives. My research shows there are several packages that can help with this: Joyride, intro.js, bootstrap, etc. But I cannot ascertain if these can be used with modern SharePoint Online site pages.
- Could someone guide me to a resource that shows how these and other packages can be set up for use within the SharePoint Online environment?
- Do I need to be worried about privacy and intranet content being exposed to third-party apps/developers if I use these plugins?
- Any other suggestions on creating walkthroughs for the SharePoint Online environment that work on mobile and desktop?
- Any other places where I can post this question?

My background: I am not a developer but I tinker with a lot of stuff; I generally use Python, VBA, and a few visual scripting languages to automate engineering design and reporting. Any help is appreciated. Thanks, Aditya

Multiple-selection-enabled People field properties access for View Formatting.
Hello dears, I have a list with a people field ("Owners") that allows multiple selection. There will always be at least 2 people in that field. Is it somehow possible to access the properties of the people field in the context of View Formatting (RowFormatter)? I'm pretty sure it's not possible to loop through the values of such an array, but maybe it's possible to arbitrarily pick, say, the first 2 elements? Something like [$Owners].[0].email? Thanks for your help!

Gallery view formatting to group in columns
I'd like to know if there is a trick or a JSON view formatting change I can make so that all the items on a list in gallery view are grouped vertically by a certain list value (in this case the region, which is a choice field). The image I attached helps explain what I mean. I like the exact card width used, because 6 options across actually works perfectly.