Audit
61 Topics

PowerShell Script to list ALL videos in your 365 Stream environment
I hope this is useful to everyone. My goal was to get a list of all videos in my Stream so that I could contact each video creator about the changes that are coming to Stream, and also so I could figure out how much work there was to do in moving, and how much video is created and not used. My original solution (posted here 7 Oct 2020) I've retired, as thanks to Twan van Beers' code here https://neroblanco.co.uk/2022/02/get-list-of-videos-from-microsoft-stream/ I've been able to build a single ps1 script that does all I need. I'll leave the original code at the bottom of this post. Take a look at the comments I've put into the code for how to use this script. I found that my original script gave me information about the videos but it was hard to use AND didn't tell me which videos were in which channels. This new version of Twan van Beers' script gives me both.

Save the new code Oct 2022 as a PowerShell script, e.g. get-stream-video-info.ps1. Then open a PowerShell window and navigate to the folder get-stream-video-info.ps1 is in. To run it, enter:

.\get-stream-video-info.ps1 "C:\Temp\streamanalysis\stream7Oct22-getnbstreamvideoinfo.csv"

Follow the on-screen prompts.

>>> New Code Oct 2022 <<<

using namespace System.Management.Automation.Host
[CmdletBinding()]
param (
[parameter(Position=0,Mandatory=$False)] [string]$OutputCsvFileName,
[parameter(Position=1,Mandatory=$False)] [switch]$OpenFileWhenComplete = $False
)
# ----------------------------------------
# How to use
<#--
Open a powershell window then type in the following and click enter
.\get-stream-video-info.ps1 "C:\Temp\streamanalysis\stream7Oct22-getnbstreamvideoinfo.csv"

You'll then be prompted for 3 options
[V] Videos [C] ChannelVideos [A] All [?] Help (default is "V"):
V - Videos, which will get a list of all the videos in your STREAM environment. NOTE you may need to alter the variables in the code if you have thousands of videos
C - ChannelVideos, which will get a list of all the videos and the channels they are in. NOTE this returns a filtered view of all the videos associated with a channel
A - All, returns both the Videos and the ChannelVideos.

You'll then be prompted for a user to login to the STREAM portal with; this is so the script can get a security token to do its work with. Choose/use an account with full access to STREAM.

If you used a CSV file path after the script name, then this PowerShell script will export one or two CSV files based on the option chosen:
<your folder path, your filename>-videos<your file ending> and/or <your folder path, your filename>-channelVideos<your file ending>

If you don't want to export CSV files, this PowerShell script creates objects you can use in other ways:
V or A - will create an object $ExtractData, which is a list of every video and key properties for each video.
C or A - will create an object $videosPerChannel, which lists key information about each video AND the channel they are part of.
----------------------------------------------------------------------------------------------
Original source: my script https://techcommunity.microsoft.com/t5/microsoft-stream-classic/powershell-script-to-list-all-videos-in-your-365-stream/m-p/1752149 which inspired Twan van Beers to write https://neroblanco.co.uk/2022/02/get-list-of-videos-from-microsoft-stream/
I've then taken Twan's script and modified it to do what I require in my environment, namely: get the video information AND the channels they are part of. For my 1000 or so videos and 35 channels, it takes about 1 min to run using the All option.
This meant I was able to setup an intranet video library with a channel metadata column, folders per channel (so i could give edit rights to channel owners, without opening up the entire library or having to use multiple libraries), and eventually download the videos, then upload them into the library using ShareGate to reinstate some of the key metadata, i.e. created date, person who created them etc --#> # ---------------------------------------------------------------------------------------------- function Show-OAuthWindowStream { param ( [string]$Url, [string]$WindowTitle ) $Source = ` @" [DllImport("wininet.dll", SetLastError = true)] public static extern bool InternetSetOption(IntPtr hInternet, int dwOption, IntPtr lpBuffer, int lpdwBufferLength); "@ $WebBrowser = Add-Type -memberDefinition $Source -passthru -name $('WebBrowser'+[guid]::newGuid().ToString('n')) $INTERNET_OPTION_END_BROWSER_SESSION = 42 # Clear the current session $WebBrowser::InternetSetOption([IntPtr]::Zero, $INTERNET_OPTION_END_BROWSER_SESSION, [IntPtr]::Zero, 0) | out-null Add-Type -AssemblyName System.Windows.Forms $Form = New-Object -TypeName System.Windows.Forms.Form -Property @{Width = 600; Height = 800 } $Script:web = New-Object -TypeName System.Windows.Forms.WebBrowser -Property @{Width = 580; Height = 780; Url = ($URL -f ($Scope -join "%20")) } $Web.ScriptErrorsSuppressed = $True $Form.Controls.Add($Web) $Featured = { $Head = $Web.Document.GetElementsByTagName("head")[0]; $ScriptEl = $Web.Document.CreateElement("script"); $Element = $ScriptEl.DomElement; # Javascript function to get the sessionInfo including the Token $Element.text = ` @' function CaptureToken() { if( typeof sessionInfo === undefined ) { return ''; } else { outputString = '{'; outputString += '"AccessToken":"' + sessionInfo.AccessToken + '",'; outputString += '"TenantId":"' + sessionInfo.UserClaim.TenantId + '",'; outputString += '"ApiGatewayUri":"' + sessionInfo.ApiGatewayUri + '",'; outputString += '"ApiGatewayVersion":"' + sessionInfo.ApiGatewayVersion + '"'; outputString += '}'; return outputString; } } '@; $Head.AppendChild($ScriptEl); $TenantInfoString = $Web.Document.InvokeScript("CaptureToken"); if( [string]::IsNullOrEmpty( $TenantInfoString ) -eq $False ) { $TenantInfo = ConvertFrom-Json $TenantInfoString if ($TenantInfo.AccessToken.length -ne 0 ) { $Script:tenantInfo = $TenantInfo; $Form.Controls[0].Dispose() $Form.Close() $Form.Dispose() } } } $Web.add_DocumentCompleted($Featured) $Form.AutoScaleMode = 'Dpi' $Form.ShowIcon = $False $Form.Text = $WindowTitle $Form.AutoSizeMode = 'GrowAndShrink' $Form.StartPosition = 'CenterScreen' $Form.Add_Shown( { $Form.Activate() }) $Form.ShowDialog() | Out-Null write-output $Script:tenantInfo } # ---------------------------------------------------------------------------------------------- function Get-RequestedAssets([PSCustomObject]$Token, [string]$Url, [string]$Label) { $Index = 0 $MainUrl = $Url $AllItems = @() do { $RestUrl = $MainUrl.Replace("`$skip=0", "`$skip=$Index") Write-Host " Fetching ... 
$($Index) to $($Index+100)" $Items = @((Invoke-RestMethod -Uri $RestUrl -Headers $Token.headers -Method Get).value) $AllItems += $Items $Index += 100 } until ($Items.Count -lt 100) Write-Host " Fetched $($AllItems.count) items" $Assets = $AllItems | Select-Object ` @{Name='Type';Expression={$Label}},` Id, Name,` @{Name='Size(MB)';Expression={$_.AssetSize/1MB}}, ` PrivacyMode, State, VideoMigrationStatus, Published, PublishedDate, ContentType, Created, Modified, ` @{name='Media.Duration';Expression={$_.Media.Duration}},` @{name='Media.Height';Expression={$_.Media.Height}},` @{name='Media.Width';Expression={$_.Media.Width}},` @{name='Media.isAudioOnly';Expression={$_.media.isAudioOnly}},` @{name='Metrics.Comments';Expression={$_.Metrics.Comments}},` @{name='Metrics.Likes';Expression={$_.Metrics.Likes}},` @{name='Metrics.Views';Expression={$_.Metrics.Views}}, ` @{name='ViewVideoUrl';Expression={("https://web.microsoftstream.com/video/" + $_.Id)}}, ` @{name='VideoCreatorName';Expression={$_.creator.name}}, ` @{name='VideoCreatorEmail';Expression={$_.creator.mail}}, ` @{name='VideoDescription';Expression={$_.description}} write-output $Assets } function Get-VideoChannels([PSCustomObject]$Token, [string]$Url, [string]$Label) { #this will get the list of channels $Index = 0 $MainUrl = $Url $AllItems = @() do { $RestUrl = $MainUrl.Replace("`$skip=0", "`$skip=$Index") Write-Host " Fetching ... $($Index) to $($Index+100)" $Items = @((Invoke-RestMethod -Uri $RestUrl -Headers $Token.headers -Method Get).value) $AllItems += $Items $Index += 100 } until ($Items.Count -lt 100) Write-Host " Fetched $($AllItems.count) items" #to add properties to this section look at https://aase-1.api.microsoftstream.com/api/channels?$skip=0&$top=100&adminmode=true&api-version=1.4-private $Channels = $AllItems | Select-Object ` @{Name='Type';Expression={$Label}},` Id, Name, Description,` @{Name='MetricsVideos';Expression={$_.metrics.videos}} #write-host $channels.count write-output $Channels } function Get-channelVideos([PSCustomObject]$Token, [PSCustomObject]$allChannels, [string]$Label) { #this will get the list of channels $MainUrl = "https://aase-1.api.microsoftstream.com/api/channels/ChannelIDToSwap/videos?`$top=50&`$skip=0&`$filter=published%20and%20(state%20eq%20%27completed%27%20or%20contentSource%20eq%20%27livestream%27)&`$expand=creator,events&adminmode=true&`$orderby=name%20asc&api-version=1.4-private" #for each channel URL go through all the videos, capture the channel name against the video id and name $allVideosPerChannel = @() foreach($chan in $allChannels) { $thisChannelid = $chan.id $chanUrl = @( $MainUrl.Replace("ChannelIDToSwap", $thisChannelid) ) $chanName = $chan.name $AllItems = "" $items = "" $thischanvideos = "" $Index = 0 #write-host $chanUrl #loop the index do { $RestUrl = $chanUrl.Replace("`$skip=0", "`$skip=$Index") #write-host $restUrl #Write-Host "$chanName | Fetching ... 
$($Index) to $($Index+50)" $Items = @((Invoke-RestMethod -Uri $RestUrl -Headers $Token.headers -Method Get).value) $allItems = $items | select id,name, @{Name='Channel';Expression={$chanName}},@{Name='Type';Expression={$Label}} #write-host $allItems.count #foreach($x in $items ) { # write-host $x.name #write-host $x.id #write-host $label #write-host $chanName #} $Index += 50 } until ($Items.Count -lt 100) #got videos into $items, now mist with $chan info and put into $allVideosPerChannel object $allVideosPerChannel += $AllItems $AllItems = "" $items = "" } Write-Host " Fetched $($allVideosPerChannel.count) videos in $($allChannels.count) channels" #to add properties to this section look at https://aase-1.api.microsoftstream.com/api/channels?$skip=0&$top=100&adminmode=true&api-version=1.4-private write-output $allVideosPerChannel } # ---------------------------------------------------------------------------------------------- function Get-StreamToken() { $TenantInfo = Show-OAuthWindowStream -url "https://web.microsoftstream.com/?noSignUpCheck=1" -WindowTitle "Please login to Microsoft Stream ..." $Token = $TenantInfo.AccessToken $Headers = @{ "Authorization" = ("Bearer " + $Token) "accept-encoding" = "gzip, deflate, br" } $UrlTenant = $TenantInfo.ApiGatewayUri $ApiVersion = $TenantInfo.ApiGatewayVersion $UrlBase = "$UrlTenant{0}?`$skip=0&`$top=100&adminmode=true&api-version=$ApiVersion" $RequestToken = [PSCustomObject]::new() $RequestToken | Add-Member -Name "token" -MemberType NoteProperty -Value $Token $RequestToken | Add-Member -Name "headers" -MemberType NoteProperty -Value $Headers $RequestToken | Add-Member -Name "tenantInfo" -MemberType NoteProperty -Value $TenantInfo $Urls = [PSCustomObject]::new() $RequestToken | Add-Member -Name "urls" -MemberType NoteProperty -Value $Urls $RequestToken.urls | Add-Member -Name "Videos" -MemberType NoteProperty -Value ($UrlBase -f "videos") $RequestToken.urls | Add-Member -Name "Channels" -MemberType NoteProperty -Value ($UrlBase -f "channels") $RequestToken.urls | Add-Member -Name "Groups" -MemberType NoteProperty -Value ($UrlBase -f "groups") $UrlBase = $UrlBase.replace("`$skip=0&", "") $RequestToken.urls | Add-Member -Name "Principals" -MemberType NoteProperty -Value ($UrlBase -f "principals") write-output $RequestToken } function New-Menu { [CmdletBinding()] param( [Parameter(Mandatory)] [ValidateNotNullOrEmpty()] [string]$Title, [Parameter(Mandatory)] [ValidateNotNullOrEmpty()] [string]$Question ) $videos = [ChoiceDescription]::new('&Videos', 'Videos') $channelvideos = [ChoiceDescription]::new('&ChannelVideos', 'All Videos by Channel') $all = [ChoiceDescription]::new('&All', 'All videos AND all videos by channel') $options = [ChoiceDescription[]]($videos, $channelvideos, $all) $result = $host.ui.PromptForChoice($Title, $Question, $options, 0) switch ($result) { 0 { 'Videos' } 1 { 'ChannelVideos' } 2 { 'All' } } } $menuoutome = New-Menu -title 'Stream videos' -question 'What do you want to output?' #write-host $menuoutome $StreamToken = Get-StreamToken $urlQueryToUse = $StreamToken.Urls.Videos #default $StreamToken.Urls.Videos is something like https://aase-1.api.microsoftstream.com/api/videos?$skip=900&$top=100&adminmode=true&api-version=1.4-private #To get creator and event details you need to add $expand=creator,events to the URL , not to do that you need to use &`$expand=creator,events with out the ` powershell thinks $expand is a variable. # e.g. 
use $urlQueryToUse = $StreamToken.Urls.Videos+"&`$expand=creator,events" #Other option # use the following if you want to only see files that have privacymode eq 'organization' i.e. video is visible to EVERYONE in the organisation # Thanks to Ryechz for this # # $urlQueryToUse = $StreamToken.Urls.Videos + "&orderby=publishedDate%20desc&`$expand=creator,events&`$filter=published%20and%20(state%20eq%20%27Completed%27%20or%20contentSource%20eq%20%27livestream%27)%20and%20privacymode%20eq%20%27organization%27%20" if($menuoutome -eq 'Videos' -Or $menuoutome -eq 'All'){ #modify the -URL submitted to get more data or filter data or order the output $ExtractData = Get-RequestedAssets -token $StreamToken -Url $urlQueryToUse -Label "Videos" write-host "" write-host "The `$ExtractData object contains all the details about each video. Use `$ExtractData[0] to see the first item in the object, and it's properties." if( $OutputCsvFileName ) { $thisOutputCsvFileName = $OutputCsvFileName.replace(".csv", '-'+$menuoutome+'-videos.csv') $ExtractData | Export-CSV $thisOutputCsvFileName -NoTypeInformation -Encoding UTF8 write-host "The following file has been created: $thisOutputCsvFileName" if( $OpenFileWhenComplete ) { Invoke-Item $thisOutputCsvFileName } } } if($menuoutome -eq 'ChannelVideos' -Or $menuoutome -eq 'All'){ #Get the list of channels , filter the result for the channel id and name $channelList = Get-VideoChannels -token $StreamToken -Url $StreamToken.Urls.Channels -Label "Channels" #for each channel get the videos that are in that channel, so that we can match them up to the ExtractData , which is the list of all videos) $videosPerChannel = get-channelvideos -token $StreamToken -allChannels $channelList -Label "ChannelVideos" write-host "" write-host "The `$videosPerChannel object contains key information about each video and the channel it is in. Use `$videosPerChannel[0] to see the first video, and it's properties." write-host "The `$channelList object contains a list of channel's and their properties." if( $OutputCsvFileName ) { $thisOutputCsvFileName = $OutputCsvFileName.replace(".csv", '-'+$menuoutome+'-channelVideos.csv') $videosPerChannel | Export-CSV $thisOutputCsvFileName -NoTypeInformation -Encoding UTF8 write-host "The following file has been created: $thisOutputCsvFileName" if( $OpenFileWhenComplete ) { Invoke-Item $thisOutputCsvFileName } } } >>> Original Code Oct 2020 <<< I've left this here in case it is useful to anyone. See the script above for a better solution. That said this solution relies on you knowing a bit about PowerShell, being a Stream Admin and being happy to manually save some files (I couldn't figure out how to do pass through windows authentication for the script) so you have to manually save each paged JSON file of 100 videos. It took me about 20minutes to export information about 591 videos (about 3 hours to make the script). To use the script update each variable marked with #<<<< Update this value in a normal (not admin) powershell window run the script (i copy and paste logical parts into the powershell window) you will be given several urls, to visit and save the JSON files from once you have the JSON files the final part of the script reads those, and exports a CSV file with a row per video in STREAM NOTE : as an admin you see all videos, so before you share the CSV with others be aware that there may be sensitive information in it that most users can't see due to STREAM's in built security. 
I don't have much time, hence I made this script so please don't expect quick answers to any questions. This script is rough, use it if it helps, but ... be professional and check it before you use it. ##>> Update 5 Aug 2021 <<## Thanks to everyone who has commented, I've updated the code below with your suggestions You still have to manually save the JSON browser tabs that show up as JSON files into a folder, but other than that I hope it is now easier for you to use đ #reference https://techcommunity.microsoft.com/t5/microsoft-stream-forum/powershell-script-to-audit-and-export-channel-content-details-of/m-p/354832 # goal of this script #- get list of all videos in stream for analysis #- it takes about 20 minutes to do this for 500 stream videos. #First # find out what your api source is # go to the following URL in chrome as an admin of Stream https://web.microsoftstream.com/browse # using Developer tools look at the "console" search for .api.microsoftstream to find out what is between https:// and .api.microsoftstream in my case https://aase-1.api.microsoftstream.com/api/ [string]$rootAPIlocation = "aase-1" #<<<< Update this value to the one you find in the console view #[string]$rootAPIlocation = "uswe-1" # use this for Western US #[string]$rootAPIlocation = "euno-1" # use this for the Europe North region #enter where you on your computer you want the files to go [string]$PowerShellScriptFolder = "C:\Temp\streamanalysis" #<<<< Update this value #json files will be saved into "VideosJSON" folder [string]$streamJSONfolder = Join-Path -Path $PowerShellScriptFolder -ChildPath "VideosJSON" #<<<< Update this value if you want a different folder name #>>> REMOVES all exiisting JSON files <<<< #remove all JSON items in this folder Remove-Item -path $streamJSONfolder\* -include *.json -Force -Recurse #guess approx number of videos you think you have divide by 100 e.g. 9 = 900 videos [int]$Loopnumber = 9 #<<<< Update this value #put in your stream portal url [string]$StreamPortal = "https://web.microsoftstream.com/?NoSignUpCheck=1" #put in the url where you see all videos from in stream [string]$StreamPortalVideoRoot = "https://web.microsoftstream.com/browse/" #$StreamPortalChannelRootForFindingVideos [string]$StreamPortalVideoViewRoot= "https://web.microsoftstream.com/video/" # for watching a video #this builds from the info you've put in a URL which will give back the JSON info about all your videos. [string]$StreamAPIVideos100 = "https://$rootAPIlocation.api.microsoftstream.com/api/videos?NoSignUpCheck=1&`$top=100&`$orderby=publishedDate%20desc&`$expand=creator,events&`$filter=published%20and%20(state%20eq%20%27Completed%27%20or%20contentSource%20eq%20%27livestream%27)&adminmode=true&api-version=1.4-private&`$skip=0" #$StreamAPIVideos100 # use the following if you want to only see files that have privacymode eq 'organization' i.e. 
video is visible to EVERYONE in the organisation #Thanks to Ryechz for this # # [string]$StreamAPIVideos100 = "https://$rootAPIlocation.api.microsoftstream.com/api/videos?NoSignUpCheck=1&`$top=100&`$orderby=publishedDate%20desc&`$expand=creator,events&`$filter=published%20and%20(state%20eq%20%27Completed%27%20or%20contentSource%20eq%20%27livestream%27)%20and%20privacymode%20eq%20%27organization%27%20&adminmode=true&api-version=1.4-private&`$skip=0" [int]$skipCounter [int]$skipCounterNext = $skipCounter+100 [string]$fileName = "jsonfor-$skipCounter-to-$skipCounterNext.json" #next section creates the URLS you need to manually download the json from , it was too hard to figure out how to do this programatically with authentication. Write-Host " Starting Chrome Enter your credentials to load O365 Stream portal" -ForegroundColor Magenta #Thanks Conrad Murray for this tip Start-Process -FilePath 'chrome.exe' -ArgumentList $StreamPortal Read-Host -Prompt "Press Enter to continue ...." Write-host " -----------------------------------------" -ForegroundColor Green Write-host " --Copy and past each url into chrome-----" -ForegroundColor Green Write-host " --save JSON output into $streamJSONfolder" -ForegroundColor Green for($i=0;$i -lt $Loopnumber; $i++) { $skipCounter = $i*100 if($skipCounter -eq 0) { write-host $StreamAPIVideos100 Start-Process -FilePath 'chrome.exe' -ArgumentList $StreamAPIVideos100 } else { write-host $StreamAPIVideos100.replace("skip=0","skip=$skipCounter") #following code opens browser tabs for each of the jsonfiles #Thanks Conrad Murray for this tip Start-Process -FilePath 'chrome.exe' -ArgumentList $StreamAPIVideos100.replace("skip=0","skip=$skipCounter") } } Write-host " --save each browser window showing JSON output into $streamJSONfolder" -ForegroundColor Green Write-host " -----------------------------------------------------------------------------------" -ForegroundColor Green Write-host " -----------------------------------------" -ForegroundColor Green Read-Host -Prompt "Press Enter to continue ...." Write-host " -----------------------------------------" -ForegroundColor Green $JSONFiles = Get-ChildItem -Path $streamJSONfolder -Recurse -Include *.json [int]$videoscounter = 0 $VideosjsonAggregateddata=@() $data=@() foreach($fileItem in $JSONFiles) { Write-host " -----------------------------------------" -ForegroundColor Green Write-Host " =====>>>> getting content of JSON File:", $fileItem, "- Path:", $fileItem.FullName -ForegroundColor Yellow $Videosjsondata = Get-Content -Raw -Path $fileItem.FullName | ConvertFrom-Json $VideosjsonAggregateddata += $Videosjsondata Write-host " -----------------------------------------" -ForegroundColor Green #Write-Host " =====>>>> Channel JSON Raw data:", $Videosjsondata -ForegroundColor green #Read-Host -Prompt "Press Enter to continue ...." 
} write-host "You have " $VideosjsonAggregateddata.value.count " videos in Stream , using these selection criteria" foreach($myVideo in $VideosjsonAggregateddata.value) { $videoscounter += 1 $datum = New-Object -TypeName PSObject Write-host " -----------------------------------------" -ForegroundColor Green Write-Host " =====>>>> Video (N°", $videoscounter ,") ID:", $myVideo.id -ForegroundColor green Write-Host " =====>>>> Video Name:", $myVideo.name," created:", $myVideo.created,"- modified:", $myVideo.modified -ForegroundColor green Write-Host " =====>>>> Video Metrics views:", $myVideo.metrics.views, "- comments:", $myVideo.metrics.comments -ForegroundColor Magenta Write-Host " =====>>>> Video Creator Name: ", $myVideo.creator.name , " - Email:", $myVideo.creator.mail -ForegroundColor Magenta Write-Host " =====>>>> Video Description: ", $myVideo.description -ForegroundColor Magenta $datum | Add-Member -MemberType NoteProperty -Name VideoID -Value $myVideo.id $datum | Add-Member -MemberType NoteProperty -Name VideoName -Value $myVideo.name $datum | Add-Member -MemberType NoteProperty -Name VideoURL -Value $($StreamPortalVideoViewRoot + $myVideo.id) $datum | Add-Member -MemberType NoteProperty -Name VideoCreatorName -Value $myVideo.creator.name $datum | Add-Member -MemberType NoteProperty -Name VideoCreatorEmail -Value $myVideo.creator.mail $datum | Add-Member -MemberType NoteProperty -Name VideoCreationDate -Value $myVideo.created $datum | Add-Member -MemberType NoteProperty -Name VideoModificationDate -Value $myVideo.modified $datum | Add-Member -MemberType NoteProperty -Name VideoLikes -Value $myVideo.metrics.likes $datum | Add-Member -MemberType NoteProperty -Name VideoViews -Value $myVideo.metrics.views $datum | Add-Member -MemberType NoteProperty -Name VideoComments -Value $myVideo.metrics.comments #the userData value is for the user running the JSON query i.e. did that user view this video. It isn't for information about all users who may have seen this video. There seems to be no information about that other than, total views = metrics.views #$datum | Add-Member -MemberType NoteProperty -Name VideoComments -Value $myVideo.userData.isViewed $datum | Add-Member -MemberType NoteProperty -Name Videodescription -Value $myVideo.description #thanks Johnathan Ogden for these values $datum | Add-Member -MemberType NoteProperty -Name VideoDuration -Value $myVideo.media.duration $datum | Add-Member -MemberType NoteProperty -Name VideoHeight -Value $myVideo.media.height $datum | Add-Member -MemberType NoteProperty -Name VideoWidth -Value $myVideo.media.width $datum | Add-Member -MemberType NoteProperty -Name VideoIsAudioOnly -Value $myVideo.media.isAudioOnly $datum | Add-Member -MemberType NoteProperty -Name VideoContentType -Value $myVideo.contentType $data += $datum } $datestring = (get-date).ToString("yyyyMMdd-hhmm") $csvfileName = ($PowerShellScriptFolder + "\O365StreamVideoDetails_" + $datestring + ".csv") #<<<< Update this value if you want a different file name Write-host " -----------------------------------------" -ForegroundColor Green Write-Host (" >>> writing to file {0}" -f $csvfileName) -ForegroundColor Green $data | Export-csv $csvfileName -NoTypeInformation Write-host " ------------------ DONE -----------------------" -ForegroundColor Green Disclaimer : You can use that solution as you want and modify it depending of your case. 
Many thanks to Fromelard and his https://techcommunity.microsoft.com/t5/microsoft-stream-forum/powershell-script-to-audit-and-export-channel-content-details-of/m-p/354832 which gave me enough to figure out how to do this.
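Once the new script has produced its -videos CSV (or the $ExtractData object), a short follow-on snippet can support the original goal of finding unused videos and contacting their creators. This is a minimal sketch only: it assumes the column names produced by the export above ('Name', 'Metrics.Views', 'VideoCreatorEmail') and uses an example file path, so adjust both to match your own output.

# Minimal sketch: summarise never-viewed videos per creator from the exported CSV.
# The file path below is an example only - point it at your own -videos export.
$videos = Import-Csv "C:\Temp\streamanalysis\stream7Oct22-getnbstreamvideoinfo-Videos-videos.csv"

$videos |
    Where-Object { $_.'Metrics.Views' -eq 0 } |
    Group-Object VideoCreatorEmail |
    Sort-Object Count -Descending |
    Select-Object @{n='Creator';e={$_.Name}}, @{n='UnviewedVideos';e={$_.Count}} |
    Export-Csv "C:\Temp\streamanalysis\unviewed-per-creator.csv" -NoTypeInformation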
Secure and govern AI apps and agents with Microsoft Purview

The Microsoft Purview family is here to help you secure and govern data across third-party IaaS and SaaS, multi-platform data environments, while helping you meet compliance requirements you may be subject to. Purview brings simplicity with a comprehensive set of solutions built on a platform of shared capabilities that helps keep your most important asset, data, safe. With the introduction of AI technology, Purview also expanded its data coverage to include discovering, protecting, and governing the interactions of AI apps and agents, such as Microsoft Copilots like Microsoft 365 Copilot and Security Copilot, enterprise-built AI apps like ChatGPT Enterprise, and other consumer AI apps like DeepSeek, accessed through the browser. To help you view and investigate interactions with all those AI apps, and to create and manage policies to secure and govern them in one centralized place, we have launched Purview Data Security Posture Management (DSPM) for AI. You can learn more about DSPM for AI here with short video walkthroughs: Learn how Microsoft Purview Data Security Posture Management (DSPM) for AI provides data security and compliance protections for Copilots and other generative AI apps | Microsoft Learn

Purview capabilities for AI apps and agents

To understand our current set of capabilities within Purview to discover, protect, and govern various AI apps and agents, please refer to our Learn doc here: Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn. Here is a quick reference guide for the capabilities available today. Note that DLP for Copilot and sensitivity label enforcement are currently designed to protect content in Microsoft 365. Thus, Security Copilot and Copilot in Fabric, along with Copilot Studio custom agents that do not use Microsoft 365 as a content source, do not have these features available. Please see the list of AI sites supported by Microsoft Purview DSPM for AI here.

Conclusion

Microsoft Purview can help you discover, protect, and govern the prompts and responses from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps through its data security and data compliance solutions, while allowing you to view, investigate, and manage interactions in one centralized place in DSPM for AI.
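For admins who want to verify outside the portal that Copilot interactions are being captured in the audit log, the snippet below is a minimal, hedged sketch using Exchange Online PowerShell (Search-UnifiedAuditLog). The 'CopilotInteraction' record type name and the seven-day window are assumptions to confirm against your tenant before relying on the results.

# Minimal sketch: list recent Copilot interaction audit records.
# Requires the ExchangeOnlineManagement module and auditing enabled in the tenant.
Connect-ExchangeOnline

# 'CopilotInteraction' is the assumed record type name - confirm it in your tenant.
$searchParams = @{
    StartDate  = (Get-Date).AddDays(-7)
    EndDate    = Get-Date
    RecordType = 'CopilotInteraction'
    ResultSize = 1000
}
Search-UnifiedAuditLog @searchParams |
    Select-Object CreationDate, UserIds, Operations |
    Sort-Object CreationDate -Descending |
    Format-Table -AutoSize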
Follow up reading

Check out the deployment guides for DSPM for AI
How to deploy DSPM for AI - https://aka.ms/DSPMforAI/deploy
How to use DSPM for AI data risk assessment to address oversharing - https://aka.ms/dspmforai/oversharing
Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing

Explore the Purview SDK
Microsoft Purview SDK Public Preview | Microsoft Community Hub (blog)
Microsoft Purview documentation - purview-sdk | Microsoft Learn
Build secure and compliant AI applications with Microsoft Purview (video)

References for DSPM for AI
Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
Block Users From Sharing Sensitive Information to Unmanaged AI Apps Via Edge on Managed Devices (preview) | Microsoft Learn, as part of Scenario 7 of Create and deploy a data loss prevention policy | Microsoft Learn
Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
Downloadable whitepaper: Data Security for AI Adoption | Microsoft

Explore the roadmap for DSPM for AI
Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
How to deploy Microsoft Purview DSPM for AI to secure your AI apps

Microsoft Purview Data Security Posture Management (DSPM for AI) is designed to enhance data security for the following AI applications:
Microsoft Copilot experiences, including Microsoft 365 Copilot.
Enterprise AI apps, including ChatGPT Enterprise integration.
Other AI apps, including all other AI applications like ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini, accessed through the browser.

In this blog, we will dive into the different policies and reporting we have to discover, protect, and govern these three types of AI applications.

Prerequisites

Please refer to the prerequisites for DSPM for AI in the Microsoft Learn Docs.

Login to the Purview portal

To begin, log into the Microsoft Purview portal with your admin credentials: In the Microsoft Purview portal, go to the Home page. Find DSPM for AI under solutions.

1. Securing Microsoft 365 Copilot

Be sure to check out our blog on How to use the DSPM for AI data assessment report to help you address oversharing concerns when you deploy Microsoft 365 Copilot.

Discover potential data security risks in Microsoft 365 Copilot interactions

In the Overview tab of DSPM for AI, start with the tasks in "Get Started" and Activate Purview Audit if you have not yet activated it in your tenant to get insights into user interactions with Microsoft Copilot experiences. In the Recommendations tab, review the recommendations that are under "Not Started". Create the following data discovery policy to discover sensitive information in AI interactions by clicking into it.
Detect risky interactions in AI apps - This public preview Purview Insider Risk Management policy helps calculate user risk by detecting risky prompts and responses in Microsoft 365 Copilot experiences. Click here to learn more about the Risky AI usage policy.

With the policies to discover sensitive information in Microsoft Copilot experiences in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter to Microsoft Copilot experiences, and review the following for Microsoft Copilot experiences:
Total interactions over time (Microsoft Copilot)
Sensitive interactions per AI app
Top unethical AI interactions
Top sensitivity labels referenced in Microsoft 365 Copilot
Insider Risk severity
Insider risk severity per AI app
Potential risky AI usage

Protect sensitive data in Microsoft 365 Copilot interactions

From the Reports tab, click on "View details" for each of the report graphs to view detailed activities in the Activity Explorer. Using the available filters, filter the results to view activities from Microsoft Copilot experiences based on Activity type, AI app category and App type, Scope (which supports administrative units for DSPM for AI), and more. Then drill down into each activity to view details, including the capability to view prompts and responses with the right permissions.

To protect the sensitive data in interactions for Microsoft 365 Copilot, review the Not Started policies in the Recommendations tab and create these policies:
Information Protection Policy for Sensitivity Labels - This option creates default sensitivity labels and sensitivity label policies. If you've already configured sensitivity labels and their policies, this configuration is skipped.
Protect sensitive data referenced in Microsoft 365 Copilot - This guides you through the process of creating a Purview Data Loss Prevention (DLP) policy to restrict the processing of content with specific sensitivity labels in Copilot interactions. Click here to learn more about Data Loss Prevention for Microsoft 365 Copilot.
Protect sensitive data referenced in Copilot responses - Sensitivity labels help protect files by controlling user access to data. Microsoft 365 Copilot honors sensitivity labels on files and only shows users files they already have access to in prompts and responses. Use Data assessments to identify potential oversharing risks, including unlabeled files. Stay tuned for an upcoming blog post on using DSPM for AI data assessments!
Use Copilot to improve your data security posture - Data Security Posture Management combines deep insights with Security Copilot capabilities to help you identify and address security risks in your org.

Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in the solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to secure and govern all AI apps.

Govern the prompts and responses in Microsoft 365 Copilot interactions

Understand and comply with AI regulations by selecting "Guided assistance to AI regulations" in the Recommendations tab and walking through the "Actions to take".

From the Recommendations tab, create a "Control unethical behavior in AI" Purview Communication Compliance policy to detect sensitive information in prompts and responses and address potentially unethical behavior in Microsoft Copilot experiences and ChatGPT for Enterprise. This policy covers all users and groups in your organization.

To retain and/or delete Microsoft 365 Copilot prompts and responses, set up a Data Lifecycle policy by navigating to Microsoft Purview Data Lifecycle Management and finding Retention Policies under the Policies header (a scripted sketch of this step appears at the end of this post). You can also preserve, collect, analyze, review, and export Microsoft 365 Copilot interactions by creating an eDiscovery case.

2. Securing Enterprise AI apps

Please refer to this amazing blog on Unlocking the Power of Microsoft Purview for ChatGPT Enterprise | Microsoft Community Hub for detailed information on how to integrate with ChatGPT for Enterprise and the Purview solutions it currently supports through Purview Communication Compliance, Insider Risk Management, eDiscovery, and Data Lifecycle Management. Learn more about the feature also through our public documentation.

3. Securing other AI

Microsoft Purview DSPM for AI currently supports the following list of AI sites. Be sure to also check out our blog on the new Microsoft Purview data security controls for the browser & network to secure other AI apps.

Discover potential data security risks in prompts sent to other AI apps

In the Overview tab of DSPM for AI, go through these three steps in "Get Started" to discover potential data security risks in other AI interactions:
Install Microsoft Purview browser extension
For Windows users: The Purview extension is not necessary for the enforcement of data loss prevention on the Edge browser but is required for Chrome to detect sensitive info pasted or uploaded to AI sites.
The extension is also required to detect browsing to other AI sites through an Insider Risk Management policy for both the Edge and Chrome browsers. Therefore, the Purview browser extension is required for both Edge and Chrome on Windows. For macOS users: The Purview extension is not necessary for the enforcement of data loss prevention on macOS devices, and currently, browsing to other AI sites through Purview Insider Risk Management is not supported on macOS; therefore, no Purview browser extension is required for macOS.
Extend your insights for data discovery - this one-click collection policy will set up three separate Purview detection policies for other AI apps:
Detect sensitive info shared in AI prompts in Edge - a Purview collection policy that detects prompts sent to ChatGPT consumer, Microsoft Copilot, DeepSeek, and Google Gemini in Microsoft Edge and discovers sensitive information shared in prompt contents. This policy covers all users and groups in your organization in audit mode only.
Detect when users visit AI sites - a Purview Insider Risk Management policy that detects when users use a browser to visit AI sites.
Detect sensitive info pasted or uploaded to AI sites - a Purview Endpoint Data Loss Prevention (eDLP) policy that discovers sensitive content pasted or uploaded in Microsoft Edge, Chrome, and Firefox to AI sites. This policy covers all users and groups in your org in audit mode only.

With the policies to discover sensitive information in other AI apps in place, head back to the Reports tab of DSPM for AI to discover any AI interactions that may be risky, with the option to filter by Other AI Apps, and review the following for other AI apps:
Total interactions over time (other AI apps)
Total visits (other AI apps)
Sensitive interactions per AI app
Insider Risk severity
Insider risk severity per AI app

Protect sensitive info shared with other AI apps

From the Reports tab, click on "View details" for each of the report graphs to view detailed activities in the Activity Explorer. Using the available filters, filter the results to view activities based on Activity type, AI app category and App type, Scope (which supports administrative units for DSPM for AI), and more.

To protect the sensitive data in interactions for other AI apps, review the Not Started policies in the Recommendations tab and create these policies:
Fortify your data security - This will create three policies to manage your data security risks with other AI apps:
1) Block elevated risk users from pasting or uploading sensitive info on AI sites - this will create a Microsoft Purview endpoint data loss prevention (eDLP) policy that uses adaptive protection to give a warn-with-override to elevated risk users attempting to paste or upload sensitive information to other AI apps in Edge, Chrome, and Firefox. This policy covers all users and groups in your org in test mode. Learn more about adaptive protection in Data loss prevention.
2) Block elevated risk users from submitting prompts to AI apps in Microsoft Edge - this will create a Microsoft Purview browser data loss prevention (DLP) policy, and using adaptive protection, this policy will block elevated, moderate, and minor risk users attempting to put information into other AI apps using Microsoft Edge. This integration is built into Microsoft Edge. Learn more about adaptive protection in Data loss prevention.
3) Block sensitive info from being sent to AI apps in Microsoft Edge - this will create a Microsoft Purview browser data loss prevention (DLP) policy that detects a selection of common sensitive information types inline and blocks prompts from being sent to AI apps while using Microsoft Edge. This integration is built into Microsoft Edge.

Once you have created policies from the Recommendations tab, you can go to the Policies tab to review and manage all the policies you have created across your organization to discover and safeguard AI activity in one centralized place, as well as edit the policies or investigate alerts associated with those policies in the solution. Note that additional policies not from the Recommendations tab will also appear in the Policies tab when DSPM for AI identifies them as policies to secure and govern all AI apps.

Conclusion

Microsoft Purview DSPM for AI can help you discover, protect, and govern the interactions from AI applications in Microsoft Copilot experiences, Enterprise AI apps, and other AI apps. We recommend you review the Reports in DSPM for AI routinely to discover any new interactions that may be of concern, and create policies to secure and govern those interactions as necessary. We also recommend you utilize the Activity Explorer in DSPM for AI to review Activity Explorer events while users interact with AI, including the capability to view prompts and responses with the right permissions. We will continue to update this blog with new features that become available in DSPM for AI, so be sure to bookmark this page!

Follow-up Reading

Check out this blog on the details of each recommended policy in DSPM for AI: Microsoft Purview - Data Security Posture Management (DSPM) for AI | Microsoft Community Hub
Address oversharing concerns with Microsoft 365 blueprint - aka.ms/Copilot/Oversharing
Microsoft Purview data security and compliance protections for Microsoft 365 Copilot and other generative AI apps | Microsoft Learn
Considerations for deploying Microsoft Purview AI Hub and data security and compliance protections for Microsoft 365 Copilot and Microsoft Copilot | Microsoft Learn
Commonly used properties in Copilot audit logs - Audit logs for Copilot and AI activities | Microsoft Learn
Supported AI sites by Microsoft Purview for data security and compliance protections | Microsoft Learn
Where Copilot usage data is stored and how you can audit it - Microsoft 365 Copilot data protection and auditing architecture | Microsoft Learn
Downloadable whitepaper: Data Security for AI Adoption | Microsoft
Public roadmap for DSPM for AI - Microsoft 365 Roadmap | Microsoft 365
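As a scripted counterpart to the Data Lifecycle Management step in section 1 (retaining or deleting Microsoft 365 Copilot prompts and responses), here is a minimal, hedged sketch using Security & Compliance PowerShell. It assumes Copilot interactions are covered by the Teams chats retention location and uses an example policy name and a one-year duration; the portal path described above remains the documented route, so verify the correct location and settings in your tenant before relying on this.

# Minimal sketch (assumptions noted above): retention policy and rule for
# chat/Copilot interactions via Security & Compliance PowerShell.
Connect-IPPSSession

# Scoped to the Teams chats location, assumed here to include Microsoft 365
# Copilot interactions - verify in your tenant.
$policyName = "Copilot interactions - retain 1 year"
New-RetentionCompliancePolicy -Name $policyName -TeamsChatLocation All -Enabled $true

# Keep items for 365 days; switch the action to KeepAndDelete if you also want
# automatic deletion at the end of the retention period.
New-RetentionComplianceRule -Policy $policyName -RetentionDuration 365 -RetentionComplianceAction Keep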
Smart Auditing: Leveraging Azure AI Agents to Transform Financial Oversight

In today's data-driven business environment, audit teams often spend weeks poring over logs and databases to verify spending and billing information. This time-consuming process is ripe for automation. But is there a way to implement AI solutions without getting lost in complex technical frameworks? While tools like LangChain, Semantic Kernel, and AutoGen offer powerful AI agent capabilities, sometimes you need a straightforward solution that just works. So, what's the answer for teams seeking simplicity without sacrificing effectiveness? This tutorial will show you how to use Azure AI Agent Service to build an AI agent that can directly access your Postgres database to streamline audit workflows. No complex chains or graphs required, just a practical solution to get your audit process automated quickly.

The Auditing Challenge:

It's the month end, and your audit team is drowning in spreadsheets. As auditors reviewing financial data across multiple SaaS tenants, you're tasked with verifying billing accuracy by tracking usage metrics like API calls, storage consumption, and user sessions in Postgres databases. Each tenant generates thousands of transactions daily, and traditionally, this verification process consumes weeks of your team's valuable time.

Typically, teams spend weeks:
Manually extracting data from multiple database tables.
Cross-referencing usage with invoices.
Investigating anomalies through tedious log analysis.
Compiling findings into comprehensive reports.

With an AI-powered audit agent, you can automate these tasks and transform the process. Your AI assistant can:
Pull relevant usage data directly from your database
Identify billing anomalies like unexpected usage spikes
Generate natural language explanations of findings
Create audit reports that highlight key concerns

For example, when reviewing a tenant's invoice, your audit agent can query the database for relevant usage patterns, summarize anomalies, and offer explanations: "Tenant_456 experienced a 145% increase in API usage on April 30th, which explains the billing increase. This spike falls outside normal usage patterns and warrants further investigation."

Let's build an AI agent that connects to your Postgres database and transforms your audit process from manual effort to automated intelligence.

Prerequisites:

Before we start building our audit agent, you'll need:
An Azure subscription (Create one for free).
The Azure AI Developer RBAC role assigned to your account.
Python 3.11.x installed on your development machine.
OR You can also use GitHub Codespaces, which will automatically install all dependencies for you. You'll need to create a GitHub account first if you don't already have one.

Setting Up Your Database:

For this tutorial, we'll use Neon Serverless Postgres as our database. It's a fully managed, cloud-native Postgres solution that's free to start, scales automatically, and works excellently for AI agents that need to query data on demand.

Creating a Neon Database on Azure:

Open the Neon Resource page on the Azure portal
Fill out the form with the required fields and deploy your database
After creation, navigate to the Neon Serverless Postgres Organization service
Click on the Portal URL to access the Neon Console
Click "New Project"
Choose an Azure region
Name your project (e.g., "Audit Agent Database")
Click "Create Project"

Once your project is successfully created, copy the Neon connection string from the Connection Details widget on the Neon Dashboard.
It will look like this: postgresql://[user]:[password]@[neon_hostname]/[dbname]?sslmode=require
Note: Keep this connection string saved; we'll need it shortly.

Creating an AI Foundry Project on Azure:

Next, we'll set up the AI infrastructure to power our audit agent:
Create a new hub and project in the Azure AI Foundry portal by following the guide.
Deploy a model like GPT-4o to use with your agent.
Make note of your Project connection string and Model Deployment name. You can find your connection string in the overview section of your project in the Azure AI Foundry portal, under Project details > Project connection string.

Once you have all three values on hand: Neon connection string, Project connection string, and Model Deployment Name, you are ready to set up the Python project to create an Agent. All the code and sample data are available in this GitHub repository. You can clone or download the project.

Project Environment Setup:

Create a .env file with your credentials:

PROJECT_CONNECTION_STRING="<Your AI Foundry connection string>"
AZURE_OPENAI_DEPLOYMENT_NAME="gpt4o"
NEON_DB_CONNECTION_STRING="<Your Neon connection string>"

Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate  # on macOS/Linux
.venv\Scripts\activate     # on Windows

Install required Python libraries:

pip install -r requirements.txt

Example requirements.txt:

pandas
python-dotenv
sqlalchemy
psycopg2-binary
azure-ai-projects==1.0.0b7
azure-identity

Load Sample Billing Usage Data:

We will use a mock dataset for tenant usage, including computed percent change in API calls and storage usage in GB:

tenant_id   date        api_calls  storage_gb
tenant_456  2025-04-01  1000       25.0
tenant_456  2025-03-31  950        24.8
tenant_456  2025-03-30  2200       26.0

Run python load_usage_data.py to create and populate the usage_data table in your Neon Serverless Postgres instance:

# load_usage_data.py file

import os

from dotenv import load_dotenv
from sqlalchemy import (
    create_engine,
    MetaData,
    Table,
    Column,
    String,
    Date,
    Integer,
    Numeric,
)

# Load environment variables from .env
load_dotenv()

# Load connection string from environment variable
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
engine = create_engine(NEON_DB_URL)

# Define metadata and table schema
metadata = MetaData()

usage_data = Table(
    "usage_data",
    metadata,
    Column("tenant_id", String, primary_key=True),
    Column("date", Date, primary_key=True),
    Column("api_calls", Integer),
    Column("storage_gb", Numeric),
)

# Create table
with engine.begin() as conn:
    metadata.create_all(conn)

    # Insert mock data
    conn.execute(
        usage_data.insert(),
        [
            {"tenant_id": "tenant_456", "date": "2025-03-27", "api_calls": 870, "storage_gb": 23.9},
            {"tenant_id": "tenant_456", "date": "2025-03-28", "api_calls": 880, "storage_gb": 24.0},
            {"tenant_id": "tenant_456", "date": "2025-03-29", "api_calls": 900, "storage_gb": 24.5},
            {"tenant_id": "tenant_456", "date": "2025-03-30", "api_calls": 2200, "storage_gb": 26.0},
            {"tenant_id": "tenant_456", "date": "2025-03-31", "api_calls": 950, "storage_gb": 24.8},
            {"tenant_id": "tenant_456", "date": "2025-04-01", "api_calls": 1000, "storage_gb": 25.0},
        ],
    )

print("✅ usage_data table created and mock data inserted.")

Create a Postgres Tool for the Agent:

Next, we configure an AI agent tool to retrieve data from Postgres. The Python script billing_agent_tools.py contains:
The function billing_anomaly_summary() that:
Pulls usage data from Neon.
Computes % change in api_calls.
Flags anomalies with a threshold of > 1.5x change.
Exports the user_functions list for the Azure AI Agent to use.
You do not need to run it separately.

# billing_agent_tools.py file

import os
import json
import pandas as pd
from sqlalchemy import create_engine
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Set up the database engine
NEON_DB_URL = os.getenv("NEON_DB_CONNECTION_STRING")
db_engine = create_engine(NEON_DB_URL)


# Define the billing anomaly detection function
def billing_anomaly_summary(
    tenant_id: str,
    start_date: str = "2025-03-27",
    end_date: str = "2025-04-01",
    limit: int = 10,
) -> str:
    """
    Fetches recent usage data for a SaaS tenant and detects potential billing anomalies.

    :param tenant_id: The tenant ID to analyze.
    :type tenant_id: str
    :param start_date: Start date for the usage window.
    :type start_date: str
    :param end_date: End date for the usage window.
    :type end_date: str
    :param limit: Maximum number of records to return.
    :type limit: int
    :return: A JSON string with usage records and anomaly flags.
    :rtype: str
    """
    query = """
        SELECT date, api_calls, storage_gb
        FROM usage_data
        WHERE tenant_id = %s AND date BETWEEN %s AND %s
        ORDER BY date DESC
        LIMIT %s;
    """
    df = pd.read_sql(query, db_engine, params=(tenant_id, start_date, end_date, limit))

    if df.empty:
        return json.dumps(
            {"message": "No usage data found for this tenant in the specified range."}
        )

    df.sort_values("date", inplace=True)
    df["pct_change_api"] = df["api_calls"].pct_change()
    df["anomaly"] = df["pct_change_api"].abs() > 1.5

    return df.to_json(orient="records")


# Register this in a list to be used by FunctionTool
user_functions = [billing_anomaly_summary]

Create and Configure the AI Agent:

Now we'll set up the AI agent and integrate it with our Neon Postgres tool using the Azure AI Agent Service SDK. The Python script does the following:
Creates the agent - Instantiates an AI agent using the selected model (gpt-4o, for example), adds tool access, and sets instructions that tell the agent how to behave (e.g., "You are a helpful SaaS assistant...").
Creates a conversation thread - A thread is started to hold a conversation between the user and the agent.
Posts a user message - Sends a question like "Why did my billing spike for tenant_456 this week?" to the agent.
Processes the request - The agent reads the message, determines that it should use the custom tool to retrieve usage data, and processes the query.
Displays the response - Prints the response from the agent with a natural language explanation based on the tool's output.
# billing_anomaly_agent.py

import os
from datetime import datetime
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential
from azure.ai.projects.models import FunctionTool, ToolSet
from dotenv import load_dotenv
from pprint import pprint
from billing_agent_tools import user_functions  # Custom tool function module

# Load environment variables from .env file
load_dotenv()

# Create an Azure AI Project Client
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Initialize toolset with our user-defined functions
functions = FunctionTool(user_functions)
toolset = ToolSet()
toolset.add(functions)

# Create the agent
agent = project_client.agents.create_agent(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
    name=f"billing-anomaly-agent-{datetime.now().strftime('%Y%m%d%H%M')}",
    description="Billing Anomaly Detection Agent",
    instructions=f"""
    You are a helpful SaaS financial assistant that retrieves and explains billing anomalies using usage data.
    The current date is {datetime.now().strftime("%Y-%m-%d")}.
    """,
    toolset=toolset,
)
print(f"Created agent, ID: {agent.id}")

# Create a communication thread
thread = project_client.agents.create_thread()
print(f"Created thread, ID: {thread.id}")

# Post a message to the agent thread
message = project_client.agents.create_message(
    thread_id=thread.id,
    role="user",
    content="Why did my billing spike for tenant_456 this week?",
)
print(f"Created message, ID: {message.id}")

# Run the agent and process the query
run = project_client.agents.create_and_process_run(thread_id=thread.id, agent_id=agent.id)
print(f"Run finished with status: {run.status}")

if run.status == "failed":
    print(f"Run failed: {run.last_error}")

# Fetch and display the messages
messages = project_client.agents.list_messages(thread_id=thread.id)
print("Messages:")
pprint(messages["data"][0]["content"][0]["text"]["value"])

# Optional cleanup:
# project_client.agents.delete_agent(agent.id)
# print("Deleted agent")

Run the agent:

To run the agent, run the following command:

python billing_anomaly_agent.py

Snippet of output from agent: (see the screenshot in the original post)

Using the Azure AI Foundry Agent Playground:

After running your agent using the Azure AI Agent SDK, it is saved within your Azure AI Foundry project. You can now experiment with it using the Agent Playground. To try it out:
Go to the Agents section in your Azure AI Foundry workspace.
Find your billing anomaly agent in the list and click to open it.
Use the playground interface to test different financial or billing-related questions, such as:
"Did tenant_456 exceed their API usage quota this month?"
"Explain recent storage usage changes for tenant_456."

This is a great way to validate your agent's behavior without writing more code.

Summary:

You've now created a working AI agent that talks to your Postgres database, all using:
A simple Python function
Azure AI Agent Service
A Neon Serverless Postgres backend

This approach is beginner-friendly, lightweight, and practical for real-world use.

Want to go further?
You can:
Add more tools to the agent
Integrate with vector search (e.g., detect anomaly reasons from logs using embeddings)

Resources:
Introduction to Azure AI Agent Service
Develop an AI agent with Azure AI Agent Service
Getting Started with Azure AI Agent Service
Neon on Azure
Build AI Agents with Azure AI Agent Service and Neon
Multi-Agent AI Solution with Neon, Langchain, AutoGen and Azure OpenAI
Azure AI Foundry GitHub Discussions

That's it, folks! But the best part? You can become part of a thriving community of learners and builders by joining the Microsoft Learn Student Ambassadors Community. Connect with like-minded individuals, explore hands-on projects, and stay updated with the latest in cloud and AI. Join the community on Discord here and explore more benefits on the Microsoft Learn Student Hub.
Increased security visibility through new Standard Logs in Microsoft Purview Audit

In response to the increasing frequency and evolution of cyberthreats, Microsoft is providing access to wider cloud security logs to its worldwide customers at no additional cost. Audit (Standard) customers can now access these additional logs, which have been identified as a result of close coordination with commercial and government customers, and with the Cybersecurity and Infrastructure Security Agency (CISA).
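For teams that want to confirm the newly available events are flowing, here is a minimal, hedged sketch using Exchange Online PowerShell. It assumes the MailItemsAccessed operation (one of the log types broadened to Audit (Standard) customers) and a 24-hour window; adjust the operations, dates, and result size to fit your investigation.

# Minimal sketch: check that MailItemsAccessed events are being recorded.
# Requires the ExchangeOnlineManagement module and an audit-permissioned admin.
Connect-ExchangeOnline

$auditParams = @{
    StartDate  = (Get-Date).AddDays(-1)
    EndDate    = Get-Date
    Operations = 'MailItemsAccessed'   # assumed to be among the expanded Standard logs
    ResultSize = 500
}
Search-UnifiedAuditLog @auditParams |
    Select-Object CreationDate, UserIds, Operations, RecordType |
    Format-Table -AutoSize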
Hacking Made Easy, Patching Made Optional: A Modern Cyber Tragedy
In today's cyber threat landscape, the tools and techniques required to compromise enterprise environments are no longer confined to highly skilled adversaries or state-sponsored actors. While artificial intelligence is increasingly being used to enhance the sophistication of attacks, the majority of breaches still rely on simple, publicly accessible tools and well-established social engineering tactics. Another major issue is the persistent failure of enterprises to patch common vulnerabilities in a timely manner, despite the availability of fixes and public warnings. This negligence continues to be a key enabler of large-scale breaches, as demonstrated in several recent incidents.

The Rise of AI-Enhanced Attacks
Attackers are now leveraging AI to increase the credibility and effectiveness of their campaigns. One notable example is the use of deepfake technology (synthetic media generated using AI) to impersonate individuals in video or voice calls. North Korean threat actors, for instance, have been observed using deepfake videos and AI-generated personas to conduct fraudulent job interviews with HR departments at Western technology companies. These scams are designed to gain insider access to corporate systems or to exfiltrate sensitive intellectual property under the guise of legitimate employment.

Social Engineering: Still the Most Effective Entry Point
Yet many recent breaches have begun with classic social engineering techniques. In the cases of Coinbase and Marks & Spencer, attackers impersonated employees through phishing or fraudulent communications. Once they had gathered sufficient personal information, they contacted support desks or mobile carriers, convincingly posing as the victims to request password resets or SIM swaps. This impersonation enabled attackers to bypass authentication controls and gain initial access to sensitive systems, which they then leveraged to escalate privileges and move laterally within the network. Threat groups such as Scattered Spider have demonstrated mastery of these techniques, often combining phishing with SIM swap attacks and MFA bypass to infiltrate telecom and cloud infrastructure. Similarly, Solt Thypoon (formerly DEV-0343), linked to North Korean operations, has used AI-generated personas and deepfake content to conduct fraudulent job interviews, gaining insider access under the guise of legitimate employment. These examples underscore the evolving sophistication of social engineering and the need for robust identity verification protocols.

Built for Defense, Used for Breach
Despite the emergence of AI-driven threats, many of the most successful attacks continue to rely on simple, freely available tools that require minimal technical expertise. These tools are widely used by security professionals for legitimate purposes such as penetration testing, red teaming, and vulnerability assessments. However, they are also routinely abused by attackers to compromise systems; case studies abound for tools like Nmap, Metasploit, Mimikatz, BloodHound, and Cobalt Strike. The dual-use nature of these tools underscores the importance of not only detecting their presence but also understanding the context in which they are being used.

From CVE to Compromise
While social engineering remains a common entry point, many breaches are ultimately enabled by known vulnerabilities that remain unpatched for extended periods.
For example, the MOVEit Transfer vulnerability (CVE-2023-34362) was exploited by the Cl0p ransomware group to compromise hundreds of organizations, despite a patch being available. Similarly, the OpenMetadata vulnerabilities (CVE-2024-28255, CVE-2024-28847) allowed attackers to gain access to Kubernetes workloads and leverage them for cryptomining activity days after a fix had been issued. Advanced persistent threat groups such as APT29 (also known as Cozy Bear) have historically exploited unpatched systems to maintain long-term access and conduct stealthy operations. Their use of credential harvesting tools like Mimikatz and lateral movement frameworks such as Cobalt Strike highlights the critical importance of timely patch management, not just for ransomware defense but also for countering nation-state actors.

Recommendations
To reduce the risk of enterprise breaches stemming from tool misuse, social engineering, and unpatched vulnerabilities, organizations should adopt the following practices:

1. Patch Promptly and Systematically
Ensure that software updates and security patches are applied in a timely and consistent manner. This involves automating patch management processes to reduce human error and delay, while prioritizing vulnerabilities based on their exploitability and exposure. Microsoft Intune can be used to enforce update policies across devices, while Windows Autopatch simplifies the deployment of updates for Windows and Microsoft 365 applications. To identify and rank vulnerabilities, Microsoft Defender Vulnerability Management offers risk-based insights that help focus remediation efforts where they matter most.

2. Implement Multi-Factor Authentication (MFA)
To mitigate credential-based attacks, MFA should be enforced across all user accounts. Conditional Access policies should be configured to adapt authentication requirements based on contextual risk factors such as user behavior, device health, and location. Microsoft Entra Conditional Access allows for dynamic policy enforcement, while Microsoft Entra ID Protection identifies and responds to risky sign-ins. Organizations should also adopt phishing-resistant MFA methods, including FIDO2 security keys and certificate-based authentication, to further reduce exposure.

3. Identity Protection
Access Reviews and Least Privilege Enforcement
Conducting regular access reviews ensures that users retain only the permissions necessary for their roles. Applying least privilege principles and adopting Microsoft Zero Trust Architecture limits the potential for lateral movement in the event of a compromise. Microsoft Entra Access Reviews automates these processes, while Privileged Identity Management (PIM) provides just-in-time access and approval workflows for elevated roles.

Just-in-Time Access and Risk-Based Controls
Standing privileges should be minimized to reduce the attack surface. Risk-based Conditional Access policies can block high-risk sign-ins and enforce additional verification steps. Microsoft Entra ID Protection identifies risky behaviors and applies automated controls, while Conditional Access ensures access decisions are based on real-time risk assessments to block or challenge high-risk authentication attempts.

Password Hygiene and Secure Authentication
Promoting strong password practices and transitioning to passwordless authentication enhances security and user experience.
Microsoft Authenticator supports multi-factor and passwordless sign-ins, while Windows Hello for Business enables biometric authentication using secure hardware-backed credentials.

4. Deploy SIEM and XDR for Detection and Response
A robust detection and response capability is vital for identifying and mitigating threats across endpoints, identities, and cloud environments. Microsoft Sentinel serves as a cloud-native SIEM that aggregates and analyses security data, while Microsoft Defender XDR integrates signals from multiple sources to provide a unified view of threats and automate response actions.

5. Map and Harden Attack Paths
Organizations should regularly assess their environments for attack paths such as privilege escalation and lateral movement. Tools like Microsoft Defender for Identity help uncover Lateral Movement Paths, while Microsoft Identity Threat Detection and Response (ITDR) integrates identity signals with threat intelligence to automate response. These capabilities are accessible via the Microsoft Defender portal, which includes an attack path analysis feature for prioritizing multicloud risks.

6. Stay Current with Threat Actor TTPs
Monitor the evolving tactics, techniques, and procedures (TTPs) employed by sophisticated threat actors. Understanding these behaviours enables organizations to anticipate attacks and strengthen defenses proactively. Microsoft Defender Threat Intelligence provides detailed profiles of threat actors and maps their activities to the MITRE ATT&CK framework. Complementing this, Microsoft Sentinel allows security teams to hunt for these TTPs across enterprise telemetry and correlate signals to detect emerging threats.

7. Build Organizational Awareness
Organizations should train staff to identify phishing, impersonation, and deepfake threats. Simulated attacks help improve response readiness and reduce human error. Use Attack Simulation Training in Microsoft Defender for Office 365 to run realistic phishing scenarios and assess user vulnerability. Additionally, educate users about consent phishing, where attackers trick individuals into granting access to malicious apps.

Conclusion
The democratization of offensive security tooling, combined with the persistent failure to patch known vulnerabilities, has significantly lowered the barrier to entry for cyber attackers. Organizations must recognize that the tools used against them are often the same ones available to their own security teams. The key to resilience lies not in avoiding these tools, but in mastering them: using them to simulate attacks, identify weaknesses, and build a proactive defense. Cybersecurity is no longer a matter of if, but when. The question is: will you detect the attacker before they achieve their objective? Will you be able to stop them before they reach your most sensitive data?
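As a practical companion to recommendation 6, the sketch below shows one way to hunt for the dual-use tools discussed earlier across Log Analytics/Sentinel telemetry from Python, using the azure-monitor-query client. The LOG_ANALYTICS_WORKSPACE_ID environment variable and the KQL query (a quick search of process-creation events for a few well-known tool names) are illustrative assumptions, not a production detection.

# Hunt for a few well-known dual-use tools in Sentinel / Log Analytics telemetry.
# pip install azure-monitor-query azure-identity
import os
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

client = LogsQueryClient(DefaultAzureCredential())

# Illustrative KQL: the table and tool list are starting points, not a full detection.
query = """
SecurityEvent
| where EventID == 4688
| where NewProcessName has_any ("mimikatz", "SharpHound", "cobaltstrike")
| project TimeGenerated, Computer, Account, NewProcessName, CommandLine
| take 50
"""

response = client.query_workspace(
    workspace_id=os.environ["LOG_ANALYTICS_WORKSPACE_ID"],  # assumed environment variable
    query=query,
    timespan=timedelta(days=7),
)

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(dict(zip(table.columns, row)))
else:
    print(f"Query returned partial results: {response.partial_error}")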
Additional reading:
Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026
Cyber security breaches survey 2025 - GOV.UK
Jasper Sleet: North Korean remote IT workers' evolving tactics to infiltrate organizations | Microsoft Security Blog
MOVEit Transfer vulnerability
Solt Thypoon
Scattered Spider
SIM swaps
Attackers exploiting new critical OpenMetadata vulnerabilities on Kubernetes clusters | Microsoft Security Blog
Microsoft Defender Vulnerability Management | Microsoft Learn
Zero Trust Architecture | NIST
tactics, techniques, and procedures (TTP) - Glossary | CSRC
https://learn.microsoft.com/en-us/security/zero-trust/deploy/overview
How to use Log Analytics log data exported to Storage Accounts
In this blog post I explore some options for accessing logs that were archived in Azure storage account containers, either through export from Log Analytics and Sentinel or through a custom Logic App. This addresses exceptional cases where you need that archived data, for example for historical context during an investigation.
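To give a flavour of what reading that archived data can look like, here is a minimal, hedged sketch that lists and reads JSON blobs written by Log Analytics data export to a storage account. It assumes the default am-<TableName> container naming used by data export, a connection string in a STORAGE_CONNECTION_STRING environment variable, and the y=/m=/d= folder layout the export creates; the example date in the filter is a placeholder.

# Read SecurityEvent records exported by Log Analytics data export to a storage account.
# pip install azure-storage-blob
import json
import os

from azure.storage.blob import BlobServiceClient

# Assumptions: connection string in an environment variable and the default
# am-<TableName> container naming convention used by data export.
service = BlobServiceClient.from_connection_string(os.environ["STORAGE_CONNECTION_STRING"])
container = service.get_container_client("am-SecurityEvent")

# Narrow the listing to one day using the y=/m=/d= folder structure the export creates.
for blob in container.list_blobs(name_starts_with="WorkspaceResourceId="):
    if "/y=2024/m=10/d=07/" not in blob.name:  # placeholder date filter
        continue
    data = container.download_blob(blob.name).readall().decode("utf-8")
    # Exported blobs contain one JSON record per line.
    for line in data.splitlines():
        if line.strip():
            record = json.loads(line)
            print(record.get("TimeGenerated"), record.get("Computer"))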