https://graph.microsoft.com/v1.0/groups/<the owner id from the plan>/members
Now that I have my product, bucket and resource ids, I can create the JSON lookup file that I’ll be using from PowerShell. The format is as follows – obviously your ids will be different, and feel free to choose different products – this file drives the selection, placement and assignment of messages to tasks.
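As a minimal sketch of what such a lookup file could look like – the property names (product, bucketId, assigneeId) are my illustrative choices rather than the attached file’s exact schema, and the values are placeholders – each product simply maps to a bucket and a person to assign:

```json
[
  {
    "product": "SharePoint Online",
    "bucketId": "<bucket id from Step 1>",
    "assigneeId": "<user id from the group members call>"
  },
  {
    "product": "Exchange Online",
    "bucketId": "<bucket id from Step 1>",
    "assigneeId": "<user id from the group members call>"
  }
]
```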
An Application Id, sometimes called an AppId or a ClientId, is an identifier from Azure that has certain permissions associated with it and is then used, along with your credentials, to get a token that allows you to do stuff with the various APIs. In my scenario we will be hitting both the Office 365 Service Communications API and the Graph API. I’ll be using the same AppId for both and setting the necessary permissions, but you could also use two different AppIds, as the different APIs are used by my two different functions. To get an Application Id you need to navigate to the Azure AD Admin Center for your Office 365 account (which may not be the same account as your Azure Portal account). From the Office 365 Admin Portal, navigate to the Admin Centers list in the left navigation – Azure AD is usually found at the bottom.
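To make that token flow concrete, here is a sketch of acquiring a token with the ADAL library from PowerShell. It assumes the ADAL 2.x AuthenticationContext API that shipped with the Azure SDK at the time (newer ADAL versions replace UserCredential with UserPasswordCredential and AcquireToken with AcquireTokenAsync), and the placeholder values are yours to fill in:

```powershell
# Illustrative sketch – assumes the ADAL 2.x AuthenticationContext API
Add-Type -Path "Microsoft.IdentityModel.Clients.ActiveDirectory.dll"

$authority = "https://login.microsoftonline.com/<yourtenant>.onmicrosoft.com"
$clientId  = "<your Application Id>"
$resource  = "https://graph.microsoft.com"   # or https://manage.office.com for Service Communications

$authContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext($authority)
$userCred    = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.UserCredential("<user>", "<password>")
$token       = $authContext.AcquireToken($resource, $clientId, $userCred)

# The bearer token then goes in the Authorization header of every API call
$headers = @{ Authorization = "Bearer $($token.AccessToken)" }
```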
Once you land at aad.portal.azure.com, click on the App registrations option, where we will create a new App registration and get our Id.
Click New application registration at the top of the App registrations blade, enter a name for your App (it can be anything), and make sure you select Native; the Redirect URI can be your O365 tenant. The Create button is at the foot of the blade (move the focus to another field if it isn’t active).
Next, select the App registration you just created and copy the Application ID from the Essentials pane for use later – then click All settings so we can give it some permissions, by selecting Required permissions on the next blade.
I’ll be setting all the permissions required on this single App ID – but if you wanted to split them, then the function that writes the plans needs one more permission added to the Windows Azure Active Directory API: Read and write all groups (Sign in and read user profile will already be selected). You will then also need to click Save, and then Grant Permissions (and Yes to the dialog) in the left blade. Granting permissions for a Native App is akin to the dialog you will no doubt have seen in some web apps, where you have to acknowledge that the App can do stuff for you before it takes you to the App itself.
For the other permissions required to read the Message Center posts, we will add another API by selecting the Add option, then Select an API, and choosing Office 365 Management APIs. From the list of delegated permissions we only need to select the top one (as I write this, anyway): Read service health information for your organization. It is worth scanning the others just to get an idea of what other data this API may allow you to read for other applications. Select, Done and then Grant Permissions will finish this step (and you did copy the Application ID, didn’t you?). Our permissions page showing the API we just added should look like this:
(If you’d rather just run the PowerShell from your desktop – and you have the permissions then you can skip the Azure stuff – meet me again at Step 5 where I’ll be walking through the actual PowerShell code in the functions)
If you already have an Azure subscription then you are good to go – or you can create a free account (https://azure.microsoft.com/en-us/free/) if you just want to try things out. You get a $200 credit (or local equivalent), and you also get a free amount of execution time each month (400,000 GB-s) and 1 million executions. There is also a small cost for the storage account, which also gets used for the queue.
Once you have a subscription, click New in the upper left, and in the Compute section you will find Function App. Once clicked, it will prompt you for a name (which needs to be unique across Azure – I usually prefix things I’m working on with my alias). You also select your subscription and either create a new or use an existing resource group (I’m using a new one – it makes cleaning up easier, in that I can just delete the resource group when I’m finished – if this is a keeper for you then you might want to use an existing one). I’m going with the Consumption plan, the West US location and a new storage account, and finally turning on Application Insights, which gives some useful telemetry on your working system. Check Pin to dashboard so you can easily find your Function App again, and click Create.
It just takes a few minutes to deploy and will show on your dashboard when it is ready. In subsequent steps we will be adding our actual functions, but first we will configure several application settings – basically some of the variables that are easier to pull into the function than to hard-code wherever you need them. Once the blade for your new Function App opens you will see a section bottom right labelled Configured features – and we are headed to Application settings to set the settings for our application:
There are a bunch of settings already configured – we will be adding the following:
*** added tenantId 10/28/2017 - You can find it by going to the Admin Portal, then the Admin Center for Azure AD, then the Properties item under Manage – the Directory ID is the GUID you are looking for. ***
Variable               Value                                      Purpose
aadtenant              <yourtenant>.onmicrosoft.com               The tenant we are working with
aad_username           <yourname>@<yourtenant>.onmicrosoft.com    The login we will be using
aad_password           ***************************               Password for the above
clientId               <your Application Id from Step 3>          Identifies our app and gets the permissions
messageCenterPlanId    <the PlanId of the Plan from Step 1>       Makes sure we write to the right plan
tenantId               <see above>                                Identifies your tenant for the API calls
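Inside an Azure Function these application settings surface as environment variables, so (as a quick sketch) the function scripts can pick them up like this:

```powershell
# Application settings are exposed to the function as environment variables
$tenant   = $env:aadtenant
$username = $env:aad_username
$password = $env:aad_password
$clientId = $env:clientId
$planId   = $env:messageCenterPlanId
$tenantId = $env:tenantId
```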
To create our first function – which will be the one that reads the messages based on a timer (hourly for starters) and writes to a storage queue – we first hit the + sign next to Functions:
There is a new Wizard for premade functions – but for the PowerShell one I am creating I’ll go for the Custom function option in the lower part of the screen:
I choose PowerShell in the language dropdown (but take a look at the other options while you are there) and the middle one is the one for us – TimerTrigger – PowerShell. I’ll call it ReadMessagesOnTimer and set the schedule to hourly with the cron format of 0 0 * * * *, then click Create. (Daily may be fast enough in production – although if you were using this to read service health data and not messages then hourly may be appropriate. See https://codehollow.com/2017/02/azure-functions-time-trigger-cron-cheat-sheet/ for a good cron reference for Azure Functions.)
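Behind the scenes the portal stores that schedule in the function’s function.json binding – roughly like the sketch below. Azure Functions uses a six-field NCRONTAB expression (second, minute, hour, day, month, day-of-week), so 0 0 * * * * fires at the top of every hour:

```json
{
  "bindings": [
    {
      "type": "timerTrigger",
      "name": "myTimer",
      "direction": "in",
      "schedule": "0 0 * * * *"
    }
  ],
  "disabled": false
}
```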
The initial script just outputs a timestamp of when the function was executed. We have some other settings to add before we paste in the script.
If we go to the View files tab on the right, there are a couple of files we need to upload. One is the products.json file we created earlier, and the second is a dll that we need for the Azure (ADAL) authentication. This is called Microsoft.IdentityModel.Clients.ActiveDirectory.dll and can be found in C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Services by default – and if you don’t see it then you probably need the Azure SDK https://azure.microsoft.com/en-us/downloads/ . Upload both of these files.
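Once uploaded, the files sit alongside the function script, so loading them looks something like this sketch (the wwwroot path is the default for a Windows Function App, and the folder name must match your function name):

```powershell
# Function files live under wwwroot\<function name> by default – edit to match yours
$funcDir = "D:\home\site\wwwroot\ReadMessagesOnTimer"

# Load the ADAL dll we uploaded, then parse the product lookup file
Add-Type -Path "$funcDir\Microsoft.IdentityModel.Clients.ActiveDirectory.dll"
$products = Get-Content "$funcDir\products.json" -Raw | ConvertFrom-Json
```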
Next in the Integrate section under our function we will add a new output – for writing to our storage queue. After we click New output we can choose Azure Queue Storage from the panel and click Select.
For the Message parameter name and Queue name I’ll leave the defaults, and for the Storage account connection I’ll choose AzureWebJobsStorage from the dropdown – then click Save. And while I’m here I’ll also create a new function triggered by this output by clicking Go.
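In the v1 PowerShell model the output binding is exposed as a variable (named after the Message parameter name – outputQueueItem by default) that holds a file path, so a sketch of queueing a message from the first function might look like:

```powershell
# Illustrative task payload – the real script builds this from the message data
$task = @{
    title    = "MC123456 - Example Message Center post"
    bucketId = "<bucket id from products.json>"
}

# Writing to the file path in $outputQueueItem sends the message to the queue
$task | ConvertTo-Json | Out-File -Encoding UTF8 $outputQueueItem
```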
Again I will choose PowerShell from the Language dropdown – name my function WriteTaskToPlan and click Create.
My second function also needs to have the dll uploaded but doesn’t need the products file as I do all the selection and tagging in the first function and write that out to the queue.
Now for the fun stuff – pasting in the PowerShell code itself and running! The code is attached in ReadMessageOnTimer.txt and WriteTaskToPlan.txt. We can start with the first one – paste it in, and there shouldn’t be any need for edits unless you have deviated from my naming. One potential change is the path to the uploaded documents, which includes the name of the function – so edit as necessary. You can run this just using the Run option – you will see an exception the first time, as the queue gets created if it doesn’t exist – and it will populate the queue with messages. If I navigate back to the Integrate section for my WriteTaskToPlan function (which we haven’t pasted in yet, unless you are ahead of me) and change the Queue name to the real one I am using, “message-center-to-planner-tasks”, we should quickly see that our queue gets drained – assuming you have a way to monitor the queue! This is where Azure Storage Explorer comes in handy – see the foot of the https://azure.microsoft.com/en-us/downloads/ page under Standalone tools.
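The heart of that first function is a call to the Service Communications API. As a sketch (assuming $headers holds a bearer token issued for https://manage.office.com, and $env:tenantId is the setting from Step 4), fetching just the Message Center posts looks roughly like:

```powershell
# Filter the Messages endpoint down to Message Center posts
$uri = "https://manage.office.com/api/v1.0/$($env:tenantId)/ServiceComms/Messages" +
       "?`$filter=MessageType eq 'MessageCenter'"
$messages = (Invoke-RestMethod -Uri $uri -Headers $headers).value
```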
In this screenshot I managed to see some of the messages before they were picked up by the 2nd function (which isn’t actually creating tasks yet).
The storage account is my brismitho365mcstore and I can see my message-center-to-planner-tasks queue. We can also monitor function activity via the Monitor section under each of the functions – and this shows the ‘empty’ function pulling messages from the queue:
If I now overwrite the current contents (2 lines) of my WriteTaskToPlan function with the contents of WriteTaskToPlan.txt – updating the paths to the files as necessary if the name isn’t the same as mine – then I can save, and I should now have a working system. I can either wait until the top of the hour – or just run the ReadMessageOnTimer function manually to check that all is working. Just to show some of the debugging capabilities, I ‘forgot’ to load the dll – so in my case I had a number of failures (it tries each queue message 3 times before giving up), and here you can see the really useful log info that shows that my dll wasn’t found. (Thinking back, I probably could have loaded it at the wwwroot level to serve both functions too.)
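For the second function, the Planner side of the system boils down to a Graph call. Here is a minimal sketch of creating a task in the right plan, bucket and assignee (the field values are illustrative, and $headers is assumed to hold a Graph bearer token):

```powershell
# Build a Planner task from the queued message data and post it to Graph
$body = @{
    planId   = $env:messageCenterPlanId
    bucketId = "<bucket id looked up in the first function>"
    title    = "MC123456 - Example Message Center post"
    assignments = @{
        "<assignee user id>" = @{
            "@odata.type" = "#microsoft.graph.plannerAssignment"
            orderHint     = " !"
        }
    }
} | ConvertTo-Json -Depth 4

Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/planner/tasks" `
    -Method Post -Headers $headers -ContentType "application/json" -Body $body
```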
With my dll uploaded and another manual run of my function we are in business!
And to show what the tasks look like when I drill in, let’s take a look at a rich content message (actually a test sent just to my tenant with a Planner title, MC123579 - Planner: test format targeted post – we have a bug currently where the thumbnail for the image isn’t showing). But first, this is the original message in the Message Center – complete with a product link and a YouTube video:
Then in Planner we don’t have that richness, so instead I trawled the text for URLs and added them as attachments – so the person who is assigned the task in Planner can still review the content (and the potential target audience here is people who can’t access the Message Center because they are not global admins, remember). You can also see here the categories set from metadata in the message.
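The URL-trawling can be as simple as a regular expression over the message body. This sketch (the pattern and sample text are mine, not the attached code’s) pulls out anything that looks like a link:

```powershell
# Illustrative: extract http/https links from the message text
$messageText = "See the roadmap at https://products.office.com and the video at https://youtu.be/abc123."
$urls = [regex]::Matches($messageText, 'https?://[^\s"<>]+') |
        ForEach-Object { $_.Value.TrimEnd('.', ',') }
# $urls now holds the links to add to the task as attachments
```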
That turned into a pretty long blog – even if it is mostly pictures – and I really wanted to step through the code too to explain what I was doing – as I’m sure even if you don’t want to do this precise thing there may be parts that are useful – but I’ll save that for another blog.
This last part takes the logic of the two functions and just puts it into one PowerShell script that you could run from your desktop (assuming the right permissions, the Office 365 PowerShell connectivity – see https://technet.microsoft.com/en-us/library/dn975125.aspx – and also the Azure SDK stuff: https://azure.microsoft.com/en-us/downloads/ – PowerShell is listed halfway down in the command-line tools).
You will still need to do Steps 1 to 3 to create your Plan, get your bucket and resource info and create your products.json – and register an ApplicationId – and then the PowerShell is in the attachment – MCtoPlannerFull.txt. You will need to edit the various constants – all identified by <something that needs editing> type text. The main differences are references to local files – not going via a queue, and looping round the entire set of messages as it writes them out.
Hopefully the copy/paste of the text hasn’t broken anything – but it is always worth checking the quotes and dashes, just in case they have been changed by one of the editors I’ve used!
This is just a sample – and there are many ways you could change things around, from pulling the other message types, such as service information, to writing out to other applications like Yammer or Teams. That is the beauty of the Graph APIs – once you get familiar with them they open up a whole world of applications. And maybe the Message Center will get the PowerApp/Flow treatment – and make this even easier!