Latest Discussions
Effective Cloud Governance: Leveraging Azure Activity Logs with Power BI
We all generally accept that governance in the cloud is a continuous journey, not a destination. There's no one-size-fits-all solution, and depending on the size of your Azure cloud estate, staying on top of things can be challenging even at the best of times. One way of keeping your finger on the pulse is to closely monitor your Azure Activity Log. This log contains a wealth of information, ranging from noise to interesting to actionable data. You could set up alerts for delete and update signals; however, that can result in a flood of notifications. To address this challenge, you could develop a Power BI report, similar to this one, that pulls in the Azure Activity Log and lets you group and summarize data by various dimensions. You still need someone to review the report regularly, but consuming the data this way makes it a whole lot easier. This by no means replaces the need to set up alerts for key signals, but it does give you a great view of what has happened in your environment. If you're interested, this is the KQL query I'm using in Power BI:

```kql
let start_time = ago(24h);
let end_time = now();
AzureActivity
| where TimeGenerated > start_time and TimeGenerated < end_time
| where OperationNameValue contains 'WRITE' or OperationNameValue contains 'DELETE'
| project TimeGenerated, Properties_d.resource, ResourceGroup, OperationNameValue, Authorization_d.scope, Authorization_d.action, Caller, CallerIpAddress, ActivityStatusValue
| order by TimeGenerated asc
```

Adeelaziz, Jan 17, 2025
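The roll-up the report performs, grouping write/delete operations by dimensions such as caller, can be sketched in plain Python. This is a minimal illustration of the summarization idea, not the Power BI implementation; the sample rows and values are hypothetical stand-ins for exported Activity Log records.

```python
from collections import Counter

def summarize_activity(rows):
    """Count write/delete operations per (caller, operation) pair,
    mirroring the grouping the report surfaces."""
    ops = [r for r in rows
           if "WRITE" in r["OperationNameValue"].upper()
           or "DELETE" in r["OperationNameValue"].upper()]
    return Counter((r["Caller"], r["OperationNameValue"]) for r in ops)

# Hypothetical sample rows standing in for the exported Activity Log.
rows = [
    {"Caller": "alice@contoso.com", "OperationNameValue": "Microsoft.Compute/virtualMachines/write"},
    {"Caller": "alice@contoso.com", "OperationNameValue": "Microsoft.Compute/virtualMachines/write"},
    {"Caller": "bob@contoso.com",   "OperationNameValue": "Microsoft.Compute/disks/delete"},
    {"Caller": "svc-principal",     "OperationNameValue": "Microsoft.Compute/virtualMachines/read"},
]
summary = summarize_activity(rows)  # read operations are filtered out
```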
Azure Diagnostic data cannot be processed by Azure Stream Analytics due to InputDeserializerError
Planning to stream Azure resource (Front Door) diagnostic logs to Azure Stream Analytics. However, I'm having trouble, as data from AzureDiagnostics fails to get deserialized as input for the Stream Analytics job. Error:

Error while deserializing input message Id: Partition: [0], Offset: [3663944], SequenceNumber: [285]. Hit following error: Column name: ErrorInfo is already being used. Please ensure that column names are unique (case insensitive) and do not differ only by whitespaces.

It's caused by duplicate columns, errorInfo and ErrorInfo, on the AzureDiagnostics table; I'm unsure what distinguishes them when observing their values. Any thoughts on how we could transform these diagnostic logs to remove the duplicate column before they are ingested by the Stream Analytics job? I have considered the following solutions, but they aren't straightforward and would probably cost more, so I'd like to hear others' thoughts as well.
1. Transformation using a DCR. I believe this is ideal for sending diagnostic logs to a Log Analytics workspace, but it would mean the logs have to pass through the workspace and then be exported to Stream Analytics, which may require adding more components to the data pipeline.
2. Logic App. I saw a scheduled Logic App used to export data from a Log Analytics workspace with a KQL query and send it to storage; the destination would have to be changed to an event hub instead. Again, too many layers just to pass the data on to ASA.
Any other solution you can suggest for refining the incoming data to ASA while minimizing the use of compute resources?
AizaBC, Aug 02, 2023
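One lightweight option (an assumption on my part, not something from the original pipeline) is a small pre-processor, for example an Azure Function sitting between the event hub and the ASA job, that renames case-insensitive column collisions before ASA deserializes the record. The core transform might look like this sketch:

```python
def dedupe_columns(record):
    """Keep the first occurrence of each case-insensitive column name and
    rename later collisions with a numeric suffix so ASA sees unique names."""
    seen = {}
    out = {}
    for key, value in record.items():
        folded = key.lower()
        if folded not in seen:
            seen[folded] = 1
            out[key] = value
        else:
            seen[folded] += 1
            out[f"{key}_{seen[folded]}"] = value
    return out

# Hypothetical record reproducing the errorInfo / ErrorInfo collision:
row = {"errorInfo": "transient", "ErrorInfo": "status=502", "time": "2023-08-02T00:00:00Z"}
clean = dedupe_columns(row)  # ErrorInfo becomes ErrorInfo_2
```

Dropping the second occurrence instead of renaming it would also satisfy ASA, but renaming preserves both values in case they turn out to differ.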
Azure Alert ITSM ServiceNow Connector Payload not appearing in ticket description
Hello, I'm trying to create ServiceNow tickets from an Azure alert rule in a Log Analytics workspace for machine learning job failures, using an ITSM-connector-based action group. ServiceNow tickets are being generated, but the payload passed is not appearing in the ticket description under the <-- Log Entry --> section, as shown in the screenshot below. I have gone through the documentation but couldn't find an exact reference addressing this issue. It would be great if you could provide suggestions or exact references in the documentation. Please let me know if you need any additional input. Thanks & Regards, Siva Kumar
Siva_Kumar_menta, Jun 02, 2023
IoT Hub Distributed Tracing
Hi, I have been following this guide: https://learn.microsoft.com/en-us/azure/iot-hub/iot-hub-distributed-tracing. I have done everything, and messages are being sent with tracestates, but I am not receiving any logs in my container or Log Analytics workspace. I get logs for other things, like connections, but not distributed tracing logs. What could the issue be? Thanks
Log Analytics dropping logs if the application shuts down immediately after
In my Micronaut application, there is an error condition where I write the error to the logs and then force the application to shut down. The shutdown happens as expected, but I do not see the error log that was fired just before the shutdown. Is it possible it was lost? Is it related to latency? My understanding is that even with the latency, I should still see the error log.
JohnOldman, Apr 05, 2023
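Buffered or batched records are a common cause of exactly this symptom: the record is emitted but the process exits before the handler flushes it. Explicitly flushing and closing handlers before exit rules that side out. A minimal sketch using the standard-library logger (exporters that batch telemetry, as the Azure agents do, generally need their own flush call as well, which is an assumption to verify against the agent you use):

```python
import logging

logger = logging.getLogger("app")
handler = logging.FileHandler("app.log")  # stand-in for the real sink
logger.addHandler(handler)

try:
    logger.error("fatal condition, shutting down")
finally:
    # Flush and close every handler before the process exits; without this,
    # buffered records can be lost on an abrupt shutdown.
    logging.shutdown()
```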
Advanced hunting query for pulling browser extension details and email address
Hello, I have created a query which pulls out users with the LastPass extension in the Edge browser, but I'm not able to get email details from the LoggedOnUsers column.

```kql
DeviceTvmBrowserExtensions
| join DeviceInfo on DeviceId
| where ExtensionName like "LastPass"
| summarize TotalDevices=dcount(DeviceName), ExtensionOn = dcountif(DeviceId, IsActivated == "true") by BrowserName, ExtensionName, ExtensionRisk, ExtensionId, LoggedOnUsers, DeviceName
| sort by ExtensionName asc
| mv-expand todynamic(LoggedOnUsers)
| where BrowserName == @"edge"
| join kind=leftouter (
    IdentityInfo
    | where EmailAddress != ""
    | project emailaddress = AccountUpn, Department
    | distinct emailaddress
) on emailaddress
| summarize emailaddress = makeset(Department), Accounts = makeset(AccountName) by BrowserName
```

I want to link the email address to the LoggedOnUsers column. The first part works and I can pull user information out, but as soon as I add the join it stops working.
am7861700, Jan 10, 2023
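In KQL, `join ... on emailaddress` needs a column named exactly `emailaddress` on both sides, and the left-hand table above never projects one; extracting the UPN out of `LoggedOnUsers` into a matching column first is the usual fix. The leftouter-join semantics can be sketched in plain Python (names and sample data are hypothetical):

```python
def left_outer_join(left, right, key):
    """Minimal left-outer join: every left row survives; a matching right
    row contributes its extra columns, like KQL's `join kind=leftouter`."""
    index = {r[key]: r for r in right if key in r}
    joined = []
    for row in left:
        match = index.get(row.get(key))
        joined.append({**row, **(match or {})})
    return joined

# The left side must carry the join key under the SAME name as the right
# side; deriving it from LoggedOnUsers first mirrors the extend/project
# step the KQL query is missing.
devices = [{"DeviceName": "pc-01", "emailaddress": "alice@contoso.com"}]
identities = [{"emailaddress": "alice@contoso.com", "Department": "IT"}]
result = left_outer_join(devices, identities, "emailaddress")
```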
VM Availability Metric
For the VM availability metric, the documentation says: "The metric allows you to track the pulse of your VMs—during expected behavior, the metric displays a value of 1. In response to any VM availability disruptions, the metric dips to a 0 for the duration of impact." I believe the pulse refers to the heartbeat, which is either 1 or 0. However, if I have a reading of 0.25 for a time granularity of 5 minutes, how should I interpret the data? In addition, since the pulse tracks the availability of the VM, I am wondering how the number could become 0.25 when I did not stop the VM. Thanks
alex14112022, Nov 13, 2022
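A likely reading, assuming the chart uses the average aggregation: the underlying signal is a per-minute 0/1 sample, and at a coarser time grain the portal shows the average of the samples in the window, so a fractional value means the VM reported available for only part of that window. A disruption does not require you to stop the VM; host-level impact such as a platform reboot can drive samples to 0. In miniature:

```python
# Per-minute availability samples inside one aggregation window
# (hypothetical values): available for 1 of 4 samples.
samples = [1, 0, 0, 0]

# With average aggregation, the displayed metric is the mean of the
# 0/1 samples, so partial-window outages yield fractional values.
availability = sum(samples) / len(samples)  # 0.25
```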
Permission denied
Hi all, could anyone explain why I can't access /etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf? It keeps coming up as Permission Denied. I need to access this to check CEF scripts. Thank you!
cfulbrook, Nov 01, 2022
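Agent configuration files under /etc/opt/microsoft/omsagent are typically owned by root or the agent user and not world-readable, so reading them as an ordinary user usually requires sudo (an assumption worth confirming on your box). A quick way to see why access is denied is to inspect the permission bits; this sketch uses a hypothetical demo file rather than the real path:

```python
import os
import stat

def describe_access(path):
    """Return the rwx permission string plus owner uid/gid, so you can see
    which users are allowed to read the file."""
    st = os.stat(path)
    mode = stat.filemode(st.st_mode)  # e.g. '-rw-------'
    return mode, st.st_uid, st.st_gid

# Hypothetical stand-in: a file readable only by its owner, like the
# agent's conf files commonly are.
with open("demo.conf", "w") as f:
    f.write("sample")
os.chmod("demo.conf", 0o600)
mode, uid, gid = describe_access("demo.conf")
```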
Missing Operation ID and name
We'd like to customize our log format, so we have a method to get the logger. However, when we use the logger, we find that operation_Name and operation_Id disappear from the query results. We'd like to know why they disappear and how to make them show up again. Here's the sample code to get the logger:

```python
import json
import logging

FORMAT = json.dumps(
    {
        'loggerName': '%(name)s',
        'funcName': '%(funcName)s',
        'message': '%(message)s',
        'asctime': '%(asctime)s',
        'level': '%(levelname)s',
    }
)

def get_customized_json_logger(logger_name):
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.INFO)
    log_handler = logging.StreamHandler()
    log_handler.setLevel(logging.INFO)
    formatter = logging.Formatter(FORMAT)
    log_handler.setFormatter(formatter)
    logger.addHandler(log_handler)
    return logger
```

Lucas915, Oct 05, 2022
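One plausible cause, assuming the Application Insights telemetry handler is attached to the logger hierarchy by the runtime: adding a fresh StreamHandler routes records through plain stderr, bypassing the handler that stamps operation_Id and operation_Name onto telemetry. A sketch of an alternative that only swaps the formatter on whatever handlers already exist (the function name is mine, and whether this preserves the IDs depends on how your runtime wires its handler):

```python
import json
import logging

FORMAT = json.dumps({
    'loggerName': '%(name)s',
    'funcName': '%(funcName)s',
    'message': '%(message)s',
    'asctime': '%(asctime)s',
    'level': '%(levelname)s',
})

def reformat_existing_handlers(logger_name):
    """Reuse the handlers already attached (e.g. a telemetry handler that
    enriches records) and only change their output format, instead of
    adding a separate StreamHandler that bypasses them."""
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.INFO)
    # Fall back to the root logger's handlers if this logger has none.
    target = logger.handlers or logging.getLogger().handlers
    for handler in target:
        handler.setFormatter(logging.Formatter(FORMAT))
    return logger
```

Separately, note that building the format string with json.dumps yields invalid JSON whenever the message itself contains quotes; a JSON-aware formatter that serializes the record fields is safer.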
Tags
- azure monitor (1,093 Topics)
- Azure Log Analytics (401 Topics)
- Query Language (247 Topics)
- Log Analytics (63 Topics)
- Custom Logs and Custom Fields (18 Topics)
- solutions (17 Topics)
- Metrics (15 Topics)
- workbooks (14 Topics)
- alerts (14 Topics)
- application insights (13 Topics)