Analytics
Network Monitoring
Hi, I recently applied Network Security Groups (NSGs) to my Virtual Networks. My question: is it possible to monitor or record the network traffic? For example, I've configured many rules on the NSG, and now an application on a server won't work; my first guess is that the NSG is blocking the communication. How do I find out which port the application is using so I can add a matching rule to the NSG? I know that when you already know the port, you can check it in Network Watcher under "IP flow verify and NSG diagnostics" as a what-if check. Traffic Analytics doesn't seem to be the right answer either, or am I seeing it wrong? VNet flow logs should be the right thing. I configured them with Traffic Analytics and a storage account, and applied them to a NIC for testing, but I don't see anything practical for my use case. All I want is to see, live or logged, whether the NSG blocked anything, so I can troubleshoot.
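One hedged pointer: once VNet flow logs feed Traffic Analytics, denied flows become queryable in the Log Analytics workspace, which is exactly the "which port was blocked" view described above. A sketch of the kind of KQL involved — Traffic Analytics lands VNet flow log data in the NTANetAnalytics table, but treat the column names as assumptions and verify them against your workspace schema:

// Assumed schema: verify column names against NTANetAnalytics in your workspace.
NTANetAnalytics
| where TimeGenerated > ago(1h)
| where FlowStatus == "Denied"          // flows a security rule dropped
| project TimeGenerated, SrcIp, DestIp, DestPort, L4Protocol, AclRule
| order by TimeGenerated desc

DestPort on the denied rows is the port the application tried to use, and AclRule names the NSG rule that blocked it.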
Unable to process AAS model connecting to Azure SQL with Service Account

Hello, I have built a demo SSAS model that I am hosting on an Azure Analysis Services server. The model connects to an Azure SQL database in my tenant (the database is the default AdventureWorks provided by Azure when creating your first DB). To connect to Azure SQL, I created an app registration (service principal) and granted it reader access to my Azure SQL DB. If I log in to the Azure SQL DB from SSMS with this account, using Microsoft Entra service principal authentication with ClientId@TenantId as the username and the secret value as the password, I am able to log in and SELECT from the tables. However, when I try to process the SSAS model, I get an error. For reference, below is the TMSL script that sets the data source part of the model after deployment via YAML pipelines (variables are replaced at run time). I think the issue lies in the "AuthenticationKind" value I have provided in the credential, but I can't figure out what to use. When I create the data source like this and process, I get the error: Failed to save modifications to the server. Error returned: '<ccon>Windows authentication has been disabled in the current context.</ccon>'. I don't understand why, since I am not using the Windows authentication kind. Every other keyword I tried in the "AuthenticationKind" field returns the error "AuthenticationKind not supported". Any help on how to change this script would be useful.

{
  "createOrReplace": {
    "object": {
      "database": "$(AAS_DATABASE)",
      "dataSource": "$(AZSQLDataSourceName)"
    },
    "dataSource": {
      "type": "structured",
      "name": "$(AZSQLDataSourceName)",
      "connectionDetails": {
        "protocol": "tds",
        "address": {
          "server": "$(AZSQLServer)"
        },
        "initialCatalog": "$(AZSQLDatabase)"
      },
      "credential": {
        "AuthenticationKind": "ServiceAccount",
        "username": "$(AZSQL_CLIENT_ID)@$(AZSQL_TENANT_ID)",
        "password": "$(AZSQL_CLIENT_SECRET)"
      }
    }
  }
}
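A hedged suggestion, not a confirmed fix: structured data sources accept only a limited set of credential kinds, and "ServiceAccount" is not among them. One commonly cited workaround is to present the service principal as a username/password credential using the app:<clientId>@<tenantId> username convention. The exact field set below is an assumption to validate against your server, not a documented guarantee:

"credential": {
  "AuthenticationKind": "UsernamePassword",
  "kind": "SQL",
  "path": "$(AZSQLServer);$(AZSQLDatabase)",
  "Username": "app:$(AZSQL_CLIENT_ID)@$(AZSQL_TENANT_ID)",
  "Password": "$(AZSQL_CLIENT_SECRET)",
  "EncryptConnection": true
}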
From Doubt to Victory: How I Passed Microsoft SC-200

Hey everyone! I wanted to share my journey of how I went from doubting my chances to successfully passing the Microsoft SC-200 exam. At first, the idea of taking the SC-200 seemed overwhelming. With so many topics to cover, especially with the integration of Microsoft security technologies, I wasn't sure if I could pull it off. But after months of studying and staying consistent, I finally passed! 🎉 Here's what worked for me:

Study Plan: I created a structured study schedule and stuck to it. I broke down each section of the exam objectives and allocated time for each part.

Authentic Exam Questions: I used it-examstest for practice exams. Their realistic test format helped me get a good grasp of the exam pattern. Plus, the explanations for the answers were super helpful in understanding the concepts.

Practice Exams: I did multiple mock tests. Honestly, they helped me more than I expected! They boosted my confidence, and I could pinpoint areas where I needed to improve.

SC-200 Study Materials: I relied on a combination of online courses, books, and video resources. Watching the study videos and taking notes helped me retain the information better.

Don't Cram: I didn't leave things to the last minute. It took me about 2-3 months of consistent study to get comfortable with the material. I made sure to take breaks and not burn myself out.

Passing this exam felt amazing! If you're in the same boat and feeling uncertain, just stick with it! It's a challenging exam, but with the right tools and preparation, you can do it. Keep pushing forward, and good luck to everyone! 💪 Would be happy to answer any questions if anyone has them!
How to Create an AI Model for Streaming Data

A Practical Guide with Microsoft Fabric, Kafka and MLFlow

Intro

In today's digital landscape, the ability to detect and respond to threats in real-time isn't just a luxury—it's a necessity. Imagine building a system that can analyze thousands of user interactions per second, identifying potential phishing attempts before they impact your users. While this may sound complex, Microsoft Fabric makes it possible, even with streaming data. Let's explore how. In this hands-on guide, I'll walk you through creating an end-to-end AI solution that processes streaming data from Kafka and employs machine learning for real-time threat detection. We'll leverage Microsoft Fabric's comprehensive suite of tools to build, train, and deploy an AI model that works seamlessly with streaming data.

Why This Matters

Before we dive into the technical details, let's explore the key advantages of this approach: real-time detection, proactive protection, and the ability to adapt to emerging threats.

Real-Time Processing: Traditional batch processing isn't enough in today's fast-paced threat landscape. We need immediate insights.
Scalability: With Microsoft Fabric's distributed computing capabilities, our solution can handle enterprise-scale data volumes.
Integration: By combining streaming data processing with AI, we create a system that's both intelligent and responsive.

What We'll Build

I've created a practical demonstration that showcases how to:

Ingest streaming data from Kafka using Microsoft Fabric's Eventhouse
Clean and prepare data in real-time using PySpark
Train and evaluate an AI model for phishing detection
Deploy the model for real-time predictions
Store and analyze results for continuous improvement

The best part? Everything stays within the Microsoft Fabric ecosystem, making deployment and maintenance straightforward.

Azure Event Hub

Start by creating an Event Hub namespace and a new Event Hub. Azure Event Hubs expose Kafka endpoints, so they are ready to receive streaming data out of the box. Create a new Shared Access Signature and use it with the Python producer I created below. You can adapt the payload constructor to your own scenario.
import uuid
import random
import time
from confluent_kafka import Producer

# Kafka configuration for Azure Event Hub
config = {
    'bootstrap.servers': 'streamiot-dev1.servicebus.windows.net:9093',
    'sasl.mechanisms': 'PLAIN',
    'security.protocol': 'SASL_SSL',
    'sasl.username': '$ConnectionString',
    'sasl.password': 'Endpoint=sb://<replacewithyourendpoint>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=xxxxxxx',
}

# Create a Kafka producer
producer = Producer(config)

# Shadow traffic generation
def generate_shadow_payload():
    """Generates a shadow traffic payload."""
    subscriber_id = str(uuid.uuid4())
    # Weighted choice for subscriberData: mostly names, occasionally URLs
    if random.choices([True, False], weights=[5, 1])[0]:
        subscriber_data = f"{random.choice(['John', 'Mark', 'Alex', 'Gordon', 'Silia', 'Jane', 'Alice', 'Bob'])} {random.choice(['Doe', 'White', 'Blue', 'Green', 'Beck', 'Rogers', 'Fergs', 'Coolio', 'Hanks', 'Oliver', 'Smith', 'Brown'])}"
    else:
        subscriber_data = f"https://{random.choice(['example.com', 'examplez.com', 'testz.com', 'samplez.com', 'testsite.com', 'mysite.org'])}"
    return {
        "subscriberId": subscriber_id,
        "subscriberData": subscriber_data,
    }

# Delivery report callback
def delivery_report(err, msg):
    """Callback for delivery reports."""
    if err is not None:
        print(f"Message delivery failed: {err}")
    else:
        print(f"Message delivered to {msg.topic()} [partition {msg.partition()}] at offset {msg.offset()}")

# Topic configuration
topic = 'streamio-events1'

# Simulate shadow traffic generation and sending to Kafka
try:
    print("Starting shadow traffic simulation. Press Ctrl+C to stop.")
    while True:
        # Generate payload
        payload = generate_shadow_payload()
        # Send payload to Kafka
        producer.produce(
            topic=topic,
            key=str(payload["subscriberId"]),
            value=str(payload),
            callback=delivery_report
        )
        # Throttle messages (1500 ms)
        producer.flush()  # Ensure messages are sent before throttling
        time.sleep(1.5)
except KeyboardInterrupt:
    print("\nSimulation stopped.")
finally:
    producer.flush()

You can run this from your workstation, an Azure Function, or whatever fits your case.

Architecture Deep Dive: The Three-Layer Approach

When building AI-powered streaming solutions, thinking in layers helps manage complexity. Let's break down our architecture into three distinct layers.

Bronze Layer: Raw Streaming Data Ingestion

At the foundation of our solution lies the raw data ingestion layer. Here's where our streaming story begins:

A web service generates JSON payloads containing subscriber data
These events flow through Kafka endpoints
Data arrives as structured JSON with key fields like subscriberId, subscriberData, and timestamps
Microsoft Fabric's Eventstream captures this raw streaming data, providing a reliable foundation for our ML pipeline, and stores the payloads in Eventhouse (a sketch of a matching destination table follows below)
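For readers recreating this, the Eventstream destination needs a table to land in. The original walkthrough doesn't show its creation; a minimal KQL sketch, assuming the SampleData table name that the training notebook queries later:

// Assumed destination table for the Eventstream output; adjust names/types to your payload.
.create table SampleData (subscriberId: string, subscriberData: string)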
Silver Layer: The Intelligence Hub

This is where the magic happens. Our silver layer transforms raw data into actionable insights:

The EventHouse KQL database stores and manages our streaming data
Our ML model, trained using PySpark's RandomForest classifier, processes the data
SynapseML's Predict API enables seamless model deployment
A dedicated pipeline applies our ML model to detect potential phishing attempts
Results are stored in Lakehouse Delta Tables for immediate access

Gold Layer: Business Value Delivery

The final layer focuses on making our insights accessible and actionable:

Lakehouse tables store cleaned, processed data
Semantic models transform our predictions into business-friendly formats
Power BI dashboards provide real-time visibility into phishing detection
Real-time dashboards enable immediate response to potential threats

The Power of Real-Time ML for Streaming Data

What makes this architecture particularly powerful is its ability to:

Process data in real-time as it streams in
Apply sophisticated ML models without batch processing delays
Provide immediate visibility into potential threats
Scale automatically as data volumes grow

Implementing the Machine Learning Pipeline

Let's dive into how we built and deployed our phishing detection model using Microsoft Fabric's ML capabilities. What makes this implementation particularly interesting is how it combines traditional ML with streaming data processing.

Building the ML Foundation

First, let's look at how we structured the training phase of our machine learning pipeline using PySpark.

Training Notebook: connect to Eventhouse and load the data.

from pyspark.sql import SparkSession

# Initialize Spark session (already set up in Fabric Notebooks)
spark = SparkSession.builder.getOrCreate()

# Define connection details
kustoQuery = """
SampleData
| project subscriberId, subscriberData, ingestion_time()
"""  # Replace with your desired KQL query
kustoUri = "https://<eventhousedbUri>.z9.kusto.fabric.microsoft.com"  # Replace with your Kusto cluster URI
database = "Eventhouse"  # Replace with your Kusto database name

# Fetch the access token for authentication
accessToken = mssparkutils.credentials.getToken(kustoUri)

# Read data from Kusto using Spark
df = spark.read \
    .format("com.microsoft.kusto.spark.synapse.datasource") \
    .option("accessToken", accessToken) \
    .option("kustoCluster", kustoUri) \
    .option("kustoDatabase", database) \
    .option("kustoQuery", kustoQuery) \
    .load()

# Show the loaded data
print("Loaded data:")
df.show()

Separate and flag the phishing payloads with Spark:

from pyspark.sql.functions import col, expr, when, udf
from urllib.parse import urlparse

# Define a UDF (User Defined Function) to extract the domain
def extract_domain(url):
    if url.startswith('http'):
        return urlparse(url).netloc
    return None

# Register the UDF with Spark
extract_domain_udf = udf(extract_domain)

# Feature engineering with Spark
df = df.withColumn("is_url", col("subscriberData").startswith("http")) \
    .withColumn("domain", extract_domain_udf(col("subscriberData"))) \
    .withColumn("is_phishing", col("is_url"))

# Show the transformed data
df.show()
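One optional sanity check before training, which the original walkthrough skips: the generator emits far more names than URLs, so the labels are skewed, and it is worth eyeballing the split before fitting the classifier.

# Quick class-balance check; heavily imbalanced labels can make
# accuracy alone a misleading metric for the RandomForest below.
df.groupBy("is_phishing").count().show()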
Use Spark MLlib to train the model, then evaluate it:

from pyspark.sql.functions import col
from pyspark.ml.feature import Tokenizer, HashingTF, IDF
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml import Pipeline
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

# Ensure the label column is of type double
df = df.withColumn("is_phishing", col("is_phishing").cast("double"))

# Tokenizer to break text into words
tokenizer = Tokenizer(inputCol="subscriberData", outputCol="words")

# Convert words to raw features using hashing
hashingTF = HashingTF(inputCol="words", outputCol="rawFeatures", numFeatures=100)

# Compute the term frequency-inverse document frequency (TF-IDF)
idf = IDF(inputCol="rawFeatures", outputCol="features")

# Random Forest Classifier
rf = RandomForestClassifier(labelCol="is_phishing", featuresCol="features", numTrees=10)

# Build the ML pipeline
pipeline = Pipeline(stages=[tokenizer, hashingTF, idf, rf])

# Split the dataset into training and testing sets
train_data, test_data = df.randomSplit([0.7, 0.3], seed=42)

# Train the model
model = pipeline.fit(train_data)

# Make predictions on the test data
predictions = model.transform(test_data)

# Evaluate the model's accuracy
evaluator = MulticlassClassificationEvaluator(
    labelCol="is_phishing", predictionCol="prediction", metricName="accuracy"
)
accuracy = evaluator.evaluate(predictions)

# Output the accuracy
print(f"Model Accuracy: {accuracy}")

Add a signature to the AI model:

from mlflow.models.signature import infer_signature
from pyspark.sql import Row

# Select a sample for inferring signature
sample_data = train_data.limit(10).toPandas()

# Create a Pandas DataFrame for schema inference
input_sample = sample_data[["subscriberData"]]  # Input column(s)
output_sample = model.transform(train_data.limit(10)).select("prediction").toPandas()

# Infer the signature
signature = infer_signature(input_sample, output_sample)

Run, publish the model, and log the accuracy metric:

import mlflow
from mlflow import spark

# Start an MLflow run
with mlflow.start_run() as run:
    # Log the Spark MLlib model with the signature
    mlflow.spark.log_model(
        spark_model=model,
        artifact_path="phishing_detector",
        registered_model_name="PhishingDetector",
        signature=signature  # Add the inferred signature
    )
    # Log metrics like accuracy
    mlflow.log_metric("accuracy", accuracy)
    print(f"Model logged successfully under run ID: {run.info.run_id}")

Results and Impact

Our implementation achieved:

81.8% accuracy in phishing detection
Sub-second prediction times for streaming data
Scalable processing of thousands of events per second
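Accuracy on its own can flatter a skewed dataset like this one, so it may be worth logging a few more metrics. A small sketch of my own, not part of the original notebook, reusing the same evaluator API:

# Weighted precision/recall and F1 give a fuller picture than accuracy alone.
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

for metric in ["weightedPrecision", "weightedRecall", "f1"]:
    value = MulticlassClassificationEvaluator(
        labelCol="is_phishing", predictionCol="prediction", metricName=metric
    ).evaluate(predictions)
    print(f"{metric}: {value:.3f}")
    mlflow.log_metric(metric, value)  # assumes the MLflow run is still active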
Yes, that's a good start! Now let's continue by explaining the deployment and operation phase of our ML solution.

From Model to Production: Automating the ML Pipeline

After training our model, the next crucial step is operationalizing it for real-time use. We've implemented one pipeline with two activities that process our streaming data every 5 minutes.

All Streaming Data Notebook:

# Main prediction snippet
from synapse.ml.predict import MLFlowTransformer

# Apply ML model for phishing detection
model = MLFlowTransformer(
    inputCols=["subscriberData"],
    outputCol="predictions",
    modelName="PhishingDetector",
    modelVersion=3
)

# Transform and save all predictions
df_with_predictions = model.transform(df)
df_with_predictions.write.format('delta').mode("append").save("Tables/phishing_predictions")

Clean Streaming Data Notebook:

from pyspark.sql.functions import col

# Filter for non-phishing data only
non_phishing_df = df_with_predictions.filter(col("predictions") == 0)

# Save clean data for business analysis
non_phishing_df.write.format("delta").mode("append").save("Tables/clean_data")

Creating Business Value

What makes this architecture particularly powerful is the seamless transition from ML predictions to business insights:

Delta Lake Integration:
  All predictions are stored in Delta format, ensuring ACID compliance
  Enables time travel and data versioning
  Perfect for creating semantic models

Real-Time Processing:
  The 5-minute refresh cycle ensures near real-time threat detection
  Automatic segregation of clean vs. suspicious data
  Immediate visibility into potential threats

Business Intelligence Ready:
  Delta tables are directly compatible with semantic modeling
  Power BI can connect to these tables for live reporting
  Enables both historical analysis and real-time monitoring

The Power of Semantic Models

With our data now organized in Delta tables, we're ready for:

Creating dimensional models for better analysis
Building real-time dashboards
Generating automated reports
Setting up alerts for security teams

Real-Time Visualization Capabilities

While Microsoft Fabric offers extensive visualization capabilities through Power BI, it's worth highlighting one particularly powerful feature: direct KQL querying for real-time monitoring. Here's a glimpse of how simple yet powerful this can be:

SampleData
| where EventProcessedUtcTime > ago(1m)  // Fetch rows processed in the last 1 minute
| project subscriberId, subscriberData, EventProcessedUtcTime

This simple KQL query, when integrated into a dashboard, provides near real-time visibility into your streaming data with sub-minute latency.
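To turn that into a chart rather than a table, the same source supports aggregation and rendering. A small sketch of my own along the same lines:

// Event volume per minute over the last half hour, rendered as a live chart.
SampleData
| where EventProcessedUtcTime > ago(30m)
| summarize Events = count() by bin(EventProcessedUtcTime, 1m)
| render timechart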
The visualization possibilities are extensive, but that's a topic for another day.

Conclusion: Bringing It All Together

What we've built here is more than just a machine learning model – it's a complete, production-ready system that:

Ingests and processes streaming data in real-time
Applies sophisticated ML algorithms for threat detection
Automatically segregates clean from suspicious data
Provides immediate visibility into potential threats

The real power of Microsoft Fabric lies in how it seamlessly integrates these different components. From data ingestion through Eventhouse and Lakehouse, to ML model training and deployment, to real-time monitoring – everything works together in a unified platform.

What's Next?

While we've focused on phishing detection, this architecture can be adapted for various use cases:

Fraud detection in financial transactions
Quality control in manufacturing
Customer behavior analysis
Anomaly detection in IoT devices

The possibilities are endless with our imagination and creativity! Stay tuned for the Git repo where all the code will be shared!

References

Get Started with Microsoft Fabric
Delta Lake in Fabric
Overview of Eventhouse
CloudBlogger: A guide to innovative Apps with MS Fabric
Former Employer Abuse

My former employer, Albert Williams, president of American Security Force Inc., keeps adding my Outlook accounts, computers, and mobile devices to the company's Azure cloud even though I left the company more than a year ago. What can I do to remove myself from his grip? Does Microsoft have a solution against abusive employers?
Creating Logic App to Identify Low Storage Devices from Intune

Hello everyone, I'm seeking some assistance with creating a Logic App. I need to identify devices in Intune that have 5 GB or less of available space, and then receive an email listing those devices, including their names. Is this achievable?
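It should be: the usual pattern is a Recurrence trigger, an HTTP action calling Microsoft Graph (authenticated with the Logic App's managed identity, granted DeviceManagementManagedDevices.Read.All), a Filter array action, and a send-email action. A hedged sketch of the Graph call and the filter — freeStorageSpaceInBytes is the relevant managedDevice property, and 5 GB is 5368709120 bytes; filtering client-side is the safer choice since server-side $filter support on this property isn't guaranteed:

GET https://graph.microsoft.com/v1.0/deviceManagement/managedDevices?$select=deviceName,freeStorageSpaceInBytes

Filter array condition (Logic Apps expression):
@lessOrEquals(item()?['freeStorageSpaceInBytes'], 5368709120)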
Azure - PowerShell script to change the Table Retention in Azure Log Analytics Workspaces

With a large-scale Azure implementation, Log Analytics workspace volume can grow quickly, and the default retention is quite long if you never change it. This PowerShell script resets the two retention values applied to workspace tables (interactive and total). I apply a selection criterion based on the workspace name, since we use a naming convention that encodes status (prod vs. nonprod); adapt this part to your own context.

#Install-Module -Name Az -Repository PSGallery -Force
Import-Module Az
Connect-AzAccount

$RetentionDays = 30
$TotalRetentionDays = 30
$AzureRetentionDays = 90
$AzureTotalRetentionDays = 90
$namecriteria = "nonprod"

$All_Az_Subscriptions = Get-AzSubscription
Foreach ($Az_Subscription in $All_Az_Subscriptions) {
    # Set the context
    Write-Host "Working on subscription ""$($Az_Subscription.Name)"""
    Set-AzContext -SubscriptionObject $Az_Subscription | Out-Null
    $AllWorkspaces = Get-AzOperationalInsightsWorkspace
    foreach ($myWorkspace in $AllWorkspaces) {
        Write-Host " ---------------", $myWorkspace.Name, "---------------- " -ForegroundColor "Gray"
        if ($myWorkspace.Name -match $namecriteria) {
            Write-Host " >>> WORKSPACE TO APPLY RETENTION ADJUSTMENT:", $myWorkspace.Name -ForegroundColor "Green"
            if ($myWorkspace.retentionInDays -gt $RetentionDays) {
                Write-Host " >>> APPLYING DEFAULT RETENTION PERIOD:", $RetentionDays -ForegroundColor "Yellow"
                Set-AzOperationalInsightsWorkspace -ResourceGroupName $myWorkspace.ResourceGroupName -Name $myWorkspace.Name -RetentionInDays $RetentionDays
            }
            $GetAllTables = Get-AzOperationalInsightsTable -ResourceGroupName $myWorkspace.ResourceGroupName -WorkspaceName $myWorkspace.Name
            foreach ($MyTable in $GetAllTables) {
                if (($MyTable.Name -eq "AzureActivity") -or ($MyTable.Name -eq "Usage")) {
                    if (($MyTable.RetentionInDays -gt $AzureRetentionDays) -or ($MyTable.TotalRetentionInDays -gt $AzureTotalRetentionDays)) {
                        Write-Host " >>> APPLYING SPECIFIC RETENTION PERIOD:", $AzureRetentionDays, "- TABLE:", $MyTable.Name -ForegroundColor "Yellow"
                        Update-AzOperationalInsightsTable -ResourceGroupName $MyTable.ResourceGroupName -WorkspaceName $MyTable.WorkspaceName -TableName $MyTable.Name -RetentionInDays $AzureRetentionDays -TotalRetentionInDays $AzureTotalRetentionDays
                    } else {
                        Write-Host " >>> NO CHANGE FOR RETENTION PERIOD FOR TABLE:", $MyTable.Name -ForegroundColor "Green"
                    }
                } else {
                    if (($MyTable.RetentionInDays -gt $RetentionDays) -or ($MyTable.TotalRetentionInDays -gt $RetentionDays)) {
                        Write-Host " >>> APPLYING NEW RETENTION PERIOD:", $RetentionDays, "- TABLE:", $MyTable.Name -ForegroundColor "Yellow"
                        Update-AzOperationalInsightsTable -ResourceGroupName $MyTable.ResourceGroupName -WorkspaceName $MyTable.WorkspaceName -TableName $MyTable.Name -RetentionInDays $RetentionDays -TotalRetentionInDays $TotalRetentionDays
                    } else {
                        Write-Host " >>> NO CHANGE FOR RETENTION PERIOD FOR TABLE:", $MyTable.Name -ForegroundColor "Green"
                    }
                }
            }
        } else {
            Write-Host " >>> WORKSPACE NOT CONCERNED BY THIS CHANGE:", $myWorkspace.Name -ForegroundColor "Green"
        }
    }
}

With this script, we reduced the workspace cost for nonprod drastically, keeping only the last 30 days interactive without any archive.
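To verify the result after a run, the same cmdlet the script already uses can dump the per-table values; the resource group and workspace names below are placeholders:

# Spot-check the applied retention for one workspace (placeholder names).
Get-AzOperationalInsightsTable -ResourceGroupName "rg-monitor-nonprod" -WorkspaceName "law-nonprod-01" |
    Select-Object Name, RetentionInDays, TotalRetentionInDays |
    Sort-Object Name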
The material used for this script is:

https://learn.microsoft.com/en-us/azure/azure-monitor/logs/data-retention-archive?tabs=portal-3%2Cportal-1%2Cportal-2
https://learn.microsoft.com/en-us/powershell/module/az.operationalinsights/get-azoperationalinsightsworkspace?view=azps-11.6.0
https://learn.microsoft.com/en-us/powershell/module/az.operationalinsights/update-azoperationalinsightstable?view=azps-11.6.0

Fabrice Romelard
Querying tables from different Log Analytics Workspace

Dear specialists, when using "workspace("GUID").tablename" I can get information from a table in a different workspace. However, when I extend it with "tablename | where ColumnName ...", it says that "ColumnName" cannot be found. Any idea how to work with tables in other LAWs while keeping the normal KQL functionality? Thank you in advance! 🙂 Ruben
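For reference, the cross-workspace pattern normally composes with any downstream operator, and KQL identifiers are case-sensitive, so a mismatched column-name case is a common cause of "cannot be found" errors. A sketch with placeholder GUIDs and a standard table:

// Placeholder GUIDs and table; any operators can follow the workspace() reference.
workspace("00000000-0000-0000-0000-000000000000").Heartbeat
| where Computer startswith "vm-"
| summarize LastHeartbeat = max(TimeGenerated) by Computer

// The same table from two workspaces, combined:
union
    workspace("00000000-0000-0000-0000-000000000000").Heartbeat,
    workspace("11111111-1111-1111-1111-111111111111").Heartbeat
| where TimeGenerated > ago(1h)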
How to write kusto query to display resource type like in Azure

When I go to the Azure portal to display the resources in a resource group, the Type column shows friendly names such as "App Service". I ran the following Kusto query in Azure Resource Graph Explorer to obtain the resources in the subscription:

resources
| join kind=inner (
    resourcecontainers
    | where type == 'microsoft.resources/subscriptions'
) on $left.subscriptionId == $right.subscriptionId
| project id, name, type, location, resourceGroup, name1
| where resourceGroup == 'wp-production-dynamicsnavaddons'
| order by name asc

The results (exported into a CSV file) display the type as values like "microsoft.web/sites". I can assume that "microsoft.web/sites" from Kusto is "App Service" in the portal, but we have a lot of resources in Azure and I don't want to match them one by one. Is there a way in a Kusto query to display the type as it appears in the portal ("App Service" instead of "microsoft.web/sites")? Jason
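For what it's worth, Resource Graph doesn't expose the portal's friendly display names as a column, so a common workaround is a hand-maintained case() mapping. A sketch with a few illustrative mappings (extend the list for the types you actually use):

resources
| where resourceGroup == 'wp-production-dynamicsnavaddons'
| extend typeDisplay = case(
    type == 'microsoft.web/sites', 'App Service',
    type == 'microsoft.web/serverfarms', 'App Service plan',
    type == 'microsoft.compute/virtualmachines', 'Virtual machine',
    type == 'microsoft.storage/storageaccounts', 'Storage account',
    type == 'microsoft.keyvault/vaults', 'Key vault',
    type)  // fall back to the raw type string
| project name, type, typeDisplay, location
| order by name asc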