General
Allow for the expansion of physical disks in a storage pool
Allow for the expansion of physical disks in a storage pool, just like LVM on Linux. This would be useful in virtual environments and in Azure, where you need to expand the physical representation of the virtual disk file, be it a VMDK or VHDX. This is already possible with other modern logical volume managers. It should be straightforward to implement since the metadata is already there: a PowerShell cmdlet to rescan physical disks could rebuild the metadata on the pool. I have a Feedback Hub suggestion open on it, which I would appreciate you upvoting: https://aka.ms/AAy4ihw

Building a Custom Continuous Export Pipeline for Azure Application Insights
1. Introduction

MonitorLiftApp is a modular Python application designed to export telemetry data from Azure Application Insights to custom sinks such as Azure Data Lake Storage Gen2 (ADLS). This solution is ideal when Event Hub integration is not feasible, providing a flexible, maintainable, and scalable alternative for organizations needing to move telemetry data for analytics, compliance, or integration scenarios.

2. Problem Statement

Exporting telemetry from Azure Application Insights is commonly achieved via Event Hub streaming. However, there are scenarios where Event Hub integration is not possible due to cost, complexity, or architectural constraints. In such cases, organizations need a reliable, incremental, and customizable way to extract telemetry data and move it to a destination of their choice, such as ADLS, SQL, or REST APIs.

3. Investigation

Why Not Event Hub?

- Event Hub integration may not be available in all environments.
- Some organizations have security or cost concerns.
- Custom sinks (like ADLS) may be required for downstream analytics or compliance.

Alternative Approach

- Use the Azure Monitor Query SDK to periodically export data.
- Build a modular, pluggable Python app for maintainability and extensibility.

4. Solution

4.1 Architecture Overview

MonitorLiftApp is structured into four main components:

- Configuration: Centralizes all settings and credentials.
- Main Application: Orchestrates the export process.
- Query Execution: Runs KQL queries and serializes results.
- Sink Abstraction: Allows easy swapping of data targets (e.g., ADLS, SQL).

4.2 Configuration (app_config.py)

All configuration is centralized in app_config.py, making it easy to adapt the app to different environments.

```python
CONFIG = {
    "APPINSIGHTS_APP_ID": "<your-app-id>",
    "APPINSIGHTS_WORKSPACE_ID": "<your-workspace-id>",
    "STORAGE_ACCOUNT_URL": "<your-adls-url>",
    "CONTAINER_NAME": "<your-container>",
    "Dependencies_KQL": "dependencies \n limit 10000",
    "Exceptions_KQL": "exceptions \n limit 10000",
    "Pages_KQL": "pageViews \n limit 10000",
    "Requests_KQL": "requests \n limit 10000",
    "Traces_KQL": "traces \n limit 10000",
    "START_STOP_MINUTES": 5,
    "TIMER_MINUTES": 5,
    "CLIENT_ID": "<your-client-id>",
    "CLIENT_SECRET": "<your-client-secret>",
    "TENANT_ID": "<your-tenant-id>"
}
```

Explanation: This configuration file contains all the necessary parameters for connecting to Azure resources, defining KQL queries, and scheduling the export job. By centralizing these settings, the app becomes easy to maintain and adapt.
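Because app_config.py holds the service principal secret in plain text, a wrapper script could inject secrets from environment variables before the app starts. The sketch below is illustrative only and not part of MonitorLiftApp; the MONITORLIFT_* variable names are assumptions.

```python
import os

from app_config import CONFIG

# Hypothetical hardening step: prefer environment variables for secrets when set,
# falling back to the placeholder values in app_config.py.
for key in ("CLIENT_ID", "CLIENT_SECRET", "TENANT_ID"):
    CONFIG[key] = os.environ.get(f"MONITORLIFT_{key}", CONFIG[key])
```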
4.3 Main Application (main.py)

The main application is the entry point and can be run as a Python console app. It loads the configuration, sets up credentials, and runs the export job on a schedule.

```python
from app_config import CONFIG
from azure.identity import ClientSecretCredential
from monitorlift.query_runner import run_all_queries
from monitorlift.target_repository import ADLSTargetRepository


def build_env():
    env = {}
    keys = [
        "APPINSIGHTS_WORKSPACE_ID", "APPINSIGHTS_APP_ID",
        "STORAGE_ACCOUNT_URL", "CONTAINER_NAME",
        # Scheduling settings used by the timer loop and the query window.
        "TIMER_MINUTES", "START_STOP_MINUTES"
    ]
    for k in keys:
        env[k] = CONFIG[k]
    for k, v in CONFIG.items():
        if k.endswith("KQL"):
            env[k] = v
    return env


class MonitorLiftApp:
    def __init__(self, client_id, client_secret, tenant_id):
        self.env = build_env()
        self.credential = ClientSecretCredential(tenant_id, client_id, client_secret)
        self.target_repo = ADLSTargetRepository(
            account_url=self.env["STORAGE_ACCOUNT_URL"],
            container_name=self.env["CONTAINER_NAME"],
            cred=self.credential
        )

    def run(self):
        run_all_queries(self.target_repo, self.credential, self.env)


if __name__ == "__main__":
    import time

    client_id = CONFIG["CLIENT_ID"]
    client_secret = CONFIG["CLIENT_SECRET"]
    tenant_id = CONFIG["TENANT_ID"]
    app = MonitorLiftApp(client_id, client_secret, tenant_id)
    timer_interval = app.env.get("TIMER_MINUTES", 5)
    print(f"Starting continuous export job. Interval: {timer_interval} minutes.")
    while True:
        print("\n[INFO] Running export job at", time.strftime('%Y-%m-%d %H:%M:%S'))
        try:
            app.run()
            print("[INFO] Export complete.")
        except Exception as e:
            print(f"[ERROR] Export failed: {e}")
        print(f"[INFO] Sleeping for {timer_interval} minutes...")
        time.sleep(timer_interval * 60)
```

Explanation: The app can be run from any machine with Python and the required libraries installed, whether locally or in the cloud (VM, container, etc.). No compilation is needed; just run it as a Python script. Optionally, you can package it as an executable using tools like PyInstaller. The main loop schedules the export job at regular intervals.
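For a quick smoke test before leaving the scheduler running, the exporter can also be driven once from an interactive session. This is a minimal sketch; the monitorliftapp.main module path is an assumption based on the run command shown later, so adjust the import to match your layout.

```python
# Hypothetical one-off run for local testing: perform a single export pass
# with no scheduling loop, using the same credentials from app_config.py.
from app_config import CONFIG
from monitorliftapp.main import MonitorLiftApp  # assumed module path

app = MonitorLiftApp(CONFIG["CLIENT_ID"], CONFIG["CLIENT_SECRET"], CONFIG["TENANT_ID"])
app.run()  # runs every configured *_KQL query once and writes results to the sink
```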
4.4 Query Execution (query_runner.py)

This module orchestrates KQL queries, runs them in parallel, and serializes results.

```python
import datetime
import json
from concurrent.futures import ThreadPoolExecutor, as_completed

from azure.monitor.query import LogsQueryClient


def run_query_for_kql_var(kql_var, target_repo, credential, env):
    query_name = kql_var[:-4]
    print(f"[START] run_query_for_kql_var: {kql_var}")
    query_template = env[kql_var]
    app_id = env["APPINSIGHTS_APP_ID"]
    workspace_id = env["APPINSIGHTS_WORKSPACE_ID"]
    try:
        latest_ts = target_repo.get_latest_timestamp(query_name)
        print(f"Latest timestamp for {query_name}: {latest_ts}")
    except Exception as e:
        print(f"Error getting latest timestamp for {query_name}: {e}")
        return
    start = latest_ts
    time_window = env.get("START_STOP_MINUTES", 5)
    end = start + datetime.timedelta(minutes=time_window)
    query = f"app('{app_id}')." + query_template
    logs_client = LogsQueryClient(credential)
    try:
        response = logs_client.query_workspace(workspace_id, query, timespan=(start, end))
        if response.tables and len(response.tables[0].rows) > 0:
            print(f"Query for {query_name} returned {len(response.tables[0].rows)} rows.")
            table = response.tables[0]
            rows = [
                [v.isoformat() if isinstance(v, datetime.datetime) else v for v in row]
                for row in table.rows
            ]
            result_json = json.dumps({"columns": table.columns, "rows": rows})
            target_repo.save_results(query_name, result_json, start, end)
            print(f"Saved results for {query_name}")
    except Exception as e:
        print(f"Error running query or saving results for {query_name}: {e}")


def run_all_queries(target_repo, credential, env):
    print("[INFO] run_all_queries triggered.")
    kql_vars = [k for k in env if k.endswith('KQL') and not k.startswith('APPSETTING_')]
    print(f"Number of KQL queries to run: {len(kql_vars)}. KQL vars: {kql_vars}")
    with ThreadPoolExecutor(max_workers=len(kql_vars)) as executor:
        futures = {
            executor.submit(run_query_for_kql_var, kql_var, target_repo, credential, env): kql_var
            for kql_var in kql_vars
        }
        for future in as_completed(futures):
            kql_var = futures[future]
            try:
                future.result()
            except Exception as exc:
                print(f"[ERROR] Exception in query {kql_var}: {exc}")
```

Explanation: Queries are executed in parallel for efficiency. Results are serialized and saved to the configured sink. Incremental export is achieved by tracking the latest timestamp for each query.

4.5 Sink Abstraction (target_repository.py)

This module abstracts the sink implementation, allowing you to swap out ADLS for SQL, a REST API, or other targets.

```python
from abc import ABC, abstractmethod
import datetime

from azure.storage.blob import BlobServiceClient


class TargetRepository(ABC):
    @abstractmethod
    def get_latest_timestamp(self, query_name):
        pass

    @abstractmethod
    def save_results(self, query_name, data, start, end):
        pass


class ADLSTargetRepository(TargetRepository):
    def __init__(self, account_url, container_name, cred):
        self.account_url = account_url
        self.container_name = container_name
        self.credential = cred
        self.blob_service_client = BlobServiceClient(account_url=account_url, credential=cred)

    def get_latest_timestamp(self, query_name, fallback_hours=3):
        blob_client = self.blob_service_client.get_blob_client(
            self.container_name, f"{query_name}/latest_timestamp.txt")
        try:
            timestamp_str = blob_client.download_blob().readall().decode()
            return datetime.datetime.fromisoformat(timestamp_str)
        except Exception as e:
            if hasattr(e, 'error_code') and e.error_code == 'BlobNotFound':
                print(f"[INFO] No timestamp blob for {query_name}, starting from {fallback_hours} hours ago.")
            else:
                print(f"[WARNING] Could not get latest timestamp for {query_name}: {type(e).__name__}: {e}")
            return datetime.datetime.utcnow() - datetime.timedelta(hours=fallback_hours)

    def save_results(self, query_name, data, start, end):
        filename = f"{query_name}/{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json"
        blob_client = self.blob_service_client.get_blob_client(self.container_name, filename)
        try:
            blob_client.upload_blob(data, overwrite=True)
            print(f"[SUCCESS] Saved results to blob for {query_name} from {start} to {end}")
        except Exception as e:
            print(f"[ERROR] Failed to save results to blob for {query_name}: {type(e).__name__}: {e}")
        ts_blob_client = self.blob_service_client.get_blob_client(
            self.container_name, f"{query_name}/latest_timestamp.txt")
        try:
            ts_blob_client.upload_blob(end.isoformat(), overwrite=True)
        except Exception as e:
            print(f"[ERROR] Failed to update latest timestamp for {query_name}: {type(e).__name__}: {e}")
```

Explanation: The sink abstraction allows you to easily switch between different storage backends. The ADLS implementation saves both the results and the latest timestamp for incremental exports.
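To illustrate how the sink can be swapped, here is a hedged sketch of an alternative TargetRepository that mirrors the ADLS layout on the local filesystem. This class is not part of MonitorLiftApp; the class name and directory layout are assumptions, but because it implements the same two abstract methods, run_all_queries can use it unchanged.

```python
import datetime
import os

from monitorlift.target_repository import TargetRepository


class LocalFileTargetRepository(TargetRepository):
    """Hypothetical sink that writes query results to a local directory tree."""

    def __init__(self, root_dir):
        self.root_dir = root_dir

    def get_latest_timestamp(self, query_name, fallback_hours=3):
        ts_path = os.path.join(self.root_dir, query_name, "latest_timestamp.txt")
        try:
            with open(ts_path, encoding="utf-8") as f:
                return datetime.datetime.fromisoformat(f.read().strip())
        except FileNotFoundError:
            # First run: start the export window a few hours back.
            return datetime.datetime.utcnow() - datetime.timedelta(hours=fallback_hours)

    def save_results(self, query_name, data, start, end):
        out_dir = os.path.join(self.root_dir, query_name)
        os.makedirs(out_dir, exist_ok=True)
        filename = f"{start:%Y%m%d%H%M}_{end:%Y%m%d%H%M}.json"
        with open(os.path.join(out_dir, filename), "w", encoding="utf-8") as f:
            f.write(data)
        # Record the end of the window so the next run picks up where this one stopped.
        with open(os.path.join(out_dir, "latest_timestamp.txt"), "w", encoding="utf-8") as f:
            f.write(end.isoformat())
```

Pointing the app at a sink like this would be a one-line change where MonitorLiftApp constructs its target repository, which is the main benefit of the abstraction.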
5. End-to-End Setup Guide

Prerequisites

- Python 3.8+
- Azure SDKs: azure-identity, azure-monitor-query, azure-storage-blob
- Access to Azure Application Insights and ADLS
- Service principal credentials with appropriate permissions

Steps

1. Clone or download the repository and place all code files in a working directory.
2. Configure app_config.py: fill in your Azure resource IDs, credentials, and KQL queries.
3. Install dependencies: pip install azure-identity azure-monitor-query azure-storage-blob
4. Run the application locally (python monitorliftapp/main.py) or deploy it to a VM, Azure Container Instance, or as a scheduled job.
5. (Optional) Package as an executable using PyInstaller or similar tools if you want a standalone binary.
6. Verify data movement: check your ADLS container for exported JSON files.

6. Screenshots

App Insights Logs: [screenshot]
Exported Data in ADLS: [screenshot]

7. Final Thoughts

MonitorLiftApp provides a robust, modular, and extensible solution for exporting telemetry from Azure Monitor to custom sinks. Its design supports parallel query execution, pluggable storage backends, and incremental data movement, making it suitable for a wide range of enterprise scenarios.

Windows 365 Watermarking - QR Codes Missing in Screenshots/Teams from Within Session?
Hi all, I've implemented watermarking on our Windows 365 setup using the official Microsoft guide, and I'm seeing behaviour that I'd like to confirm is expected.

Current situation:

- Watermarking is enabled and working (QR codes appear when I screenshot from my local client PC).
- However, when taking screenshots FROM WITHIN the Cloud PC session itself, no QR codes appear.
- Similarly, when screen sharing via Teams from within the Cloud PC session, participants don't see the QR codes.

My question: Is this the intended behaviour? Should QR codes only appear when capturing externally (from the client device) but not when capturing internally (from within the Windows 365 session itself)? I've read through the Microsoft documentation but can't find explicit clarification on whether internal screenshots should show watermarks or whether the protection is designed specifically for external capture attempts.

Can anyone confirm this behaviour or point me to official documentation that explains the internal vs external capture distinction? Thanks in advance!

Cannot Access Windows Hardware Developer Program in Partner Center — How to Sign Drivers in 2025?
Hi all, I'm trying to sign a Windows driver and need access to the Microsoft Windows Hardware Developer Program.

**What I'm trying to achieve:**
- Sign a driver for Windows using the standard Microsoft hardware signing process.

**The issue:**
- When I try to register for the Windows Hardware Developer Program, I get a message saying "Hardware Program is already in Active state".
- However, when I go to Programs > Settings in Microsoft Partner Center, the Hardware Developer Program is NOT visible/available.
- I have Global Admin permissions, and I've also tried using an account with Owner permissions; no difference, the Hardware Program is missing from the list.

**My question:**
- How do I get access to the Windows Hardware Developer Program if it's "Active" but not visible in Partner Center?
- Is there any way to manage or join the Hardware Program in 2025 if it's not listed?
- Is there an alternative process for signing Windows drivers now?

Any up-to-date guidance for 2025 would be super helpful. Any advice or escalation contacts would be highly appreciated! Thanks in advance.

AAD join Server 2025
Hi, I'm wondering if Server 2025 can be AAD joined. This would help businesses that already have their laptops joined and would also like the option to join their servers for their line-of-business apps, etc. It seems really strange that you can have Windows 11 AAD joined but not Server 2025; or am I just missing something here? Having to use Azure Arc comes with extra headaches and costs.

In-place upgrade possibility planned for Windows Server 2025 Datacenter Azure Edition?
There is currently no official ISO for Windows Server Datacenter: Azure Edition that supports setup.exe /auto upgrade for in-place upgrades. Azure Update Manager does not support OS version upgrades for Azure Edition through optional features. Is anyone aware of a supported workaround?

Help us shape the future of Windows Server Previews
Feedback window extended through September 23, 2025.

Hello Server Insiders! Your feedback is vital in helping us understand your needs and preferences with our preview programs. We invite you to participate in our survey designed to help us assess interest in validating servicing update (LCU) previews for Windows Server. Your participation is greatly appreciated and will help shape the future of Windows Server preview offerings. We will not ask for your personal information, and your responses will contribute directly to the development of Windows Server preview programs. Please share your valuable insights before September 23, 2025.

Survey Link | Privacy Statement

Thank you for your interest in collaborating with Microsoft!

Microphone & camera passthrough to Cloud PC from MacBook
I have an M1 MacBook Pro that I use to connect to my Cloud PC for work via the Microsoft app. I try to use the Teams app installed locally on the Cloud PC for making and accepting calls, but I am having a lot of audio issues. First of all, the person I am calling sounds very tinny (kind of like a chipmunk!) and they cannot hear me. Video doesn't seem to work properly either; I have had very little luck with my external webcam (some Logitech one, I don't actually know the model, but I don't think it makes a difference? It has a microphone). On the odd occasion a call works fine, but then after a few seconds, or when I start sharing my screen on the Cloud PC a minute into the call, I start to experience the same audio issues described above. I am running Sequoia 15.6, and the Mac Windows App is version 11.1.5 (2585). Admittedly I've mostly tried in clamshell mode and connected to external earphones (AirPods Pro). I used to use my earphones via the Citrix app on VDI with no issues previously. Any solutions would be gratefully received. Thank you!

Add native postfix to Windows Server
With the removal of the SMTP server feature from Windows Server starting with Windows Server 2025, Microsoft should add Postfix to the server in a similar manner to how SSH was added to Windows Server. The source code is actively maintained: https://github.com/vdukhovni/postfix