User Profile
NKUGAN
MCT
Joined Sep 24, 2018
Recent Discussions
Re: Local Group Policy
It sounds like you're dealing with a tricky Group Policy issue. Here are a few steps you can take to troubleshoot and resolve this problem:

1) Check for conflicting policies: Sometimes multiple Group Policies conflict with each other. Use the Resultant Set of Policy (RSoP) tool or the Group Policy Results Wizard in the Group Policy Management Console (GPMC) to see which policies are being applied and whether there are any conflicts.
2) Refresh Group Policy: Run the gpupdate /force command in the Command Prompt to force a refresh of the Group Policy settings. This can help apply the new settings and remove any lingering old ones. See: https://learn.microsoft.com/en-us/troubleshoot/windows-server/group-policy/applying-group-policy-troubleshooting-guidance
3) Event Viewer logs: Check the Event Viewer for any Group Policy-related errors or warnings. Navigate to Event Viewer > Windows Logs > System and look for entries related to Group Policy. These logs can provide insight into what might be going wrong.
4) Check policy inheritance and enforcement: Ensure that the policies are not being overridden by higher-level policies. Use the GPMC to check the Link Order and Enforcement settings.
5) Security filtering and WMI filtering: Verify that the correct security groups and WMI filters are applied to the policies. Incorrect filtering can prevent policies from being applied to the intended users or computers.
6) Local Group Policy corruption: If the local Group Policy store is corrupted, you might need to reset it. You can do this by deleting the Registry.pol files located in the C:\Windows\System32\GroupPolicy and C:\Windows\System32\GroupPolicyUsers directories and then running gpupdate /force.

If these steps don't resolve the issue, you might want to consider third-party Group Policy management tools, which can provide more advanced troubleshooting and reporting features.
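Before deleting the Registry.pol files in the last step, it's prudent to back them up first so the reset can be undone. A minimal sketch (Python, with the default Windows policy paths assumed; run with administrative rights — nothing here deletes anything):

```python
import shutil
from pathlib import Path

# Folders that hold the local Group Policy state on a default Windows install.
POLICY_DIRS = [
    Path(r"C:\Windows\System32\GroupPolicy"),
    Path(r"C:\Windows\System32\GroupPolicyUsers"),
]

def backup_registry_pol(policy_dirs, backup_dir):
    """Copy every Registry.pol found under the given folders into backup_dir,
    returning the list of backed-up files. Nothing is deleted here."""
    backup_dir = Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    saved = []
    for root in policy_dirs:
        root = Path(root)
        if not root.exists():
            continue
        for pol in root.rglob("Registry.pol"):
            # Flatten the last path components into the file name so copies
            # from different subfolders don't collide.
            target = backup_dir / "_".join(pol.parts[-3:])
            shutil.copy2(pol, target)
            saved.append(target)
    return saved
```

Once you've confirmed the backup, delete the originals and run gpupdate /force as described above.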
Re: Windows Server 2025 Desktop Experience: Hyper-V cannot be installed

It sounds like you need to enable virtualization support in your BIOS settings. Here are the general steps to do that:

1) Restart your computer and enter the BIOS setup. This is usually done by pressing a key like F2, F10, Delete, or Esc during the initial boot screen. The exact key depends on your computer's manufacturer.
2) Navigate to the CPU or processor settings. This might be under a menu like Advanced, Advanced BIOS Features, or CPU Configuration.
3) Enable virtualization. Look for options like Intel VT-x, Intel Virtualization Technology, AMD-V, or SVM Mode and set them to Enabled.
4) Save your changes and exit the BIOS. This is usually done by pressing F10 and confirming the changes.

See: Issues when starting VM or installing Hyper-V - Windows Server | Microsoft Learn

After enabling virtualization in the BIOS, try installing Hyper-V again. If you still encounter issues, make sure your BIOS is up to date, as some older versions might not support the necessary features.
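As a quick pre-check before rebooting into the BIOS, systeminfo on Windows reports whether firmware virtualization is already enabled under its "Hyper-V Requirements" section. A small sketch that parses that output (the sample text below is an assumed example of the report format):

```python
def virtualization_enabled(systeminfo_text):
    """Return True/False if the systeminfo output states whether
    virtualization is enabled in firmware, or None if not reported."""
    for line in systeminfo_text.splitlines():
        if "Virtualization Enabled In Firmware" in line:
            return line.split(":")[-1].strip().lower() == "yes"
    return None

# Illustrative fragment of systeminfo output for testing the parser.
sample = """Hyper-V Requirements:      VM Monitor Mode Extensions: Yes
                           Virtualization Enabled In Firmware: Yes
                           Second Level Address Translation: Yes
                           Data Execution Prevention Available: Yes"""
```

If the line reads "No", the BIOS steps above are needed; if it's missing entirely, Hyper-V may already be running (systeminfo then reports a hypervisor is detected).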
Re: Basic LoadBalancer Upgrade - no outbound rule created

Creating a backend pool for a Standard Load Balancer using all the NICs from an inbound NAT rule, and then creating an outbound rule based on this new backend pool, is a common approach. However, there are a few security considerations to keep in mind:

1) Network Security Groups (NSGs): Ensure that the NSGs associated with your backend pool are properly configured to allow only the necessary traffic. This helps maintain a secure environment by restricting unwanted access.
2) Public IP addresses: If your backend VMs or VMSS instances have instance-level public IP addresses, ensure that these are correctly configured to avoid conflicts or exposure.
3) Monitoring and logging: Enable monitoring and logging to keep track of the traffic and detect any unusual activity. Azure Monitor and Azure Security Center can help with this.
4) Outbound rule configuration: When configuring the outbound rule, make sure to define the SNAT (source network address translation) behavior explicitly. This includes specifying which virtual machines are translated to which public IP addresses, how SNAT ports are allocated, and the protocols to provide outbound translation for.

See: Outbound rules - Azure Load Balancer | Microsoft Learn
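On point 4, when an outbound rule leaves port allocation at the default, Azure assigns SNAT ports to each instance based on backend pool size. The sketch below encodes the default allocation table as documented for Load Balancer (treat it as an assumption and verify against the current outbound-rules docs before relying on it):

```python
# Azure's documented default SNAT port allocation per instance,
# keyed by maximum backend pool size (verify against current docs).
_DEFAULT_ALLOCATION = [
    (50, 1024),
    (100, 512),
    (200, 256),
    (400, 128),
    (800, 64),
    (1000, 32),
]

def default_snat_ports(pool_size):
    """Return the default SNAT ports allocated to each backend instance
    when no explicit port count is configured on the outbound rule."""
    for max_size, ports in _DEFAULT_ALLOCATION:
        if pool_size <= max_size:
            return ports
    raise ValueError("pool sizes above 1000 instances are not in the default table")
```

Because the per-instance allocation shrinks as the pool grows, explicitly sizing SNAT ports in the outbound rule is usually better than relying on the defaults.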
Re: Include asset tags in Azure Monitor alerts

Yes, it is possible to include tags in Azure Monitor alerts by using Azure Logic Apps or Azure Functions to enrich the alert payload with the resource tags. Here's a step-by-step approach to accomplish this:

1) Create the Azure Monitor alert: Set up your alert rule in Azure Monitor based on your criteria.
2) Trigger a Logic App or Azure Function: Configure the alert to trigger an Azure Logic App or an Azure Function. This can be done in the "Actions" section of the alert rule, where you can specify an Action Group that calls a Logic App or Function.
3) Retrieve resource tags: In your Logic App or Azure Function, use the Azure Resource Manager (ARM) REST API to get the tags for the resource. This can be done by making an HTTP request to the ARM endpoint for the resource.
   - For Logic Apps: use the HTTP action to call the ARM REST API.
   - For Azure Functions: use the appropriate SDK or HTTP client to call the ARM REST API.
4) Enrich the alert payload: Extract the tags from the ARM response and append them to the alert payload. A simplified, illustrative example in a Logic App (not a complete workflow definition):

```json
{
  "actions": [
    {
      "call": {
        "method": "GET",
        "uri": "https://management.azure.com/{resourceId}?api-version=2021-04-01",
        "headers": {
          "Authorization": "Bearer {token}"
        }
      },
      "extract": {
        "path": "$.tags"
      },
      "compose": {
        "inputs": {
          "tags": "@body('HTTP')['tags']"
        }
      }
    }
  ]
}
```

5) Notify or act on the enriched alert: Send the enriched alert payload to your desired endpoint (email, SMS, ITSM system, etc.) using the appropriate actions in your Logic App or Azure Function.

By following these steps, you can include the tags configured on the resource in the Azure Monitor alert payload. This approach gives you more context about the resource when an alert is triggered.
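The enrichment step itself is just a merge of the ARM response's tags into the alert body. In an Azure Function it might look like this sketch (the customProperties key is an illustrative choice, not a fixed part of the alert schema):

```python
def enrich_alert_with_tags(alert_payload, arm_resource):
    """Return a copy of the alert payload with the resource's tags attached.

    alert_payload is assumed to follow the common alert schema shape
    ({"data": {"essentials": ...}}); arm_resource is the JSON body returned
    by an ARM GET on the resource, whose top-level "tags" object we copy.
    """
    enriched = dict(alert_payload)
    enriched["data"] = dict(enriched.get("data", {}))
    enriched["data"]["customProperties"] = {
        "tags": arm_resource.get("tags", {})
    }
    return enriched
```

The original payload is left untouched, so the unenriched alert can still be forwarded elsewhere if needed.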
Re: Azure Routing Cross-Region?

To address the issue of IP-based blocking by the vendor's website, you have several potential solutions in Azure:

1) VPN (Virtual Private Network)
A VPN can tunnel traffic from the Cloud PCs in India to the US East region, giving the appearance that the traffic originates from the US.
- Azure VPN Gateway: You can set up a site-to-site VPN or point-to-site VPN to route traffic through the US East region.
- Pros: Quick to set up, reliable, and secure. Cons: Might introduce some latency; can be costly depending on the amount of traffic.

2) Proxy server
A proxy server can route traffic through a different IP address.
- Azure Application Gateway with WAF (Web Application Firewall): Can act as a reverse proxy and route traffic through the US.
- Custom proxy server: You can set up a VM in US East and configure it to act as a proxy for your users.
- Pros: Flexible; can provide caching and additional security. Cons: Requires management of the proxy server; potential single point of failure.

3) Custom routing with a VNet
Custom routing can direct traffic from India through resources in the US.
- Azure route tables (UDRs): Configure user-defined routes to direct traffic through a specific path, such as through a VM or a gateway in US East.
- Pros: Control over routing paths; integrated with Azure networking. Cons: Requires careful configuration and management; potential complexity.

4) Azure Virtual WAN
Azure Virtual WAN can simplify large-scale site-to-site connectivity and provide optimized routing.
- Pros: Centralized management of network connectivity, optimized routing, scalability. Cons: Can be more complex to set up initially; might require adjustments to existing network architecture.

Recommended approach:
- VPN Gateway: Start with setting up a VPN gateway to route traffic through the US East region. This is often the quickest and least disruptive method.
- Proxy server: If a VPN does not meet your needs, consider setting up a proxy server in the US East region.
- Explore Virtual WAN: If your network needs become more complex, or if you plan to scale further, investing time in Azure Virtual WAN can provide long-term benefits.

Here's a high-level outline to set up a VPN gateway:
1) Create a virtual network gateway in US East.
2) Configure the VPN gateway for a point-to-site or site-to-site connection.
3) Set up VPN clients on the Cloud PCs in India to connect through the VPN.
4) Test connectivity to ensure that traffic routes through the US East gateway.

Resources:
- Azure VPN Gateway documentation: https://docs.microsoft.com/en-us/azure/vpn-gateway/
- Azure Application Gateway documentation: https://docs.microsoft.com/en-us/azure/application-gateway/
- Azure route tables documentation: https://docs.microsoft.com/en-us/azure/virtual-network/manage-route-table
- Azure Virtual WAN documentation: https://docs.microsoft.com/en-us/azure/virtual-wan/

If you need more detailed steps or assistance with specific configurations, feel free to ask!
Re: Alert notifications Azure Arc

Setting up alert notifications in Azure Arc, especially for virtual machines (VMs) being patched via Azure Update Manager with Extended Security Updates, is a valuable practice to keep stakeholders informed about the status of their VMs. Here's a step-by-step guide to setting up email notifications for VM update events:

Step 1: Configure Azure Monitor alerts
Azure Arc relies heavily on Azure Monitor, which can be used to set up alert rules based on specific conditions or events.
1) Navigate to Azure Monitor: In the Azure portal, go to Azure Monitor.
2) Create alert rules: Under Alerts, you can create new alert rules. Here, you specify the criteria for when an alert should be triggered, such as when update operations start, succeed, or fail.
3) Specify the conditions: Define the conditions that trigger the alerts. You can use the activity log or metrics to monitor the status of the VM updates. For instance, you can set one alert for when the update process starts and another for when it completes (successfully or with errors).

Step 2: Action groups
An action group in Azure Monitor defines the actions to take when an alert rule is triggered.
1) Create an action group: Under Alerts, find the "Action groups" section. Here you create a new group or use an existing one.
2) Set up email notifications: In the action group, add an action that sends an email. You'll need to specify the email addresses of the individuals who should receive the notifications.

Step 3: Integrate with Azure Update Manager
Since you're using Azure Update Manager for patching, ensure the alerts are configured to track the specific update tasks.
1) Link alerts with update tasks: Make sure the alerts are set to trigger based on the events generated by Azure Update Manager.
2) Test the configuration: It's good practice to test the setup to ensure that the notifications are working as expected.
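If you later decide to pre-filter events in code before notifying (for example, in a small function called by the action group), the decision logic might look like this sketch. The operation-name prefix and field names here are illustrative assumptions; match them against the activity-log events your maintenance runs actually emit:

```python
def should_notify(event, watched_statuses=("Started", "Succeeded", "Failed")):
    """Decide whether an activity-log event looks like a patch-run status
    change worth emailing about. Field names are illustrative placeholders,
    not a fixed schema."""
    operation = event.get("operationName", "").lower()
    return (
        operation.startswith("microsoft.maintenance")
        and event.get("status") in watched_statuses
    )
```

Filtering in code like this keeps the alert rule broad while letting you tune which status transitions actually generate mail.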
Cost considerations:
- Alerts and notifications: Azure Monitor alerts might incur costs based on the number of signals, alert rules, and notifications. Review the Azure Monitor pricing page for detailed information.
- Action groups: Sending emails through action groups might not significantly affect costs, but it's important to confirm this in the pricing details.

Final notes:
- Documentation and support: Azure's documentation and support channels are valuable resources if you encounter specific issues or require more detailed guidance.
- Customization: You can further customize the alerts and notifications based on the needs of your team or organization.

By following these steps, you should be able to set up email notifications for VM patching events in Azure Arc, enhancing the visibility and management of your VMs' update status.
Re: Azure Storage Usage Availability report

Creating a "usage and availability" report for Azure File Storage using Kusto Query Language (KQL) involves several steps, starting from activating diagnostics to writing and executing the query in Azure Monitor or Azure Log Analytics.

1) Enable diagnostic settings: It seems you've already activated diagnostics for Azure File Storage. Ensure it's configured to send data to Azure Monitor Logs (a Log Analytics workspace).
2) Access Log Analytics: Navigate to your Azure Log Analytics workspace where your storage account diagnostics data is sent.
3) Write your KQL query: To retrieve a usage and availability report, you'll need to query the right tables and fields. Azure File Storage diagnostics can include metrics such as TotalRequests, TotalIngress, TotalEgress, SuccessE2ELatency, and ServerLatency, among others. Here's a basic structure of how you might start your KQL query:

```kusto
StorageFileLogs
| where TimeGenerated > ago(30d) // Adjust the timeframe as needed
| summarize TotalRequests = sum(TotalRequests),
            TotalIngress = sum(TotalIngress),
            TotalEgress = sum(TotalEgress)
  by bin(TimeGenerated, 1d), AccountName, ShareName
| order by TimeGenerated desc
```

This example aggregates total requests, ingress, and egress over the last 30 days, grouped by day, account name, and share name. You might need to adjust the fields and aggregation based on which specific metrics are relevant for your "usage and availability" report.

4) Run the query and analyze: Execute the query in the Log Analytics workspace. You can analyze the results directly there or export them to a tool like Power BI for more advanced visualizations.
5) Refine your query: Depending on your needs, you might want to refine the query further. For example, you could focus on specific time windows, filter for particular file shares, or look into other metrics like availability, latency, or error rates.
6) Automation: If you need this report regularly, consider automating the process.
Azure offers options to automate the execution of KQL queries and the delivery of reports, such as Azure Logic Apps or Automation runbooks.

Remember, the exact query will depend on the specific metrics and dimensions that are relevant to your usage and availability needs. You might also want to consult the Azure documentation for the latest schema details of StorageFileLogs or any other relevant tables in your Log Analytics workspace.
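If you extend the query toward error rates, availability over a period is typically derived as the share of requests that did not fail. The arithmetic is trivial but worth pinning down once, e.g. in the tool that post-processes the exported results; a minimal sketch:

```python
def availability_pct(total_requests, failed_requests):
    """Availability as the percentage of requests that did not fail,
    mirroring how storage availability metrics are commonly derived."""
    if total_requests == 0:
        return 100.0  # no traffic: conventionally reported as fully available
    return round(100.0 * (total_requests - failed_requests) / total_requests, 2)
```

The same ratio can of course be computed directly in KQL with a countif over failed status codes divided by a count of all requests.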
Re: global secure access and azure VPN

Integrating Global Secure Access with Azure point-to-site (P2S) VPN can indeed present some challenges, especially if both solutions are being used concurrently on the same devices. The clash typically arises because both services are trying to manage network traffic, which can lead to routing conflicts or issues with DNS resolution, among other potential conflicts. Here are some general strategies you might consider to allow these two services to coexist:

1) Routing configuration: Ensure that the routing tables on the devices are configured to properly handle traffic for both services. This might involve setting up specific routes that direct Azure-related traffic through the P2S VPN and other traffic through Global Secure Access. This approach often involves adjusting the metric values in the routing table so that the preferred routes are chosen based on the destination of the traffic.
2) Split tunneling: If the VPN is set to route all traffic through the Azure network, you might want to consider configuring split tunneling. Split tunneling allows only Azure-specific traffic to go through the VPN, while the rest of the traffic goes directly to the internet or through Global Secure Access. This can reduce conflicts between the two services.
3) DNS resolution: Conflicts may also arise from DNS resolution, where both services try to resolve names differently. Ensuring that DNS queries are properly routed to the correct resolver for the service they are intended for can help mitigate this issue.
4) Whitelisting: If Global Secure Access provides a feature to whitelist certain traffic or destinations, you could configure it to recognize Azure P2S VPN traffic as trusted. This might involve identifying the IP ranges used by Azure and configuring Global Secure Access to allow direct communication for these ranges.
5) Vendor documentation and support: Since both Azure and the provider of Global Secure Access may have specific recommendations or best practices for such configurations, reviewing their documentation or reaching out to their support teams can provide more tailored advice.
6) Test environment: Before rolling out any changes to your production environment, it's beneficial to test your configuration in a controlled setting to ensure that the integration does not disrupt your network traffic.

If these general strategies don't resolve the issue, it could be beneficial to provide more details about the specific problems you're encountering. For instance, are there specific applications or services that are not functioning correctly when both Global Secure Access and Azure P2S VPN are active? Details like these can help in diagnosing the issue and providing more precise solutions.
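The routing-configuration point rests on longest-prefix matching: among the routes whose prefix contains the destination, the most specific one wins, which is how an Azure-specific route can take precedence over a catch-all default. A self-contained sketch of that selection rule (route names are illustrative):

```python
import ipaddress

def pick_route(dest_ip, routes):
    """Longest-prefix-match route selection: among routes whose prefix
    contains dest_ip, the most specific (longest) prefix wins. This is
    why a 10.0.0.0/8 VPN route beats a 0.0.0.0/0 default route."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [
        (ipaddress.ip_network(prefix), next_hop)
        for prefix, next_hop in routes
        if ip in ipaddress.ip_network(prefix)
    ]
    if not matches:
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

Publishing only the Azure-bound prefixes to the VPN (split tunneling) and leaving the default route to Global Secure Access is the same idea expressed in each product's configuration.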
Re: Public Azure Container Registry: what is default access?

When you create a public Azure Container Registry (ACR), it means that the registry is accessible over the internet, but it doesn't automatically mean that everyone can access the content (images, artifacts) stored within the registry without proper authentication and authorization. Even though your ACR is public and the firewall is not enabled, Azure still enforces authentication and authorization to access the content within the ACR.

Here's how access is managed, in a nutshell:
1) Authentication: Users or services need to authenticate to interact with the ACR. This can be done using various methods, such as Azure Active Directory (AAD) integration, admin accounts (not recommended for production), or managed identities for Azure resources.
2) Authorization: After authentication, what the user or service can do (read, write, delete) is determined by their role assignments. Azure uses role-based access control (RBAC) to manage these permissions. There are several built-in roles for ACR, like AcrPull (pull images), AcrPush (push and pull images), and Owner (full access).

Since you want to pull images from the registry to an Azure Container App using RBAC and a managed identity, you should assign the appropriate RBAC role to the managed identity. Typically, the AcrPull role is enough if you just need to pull images.

Here's how you can secure your ACR:
- Assign roles: Assign the AcrPull role to the managed identity associated with your Azure Container App. This allows the app to pull images from the ACR without granting broader access.
- Access reviews: Regularly review who has access to your ACR and what permissions they have. Remove unnecessary permissions to minimize potential attack vectors.
- Monitor logs: Use Azure Monitor to keep an eye on the activities in your ACR. This can help you detect any unauthorized access attempts.

Regarding the analogy of the door on street level: yes, the door is there, and it's visible, but it's locked by default. Only those with the correct key (in this case, proper authentication and authorization) can open the door and access the contents.

To directly address your concern about accessing the registry URL from a browser: it's expected that you cannot access the registry's content via a browser without proper authentication. Even though the endpoint is public, you need to use the Azure CLI, Docker CLI, or another appropriate client, along with proper authentication, to access the registry contents.
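As a conceptual sketch of the authorization step: each built-in role grants a set of actions, and a request succeeds only if some assigned role grants the requested action. The role-to-permission mapping below is a deliberate simplification for illustration, not the real Azure RBAC action strings:

```python
# Illustrative mapping from built-in ACR roles to what they permit;
# real roles are defined by Azure RBAC action strings, not these labels.
ROLE_PERMISSIONS = {
    "AcrPull": {"pull"},
    "AcrPush": {"pull", "push"},
    "Owner":   {"pull", "push", "delete", "manage"},
}

def is_allowed(assigned_roles, action):
    """True if any of the principal's role assignments grants the action."""
    return any(action in ROLE_PERMISSIONS.get(role, set()) for role in assigned_roles)
```

This is why AcrPull is the right fit for the Container App's managed identity: it grants pull and nothing more.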
Re: How to improve performance of azure timer trigger function

Improving the performance of an Azure timer trigger function, especially when it's handling a high volume of events, requires a strategic approach that optimizes various aspects of the application. Here are several strategies you can consider:

1) Scale-out and concurrency: The Azure Functions Consumption plan automatically scales out, but there are limits. Ensure your function can efficiently handle parallel executions. You might want to review the maxConcurrentCalls and maxOutstandingRequests settings if you're not already doing so.
2) Batch processing: You're already batching requests before sending them to the Log Analytics custom tables, which is great. Consider whether there's room to optimize the batch size or the batching logic itself, to balance memory usage against the number of network calls.
3) Efficient code: Review your function's code to ensure it's as efficient as possible. Look for any potential bottlenecks or inefficient loops and operations, especially those that might not scale linearly with the number of events.
4) Connection reuse: Ensure you're reusing connections wherever possible, particularly in the context of AWS SQS and S3. Creating new connections for each request can significantly impact performance.
5) Optimize memory usage: High memory usage can lead to increased garbage collection, which can impact performance. Ensure your function is using memory efficiently, particularly with respect to how it accumulates events in memory.
6) Premium plan consideration: Moving to a Premium plan can provide enhanced performance due to better compute options and the ability to keep instances warm, reducing cold start times. However, this should be a consideration only after optimizing within the Consumption plan as much as possible.
7) Monitoring and diagnostics: Utilize Azure Monitor and Application Insights to get detailed insights into your function's performance. Look for patterns that indicate performance degradation and focus on those areas for optimization.
8) Parallel processing: If your logic allows, consider processing multiple files or batches in parallel. Azure Functions supports asynchronous execution, which can be leveraged to process multiple tasks concurrently.
9) Function app splitting: If there's a logical separation in the processing steps, consider splitting the function app into multiple smaller functions. This can allow for more granular scaling and can isolate performance bottlenecks.
10) Networking considerations: Since your Azure Function interacts with AWS services, network latency can be a factor. Evaluate the network path and see if there are optimizations, such as using Azure ExpressRoute or optimizing how data is transferred between Azure and AWS.
11) Cold start mitigation: In the Consumption plan, cold starts can affect performance, especially under scaling scenarios. While this is less of an issue in the Premium plan due to pre-warmed instances, in the Consumption plan, optimizing for cold start times is crucial.

Before moving to the Premium plan, exhaust the optimization possibilities within the Consumption plan. The switch should be considered when you're certain that the limitations are not due to the application design but due to the inherent constraints of the Consumption plan. Often, significant improvements can be achieved through optimization without incurring the additional cost of a higher-tier plan.
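For the batch-processing point, the core constraint can be sketched as a function that caps each batch by both item count and total payload bytes, so every upload stays under the ingestion limits. The limit values here are placeholders; check the actual limits of the Logs Ingestion API you're targeting:

```python
def make_batches(events, max_count=500, max_bytes=1_000_000):
    """Group serialized events (bytes) into batches capped by both item
    count and total payload size. An event larger than max_bytes ends up
    in a batch of its own rather than being dropped."""
    batches, current, current_size = [], [], 0
    for event in events:
        size = len(event)
        # Flush the current batch if adding this event would breach a cap.
        if current and (len(current) >= max_count or current_size + size > max_bytes):
            batches.append(current)
            current, current_size = [], 0
        current.append(event)
        current_size += size
    if current:
        batches.append(current)
    return batches
```

Tuning max_count and max_bytes against real payload sizes is usually where the memory-vs-network-calls balance mentioned above gets decided.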
Re: MFA registration with Conditional access rules enabled

To resolve this, you need to adjust your Conditional Access policies to allow new users to register for MFA from untrusted locations. Here's a step-by-step approach to resolve this issue:

1) Temporary access for registration: Create a temporary Conditional Access policy, or modify the existing one, to allow users to register for MFA from untrusted locations. This policy should be specifically targeted at users who have not completed their MFA registration. You can use the "Users and groups" condition to target these users specifically.
2) Use a one-time bypass: Depending on the specifics of your MFA setup, you might be able to issue a one-time bypass code for MFA. This allows the user to bypass MFA temporarily to set it up properly.
3) Trusted device or location: Another option is to allow MFA registration from a trusted device or location. For instance, you could have a policy that allows users to register for MFA when they're connected to your network via VPN or from a specific device.
4) Grace period: Some MFA solutions offer a grace period for new users, during which they can complete their MFA registration. You can check if your system has such a feature and enable it.
5) Role-based conditions: If possible, apply the MFA requirement based on roles. New users could be assigned a temporary role that does not require MFA until they have it set up.
6) Communication and support: Inform new users of the MFA registration process and provide them with clear instructions. Make sure they know whom to contact for support if they run into issues.
7) Testing: Always test your Conditional Access policies to ensure they work as intended without locking out legitimate users.

After the user has registered for MFA, you can revert the changes to your Conditional Access policies, maintaining the security posture you desire.
Remember, the specific steps might vary based on the specifics of your setup (like the MFA solution you're using), but the general approach should help you resolve this issue.
Re: One of our offices is not able to connect to Azure resources all of a sudden

Raising a ticket with Microsoft, especially for issues that aren't related to your specific Azure instance settings, can be navigated through a few channels. If you've encountered a scenario where your issue isn't related to technical settings within your Azure portal but still requires attention, here's how you can proceed:

1) Azure support plans: Azure provides different levels of support plans, including Developer, Standard, Professional Direct, and Premier. If you're seeing prompts for paid technical support, it likely means your current plan doesn't include the level of support you're seeking. However, if you believe your issue isn't related to the technical aspects of your service but is perhaps a billing or account issue, you should still be able to get assistance without a technical support plan.
2) Microsoft Q&A and Azure forums: Before escalating to a support ticket, consider using Microsoft Q&A or the Azure forums. These platforms are monitored by Microsoft employees, MVPs, and community members and can be helpful for a wide range of issues.
3) Billing and subscription management support: If your issue is related to billing or subscription management, you should be able to get support without a paid technical support plan. Azure offers free support for billing and subscription-related inquiries.
4) Azure Service Health: If your issue is related to a service outage or performance degradation, check Azure Service Health in the Azure portal. This dashboard provides a personalized view of the health of your Azure services and regions, along with any incidents that might be impacting your resources.
5) Sales and licensing support: If your inquiry is related to sales or licensing, you can typically get support without a technical support plan. Microsoft offers resources and contact options for sales and licensing inquiries.
6) Contact Microsoft support directly: If the above options don't suit your needs, you can try contacting Microsoft support directly through their general support page. There are options for both technical and non-technical support, and you can specify that your issue is not related to an Azure service setting.

When you raise a ticket, be as detailed as possible about your issue, specifying that it's not related to the technical settings of your Azure instance but rather a broader concern or question. This clarification can help direct your query to the right support team within Microsoft.
Re: Azure VM - Best Practice to associate a Public IP to an Internal VM

1) Best practice for associating a public IP with a private-network VM: The best practice for associating a public IP address with a virtual machine (VM) in a private network is to ensure that the public IP is not directly assigned to the VM. Instead, use a network device like a load balancer or a NAT gateway. This approach provides an additional layer of security, as the VMs are not exposed directly to the internet. For example, in Azure, you can assign the public IP to a load balancer and then configure the load balancer to forward traffic to the private IP of the VM within the virtual network.

2) Associating a public IP with a new NIC vs. an existing NIC with a private IP:
- New NIC: Adding a new network interface card (NIC) with a public IP can be a good approach if you want to segregate traffic. For instance, you might use one NIC for internal traffic (with a private IP) and another for external traffic (with a public IP). However, this can add complexity and might not be necessary depending on your architecture and security requirements.
- Existing NIC: Associating a public IP with an existing NIC that already has a private IP is a common practice. It simplifies the network configuration and is sufficient for most scenarios. However, direct exposure of VMs to the internet should be avoided for security reasons, and access should be controlled through firewalls or other security appliances.

3) Changes to Azure Firewall for enabling a public IP: If you're planning to enable a public IP on a resource behind an Azure Firewall, you might need to configure DNAT (Destination Network Address Translation) rules on the firewall to allow inbound traffic to reach the VM. The firewall will need to know how to route the traffic coming to the public IP to the correct private IP in the virtual network.
Additionally, ensure that your network security groups (NSGs) and firewall rules are properly configured to allow the necessary inbound and outbound traffic while maintaining security.
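Conceptually, the DNAT rule in point 3 is a lookup from (public IP, public port) to a private destination. A minimal sketch of that mapping (the addresses are documentation-range examples, not anything in your environment):

```python
# Example DNAT rule table (illustrative addresses only).
# (firewall public IP, public port) -> (private IP, private port)
EXAMPLE_RULES = {
    ("203.0.113.10", 443): ("10.0.1.4", 443),
    ("203.0.113.10", 2222): ("10.0.1.5", 22),
}

def dnat_translate(rules, public_ip, port):
    """Return the (private_ip, private_port) a matching DNAT rule forwards
    to, or None when no rule matches and the traffic is denied."""
    return rules.get((public_ip, port))
```

The second example rule shows the common pattern of exposing SSH on a non-standard public port while translating to port 22 internally.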
Re: OMS Agent troubleshooting

Yes, there are several ways to troubleshoot the connectivity between the OMS Agent (installed on a Linux server) and the Azure Log Analytics workspace:

1) Verify that the OMS Agent is running: Check the status of the agent by running service omsagent status on the Linux server. This will tell you if the agent is currently running and if there are any errors.
2) Check the agent log files: The OMS Agent logs its activity in the /var/opt/microsoft/omsagent/<workspace_id>/log/omsagent.log file. Check this file for any errors or warnings that might be preventing the agent from connecting to the workspace.
3) Verify that the workspace ID and key are correct: Make sure that the workspace ID and key specified in the /etc/opt/microsoft/omsagent/<workspace_id>/conf/omsagent.conf file are correct. You can verify this by checking the agent configuration in the Azure Log Analytics workspace.
4) Check the firewall configuration: Make sure that the Linux server has the correct firewall rules to allow outbound traffic to the Azure Log Analytics service on port 443 and ports 12000-12099.
5) Use telnet to test connectivity: Try to telnet to the Azure Log Analytics service endpoint on port 443 or 12000-12099. If you are able to connect, it indicates that your server is able to reach the service.
6) Verify DNS resolution: Run nslookup on the domain dc.services.visualstudio.com to ensure the server can resolve the endpoints it needs to send logs.
7) Check data center reachability: Telnet to port 443 and 12000-12099 on the Azure data center IP address(es) for your region.

These steps should help you identify and resolve any connectivity issues between the OMS Agent and the Azure Log Analytics workspace.
I also recommend checking the OMS Agent documentation for your specific version for more troubleshooting information.
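The manual telnet checks in steps 5-7 can be scripted. A small sketch using Python's socket module; the commented endpoint name follows the typical per-workspace ingestion endpoint pattern, but substitute your own workspace ID and verify the hosts your agent version actually uses:

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within the
    timeout -- a scriptable stand-in for a manual telnet check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (workspace ID placeholder, as in the config paths above):
# endpoints = [("<workspace_id>.ods.opinsights.azure.com", 443)]
# for host, port in endpoints:
#     print(host, port, "reachable" if can_reach(host, port) else "BLOCKED")
```

Running this from the affected server quickly separates DNS failures (the connection attempt raises a resolution error) from firewall blocks (the attempt times out or is refused).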
Re: Windows Desktop (AVD & Windows365) Client on Thin Client

In Azure Virtual Desktop, you can configure the remote desktop session host to prompt users to sign in again when they open a new session or when they close their current session. This can be done using the following steps:

1. In the Azure portal, navigate to the Azure Virtual Desktop service that you want to configure.
2. Under "Settings," select "Session Hosts."
3. Select the session host pool that you want to configure.
4. Under "Authentication," select "Prompt for credentials on new connection." This prompts users to enter their credentials when they start a new session.
5. Under "Authentication," select "Prompt for credentials on reconnection." This prompts users to enter their credentials when they reconnect to a disconnected session.
6. (Optional) If you want users to sign out and close the current session when they close the client: under "Session," select "End session" and then "Sign out."

Note that this configuration can also be done with a PowerShell script or the Azure API. It applies to the entire session host pool, so all users in the pool will be prompted for credentials when they start a new session or reconnect to a disconnected session.
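For the scripted route mentioned above, credential prompting can be set through the host pool's custom RDP properties. The sketch below builds the Azure CLI call as an inspectable array before running it; the resource names are placeholders, and the command and property names are assumptions you should verify against the current Azure CLI and AVD RDP-property documentation (the desktopvirtualization CLI extension is assumed to be installed).

```shell
#!/usr/bin/env bash
# Hedged sketch: set the host pool to prompt for credentials on the client.
# All names below are placeholders; verify flags against current az docs.
cmd=(az desktopvirtualization hostpool update
     --resource-group myResourceGroup
     --name myHostPool
     --custom-rdp-property "prompt for credentials on client:i:1")

# Inspect the command before running it:
echo "${cmd[*]}"

# Run it when ready (requires an authenticated az session):
# "${cmd[@]}"
```

Building the command as an array first makes it easy to review (or dry-run in CI) before it touches a production host pool.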
Re: VM Connection very often gets disconnected

This error message suggests that there was an unexpected interruption in the connection between the client device and the Windows Virtual Desktop service. This could be caused by a number of factors, such as network connectivity issues, problems with the client device, or issues with the Windows Virtual Desktop service itself. To resolve it, first identify the cause, which may involve checking network connectivity, troubleshooting the client device, or checking the status of the Windows Virtual Desktop service. Depending on the cause, potential solutions include restarting the client device or the network equipment, or contacting Windows Virtual Desktop support for further assistance.
Re: Can I assign a existing private IP to a VM

It is possible to assign the same private IP address to the new VM that the existing VM had, but the specific steps depend on the cloud provider and the resources you are using. Generally speaking, you will need to create a new disk from the snapshot and then create a new VM using that disk. While creating the new VM, you will have the option to assign an IP address to it, and you can assign the same address the existing VM had. However, make sure that the IP address is not already in use by another resource. Additionally, if the snapshot was taken from a VM in a different virtual network, it might not be possible to use the same IP address, because addresses are assigned from that specific network's range. Consult the documentation and support provided by your cloud provider for specific instructions on creating a new VM from a snapshot and assigning it a specific IP address.
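On Azure specifically, the disk-from-snapshot and VM-with-pinned-IP steps can be sketched with the Azure CLI. All resource names and the IP address below are placeholders; check that the address is unused in the target subnet before running.

```shell
#!/usr/bin/env bash
# Hedged Azure sketch: restore a VM from a snapshot and reuse its private IP.

# 1. Create a managed disk from the snapshot.
disk_cmd=(az disk create
          --resource-group myResourceGroup
          --name restoredOsDisk
          --source mySnapshot)

# 2. Create a VM from that disk, pinning the NIC to the old private IP.
#    The IP must be free in the subnet, and the VM must be in the same VNet.
vm_cmd=(az vm create
        --resource-group myResourceGroup
        --name restoredVm
        --attach-os-disk restoredOsDisk
        --os-type Linux
        --private-ip-address 10.0.0.5)

# Inspect before running:
echo "${disk_cmd[*]}"
echo "${vm_cmd[*]}"

# Run when ready (requires an authenticated az session):
# "${disk_cmd[@]}" && "${vm_cmd[@]}"
```

Supplying --private-ip-address at VM creation makes the address a static allocation on the new NIC, which is what prevents DHCP from handing it to another resource later.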
Azure New 16 Built In Roles Available In Preview

Microsoft has announced 16 new built-in roles in Azure AD, including the highly requested Global Reader role, which is now in public preview. Today many daily tasks can only be run by the Global administrator because other administrators lack the rights; these new roles help reduce the number of tasks that require Global administrator privileges. The roles are available globally for all subscriptions.

Global reader is the read-only counterpart to Global administrator. Assign Global reader instead of Global administrator for planning, audits, or investigations. Use Global reader in combination with other limited admin roles, such as Exchange administrator, to get work done without assigning the Global administrator role. Global reader works with the Microsoft 365 admin center, Exchange admin center, Teams admin center, Security center, Compliance center, Azure AD admin center, and Device Management admin center.

The Global reader role currently has a few limitations:
- SharePoint admin center: does not support the Global reader role. You won't see "SharePoint" in the left pane under Admin Centers in the Microsoft 365 admin center.
- OneDrive admin center: does not support the Global reader role.
- Azure AD portal: Global reader can't read the provisioning mode of an enterprise app.
- M365 admin center: Global reader can't read customer lockbox requests. You won't find the Customer lockbox requests tab under Support in the left pane.
- M365 Security center: Global reader can't read sensitivity and retention labels. You won't find the Sensitivity labels, Retention labels, and Label analytics tabs in the left pane.
- Teams admin center: Global reader cannot read Teams lifecycle, Analytics & reports, IP phone device management, and App catalog.
- Privileged Access Management (PAM) doesn't support the Global reader role.
- Azure Information Protection: Global reader is supported for central reporting only, and only when your tenant isn't on the unified labeling platform. These features are currently in development.

The new roles:

Authentication administrator: View, set, and reset authentication method information and passwords for any non-admin user.
Azure DevOps administrator: Manage Azure DevOps organization policy and settings.
B2C user flow administrator: Create and manage all aspects of user flows.
B2C user flow attribute administrator: Create and manage the attribute schema available to all user flows.
B2C IEF Keyset administrator: Manage secrets for federation and encryption in the Identity Experience Framework.
B2C IEF Policy administrator: Create and manage trust framework policies in the Identity Experience Framework.
Compliance data administrator: Create and manage compliance data and alerts.
External Identity Provider administrator: Configure identity providers for use in direct federation.
Global reader: View everything a Global administrator can view, without the ability to edit or change.
Kaizala administrator: Manage settings for Microsoft Kaizala.
Message center privacy reader: Read Message center posts, data privacy messages, groups, domains, and subscriptions.
Password administrator: Reset passwords for non-administrators and Password administrators.
Privileged authentication administrator: View, set, and reset authentication method information for any user (admin or non-admin).
Security operator: Creates and manages security events.
Search administrator: Create and manage all aspects of Microsoft Search settings.
Search editor: Create and manage editorial content such as bookmarks, Q&As, locations, and floorplans.
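To see which directory roles are actually activated in your own tenant (for example, to confirm Global Reader is available), you can query Microsoft Graph. The sketch below uses az rest and builds the call as an inspectable array; it assumes an authenticated Azure CLI session, and the jq filter is illustrative.

```shell
#!/usr/bin/env bash
# Hedged sketch: list the directory roles activated in the tenant via
# Microsoft Graph's /v1.0/directoryRoles endpoint.
cmd=(az rest --method GET
     --url "https://graph.microsoft.com/v1.0/directoryRoles")

# Inspect the call before running it:
echo "${cmd[*]}"

# Run when ready, printing just the role display names:
# "${cmd[@]}" | jq -r '.value[].displayName'
```

Note that /directoryRoles only returns roles that have been activated in the tenant, not the full template catalog.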
Azure Storage Account Larger File Shares

Microsoft has announced the general availability of larger file shares in storage accounts. Azure Files is secure, fully managed public cloud file storage with a full range of data redundancy options and hybrid capabilities using Azure File Sync. All premium file shares are available with 100 TiB capacity. Visit the Azure Files scale limits documentation for more details.

What's new? Large file shares now have:
- The ability to upgrade existing general-purpose storage accounts and existing file shares.
- The ability to opt in to larger file shares at the storage account level instead of the subscription level.
- Expanded regional coverage.
- Support for both locally redundant and zone-redundant storage.
- Improvements in the performance and scale of sync to work better with larger file shares. Visit the Azure File Sync scalability targets to stay informed of the latest scale limits.

New storage account: create a new general-purpose storage account in one of the supported regions with a supported redundancy option. While creating the storage account, go to the Advanced tab and enable the Large file shares feature. See the detailed steps on how to enable large file share support on a new storage account. All new shares created under this account will, by default, have 100 TiB capacity with increased scale.

Existing storage account: on an existing general-purpose storage account in one of the supported regions, go to Configuration, enable the Large file shares feature, and hit Save. You can then update the quota for existing shares under this upgraded account to more than 5 TiB. All new shares created under this upgraded account will, by default, have 100 TiB capacity with increased scale.
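The same two portal paths can be sketched with the Azure CLI: creating a new account with the large-file-share feature enabled, and raising an existing share's quota past 5 TiB. Account and share names are placeholders; commands are built as inspectable arrays.

```shell
#!/usr/bin/env bash
# Hedged sketch: large file shares via the Azure CLI.

# 1. New general-purpose v2 account with large file shares enabled.
create_cmd=(az storage account create
            --name mylargefilesacct
            --resource-group myResourceGroup
            --kind StorageV2
            --sku Standard_LRS
            --enable-large-file-share)

# 2. Raise an existing share's quota. The quota is specified in GiB,
#    so 102400 GiB corresponds to the 100 TiB maximum.
quota_cmd=(az storage share update
           --account-name mylargefilesacct
           --name myshare
           --quota 102400)

# Inspect before running:
echo "${create_cmd[*]}"
echo "${quota_cmd[*]}"

# Run when ready (requires an authenticated az session / account key):
# "${create_cmd[@]}" && "${quota_cmd[@]}"
```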
Publish The Static Website Using Azure Storage

Microsoft has introduced static website hosting, a feature of the storage account. To enable it, select the name of your default file and, optionally, provide a path to a custom 404 page. If a blob container named $web doesn't already exist in the account, one is created for you; add the files of your site to this container.

Create a new Azure Storage account, provide a name, and under Account kind make sure you select General Purpose StorageV2. After the resource is created, go to Settings and select Static website, then select Enabled. Under Index document name type index.html, and under Error document path type 404.html. Click Save, and you'll see a $web folder that you can click on to upload your files. I simply dropped in a single index.html file with some text to test. You'll also want to jot down the Primary endpoint URL, as you'll use it to test your site. Upload your HTML files to the $web folder, then go to your browser and paste in that URL.
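The portal steps above can also be scripted. The sketch below enables static website hosting on an existing account and uploads a local folder to the $web container; the account name and local path are placeholders, and the commands are built as inspectable arrays.

```shell
#!/usr/bin/env bash
# Hedged sketch: enable static website hosting and upload the site.

# 1. Turn on static website hosting with index and 404 documents.
enable_cmd=(az storage blob service-properties update
            --account-name mystaticsiteacct
            --static-website
            --index-document index.html
            --404-document 404.html)

# 2. Upload the site's files from ./site to the $web container.
#    $web is quoted so the shell doesn't expand it as a variable.
upload_cmd=(az storage blob upload-batch
            --account-name mystaticsiteacct
            --source ./site
            --destination '$web')

# Inspect before running:
echo "${enable_cmd[*]}"
echo "${upload_cmd[*]}"

# Run when ready (requires an authenticated az session):
# "${enable_cmd[@]}" && "${upload_cmd[@]}"
```

After the upload, the site is served from the Primary endpoint URL shown in the portal's Static website blade.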