[Wayland] PWAs no longer appear as separate app windows — all group under main Edge icon
Hi,

Since around 2025-09-07 to 2025-09-10, I’ve noticed that on GNOME (Wayland) all installed PWAs (even across different profiles) now appear grouped under the main Edge browser icon in the GNOME Shell dash/taskbar. Previously, each PWA would open in its own window group with its own icon. This is still the behavior in Chromium/Brave/Chrome, and can be restored there by editing the PWA’s .desktop file to set:

StartupWMClass=<same value as Icon>

However, Edge now seems to ignore StartupWMClass completely on Wayland, breaking workspace separation and making task switching hard.

Environment:
- Ubuntu 25.04
- GNOME 48.0 / Mutter (Wayland)
- Kernel 6.14.0-29
- Intel Iris Xe GPU
- Edge (latest stable, observed post 2025-09-13)

Repro steps:
1. Install any PWA (e.g. Outlook, Teams, Spotify) from Edge
2. Launch it (from any profile)
3. Observe that it appears grouped under the main Edge browser icon

Expected: PWA shows under its own icon and window group, like in Chromium-based browsers

Actual:
- All PWAs are bundled into the main Edge window group
- Editing StartupWMClass in the .desktop file no longer helps

This regression makes PWAs much harder to manage on Wayland. Please route to the Linux/Wayland team if possible. Thanks!

Integrating Microsoft Foundry with OpenClaw: Step by Step Model Configuration
Step 1: Deploying Models on Microsoft Foundry

Let us kick things off in the Azure portal. To get our OpenClaw agent thinking like a genius, we need to deploy our models in Microsoft Foundry. For this guide, we are going to focus on deploying gpt-5.2-codex on Microsoft Foundry with OpenClaw. Navigate to your AI Hub, head over to the model catalog, choose the model you wish to use with OpenClaw, and hit deploy. Once your deployment is successful, head to the endpoints section.

Important: Grab your Endpoint URL and your API Keys right now and save them in a secure note. We will need these exact values to connect OpenClaw in a few minutes.

Step 2: Installing and Initializing OpenClaw

Next up, we need to get OpenClaw running on your machine. Open up your terminal and run the official installation script:

curl -fsSL https://openclaw.ai/install.sh | bash

The wizard will walk you through a few prompts. Here is exactly how to answer them to link up with our Azure setup:

- First page (Model Selection): Choose "Skip for now".
- Second page (Provider): Select azure-openai-responses.
- Model Selection: Select gpt-5.2-codex. For now, only the models listed in the picture below (hosted on Microsoft Foundry) are available to be used with OpenClaw.
- Follow the rest of the standard prompts to finish the initial setup.

Step 3: Editing the OpenClaw Configuration File

Now for the fun part. We need to manually configure OpenClaw to talk to Microsoft Foundry. Open your configuration file located at ~/.openclaw/openclaw.json in your favorite text editor.
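Before overwriting the file in the next step, it is worth keeping a copy of the current configuration so you can roll back easily. A minimal sketch, assuming the default install location (the stand-in file creation is only there to make the snippet runnable on a machine without OpenClaw):

```shell
# Back up the existing OpenClaw configuration before editing it.
# The path below is the default location; adjust if yours differs.
config="$HOME/.openclaw/openclaw.json"
backup="$config.bak"

# Stand-in file so this sketch runs even without OpenClaw installed.
mkdir -p "$(dirname "$config")"
[ -f "$config" ] || echo '{}' > "$config"

# Keep a timestamp-preserving copy next to the original.
cp -p "$config" "$backup"
```

If the new settings misbehave, restoring is just a cp in the other direction.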
Replace the contents of the models and agents sections with the following code block:

{
  "models": {
    "providers": {
      "azure-openai-responses": {
        "baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1",
        "apiKey": "<YOUR_AZURE_OPENAI_API_KEY>",
        "api": "openai-responses",
        "authHeader": false,
        "headers": { "api-key": "<YOUR_AZURE_OPENAI_API_KEY>" },
        "models": [
          {
            "id": "gpt-5.2-codex",
            "name": "GPT-5.2-Codex (Azure)",
            "reasoning": true,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 400000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          },
          {
            "id": "gpt-5.2",
            "name": "GPT-5.2 (Azure)",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 272000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "azure-openai-responses/gpt-5.2-codex" },
      "models": { "azure-openai-responses/gpt-5.2-codex": {} },
      "workspace": "/home/<USERNAME>/.openclaw/workspace",
      "compaction": { "mode": "safeguard" },
      "maxConcurrent": 4,
      "subagents": { "maxConcurrent": 8 }
    }
  }
}

You will notice a few placeholders in that JSON. Here is exactly what you need to swap out:

Placeholder Variable | What It Is | Where to Find It
<YOUR_RESOURCE_NAME> | The unique name of your Azure OpenAI resource. | In the Azure Portal, under the Azure OpenAI resource overview.
<YOUR_AZURE_OPENAI_API_KEY> | The secret key required to authenticate your requests. | In Microsoft Foundry under your project endpoints, or in the Azure Portal keys section.
<USERNAME> | Your local computer's user profile name. | Open your terminal and type whoami to find it.

Step 4: Restart the Gateway

After saving the configuration file, you must restart the OpenClaw gateway for the new Foundry settings to take effect.
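If you prefer to script the placeholder swap described above rather than edit by hand, here is a sketch using sed. The resource name and key below are made-up examples, and the snippet works on a stand-in copy so it is safe to try; point it at your real ~/.openclaw/openclaw.json once you are happy with it:

```shell
# Sketch: substitute the three placeholders with sed. The values here are
# hypothetical -- replace them with your own before running on the real file.
resource_name="my-foundry-resource"
api_key="example-api-key"
user_name="$(whoami)"

# A tiny stand-in for the real config so the sketch is self-contained.
cat > /tmp/openclaw.sample.json <<'EOF'
{
  "baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1",
  "apiKey": "<YOUR_AZURE_OPENAI_API_KEY>",
  "headers": { "api-key": "<YOUR_AZURE_OPENAI_API_KEY>" },
  "workspace": "/home/<USERNAME>/.openclaw/workspace"
}
EOF

# The key placeholder appears twice, so each pattern uses the global flag.
sed -i \
  -e "s/<YOUR_RESOURCE_NAME>/$resource_name/g" \
  -e "s/<YOUR_AZURE_OPENAI_API_KEY>/$api_key/g" \
  -e "s/<USERNAME>/$user_name/g" \
  /tmp/openclaw.sample.json

filled=$(cat /tmp/openclaw.sample.json)
```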
Run this simple command:

openclaw gateway restart

Configuration Notes & Deep Dive

If you are curious about why we configured the JSON that way, here is a quick breakdown of the technical details.

Authentication Differences

Azure OpenAI uses the api-key HTTP header for authentication. This is entirely different from the standard OpenAI Authorization: Bearer header. Our configuration file addresses this in two ways:

1. Setting "authHeader": false completely disables the default Bearer header.
2. Adding "headers": { "api-key": "<key>" } forces OpenClaw to send the API key via Azure's native header format.

Important Note: Your API key must appear in both the apiKey field AND the headers.api-key field within the JSON for this to work correctly.

The Base URL

Azure OpenAI's v1-compatible endpoint follows this specific format:

https://<your_resource_name>.openai.azure.com/openai/v1

The beautiful thing about this v1 endpoint is that it is largely compatible with the standard OpenAI API and does not require you to manually pass an api-version query parameter.

Model Compatibility Settings

- "compat": { "supportsStore": false } disables the store parameter, since Azure OpenAI does not currently support it.
- "reasoning": true enables the thinking mode for GPT-5.2-Codex, which supports low, medium, high, and xhigh levels.
- "reasoning": false is set for GPT-5.2 because it is a standard, non-reasoning model.

Model Specifications & Cost Tracking

If you want OpenClaw to accurately track your token usage costs, you can update the cost fields from 0 to the current Azure pricing.
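As a sanity check on what those cost fields feed into, the arithmetic is simply tokens divided by one million, times the per-million rate. A quick sketch using the gpt-5.2-codex rates quoted in the pricing table of this article ($1.75 input, $14.00 output per 1M tokens); the token counts are made-up examples:

```shell
# Estimate one request's cost from token counts and per-1M-token rates.
# Rates are the gpt-5.2-codex prices from the table in this article;
# the token counts are illustrative. awk handles the floating point.
input_tokens=250000
output_tokens=40000

cost=$(awk -v i="$input_tokens" -v o="$output_tokens" \
  'BEGIN { printf "%.4f", (i / 1e6) * 1.75 + (o / 1e6) * 14.00 }')

echo "Estimated cost: \$$cost"
```

With these numbers that works out to 0.4375 for input plus 0.56 for output, about one dollar for the request.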
Here are the specs and costs for the models we just deployed:

Model Specifications

Model | Context Window | Max Output Tokens | Image Input | Reasoning
gpt-5.2-codex | 400,000 tokens | 16,384 tokens | Yes | Yes
gpt-5.2 | 272,000 tokens | 16,384 tokens | Yes | No

Current Cost (Adjust in JSON)

Model | Input (per 1M tokens) | Output (per 1M tokens) | Cached Input (per 1M tokens)
gpt-5.2-codex | $1.75 | $14.00 | $0.175
gpt-5.2 | $2.00 | $8.00 | $0.50

Conclusion

And there you have it! You have successfully bridged the gap between the enterprise-grade infrastructure of Microsoft Foundry and the local autonomy of OpenClaw. By following these steps, you are not just running a chatbot; you are running a sophisticated agent capable of reasoning, coding, and executing tasks with the full power of GPT-5.2-Codex behind it.

The combination of Azure's reliability and OpenClaw's flexibility opens up a world of possibilities. Whether you are building an automated DevOps assistant, a research agent, or just exploring the bleeding edge of AI, you now have a robust foundation to build upon.

Now it is time to let your agent loose on some real tasks. Go forth, experiment with different system prompts, and see what you can build. If you run into any interesting edge cases or come up with a unique configuration, let me know in the comments below. Happy coding!

MS Edge - Sidebar and copilot not working on Linux Ubuntu v24 LTS
The sidebar is not syncing or accepting new app additions (+), and the Copilot button is dead/inactive. Sidebar settings are well configured. I am using the latest versions of the Ubuntu OS and MS Edge. Functionality appears to be absent, as the product was released without full functionality... Lame!

Legal question on Bing Wallpapers
Is it legal to use Bing Wallpaper on Linux, using tools available there, like the GNOME plugin "Bing Wallpaper" and such? Would it be legal for Ubuntu to add such a tool directly to the distro (not the images, which are obviously copyrighted by Getty et al., just the tool that downloads them daily)? I tried to figure this out from the terms of use, but it is really hard to understand what is actually allowed.

Edge 144 Creates Multiple GNOME Menu Icons
Starting in Edge 144, there are now two icons displaying in the GNOME applications menu:

- com.microsoft.Edge.desktop
- microsoft-edge.desktop

Both of these files are installed as part of the RPM; the first has "NoDisplay=true" at the bottom, but that doesn't seem to work as intended. I think this value needs to be in the first section, under "[Desktop Entry]", in order for it to work properly.

Failed to enter Windows
Hi everyone,

After installing Ubuntu, I cannot get into Windows and am stuck in a BIOS loop, and I have failed to repair it with an ISO or other startup disks. May I gently ask if there is any possibility of repairing Windows while preserving the documents on C:?

System information:
- MB: B350M MORTAR (MS-7A37)
- CPU: AMD Ryzen 7 1700X eight-core processor
- RAM: 32768 MB
- BIOS: E7A37AMS.170

PostgreSQL: Migrating Large Objects (LOBs) with Parallelism and PIPES
Why This Approach?

Migrating large objects (LOBs) between PostgreSQL servers can be challenging due to size, performance, and complexity. Traditional methods often involve exporting to files and re-importing, which adds overhead and slows down the process. There can also be Cloud restrictions that limit the usage of other tools, such as:

- pgcopydb: Welcome to pgcopydb’s documentation! — pgcopydb 0.17~dev documentation
- Other techniques, like using RLS: SoporteDBA: PostgreSQL pg_dump filtering data by using Row Level Security (RLS)

This solution introduces a parallelized migration script that:

- Reads directly from pg_largeobject.
- Splits work across multiple processes using the MOD() function on loid.
- Streams data via pipes, with no intermediate files.
- Scales easily by adjusting the parallel degree.
- Possible feature: commit-size logic to support resuming by excluding already-migrated LOBs.

Key Benefits

- Direct streaming: No temporary files, reducing disk I/O.
- Parallel execution: Faster migration by leveraging multiple processes.
- Simple setup: Just two helper scripts for source and destination connections.
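To make the MOD()-based split concrete, here is a small sketch of how a parallel degree of 3 assigns large objects to workers. The loid values are made up; worker i (1-based) takes the rows where mod(loid, N) = i - 1, which mirrors the predicate the migration script passes to psql:

```shell
# Sketch: how MOD(loid, PAR) partitions large objects across PAR workers.
# Worker i handles rows where mod(loid, PAR) = i - 1, the same predicate
# used in the WHERE clause of the migration script. The loids are examples.
PAR=3
assignments=""
for loid in 16384 16385 16386 16387 16388 16389; do
  worker=$(( loid % PAR + 1 ))
  assignments="$assignments $loid:$worker"
done
echo "$assignments"
```

Each worker therefore streams a disjoint subset of loids, so no coordination between threads is needed.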
Reference Scripts

Reference: Moving data with PostgreSQL COPY and \COPY commands | Microsoft Community Hub

Source Connection

-- pgsource.sh
-- To connect to the source database
-- PLEASE REVIEW CAREFULLY THE CONNECTION STRINGS TO CONNECT TO THE SOURCE SERVER
PGPASSWORD=<password> psql -t -h <sourceservername>.postgres.database.azure.com -U <username> <database>

-- Permissions to execute
chmod +x pgsource.sh

Destination Connection

-- pgdestination.sh
-- To connect to the target database
-- PLEASE REVIEW CAREFULLY THE CONNECTION STRINGS TO CONNECT TO THE DESTINATION SERVER
PGPASSWORD=<password> psql -t -h <destinationservername>.postgres.database.azure.com -U <username> <database>

-- Permissions to execute
chmod +x pgdestination.sh

Parallel Migration Script

-- transferlobparallel.sh
-- To perform the parallel migration of LOBs
echo > nohup.out
echo 'ParallelDegree: '$1 'DateTime: '`date +"%Y%m%d%H%M%S"`

# Check if there are no large objects to migrate
count=$(echo "select count(1) from pg_largeobject;"|./pgsource.sh)
count=$(echo "$count" | xargs)
if [ "$count" -eq 0 ]; then
  echo "There are no large objects to migrate. Stopping the script."
  exit 0
fi

par=$1
for i in $(seq 1 $1); do
nohup /bin/bash <<EOF &
echo "\copy (select data from (select 0 as rowsort, 'begin;' as data union select 1, concat('SELECT pg_catalog.lo_create(', lo.loid, ');SELECT pg_catalog.lo_open(', lo.loid, ', 131072);SELECT pg_catalog.lowrite(0,''', string_agg(lo.data, '' ORDER BY pageno), ''');SELECT pg_catalog.lo_close(0);') from pg_largeobject lo where mod(lo.loid::BIGINT,$par)=$i-1 group by lo.loid union select 2, 'commit;') order by rowsort) to STDOUT;"|./pgsource.sh|sed 's/\\\\\\\/\\\\/g'|./pgdestination.sh
echo "Execution thread $i finished. DateTime: ";date +"%Y%m%d%H%M%S"
EOF
done
tail -f nohup.out|grep Execution

-- Permissions to execute
chmod +x transferlobparallel.sh

-- NOTE
Please pay attention during the script execution: it will never finish on its own, because the last line is "tail -f nohup...". You need to monitor whether all the threads have finished, or check in a different session whether the "psql" processes are still working.

How to Run

./transferlobparallel.sh <ParallelDegree>

Example: ./transferlobparallel.sh 3 runs 3 parallel processes to migrate LOBs directly to the destination server.

Performance Results

Please find basic metrics below, taking into consideration that the client Linux VM (with accelerated networking) was co-located in the same region/AZ as the source and target Azure Database for PostgreSQL Flexible Server instances; servers and the Linux client were based on standard 4-vCPU SKUs, with no other special configuration.

- 16 threads: ~11 seconds for 500 MB.
- CPU usage: ~15% at 16 threads.
- Estimated migration for 80 GB: ~30 minutes.

Key Takeaways

This approach dramatically reduces migration time and complexity by combining PostgreSQL's pg_largeobject with parallelism and streaming, without intermediate storage, and requiring only the psql client as client software.

Disclaimer

The script is provided as is. Please review it carefully when running/testing. It is just a starting point to show how to migrate large objects in a parallelized way without intermediate storage; it can also be implemented/improved using other methods, as mentioned.

.NET MAUI on Linux with Visual Studio Code
Explore Cross-Platform Development with .NET MAUI on Linux! Dive into the latest release of the .NET MAUI extension for Visual Studio Code, enabling Linux users to develop apps for Android, Windows, iOS, and macOS. This guide offers a step-by-step tutorial on setting up your Linux system for .NET MAUI development, including installation of essential tools and leveraging the C# Dev Kit extension. Whether you're working on Ubuntu or another Linux distribution, this article, enriched with a video walkthrough by Gerald Versluis, simplifies the journey to creating powerful, versatile applications with .NET MAUI.

Microsoft Intune Company Portal for Linux and Conditional Access Issue
Greetings everyone,

I have the following scenario implemented regarding conditional access:

- Rule #1: For pilotuser1, for all cloud apps, for all platforms --> require MFA
- Rule #2: For pilotuser1, for all cloud apps except Microsoft Intune Enrollment and Microsoft Intune, for all platforms --> require device marked as compliant

This should allow me to successfully enroll a non-enrolled device into Intune while requiring device compliance for the other workloads. For Windows it works just fine. The problem lies with Linux.

Following the instructions in "Enroll a Linux device in Intune | Microsoft Learn" and "Get the Microsoft Intune app for Linux | Microsoft Learn", I installed the Intune app and Edge (Version 109.0.1518.52 (Official build) (64-bit)) on a VM with Ubuntu 22.04. I open the Intune app and try to sign in. The first step is to register the device in Azure AD, which goes without a problem. On the next stage I get the following and press continue. At this stage Microsoft Edge opens and I sign in successfully, but the Intune app throws an error.

The sign-in logs in Azure AD show that even though I excluded Intune Enrollment from the CA policy, it is not enough:

- Sign-in error code: 530003
- Failure reason: Your device is required to be managed to access this resource.
- Additional details: The requested resource can only be accessed using a compliant device. The user is either using a device not managed by a Mobile Device Management (MDM) agent like Intune, or using an application that doesn't support device authentication. The user could enroll their devices with an approved MDM provider, use a different app to sign in, or find the app vendor and ask them to update their app.
More details available at https://docs.microsoft.com/azure/active-directory/active-directory-conditional-access-device-remediation

- Application: Microsoft Intune Company Portal for Linux
- Application ID: b743a22d-6705-4147-8670-d92fa515ee2b
- Resource: Microsoft Graph
- Resource ID: 00000003-0000-0000-c000-000000000000
- Client app: Mobile Apps and Desktop clients
- Client credential type: None
- Resource service principal ID: 01989347-a263-48ef-a8d7-583ee83db9a2
- Token issuer type: Azure AD

Apparently something is different in the enrollment process on Linux, because I had no issues with Windows 10 enrollment. Any thoughts on the subject would be appreciated.

Kind Regards,
Panos