Adobe Payment Declines Caused by Mislabelled VAT Field — Sharing a Fix to Save Someone’s Sunday
I wanted to share a recent issue that cost me an entire Sunday, in case it saves someone else the pain I went through. I was trying to add my business card as the payment method on my Adobe account. Every attempt ended with the same message: “Purchase Declined.” I tried multiple cards — same result.

Naturally, I reached out to NatWest through their messaging system. After a long back‑and‑forth with Cora (their bot), I finally got through to a human. They confirmed there was nothing wrong with my card and advised me to check with the vendor. Adobe, of course, bounced me back to the bank. Classic loop. Eventually, I managed to solve it myself.

The culprit? A misbehaving VAT Number field. On Adobe’s payment form, there’s a field for a VAT number. If I left it blank, the payment went through immediately. But if I tried to enter my actual VAT number, the card was rejected every time. Based on a bit of trial, error, and experience with automation tools, I suspect the VAT field’s label has been updated, but the underlying target still points to the 3‑digit card security code field. Since that field is required, entering a VAT number likely breaks the form validation and triggers the “declined” status.

The fix: Leave the VAT number field empty when adding a card to Adobe. Once I did this, my business card was accepted straight away. I figured I’d share in case anyone else hits the same brick wall. It’s a small thing, but exactly the kind of time‑sink that ruins your weekend! Hope this helps someone.

Windows Update fails with “Something went wrong – Undoing changes” unless installed via ISO
Hi everyone, I'm facing a strange issue with Windows Update on my laptop and I wanted to know if anyone else has experienced something similar.

Problem: When I install updates through Windows Update, the update downloads normally and during restart it goes up to 100%, but then I get the message: “Something went wrong. Undoing changes.” After that, Windows rolls back the update.

Observation: Interestingly, updates that start directly from the “You're there” stage sometimes install correctly.

Policy change I made: Previously Windows would automatically download and install updates and frequently ask for restarts. Because of that behavior, I changed the policy to manual download and install so updates would not start installing automatically.

Thermal precaution I tried: Since my laptop has a faulty CPU fan, I also limited the maximum processor state to 99% in Power Options to prevent aggressive turbo boosting and reduce potential thermal throttling during the update process.

Another important observation: If I install the same update using a Windows ISO (in-place upgrade / repair install), the update installs successfully and does not fail at 100%.

Possible hardware issue: My laptop currently has a broken battery and a faulty CPU fan, so I'm wondering if the update process might be failing due to power or thermal issues during the installation phase.

System info:
OS: Windows 11 Pro Insider Preview
Channel: Release Preview
Current build: 26200.7840
Update that fails: KB5077241 (Build 26200.7922)

Questions:
1. Can hardware issues like a damaged battery or faulty CPU fan cause Windows Update installation failures?
2. Why would updates succeed when installing from an ISO but fail through Windows Update?
3. Which logs should I check to identify the exact cause? (CBS.log, WindowsUpdate.log, etc.)

Any suggestions, troubleshooting steps, or similar experiences would be appreciated.
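On the logs question: since the rollback happens during the servicing phase, CBS.log (at %windir%\Logs\CBS\CBS.log) is usually the most informative place to start. As a rough, generic illustration (not specific to this machine — the KB number and error markers are just plausible examples), a small script like this can filter out the lines most likely to explain the rollback:

```python
# Sketch: pull likely failure lines out of CBS.log after a rolled-back update.
# Assumes the default CBS log location; the KB number is an example value.
from pathlib import Path

def find_failure_lines(log_text: str, kb: str = "KB5077241") -> list[str]:
    """Return log lines that mention an error marker or the failing KB."""
    markers = (", Error", "failed", "E_FAIL", "0x8")  # common failure hints
    hits = []
    for line in log_text.splitlines():
        if kb in line or any(m in line for m in markers):
            hits.append(line.strip())
    return hits

if __name__ == "__main__":
    cbs = Path(r"C:\Windows\Logs\CBS\CBS.log")  # default path on Windows
    if cbs.exists():
        # Print only the most recent matches; the log can be very large.
        for hit in find_failure_lines(cbs.read_text(errors="ignore"))[-50:]:
            print(hit)
```

The HRESULT-style codes it surfaces (e.g. something like 0x800f0922) are usually the quickest thing to search on when narrowing down why the commit phase rolled back.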
Thanks!

Recovering Missing Rows (“Gaps”) in Azure SQL Data Sync — Supported Approaches (and What to Avoid)
Azure SQL Data Sync is commonly used to keep selected tables synchronized between a hub database and one or more member databases. In some cases, you may discover a data “gap”: a subset of rows that exist in the source but are missing on the destination for a specific time window, even though synchronization continues afterward for new changes. This post explains supported recovery patterns and what not to do, based on a real support scenario where a customer reported missing rows for a single table within a sync group and requested a way to synchronize only the missing records.

The scenario: Data Sync continues, but some rows are missing

In the referenced case, the customer observed that:
- A specific table had a gap on the member side (missing rows for a period), while newer data continued to sync normally afterward.
- They asked for a Microsoft-supported method to sync only the missing rows, without rebuilding or fully reinitializing the table.

This is a reasonable goal—but the recovery method matters, because Data Sync relies on service-managed tracking artifacts.

Temptation: “Can we push missing data by editing tracking tables or calling internal triggers?”

A frequent idea is to “force” Data Sync to pick up missing rows by manipulating internal artifacts:
- Writing directly into Data Sync tracking tables (for example, tables under the DataSync schema such as *_dss_tracking), or altering provisioning markers.
- Manually invoking Data Sync–generated triggers or relying on their internal logic. The case discussion specifically referenced internal triggers such as _dss_insert_trigger, _dss_update_trigger, and _dss_delete_trigger.

Why this is not recommended / not supported as a customer-facing solution

In the case, the guidance from Microsoft engineering was clear: manually invoking internal Data Sync triggers is not supported and can increase the risk of data corruption, because these triggers are service-generated at runtime and are not intended for manual use.
Directly manipulating Data Sync tracking/metadata tables is likewise not recommended. The customer thread highlights that these tracking tables are part of Data Sync internals, and using them for manual “push” scenarios is not a supported approach. The conversation also makes an important conceptual point: tracking tables are part of how the service tracks changes; they are not meant to be treated as a user-managed replication queue.

Supported recovery option #1 (recommended): Re-drive change detection via the base table

The most supportable approach is to make Data Sync detect the missing rows through its normal change-tracking path—by operating on the base/source table, not the service-managed internals.

A practical pattern: “no-op update” to re-fire tracking

In the internal discussion with the product team, the recommended pattern was to update the source/base table (even with a “no-op” assignment) so that Data Sync’s normal tracking logic is triggered, without manually invoking internal triggers. Example pattern (conceptual):

    UPDATE t
    SET some_column = some_column  -- no-op: value unchanged
    FROM dbo.YourTable AS t
    WHERE <filter identifying the rows that are missing on the destination>;

This approach is called out explicitly in the thread as a way to “re-drive” change detection safely through supported mechanisms.

Operational guidance (practical):
- Apply the update in small batches, especially for large tables, to reduce transaction/log impact and avoid long-running operations.
- Validate the impacted row set first (for example, by comparing keys between hub and member).
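To validate the impacted row set and keep the no-op updates small, one approach is to export the primary keys from hub and member, diff them, and generate one batched statement per chunk. A minimal sketch under assumed placeholder names (table, key column, and no-op column are illustrative, not from the case; exporting the key lists is assumed to have happened already):

```python
# Sketch: diff primary keys between hub and member to find the gap,
# then chunk the missing keys for small batched no-op UPDATEs.
# All table/column names below are hypothetical placeholders.

def missing_keys(hub_keys, member_keys):
    """Keys present on the hub but absent on the member, sorted."""
    return sorted(set(hub_keys) - set(member_keys))

def batches(keys, size=1000):
    """Yield fixed-size chunks so each no-op UPDATE stays short-running."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def noop_update_sql(table, key_col, noop_col, chunk):
    """Build one batched no-op UPDATE (SET col = col) for a chunk of keys."""
    ids = ", ".join(str(k) for k in chunk)
    return (f"UPDATE {table} SET {noop_col} = {noop_col} "
            f"WHERE {key_col} IN ({ids});")
```

Running the generated statements chunk by chunk (with a pause between batches on busy systems) keeps transaction log impact bounded while still letting the service's normal tracking pick up every missing row.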
Supported recovery option #2: Deprovision and re-provision the affected table (safe “reset” path)

If the gap is large, the row set is hard to isolate, or you want a clean realignment of tracking artifacts, the operational approach discussed in the case was:
1. Stop sync.
2. Remove the table from the sync group (so the service deprovisions tracking objects).
3. Fix/clean the destination state as needed.
4. Add the table back and let Data Sync re-provision and sync again.

This option is often the safest when the goal is to avoid touching system-managed artifacts directly.

Note: In production environments, customers may not be able to truncate/empty tables due to operational constraints. In that situation, the sync may take longer because the service might need to do more row-by-row evaluation. This tradeoff was discussed in the case context.

Diagnostics: Use the Azure SQL Data Sync Health Checker

When you suspect metadata drift, missing objects, or provisioning inconsistencies, the case recommended using the AzureSQLDataSyncHealthChecker script. This tool:
- Validates hub/member metadata and scopes against the sync metadata database
- Produces logs that can highlight missing artifacts and other inconsistencies
- Is intended to help troubleshoot Data Sync issues faster

A likely contributor to “gaps”: schema changes during Data Sync (snapshot isolation conflict)

In the case discussion, telemetry referenced an error consistent with concurrent DDL/schema changes while the sync process was enumerating changes (snapshot isolation + metadata changes). A well-known related error is SQL Server error 3961, which occurs when a snapshot isolation transaction fails because metadata was modified by a concurrent DDL statement, since metadata is not versioned. Microsoft documents this behavior and explains why metadata changes conflict with snapshot isolation semantics.

Prevention guidance (practical):
- Avoid running schema deployments (DDL) during active sync windows.
- Use a controlled workflow for schema changes with Data Sync—pause/coordinate changes to prevent mid-sync metadata shifts. (General best practices exist for ongoing Data Sync operations and maintenance.)

Key takeaways
- Do not treat Data Sync tracking tables/triggers as user-managed “replication internals.” Manually invoking internal triggers or editing tracking tables is not a supported customer-facing recovery mechanism.
- Do recover gaps via base-table operations (insert/update) so the service captures changes through its normal path—a “no-op update” is one practical pattern when you already know the missing row set.
- For large/complex gaps, consider the safe reset approach: remove the table from the sync group and re-add it to re-provision artifacts.
- Use the AzureSQLDataSyncHealthChecker to validate metadata consistency and reduce guesswork.
- If you see intermittent failures around deployments, consider the schema-change + snapshot isolation pattern (e.g., error 3961) as a possible contributor and schedule DDL changes accordingly.

In our experience, when row gaps appear it is usually because of a change to the primary key or to the source table.

Structural issue: Copilot presents assumptions as facts despite explicit verification constraints
I want to report a structural design issue I consistently encounter when using Microsoft 365 Copilot in a technical/enterprise context.

Problem statement

Copilot frequently presents plausible assumptions as verified facts, even when the user:
- explicitly requests verification first
- explicitly asks to label uncertainty
- explicitly prioritizes correctness over speed

This behaviour persists after repeated corrections, and even when constraints are clearly stated at the start of the conversation.

Why this is not a simple “wrong answer” issue

This is not about one incorrect response. It is about a systemic tendency:
- The model optimizes for plausibility and continuity over epistemic certainty
- User‑defined constraints (e.g. “only answer if verifiable”) are not reliably enforced
- Corrections can paradoxically introduce new confident but unverified claims

Enterprise risk

In an enterprise/technical environment this creates real risks:
- Incorrect technical decisions based on confident‑sounding answers
- Compliance and audit exposure
- Loss of trust in Copilot as a decision‑support tool

Important distinction

I am not asking for Copilot to stop reasoning or making hypotheses. I am asking for:
- Reliable enforcement of user‑defined epistemic constraints
- Explicit and consistent marking of statements as verified, unverified, or assumption/hypothesis

Why this matters

Advanced users do not want faster answers. They want correct, bounded answers—or an explicit statement that verification is not possible. Right now, Copilot’s behaviour makes that impossible to rely on. I’m sharing this here because it appears to be a design‑level issue, not a prompt‑engineering problem.

Copilot Studio Agent vs SharePoint subfolders
Hi community,

We are exploring Copilot Studio Agents and are running into some issues. When we add the root library of a SharePoint site, the agent is able to search the documents within; however, when using only a subfolder of the site as a knowledge source, it can't find any documents. The agent's response is: "I have searched the available knowledge base but could not find any instructions on 'Subject XXX'."

I read it can take up to 24 hours to build the index for subfolders, but after 3 days of waiting the agent still can't find any documents. I've tried making a new agent and directly adding the subfolder as a source. The permissions of the user configuring the agent are also correct, and the user has full access to all the files. I've also tried working with subagents, with no success either.

Has anyone else experienced the same, and is it correct that agents can't yet work properly with subfolders of SharePoint sites?