Latest Discussions
SQL log shipping
Hi everyone, has anyone experienced an issue where you set up log shipping from SQL 2014 to SQL 2022, and when you choose the paths where the data and log files should be stored on the destination server, the restore defaults to placing both files on the data drive instead of on the separate drives you specified? The initial log shipping setup completes with no errors. Would appreciate any help I can get. Thanks.
Rrmx · Apr 14, 2026 · Copper Contributor · 19 Views · 0 likes · 1 Comment

How does GitHub Copilot in SSMS 22 handle database context collection before generating a response?
Hello, I am trying to better understand the internal workflow of GitHub Copilot in SSMS 22, especially for database-specific questions. From the product descriptions, it seems that Copilot can use the context of the currently connected database, such as schema, tables, columns, and possibly other metadata, when answering questions or generating T-SQL. However, I could not find clear official documentation about the actual sequence of operations. My main questions are:

- Before generating a response, does Copilot first collect database context/metadata from the active connection and then send that context to the LLM as grounding information? Or does it first use the LLM to interpret the user's request, decide what information is needed, and then retrieve database metadata before generating the final answer?
- In some explanations, I have seen the phrase "Core SQL Copilot Infrastructure", but I cannot find any official documentation for that term. Is this an official component name? If so, what does it specifically refer to in the SSMS Copilot architecture?
- When Copilot answers schema-related or data-related questions, what information is retrieved automatically from the connected database, and is any SQL executed as part of that process?
- Is there any official architectural documentation that explains context collection, prompt grounding, LLM invocation order, and whether query execution can occur before the final response is generated?

I am asking because I want to understand the feature from both an architecture and a data governance/security perspective. Any clarification from the product team or documentation links would be greatly appreciated. Thank you.
ezpz97 · Apr 14, 2026 · Copper Contributor · 17 Views · 0 likes · 0 Comments

SQL Migration from SQL2017 to SQL2022
AG1: Windows Server 2016, SQL 2017. AG2: Windows Server 2019, SQL 2022. We are trying to migrate a database from AG1 to AG2 via a distributed AG. Because the database is on a different version, its status on AG2 will be Synchronized/In Recovery, which is not readable. Are there any ways to verify the data integrity of the database while it is not readable?
Evangeline · Apr 13, 2026 · Copper Contributor · 27 Views · 0 likes · 2 Comments

Unable to install SQL Server 2022 Express (installer glitch + SSMS error)
Hi, I recently purchased a new Lenovo laptop, and I am trying to install Microsoft SQL Server 2022 Express along with SSMS. SSMS installed successfully, but the SQL Server installation fails, and sometimes the installer UI glitches or does not load properly. Because of this, I am getting connection errors in SSMS such as "server not found" and "error 40". I am not very familiar with technical troubleshooting. Can someone guide me step by step, in a simple way, to install SQL Server correctly? Thank you.
Max12 · Apr 03, 2026 · Copper Contributor · 56 Views · 0 likes · 0 Comments

Migrate SQL 2016 to SQL 2022 - Detailed Work Breakdown Structure (WBS)
Hi, we've started a project to migrate from SQL Server 2016 to SQL Server 2022, and I'm currently preparing a detailed Work Breakdown Structure (WBS). Has anyone in this community gone through a similar migration and would be willing to share their project WBS, in either .mpp or Excel format? Regards, Subhasish Roy
subhasishroy2025 · Mar 30, 2026 · Copper Contributor · 37 Views · 0 likes · 0 Comments

SQL Server In-Memory Databases in SUSPECT Mode After Update to SQL Server 2019 (RTM-CU27-GDR) (KB5040948)
Hello everyone, last night we updated our SQL Server to SQL Server 2019 (RTM-CU27-GDR) (KB5040948). Unfortunately, after restarting the system, two of our In-Memory databases went into the SUSPECT state and did not come online. Below are the server specifications and error details.

Server specifications: 48 GB RAM, 8 CPU cores.

Error details for database TestDB1:
[ERROR] openExistingHkDatabase(): HkHostRecoverDatabaseHelper::RestoreV2(): Database ID: [19] 'TestDB1'. Failed to create XTP database. Error code: 0x88000001.
[ERROR] HkHostRecoverDatabaseHelper::ReportAndRaiseFailure(): Database ID: [19] 'TestDB1'. Failed to load XTP checkpoint. Error code: 0x88000001.
Restore operation failed for database 'TestDB1' with internal error code '0x88000001'.

Error details for database TestDB2:
Restore operation failed for database 'TestDB2' with internal error code '0x88000001'.

We checked the SQL Server log files and found the above errors indicating problems with creating the XTP database and loading the XTP checkpoint. The databases are currently in the SUSPECT state.
Mohamadreza · Mar 29, 2026 · Copper Contributor · 785 Views · 0 likes · 1 Comment

SQL Server 2025 Log Shipping Fails with Missing Assembly (sqllogship.exe) on Split-Drive Install
Hello, I am testing SQL Server 2025 in a lab environment and have encountered an issue with log shipping that appears to be related to assembly resolution.

Environment: SQL Server 2025 (fresh install; both unattended and manual installs tested); Windows Server 2022 and Windows Server 2025 (the issue occurs on both); SQL binaries installed on E:\; default system drive is C:\.

Issue: When log shipping runs (via the SQL Agent job or by manually invoking sqllogship.exe), it fails with the following error:

Unhandled Exception: System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.SqlServer.ConnectionInfo, Version=17.100.0.0...

Observed behavior:
- sqllogship.exe is located at E:\Program Files\Microsoft SQL Server\170\Tools\Binn\
- The required assemblies (e.g., Microsoft.SqlServer.ConnectionInfo.dll) are installed at C:\Program Files\Microsoft SQL Server\170\Shared\MDS5xSMO\
- The sqllogship.exe.config file in SQL Server 2025 includes explicit codeBase entries using relative paths: ..\..\Shared\MDS5xSMO\Microsoft.SqlServer.ConnectionInfo.dll
- Because of this, the application attempts to resolve assemblies at E:\Program Files\Microsoft SQL Server\170\Shared\MDS5xSMO\, which does not exist by default.

Workaround: Manually copying the shared SMO directory C:\Program Files\Microsoft SQL Server\170\Shared\MDS5xSMO to E:\Program Files\Microsoft SQL Server\170\Shared\MDS5xSMO resolves the issue. After doing this, log shipping works as expected.

Comparison with SQL Server 2022: the SQL Server 2022 sqllogship.exe.config is empty, it does not rely on explicit codeBase paths, and log shipping works without requiring any manual file copies.

Question: Is this expected behavior in SQL Server 2025, or a potential issue with how sqllogship.exe resolves shared assemblies when SQL is installed on a non-system drive? Specifically: should Shared\MDS5xSMO also be installed on the same drive as the SQL binaries, or should sqllogship.exe.config be updated to use absolute paths instead of relative ones?
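A possible alternative to copying the directory would be to point the config at the assemblies' actual location with an absolute codeBase URL. This is a hypothetical sqllogship.exe.config fragment, not a verified fix for the 2025 installer: the assembly name and version are taken from the error message above, and the PublicKeyToken is assumed to be the standard token for Microsoft SQL Server shared assemblies, so verify it locally (e.g., with `sn -T` or PowerShell's `[Reflection.AssemblyName]::GetAssemblyName(...)`) before relying on it.

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Identity from the FileNotFoundException; publicKeyToken is an assumption -->
        <assemblyIdentity name="Microsoft.SqlServer.ConnectionInfo"
                          publicKeyToken="89845dcd8080cc91"
                          culture="neutral" />
        <!-- Absolute path to the shared SMO directory on the system drive,
             instead of the relative ..\..\Shared\MDS5xSMO\ path -->
        <codeBase version="17.100.0.0"
                  href="file:///C:/Program Files/Microsoft SQL Server/170/Shared/MDS5xSMO/Microsoft.SqlServer.ConnectionInfo.dll" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

A dependentAssembly entry like this would be needed for each assembly the config currently redirects, which is why the directory-copy workaround may be simpler in practice.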
Would appreciate any confirmation or guidance from others who may have encountered this. Thanks!
abel5405 · Mar 26, 2026 · Copper Contributor · 52 Views · 0 likes · 0 Comments

Best Practices for Connecting Internal SQL Server Financial Systems to Online Payment Platforms
I currently have an internal enterprise system used for purchasing, payments, and finance operations. The system runs on an on-premises Microsoft SQL Server database and stores all financial transactions and internal workflow data. We now have a new requirement to enable online payment services for customers. These services will be exposed externally (likely in the cloud) and must interact with the same financial system so that transactions are reflected in our internal database. My main concerns relate to architecture, security, and data synchronization.

Key points about the current setup:
- The core system and database are hosted internally (on-premises).
- The database contains sensitive finance and payment data.
- Internal processes depend on the current database structure and workflows.

The new requirements:
- Develop an online payment service accessible over the internet.
- Ensure transactions from the online service update the internal system.
- Maintain data integrity and security.
- Avoid performance issues for the internal system.

I'm evaluating a few possible approaches but I'm unsure which is best practice:
- Allow the cloud payment service to connect directly to the internal SQL Server database through secure networking.
- Maintain a replicated or read/write copy of the database in the cloud.
- Use SQL Server replication (transactional or snapshot) between on-prem and cloud.
- Introduce an API or middleware layer that handles all transactions and updates the internal database.
- Maintain separate databases and synchronize transactions asynchronously.

My main questions:
- Is it recommended to expose the internal SQL Server database directly to cloud services?
- Should I use replication, a secondary database, or a service/API layer?
- What architecture pattern is commonly used for integrating on-prem financial systems with online payment platforms?
- How can we ensure consistency between internal transactions and online payments?
- Are there recommended SQL Server features or patterns for this scenario (replication, Service Broker, CDC, etc.)?

Any advice on best practices, architecture patterns, or real-world implementations would be greatly appreciated.
ShuraCouncilSeniorDev · Mar 14, 2026 · Copper Contributor · 60 Views · 0 likes · 0 Comments

Best architecture to integrate internal SQL Server system with cloud-based online payment services
I currently have an internal enterprise ERP system that also needs to be integrated with online payments and finance operations. The system runs on an on-premises Microsoft SQL Server 2022 database and stores all financial transactions and internal workflow data. We now have a new requirement to enable online payment services for customers. These services will be exposed externally (likely in the cloud) and must interact with the same financial system so that transactions are reflected in our internal database. My main concerns relate to architecture, security, and data synchronization.

Key points about the current setup:
- The core system and database are hosted internally (on-premises).
- The database contains sensitive finance and payment data.
- Internal processes depend on the current database structure and workflows.

The new requirements:
- Develop an online payment service accessible over the internet.
- Ensure transactions from the online service update the internal system.
- Maintain data integrity and security.
- Avoid performance issues for the internal system.

I'm evaluating a few possible approaches but I'm unsure which is best practice:
- Allow the cloud payment service to connect directly to the internal SQL Server database through secure networking.
- Maintain a replicated or read/write copy of the database in the cloud.
- Use SQL Server replication (transactional or snapshot) between on-prem and cloud.
- Introduce an API or middleware layer that handles all transactions and updates the internal database.
- Maintain separate databases and synchronize transactions asynchronously.

My main questions:
- Is it recommended to expose the internal SQL Server database directly to cloud services?
- Should I use replication, a secondary database, or a service/API layer?
- What architecture pattern is commonly used for integrating on-prem financial systems with online payment platforms?
- How can we ensure consistency between internal transactions and online payments?
- Are there recommended SQL Server features or patterns for this scenario (replication, Service Broker, CDC, etc.)?

Any advice on best practices, architecture patterns, or real-world implementations would be greatly appreciated.
qasilia · Mar 14, 2026 · Copper Contributor · 33 Views · 0 likes · 0 Comments

SSRS 2016 Browsing/API very slow
Since moving an existing SSRS server to an Azure VM, we get extremely slow performance when opening the portal to browse reports. DevTools shows two calls to the API taking 25s+ to respond: ServiceState and Me. When we call those two API endpoints directly (e.g., <servername>/reports/api/v1.0/ServiceState), they are both the same: very slow. They return a 200 response, so they are working, and once you've logged in, browsing around is fine. The reports themselves run well, and if we skip the API and just browse to ReportServer, it's fast. The only thing that changed on the server is its IP when it moved to Azure; there are no other new firewall rules/ACLs/etc. in place. Has anyone had this issue or can shed some light on this?
James_Christopher · Mar 13, 2026 · Copper Contributor · 1.2K Views · 0 likes · 1 Comment
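For the SSRS question above, one way to narrow down where the 25s goes is to time the two slow endpoints from the command line, both from the server itself and from a client, to separate server-side delay from network effects. A diagnostic sketch, assuming Windows authentication on the endpoints; the <servername> placeholder and the credentials are yours to fill in, and curl will prompt for the password:

```shell
# Time the two API calls the portal makes on first load.
# --ntlm/-u selects Windows authentication; %{time_total} is curl's
# total transfer time, %{http_code} the response status.
for endpoint in ServiceState Me; do
  curl --ntlm -u "DOMAIN\\user" -o /dev/null -s \
       -w "${endpoint}: HTTP %{http_code} in %{time_total}s\n" \
       "http://<servername>/reports/api/v1.0/${endpoint}"
done
```

If the endpoints are slow even when called locally on the VM, the delay is in the SSRS service itself rather than the network path, which is a useful fact to include when reporting the issue.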
Tags
- sql server (76 topics)
- Data Warehouse (73 topics)
- Integration Services (66 topics)
- sql (59 topics)
- Reporting Services (46 topics)
- Business Intelligence (43 topics)
- Analysis Services (33 topics)
- analytics (25 topics)
- Business Apps (23 topics)
- ssms (23 topics)