Recent Discussions
Access to SQL Jobs
Hi, I have several jobs in SQL Server Agent. In our department there are three people who need administrative access to the jobs. What kind of security access must they have? They already have SQL Agent Operator, SQL Agent Reader, and SQL Agent User. As far as I know, you can only edit jobs that you own, but all three people must be able to edit all jobs. How can we solve that?
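A minimal sketch of one possible workaround, assuming reassigning ownership is acceptable: non-sysadmin logins can generally edit only jobs they own, so jobs can be reassigned to the login that maintains them with msdb.dbo.sp_update_job. The job and login names below are placeholders.

```sql
USE msdb;
GO

-- Reassign a job to the login that needs to edit it.
-- 'NightlyEtl' and 'DOMAIN\DbaUser' are placeholder names.
EXEC dbo.sp_update_job
    @job_name         = N'NightlyEtl',
    @owner_login_name = N'DOMAIN\DbaUser';
GO

-- List current job owners to see which jobs would need reassigning.
SELECT j.name AS job_name,
       SUSER_SNAME(j.owner_sid) AS owner_login
FROM dbo.sysjobs AS j
ORDER BY j.name;
```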
Cannot connect Azure OpenAI Embeddings model to SQL Server 2025

On SQL Server 2025, I am trying to vectorize a table. To set up the ability for SQL Server 2025 to communicate with the Azure OpenAI embeddings model, I first created a master key for encryption.

```sql
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Secret';
GO
```

Then I set up a database scoped credential.

```sql
CREATE DATABASE SCOPED CREDENTIAL [MyAzureOpenAICredential]
WITH IDENTITY = 'HTTPEndpointHeaders',
     SECRET = '{"api-key":"secret"}';
```

Then I created an external model.

```sql
CREATE EXTERNAL MODEL AzureOpenAIEmbeddingsModel
WITH (
    LOCATION = 'https://{secret}-eastus2.cognitiveservices.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2023-05-15',
    API_FORMAT = 'Azure OpenAI',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'text-embedding-3-small',
    CREDENTIAL = [MyAzureOpenAICredential]
);
```

However, when I run this simple script:

```sql
DECLARE @text NVARCHAR(MAX) = N'SQL Server 2025 enables AI-powered applications';
DECLARE @embedding VECTOR(1536) = AI_GENERATE_EMBEDDINGS(@text USE MODEL AzureOpenAIEmbeddingsModel);
```

I get this error:

The database scoped credential 'MyAzureOpenAICredential' cannot be used to invoke an external rest endpoint.

I have read through https://learn.microsoft.com/en-us/training/modules/build-ai-solutions-sql-server/4-integrate-ai-models pertaining to this task, as well as the SQL Server 2025 docs for creating a model. I have also read the SQL Server 2025 docs for creating a database scoped credential: https://learn.microsoft.com/en-us/sql/t-sql/statements/create-database-scoped-credential-transact-sql?view=sql-server-ver17. I have not found any answers.
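A hedged guess, not a confirmed fix: the error text matches the rule used by external REST endpoint invocation, where the database scoped credential must be named after the endpoint URL rather than given an arbitrary name. If that same rule applies to CREATE EXTERNAL MODEL (an assumption), the setup would look roughly like this; the resource URL below is a placeholder.

```sql
-- Hypothetical: name the credential after the endpoint's base URL.
CREATE DATABASE SCOPED CREDENTIAL [https://your-resource-eastus2.cognitiveservices.azure.com]
WITH IDENTITY = 'HTTPEndpointHeaders',
     SECRET = '{"api-key":"secret"}';
GO

CREATE EXTERNAL MODEL AzureOpenAIEmbeddingsModel
WITH (
    LOCATION = 'https://your-resource-eastus2.cognitiveservices.azure.com/openai/deployments/text-embedding-3-small/embeddings?api-version=2023-05-15',
    API_FORMAT = 'Azure OpenAI',
    MODEL_TYPE = EMBEDDINGS,
    MODEL = 'text-embedding-3-small',
    CREDENTIAL = [https://your-resource-eastus2.cognitiveservices.azure.com]
);
```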
Polybase - Enforce TCP/IP Protocol

Hello, we know the PolyBase service in SQL Server by default uses the shared memory protocol to connect to the local SQL Server instance. I would like to know if there is any way to change or force these connections to use the TCP/IP protocol instead. Any help would be appreciated.
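Not a way to force the protocol, but a quick way to verify which transport the PolyBase engine connections actually use is to check net_transport in the standard connection DMVs; nothing below is PolyBase-specific, so filter on the login or program name your PolyBase services show up under.

```sql
-- Show the transport protocol used by each current connection, so you can
-- verify whether sessions arrive over Shared Memory, TCP, or Named Pipes.
SELECT s.session_id,
       s.login_name,
       s.program_name,
       c.net_transport,
       c.local_net_address,
       c.local_tcp_port
FROM sys.dm_exec_sessions    AS s
JOIN sys.dm_exec_connections AS c
      ON c.session_id = s.session_id
WHERE s.is_user_process = 1
ORDER BY s.login_name, s.session_id;
```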
Synchronize SQL database between two servers

Questions about synchronizing a SQL database between two servers:

1. Is SQL Server Enterprise required on both servers for the synchronization to work, or is it enough to have Enterprise on server1 and only SQL Server Express on server2?
2. Which requirements have to be fulfilled or prepared before a SQL database can be synchronized between two servers?
3. When more than one solution is possible, which options exist for synchronizing a SQL database (ultimately between four servers)?
4. Is there a manual that describes SQL Server synchronization, with settings and steps?

Thanks in advance.
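Since several of these questions depend on which edition each server runs, one quick check to run on both instances before planning a synchronization feature is the standard SERVERPROPERTY query below.

```sql
-- Report edition, product version, and servicing level for this instance.
SELECT SERVERPROPERTY('Edition')        AS Edition,
       SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel;
```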
SQL Server AG Failover - Automatic Failover

Hello, I am looking for a straight and definitive answer that I was hoping someone could give me. I want to trust what Copilot says, but I really need to hear it from Microsoft, and I can't find any documentation from Microsoft confirming my question.

My environment:

- 2 replicas in datacenter 1
- 1 replica in datacenter 2
- All three (3) replicas are set to synchronous-commit mode with automatic failover.

I tested the failover manually between all three (3) replicas without issue. When I test the automatic failover, I take down both replicas in datacenter 1 at the same time to simulate a datacenter outage. I look at the replica in datacenter 2 and it just says (Resolving...) next to the replica name. The replica does not come online and the DB is not moved.

When I was searching I couldn't find out why, so I turned to Copilot, not to solve the issue, but to see if it could point me in the right direction. I told Copilot my setup and what happened. Copilot responded stating that, by design from Microsoft, you cannot have more than two (2) replicas set to synchronous-commit mode with automatic failover in a SQL Server AG instance. If more than two (2) are set for automatic failover, the SQL Server AG will use the first two (2) replicas it sees in its metadata and ignore the rest. Copilot went into detail about why this is designed this way, but the amount of information would make this post longer than it already is.

If this is true, then when I took down both replicas in datacenter 1, the SQL Server AG only saw those two (2) replicas in datacenter 1 as the available replicas to use for an automatic failover, which would explain why the replica in datacenter 2 did not come online and the DB was not moved.

So let's do a test. I brought back up the two (2) replicas in datacenter 1. Then I made a change in the AG properties: I set the 2nd replica in datacenter 1 to manual. So 1 replica is set to automatic failover and 1 replica is set to manual failover in datacenter 1. The replica in datacenter 2 is set to automatic failover. I then take down both replicas in datacenter 1 again to simulate the "outage", and the replica in datacenter 2 comes online and the DB is moved.

So is Copilot right? Can only two (2) replicas have/use automatic failover? I cannot find a definitive answer confirming this. Or is my configuration wrong/missing something, and if it is, could you please point me in the right direction on how to get this resolved?
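For reference, the per-replica change described above (switching one replica from automatic to manual failover) can also be scripted rather than done through the AG properties dialog; the AG and replica names below are placeholders.

```sql
-- Switch one replica's failover mode from automatic to manual.
-- [MyAG] and 'SQLNODE2' are placeholder names.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (FAILOVER_MODE = MANUAL);
GO

-- Verify the configured availability and failover mode of every replica.
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       ar.availability_mode_desc,
       ar.failover_mode_desc
FROM sys.availability_replicas AS ar
JOIN sys.availability_groups   AS ag
      ON ag.group_id = ar.group_id
ORDER BY ag.name, ar.replica_server_name;
```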
Service Broker on Ubuntu 20.04 (Docker) cryptography error

Hi,

Has anyone successfully created a multi-instance / multi-host SQL Server Service Broker implementation on Linux, specifically on distributions using OpenSSL 3?

I have a fully working Service Broker environment on SQL Server 2017, running in Docker on Ubuntu 18.04 (OpenSSL 1.1.1). My environment has 6 containers, arranged to simulate two datacenters:

- DC1: Primary AG replica + synchronous replica + async replica
- DC2: Same setup
- 7 databases in the Availability Group

Service Broker dialogs between databases on the same SQL instance work perfectly. This confirms:

- Database-level dialog security works
- Certificate-based authentication between database principals is correct
- DMKs are auto-opened and protected by SMK
- All dialogs use BEGIN DIALOG with certificates valid and present

Cross-instance messages work correctly. I have:

- Broker endpoints using certificate-based endpoint authentication
- Proper certificate exchange (private key on owning instance, public key on the peer)
- Routes defined correctly
- Remote Service Bindings used to force dialog security
- Certificates created dynamically with correct validity date ranges
- DMKs created, opened, and re-encrypted with SMK at runtime
- Scripts that fully automate certificate creation, export, import, user creation, RSB creation, etc.

Everything here works fine as long as I stay on Ubuntu 18.04 (OpenSSL 1.1.1).

The problem: if I switch only the base Linux image (to Ubuntu 20.04, Ubuntu 22.04, or RHEL 8) and change nothing else, then all cross-instance Service Broker dialogs fail with:

An exception occurred while enqueueing a message in the target queue. Error: 9641, State: 122. A cryptographic operation failed. This error indicates a serious problem with SQL Server.

This happens on SQL Server 2019 and 2022, both CU up-to-date, both using OpenSSL 3. Key points:

- Intra-instance dialog security still works
- Endpoint transport security still works (i.e., endpoints authenticate and connect)
- The failure occurs only when creating session keys for cross-instance dialog security
- The failure happens even when using: encryption = off, fresh DMKs, fresh certificates, certificates stored in master or user DB, key length 2048 or 3072, different validity periods

So far I have:

- Confirmed DMKs exist on both instances
- Confirmed DMKs are encrypted by SMK
- Confirmed DMKs auto-open (is_master_key_encrypted_by_server = 1)
- Manually opened DMKs before creating certificates
- Recreated SMK/DMK on clean containers

Everything behaves as expected. I have repeatedly rebuilt all of the following:

- Endpoint authentication certificates
- Dialog security certificates
- Remote Service Bindings
- Database principals
- Routes
- Service Broker services, queues, and message types

I've verified:

- All certificate subjects match
- All public keys export/import correctly
- Private keys exist where they should exist
- Thumbprints match across both sides
- No old certificates remain in master or user DBs

I also modified the OpenSSL behaviour to legacy (1.1.1) behaviour, e.g.:

```
[openssl_init]
providers = provider_sect
ssl_conf = ssl_sect
alg_section = evp_properties

[evp_properties]
rh-allow-sha1-signatures = yes
fips_mode = no

[provider_sect]
default = default_sect
legacy = legacy_sect

[default_sect]
activate = 1

[legacy_sect]
activate = 1

[ssl_sect]
system_default = system_default_sect

[all_policy]
rsa_pkcs1_padding_check = 0

[system_default_sect]
# Lowest policy to allow legacy algorithms (including SHA-1, 1024-bit RSA)
CipherString = DEFAULT:@SECLEVEL=0
```

To rule out network/transport issues, I validated:

- Endpoints authenticate each other
- Connections appear in sys.dm_broker_connections
- STATE = OPEN

Transport security appears to be working on OpenSSL 3. I confirmed:

- Messages flow when RSB is removed and encryption = off is used
- Messages fail only when RSB is enabled, which requires certificate-based dialog security
- The target instance claims "private key missing" even though it is present and readable

Has anyone successfully created a multi-instance / multi-host SQL Server Service Broker implementation on Linux, specifically on distributions using OpenSSL 3? If so, any guidance on how to resolve the issues I am facing?

Many thanks,
Andrew
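Not an answer, but a small diagnostic sketch that may help compare a working (OpenSSL 1.1.1) and a failing (OpenSSL 3) instance side by side: it lists the broker transport connections and the certificates visible in the current database, including whether a private key is actually stored with each certificate.

```sql
-- Broker transport connections and their state/encryption details.
SELECT connection_id, state_desc, login_state_desc,
       is_accept, encryption_algorithm_desc, principal_name
FROM sys.dm_broker_connections;

-- Certificates in the current database: check expiry and whether the
-- private key is present (run in master and in the user database).
SELECT name,
       pvt_key_encryption_type_desc,  -- e.g. ENCRYPTED_BY_MASTER_KEY / NO_PRIVATE_KEY
       subject,
       start_date,
       expiry_date,
       thumbprint
FROM sys.certificates
ORDER BY name;
```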
SQL Timestamp Incremental Load Issue

An incremental load is implemented based on timestamp, using a view to update a fact table.

- The fact table contains multiple KPIs coming from different source tables.
- Two main KPI tables are included directly in the view and drive the incremental logic (their timestamps change).
- Additional KPIs come from other source tables, which are LEFT JOINed to the view.

During an incremental run:

- Main source tables are updated → timestamp changes
- Other source tables are NOT updated → timestamp unchanged
- Because of the LEFT JOIN, missing values from these tables are returned as NULL
- These NULLs overwrite existing values in the fact table

Example

Fact table (before load):

| id | app_number | score |
|----|------------|-------|
| 1  | 333        | 5     |

Source tables:

- source1, source2 → timestamps updated (drive incremental load)
- source3 → timestamp unchanged

Stored procedure logic:

```sql
MERGE fact_table tgt
USING (
    SELECT app_number,
           ISNULL(score, 0) AS score
    FROM vw_main_kpis v
    LEFT JOIN source3 s3
           ON v.app_number = s3.app_number
) src
    ON tgt.app_number = src.app_number
WHEN MATCHED THEN
    UPDATE SET tgt.score = src.score;
```

Result (incorrect):

| id | app_number | score |
|----|------------|-------|
| 1  | 333        | 0     |

Existing data is lost.
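A sketch of one common way to keep the LEFT JOINed KPI from wiping existing values: keep the NULL in the source (instead of coalescing it to 0) and only overwrite the target column when the source actually produced a value. This is an illustrative pattern using the table and column names from the example above, not the poster's production code.

```sql
MERGE fact_table AS tgt
USING (
    SELECT v.app_number,
           s3.score                      -- keep NULL here, not ISNULL(score, 0)
    FROM vw_main_kpis AS v
    LEFT JOIN source3 AS s3
           ON v.app_number = s3.app_number
) AS src
    ON tgt.app_number = src.app_number
WHEN MATCHED THEN
    UPDATE SET
        -- Preserve the existing fact value when source3 had no row this run.
        tgt.score = COALESCE(src.score, tgt.score);
```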
Compat level 90: XML string-to-datetime UDF

Hello,

I’m testing a behavior described in SQL Server documentation for **database compatibility level 90**. The docs state that a user-defined function that converts an XML constant string value to a SQL Server date/time type is marked as **deterministic**.

On **SQL Server 2005**, I’m seeing the opposite: the function is marked as **non-deterministic** (`IsDeterministic = 0`). I’m trying to understand whether I’m missing a requirement/constraint or whether this is a doc mismatch / version-specific behavior.

### Environment

- Product: **Microsoft SQL Server 2005**
- Database compatibility level: **90**

---

## ✅ Repro script

```sql
IF OBJECT_ID('dbo.fn_ParamXmlToDatetime', 'FN') IS NOT NULL
    DROP FUNCTION dbo.fn_ParamXmlToDatetime;
GO

CREATE FUNCTION dbo.fn_ParamXmlToDatetime (@xml XML)
RETURNS DATETIME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @y DATETIME;

    -- Convert an XML value to DATETIME
    SET @y = CONVERT(DATETIME, @xml.value('(/r)[1]', 'datetime'));

    RETURN @y;
END
GO

SELECT
    OBJECTPROPERTY(OBJECT_ID('dbo.fn_ParamXmlToDatetime'), 'IsDeterministic') AS IsDeterministic,
    OBJECTPROPERTY(OBJECT_ID('dbo.fn_ParamXmlToDatetime'), 'IsPrecise')       AS IsPrecise;
GO
```

### Actual result

`IsDeterministic = 0` (non-deterministic)

### Expected result (based on docs)

`IsDeterministic = 1` (deterministic) for this pattern under compat level 90.

---

## Questions

1. Are there additional conditions required for SQL Server to mark this UDF as deterministic (for example, specific XQuery usage, avoiding `CONVERT`, using `CAST`; `datetime2` doesn’t exist in 2005, etc.)?
2. Does the determinism rule apply only when converting from an **XML literal constant** inside the function, rather than an XML parameter value?
3. Is this behavior different for **typed XML** (XML schema collections) vs **untyped XML**?
4. Is this a known difference/bug in SQL Server 2005 where the UDF is functionally deterministic but still reported as non-deterministic by `OBJECTPROPERTY`?

Thank you for any clarification.

---
SQL Server 2005 (compatibility level 90)

Hello,

I’m testing the behavior described in the SQL Server documentation for **compatibility level 90** regarding the special attributes `xsi:nil` and `xsi:type`:

> “The special attributes `xsi:nil` and `xsi:type` can't be queried or modified by data manipulation language statements. This means that `/e/@xsi:nil` fails while `/e/@*` ignores the `xsi:nil` and `xsi:type` attributes. However, `/e` returns the `xsi:nil` and `xsi:type` attributes for consistency with `SELECT xmlCol`, even if `xsi:nil = "false"`.”

But on **SQL Server 2005**, I can successfully query `@xsi:nil` and it returns the expected value. I’m trying to reproduce the documented “`/e/@xsi:nil` fails” behavior, but I can’t.

### Environment

- Product: **Microsoft SQL Server 2005**
- Database compatibility level: **90**

---

## ✅ Repro script

```sql
IF EXISTS (SELECT * FROM sys.xml_schema_collections WHERE name = 'MyTestSchema')
    DROP XML SCHEMA COLLECTION MyTestSchema;
GO

CREATE XML SCHEMA COLLECTION MyTestSchema AS N'
<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="root">
        <xsd:complexType>
            <xsd:sequence>
                <xsd:element name="element" nillable="true" type="xsd:string" />
            </xsd:sequence>
        </xsd:complexType>
    </xsd:element>
</xsd:schema>';
GO

DECLARE @xmlData XML(MyTestSchema) = N'
<root xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <element xsi:nil="true" />
</root>';

;WITH XMLNAMESPACES ('http://www.w3.org/2001/XMLSchema-instance' AS xsi)
SELECT @xmlData.query('<result> { /root/element/@xsi:nil } </result>') AS Typed_Result;
```

### Actual result

`Typed_Result` contains `xsi:nil="true"` under `<result>...`.

### Expected result (based on docs)

I expected `/root/element/@xsi:nil` to fail, or not return `xsi:nil`.

---

## Questions

1. In the documentation, does “data manipulation language statements” mean only **XML DML** (i.e., `.modify()`), not XQuery used in `SELECT` with `.query()` / `.value()`?
2. Does the “`/e/@xsi:nil` fails” behavior apply only when the XML is stored in a **table column**, not when using an **XML variable**?
3. Is the behavior different between **typed XML** (with an XML schema collection) vs **untyped XML**?
4. Can someone provide a minimal reproduction in SQL Server 2005 where `/e/@xsi:nil` fails as described?

Thank you.

---
SQL Server 2025 VECTOR functions accepting JSON array strings

Hello! Playing with the new VECTOR type and functions, I can read the following at https://learn.microsoft.com/en-us/sql/t-sql/functions/vector-distance-transact-sql?view=sql-server-ver17 for vector parameters: "An expression that evaluates to vector data type." To me this means that any expression (including character strings as JSON arrays) can be used as a parameter. Since the INSERT statement accepts converting a JSON-array string to a VECTOR, I would expect that these functions also accept this conversion. However, it appears that we are forced to cast the JSON array to VECTOR. Any chance to improve this?

Here is a T-SQL example:

```sql
declare @v1 as VECTOR(3) = '[-1,1,0]';
declare @S1 as VARCHAR(50) = '[-1,1,0]';

drop table tab1;
create table tab1 (pkey int not null primary key, emb vector(3));

insert into tab1 values ( 101, @v1 );
insert into tab1 values ( 102, @S1 );

select * from tab1 order by pkey;

select vector_distance('cosine', emb, @v1) from tab1;
select vector_distance('cosine', emb, @s1) from tab1;   -- fails
```

Seb
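For reference, the workaround the post alludes to looks like this: an explicit CAST of the JSON-array string to VECTOR before passing it to the distance function. This is a sketch built on the example above, not a statement about how the behavior ought to be.

```sql
-- Explicitly cast the JSON-array string to VECTOR(3) before passing it to
-- VECTOR_DISTANCE; the implicit conversion that INSERT performs does not
-- currently appear to apply to the function argument.
declare @s1 as VARCHAR(50) = '[-1,1,0]';

select pkey,
       vector_distance('cosine', emb, CAST(@s1 AS VECTOR(3))) AS cosine_dist
from tab1
order by pkey;
```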
Clarification on redistributability of MSSQL JDBC Driver 8.4 components
I am reviewing the licensing terms for Microsoft JDBC Driver 8.4 for SQL Server, specifically for the native authentication library mssql-jdbc_auth-8.4.1.x64.dll and mssql-jdbc-8.4.1.jre8.jar. The license terms state that certain components are redistributable if they are listed in a REDIST.TXT file included with the software. However, in the JDBC Driver 8.4 package (downloaded from the official Microsoft site), we are unable to locate a REDIST.TXT file. https://go.microsoft.com/fwlink/?linkid=2137600

Could someone please clarify the following:

1. Are mssql-jdbc_auth-8.4.1.x64.dll and mssql-jdbc-8.4.1.jre8.jar considered redistributable under the Microsoft JDBC Driver 8.4 license terms?
2. If so, is there official documentation or an alternative location where the redistributable components for this driver are listed?
3. If one or both components are not redistributable, what is the recommended and supported approach for applications that require integrated authentication when deployed to customer environments?

This clarification is important for us to ensure compliance with Microsoft licensing requirements when distributing our application. Thank you for your assistance.
SQL Server 2025 request with SQL connection
Hello, I installed SQL Server 2025 on a new VM, and I’m using a C# project to initialize my database based on a model (creating tables, foreign keys, default values, etc.). The process is extremely slow on SQL Server 2025: on my SQL Server 2022 environment it takes about 30 minutes, but my first test on 2025 ran for over 4 hours. I’m connecting to SQL using a SQL authentication login (I tried both SA and a newly created account; same issue). I then tested using a Windows Authentication login, and surprisingly, the performance issue disappeared. Are there any known issues related to SQL authentication in SQL Server 2025?
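Not a confirmation of a known issue, but one way to narrow down where the extra time goes under the SQL login is to snapshot wait statistics around the slow run and compare them with the Windows-authentication run; this is a standard DMV query with nothing specific to SQL Server 2025.

```sql
-- Top waits accumulated since the instance started (or since a manual clear).
-- Capture before and after the slow initialization and compare the deltas.
SELECT TOP (15)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
ORDER BY wait_time_ms DESC;
```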
Cannot login to Analysis Services SQL 2025 Standard

Hello, I have installed SQL Server 2025 Standard Developer and there is no way to log in to Analysis Services. I have the latest version of SSMS (so drivers/libraries should not be an issue), but I keep getting a connection error (screenshots omitted). I've even tried a connection string with the server name and instance; no luck. I know that after installing SQL Server 2025 Enterprise, there is no such problem with Analysis Services. Any help would be very appreciated. Best regards!
Database Level Collation Issues

Some background information: in the past we had a database where we didn't really care about collation at the database, table/column, function, and view level. Now, due to a technology acquisition, we have a requirement for SQL_Latin1_General_CP1_CI_AS. Microsoft does not seem to have a simple method that guarantees changing the collation throughout a complex database with large numbers of tables, columns, functions, etc., especially with many dependencies, FKs, and PKs, without affecting any of the data (and no, moving the data is not simple).

I have a procedure where I stop all of the SQL services and any connections to the DB, then use the following command line to change the collation to the one I specify. The issue is that this command only works about 65% of the time, and I have no clue why it fails the other 35%. The command used is (the -s parameter is needed only for a named instance; remove it if there is no instance):

```
sqlservr -m -T4022 -T3659 -s"SQL2019" -q"SQL_Latin1_General_CP1_CI_AS"
```

or

```
sqlservr -m -T4022 -T3659 -q"SQL_Latin1_General_CP1_CI_AS"
```

When this does not work, I have a set of 6 SQL scripts that tear the database apart in a complex way to get it operational. Does anybody have any idea why the command is hit and miss? Or a better method/way to fully change the DB collation?
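As a side note, a quick inventory query over the standard catalog views (nothing specific to any collation-change tool) can show which columns still differ from the target collation after a conversion attempt, which may help spot where the 35% failures leave things.

```sql
-- List user-table columns whose collation differs from the target collation.
SELECT OBJECT_SCHEMA_NAME(c.object_id) AS schema_name,
       OBJECT_NAME(c.object_id)        AS table_name,
       c.name                          AS column_name,
       c.collation_name
FROM sys.columns AS c
JOIN sys.tables  AS t
      ON t.object_id = c.object_id
WHERE c.collation_name IS NOT NULL
  AND c.collation_name <> 'SQL_Latin1_General_CP1_CI_AS'
ORDER BY schema_name, table_name, c.column_id;
```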
Major increase in transaction log backup size after upgrading to SQL Server 2022

Has anyone experienced a major increase in transaction log backup size after upgrading to SQL Server 2022? I recently upgraded one of our SQL Server environments from SQL Server 2019 to SQL Server 2022 (migrated databases to new SQL Server 2022 instances). After the upgrade, I noticed that transaction log backup sizes increased by 3× to 5×, even though:

- No application changes were made
- Workload and data change volume should be roughly the same
- Backup schedule remained the same (log backups every 30 minutes)
- Backup compression is default (same as before)
- The database is part of an Always On Availability Group running in synchronous commit mode

Metrics I observed:

- Transaction log backup file size: increased 3× to 7×
- Log Write Throughput/sec: more than doubled
- Write Log Throughput: increased after upgrade
- Background Write Pages/sec: went up as well
- Log Bytes Flushed/sec
- Bytes Sent to Replica/sec: increased post-upgrade
- Backup/Restore Throughput/sec: increased from ~13 MB/sec to ~38 MB/sec
- Bytes Sent to Replica/sec (Always On counter): increased
- New hardware is used: previous physical disk sector size was 4K, now it's 512 bytes
- Same amount of RAM and CPU
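One way to quantify the change objectively is to compare log-backup sizes recorded in msdb before and after the migration; a sketch follows, with the database name as a placeholder.

```sql
-- Daily count and size of transaction log backups for one database.
-- Compare the days before and after the upgrade; replace N'YourDatabase'.
SELECT CAST(bs.backup_start_date AS date)         AS backup_day,
       COUNT(*)                                   AS log_backups,
       SUM(bs.backup_size)            / 1048576.0 AS total_mb,
       SUM(bs.compressed_backup_size) / 1048576.0 AS total_compressed_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.type = 'L'                    -- transaction log backups only
  AND bs.database_name = N'YourDatabase'
GROUP BY CAST(bs.backup_start_date AS date)
ORDER BY backup_day;
```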
Recent Blogs
- SQLCon 2026, the premier Microsoft SQL Community Conference, is co-located with the Microsoft Fabric Community Conference (FabCon) from March 16-20, 2026! Register today! This year, your Azure Data... (Feb 05, 2026)
- The SQL community is leveling up! We’re launching a lightweight, beginner-friendly SQL + AI contest designed to spark awareness of SQL Server 2025’s new AI capabilities, from connecting to an AI mo... (Jan 30, 2026)