Recent Discussions
Data flow sink to Blob storage not writing to subfolder
Hi Everybody

This seems like it should be straightforward, but it just doesn't seem to work... I have a file containing JSON data, one document per line, with many different types of data. Each type is identified by a field named "OBJ", which tells me what kind of data it contains. I want to split this file into separate files in Blob storage for each object type prior to doing some downstream processing.

So, I have a very simple data flow - a source which loads the whole file, and a sink which writes the data back to separate files. In the sink settings, I've set the "File name option" setting to "Name file as column data" and selected my OBJ column for the "Column Data", and this basically works - it writes out a separate file for each OBJ value, containing the right data. So far, so good.

However, what doesn't seem to work is the very simplest thing - I want to write the output files to a folder in my Blob storage container, but the sink seems to completely ignore the "Folder path" setting and just writes them into the root of the container. I can write my output files to a different container, but not to a subfolder inside the same container. It even creates the folder if it's not there already, but doesn't use it.

Am I missing something obvious, or does the "Folder path" setting just not work when naming files from column data? Is there a way around this?
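For context on what I'm expecting: as I understand it, a blob "folder" is just a prefix on the blob name, so what I want from the sink is output blobs named like myfolder/<OBJ>.json inside the same container. A small sketch with the azure-storage-blob SDK (container, folder, and connection string are placeholders, not my real setup) of the layout I'm after:

from azure.storage.blob import BlobServiceClient

# Placeholder connection string and names -- substitute your own.
svc = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = svc.get_container_client("mycontainer")

# Blob storage has no real folders: "output/bytype/" is simply a prefix on the
# blob name, so this is the shape I expect the sink to produce.
container.upload_blob(name="output/bytype/SAMPLE.json",
                      data=b'{"OBJ": "SAMPLE"}\n', overwrite=True)

# List what actually landed under the prefix.
for blob in container.list_blobs(name_starts_with="output/bytype/"):
    print(blob.name)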
September 2025 Recap: What's New with Azure Database for PostgreSQL

September 2025 Recap for Azure Database for PostgreSQL

September was a big month for Azure Postgres! From the public preview of PostgreSQL 18 (launched same day as the community!) to the GA of Azure Confidential Computing and Near Zero Downtime scaling for HA, this update is packed with new capabilities that make PostgreSQL on Azure more secure, performant, and developer-friendly.

💡 Here's a quick peek at what's inside:
PostgreSQL 18 (Preview) – early access to the latest community release on Azure
Near Zero Downtime Scaling (GA) – compute scaling in under 30 seconds for HA servers
Azure Confidential Computing (GA) – hardware-backed data-in-use protection
PostgreSQL Discovery & Assessment in Azure Migrate (Preview) – plan your migration smarter
LlamaIndex Integration – build AI apps and vector search using Azure Postgres
VS Code Extension Enhancements – new Server Dashboard + Copilot Chat integration

Catch all the highlights and hands-on guides in the full recap 👉

#PostgreSQL #AzureDatabase #AzurePostgres #CloudDatabases #AI #OpenSource #Microsoft

PostgreSQL 18 Preview on Azure Database for PostgreSQL
PostgreSQL 18 Preview on Azure Postgres Flexible Server

We're excited to bring the latest Postgres innovations directly into Azure. With PG18 Preview, you can already test:
🔹 Asynchronous I/O (AIO) → faster queries & lower latency
🔹 Vacuuming enhancements → less bloat, fewer replication conflicts
🔹 UUIDv7 support → better indexing & sort locality
🔹 B-Tree skip scan → more efficient use of multi-column indexes
🔹 Improved logical replication & DDL → easier schema evolution across replicas

And that's just the start — PG18 includes hundreds of community contributions, with 496 from Microsoft engineers alone 💪

👉 Try it out today on Azure Postgres Flexible Server (initially in East Asia), share your feedback, and help shape GA.

Problem with Linked Service to SQL Managed Instance
Hi

I'm trying to create a linked service to a SQL Managed Instance. The Managed Instance is configured with a VNet-local endpoint. If I try to connect with an autoresolve IR or a SHIR, I get the following error:

The value of the property '' is invalid: 'The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net''. The remote name could not be resolved: 'SQL01.public.ec9fbc2870dd.database.windows.net'

Is there a way to connect to it without resorting to a private endpoint?

Cheers
Alex
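PS: in case it's useful context, the public endpoint listens on port 3342 while the VNet-local endpoint uses 1433, so a minimal way to check whether the name in the error resolves and is reachable at all (plain Python, nothing ADF-specific; the host name is the one from the error above):

import socket

HOST = "SQL01.public.ec9fbc2870dd.database.windows.net"
PORT = 3342  # SQL MI public endpoint port (the VNet-local endpoint uses 1433)

try:
    addrs = socket.getaddrinfo(HOST, PORT, proto=socket.IPPROTO_TCP)
    print("resolves to:", sorted({a[4][0] for a in addrs}))
    with socket.create_connection((HOST, PORT), timeout=5):
        print("TCP connect OK")
except socket.gaierror as exc:
    print("DNS resolution failed:", exc)  # matches the 'remote name could not be resolved' error
except OSError as exc:
    print("name resolved, but connect failed:", exc)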
Azure SQL server rollback itself?

We have an Azure SQL server. It is a data source of a Power App canvas app. Today I connected to it with SSMS v19.

First, I ran 'Begin tran' twice (is it a mistake?). Then 'Delete From dbo.table1 where ID=30' and another row with ID=31. Then I verified these 2 rows are deleted by 'Select * from dbo.table1'. Finally, I ran 'Commit tran'. I verified again that the above 2 rows are deleted by 'Select * from dbo.table1'.

However, there is no change in the Power App. So I reopened SSMS and connected to the DB again. This time when I ran 'Select * from dbo.table1', the 2 rows are showing up.

What could be the problem? Is it a bug in the old version of SSMS?
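For context on the two 'Begin tran' statements: as far as I understand, T-SQL transactions nest, so two BEGIN TRANs push @@TRANCOUNT to 2, a single COMMIT only brings it back to 1, and a session that closes with an open transaction is rolled back. A small pyodbc sketch (the connection string is a placeholder) that shows the counter:

import pyodbc

# Placeholder connection string -- point it at a test database, not production.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=testdb;UID=user;PWD=secret",
    autocommit=True,  # let the explicit BEGIN/COMMIT TRAN statements drive the transaction
)
cur = conn.cursor()

cur.execute("BEGIN TRAN")
cur.execute("BEGIN TRAN")
print("after two BEGIN TRANs:", cur.execute("SELECT @@TRANCOUNT").fetchone()[0])  # -> 2

cur.execute("COMMIT TRAN")
print("after one COMMIT TRAN:", cur.execute("SELECT @@TRANCOUNT").fetchone()[0])  # -> 1, still open

conn.close()  # closing the session while @@TRANCOUNT > 0 rolls the whole transaction back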
ADF connection issue with Cassandra

Hi, I am trying to connect to a Cassandra DB hosted in Azure Cosmos DB. I created the linked service but am getting the below error on test connection. I have already checked the Cassandra DB and its public network access is set to all networks. Google suggested enabling SSL but there is no such option in the linked service. Please help.

Failed to connect to the connector. Error code: 'Unknown', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError'
Failed to connect to the connector. Error code: 'InternalError', message: 'Failed to connect to Cassandra server due to, ErrorCode: InternalError'
Failed to connect to Cassandra server due to, ErrorCode: InternalError
All hosts tried for query failed (tried 51.107.58.67:10350: SocketException 'A request to send or receive data was disallowed because the socket is not connected and (when sending on a datagram socket using a sendto call) no address was supplied')
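For what it's worth, outside of ADF the Cosmos DB Cassandra API only accepts TLS connections on port 10350 (the same port shown in the error), which is consistent with a non-TLS client being rejected. A minimal sketch with the Python cassandra-driver - the account name and key are placeholders:

import ssl
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

ACCOUNT = "myaccount"      # placeholder Cosmos DB account name
KEY = "<primary-key>"      # placeholder account key

ssl_context = ssl.create_default_context()   # TLS is mandatory for the Cassandra API
auth = PlainTextAuthProvider(username=ACCOUNT, password=KEY)

cluster = Cluster(
    [f"{ACCOUNT}.cassandra.cosmos.azure.com"],
    port=10350,
    auth_provider=auth,
    ssl_context=ssl_context,
)
session = cluster.connect()
print(session.execute("SELECT release_version FROM system.local").one())
cluster.shutdown()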
August 2025 Recap: Azure Database for PostgreSQL

Here's what's new this month to help you build smarter and scale securely:
Advisor performance tuning (GA): New insights on index scans, logging, stats, and connections
Entra ID group login (Preview): Let users sign in with their own credentials (no need for login using group-ID)
New region – Austria East: Lower latency + data residency options for Central Europe
LangChain & LangGraph support: Use Azure PostgreSQL as a vector store for AI agents
Active-active replication guide: Step-by-step walkthrough using pglogical

Full details in monthly recap: https://techcommunity.microsoft.com/blog/adforpostgresql/august-2025-recap-azure-database-for-postgresql/4450527

Prevent Accidental Deletion of an Instance in Azure Postgres
Did you know that accidental deletion of database servers is a leading source of support tickets? Read this blog post to learn how you can safeguard your Azure Database for PostgreSQL Flexible Server instances using ARM's CanNotDelete lock — an easy best-practice that helps prevent accidental deletions while keeping regular operations seamless.

🌐 Prevent Accidental Deletion of an Instance in Azure Postgres
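As a rough illustration of the idea (a sketch, not taken from the blog post - the resource names are placeholders and the dict-style lock parameters are an assumption), applying a CanNotDelete lock to a flexible server with the azure-mgmt-resource lock client looks something like this:

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource.locks import ManagementLockClient

# Placeholder subscription, resource group, and server names.
client = ManagementLockClient(DefaultAzureCredential(), "<subscription-id>")

client.management_locks.create_or_update_at_resource_level(
    resource_group_name="my-rg",
    resource_provider_namespace="Microsoft.DBforPostgreSQL",
    parent_resource_path="",
    resource_type="flexibleServers",
    resource_name="my-pg-server",
    lock_name="do-not-delete",
    parameters={"level": "CanNotDelete", "notes": "Protect the production server"},
)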
User configuration issue

Hi, I am getting the below error:

"Execution fail against sql server. Please contact SQL Server team if you need further support. Sql error number: 3930. Error Message: The current transaction cannot be committed and cannot support operations that write to the log file. Roll back the transaction"

I have never faced this kind of error. Could anyone please let me know what I can do and what I need to do? I am a beginner, so please explain it to me. Thanks.

Help with Partial MongoDB Update via Azure Data Factory Data Flow
Hello, everyone! I have a complex question about how to perform a partial update on a MongoDB collection using Data Flow in Azure Data Factory. My goal is to modify only some nested fields without overwriting the entire document.

My flow reads JSON files with the following structure:

{
  "_id": { "$oid": "1xp3232to" },
  "root_field": "root_value",
  "main_array": [
    {
      "array_id": "id001",
      "status": "PENDING",
      "nested_array": []
    }
  ],
  "numeric_value": { "$numberDecimal": "10.99" }
}

I need Data Flow to make two changes in a single run:
1. Change the status field from "PENDING" to "SENT".
2. Add a new object to the nested_array with the following data: event: "SENT", description: "FILE GENERATED", timestamp: (current date and time), system: "Sis Test".

I've tried some expressions with update and append in the Derived Column transformation, but I can't get the syntax right to make both changes at the same time. My biggest concern is with the MongoDB Sink: how to configure it so that Data Flow performs a partial update and doesn't overwrite the entire document, losing root_field, numeric_value, etc.?

My questions are:
1. What is the correct expression for the Derived Column that makes these two nested modifications in a single step?
2. How should I configure the MongoDB Sink to ensure the update is partial, using _id as the key?

I really appreciate the community's help!
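To make the target concrete, this is the partial update I'm trying to reproduce, written directly against MongoDB with pymongo (connection string, database, and collection names are placeholders): a single update_one that touches only the two nested paths and leaves the rest of the document alone.

from datetime import datetime, timezone
from pymongo import MongoClient

# Placeholder connection string and names -- substitute your own.
coll = MongoClient("mongodb://localhost:27017")["mydb"]["orders"]

filter_ = {"main_array.array_id": "id001"}   # or {"_id": <the document's ObjectId>}

update = {
    # 1) flip the matched array element's status...
    "$set": {"main_array.$[elem].status": "SENT"},
    # 2) ...and append an event to its nested_array in the same statement
    "$push": {
        "main_array.$[elem].nested_array": {
            "event": "SENT",
            "description": "FILE GENERATED",
            "timestamp": datetime.now(timezone.utc),
            "system": "Sis Test",
        }
    },
}

# Only the listed paths change; root_field, numeric_value, etc. are untouched.
result = coll.update_one(filter_, update, array_filters=[{"elem.array_id": "id001"}])
print(result.modified_count)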
Getting an Oauth2 API access token using client_id and client_secret - help

Hi,

I'm attempting to integrate external data into our SQL Server. The third-party data is from a solution called iLevel. They use token-based OAuth2 APIs for access. The integration tool is ADF Pipelines. I'm not a data engineer but it has fallen upon me to complete this exercise. What I've attempted so far is failing and I don't know why. I would like your help on this. I'll explain what I've configured so far in the order I configured it.

1) To generate a client_id and client_secret, I logged on to the iLevel solution itself and generated the same for my account (call it the 'Joe' account) and the Team account (call it the 'Data team' account). I've recorded the client_id and client_secret for both users/accounts in Notepad for reference.

2) I logged in to Azure Data Factory using my 'Joe Admin' admin account (this is the account I need to log in with for any ADF development).

3) I created a Linked Service with the following configuration. Note how the Test connection was successful. I guess this means our ADF instance can connect to iLevel's Base URL.

4) I then created a dataset for iLevel. I configured this based on an online example I was following which I can't get working, so this configuration may be incorrect.

5) I then created a Pipeline which contains a 'Web' activity and a 'Set variable' activity. The Pipeline has a variable as shown below.

The 'Web' activity has the following configuration:

URL = iLevel's token URL (it is different from the Base URL used in the Linked Service).

Body = I've blocked out the client_id and client_secret (I'm using the client_id and client_secret generated for the 'Data team' account - remember I'm logged into ADF using the 'Joe Admin' account - not sure if this makes a difference) but have placed red brackets around where the start and end of each value is. I'm not wrapping the values in any single or double quotes - not sure if I'm meant to. I'm not sure if I have configured the Body correctly.

The iLevel documentation states to use an Authorization header, Content-Type header and Body - it states the following is needed to obtain an access token, but it doesn't state exactly how to submit the information (i.e. how to format it). Notice how, in my configuration, I haven't used an Authorization header - this is partly because an online example I've followed doesn't use one. If iLevel state to use one then I think I should, but I don't know how to format it - any ideas?

The 'Set variable' activity has the following configuration. The idea is the access token is retrieved from the 'Web' activity and placed in the 'Set variable' "iLevel access token" variable.

At this point I validate all and it comes back with no errors found. I then Debug it to see if it does indeed work, but it returns an error stating the request contains an invalid client_id or client_secret. The client_id and client_secret values used are the exact same ones I generated from within the iLevel solution just a few hours ago.

Is anyone able to point out to me why this isn't working? Have I populated all that I need to (as mentioned, iLevel say to use an Authorization header which I haven't, but I don't know how to format it if I were to use one)? What can I do to get this working?

I'm just trying to get the access token at the moment. I've not even attempted to extract the iLevel data and can't until I get a working token. iLevel's tokens have a 1-hour time-to-live, so the Pipeline needs to generate a new token each time it's executed.

Your help will be most appreciated.
Thanks.
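PS: from what I understand of the standard OAuth2 client_credentials flow (I don't have iLevel-specific documentation beyond the above, so the URL and parameter names below are assumptions), the token request is normally a form-encoded POST like this. The same key=value pairs, unquoted, are what would go in the Web activity's Body with a Content-Type of application/x-www-form-urlencoded; some providers instead want the id and secret Base64-encoded in a Basic Authorization header.

import requests

# Placeholder URL and credentials -- substitute iLevel's real token URL and the
# client_id / client_secret generated inside iLevel.
TOKEN_URL = "https://example.ilevelsolutions.com/oauth/token"
CLIENT_ID = "<client_id>"
CLIENT_SECRET = "<client_secret>"

resp = requests.post(
    TOKEN_URL,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["access_token"][:20], "...")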
July 2025 Recap: Azure Database for PostgreSQL

Azure PostgreSQL – July 2025 Recap is live! This month's update brings a strong mix of performance, security, and developer experience improvements — plus new region rollouts and deeper analytics integrations.

What's new:
Cascading Read Replica (Preview) for multi-level replication
Fabric Mirroring with VNET + Private Endpoint (Preview)
Power BI Entra ID authentication (GA)
Upsert + Script activity in ADF/Synapse (GA)
Now available in Malaysia West & Chile Central
PostgreSQL minor version updates: 17.5, 16.9, 15.13, 14.18, 13.21
Agentic Web + NLWeb support for PostgreSQL
VS Code extension enhancements
Improved maintenance for stopped instances

Read the complete blog post - Azure PostgreSQL – July 2025 Recap is live!

Azure SQL Firewall / Locks
Hi there,

I have 2 environments, and I'm more of an admin on the Azure environment (I was recently made a subscription admin).

Dev issue - Azure SQL: I'm having difficulty removing an IP from the Azure SQL firewall (earlier I was able to). Today my manager granted me subscription admin and SQL Security Manager, and I'm still not able to remove the grayed-out IPs.
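In case it helps to reproduce this outside the portal, the same rules can be listed and deleted through the management API, where a permissions problem or a resource lock comes back as an explicit error rather than a grayed-out entry. A rough sketch with the azure-mgmt-sql package - subscription, resource group, server, and rule names are placeholders:

from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient

# Placeholder subscription / resource group / server names.
client = SqlManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List the server-level firewall rules the portal is showing.
for rule in client.firewall_rules.list_by_server("my-rg", "my-sql-server"):
    print(rule.name, rule.start_ip_address, rule.end_ip_address)

# Try the delete directly; a missing permission or a lock surfaces here
# as an explicit error instead of a grayed-out button.
client.firewall_rules.delete("my-rg", "my-sql-server", "ClientIPAddress_xxx")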
Better way to check database for dups

I have VBA in Excel that incorporates SQL to iterate through all the records in a sheet and checks the database for that record, using a combination of two fields to make a unique id. If it's in the database then it's deleted from the sheet. Otherwise it goes to the next one. Easy enough. I've had this code for years and it's always been fine, but now there are too many records to do it this way. It takes too long. What would be a better way to query the SQL database to return those records that are dups and then delete them from the sheet?

Here's an idea of the snippet of code I currently use:

'i is the record number from bottom to top
For i = DataSheet.Range("A" & Rows.Count).End(xlUp).Row To 2 Step -1
    CountyName = DataSheet.Cells(i, 1).Value
    CaseNumber = DataSheet.Cells(i, 3).Value
    strSQL = "SELECT County, Case_Number FROM tblAdLetter " & _
             "WHERE County = '" & CountyName & "' AND Case_Number = '" & CaseNumber & "'"
    rs.Open strSQL, cn
    If Not rs.EOF Then
        DataSheet.Rows(i).Delete
    End If
    rs.Close
Next i

Is there a better way or faster way?

tod

Dynamics AX connector stops getting records after amount of time
Hello everyone, I am using the Dynamics AX connector to get data out of Finance. After a certain amount of time it suddenly doesnt get any new records anymore and it keeps running until it reaches the general timeout. It gets 290,000 records in like 01:30:00 and then keeps running and doesn't get new records anymore. Sometimes it gets stuck earlier or later. Sometimes it also gives me this error: Failure happened on 'Source' side. ErrorCode=ODataRequestTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Fail to get response from odata service in a expected time.,Source=Microsoft.DataTransfer.Runtime.ODataConnector,''Type=System.Threading.Tasks.TaskCanceledException,Message=A task was canceled.,Source=mscorlib,' This is my pipeline JSON: { "name": "HICT - Init Sync SalesOrders", "properties": { "activities": [ { "name": "Get FO SalesOrders", "type": "Copy", "dependsOn": [], "policy": { "timeout": "0.23:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "source": { "type": "DynamicsAXSource", "query": "$filter=FM_InterCompanyOrder eq Microsoft.Dynamics.DataEntities.NoYes'No' and dataAreaId eq 'prev'&$select=SalesOrderNumber,SalesOrderName,IsDeliveryAddressPrivate,FormattedInvoiceAddress,FormattedDeliveryAddress,ArePricesIncludingSalesTax,RequestedReceiptDate,QuotationNumber,PriceCustomerGroupCode,PBS_PreferredInvoiceDate,PaymentTermsBaseDate,OrderTotalTaxAmount,OrderTotalChargesAmount,OrderTotalAmount,TotalDiscountAmount,IsInvoiceAddressPrivate,InvoiceBuildingCompliment,InvoiceAddressZipCode,LanguageId,IsDeliveryAddressOrderSpecific,IsOneTimeCustomer,InvoiceAddressStreetNumber,InvoiceAddressStreet,InvoiceAddressStateId,InvoiceAddressPostBox,InvoiceAddressLongitude,InvoiceAddressLatitude,InvoiceAddressDistrictName,InvoiceAddressCountyId,InvoiceAddressCountryRegionISOCode,InvoiceAddressCity,FM_Deadline,Email,DeliveryTermsCode,DeliveryModeCode,DeliveryBuildingCompliment,DeliveryAddressCountryRegionISOCode,DeliveryAddressZipCode,DeliveryAddressStreetNumber,SalesOrderStatus,DeliveryAddressStreet,DeliveryAddressStateId,SalesOrderPromisingMethod,DeliveryAddressPostBox,DeliveryAddressName,DeliveryAddressLongitude,DeliveryAddressLocationId,DeliveryAddressLatitude,DeliveryAddressDunsNumber,DeliveryAddressDistrictName,DeliveryAddressDescription,DeliveryAddressCountyId,DeliveryAddressCity,CustomersOrderReference,IsSalesProcessingStopped,CustomerRequisitionNumber,SalesOrderProcessingStatus,CurrencyCode,ConfirmedShippingDate,ConfirmedReceiptDate,SalesOrderOriginCode,URL,OrderingCustomerAccountNumber,InvoiceCustomerAccountNumber,ContactPersonId,FM_WorkerSalesTaker,FM_SalesResponsible,PaymentTermsName,DefaultShippingSiteId,DefaultShippingWarehouseId,DeliveryModeCode,dataAreaId,FM_InterCompanyOrder&cross-company=true", "httpRequestTimeout": "00:15:00", "additionalHeaders": { "Prefer": "odata.maxpagesize=1000" }, "retrieveEnumValuesAsString": true }, "sink": { "type": "JsonSink", "storeSettings": { "type": "AzureBlobStorageWriteSettings", "copyBehavior": "FlattenHierarchy" }, "formatSettings": { "type": "JsonWriteSettings" } }, "enableStaging": false, "enableSkipIncompatibleRow": true, "logSettings": { "enableCopyActivityLog": true, "copyActivityLogSettings": { "logLevel": "Warning", "enableReliableLogging": false }, "logLocationSettings": { "linkedServiceName": { "referenceName": "AzureBlobStorage", "type": "LinkedServiceReference" }, "path": "ceexports" } } }, "inputs": [ { "referenceName": 
"AX_SalesOrders_Dynamics_365_FO_ACC", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "Orders_FO_D365_Data_JSON", "type": "DatasetReference" } ] }, { "name": "Get_All_CE_Table_Data", "type": "ForEach", "dependsOn": [ { "activity": "Get FO SalesOrders", "dependencyConditions": [ "Completed" ] } ], "userProperties": [], "typeProperties": { "items": { "value": "@pipeline().parameters.CE_Tables", "type": "Expression" }, "activities": [ { "name": "Copy_CE_TableData", "type": "Copy", "dependsOn": [], "policy": { "timeout": "0.12:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "source": { "type": "CommonDataServiceForAppsSource" }, "sink": { "type": "DelimitedTextSink", "storeSettings": { "type": "AzureBlobStorageWriteSettings", "copyBehavior": "FlattenHierarchy" }, "formatSettings": { "type": "DelimitedTextWriteSettings", "quoteAllText": true, "fileExtension": ".txt" } }, "enableStaging": false }, "inputs": [ { "referenceName": "CE_Look_Up_Tables", "type": "DatasetReference", "parameters": { "entiryName": "@item().sourceDataset" } } ], "outputs": [ { "referenceName": "CE_GenericBlobSink", "type": "DatasetReference", "parameters": { "sinkPath": { "value": "@item().sinkPath", "type": "Expression" } } } ] } ] } }, { "name": "Transform_Create_CE_JSON", "type": "ExecuteDataFlow", "dependsOn": [ { "activity": "Get_All_CE_Table_Data", "dependencyConditions": [ "Succeeded" ] } ], "policy": { "timeout": "0.12:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "dataflow": { "referenceName": "FO_Transform_CE_Select", "type": "DataFlowReference" }, "compute": { "coreCount": 16, "computeType": "General" }, "traceLevel": "Fine" } } ], "parameters": { "CE_Tables": { "type": "array", "defaultValue": [ { "name": "D365_CE_ACC_AccountRelations", "sourceDataset": "crmp_accountrelation", "sinkPath": "ce-exports/D365_CE_ACC_AccountRelations.json" }, { "name": "D365_CE_ACC_ContactRelations", "sourceDataset": "crmp_contactrelation", "sinkPath": "ce-exports/D365_CE_ACC_ContactRelations.json" }, { "name": "D365_CE_ACC_PriceCustomerGroup", "sourceDataset": "msdyn_pricecustomergroup", "sinkPath": "ce-exports/D365_CE_ACC_PriceCustomerGroup.json" }, { "name": "D365_CE_ACC_SalesOrderOrigin", "sourceDataset": "odin_salesorderorigin", "sinkPath": "ce-exports/D365_CE_ACC_SalesOrderOrigin.json" }, { "name": "D365_CE_ACC_ShipVia", "sourceDataset": "msdyn_shipvia", "sinkPath": "ce-exports/D365_CE_ACC_ShipVia.json" }, { "name": "D365_CE_ACC_SystemUser", "sourceDataset": "systemuser", "sinkPath": "ce-exports/D365_CE_ACC_SystemUser.json" }, { "name": "D365_CE_ACC_TermsOfDelivery", "sourceDataset": "msdyn_termsofdelivery", "sinkPath": "ce-exports/D365_CE_ACC_TermsOfDelivery.json" }, { "name": "D365_CE_ACC_Worker", "sourceDataset": "cdm_worker", "sinkPath": "ce-exports/D365_CE_ACC_Worker.json" }, { "name": "D365_CE_ACC_TransactionCurrency", "sourceDataset": "transactioncurrency", "sinkPath": "ce-exports/D365_CE_ACC_TransactionCurrency.json" }, { "name": "D365_CE_ACC_Warehouse", "sourceDataset": "msdyn_warehouse", "sinkPath": "ce-exports/D365_CE_ACC_Warehouse.json" }, { "name": "D365_CE_ACC_OperationalSite", "sourceDataset": "msdyn_operationalsite", "sinkPath": "ce-exports/D365_CE_ACC_OperationalSite.json" }, { "name": "D365_CE_ACC_PaymentTerms", "sourceDataset": "odin_paymentterms", "sinkPath": "ce-exports/D365_CE_ACC_PaymentTerms.json" } ] } }, 
"annotations": [], "lastPublishTime": "2025-07-30T12:55:32Z" }, "type": "Microsoft.DataFactory/factories/pipelines" }72Views0likes0CommentsHow to use existing cache for external table when acceleration in progress
I enabled query acceleration for my external table, which is bound to a delta table (1 TB) on ADLS, but the acceleration needs about 1.5 hours to complete. I found that while acceleration is in progress, querying the table is quite a bit slower than when acceleration has completed. How can I keep using the existing acceleration cache/index while the new one builds, and have Kusto switch to the new index only after acceleration has completed?

timechart legend in Azure Data Explorer
Hi, I'm creating a timechart dashboard in Azure Data Explorer and facing an issue with the legend labels. The legend labels have extra prefixes and suffixes, such as "Endpoint" or "Count". How can I remove these and show only the actual value in the legend? Thank you!

Azure Data Factory ForEach Loop Fails Despite Inner Activity Error Handling - Seeking Best Practices
Hello Azure Data Factory Community,

I'm encountering a persistent issue with my ADF pipeline where a ForEach loop is failing, even though I've implemented error handling for the inner activities. I'm looking for insights and best practices on how to prevent internal activity failures from propagating up and causing the entire ForEach loop (and subsequently the pipeline) to fail, while still logging all outcomes.

My Setup:

My pipeline processes records using a ForEach loop. Inside the loop, I have a Web activity (Sample_put_record) that calls an external API. This API call can either succeed or fail for individual records. My current error handling within the ForEach iteration is structured as follows:

1. Sample_put_record (Web Activity): Makes the API call.
2. Conditional Logic: I've tried two main approaches:
• Approach A (Direct Success/Failure Paths): The Sample_put_record activity has a green arrow (on success) leading to a Log Success Items (Script activity) and a red arrow (on failure) leading to a Log Failed Items (Script activity). Both logging activities are followed by Wait activities (Dummy Wait For Success/Failure).
• Approach B (If Condition Wrapper): I've wrapped the Sample_put_record activity and its success/failure logging within an If Condition activity. The If Condition's expression is @equals(activity('Sample_put_record').status, 'Succeeded'). The True branch contains the success logging, and the False branch contains the failure logging. The intention here was for the If Condition to always report success, regardless of the Sample_put_record outcome, to prevent the ForEach from failing.

The Problem:

Despite these error handling attempts, the ForEach loop (and thus the overall pipeline) still fails when a Sample_put_record activity fails. The error message I typically see for the ForEach activity is "Activity failed because an inner activity failed." When using the If Condition wrapper, the If Condition itself sometimes fails with the same error, indicating that an activity within its True or False branch is still causing a hard failure.

For example, a common failure for Sample_put_record is: "valid":false,"message":"WARNING: There was no xxxxxxxxxxxxxxxxxxxxxxxxx scheduled..." (a user configuration/data issue). Even when my Log Failed Items script attempts to capture this, the ForEach still breaks.

What I've Ensured/Considered:
• Wait Activity Configuration: Wait activities are configured with reasonable durations and do not appear to be the direct cause of failure.
• No Unhandled Exceptions: I'm trying to ensure no unhandled exceptions are propagating from my error handling activities.
• Pipeline Status Goal: My ultimate goal is for the overall pipeline status to be Succeeded as long as the pipeline completes its execution, even if some Sample_put_record calls fail and are logged. I need to rely on the logs to identify actual failures, not the pipeline status.

My Questions to the Community:

1. What is the definitive best practice in Azure Data Factory to ensure a ForEach loop never fails due to an inner activity failure, assuming the inner activity's failure is properly logged and handled within that iteration?
2. Are there specific nuances or common pitfalls with If Condition activities or Script activities within ForEach loops that could still cause failure propagation, even with try-catch and success exits?
3. How do you typically structure your ADF pipelines to achieve this level of resilience, where internal failures are logged but don't impact the overall pipeline success status?
4. Are there any specific configurations on the ForEach activity itself (e.g., a Continue on error setting, if it exists for ForEach?) or other activities that I might be overlooking?

Any detailed examples, architectural patterns, or debugging tips would be greatly appreciated. Thank you in advance for your help!
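To make the behaviour I'm after concrete, here is the same "catch, log, continue" semantics sketched in plain Python (nothing ADF-specific; all names are made up): each iteration records its own failure, the loop always completes, and only the log says what failed.

from datetime import datetime, timezone

def put_record(record):
    """Stand-in for the Sample_put_record web call; raises when the API rejects a record."""
    if not record.get("valid", True):
        raise RuntimeError("WARNING: There was no ... scheduled")
    return {"status": "ok"}

records = [{"id": 1}, {"id": 2, "valid": False}, {"id": 3}]   # made-up input
success_log, failure_log = [], []

for record in records:
    try:
        success_log.append({"id": record["id"], "result": put_record(record)})
    except Exception as exc:
        # Catch, log, continue: the failure is recorded, the loop never breaks.
        failure_log.append({
            "id": record["id"],
            "error": str(exc),
            "at": datetime.now(timezone.utc).isoformat(),
        })

# The run itself always completes; only the logs reveal per-record failures.
print(f"processed={len(records)} succeeded={len(success_log)} failed={len(failure_log)}")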
June 2025 updates for Azure Database for PostgreSQL

Big news this month — PostgreSQL 17 is now GA with in-place upgrades, and our Migration Service fully supports PG17, making adoption smoother than ever.

Also in this release:
Online Migration is now generally available
SSD v2 HA (Preview) with 10s failovers and better resilience
Azure PostgreSQL now available in Indonesia Central
VS Code extension enhancements for smoother dev experience
Enhanced role management for improved admin control
Ansible collection updated for latest REST API

Check all these updates in this month's recap blog: https://techcommunity.microsoft.com/blog/adforpostgresql/june-2025-recap-azure-database-for-postgresql/4412095

Check it out and tell us which feature you're most excited about!
Recent Blogs
- Microsoft Ignite 2025 is almost here! Many practitioners are surprised by the powerful new capabilities in Azure Managed Redis—and now is your chance to see them in action. Whether you are modernizin... Oct 23, 2025
- Azure SQL Managed Instance improves and promotes "Redirect" to become the default connection type. Redirect now only requires port 1433 and can handle older drivers gracefully. Oct 22, 2025