Recent Discussions
Azure SQL server rollback itself?
We have an Azure SQL server. It is a data source for a Power Apps canvas app. Today I connected to it with SSMS v19. First, I ran 'Begin tran' twice (is that a mistake?). Then I ran 'Delete From dbo.table1 where ID=30' and the same for the row with ID=31. I verified that these two rows were deleted with 'Select * from dbo.table1'. Finally, I ran 'Commit tran' and verified again with 'Select * from dbo.table1' that the two rows were gone. However, there was no change in the Power App, so I reopened SSMS and connected to the DB again. This time, when I ran 'Select * from dbo.table1', the two rows showed up again. What could be the problem? Is it a bug in the old version of SSMS?
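The two 'Begin tran' statements are very likely the relevant detail: T-SQL transactions nest, so a single 'Commit tran' only decrements @@TRANCOUNT and the outer transaction stays open; when the SSMS session is closed, that still-open transaction is rolled back, which would explain the rows reappearing. A minimal, hedged T-SQL sketch to reproduce and check this, assuming a throwaway copy of dbo.table1:

-- Two BEGIN TRANs nest; one COMMIT only decrements @@TRANCOUNT,
-- so the delete is still uncommitted until the second COMMIT.
BEGIN TRAN;
BEGIN TRAN;
SELECT @@TRANCOUNT AS open_transactions;   -- 2

DELETE FROM dbo.table1 WHERE ID IN (30, 31);

COMMIT TRAN;
SELECT @@TRANCOUNT AS open_transactions;   -- 1: the outer transaction is still open

COMMIT TRAN;                               -- second COMMIT; closing the session instead rolls back
SELECT @@TRANCOUNT AS open_transactions;   -- 0: the delete is now durable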
August 2025 Recap: Azure Database for PostgreSQL
Here’s what’s new this month to help you build smarter and scale securely:
- Advisor performance tuning (GA): New insights on index scans, logging, stats, and connections
- Entra ID group login (Preview): Let users sign in with their own credentials (no need for login using group-ID)
- New region – Austria East: Lower latency + data residency options for Central Europe
- LangChain & LangGraph support: Use Azure PostgreSQL as a vector store for AI agents
- Active-active replication guide: Step-by-step walkthrough using pglogical
Full details in monthly recap: https://techcommunity.microsoft.com/blog/adforpostgresql/august-2025-recap-azure-database-for-postgresql/4450527
Prevent Accidental Deletion of an Instance in Azure Postgres
Did you know that accidental deletion of database servers is a leading source of support tickets? Read this blog post to learn how you can safeguard your Azure Database for PostgreSQL Flexible Server instances using ARM’s CanNotDelete lock — an easy best-practice that helps prevent accidental deletions while keeping regular operations seamless. 🌐 Prevent Accidental Deletion of an Instance in Azure Postgres
Getting an Oauth2 API access token using client_id and client_secret - help
Hi, I'm attempting to integrate external data into our SQL Server. The third-party data is from a solution called iLevel, which uses token-based OAuth2 APIs for access. The integration tool is ADF Pipelines. I'm not a data engineer, but it has fallen upon me to complete this exercise. What I've attempted so far is failing and I don't know why, so I would like your help. I'll explain what I've configured so far in the order I configured it.
1) To generate a client_id and client_secret, I logged on to the iLevel solution itself and generated them for my account (call it the 'Joe' account) and the team account (call it the 'Data team' account). I've recorded the client_id and client_secret for both users/accounts in Notepad for reference.
2) I logged in to Azure Data Factory using my 'Joe Admin' admin account (this is the account I need to log in with for any ADF development).
3) I created a Linked Service with the following configuration. Note how the test connection was successful; I guess this means our ADF instance can connect to iLevel's base URL.
4) I then created a dataset for iLevel. I configured this based on an online example I was following, which I can't get working, so this configuration may be incorrect.
5) I then created a pipeline which contains a 'Web' activity and a 'Set variable' activity. The pipeline has a variable as shown below. The 'Web' activity has the following configuration: URL = iLevel's token URL (it is different from the base URL used in the Linked Service). Body = I've blocked out the client_id and client_secret (I'm using the client_id and client_secret generated for the 'Data team' account - remember I'm logged into ADF using the 'Joe Admin' account - not sure if this makes a difference) but have placed red brackets around where the start and end of each value is. I'm not wrapping the values in any single or double quotes - not sure if I'm meant to. I'm not sure if I have configured the Body correctly. The iLevel documentation states to use an Authorization header, a Content-Type header and a Body - it states the following is needed to obtain an access token, but it doesn't state exactly how to submit the information (i.e. how to format it). Notice how, in my configuration, I haven't used an Authorization header - this is partly because an online example I've followed doesn't use one. If iLevel state to use one then I think I should, but I don't know how to format it - any ideas? The 'Set variable' activity has the following configuration. The idea is that the access token is retrieved from the 'Web' activity and placed in the 'Set variable' "iLevel access token" variable. At this point I validate all and it comes back with no errors found. I then debug it to see if it does indeed work, but it returns an error stating the request contains an invalid client_id or client_secret. The client_id and client_secret values used are exactly the same ones I generated from within the iLevel solution just a few hours ago. Is anyone able to point out why this isn't working? Have I populated all that I need to (as mentioned, iLevel say to use an Authorization header, which I haven't, but I don't know how to format it if I were to use one)? What can I do to get this working? I'm just trying to get the access token at the moment; I've not even attempted to extract the iLevel data and can't until I get a working token. iLevel's tokens have a one-hour time-to-live, so the pipeline needs to generate a new token each time it's executed. Your help will be most appreciated.
Thanks.
July 2025 Recap: Azure Database for PostgreSQL
Azure PostgreSQL – July 2025 Recap is live! This month’s update brings a strong mix of performance, security, and developer experience improvements — plus new region rollouts and deeper analytics integrations. What’s new:
- Cascading Read Replica (Preview) for multi-level replication
- Fabric Mirroring with VNET + Private Endpoint (Preview)
- Power BI Entra ID authentication (GA)
- Upsert + Script activity in ADF/Synapse (GA)
- Now available in Malaysia West & Chile Central
- PostgreSQL minor version updates: 17.5, 16.9, 15.13, 14.18, 13.21
- Agentic Web + NLWeb support for PostgreSQL
- VS Code extension enhancements
- Improved maintenance for stopped instances
Read the complete blog post - Azure PostgreSQL – July 2025 Recap is live!
Azure SQL Firewall / Locks
Hi there, I have 2 environments. I'm more of an admin on the Azure environment (I was recently made a subscription admin). Dev issue - Azure SQL: I'm having difficulty removing an IP from the Azure SQL Firewall (earlier I was able to). Today my manager granted me subscription admin and SQL Security Manager, and I'm still not able to remove the grayed-out IPs.
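Not an answer to the role-assignment question itself, but while the portal entries stay grayed out, one hedged workaround is to manage the server-level firewall rules with T-SQL against the master database of the logical server (this still requires a sufficiently privileged login); the rule name below is only a placeholder.

-- Run in the master database of the Azure SQL logical server.
-- List the server-level firewall rules, then remove one by name (placeholder name shown).
SELECT id, name, start_ip_address, end_ip_address
FROM sys.firewall_rules;

EXECUTE sp_delete_firewall_rule @name = N'ClientIPAddress_example';

-- Database-level rules are separate and are managed per database:
-- SELECT * FROM sys.database_firewall_rules;
-- EXECUTE sp_delete_database_firewall_rule @name = N'SomeDatabaseRule';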
Better way to check database for dups
I have VBA in Excel that incorporates SQL to iterate through all the records in a sheet and check the database for each record, using a combination of two fields to make a unique ID. If the record is in the database, it's deleted from the sheet; otherwise it goes on to the next one. Easy enough. I've had this code for years and it's always been fine, but now there are too many records to do it this way. It takes too long. What would be a better way to query the SQL database to return those records that are dups and then delete them from the sheet? Here's an idea of the snippet of code I currently use:
'i is the record number from bottom to top
For i = DataSheet.Range("A" & Rows.Count).End(xlUp).Row To 2 Step -1
    CountyName = DataSheet.Cells(i, 1).Value
    CaseNumber = DataSheet.Cells(i, 3).Value
    strSQL = "SELECT County, Case_Number FROM tblAdLetter " & _
             "WHERE County = '" & CountyName & "' AND Case_Number = '" & CaseNumber & "'"
    rs.Open strSQL, cn
    If Not rs.EOF Then
        DataSheet.Rows(i).Delete
    End If
    rs.Close
Next i
Is there a better way or faster way? tod
Dynamics AX connector stops getting records after amount of time
Hello everyone, I am using the Dynamics AX connector to get data out of Finance. After a certain amount of time it suddenly doesnt get any new records anymore and it keeps running until it reaches the general timeout. It gets 290,000 records in like 01:30:00 and then keeps running and doesn't get new records anymore. Sometimes it gets stuck earlier or later. Sometimes it also gives me this error: Failure happened on 'Source' side. ErrorCode=ODataRequestTimeout,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Fail to get response from odata service in a expected time.,Source=Microsoft.DataTransfer.Runtime.ODataConnector,''Type=System.Threading.Tasks.TaskCanceledException,Message=A task was canceled.,Source=mscorlib,' This is my pipeline JSON: { "name": "HICT - Init Sync SalesOrders", "properties": { "activities": [ { "name": "Get FO SalesOrders", "type": "Copy", "dependsOn": [], "policy": { "timeout": "0.23:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "source": { "type": "DynamicsAXSource", "query": "$filter=FM_InterCompanyOrder eq Microsoft.Dynamics.DataEntities.NoYes'No' and dataAreaId eq 'prev'&$select=SalesOrderNumber,SalesOrderName,IsDeliveryAddressPrivate,FormattedInvoiceAddress,FormattedDeliveryAddress,ArePricesIncludingSalesTax,RequestedReceiptDate,QuotationNumber,PriceCustomerGroupCode,PBS_PreferredInvoiceDate,PaymentTermsBaseDate,OrderTotalTaxAmount,OrderTotalChargesAmount,OrderTotalAmount,TotalDiscountAmount,IsInvoiceAddressPrivate,InvoiceBuildingCompliment,InvoiceAddressZipCode,LanguageId,IsDeliveryAddressOrderSpecific,IsOneTimeCustomer,InvoiceAddressStreetNumber,InvoiceAddressStreet,InvoiceAddressStateId,InvoiceAddressPostBox,InvoiceAddressLongitude,InvoiceAddressLatitude,InvoiceAddressDistrictName,InvoiceAddressCountyId,InvoiceAddressCountryRegionISOCode,InvoiceAddressCity,FM_Deadline,Email,DeliveryTermsCode,DeliveryModeCode,DeliveryBuildingCompliment,DeliveryAddressCountryRegionISOCode,DeliveryAddressZipCode,DeliveryAddressStreetNumber,SalesOrderStatus,DeliveryAddressStreet,DeliveryAddressStateId,SalesOrderPromisingMethod,DeliveryAddressPostBox,DeliveryAddressName,DeliveryAddressLongitude,DeliveryAddressLocationId,DeliveryAddressLatitude,DeliveryAddressDunsNumber,DeliveryAddressDistrictName,DeliveryAddressDescription,DeliveryAddressCountyId,DeliveryAddressCity,CustomersOrderReference,IsSalesProcessingStopped,CustomerRequisitionNumber,SalesOrderProcessingStatus,CurrencyCode,ConfirmedShippingDate,ConfirmedReceiptDate,SalesOrderOriginCode,URL,OrderingCustomerAccountNumber,InvoiceCustomerAccountNumber,ContactPersonId,FM_WorkerSalesTaker,FM_SalesResponsible,PaymentTermsName,DefaultShippingSiteId,DefaultShippingWarehouseId,DeliveryModeCode,dataAreaId,FM_InterCompanyOrder&cross-company=true", "httpRequestTimeout": "00:15:00", "additionalHeaders": { "Prefer": "odata.maxpagesize=1000" }, "retrieveEnumValuesAsString": true }, "sink": { "type": "JsonSink", "storeSettings": { "type": "AzureBlobStorageWriteSettings", "copyBehavior": "FlattenHierarchy" }, "formatSettings": { "type": "JsonWriteSettings" } }, "enableStaging": false, "enableSkipIncompatibleRow": true, "logSettings": { "enableCopyActivityLog": true, "copyActivityLogSettings": { "logLevel": "Warning", "enableReliableLogging": false }, "logLocationSettings": { "linkedServiceName": { "referenceName": "AzureBlobStorage", "type": "LinkedServiceReference" }, "path": "ceexports" } } }, "inputs": [ { "referenceName": 
"AX_SalesOrders_Dynamics_365_FO_ACC", "type": "DatasetReference" } ], "outputs": [ { "referenceName": "Orders_FO_D365_Data_JSON", "type": "DatasetReference" } ] }, { "name": "Get_All_CE_Table_Data", "type": "ForEach", "dependsOn": [ { "activity": "Get FO SalesOrders", "dependencyConditions": [ "Completed" ] } ], "userProperties": [], "typeProperties": { "items": { "value": "@pipeline().parameters.CE_Tables", "type": "Expression" }, "activities": [ { "name": "Copy_CE_TableData", "type": "Copy", "dependsOn": [], "policy": { "timeout": "0.12:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "source": { "type": "CommonDataServiceForAppsSource" }, "sink": { "type": "DelimitedTextSink", "storeSettings": { "type": "AzureBlobStorageWriteSettings", "copyBehavior": "FlattenHierarchy" }, "formatSettings": { "type": "DelimitedTextWriteSettings", "quoteAllText": true, "fileExtension": ".txt" } }, "enableStaging": false }, "inputs": [ { "referenceName": "CE_Look_Up_Tables", "type": "DatasetReference", "parameters": { "entiryName": "@item().sourceDataset" } } ], "outputs": [ { "referenceName": "CE_GenericBlobSink", "type": "DatasetReference", "parameters": { "sinkPath": { "value": "@item().sinkPath", "type": "Expression" } } } ] } ] } }, { "name": "Transform_Create_CE_JSON", "type": "ExecuteDataFlow", "dependsOn": [ { "activity": "Get_All_CE_Table_Data", "dependencyConditions": [ "Succeeded" ] } ], "policy": { "timeout": "0.12:00:00", "retry": 0, "retryIntervalInSeconds": 30, "secureOutput": false, "secureInput": false }, "userProperties": [], "typeProperties": { "dataflow": { "referenceName": "FO_Transform_CE_Select", "type": "DataFlowReference" }, "compute": { "coreCount": 16, "computeType": "General" }, "traceLevel": "Fine" } } ], "parameters": { "CE_Tables": { "type": "array", "defaultValue": [ { "name": "D365_CE_ACC_AccountRelations", "sourceDataset": "crmp_accountrelation", "sinkPath": "ce-exports/D365_CE_ACC_AccountRelations.json" }, { "name": "D365_CE_ACC_ContactRelations", "sourceDataset": "crmp_contactrelation", "sinkPath": "ce-exports/D365_CE_ACC_ContactRelations.json" }, { "name": "D365_CE_ACC_PriceCustomerGroup", "sourceDataset": "msdyn_pricecustomergroup", "sinkPath": "ce-exports/D365_CE_ACC_PriceCustomerGroup.json" }, { "name": "D365_CE_ACC_SalesOrderOrigin", "sourceDataset": "odin_salesorderorigin", "sinkPath": "ce-exports/D365_CE_ACC_SalesOrderOrigin.json" }, { "name": "D365_CE_ACC_ShipVia", "sourceDataset": "msdyn_shipvia", "sinkPath": "ce-exports/D365_CE_ACC_ShipVia.json" }, { "name": "D365_CE_ACC_SystemUser", "sourceDataset": "systemuser", "sinkPath": "ce-exports/D365_CE_ACC_SystemUser.json" }, { "name": "D365_CE_ACC_TermsOfDelivery", "sourceDataset": "msdyn_termsofdelivery", "sinkPath": "ce-exports/D365_CE_ACC_TermsOfDelivery.json" }, { "name": "D365_CE_ACC_Worker", "sourceDataset": "cdm_worker", "sinkPath": "ce-exports/D365_CE_ACC_Worker.json" }, { "name": "D365_CE_ACC_TransactionCurrency", "sourceDataset": "transactioncurrency", "sinkPath": "ce-exports/D365_CE_ACC_TransactionCurrency.json" }, { "name": "D365_CE_ACC_Warehouse", "sourceDataset": "msdyn_warehouse", "sinkPath": "ce-exports/D365_CE_ACC_Warehouse.json" }, { "name": "D365_CE_ACC_OperationalSite", "sourceDataset": "msdyn_operationalsite", "sinkPath": "ce-exports/D365_CE_ACC_OperationalSite.json" }, { "name": "D365_CE_ACC_PaymentTerms", "sourceDataset": "odin_paymentterms", "sinkPath": "ce-exports/D365_CE_ACC_PaymentTerms.json" } ] } }, 
"annotations": [], "lastPublishTime": "2025-07-30T12:55:32Z" }, "type": "Microsoft.DataFactory/factories/pipelines" }59Views0likes0CommentsAzure Data Factory ForEach Loop Fails Despite Inner Activity Error Handling - Seeking Best Practices
Hello Azure Data Factory Community, I'm encountering a persistent issue with my ADF pipeline where a ForEach loop is failing, even though I've implemented error handling for the inner activities. I'm looking for insights and best practices on how to prevent internal activity failures from propagating up and causing the entire ForEach loop (and subsequently the pipeline) to fail, while still logging all outcomes.
My Setup: My pipeline processes records using a ForEach loop. Inside the loop, I have a Web activity (Sample_put_record) that calls an external API. This API call can either succeed or fail for individual records. My current error handling within the ForEach iteration is structured as follows:
1. Sample_put_record (Web Activity): Makes the API call.
2. Conditional Logic: I've tried two main approaches:
- Approach A (Direct Success/Failure Paths): The Sample_put_record activity has a green arrow (on success) leading to a Log Success Items (Script activity) and a red arrow (on failure) leading to a Log Failed Items (Script activity). Both logging activities are followed by Wait activities (Dummy Wait For Success/Failure).
- Approach B (If Condition Wrapper): I've wrapped the Sample_put_record activity and its success/failure logging within an If Condition activity. The If Condition's expression is @equals(activity('Sample_put_record').status, 'Succeeded'). The True branch contains the success logging, and the False branch contains the failure logging. The intention here was for the If Condition to always report success, regardless of the Sample_put_record outcome, to prevent the ForEach from failing.
The Problem: Despite these error handling attempts, the ForEach loop (and thus the overall pipeline) still fails when a Sample_put_record activity fails. The error message I typically see for the ForEach activity is "Activity failed because an inner activity failed." When using the If Condition wrapper, the If Condition itself sometimes fails with the same error, indicating that an activity within its True or False branch is still causing a hard failure. For example, a common failure for Sample_put_record is: "valid":false,"message":"WARNING: There was no xxxxxxxxxxxxxxxxxxxxxxxxx scheduled..." (a user configuration/data issue). Even when my Log Failed Items script attempts to capture this, the ForEach still breaks.
What I've Ensured/Considered:
- Wait Activity Configuration: Wait activities are configured with reasonable durations and do not appear to be the direct cause of failure.
- No Unhandled Exceptions: I'm trying to ensure no unhandled exceptions are propagating from my error handling activities.
- Pipeline Status Goal: My ultimate goal is for the overall pipeline status to be Succeeded as long as the pipeline completes its execution, even if some Sample_put_record calls fail and are logged. I need to rely on the logs to identify actual failures, not the pipeline status.
My Questions to the Community:
1. What is the definitive best practice in Azure Data Factory to ensure a ForEach loop never fails due to an inner activity failure, assuming the inner activity's failure is properly logged and handled within that iteration?
2. Are there specific nuances or common pitfalls with If Condition activities or Script activities within ForEach loops that could still cause failure propagation, even with try-catch and success exits?
3. How do you typically structure your ADF pipelines to achieve this level of resilience, where internal failures are logged but don't impact the overall pipeline success status?
4. Are there any specific configurations on the ForEach activity itself (e.g., a Continue on error setting, if it exists for ForEach?) or other activities that I might be overlooking?
Any detailed examples, architectural patterns, or debugging tips would be greatly appreciated. Thank you in advance for your help!
June 2025 updates for Azure Database for PostgreSQL
Big news this month — PostgreSQL 17 is now GA with in-place upgrades, and our Migration Service fully supports PG17, making adoption smoother than ever. Also in this release:
- Online Migration is now generally available
- SSD v2 HA (Preview) with 10s failovers and better resilience
- Azure PostgreSQL now available in Indonesia Central
- VS Code extension enhancements for smoother dev experience
- Enhanced role management for improved admin control
- Ansible collection updated for latest REST API
Check all these updates in this month’s recap blog: https://techcommunity.microsoft.com/blog/adforpostgresql/june-2025-recap-azure-database-for-postgresql/4412095 Check it out and tell us which feature you're most excited about!
Copy Activity Successful, But Times Out
This appears to be an edge case, but I wanted to share. A copy activity is successful, but times out. The duration is 1:58:55; it times out at 2:00:12. It runs a second time and is successful, loading duplicate records. The duplicate records are the undesired result.
Copy Activity
- General: Timeout: 0.02:00:00, Retry: 2
- Source: mySQL, parameterized SQL, parameterized
- Sink: Synapse SQL Pool, parameterized, Copy method: COPY command, Settings: Use V2 hierarchy storage for staging
- General: Synapse/ADF Managed Network
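One hedged way to keep the retry from doubling the data is to make the load idempotent in the Synapse SQL pool, for example with a pre-copy script that clears the window being loaded before the copy writes it. A minimal T-SQL sketch, where the table and column names (dbo.TargetTable, ModifiedDate) and the window values are hypothetical and would come from pipeline parameters:

-- Pre-copy step: remove any rows from a previous attempt that committed before timing out,
-- so the retried Copy activity replaces the window instead of appending duplicates.
DECLARE @WindowStart datetime2 = '2025-09-01',
        @WindowEnd   datetime2 = '2025-09-02';

DELETE FROM dbo.TargetTable
WHERE ModifiedDate >= @WindowStart
  AND ModifiedDate <  @WindowEnd;

-- The Copy activity (COPY command) then reloads exactly this window on the retry.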
Advice requested: how to capture full SQL CDC changes using Dataflow and ADLS gen2
Hi, I'm working on a fairly simple ETL process using Dataflow in Azure Data Factory, where I want to capture the changes in a CDC-enabled SQL table and store them in Delta Lake format in an ADLS Gen2 sink. The resulting dataset will be further processed, but for me this is the end of the line. I don't have an expert understanding of all the details of the Delta Lake format, but I do know that I can use it to store changes to my data over time. So in the sink, I enabled all update methods (Insert, Delete, Upsert, Update), since my CDC source should be able to figure out the correct row transformation. Key columns are set to the primary key columns in SQL. All this works fine as long as I configure my source to use CDC with 'netChanges: true'. That yields a single change row for each record, which is correctly stored in the sink. But I want to capture all changes since the previous run, so I want to set the source to 'netChanges: false'. That yields rows for every change since the previous time the dataflow ran. But for every table that actually has records with more than one change, the dataflow fails, saying "Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways." I take that to mean that my dataflow is, as it is, not smart enough to loop through all changes in the source and apply them to the sink in order. So apparently something else has to be done. My intuition says that, since CDC actually provides all the metadata to make this possible, there's probably an out-of-the-box way to achieve what I want. But I can't readily find that magic box I should tick 😉. I can probably build it out 'by hand', by somehow looping over all changes and applying them in order, but before I go down that route, I came here to learn from the experts whether this is indeed the only way, or, preferably, whether there is a neat trick I missed to get this done easily. Thanks so much for your advice! BR
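Whether or not the dataflow has a built-in option for this, one hedged workaround is to collapse the change feed per key before the merge. A T-SQL sketch against the SQL Server CDC functions, assuming a capture instance named dbo_table1 and a primary key column ID (both illustrative): ranking by __$seqval keeps only the latest change per key in the window, which is a shape the Delta sink can merge without the "multiple source rows" conflict, while the full history could still be landed separately for audit.

-- Window of changes to process; in practice @from_lsn would come from the previous run's watermark.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_table1');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

WITH changes AS (
    SELECT c.*,
           ROW_NUMBER() OVER (PARTITION BY c.ID ORDER BY c.__$seqval DESC) AS rn
    FROM cdc.fn_cdc_get_all_changes_dbo_table1(@from_lsn, @to_lsn, 'all') AS c
)
SELECT *              -- one row per key: the last change in the window
FROM changes
WHERE rn = 1;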
PostgreSQL 17 General Availability with In-Place Upgrade Support
We’re excited to share that PostgreSQL 17 is now Generally Available on Azure Database for PostgreSQL – Flexible Server! This release brings community-driven enhancements including improved vacuum performance, smarter query planning, enhanced JSON functions, and dynamic logical replication. It also includes support for in-place major version upgrades, allowing customers to upgrade directly from PostgreSQL 11–16 to 17 without needing to migrate data or change connection strings. PostgreSQL 17 is now the default version for new server creations and major version upgrades. 📖 Read the full blog post: http://aka.ms/PG17 Let us know if you have feedback or questions!
Solution: Handling Concurrency in Azure Data Factory with Marker Files and Web Activities
Hi everyone, I wanted to share a concurrency issue we encountered in Azure Data Factory (ADF) and how we resolved it using a small but effective enhancement—one that might be useful if you're working with shared Blob Storage across multiple environments (like Dev, Test, and Prod).
Background: Shared Blob Storage & Marker Files
In our ADF pipelines, we extract data from various sources (e.g., SharePoint, Oracle) and store them in Azure Blob Storage. That Blob container is shared across multiple environments. To prevent duplicate extractions, we use marker files: started.marker → created when a copy begins; completed.marker → created when the copy finishes successfully. If both markers exist, pipelines reuse the existing file (caching logic). This mechanism was already in place and worked well under normal conditions.
The Issue: Race Conditions
We observed that simultaneous executions from multiple environments sometimes led to: overlapping attempts to create the same started.marker, duplicate copy activities, and corrupted Blob files. This became a serious concern because the Blob file was later loaded into Azure SQL Server, and any corruption led to failed loads.
The Fix: Web Activity + REST API
To solve this, we modified only the creation of started.marker by replacing the Copy Activity with a Web Activity that calls the Azure Storage REST API. The API uses Azure Blob Storage's conditional header If-None-Match: * to safely create the file only if it doesn't exist. If the file already exists, the API returns "BlobAlreadyExists", which the pipeline handles by skipping. The Copy Activity is still used to copy the data and create the completed.marker—no changes needed there.
Updated Flow
1. Check marker files: if both exist (started and completed) → use cached file; if only started.marker → wait and retry; if none → continue to step 2
2. Web Activity calls the REST API to create started.marker: success → proceed with copy in step 3; failure → another run already started → skip/retry
3. Copy Activity performs the data extract
4. Copy Activity creates completed.marker
Benefits
- Atomic creation of started.marker → no race conditions
- Minimal change to existing pipeline logic with marker files
- Reliable downstream loads into Azure SQL Server
- Preserves existing architecture (no full redesign)
Would love to hear: Have you used similar marker-based patterns in ADF? Any other approaches to concurrency control that worked for your team? Thanks for reading! Hope this helps someone facing similar issues.
Blob Storage Event Trigger Disappears
Yesterday I ran into an odd situation where there was a resource lock and I was unable to rename pipelines or drop/create storage event triggers. An admin cleared the lock and I was able to remove and clean up the triggers and pipelines. Today, when I try to recreate the blob storage trigger to process a file when it appears in a container, the trigger creates just fine but on refresh, it disappears. If I try to recreate it again with the same name as the one that went away, the ADF UI says it already exists. I cannot assign it to a pipeline because the UI does not see it. Any insight as to where it is, how I can see it, or even what logs would have such activity recorded to give a clue as to what is going on. This seems like a bug.
Parameter controls are not showing Display text
Hi, After a recent update to the Azure Data Explorer Web UI, the Parameter controls are not displaying correctly. The Display Text for parameters is not shown by default; instead, the raw Value is displayed until the control is clicked, at which point the correct Display Text appears. Could you please investigate this issue and provide guidance on a resolution? Thank you,
June 2023 Update: Azure Database for PostgreSQL Flexible Server Unveils New Features
The Azure Database for PostgreSQL Flexible Server's June 2023 update is live! Now enjoy:
- Easier major version upgrades with reduced downtime.
- Server recovery feature for dropped servers.
- A more user-friendly Connect experience.
- Improved server performance with new IO enhancements.
- Auto-growing storage and online disk resize, now in public preview.
We also support minor versions PostgreSQL 15.2 (preview), 14.7, 13.10, 12.14, 11.19. Big thanks to our dedicated team! Check out our blog for more details: https://techcommunity.microsoft.com/t5/azure-database-for-postgresql/june-2023-recap-azure-database-postgresql-flexible-server/ba-p/3868650
July 2023 Recap: Azure Database PostgreSQL Flexible Server
- Support for PostgreSQL 15 is now available (general availability).
- Automation Tasks have been introduced for Streamlined Management (preview phase).
- Flexible Server Migration Tooling has been enhanced (general availability).
- Hardware Options have been expanded with the addition of AMD Compute SKUs (general availability).
These updates represent substantial improvements in performance, scalability, and efficiency. Whether you are a developer, a Database Administrator (DBA), or an individual passionate about PostgreSQL, we trust that these enhancements will contribute positively to your experience with our platform. Should you find these updates valuable, we encourage you to engage with us through appropriate channels of communication. Thank you for your continued support and interest in Azure Database for PostgreSQL Flexible Server.
Autoscaling with Azure: A Comprehensive Guide to PostgreSQL Optimization Using Azure Automation Task
Autoscaling Azure PostgreSQL Server with Automation Tasks
Read our latest article detailing the power of Autoscaling the Azure Database for PostgreSQL Flexible Server using Azure Automation Tasks. This new feature can revolutionize how we manage resources, streamlining operations, and minimizing human error.
August 2023 Recap: Azure Database for PostgreSQL Flexible Server
Absolutely thrilled to unveil our latest blog post, "August 2023 Recap: Azure Database for PostgreSQL Flexible Server". This month is jam-packed with feature updates designed to amplify your experience!
1. Autovacuum Monitoring - Elevate your database health with improved tools and metrics.
2. Flexible DNS Zone Linking - Simplify your server setup process for multiple networking models.
3. Server Parameter Visibility enhancements - Now view hidden parameters for better performance optimization.
4. Single to Flexible Server Migration Tooling – Simplified migration experience with automated extension allow listing.
Don't miss out! Read the full scoop here: August 2023 Recap: Azure Database for PostgreSQL Flexible Server
Events
Recent Blogs
- Authors: Johannes Schuetzner, Software Engineer at Mercedes-Benz & Nacho Alonso Portillo, Principal Program Manager at Microsoft. When you think of Mercedes-Benz, you think of innovation, precision... (Sep 24, 2025)
- At the heart of every digital transformation is data. This year’s Migrate and Modernize Summit was more than a showcase of new features—it was a demonstration of how Microsoft Azure is reimagining th... (Sep 23, 2025)