User Profile
DennesTorres
MVP
Joined Jul 07, 2020
Recent Discussions
Re: Self-service group membership
Hi, Unfortunately this was a long time ago. I solved the problem, but I don't remember exactly how. I can say I used the two links below. Besides that, there is an additional trick: you can only manage the groups you own yourself. If a group was created by another user, you can't manage it. Users can request to join a group and the group owner can approve or reject the request. The links: Self-Service Group Management https://docs.microsoft.com/en-us/azure/active-directory/enterprise-users/groups-self-service-management Access Panel for Self-Service Group Management https://account.activedirectory.windowsazure.com/r#/joinGroups Dennes
VS 2022 and the execution of new Angular Template
Hi, Visual Studio 2022 has a new template for Angular projects and it's very nice. I'm not an Angular specialist, so my question may be a bit silly. For the site to be executed, a command prompt window needs to be opened to compile the TypeScript using Node, besides provisioning a server. The problem is this window sometimes opens, sometimes not. When it decides not to open, I get lost about what to do to ensure it will open again on the next execution. Sometimes it appears to be related to running a Build before the execution, but not always. What may I be doing wrong, and how could I work around this to execute the project more easily? A workaround is to run npm start in the Package Manager Console window. But by doing this, when we need to compile again we have to close and reopen Visual Studio. Is there a better way to work with this?
Re: disk managment
Hi, Your environment is unusual, to say the least, so it becomes difficult to help without a more complete explanation of it. KQL means Kusto Query Language. It is used in Log Analytics and/or Azure Data Explorer to handle large amounts of data, especially logs from Azure services. However, KQL will not provide any information by itself. The queries rely on log collection made by Log Analytics and/or data ingestion in Azure Data Explorer. PowerShell, on the other hand, has many different uses, and it's able to collect performance counters from the machine where it's running. You could be collecting the performance counters using PowerShell, with no KQL. Or you could build Log Analytics workbooks with dashboards exposing the collected information using KQL, with no PowerShell. Using PowerShell to run KQL queries over Log Analytics is strange; Power BI would be easier. My guess is that you need to investigate your Log Analytics workspace and the agents' configuration. However, it may be easier if whoever created this solution could explain it to you. Kind Regards, Dennes
Re: Cross database queries- Authentication with managed identities
I have not tested your exact scenario (an external table from SQL to SQL), but Azure SQL supports managed identities to access blob storage, for example. There is a special way to set the managed identity and create the credential, but it works. Have you tried something similar? Here is a reference I wrote: https://www.red-gate.com/simple-talk/blogs/azure-sql-and-managed-identity/ Kind Regards, Dennes
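As a rough sketch of the pattern described in the linked article, for the blob storage case (the account and credential names are placeholders, and the server's managed identity must already have been granted access to the storage account):

```sql
-- A database master key must exist before a scoped credential can be created
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';

-- The literal 'Managed Identity' makes Azure SQL use its own identity
CREATE DATABASE SCOPED CREDENTIAL msi_cred
WITH IDENTITY = 'Managed Identity';

-- External data source pointing at the storage account (placeholder URL)
CREATE EXTERNAL DATA SOURCE blob_ds
WITH (
    TYPE = BLOB_STORAGE,
    LOCATION = 'https://mystorageaccount.blob.core.windows.net/mycontainer',
    CREDENTIAL = msi_cred
);
```

With the data source in place, statements such as BULK INSERT can reference it without storing any secret in the database.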
Re: Azure SQL Hyperscale Best Practices
Hi, Even before Hyperscale, using other service tiers of Azure SQL, these three tasks are not needed at all when scaling an Azure SQL Database up or down. Take a look at this video for a more in-depth explanation of Hyperscale: https://youtu.be/vH9XM9Ulc_g?list=PLNbt9tnNIlQ5i3hWhnwU3VqRxC-GPcol8&t=2481 Kind Regards, Dennes
Re: Azure SQL Web Portal .dacpac Functionality
Hi, That doesn't seem like the right direction to look. A DACPAC is a package containing a new structure for the database. SSDT will make a schema compare between the DACPAC and the production database and generate the script you need to update the database structure. Generating the script involves queries against the database schema; they are not heavy queries at all. Once you decide to execute the script, it runs on the Azure SQL DB, so all the transformations are made on the server side. There is not the huge amount of network traffic you suggest that could cause such a terrible response time. Anyway, we are talking about a DACPAC, which updates only the structure, not a BACPAC, which includes the data. This kind of compare can be made using Azure Data Studio, Visual Studio, a stand-alone installation of SSDT and maybe even Visual Studio Code. This kind of deployment can be automated in Azure DevOps pipelines; there is a task to update an Azure SQL Database based on a DACPAC. Kind Regards, Dennes
Re: Connect to MS SQL Server 2014 database (OS Windows Server 2012 R2) from Azure App service
Your connection problem is related to the TLS version requested by your application versus the TLS version used by the server. At this link you will find details about TLS version support in SQL Server 2014; you need to ensure you have the correct service packs/CUs to support TLS 1.2, if that's what you would like: https://support.microsoft.com/en-us/topic/kb3135244-tls-1-2-support-for-microsoft-sql-server-e4472ef8-90a9-13c1-e4d8-44aad198cdbe At this other link you will find details about how to configure TLS on the server: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/enable-encrypted-connections-to-the-database-engine?view=sql-server-ver15
Re: disk managment
SurbhiGupta Hi, I'm not sure I completely understand what you want. What I understand is that you would like to create an alert based on free disk space using KQL.
1) You need a Log Analytics workspace where you will store all the captured data.
2) You may need to install Log Analytics agents on the virtual machines to collect performance counters, and configure the agents to collect the disk counter.
3) You can create alerts based on KQL queries.
Kind Regards, Dennes
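For step 3, a sketch of the kind of query that can back such an alert, assuming the agents write the standard Windows performance counters to the Perf table (the 10% threshold is only an example):

```kusto
Perf
| where ObjectName == "LogicalDisk" and CounterName == "% Free Space"
| where InstanceName != "_Total"
| summarize FreeSpacePercent = avg(CounterValue) by Computer, InstanceName
| where FreeSpacePercent < 10
```

An Azure Monitor alert rule can run this query on a schedule and fire whenever it returns rows.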
How to track policy effect execution
Hi, I created two policies to work with the new SQL feature that enforces AD authentication on Azure SQL servers. One of them, using the AuditIfNotExists effect, works fine. The other one, using DeployIfNotExists, doesn't. I imagine the biggest challenge is using the correct role permission, considering the DeployIfNotExists effect requires a role in order to create the managed identity that executes the deployment. I chose the SQL Security Manager role, /providers/Microsoft.Authorization/roleDefinitions/056cd41c-7e88-42e1-933e-88ba6a50c9c3. It was correctly identified in the role assignment, and I can also see the role assignment in the resource group's RBAC access control. However, the deployment requested by the effect never happens. The policy identifies that the resource is not compliant, but the deployment never runs to change the property as it should. I'm not sure what I may be missing, and maybe one of the main questions is: how could I track the effect execution to identify what's wrong with this policy?
The policy definition:
{
  "properties": {
    "displayName": "Deploy SQL Integrated Security",
    "policyType": "Custom",
    "mode": "All",
    "description": "Force all Azure SQL Servers to use Integrated Security only",
    "metadata": {
      "category": "SQL",
      "createdBy": "065ae953-dd11-413c-a4a6-bc1eb6f55fcc",
      "createdOn": "2021-06-16T14:49:33.2911598Z",
      "updatedBy": "065ae953-dd11-413c-a4a6-bc1eb6f55fcc",
      "updatedOn": "2021-06-18T16:23:11.2510009Z"
    },
    "parameters": {},
    "policyRule": {
      "if": {
        "allOf": [
          { "field": "type", "equals": "Microsoft.Sql/servers" }
        ]
      },
      "then": {
        "effect": "deployIfNotExists",
        "details": {
          "type": "Microsoft.Sql/servers/azureADOnlyAuthentications",
          "roleDefinitionIds": [
            "/providers/Microsoft.Authorization/roleDefinitions/056cd41c-7e88-42e1-933e-88ba6a50c9c3"
          ],
          "existenceCondition": {
            "allOf": [
              {
                "field": "Microsoft.Sql/servers/azureADOnlyAuthentications/azureADOnlyAuthentication",
                "equals": true
              }
            ]
          },
          "deployment": {
            "properties": {
              "mode": "incremental",
              "name": "Default",
              "template": {
                "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
                "contentVersion": "1.0.0.0",
                "parameters": {
                  "fullDbName": { "type": "string" }
                },
                "resources": [
                  {
                    "name": "[concat(parameters('fullDbName'), '/Default')]",
                    "apiVersion": "2021-02-01-preview",
                    "type": "Microsoft.Sql/servers/azureADOnlyAuthentications",
                    "dependsOn": [
                      "[resourceId('Microsoft.Sql/servers', parameters('fullDbName'))]"
                    ],
                    "properties": {
                      "azureADOnlyAuthentication": true
                    }
                  }
                ]
              },
              "parameters": {
                "fullDbName": { "value": "[field('name')]" }
              }
            }
          }
        }
      }
    }
  },
  "id": "/providers/Microsoft.Management/managementGroups/52b6b910-1fc7-44e7-b03d-ffb4ea2dd90b/providers/Microsoft.Authorization/policyDefinitions/9daaafde-73ff-4966-9037-445dc937474d",
  "type": "Microsoft.Authorization/policyDefinitions",
  "name": "9daaafde-73ff-4966-9037-445dc937474d"
}
Self-service group membership
Hi, In Azure Active Directory -> Groups -> General I enabled self-service group membership. The first option to enable it is easy; the second is a bit confusing as to whether it should be set to Yes or No, and I found both answers in different sources. I tried both, and security groups self-service is enabled. I created two new users to test the self-service feature and tried to access the self-service portal (https://myapps.microsoft.com) using one of the accounts. MyApps appears to work fine, but when I switch to group management, I get an error message. It appears something is missing, but I don't know what. Any help will be very welcome. Kind Regards, Dennes
Synchronizing Data Factory with github
Hi, I created two data factories in different subscriptions linked to the same GitHub repository. The data factory has global parameters, and they were created at a time when the "include in ARM template" option was disabled. I enabled this option, saved the global parameters and published. I did this multiple times. I refreshed the parameters on the other data factory and... nothing. I just can't synchronize the global parameters. Am I doing something wrong, or is this synchronization not as easy as it should be?
Re: ADF - data connect from blob to Azure SQL
Shruthi96 You can create parameters in a dataset; in this case, the parameter could be the file name/table name. You can create a table in SQL to hold the file name/table name relation. Then you use a Lookup activity to read this data and a ForEach activity to repeat the Copy activity for each file, filling the dataset parameters. This requires your files to follow some pattern. If they are completely different from each other and from the tables, this may not work. However, there are only 4 files, right? It's not that difficult to make 8 datasets.
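A sketch of what a parameterized dataset could look like, assuming delimited text files in blob storage (the dataset name, container and property layout are illustrative, and the linkedServiceName section is omitted):

```json
{
  "name": "GenericBlobFile",
  "properties": {
    "type": "DelimitedText",
    "parameters": {
      "fileName": { "type": "string" }
    },
    "typeProperties": {
      "location": {
        "type": "AzureBlobStorageLocation",
        "container": "incoming",
        "fileName": { "value": "@dataset().fileName", "type": "Expression" }
      }
    }
  }
}
```

Inside the ForEach, the Copy activity would pass something like @item().FileName from the Lookup output into this parameter.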
Re: Event Driven Centralized Metadata lake using Azure Services
Sagar_Lad Your explanation is not enough for me to give you an answer I would be confident is the right one. Probably the best option for you would be hiring a consulting service to analyze the details of your environment and propose a solution. I'm not convinced you need to move this metadata to another place, although in fact I don't know what this metadata is. The systems holding this metadata, such as Synapse, are used to hold BIG data warehouses, so you don't need to move the data again; that's the idea of a data lake. As long as these are not production systems, you can build a virtual data lake using Synapse serverless pointing to this data, even building aggregations of the data. But again, I don't understand your scenario, so I'm not sure that's what you need. Besides that, Logic Apps and Azure Functions can deal with events, receiving events and triggering actions.
Re: DBCC SHOW CONTIG returns 0 records in SQL Server 2014 but 2 records in SQL Server 2016
Kanishka_Basak IMHO, it's more than that: you shouldn't be concerned about SHOWCONTIG at all. For columnstore indexes, the structure is different and the maintenance is different as well. I don't have a link at hand, but it's not difficult to find more about them, and you will need the two DMVs I mentioned.
Re: DBCC SHOW CONTIG returns 0 records in SQL Server 2014 but 2 records in SQL Server 2016
Kanishka_Basak I know I'm not directly answering your question, but do you know that the fragmentation concepts DBCC SHOWCONTIG deals with don't apply to columnstore indexes? Do you know that columnstore indexes have completely different problems, related to row groups, and that maintenance needs to check sys.column_store_row_groups and sys.dm_db_column_store_row_group_physical_stats?
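As a sketch of the kind of check those DMVs enable (the column names follow the documented view; the filter and ordering are just an example):

```sql
-- Row groups with many deleted rows are the columnstore analogue of fragmentation
SELECT OBJECT_NAME(object_id) AS table_name,
       row_group_id,
       state_desc,
       total_rows,
       deleted_rows
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE deleted_rows > 0
ORDER BY deleted_rows DESC;
```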
Re: ADF - data connect from blob to Azure SQL
Shruthi96 I'm supposing the files contain records and you would like to insert the records into SQL, not the files themselves. In this case, a simple Copy activity will do the work, and you can create a trigger for each new file created in blob storage.
Re: Sql Server License
Hi, Yes, you can; it's called Azure Hybrid Benefit. Before the links, I will highlight some interesting points: Make a detailed calculation. You need to have and keep Software Assurance, so you need to calculate which option provides the most benefit. If it's a development/test server, you can use a dev/test subscription and you don't need to worry about that. The lift-and-shift, creating a VM in Azure, is only the first step of a cloud migration; you need to evolve your application and enable it to use PaaS services. PaaS tiers based on DTUs don't charge a license value and are not affected by the Hybrid Benefit, so include this in your calculation as well. Links: Hybrid Benefit: https://azure.microsoft.com/en-us/pricing/hybrid-benefit/ Azure Dev/Test Subscription: https://azure.microsoft.com/en-us/pricing/dev-test/
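The "detailed calculation" above can be sketched in a few lines; every rate below is a made-up placeholder, not a real Azure price, so plug in the values from the pricing pages before drawing any conclusion:

```python
# Illustrative cost comparison for Azure Hybrid Benefit (placeholder rates).
hours_per_month = 730

license_included_rate = 0.50      # $/hour, vCore price with SQL license included (placeholder)
hybrid_benefit_rate = 0.30        # $/hour, vCore price with Hybrid Benefit (placeholder)
software_assurance_month = 120.0  # $/month, cost of keeping Software Assurance (placeholder)

# Monthly cost without Hybrid Benefit: pay the license-included rate
pay_as_you_go = license_included_rate * hours_per_month

# Monthly cost with Hybrid Benefit: cheaper rate, but Software Assurance must be kept
with_hybrid_benefit = hybrid_benefit_rate * hours_per_month + software_assurance_month

cheaper = "hybrid benefit" if with_hybrid_benefit < pay_as_you_go else "license included"
print(pay_as_you_go, with_hybrid_benefit, cheaper)
```

With these placeholder numbers the Hybrid Benefit side wins, but the point is only that the comparison must include the Software Assurance cost, not just the hourly rates.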