SOLVED

Improve performance Access frontend with Azure backend

New Contributor

 

I have the following question. I have migrated a traditionally built Access frontend application with a SQL Server backend to Azure SQL. The linking works fine, but the performance often leaves something to be desired. I managed to link the tables directly to Azure SQL over an ODBC connection with ActiveDirectoryInteractive authentication, which uses two-factor authentication.

At the time, I built the application traditionally, with various subforms and comboboxes, and the forms are all bound directly to a table or query.

 

After some research, I now understand that a traditionally built Access application is not well suited to using a remote SQL Server as a backend, and that the best solution would be redesigning it. However, this is a customer app, and that customer will not be happy if the entire application has to be rebuilt from scratch.

 

In order to improve performance, I have replaced queries with pass-through queries for the comboboxes and subforms wherever possible. There is one heavily loaded form with multiple subforms, each on a separate tab. I have deferred the loading of these subforms so that a subform is only loaded when the user clicks its tab. I am trying to minimize data traffic between the Access frontend and the Azure backend in every possible way.
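For anyone wanting to do the same, deferred subform loading can be sketched roughly like this. All control and form names (tabMain, subOrders, frmOrdersSub, etc.) are hypothetical; leave each subform control's SourceObject empty at design time and assign it only when its tab is activated.

```vba
' Deferred subform loading: SourceObject is blank at design time and is
' assigned only the first time the user activates the matching tab page.
' Names (tabMain, subOrders, subInvoices, frmOrdersSub, frmInvoicesSub)
' are hypothetical examples.
Private Sub tabMain_Change()
    Select Case Me.tabMain.Value      ' index of the active tab page
        Case 1                        ' "Orders" page
            If Me.subOrders.SourceObject = "" Then
                Me.subOrders.SourceObject = "frmOrdersSub"
            End If
        Case 2                        ' "Invoices" page
            If Me.subInvoices.SourceObject = "" Then
                Me.subInvoices.SourceObject = "frmInvoicesSub"
            End If
    End Select
End Sub
```

Because an empty SourceObject fetches nothing, only the data behind the visible tab ever travels over the wire.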

 

I read somewhere that instead of binding a table directly to a form, I should redesign these forms around single-record queries, so that criteria are applied to select and return one record at a time, and then requery that one-record recordset.

 

But how does that work? I tried it on a form by loading a one-record recordset into the form, which works fine, except that the form is no longer updatable. The user, of course, needs to be able to update the data in the form. So what's the solution? Create a separate form with a temporary table for when the user wants to update the data, and switch between the read form and this update form? Or is there a better approach?

6 Replies
best response confirmed by RogerH72 (New Contributor)
Solution

@RogerH72 

however the form is not updateable any more. 

That is probably because you used a pass-through (PT) query. Those are never updatable. Instead, select the one row from the linked table, or from a query based on a linked table.
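A minimal sketch of that approach, assuming a linked table tblCustomers with a numeric key and a search box txtFindID (both names hypothetical): set the form's RecordSource to a filtered SELECT on the linked table, not on a pass-through query, so the single row stays updatable.

```vba
' Bind the form to a one-row query on the LINKED table (not a PT query),
' so the result remains updatable. tblCustomers, CustomerID and txtFindID
' are hypothetical names.
Private Sub cmdLoad_Click()
    Dim strSQL As String
    strSQL = "SELECT * FROM tblCustomers " & _
             "WHERE CustomerID = " & Me.txtFindID
    Me.RecordSource = strSQL    ' Access fetches and binds only this record
End Sub
```

The WHERE clause is passed through ODBC to the server, so only one row crosses the connection, while Access still treats the recordset as editable.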

That did it, thank you. But I have another question: beyond the modifications I mentioned, do you have any more recommendations for improving performance?

@RogerH72 

> the performance often leaves something to be desired
> customer will not be happy if the entire application has to be rebuilt all over again

It appears you're going about it the wrong way: it seems YOU decided the performance was not adequate, and you THINK the customer will not be happy.

That is for them to decide. Presumably they decided on Azure hosting, and they have to understand that comes with consequences. They will have to decide how much time and money they want to invest in making it faster, and what parts of the app should be focused on.

 

If you search online you should find plenty of discussion of this topic.
One technique I can recommend is local caching of quasi-static tables, such as a Statuses table, a States table, and others.
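One way to sketch that caching, under the assumption of a linked Azure table tblStatuses and a local copy tblStatuses_Local (both names hypothetical), is to refresh the local table once per session, for example from the startup form's Open event, and point all comboboxes at the local copy:

```vba
' Cache a quasi-static lookup table locally once per session.
' tblStatuses is the linked Azure SQL table; tblStatuses_Local is a
' local table with the same columns. Both names are hypothetical.
Public Sub RefreshStatusCache()
    With CurrentDb
        .Execute "DELETE FROM tblStatuses_Local", dbFailOnError
        .Execute "INSERT INTO tblStatuses_Local " & _
                 "SELECT * FROM tblStatuses", dbFailOnError
    End With
End Sub
```

After this runs, every combobox row source that reads tblStatuses_Local hits the local file instead of the network.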

Thank you for your help Tom, data caching is definitely something I will dive into.
Welcome to the true "performance" of Access + SQL Server.
Simple things, like binding forms/recordsets directly to tables, are slow.
What you can do: replace whatever you can with pass-through queries (as you did), create a good number of local temp tables for caching, and perhaps check the performance when you load e.g. comboboxes from in-memory arrays, and so on.
If nothing does the job, you will need to move to an unbound scenario, where data is INSERTed on demand and the unbound form is populated with only the minimum amount of required data.
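The combobox suggestion above can be sketched as follows: pull the rows once through a snapshot on a pass-through query and build a value list in memory, instead of leaving the combobox bound to a live query. qryPT_Statuses, cboStatus and the field names are hypothetical.

```vba
' Fill a combobox from an in-memory value list built off a single
' pass-through query fetch, rather than a bound row source.
' qryPT_Statuses, cboStatus, StatusID and StatusName are hypothetical.
Private Sub Form_Load()
    Dim rs As DAO.Recordset
    Dim strList As String

    Set rs = CurrentDb.OpenRecordset("qryPT_Statuses", dbOpenSnapshot)
    Do Until rs.EOF
        ' Two columns per row: hidden key, then display text
        strList = strList & rs!StatusID & ";" & rs!StatusName & ";"
        rs.MoveNext
    Loop
    rs.Close

    Me.cboStatus.RowSourceType = "Value List"
    Me.cboStatus.ColumnCount = 2
    Me.cboStatus.RowSource = strList
End Sub
```

This trades one round trip at form load for zero round trips while the user scrolls the list; it suits small, rarely changing lookups, not large tables.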
Two other things that can improve performance are NOT binding forms and reports directly to tables, and making sure that all tables are appropriately indexed.
Part of the performance problem with a remote back end like Azure is that calls to the database for data go out over the internet. Pulling large datasets across that connection is inefficient and slow. Therefore, you want to minimize the number and size of those recordsets. That's where local caching shines. Also, if you want to edit a single record, use a form bound to a query with a filter that returns only that one record. Obviously, efficient indexing can make tasks like filtering and sorting faster. That is probably more of an art to get it right, but it should be included in your preparations.