SQL timeouts after moving to Azure


After deploying my application to Azure I am experiencing SQL timeout issues with a few DB-intensive reports.

I have tried changing the SQL timeout in the connection string, but without any effect. I have also tried upgrading the database from S2 to P1, which then performs well enough not to time out, but unfortunately that is not really an option for us.

How can I change the timeout value for those operations please?

This is the error that I am receiving (it did not happen on Rackspace before):

Server Error in '/' Application.

The wait operation timed out

Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.

Exception Details: System.ComponentModel.Win32Exception: The wait operation timed out

Source Error:

An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

Stack Trace:

[Win32Exception (0x80004005): The wait operation timed out]

[SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.]
System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +2442634
System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction) +5766568

asked on Stack Overflow Dec 19, 2016 by Nik • edited Dec 22, 2016 by Jaxidian

3 Answers


If upgrading from a smaller plan to a bigger one makes the problem go away, then it is definitely a performance issue.

Azure SQL uses DTUs as a measure of how many resources you consume: CPU, IO and so on. If you are using 100% of your DTUs, your queries get delayed, and the longer you stay at 100% the more likely you are to hit a timeout exception, since the default timeout for a .NET command is 30 seconds. Increasing the timeout will probably not help you, since the issue could be that you are running the same query many times and the instances start blocking each other.

Go to your database, open Query Performance Insight, and look at the run times of your top queries. Start your optimisation from there.
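If you prefer code to the portal, a rough equivalent is to read the documented `sys.dm_exec_query_stats` DMV yourself. The sketch below (connection string and output format are placeholders, not from the original answer) lists the top CPU consumers, which is roughly what Query Performance Insight surfaces:

```csharp
using System;
using System.Data.SqlClient;

class TopQueries
{
    static void Main()
    {
        // Placeholder connection string -- substitute your own.
        const string connectionString =
            "Server=tcp:myserver.database.windows.net;Database=mydb;User ID=me;Password=secret;Encrypt=True;";

        // Top 10 statements by total CPU time since the plan cache was last cleared.
        const string sql = @"
            SELECT TOP 10
                   qs.total_worker_time / 1000 AS total_cpu_ms,
                   qs.execution_count,
                   st.text AS query_text
            FROM sys.dm_exec_query_stats AS qs
            CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
            ORDER BY qs.total_worker_time DESC;";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine($"{reader["total_cpu_ms"]} ms CPU over " +
                                      $"{reader["execution_count"]} runs: {reader["query_text"]}");
                }
            }
        }
    }
}
```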

One potential culprit is Entity Framework's Include, if you are using it: it can generate queries that return huge amounts of data, which slows the query down and uses a lot of IO.
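To illustrate the Include point with a hypothetical model (an `Order` with a collection of `OrderLines` — these names are not from the original answer), projecting only the columns a report needs often avoids dragging whole related tables across the wire:

```csharp
// Eager-loading pulls every column of Orders AND every column of
// every related OrderLine row back to the client:
var heavy = context.Orders
    .Include(o => o.OrderLines)
    .ToList();

// A projection ships only what the report actually displays, which
// can dramatically cut IO -- and therefore DTU usage:
var light = context.Orders
    .Select(o => new
    {
        o.Id,
        o.CustomerName,
        LineCount = o.OrderLines.Count(),
        Total = o.OrderLines.Sum(l => l.Price * l.Quantity)
    })
    .ToList();
```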

If you still want to increase the timeout, you can do that in the connection string in your .config file by adding:

;Connection Timeout=60

But as I said, it's a 50/50 fix: it might work, but it is better to find out which queries are slow and improve them.
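One caveat worth knowing: `Connection Timeout` in the connection string only governs how long ADO.NET waits to *open* the connection. The "Execution Timeout Expired" error in the question is the per-command execution timeout, which is set via `SqlCommand.CommandTimeout` (or `Database.CommandTimeout` in Entity Framework 6). A minimal sketch — the stored procedure name here is hypothetical:

```csharp
using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand("EXEC dbo.HeavyReport", connection)) // hypothetical report proc
{
    command.CommandTimeout = 120;  // seconds; the default is 30
    connection.Open();
    command.ExecuteNonQuery();
}

// Entity Framework 6 equivalent, applied to the whole context:
// context.Database.CommandTimeout = 120;
```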

PS. I recently had the same problem with my app, with one particular query which I would never have expected to use so many DTUs.

PS. Well, I did not read your question properly the first time, so I have deleted my previous answer.

answered on Stack Overflow Dec 19, 2016 by Volodymyr Bilyachat • edited Dec 20, 2016 by Volodymyr Bilyachat

As pointed out by Volodymyr, I would definitely look at optimizing the queries involved in the timeouts, but I would also like to point out:

SQL Azure won't rebuild indexes or update statistics for you, just like on-premises. You have to take care of it yourself.

In our case, we created a task to rebuild indexes and update statistics on a weekly basis, and errors related to timeouts dropped by 99%.
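The weekly task boils down to standard T-SQL maintenance statements. Since Azure SQL Database has no SQL Server Agent, one option is to run them from a scheduled job such as an Azure WebJob — a sketch, with an illustrative table name:

```csharp
// Scheduled maintenance sketch (e.g. an Azure WebJob or Automation runbook).
// The statements are standard T-SQL; dbo.Orders is an illustrative name.
const string maintenanceSql = @"
    ALTER INDEX ALL ON dbo.Orders REBUILD;  -- or REORGANIZE for a lighter-weight pass
    EXEC sp_updatestats;                    -- refresh out-of-date statistics database-wide";

using (var connection = new SqlConnection(connectionString))
using (var command = new SqlCommand(maintenanceSql, connection))
{
    command.CommandTimeout = 0;  // maintenance can legitimately run long; 0 = no limit
    connection.Open();
    command.ExecuteNonQuery();
}
```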

For the queries which were still timing out, we took the approach of optimizing them.

answered on Stack Overflow Dec 19, 2016 by TheGameiswar • edited Dec 19, 2016 by TheGameiswar

As of this month, April 2019, the same issue crept in as a total surprise. We were on S3 (Standard 3), ready to push our app out live, when all of a sudden the parts queries didn't work.

Going into debug with Visual Studio to find 'our error' showed that the queries were all timing out. I found an article stating that the maximum DTUs must have been exceeded.

I thought, 'Aha! That might be because I copied over the DB and had to run my data insert query multiple times to update it.' No. Just read the fine print.

S3, which is where we were, was changed very recently to just 'Standard', with what Microsoft deems a 'typical IO workload'. Looking at our DTU usage, we were at 0.7%. There is no way that could have forced query timeouts of 30 seconds or less.

OK, so Premium is now for 'more IO-intensive workloads'. As if the 0.7% DTU usage before was IO intensive.

I changed it up to P0 and bingo! The parts queries ran again.

If this costs us significantly more money, I'll call it to the attention of our powers that be. I spent a load of time yesterday solving this, so I don't want anyone else to have the same experience.


answered on Stack Overflow Apr 7, 2019 by user1585204

User contributions licensed under CC BY-SA 3.0