2012 R2 DirectAccess non-paged memory leak


We have a single-site 2012 R2 DirectAccess server running as a Hyper-V guest in the edge configuration.  The physical host is a Dell R710 running 2012 R2.  The R710 has a QLogic/BCM5709C NIC in it, and we have turned off VMQ.  DirectAccess is working, but I am noticing a non-paged pool memory leak under the NDnd pool tag on the DirectAccess guest.  The leak occurs when RDP-over-UDP packets traverse DirectAccess.  The RAM can fill up within a day, at which point the box blue-screens and reboots.  As a workaround we have pointed these users' RDP traffic at an RD Gateway instead of across DirectAccess.

The 2012 R2 boxes are fully patched, and the clients are Windows 10 Enterprise, also fully patched.  I have ruled out third-party software using the NDnd pool tag: a "findstr /m /l NDnd *.sys" in the drivers directory returns only ndis.sys.
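The `findstr` step above identifies which driver binary embeds the tag; to see which tag is actually consuming non-paged pool on the server, you would typically capture output from the WDK/Sysinternals `poolmon` tool. As a rough, portable illustration of that analysis step, here is a small Python sketch that ranks tags from poolmon-style text output. The column layout and the sample numbers below are assumptions for illustration, not a real capture:

```python
# Illustrative only: rank pool tags by non-paged bytes from
# poolmon-style text output. SAMPLE is fabricated example data,
# assuming poolmon's usual columns: Tag Type Allocs Frees Diff Bytes PerAlloc.

SAMPLE = """\
NDnd Nonp 123456 1034 122422 987654321 954
Mdl  Nonp    400   12    388     51200 132
File Nonp   9000  800   8200   1048576 127
"""

def top_nonpaged_tags(poolmon_text, n=3):
    """Return (tag, bytes) pairs for non-paged tags, largest first."""
    rows = []
    for line in poolmon_text.splitlines():
        parts = line.split()
        # Keep only non-paged ("Nonp") rows with enough columns to parse.
        if len(parts) >= 6 and parts[1] == "Nonp":
            rows.append((parts[0], int(parts[5])))  # (tag, bytes in use)
    rows.sort(key=lambda r: r[1], reverse=True)
    return rows[:n]

print(top_nonpaged_tags(SAMPLE))
# → [('NDnd', 987654321), ('File', 1048576), ('Mdl', 51200)]
```

In a real investigation the tag with runaway byte counts (here NDnd, owned by ndis.sys per the `findstr` result) is the one to chase with the driver vendor or via a kernel memory dump.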


Any ideas would be appreciated.

7 Replies

@mattsutton1295 - this doesn't help you, but I have the exact same issue.  I have a set of load-balanced 2012 R2 DA servers, and they started having a major memory leak during the transition to a new 2019 RDS farm that utilizes UDP (the old farm did not, and this DA farm ran for years without issue).  I now have both servers set to a ridiculous 32 GB of RAM each so that they keep functioning with only a weekly reboot.  If you find anything, let me know!

@stopnik Interesting.  Since moving the RDP traffic off DirectAccess, the leak has stopped for us.  It seems specific to RDP UDP packets; otherwise we would see the leak grow with DNS queries and other UDP traffic.  There must be something internal to DirectAccess on 2012 R2 that handles these packets differently.



@mattsutton1295 @stopnik  

Hi guys, any luck resolving the problem mentioned here? It seems I have hit the same one on an HP physical server running Windows 2012 R2 with the DA role: a non-paged RAM leak under the "NDnd" tag, pointing to ndis.sys.

@david0n unfortunately there's no fix that I'm aware of for DirectAccess running on 2012 R2; it's fixed in 2016 and 2019.  I am not in a position to upgrade ours right now, so I have my two servers on an automatic reboot schedule at 2 a.m. once a week (staggered, as I have an HA pair).  This has managed the leak well enough that the servers don't fail.  It's not ideal, but it works for now until I can move all of our clients over to a newer version.


Good luck!

@david0n @stopnik 

We ended up retiring our 2012 R2 DirectAccess environment and moving to 2019 Always On VPN.  We were rebooting every night for a while, then finally bit the bullet and migrated over to AOVPN.


Good Luck!!

@mattsutton1295 - that's good to hear!  Unrelated to this post, I realize, but how did you handle transitioning your clients?  Were you able to get away with delivering the new certificates and GPO settings over the old DA connection, or did you have to bring all the clients back into the corporate office to be reconfigured?  That's the main reason I haven't done our upgrade: with this pandemic, having everybody bring in or ship their devices back to our office isn't feasible.


We had employees bring their devices to the office closest to their residence to get them on the corporate network.  Not ideal, but I think it worked out better that they could schedule the time, versus something going wrong later and forcing them to come in because their device could no longer connect.