Forum Discussion

DiegoUSC
Copper Contributor
Aug 31, 2021

Azure Kubernetes Service (AKS) forbidden address ranges for vnet

A few months ago I installed an AKS cluster with kubenet networking without any problems. Something changed with our last version upgrade, because it now complains that the vnet where the cluster is placed uses a private address range that the documentation disallows:


AKS clusters may not use 169.254.0.0/16, 172.30.0.0/16, 172.31.0.0/16, or 192.0.2.0/24 for the Kubernetes service address range, pod address range or cluster virtual network address range
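
For reference, here is a quick way to sanity-check candidate CIDRs against the ranges quoted above, using only the standard Python ipaddress module (the candidate values below are just examples, not our real addressing):

```python
import ipaddress

# Ranges that AKS rejects for the service, pod and cluster VNET CIDRs,
# as quoted in the validation message above.
FORBIDDEN = [
    ipaddress.ip_network(cidr)
    for cidr in ("169.254.0.0/16", "172.30.0.0/16", "172.31.0.0/16", "192.0.2.0/24")
]

def conflicts(candidate_cidr):
    """Return the forbidden ranges that overlap the candidate CIDR."""
    candidate = ipaddress.ip_network(candidate_cidr)
    return [str(f) for f in FORBIDDEN if candidate.overlaps(f)]

# Example candidates (placeholders, not our actual address plan):
for cidr in ("172.30.4.0/22", "10.240.0.0/16"):
    hits = conflicts(cidr)
    print(f"{cidr}: {'conflicts with ' + ', '.join(hits) if hits else 'no conflict with the reserved ranges'}")
```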


The problem is that we use these "forbidden" private address ranges for our network infrastructure in Azure (we have a hub & spoke architecture) and on-premises (we have an ExpressRoute connection), so it seems we would have to make a huge change across our entire network to be able to upgrade or reinstall the AKS cluster with full connectivity.


I raised it with Azure support, but they say it is a design decision that cannot be changed.


If anybody has a suggestion for dealing with this AKS upgrade/reinstall problem that does not require a complete change to our IP addressing policy, it would be very helpful.

  • mike351425
    Copper Contributor

    DiegoUSC 

    I know this is an old thread, but I've run into the same problem. We use 172.30.0.0/16 addresses on-premises and have had no luck getting function apps to talk to internal servers in that address space. If anybody knows a good workaround, I'd be eternally grateful.

    • DiegoUSC
      Copper Contributor

      mike351425 I'm sorry to hear that this problem hasn't been resolved yet. After reviewing the documentation and speaking with MS support on several occasions, the only solution we found was to change the address range of the VNET that hosts the AKS cluster and recreate the cluster. We were able to keep the addressing in the rest of the peered VNETs and in the on-premises networks, but in the VNET where the AKS cluster is hosted we finally had to change it. With this change everything seems to work fine, and the limitation seems to apply only to the AKS VNET. Our problem was a bit different, though: I don't think we connect from the AKS cluster to IPs in these forbidden ranges, and maybe that is why we didn't run into more problems.
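
      If it helps, this is roughly how we reasoned about picking the replacement range: take the ranges AKS refuses plus everything already used by the hub, the peered spokes and on-premises, and look for a /16 that overlaps none of them. A minimal sketch (the "in use" list below is made up; yours will differ):

      ```python
      import ipaddress

      # Ranges AKS refuses, plus ranges already in use elsewhere
      # (hub, peered spokes, on-premises). The in-use values are examples only.
      blocked = [ipaddress.ip_network(c) for c in (
          "169.254.0.0/16", "172.30.0.0/16", "172.31.0.0/16", "192.0.2.0/24",  # AKS reserved
          "10.0.0.0/16", "172.29.0.0/16",                                      # example in-use ranges
      )]

      def first_free_16(base="10.0.0.0/8"):
          """Return the first /16 inside base that overlaps nothing in blocked."""
          for candidate in ipaddress.ip_network(base).subnets(new_prefix=16):
              if not any(candidate.overlaps(b) for b in blocked):
                  return candidate
          return None

      print(first_free_16())  # 10.1.0.0/16 with the example lists above
      ```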


      The only workaround I can think of for your problem is to expose these forbidden on-premises IP ranges through a NAT that sits in ranges allowed for the AKS cluster.
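
      As a rough illustration of that idea (hypothetical ranges, and the actual translation would of course be done by your NAT device, not by Python): a static 1:1 NAT could present every host in 172.30.0.0/16 to the cluster at a twin address in an allowed range, keeping the host offset:

      ```python
      import ipaddress

      # Hypothetical example: present on-premises 172.30.0.0/16 hosts to the AKS
      # cluster as 10.200.0.0/16 via a static 1:1 NAT. This only shows the
      # address arithmetic; the NAT itself lives in the network gear.
      ONPREM = ipaddress.ip_network("172.30.0.0/16")   # forbidden from the AKS side
      EXPOSED = ipaddress.ip_network("10.200.0.0/16")  # allowed range the NAT advertises

      def natted_address(onprem_ip):
          """Map an on-premises IP to its NAT twin by keeping the host offset."""
          offset = int(ipaddress.ip_address(onprem_ip)) - int(ONPREM.network_address)
          return str(EXPOSED.network_address + offset)

      print(natted_address("172.30.5.17"))  # -> 10.200.5.17
      ```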

      • mike351425
        Copper Contributor

        DiegoUSC That is what we ended up doing, and it does seem to work. I was hoping not to have to go this route, though, since we'd have to put NATs in for every on-premises thing we'd want to hit from logic apps. I guess it's that or move the internal range to another subnet (easier said than done).
