Problem with Spoke > Hub > on-prem access


I am a little lost - maybe there is something in Azure that I am missing.

I have a hub/spoke in Azure with on-prem connected via an Azure virtual network gateway and a site-to-site tunnel:

  • hub is 172.30.50.0/25

  • spoke is 10.1.2.0/24 (peered with hub)

  • on-prem is 172.30.50.128/25

  • VM1 (windows vm) with IP 172.30.50.116

  • VM2 (windows vm) with IP 10.1.2.78

  • VM3 (vns3 installed) with IP 172.30.50.119
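As a quick sanity check of the address plan above (a sketch using Python's `ipaddress` module; the prefixes and VM addresses are the ones from the bullets):

```python
import ipaddress

hub = ipaddress.ip_network("172.30.50.0/25")
spoke = ipaddress.ip_network("10.1.2.0/24")
onprem = ipaddress.ip_network("172.30.50.128/25")

# Hub and on-prem are the two halves of 172.30.50.0/24 and must not overlap.
print(hub.overlaps(onprem))                           # False
# The VMs sit in the subnets the bullets claim:
print(ipaddress.ip_address("172.30.50.116") in hub)   # True  (VM1)
print(ipaddress.ip_address("10.1.2.78") in spoke)     # True  (VM2)
print(ipaddress.ip_address("172.30.50.119") in hub)   # True  (VM3 / vns3)
```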

I don't control the on-prem side, so I cannot change anything there.

 

Communication between VM1 in the "hub" and "on-prem" works fine (the browser shows the page on https://172.30.50.147).

Communication between VM2 in the "spoke" and the "hub" works fine (the browser shows the page on https://172.30.50.119, which is the UI of the VNS3 gateway). The route table in the "spoke" contains 172.30.50.128/25 with next hop "virtual appliance: 172.30.50.119".

Trying to open https://172.30.50.147 in a browser on VM2 gives me a timeout.

 

Firewall Rules of the vns3:

POSTROUTING_CUST -o eth0 -s 10.0.0.0/8 -j SNAT --to 172.20.153.119
POSTROUTING_CUST -o eth0 -j MASQUERADE-ONCE
FORWARD_CUST -o eth0 -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT
FORWARD_CUST -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
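Which traffic the first POSTROUTING rule catches can be sketched like this (only the `-s 10.0.0.0/8` match condition is modelled; the helper name is made up for illustration):

```python
import ipaddress

# From "-s 10.0.0.0/8" in the SNAT rule above: only RFC1918 10.x sources match.
SNAT_MATCH = ipaddress.ip_network("10.0.0.0/8")

def is_snatted(src: str) -> bool:
    """Would the first POSTROUTING_CUST rule rewrite this source address?"""
    return ipaddress.ip_address(src) in SNAT_MATCH

print(is_snatted("10.1.2.78"))      # True  - spoke VM traffic is SNATted
print(is_snatted("172.30.50.116"))  # False - hub VM traffic falls through to MASQUERADE-ONCE
```

So traffic from the spoke VM (10.1.2.78) is source-NATted on the appliance, while hub traffic is not.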

Network Sniffer output:

... IP 10.1.2.78.51118 > 172.30.50.142.443: Flags [S], seq 2875113164, win 64240, options [mss 1418,nop,wscale 8,nop,nop,sackOK], length 0
... IP 172.30.50.119.51118 > 172.30.50.142.443: Flags [S], seq 2875113164, win 64240, options [mss 1418,nop,wscale 8,nop,nop,sackOK], length 0
... IP 172.30.50.142.443 > 172.30.50.119.51118: Flags [S.], seq 4142239753, ack 2875113165, win 3954, options [mss 1320,sackOK,eol], length 0
... IP 172.30.50.142.443 > 10.1.2.78.51118: Flags [S.], seq 4142239753, ack 2875113165, win 3954, options [mss 1320,sackOK,eol], length 0

Status of eth0: no dropped packets or errors.

 

So, from the network sniffer output, I would assume that the packets are traveling like this:

VM2 -> vns3
vns3 -> tunnel -> on-prem-service
on-prem-service -> tunnel -> vns3
vns3 -> azure network with destination IP 10.1.2.78 - but never reaching VM2

Does anyone see what I need to do to be able to connect successfully from my spoke to an on-prem server?

3 Replies
The route needs to look like this:

On-prem VM > on-prem gateway / FW / VPN > tunnel to Azure > VPN gateway > UDR with a route to your spoke, pointing to your appliance > FW rule > internal NIC > peering to spoke > spoke VM

The route tables on your hub need "Propagate gateway routes" enabled.
The route tables on your spoke need it disabled.

You can check all learned routes on your NIC (the effective routes), and check that your peering is active.
That's it. Check all that, and you should be fine on Azure's end.
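Azure picks the effective route on a NIC by longest prefix match. A sketch of that lookup (the route entries below are assumptions for illustration; the real list comes from the NIC's effective routes):

```python
import ipaddress

# Hypothetical effective route table for the spoke NIC (entries assumed for
# illustration; the UDR in the third row is the one from the question):
routes = [
    ("10.1.2.0/24", "VnetLocal", None),
    ("172.30.50.0/25", "VNetPeering", None),
    ("172.30.50.128/25", "VirtualAppliance", "172.30.50.119"),
    ("0.0.0.0/0", "Internet", None),
]

def next_hop(dst):
    """Pick the most specific (longest-prefix) matching route, as Azure does."""
    addr = ipaddress.ip_address(dst)
    matches = [r for r in routes if addr in ipaddress.ip_network(r[0])]
    return max(matches, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)

print(next_hop("172.30.50.147"))
# ('172.30.50.128/25', 'VirtualAppliance', '172.30.50.119')
```

If the effective routes on the spoke NIC don't show the /25 pointing at the appliance, the UDR or the peering is the problem.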

If you're still not able to reach the on-premises network from the spoke, then the on-premises guys need to check their traffic selectors, making sure that the spoke address space is whitelisted...

@SvenMatzen are you fine now, or do you still need assistance?