Teams - Direct Routing, with LMO, Centralised SIP, Downstream Breakout and Ribbon Edge/2000 Series


Hi All

I'm working on a deployment of Teams Enterprise Voice for a fairly large organisation with a footprint across most of the world, concentrated mainly in Europe and surrounding regions.

 

Background/Topology

 

We have a fairly simple test topology:

  • Ribbon Edge 2000 SBCs in the core DCs, providing centralised SIP trunking for most countries
    • These have LAN and service provider interfaces. They now also have DMZ interfaces, with private IPs and corresponding public NATs.
  • Ribbon Edge 1000 SBCs in some branches where we have local breakout.
    • These have internet connectivity, but no dedicated/inbound NAT. Prior to Teams they didn't need internet access.
  • Many branches with no SBC at all; using central services only.

So - there are Microsoft planning/config docs that describe two basic scenarios: centralised SIP, and downstream breakout where the downstream SBCs have no internet connectivity (or, more precisely, no inbound access from Teams to those downstream SBCs directly). I suspect many organisations will be like mine - we've centralised SIP for 90% of our estate across the continent, but there are a few exceptions where that's not possible and we've maintained the odd E1.

 

These are the docs I'm talking about...

Planning: https://docs.microsoft.com/en-us/microsoftteams/direct-routing-media-optimization

Configuring: https://docs.microsoft.com/en-us/microsoftteams/direct-routing-media-optimization-configure
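For reference, the Teams-side plumbing behind both scenarios is the same network/site model from the configure doc: trusted public IPs, a region, sites and subnets, which LMO then uses to classify users. A minimal sketch of that part in Teams PowerShell - every ID, subnet and public IP below is a made-up placeholder, not our real config:

    # Assumes a connected Teams PowerShell session (Connect-MicrosoftTeams).
    # All names, subnets and public IPs are illustrative placeholders.

    # Public/NAT IPs Teams uses to decide whether a user is 'internal' to the corporate network
    New-CsTenantTrustedIPAddress -IPAddress "198.51.100.10" -MaskBits 32 -Description "Central DC public NAT"
    New-CsTenantTrustedIPAddress -IPAddress "203.0.113.20" -MaskBits 32 -Description "Branch internet breakout"

    # Region, sites and subnets - the site ID is what later gets stamped on each SBC
    New-CsTenantNetworkRegion -NetworkRegionID "EMEA"
    New-CsTenantNetworkSite -NetworkSiteID "CentralDC" -NetworkRegionID "EMEA"
    New-CsTenantNetworkSite -NetworkSiteID "Branch01" -NetworkRegionID "EMEA"
    New-CsTenantNetworkSubnet -SubnetID "10.10.0.0" -MaskBits 16 -NetworkSiteID "CentralDC"
    New-CsTenantNetworkSubnet -SubnetID "10.20.0.0" -MaskBits 16 -NetworkSiteID "Branch01"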

 

'Designed' Solution

Our solution was simple:

  • Turn on LMO for all the SBCs.
  • For our central SBCs, users routing to them directly will use the external IP if off-net, and the LAN IP if on-net - much like Skype Edge, and as described by MS in the LMO planning doc.
  • For downstream SBCs, external users will route media in via the proxies, and internal users will route media directly on-net. (A rough sketch of the Teams-side gateway config follows.)
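On paper, that maps to gateway settings along these lines - a sketch only, with placeholder FQDNs and site IDs, mirroring the pattern in the MS configure doc rather than our exact production config:

    # Central/proxy SBC: LMO on, media bypass for everyone, no upstream proxy
    Set-CsOnlinePSTNGateway -Identity "central-sbc.contoso.com" -GatewaySiteID "CentralDC" -MediaBypass $true -BypassMode "Always" -ProxySBC $null

    # Downstream SBC: bypass only for users local to its site, with the central SBC as its proxy
    Set-CsOnlinePSTNGateway -Identity "branch-sbc.contoso.com" -GatewaySiteID "Branch01" -MediaBypass $true -BypassMode "OnlyForLocalUsers" -ProxySBC "central-sbc.contoso.com"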

Simple enough... however, in practice...

 

  • For downstream SBCs, things mainly work OK. However, there's no way to assign two proxy SBCs to one downstream SBC. We've engineered around this, but are told it's a roadmapped feature by MS and not expected to work in the current version. We also see some unexpected behaviour - like direct RTP from MS music-on-hold in the cloud, on a 52.x.x.x IP, being dynamically set up to our internal/downstream SBC over general internet access rather than via the proxy.
  • For the central/proxy SBCs, it simply does not work. The only way to get a system that can carry out basic hold/blind transfer/consult transfer functions is to a) disable the bypass policy for the central SBCs in Teams (effectively turning off LMO, as no X-MS-UserLocation headers are sent) and b) set both the internal and external interfaces to the DMZ interface, so all users, internal and external, talk to the DMZ - otherwise we again get failed transfer/hold scenarios. Essentially, the 2Ks seem unable to 'flip' between interfaces: they send traffic out of one interface with the IP of another, which causes firewall drops, or they simply 'lose' audio in transfer scenarios (i.e. it doesn't leave the SBC, and doesn't reach the two participants on the SBC once the transfer is completed).

'Working' Solution

 

So our working scenario is that we don't really have LMO on for the central SBCs, just the downstream ones. It looks logical and simple. It just doesn't look anything like what's laid out by MS or Ribbon.
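In Teams terms, my reading is that the working state boils down to something like the following - again placeholders, and hedged: "None" is what I understand the default BypassMode to be, and whether you keep plain media bypass on the central SBCs is a separate decision, so check Get-CsOnlinePSTNGateway on your own tenant rather than taking my word for it:

    # Central SBCs: ordinary Direct Routing, no LMO bypass mode set -
    # effectively what the Teams admin centre enforces on our tenant anyway.
    # Plain media bypass is independent of LMO; set it to taste.
    Set-CsOnlinePSTNGateway -Identity "central-sbc.contoso.com" -GatewaySiteID "CentralDC" -MediaBypass $true -BypassMode "None" -ProxySBC $null

    # Downstream SBCs: LMO stays on, still proxied via the central SBC
    Set-CsOnlinePSTNGateway -Identity "branch-sbc.contoso.com" -GatewaySiteID "Branch01" -MediaBypass $true -BypassMode "OnlyForLocalUsers" -ProxySBC "central-sbc.contoso.com"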

 

Funnily enough, the PowerShell example of an LMO config in the MS docs results in an SBC you can no longer modify in the Teams web admin page. Basically, the Teams web admin page enforces our working config, but doesn't let you apply the config that Microsoft put in their PowerShell example. This was a warning sign I wish I'd heeded earlier.

 

A little about me - I'm a grumpy engineer with 20+ years of VoIP experience across Cisco, MS and various other systems, various MS quals including Skype/Lync, and CCIE Voice. I've deployed hundreds of soft PBXs including Cisco CCM/Skype/Lync, at least 250K phones on those systems, and other associated systems such as contact centres. I also run a small software dev company retailing Cisco-related phone tools. All that done, deploying LMO with Teams and Ribbon has given me a huge headache, bad dreams, long days and a sense of dread in the mornings that nothing else has. Maybe I'm getting old....

 

So this is the question - how was it for you? What worked, what didn't work, what config did you try, what do you have in production?

 

Please don't respond to this if you don't have direct experience of deploying Teams Direct Routing with LMO on the Ribbon Edge series. We can all regurgitate some dude's blog or the MS examples/docs; that isn't what's needed here.

 

What worked for you? What didn't work? 

 

OK, maybe also if you have the 5xxx series. That'll be phase 2...

 

Thanks!

 

Aaron

 
