Tuesday, January 31, 2017

Anuta Networks NCX: Overcoming Skepticism

Anuta Networks demonstrated their NCX network/service orchestration product at Network Field Day 14.

Disclaimer
Anuta Networks page at TechFieldDay.com with videos of their presentations

Anuta's promise with NCX is to provide a vendor- and platform-agnostic network provisioning tool with a slick user interface and powerful management/provisioning features.

I was skeptical, especially after seeing the impossibly long list of supported platforms.

Impossible!
Network device configurations are complicated! They've got endless features, each of which is tied to the others in unpredictable ways. Sure, seasoned network ops folks have no problem hopping around a text configuration to discover the ways in which ACLs, prefix lists, route maps, class maps, service policies, interfaces, and whatnot relate to one another... But capturing these complicated relationships in a GUI? In a vendor-independent way?

I left the presentation with an entirely different perspective, and a desire to try it out on a network I manage. Seriously, I have a use case for this thing. Here's why I was wrong:

Not a general purpose UI
Okay, so it's a provisioning system, not a general purpose UI. Setup is likely nontrivial because it requires you to consider the types of services and related configurations you deploy in your network, and then express those possibilities in a simple form. Examples of things you might choose to express are:

  • For an MPLS PE device, are we 802.1Q tagging the traffic at the customer handoff? If so, what tag? That's a checkbox and a text field.
  • For a DMVPN router, do we want to allow direct internet access, or backhaul everything to HQ? A checkbox.
  • For various WAN interfaces, choose from a list of provider types in order to get the correct QoS templates applied.

Pre-built Templates
NCX ships with an understanding of how to do lots of things (create a VLAN, configure spanning tree, define a BGP neighbor) on lots of different platforms. Much of the work is already done. We're not reinventing the wheel with Ansible here.

Vendors and Features
The vendor list was huge, but let's just consider Cisco for the moment. What does it mean for Cisco platforms to be "supported" by NCX? Apparently everything in the IWAN deployment guide is supported. That's a pretty complete list of features: routing protocols, QoS, PBR, security, etc... I'm sure they don't have crazy corner-case stuff (using AppleTalk, are you?) but that's okay because...

Extensibility
Need to use a feature that NCX doesn't know about? The toolkit for defining your own features, exposing them in the UI, and getting them expressed as device configuration looked pretty straightforward. It boiled down to expressing a YANG model of the feature in question and then mapping that to device-specific NETCONF/CLI/REST/what-have-you configuration directives.
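
For a concrete flavor of what that might look like, here's a minimal sketch of my own (the module, namespace, and leaf names are all made up; this is not Anuta's actual schema). A feature exposing the DMVPN "checkbox" and the provider-type picker from the earlier list might be modeled roughly like this, with the NCX-side mapping of each leaf onto per-platform CLI/NETCONF snippets being the other half of the job:

 module example-dmvpn-spoke {
   namespace "urn:example:dmvpn-spoke";
   prefix exd;

   container dmvpn-spoke {
     leaf direct-internet-access {
       type boolean;
       description "The checkbox the operator sees in the UI.";
     }
     leaf provider-type {
       type enumeration {
         enum mpls;
         enum broadband;
       }
       description "Drives which QoS template gets applied to the WAN interface.";
     }
   }
 }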

Final Fragments
  • Offline devices that have missed several cycles of updates do not receive that series of individual updates when they come back online. NCX somehow flattens the queued changes into a single update prior to delivering it. Frankly, this blows my mind. It suggests a surprising level of intimacy with the device configuration. Imagine, for example, if one of the updates included the bgp upgrade-cli directive. NCX would anticipate the result and merge subsequent address-family ipv4 directives? Maybe I misunderstood the answer to my question on this topic :)
  • NCX knows about its management path to the devices in question, and is careful not to lock itself out. I didn't get a lot of clarity about how this works, but there's no question the guys behind it are thoughtful and clever.

Friday, January 6, 2017

ERSPAN on Comware

The Comware documentation doesn't spell it out clearly, but it's possible to get ERSPAN-like functionality by using a GRE tunnel interface as the target for a local port mirror session.

This is very handy for quick analysis of stuff that's not L2-adjacent to an analysis station.

First, create a local mirror session:

 mirroring-group 1 local  

Next configure an unused physical interface for use by tunnel interfaces:

 service-loopback group 1 type tunnel  
 interface <unused-interface>  
  port service-loopback group 1  
  quit  

Now configure a GRE tunnel interface as the destination for the mirror group:

 interface Tunnel0 mode gre  
  source <whatever>  
  destination <machine running wireshark>  
  mirroring-group 1 monitor-port  
  quit  

Finally, configure the source interface(s):

 interface <interesting-source-interface-1>  
  mirroring-group 1 mirroring-port inbound  
 interface <interesting-source-interface-2>
  mirroring-group 1 mirroring-port inbound  
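
Strung together with made-up values (GigabitEthernet1/0/24 as the sacrificed service-loopback port, 10.0.0.1 as a local address for the tunnel source, 10.0.0.100 as the machine running Wireshark, and GigabitEthernet1/0/1 as the port being watched), the whole thing is only a dozen lines:

 mirroring-group 1 local
 service-loopback group 1 type tunnel
 interface GigabitEthernet1/0/24
  port service-loopback group 1
  quit
 interface Tunnel0 mode gre
  source 10.0.0.1
  destination 10.0.0.100
  mirroring-group 1 monitor-port
  quit
 interface GigabitEthernet1/0/1
  mirroring-group 1 mirroring-port inbound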

Traffic from the source interfaces arrives at the analyzer with extra Ethernet/IP/GRE headers attached. Inside each GRE payload is the original frame as collected at a mirroring-group source interface. If the original frame plus the extra headers (14 + 20 + 4 == 38 bytes) exceeds the MTU, the switch fragments the packet. Nothing gets lost, and Wireshark handles it gracefully.
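
If nothing shows up at the analyzer, it's worth confirming on the switch that the mirroring group really has both its source ports and the tunnel monitor port attached before blaming the path in between:

 display mirroring-group all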