Saturday, November 11, 2017

Syslog relay with Scapy

I needed to point some syslog data at a new toy being evaluated by security folks.

Reconfiguring the logging sources to know about the new device would have been too much of a hassle for a quick test. Reconfiguring the Real Log Server (an rsyslog box) to relay the logs wasn't viable because the source IP in the syslog packets would have reflected the syslog box instead of the origin server.

A few lines of python running on the existing rsyslog box did the trick:

 #!/usr/bin/env python2.7

 from scapy.all import *

 def pkt_callback(pkt):
   # clear the fields scapy must recalculate after we tamper with the packet
   del pkt[Ether].src
   del pkt[Ether].dst
   del pkt[IP].chksum
   del pkt[UDP].chksum
   # re-aim the copy at the new collector and put it back on the wire
   pkt[IP].dst = '192.168.100.100'
   sendp(pkt)

 sniff(iface='eth0', filter='udp port 514', prn=pkt_callback, store=0)

This script has scapy collecting frames matching udp port 514 (a libpcap filter) from interface eth0. Each matching packet is handed to the pkt_callback function, which clears the fields that need to be recalculated, changes the destination IP (to the address of the new Security Thing), and puts the packet back onto the wire.

The source IP on these forged packets is unchanged, so the Security Thing thinks it's getting the original logs from real servers/routers/switches/PDUs/weather stations/printers/etc... around the environment.

I'd expected to need to filter out the packets that scapy is sending (don't listen to and re-send your own noise), but that doesn't seem to have been necessary.
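If it ever does become necessary, the cleanest fix is probably to exclude traffic headed for the new collector right in the capture filter, so the relay never sees its own output. Something like this in place of the sniff() call above (the collector address is the same placeholder used earlier):

 sniff(iface='eth0', filter='udp port 514 and not dst host 192.168.100.100', prn=pkt_callback, store=0)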

Thursday, October 5, 2017

SSH HashKnownHosts File Format

The HashKnownHosts option to the OpenSSH client causes it to obfuscate the host field of the ~/.ssh/known_hosts file. Obfuscating this information makes it harder for threat actors (malware, border searches, etc...) to know which hosts you connect to via SSH.

Hashing defaults to off, but some platforms turn it on for you:

 chris:~$ grep Hash /etc/ssh/ssh_config   
   HashKnownHosts yes  
 chris:~$   

Here's an entry from my known_hosts file:

 |1|NWpzcOMkWUFWapbQ2ubC4NTpC9w=|ixkHdS+8OWezxVQvPLOHGi2Oawo= ecdsa-sha2-nistp256 AAAAE2Vj<...>ZHNLpyJsv  

There's one record per line, with the fields separated by spaces. The first field is the remote host (SSH server) identifier.

In this case, the leading characters |1| in the host identifier are the magic string (HASH_MAGIC). It tells us that the field is hashed, rather than a plaintext hostname (or address). The rest of the field comprises two parts separated by another '|': a 160-bit salt (random string) and the 160-bit SHA1 hash result. Both values are base64 encoded.

The various OpenSSH binaries that use information in this file feed both the remote host's name (or address) and the salt to the hashing function in order to produce the hash result:

 hash = HMAC-SHA1(key = salt, data = hostname)

So, let's validate a host entry against this record the hard way. The entry above is for an IP address: 10.0.0.1.

 chris:~$ host="10.0.0.1"  
 chris:~$ salt_from_file="NWpzcOMkWUFWapbQ2ubC4NTpC9w="  
 chris:~$ salt_hexdump=$(echo $salt_from_file | base64 --decode | xxd -p)  
 chris:~$ echo -n $host | openssl sha1 -binary -mac HMAC -macopt hexkey:$salt_hexdump | base64  
 ixkHdS+8OWezxVQvPLOHGi2Oawo=  
 chris:~$   

The resulting string (ixkHdS+8OWezxVQvPLOHGi2Oawo=) is the base64 encoded hash result produced by inputting our host IP and the salt we found in the file. It's the same string that we saw in the known_hosts entry, so we know that this entry is for the host 10.0.0.1.

When adding a new record to known_hosts, the salt is a random value invented on the spot. The hash is calculated and the salt, hash and key details are written to the file.

When trying to find a record in an existing known_hosts file, the SSH program can't pick the right line directly. Instead it has to take the hostname (address) it's looking for, and compute the hash using the salt found on each line. When (if) it finds a match, then that's the line it was looking for. SHA1 happens pretty fast on modern hardware, but depending on your use case, this may be a bunch of wasted effort, particularly on systems where there's no point in obfuscating the list of SSH servers to which we connect.
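Here's the same check scripted in Python, using the salt and hash from the entry above. It's just a sketch of what the OpenSSH tools do internally:

 import base64
 import hashlib
 import hmac

 def matches(hostname, hashed_field):
     # hashed_field looks like: |1|base64(salt)|base64(HMAC-SHA1 digest)
     magic, salt_b64, hash_b64 = hashed_field.split('|')[1:]
     assert magic == '1'                       # HASH_MAGIC
     salt = base64.b64decode(salt_b64)
     digest = hmac.new(salt, hostname.encode(), hashlib.sha1).digest()
     return digest == base64.b64decode(hash_b64)

 entry = '|1|NWpzcOMkWUFWapbQ2ubC4NTpC9w=|ixkHdS+8OWezxVQvPLOHGi2Oawo='
 print(matches('10.0.0.1', entry))   # True
 print(matches('10.0.0.2', entry))   # False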




Tuesday, September 26, 2017

Pluribus Networks... Wait, where are we again?


I was privileged to visit Pluribus Networks as a delegate at Network Field Day 16 a couple of weeks ago. Somebody else paid for the trip. Details here.

Much has changed at Pluribus, I hardly recognized the place!

I quite like Pluribus (their use of Solaris under their Netvisor switching OS got me right in the feels early on) so I'm happy to report that most of what's new looks like changes for the better.

When we arrived at Pluribus HQ we were greeted by some new faces, a new logo, color scheme... Even new accent lighting in the demo area!

Gone also are the Server Switches with their monstrous control planes (though still listed on the website, they weren't mentioned in the presentation), Solaris, and a partnership with Supermicro.

In their place we found:

  • The new logo and colors
  • New faces in management and marketing
  • Netvisor running on Linux
  • Whitebox and OCP-friendly switches
  • A partnership with D-Link
  • Some Netvisor improvements

Linux

This was probably inevitable, and likely matters little to Netvisor users. When Pluribus was first getting off the ground, I was waiting for an OpenSolaris release that never happened. That Pluribus stuck with Solaris for as long as they did while Oracle was dismantling the Solaris ecosystem is kind of incredible. Netvisor on Linux is fine, I'm sure.

Switch Hardware

One of Pluribus' claims to fame was their "server switches". These were normal switches using merchant switching silicon (from 2 or 3 different vendors, if I recall... I think Netvisor has a hardware abstraction layer which allows them to switch easily between Broadcom/Intel/Mellanox ASICs), but with enormous control planes sporting lots of cores, lots of RAM, lots of storage, dedicated network processors, etc...

The big switches opened the door to some interesting possibilities, but likely made a tough sell to customers that just wanted an IP network fabric. Which is probably most customers.

These days Pluribus is selling vanilla-looking Open Compute-friendly switches with ONIE, and supporting Netvisor on a handful of 3rd party whitebox platforms.

That D-Link Partnership

Okay, quit laughing. The D-Link switch in question is Trident II based, just like (almost) every other switch in the market. If D-Link helps Pluribus move product, then I'm delighted for all involved. The only thing I don't like about the DXS-5000-54S is that it lacks an RS-232 port. USB console? Ugh. I'll run my Netvisors on something with a proper management interface, thankyouverymuch.

Netvisor

Netvisor still looks pretty great! Some standout features:
  • Netvisor uses standard protocols to interact with neighboring devices, but you manage a Netvisor fabric as a single device.
  • It's still got fantastic telemetry and flow analytics capabilities, even without the monstrous control plane. Some slightly outrageous claims were made in this area toward the end of the presentation, but we didn't have time to dig in.
  • Individual nodes are managed in-band (via the front-panel interfaces, rather than the management LAN port). Incredibly this capability is not universal in this product space. Some platforms rely on the lone management Ethernet interface for fabric control purposes. This fact blows my mind. I'm similarly surprised that whitebox switches don't tend to come with redundant control plane paths. Maybe there's a single "eth0" port baked into the Trident chip for this purpose?
  • Routing is performed by an anycast gateway. That is, moving packets from one broadcast domain to another does not require them to be hauled to a certain point in the fabric. Any Netvisor switch (the nearest Netvisor switch) will do the job. This is a welcome change.
  • Members of a Netvisor fabric don't need to be cabled to one another. This opens the door to using Netvisor only at the leaf tier in a leaf/spine fabric... Or only at the spine... Or at both layers as a single large fabric... Or at both layers, but as two fabrics (one for leaf, one for spine)... Or as smaller deployment units in a huge fabric. Lots of possibilities here.

Saturday, September 23, 2017

KEMP Presented Some Interesting Features at NFD16

KEMP Technologies presented at Network Field Day 16, where I was privileged to be a delegate. Who paid for what? Answers here.


Three facets of the KEMP presentation stood out to me:


The KEMP Management UI Can Manage Non-KEMP Devices

KEMP's centralized management UI, the KEMP 360 Controller, can manage/monitor other load balancers (ahem, Application Delivery Controllers) including AWS ELB, HAProxy, NGINX and F5 BIG-IP.

This is pretty clever: If KEMP gets into an enterprise, perhaps because it's dipping a toe into the cloud at Azure, they may manage to worm their way deeper than would otherwise have been possible. Nice work, KEMPers.

VS Motion Can Streamline Manual Deployment Workflows

KEMP's VS Motion feature allows easy service migrations between KEMP instances by copying service definitions from one box to another. It's probably appropriate when replicating services between production instances and when promoting configurations between dev/test/prod. The mechanism is described in some detail here:


The interface is pretty straightforward. It looks just like the balance transfer UI at my bank: Select the From instance, the To instance, what you want transferred (which virtual service) and then hit the Move button. The interface also sports a Copy button, so in that regard it's better than the UI at the bank. I look forward to the bank allowing me to replicate funds between accounts in the future :)

I think it struck all of the Network Field Day delegates that this feature is primarily useful for manual workflows. An automated workflow wouldn't need an "easy button" like the VS Motion feature. Unfortunately there wasn't enough time to get into KEMP's Automation/API capabilities during the presentation, but Keith Miller was tuned into the live stream and reported that the API is a pleasure to use.

Keith later followed up with a caveat: the API doesn't return structured data. That's disappointing to read.

VS Motion does not, as I understand it, have the ability to copy TLS certificates around right now, but the feature request is in.

That Strange License

Frankly, this topic from the NFD16 presentation doesn't make much sense to me.

When you're buying boxes, or even virtual capabilities that are licensed by a bandwidth cap, you're going to have paid-for-but-wasted capacity during off-peak times. KEMP has introduced a consumption-based model to work around that problem: Pay only for what you use!

It sounds great, especially with the popularity of virtual services. When talking about physical boxes, it makes sense that you'd have to pay for any overcapacity you may have provisioned: There's the box, 95% idle, waiting for that peak traffic day... Full of expensive processors and RAM... Oh, and there's the failover box, at 100% idle... You probably didn't expect to get the hardware for free, right?

The situation feels different when we're talking about virtual appliances: How much would you expect to pay for a virtual standby server? One which, if everything goes according to plan, will never see a request from a live client? You're already paying somebody else (the server vendor or IaaS provider) for the hardware, so paying KEMP based on usage seems ideal.

But they've created an altogether new problem: KEMP's consumption based license model finds the peak throughput (at 5 minute intervals) of each participating node, then adds them up to calculate the monthly bill.

Let's imagine that your organization has a rock-steady 1Gb/s flow rate through an active/standby pair of KEMP boxes, plus a DR facility somewhere.

Every month you pay for 1Gb/s of usage.

Then one day the active unit fails, load switches to the standby unit. Several hours later, you shift workload to the DR site while performing maintenance to restore the failed hardware in the main site.

Take the peak throughput from each KEMP unit: Active (failed), Standby (now active) and DR have each hit 1Gb/s. That month you'll pay for 3Gb/s, even though the workload never changed. You just moved it around.
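The billing arithmetic from that scenario, as a quick sketch (the per-node throughput samples are just the numbers from the story above):

 # monthly bill = sum, across nodes, of each node's peak 5-minute sample (Gb/s)
 def sum_of_peaks(samples_by_node):
     return sum(max(samples) for samples in samples_by_node.values())

 quiet_month = {'active': [1.0, 1.0], 'standby': [0.0], 'dr': [0.0]}
 failover_month = {'active': [1.0, 0.0], 'standby': [0.0, 1.0], 'dr': [0.0, 1.0]}

 print(sum_of_peaks(quiet_month))     # 1.0 -- pay for 1Gb/s
 print(sum_of_peaks(failover_month))  # 3.0 -- pay for 3Gb/s, same 1Gb/s workload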

It seems like anybody with any degree of workload mobility will be overpaying with this model, unless the per-bandwidth price is also quite low.

I'd be much more comfortable paying per byte, per TLS setup or per load-balanced request. The sum-of-peaks model seems too unpredictable to me.

Thursday, August 31, 2017

Using FQDN for DMVPN hubs

I've done some testing with specifying DMVPN hubs (NHRP servers, really) using their DNS name, rather than IP address.

This matters to me because of some goofy environments where spoke routers can't predict what network they'll be on (possibly something other than internet), and where I can't leverage multiple hubs per tunnel due to a control plane scaling issue.

The DNS-based configuration includes the following:

 interface Tunnel1  
  ip nhrp nhs dynamic nbma dmvpn-pool.fragmentationneeded.net  

There's no longer a requirement for any ip nhrp map or ip nhrp nhs x.x.x.x configuration when using this new capability.

My testing included some tunnels with very short ISAKMP and IPSec re-key intervals. I found that the routers performed the DNS resolution just once. They didn't go back to DNS again for as long as the hub was reachable.

Spoke routers which failed to establish a secure connection for whatever reason would re-resolve the hub address each time the DNS response expired its TTL. But once they succeeded in connecting, I observed no further DNS traffic for as long as the tunnel survived.

The record I published (dmvpn-pool.fragmentationneeded.net above) includes multiple A records. The DNS server randomizes the record order in its responses and spoke routers always connected to the first address on the list.

The random-ordered DNS response makes for a kind of nifty load balancing and failover capability:

  1. The spokes will naturally balance across the population of hubs, depending on the whim of the DNS server
  2. I don't strictly need a smart (GSLB style) DNS server to effect failover, because spokes will eventually find their way to a working hub, even with bad records in the list.


With 3 hub routers, the following happens when one fails:

  • At T=0, 67% of the routers remain connected.
  • At T=<keepalive>s, 89% of routers are connected (2/3 of the orphans are back online. The others are trying the dead hub again).
  • At T=TTLx1, 96% of routers are connected (1/3 of the orphans from the previous interval tried the dead hub a second time)
  • At T=TTLx2, 99% of routers are back online
Things recover fairly quickly with short TTL intervals, even without a GSLB because the spokes keep trying, and only need to find a working record once. This DMVPN tunnel isn't the only path in my environment, so a couple of minutes outage is acceptable.
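Here's the arithmetic behind those percentages, as a quick model. It assumes the resolver returns the three A records in random order, so an orphaned spoke has a 2-in-3 chance of landing on a live hub each time it retries:

 connected = 2.0 / 3      # spokes that weren't using the dead hub
 orphans = 1.0 / 3
 for interval in ('keepalive', 'TTL x1', 'TTL x2'):
     recovered = orphans * 2.0 / 3
     connected += recovered
     orphans -= recovered
     print('%-9s %.0f%% connected' % (interval, connected * 100))
 # keepalive 89% connected
 # TTL x1    96% connected
 # TTL x2    99% connected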


A 60 second TTL will result in ~40K queries/month for each spoke that can't connect (problems with firewall, overload NAT, credentials, etc...), so watch out for that if you're using a service that causes you to pay per query :)

Wednesday, August 30, 2017

Small Site Multihoming with DHCP and Direct Internet Access

Cisco recently (15.6.3M2) resolved CSCve61996, which makes it possible to fail internet access back and forth between two DHCP-managed interfaces in two different front-door VRFs attached to consumer-grade internet service.

Prior to the IOS fix there was a lot of weirdness with route configuration on DHCP interfaces assigned to VRFs.

I'm using a C891F-K9 for this example. The WAN interfaces are Gi8 and Fa0. They're in F-VRFs named ISP_A and ISP_B respectively.


First, create the F-VRFs and configure the interfaces:

 ip vrf ISP_A  
 ip vrf ISP_B  
   
 interface GigabitEthernet8  
  ip vrf forwarding ISP_A  
  ip dhcp client default-router distance 10  
  ip address dhcp  
 interface FastEthernet0  
  ip vrf forwarding ISP_B  
  ip dhcp client default-router distance 20  
  ip address dhcp  

The distance commands above assign the AD of the DHCP-assigned default route. Without these directives the distance would be 254 in each VRF. They're modified here because we'll be using the distance to select the preferred internet path when both ISPs are available.

Next, let's keep track of whether or not the internet is working via each provider. In this case I'm pinging 8.8.8.8 via both paths, but this health check can be whatever makes sense for your situation. So, a couple of IP SLA monitors and track objects are in order:

 ip sla 1  
  icmp-echo 8.8.8.8  
  vrf ISP_A  
  threshold 500  
  timeout 1000  
  frequency 1  
 ip sla schedule 1 life forever start-time now  
 track 1 ip sla 1  
   
 ip sla 2  
  icmp-echo 8.8.8.8  
  vrf ISP_B  
  threshold 500  
  timeout 1000  
  frequency 1  
 ip sla schedule 2 life forever start-time now  
 track 2 ip sla 2  

Ultimately we'll be withdrawing the default route from each VRF when we determine that the internet has failed. This introduces a problem: with the default route missing, the SLA target will be unreachable. The SLA (and track) will never recover, so the default route will never be restored. So first let's add a static route to our SLA target in each VRF. The default route will get withdrawn, but the host route for the SLA target will persist in each VRF.

 ip route vrf ISP_A 8.8.8.8 255.255.255.255 dhcp 50  
 ip route vrf ISP_B 8.8.8.8 255.255.255.255 dhcp 60  

We used the dhcp keyword as a stand-in for the next-hop IP address. We could have just specified the interface, but specifying a multiaccess interface without a neighbor ID is an ugly practice and assumes that proxy ARP is available from neighboring devices. Not a safe assumption.

Finally, we can set the default route to be withdrawn when the track object goes down:

 interface GigabitEthernet8  
  ip dhcp client route track 1  
   
 interface FastEthernet0  
  ip dhcp client route track 2  

At this point, when everything is healthy, the routing table for ISP_A looks something like this:

 S*  0.0.0.0/0 [10/0] via 192.168.1.126  
    8.0.0.0/32 is subnetted, 1 subnets  
 S    8.8.8.8 [50/0] via 192.168.1.126  
    192.168.1.0/24 is variably subnetted, 2 subnets, 2 masks  
 C    192.168.1.64/26 is directly connected, GigabitEthernet8  
 L    192.168.1.67/32 is directly connected, GigabitEthernet8  

The table for ISP_B looks similar, but with different Administrative Distances. On failure of the SLA/track the default route gets withdrawn but the 8.8.8.8/32 route persists. That looks like this:

    8.0.0.0/32 is subnetted, 1 subnets  
 S    8.8.8.8 [50/0] via 192.168.1.126  
    192.168.1.0/24 is variably subnetted, 2 subnets, 2 masks  
 C    192.168.1.64/26 is directly connected, GigabitEthernet8  
 L    192.168.1.67/32 is directly connected, GigabitEthernet8  

When the ISP is healed, the 8.8.8.8/32 ensures that we'll notice, the SLA will recover, and the default route will be restored.

Okay, now it's time to think about leaking these ISP_A and ISP_B routes into the global routing table (GRT). First, we need an interface in the GRT for use by directly connected clients:

 interface Vlan10  
  ip address 10.10.10.1 255.255.255.0  

And now the leaking configuration:

 ip prefix-list PL_DEFAULT_ONLY permit 0.0.0.0/0  
   
 route-map RM_IMPORT_TO_GRT permit  
  match ip address prefix-list PL_DEFAULT_ONLY  
   
 global-address-family ipv4  
  route-replicate from vrf ISP_A unicast static route-map RM_IMPORT_TO_GRT  
  route-replicate from vrf ISP_B unicast static route-map RM_IMPORT_TO_GRT  

The configuration above leaks only the default route from each F-VRF. The GRT will be offered both routes and will make its selection based on the AD we configured earlier (values 10 and 20).

Here's the GRT with everything working:

 S* + 0.0.0.0/0 [10/0] via 192.168.1.126 (ISP_A)  
    10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks  
 C    10.10.10.0/24 is directly connected, Vlan10  
 L    10.10.10.1/32 is directly connected, Vlan10  

When the ISP_A path fails, the GRT fails over to the higher distance route via ISP_B:

 S* + 0.0.0.0/0 [20/0] via 192.168.1.62 (ISP_B)  
    10.0.0.0/8 is variably subnetted, 2 subnets, 2 masks  
 C    10.10.10.0/24 is directly connected, Vlan10  
 L    10.10.10.1/32 is directly connected, Vlan10  

Strictly speaking, it's not necessary to have the SLA monitor, track object and conditional routing in VRF ISP_B. All of those things could be omitted and the GRT would still fail back and forth between the two F-VRFs based only on the tests in ISP_A. But I like the symmetry.

Okay, so now that we've got the GRT's default route flopping back and forth between these two front-door VRFs, we'll need some NAT. First, enable NVI mode on each interface in the transit path:

 interface GigabitEthernet8   
  ip nat enable  
 interface FastEthernet0  
  ip nat enable  
 interface Vlan10  
  ip nat enable  

Next we'll spell out exactly what's going to get NATted. I like to use route-maps rather than ACLs because the templating is easier when we're matching interfaces rather than ip prefixes:

 route-map RM_NAT->ISP_A permit 10  
  match interface GigabitEthernet8  
   
 route-map RM_NAT->ISP_B permit 10  
  match interface FastEthernet0  
   
 ip nat source route-map RM_NAT->ISP_A interface GigabitEthernet8 overload  
 ip nat source route-map RM_NAT->ISP_B interface FastEthernet0 overload  

That's basically it. The last thing that might prove useful is to automate purging of NAT translation tables when switching between providers. TCP flows can't survive the ISP switchover, and clearing the NAT translations for active flows should make them fail faster than they might have otherwise.
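I haven't worked out that automation here, but an EEM applet watching the ISP_A track object seems like the obvious hook. A rough sketch (the applet name and action numbering are mine; reacting to both up and down transitions covers a switchover in either direction, since ISP_A's health is what moves the traffic):

 event manager applet PURGE_NAT
  event track 1 state any
  action 1.0 cli command "enable"
  action 2.0 cli command "clear ip nat translation *"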

Saturday, June 17, 2017

Serial Pinout for APC

This is just a quick note to remind me how to make serial cables for APC power strips. This cable works between an APC AP8941 and an Opengear terminal server with Cisco-friendly (-X2 in Opengear nomenclature) pinout.


Only pins 3, 4 and 6 are populated on the 8P8C end. It probably doesn't matter whether the ground pin (black) lands on pin 4 or 5 because both should be ground on the Opengear end. The yellow wire is unused.

Tuesday, March 21, 2017

Cisco: Not Serious About Network Programmability

"You can't fool me, there ain't no sanity clause!"
Cisco isn't known for providing easy programmatic access to their device configurations, but has recently made some significant strides in this regard.

The REST API plugin for newer ASA hardware is an example of that. It works fairly well, supports a broad swath of device features, is beautifully documented and has an awesome interactive test/dev dashboard. The dashboard even has the ability to spit out example code (java, javascript, python) based on your point/click interaction with it.

It's really slick.

But I Can't Trust It

Here's the problem: It's an un-versioned REST API, and the maintainers don't hesitate to change its behavior between minor releases. Here's what's different between 1.3(2) and 1.3(2)-100:

New Features in ASA REST API 1.3(2)-100

Released: February 16, 2017
As a result of the fix for CSCvb21388, the response type of /api/certificate/details was changed from the CertificateDetails object to a list of CertificateDetails. Scripts utilizing this API will need to be modified accordingly.

So, any code based on earlier documentation is now broken when it calls /api/certificate/details.
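If you're stuck supporting both behaviors, about the best you can do is normalize the response shape in your own code. A minimal sketch, assuming basic auth against a reachable ASA (the hostname, credentials and wrapper function are mine, not Cisco's):

 import requests

 def certificate_details(host, auth):
     """Return a list of certificate detail records regardless of API version."""
     resp = requests.get('https://%s/api/certificate/details' % host,
                         auth=auth, verify=False)
     resp.raise_for_status()
     body = resp.json()
     # pre-1.3(2)-100 returns a single object; later releases return a list
     return body if isinstance(body, list) else [body]

 for cert in certificate_details('asa.example.com', ('admin', 'secret')):
     print(cert)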

This Shouldn't Happen

Don't take my word for it:

"Remember that an API is a published contract between a Server and a Consumer. If you make changes to the Server's API and these changes break backwards compatibility, you will break things for your Consumer and they will resent you for it."

It Gets Worse

Not only does the API fail to provide consistently formatted responses, it doesn't even provide a way to discover its version. Cisco advised me to scrape the 'show version' CLI output in order to divine the correct way to parse the API's responses. Whenever they decide to change things.

The irony of having to abandon the API for screen scraping in order to improve API compatibility is almost too much to bear. Let's assume for the moment that I'm willing to do it. Will the regex that finds the API version today still work on tomorrow's release? Do I even know how to parse the version numbers?

What's the version number of the current release anyway?

  • 1.3(2)-100 (according to the release notes above)
  • 1.3.2.100 (according to show version CLI output)
  • 1.3.2 (according to the 'release:' field on the download page)
This does not look like a road I'm going to enjoy traveling.

Would You Use This API?

When I inquired about version-to-version incompatibilities, Cisco's initial response was:
"This definitely shouldn't be happening."
Followed by:
"We are aware of the limitations resulting for not having versioned ASA REST API releases. And as of now there are no plans for us to fix this."
 Further followed by:
"we will update the documentation to reflect the correct behavior, once we post this fix to CCO."
So hey, no problem right? We might sneak breaking changes into the smallest of maintenance releases, but at least we'll document it! Have fun selling and supporting your application!

Clearly I am one of the angry and resentful customers predicted by the articles quoted above :)

Friday, March 17, 2017

Epoch Rollover: Coming Two Years Early To A Router Near You!

The 2038 Problem

Broken Time? -  Roeland van der Hoorn
Many computer systems and applications keep track of time by counting the seconds from "the epoch", an arbitrary date. Epoch for UNIX-based systems is the stroke of midnight in Greenwich on 1 January 1970.

Lots of application functions and system libraries keep track of the time using a 32-bit signed integer, which has a maximum value of around 2.1 billion. It's good for a bit more than 68 years worth of seconds.

Things are likely to get weird 2.1 billion seconds after the epoch on January 19th, 2038.

As the binary counter rolls over from 01111111111111111111111111111111 to 10000000000000000000000000000000, the sign bit gets flipped. The counter will have changed from its farthest reach after the epoch to its farthest reach before the epoch. Time will appear to have jumped from early 2038 to late 1901.

Things might even get weird within the next year (January 2018!) as systems begin to encounter freshly minted CA certificates with expirations after the epoch rollover (it's common for CA certificates to last for 20 years). These certificates may appear to have expired in late 1901, over a century prior to their creation.

NTP's 2036 Problem

NTP has a similar, but not-quite-the-same epoch problem. It keeps track of seconds in an unsigned 32-bit value, so it can count twice as high as the problematic UNIX counter (yay!) but NTP's epoch is set 70 years earlier: 1 January 1900 (boo!) The result is that NTP's counter will roll over about 2 years before the UNIX counter.

Practically speaking, NTP's going to be fine for reasons having to do with it being primarily concerned about small offsets in relative time, and it only having to be within 68 years of correct on startup in order to sync up with an authoritative time source.
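A quick sanity check on both counters' limits, letting Python's datetime do the math:

 from datetime import datetime, timedelta

 unix_epoch = datetime(1970, 1, 1)
 ntp_epoch = datetime(1900, 1, 1)

 # last second a signed 32-bit UNIX counter can represent, and where the wrap lands
 print(unix_epoch + timedelta(seconds=2**31 - 1))   # 2038-01-19 03:14:07
 print(unix_epoch - timedelta(seconds=2**31))       # 1901-12-13 20:45:52

 # last second of NTP era 0: unsigned 32 bits counted from 1900
 print(ntp_epoch + timedelta(seconds=2**32 - 1))    # 2036-02-07 06:28:15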

So What's Up With This Router?

Here's a weird thing I stumbled across recently. Time calculations with dates in 2036 are going wrong but they're unrelated to NTP:

 router#show crypto pki certificates test-1 
 CA Certificate  
  Status: Available  
  Certificate Serial Number (hex): 14  
  Certificate Usage: Signature  
  Issuer:   
   cn=test-2  
  Subject:   
   cn=test-2  
  Validity Date:   
   start date: 02:38:26 UTC Mar 17 2017  
   end  date: 00:00:00 UTC Jan 1 1900  
  Associated Trustpoints: test-1   

But this one looks okay:

 router#show crypto pki certificates test-2
 CA Certificate  
  Status: Available  
  Certificate Serial Number (hex): 12  
  Certificate Usage: Signature  
  Issuer:   
   cn=test-1  
  Subject:   
   cn=test-1  
  Validity Date:   
   start date: 02:37:31 UTC Mar 17 2017  
   end  date: 06:28:15 UTC Feb 7 2036  
  Associated Trustpoints: test-2   

The real expiration dates of these certificates are just one second apart:

 $ openssl x509 -in test-1.crt -noout -enddate
 notAfter=Feb 7 06:28:16 2036 GMT
 $ openssl x509 -in test-2.crt -noout -enddate
 notAfter=Feb 7 06:28:15 2036 GMT

So... That's unfortunate. Feb 7 06:28:16 2036 is exactly 2^32 seconds after 1 January 1900, so an unsigned 32-bit counter with the NTP-style epoch wraps right there. The wrapped value renders as 00:00:00 UTC Jan 1 1900, while 06:28:15 is the last second that fits.

Here's the actual certificate data and import procedure used for this experiment in case you feel inclined to test:

 crypto pki trustpoint test-1  
  enrollment terminal  
 crypto pki authenticate test-1  
 -----BEGIN CERTIFICATE-----  
 MIIBeDCCASKgAwIBAgIBFDANBgkqhkiG9w0BAQUFADARMQ8wDQYDVQQDDAZ0ZXN0  
 LTIwIBcNMTcwMzE3MDIzODI2WhgPMjAzNjAyMDcwNjI4MTZaMBExDzANBgNVBAMM  
 BnRlc3QtMjBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQDUjEccGNjjtv8lKNnvGpta  
 Z4x8LB82D2JJwTcvA5blUI2nr4vF41RqG0ifZ+Qtyqo+ntSD2QzDu3LKdSUw46if  
 AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud  
 DgQWBBQ2NpEF0FG/g3ryNgU7Skjbm4IGHTAfBgNVHSMEGDAWgBQ2NpEF0FG/g3ry  
 NgU7Skjbm4IGHTANBgkqhkiG9w0BAQUFAANBAIVyT+iBimH7c/jtBrFGmKq+7YdM  
 eMwf9I/En/TAUqtte7QGLNRyTgBJvGgN/uc0KUjlZ5D6G/kxTwDtzse2Uow=  
 -----END CERTIFICATE-----  
 quit  
   
 crypto pki trustpoint test-2  
  enrollment terminal  
 crypto pki authenticate test-2  
 -----BEGIN CERTIFICATE-----  
 MIIBeDCCASKgAwIBAgIBEjANBgkqhkiG9w0BAQUFADARMQ8wDQYDVQQDDAZ0ZXN0  
 LTEwIBcNMTcwMzE3MDIzNzMxWhgPMjAzNjAyMDcwNjI4MTVaMBExDzANBgNVBAMM  
 BnRlc3QtMTBcMA0GCSqGSIb3DQEBAQUAA0sAMEgCQQDUjEccGNjjtv8lKNnvGpta  
 Z4x8LB82D2JJwTcvA5blUI2nr4vF41RqG0ifZ+Qtyqo+ntSD2QzDu3LKdSUw46if  
 AgMBAAGjYzBhMA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1Ud  
 DgQWBBQ2NpEF0FG/g3ryNgU7Skjbm4IGHTAfBgNVHSMEGDAWgBQ2NpEF0FG/g3ry  
 NgU7Skjbm4IGHTANBgkqhkiG9w0BAQUFAANBAIjboo8wtehMpOReLw01tW8MLYzl  
 rtpwYVGoHCVVpXU+s7YQtfR1pt5ZVHZ8OVeP8SoTtoS+5k97aWgBZ+hu8/M=  
 -----END CERTIFICATE-----  
 quit  

Wednesday, February 1, 2017

Docker's namespaces - See them in CentOS

In the Docker Networking Cookbook (I got my copy directly from Packt Publishing), Jon Langemak explains why the iproute2 utilities can't see Docker's network namespaces: Docker creates its namespace objects in /var/run/docker/netns, but iproute2 expects to find them in /var/run/netns.

Creating a symlink from /var/run/docker/netns to /var/run/netns is the obvious solution:

 $ sudo ls -l /var/run/docker/netns  
 total 0  
 -r--r--r--. 1 root root 0 Feb 1 11:16 1-6ledhvw0x2  
 -r--r--r--. 1 root root 0 Feb 1 11:16 ingress_sbox  
 $ sudo ip netns list  
 $ sudo ln -s /var/run/docker/netns /var/run/netns  
 $ sudo ip netns list  
 1-6ledhvw0x2 (id: 0)  
 ingress_sbox (id: 1)  
 $  

But there's a problem. Look where this stuff is mounted:

 $ ls -l /var/run  
 lrwxrwxrwx. 1 root root 6 Jan 26 20:22 /var/run -> ../run  
 $ df -k /run  
 Filesystem   1K-blocks Used Available Use% Mounted on  
 tmpfs      16381984 16692 16365292  1% /run  
 $   

The symlink won't survive a reboot because it lives in a memory-backed filesystem. My first instinct was to have a boot script (say /etc/rc.d/rc.local) create the symlink, but there's a much better way.

Fine, I'm starting to like systemd

Systemd's tmpfiles.d is a really elegant way of handling touch files, symlinks, empty directories, device nodes, pipes and whatnot which live in volatile filesystems. The feature works from these directories:
  • /etc/tmpfiles.d
  • /run/tmpfiles.d
  • /usr/lib/tmpfiles.d
When the directives found in these directories contradict one another, the instance I've listed earlier wins. This allows an administrator to override package declarations in /usr/lib/tmpfiles.d by creating an entry in /etc/tmpfiles.d. Conflicts between files are resolved by the order of their appearance in a lexical sort.

So, what goes in these directories? Files named <whatever>.conf. Each line in these files controls creation of a file / folder / symlink / etc... There are switches and options to control ownership, permissions, overwrite condition, contents, and so forth.

Here's the file that causes systemd to create my symlink on every boot:

 $ cat /etc/tmpfiles.d/netns.conf   
 L /run/netns - - - - ./docker/netns  
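If you don't want to wait for a reboot to prove the entry works, systemd-tmpfiles can be asked to process the file on demand:

 $ sudo systemd-tmpfiles --create /etc/tmpfiles.d/netns.conf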

I'm still not quite ready to forgive systemd for taking away the udev network interface naming persistency stuff and replacing it with something that's useless in virtual machines (this helps). But I'm getting there.

Lately I've been really liking each new facet of systemd as I've discovered it.

Tuesday, January 31, 2017

Anuta Networks NCX: Overcoming Skepticism

Anuta Networks demonstrated their NCX network/service orchestration product at Network Field Day 14.

Disclaimer
Anuta Networks page at TechFieldDay.com with videos of their presentations

Anuta's promise with NCX is to provide a vendor and platform agnostic network provisioning tool with a slick user interface and powerful management / provisioning features.

I was skeptical, especially after seeing the impossibly long list of supported platforms.

Impossible!
Network device configurations are complicated! They've got endless features, each of which is tied to the others in unpredictable ways. Sure, seasoned network ops folks have no problem hopping around a text configuration to discover the ways in which ACLs, prefix lists, route maps, class maps, service policies, interfaces, and whatnot relate to one another... But capturing these complicated relationships in a GUI? In a vendor independent way?

I left the presentation with an entirely different perspective, and a desire to try it out on a network I manage. Seriously, I have a use case for this thing. Here's why I was wrong:

Not a general purpose UI
Okay, so it's a provisioning system, not a general purpose UI. Setup is likely nontrivial because it requires you to consider the types of services and related configurations you deploy in your network, and then express those possibilities in a simple form. Examples of things you might choose to express are:

  • For an MPLS PE device, are we 802.1Q tagging the traffic at the customer handoff? If so, what tag? That's a checkbox and a text field.
  • For a DMVPN router, do we want to allow direct internet access, or backhaul everything to HQ? A checkbox.
  • For various WAN interfaces, choose from a list of provider types in order to get the correct QoS templates applied
Pre-built Templates
NCX ships with understanding of how to do lots of things (create a VLAN, configure spanning tree, define a BGP neighbor) on lots of different platforms. Much of the work is already done. We're not reinventing the wheel with Ansible here.

Vendors and Features
The vendor list was huge, but let's just consider a Cisco for the moment. What does it mean for Cisco platforms to be "supported" by NCX? Apparently everything in the IWAN deployment guide is supported. That's a pretty complete list of features: routing protocols, QoS, PBR, security, etc... I'm sure they don't have crazy corner case stuff (using appletalk, are you?) but that's okay because...

Extensibility
Need to use a feature that NCX doesn't know about? The toolkit for defining your own features, exposing them in the UI and getting them expressed as device configuration looked pretty straightforward. It boiled down to expressing a YANG model of the feature in question and then mapping that to device specific NETCONF/CLI/REST/whathaveyou configuration directives.

Final Fragments
  • Offline devices which have missed several cycles of updates do not receive that series of individual updates when they come back online. NCX somehow flattens the queued changes into a single update prior to delivering it. Frankly, this blows my mind. It suggests a surprising level of intimacy with the device configuration. Imagine, for example, if one of the updates included the bgp upgrade-cli directive. NCX would anticipate the result and merge subsequent address-family ipv4 directives? Maybe I misunderstood the answer to my question on this topic :)
  • NCX knows about its management path to the devices in question, and is careful not to lock itself out. I didn't get a lot of clarity about how this works, but there's no question the guys behind it are thoughtful and clever.

Friday, January 6, 2017

ERSPAN on Comware

The Comware documentation doesn't spell it out clearly, but it's possible to get ERSPAN-like functionality by using a GRE tunnel interface as the target for a local port mirror session.

This is very handy for quick analysis of stuff that's not L2 adjacent with an analysis station.

First, create a local mirror session:

 mirroring-group 1 local  

Next configure an unused physical interface for use by tunnel interfaces:

 service-loopback group 1 type tunnel  
 interface <unused-interface>  
  port service-loopback group 1  
  quit  

Now configure a GRE tunnel interface as the destination for the mirror group:

 interface Tunnel0 mode gre  
  source <whatever>  
  destination <machine running wireshark>  
  mirroring-group 1 monitor-port  
  quit  

Finally, configure the source interface(s):

 interface <interesting-source-interface-1>  
  mirroring-group 1 mirroring-port inbound  
 interface <interesting-source-interface-2>
  mirroring-group 1 mirroring-port inbound  

Traffic from the source interfaces arrives at the analyzer with extra Ethernet/IP/GRE headers attached. Inside each GRE payload is the original frame as collected at a mirroring-group source interface. If the original traffic with extra headers attached (14+20+4 == 38 bytes) exceeds MTU, then the switch fragments the frame. Nothing gets lost and Wireshark handles it gracefully.
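If you'd rather do quick triage without Wireshark, a scapy sketch along these lines (the interface name is an assumption) can strip the outer headers and summarize the original frames:

 #!/usr/bin/env python2.7

 from scapy.all import *

 def decap(pkt):
     # each mirrored frame arrives wrapped in outer Ethernet/IP/GRE headers;
     # the GRE payload is the original frame from the mirroring-group source port
     if GRE in pkt:
         print(pkt[GRE].payload.summary())

 # IP protocol 47 is GRE
 sniff(iface='eth0', filter='ip proto 47', prn=decap, store=0)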