Saturday, December 21, 2019

How do RFC3161 timestamps work?

RFC3161 exists to demonstrate that a particular piece of information existed at a certain time, by relying on a timestamp attestation from a trusted 3rd party. It's the cryptographic analog of relying on the date found on a postmark or a notary public's stamp.

How does it work? Let's timestamp some data and rip things apart as we go.

First, we'll create a document and have a brief look at it. The document will be one million bytes of random data:

 $ dd if=/dev/urandom of=data bs=1000000 count=1  
 1+0 records in  
 1+0 records out  
 1000000 bytes transferred in 0.039391 secs (25386637 bytes/sec)  
 $ ls -l data  
 -rw-r--r-- 1 chris staff 1000000 Dec 21 14:10 data  
 $ shasum data  
 3de9de784b327c5ecec656bfbdcfc726d0f62137 data  
 $   

Next, we'll create a timestamp request based on that data. The -cert option asks the timestamp authority (TSA) to include its identity (certificate chain) in the reply, and -no_nonce omits anti-replay protection from the request. Without -no_nonce, the request would include a large random number that the TSA must echo back in its signed reply.

 $ openssl ts -query -cert -no_nonce < data | hexdump -C  
 Using configuration from /opt/local/etc/openssl/openssl.cnf  
 00000000 30 29 02 01 01 30 21 30 09 06 05 2b 0e 03 02 1a |0)...0!0...+....|  
 00000010 05 00 04 14 3d e9 de 78 4b 32 7c 5e ce c6 56 bf |....=..xK2|^..V.|  
 00000020 bd cf c7 26 d0 f6 21 37 01 01 ff                |...&..!7...|  
 0000002b  
 $   

The timestamp request is only 43 bytes, so we're definitely not sending the actual document (one million bytes) to the TSA. So what's in the request?

Perusing the data, a couple of things (the two mandatory items, in fact) stand out. First, the SHA1 hash of the data that we captured earlier appears here in the timestamp request. Second, the OID specifying SHA1 is included. The OID (1.3.14.3.2.26) is a little less recognizable because of the insane way OIDs are encoded in ASN.1 (section 8.19), but there it is. The only other information included in the request is the result of the -cert flag, which appears as a boolean (true) in the last 3 bytes. I found that the TSA refused to respond if I failed to request that the signer include its certificate chain.
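
If squinting at hexdumps isn't your thing, openssl will decode the request's ASN.1 structure for us. (Just a sanity check; asn1parse is a generic ASN.1 pretty-printer, so it labels raw types rather than RFC3161 field names):

 # Decode the DER request: expect the version INTEGER, the sha1 OBJECT  
 # identifier, the 20-byte digest OCTET STRING, and the trailing  
 # certReq BOOLEAN.  
 $ openssl ts -query -cert -no_nonce < data | openssl asn1parse -inform DER  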

Okay, let's re-generate that request and send it to a real TSA:

 $ openssl ts -query -cert -no_nonce < data | curl --data-binary @- http://timestamp.entrust.net/TSS/RFC3161sha2TS | hexdump -C  
 Using configuration from /opt/local/etc/openssl/openssl.cnf  
  % Total  % Received % Xferd Average Speed  Time  Time   Time Current  
                  Dload Upload  Total  Spent  Left Speed  
 100 3773 100 3730 100  43 24064  277 --:--:-- --:--:-- --:--:-- 24500  
 00000000 30 82 0e 8e 30 03 02 01 00 30 82 0e 85 06 09 2a |0...0....0.....*|  
 00000010 86 48 86 f7 0d 01 07 02 a0 82 0e 76 30 82 0e 72 |.H.........v0..r|  
 00000020 02 01 03 31 0b 30 09 06 05 2b 0e 03 02 1a 05 00 |...1.0...+......|  
 00000030 30 81 e2 06 0b 2a 86 48 86 f7 0d 01 09 10 01 04 |0....*.H........|  
 00000040 a0 81 d2 04 81 cf 30 81 cc 02 01 01 06 09 60 86 |......0.......`.|  
 00000050 48 86 fa 6c 0a 03 05 30 21 30 09 06 05 2b 0e 03 |H..l...0!0...+..|  
 00000060 02 1a 05 00 04 14 3d e9 de 78 4b 32 7c 5e ce c6 |......=..xK2|^..|  
 00000070 56 bf bd cf c7 26 d0 f6 21 37 02 06 5d eb 11 2b |V....&..!7..]..+|  
 00000080 86 c9 18 13 32 30 31 39 31 32 32 31 32 30 30 38 |....201912212008|  
 00000090 30 36 2e 30 30 34 5a 30 04 80 02 01 f4 a0 76 a4 |06.004Z0......v.|  
 000000a0 74 30 72 31 0b 30 09 06 03 55 04 06 13 02 43 41 |t0r1.0...U....CA|  
 000000b0 31 10 30 0e 06 03 55 04 08 13 07 4f 6e 74 61 72 |1.0...U....Ontar|  
 000000c0 69 6f 31 0f 30 0d 06 03 55 04 07 13 06 4f 74 74 |io1.0...U....Ott|  
 000000d0 61 77 61 31 16 30 14 06 03 55 04 0a 13 0d 45 6e |awa1.0...U....En|  
 000000e0 74 72 75 73 74 2c 20 49 6e 63 2e 31 28 30 26 06 |trust, Inc.1(0&.|  
 000000f0 03 55 04 03 13 1f 45 6e 74 72 75 73 74 20 54 69 |.U....Entrust Ti|  
 00000100 6d 65 20 53 74 61 6d 70 69 6e 67 20 41 75 74 68 |me Stamping Auth|  
 00000110 6f 72 69 74 79 a0 82 0a 23 30 82 05 08 30 82 03 |ority...#0...0..|  
 <...4KB Later...>
 00000e80 7a 1b a2 95 ec cc 1e d5 33 06 c3 69 61 6a a5 15 |z.......3..iaj..|  
 00000e90 53 0a                                           |S.|  
 00000e92  
 $   

Holy cow! Our 43-byte request provoked a ~4KB response! What is this thing?

Naturally, it's an ASN.1 document, just like everything else involving cryptography. OID 1.2.840.113549.1.7.2 indicates that it is PKCS#7 signed data (that crazy OID formatting again). The signed data includes:

  • The OID indicating a SHA1 hash and the hash result from our request both appear.
  • OID 2.16.840.114028.10.3.5 indicates the Entrust (their ORG ID is 114028) timestamp policy under which this result was issued. See RFC3161(2.1.5). It seems like there should be a document describing their practices somewhere, but I can't find it.
  • Next, we find that the signed document indicates it was created at 2019-12-21 20:08:06.004 UTC (this is the whole point).
  • The value 01F4 (500 in decimal) indicates timestamp accuracy of 500ms. This optional field can represent seconds, milliseconds and microseconds, but this response only includes milliseconds.
  • The readable strings following the accuracy field are optional timestamp GeneralName data identifying the TSA.
  • The rest of the response (most of it, really) is PKCS#7 (signed data) overhead including the signer's signature and certificate chain.

So... Neat! We submitted a hash of some data to the TSA, and it replied with our hash, the time it saw it, and a verifiable signature. This definitely beats dropping hashes on Twitter to prove you had some data at a particular time.
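
If you capture the exchange in files, openssl can also verify the reply against the original document. (A sketch: request.tsq, response.tsr and cacert.pem are names I've made up, and you'd need to fetch the TSA's root certificate into cacert.pem yourself):

 # Save the request and the TSA's reply this time  
 $ openssl ts -query -cert -no_nonce < data > request.tsq  
 $ curl --data-binary @request.tsq http://timestamp.entrust.net/TSS/RFC3161sha2TS > response.tsr  
   
 # Pretty-print the reply...  
 $ openssl ts -reply -in response.tsr -text  
   
 # ...and verify the TSA's signature over our data  
 $ openssl ts -verify -data data -in response.tsr -CAfile cacert.pem  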

One of the places timestamps of this sort come up is code signing. Unlike a live transaction (say, a TLS handshake), a software release can be back-dated by the entity that signs it. We generally want software builds to keep working forever once they're created and signed, so how do you stop a signer from producing a back-dated but apparently valid release after their signing certificate has expired or been compromised? The best answer we've got is to include a 3rd party who can be trusted not to fake the date, so software signatures might include one of these timestamps.

Saturday, December 14, 2019

Physically man-in-the-middling an IoT device with Linux Bridge

This is a quick writeup of how I did some analysis of an IoT device (The Thing) by physically inserting a Linux box into the network path between The Thing and the network service it consumed. The approach described here involves being physically close to the target system, but it should work equally well1 anywhere there's an Ethernet link along the path between The Thing and its server.


First, the topology: The Thing is attached to an Ethernet switch and is part of the 192.168.1.0/24 subnet. We'll be physically inserting ourselves into the path of the red cable in this diagram.

Initial setup

The first step is to get a dual-homed Linux box into the path. I used an Ubuntu 18.04 machine with the following netplan configuration:

 network:  
   version: 2  
   renderer: networkd  
   ethernets:  
     eth0:  
       dhcp4: no  
     eth1:  
       dhcp4: no  
   bridges:  
     br0:  
       addresses: [192.168.1.2/24]  
       gateway4: 192.168.1.1  
       interfaces:  
         - eth0  
         - eth1  


This configuration defines an internal software-based bridge for handling traffic between The Thing and the switch. Additionally, it creates an IP interface for the Linux host to communicate with neighbors attached to the bridge (everybody on 192.168.1.0/24). The Thing's TCP connection to the server is uninterrupted, even with the MITM box cabled inline like this:

MITM box with software bridge deployment

Now traffic to and from The Thing flows through the Linux machine. Just... Like... Any other switch. Not super useful. Yet.
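
That said, every frame now traverses br0, so we can at least confirm the bridge works and eavesdrop while we're here. (A quick sketch, assuming The Thing lives at 192.168.1.50, the address that shows up in the NAT rules below):

 # Confirm that both Ethernet ports joined the bridge  
 bridge link show  
   
 # Watch The Thing's traffic as it transits the bridge  
 tcpdump -ni br0 host 192.168.1.50  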

We'll need some NAT rules:

 # The first rule is an ebtables (like iptables, but for bridged/L2 traffic)  
 # policy that rewrites frames containing northbound traffic. It's looking for:  
 #  frames arriving on eth0 (-i eth0)  
 #  frames containing IPv4 traffic (-p IPv4)  
 #  frames sourced from The Thing (-s <mac-of-The-Thing>)  
 #  frames containing packets destined for the server (--ip-destination <server-ip>)  
 #  
 # Frames matching all of those requirements get their header rewritten  
 # (--to-destination <mac-of-br0>) for delivery to the local IP subsystem  
 # (this box) rather than to the gateway router.  

 ebtables -t nat -A PREROUTING -i eth0 -p IPv4 -s <mac-of-The-Thing> --ip-destination <server-ip> -j dnat --to-destination <mac-of-br0>  

 # The second rule is an iptables rule that rewrites northbound packets.  
 # It's looking for:  
 #  packets arriving on the br0 IP interface (due to previous rule's dMAC rewrite)  
 #  packets destined for the server's IP address  
 #  
 # Packets matching those requirements get their header rewritten so that they're  
 # directed to a local network service, rather than the intended server on the  
 # Internet.  

 iptables -t nat -A PREROUTING -i br0 -d <server-ip> -j DNAT --to-destination 192.168.1.2  

 # The final rule modifies southbound traffic created by the MITM system so that  
 # it appears to have come from the real server on the Internet. It's an iptables  
 # rule looking for:  
 #  packets leaving the br0 IP interface  
 #  packets destined for The Thing (192.168.1.50)  
 #  
 # Packets matching those requirements get their header rewritten so that they  
 # appear to have been created by the real server on the Internet.  

 iptables -t nat -A POSTROUTING -o br0 -d 192.168.1.50 -j SNAT --to-source <server-ip>  


With the rules installed, the traffic situation looks like this:

 
NAT fools The Thing into talking (and listening) to the MITM


At this point, the NAT rules have broken the application: when the client tries to establish a connection to the server, it winds up talking to whatever's listening on the Linux box. Probably there's no listener there, so the client's [SYN] segment sent toward the server (and intercepted by the Linux box) provokes the MITM to respond with a [RST] segment.

We need to create a listener to accept connections from The Thing, a client to connect to the real server, and then stitch these two processes together to get the application running again.

If the client is sending HTTP traffic, we could use a proxy like burp/zap/fiddler to do that job. But what if it's not HTTP traffic? Or if we compulsively do things the hard way? The simplest proxy we can build here consists of back-to-back instances of netcat. For example, if the client is connecting to the server on TCP/443 we'd do:

 # Create a pipe for southbound data:  
   
 mkfifo /tmp/southbound-pipe  
   
 # Start two nc instances to perform MITM byte stream relay  
 # between The Thing and the real server:  
   
 nc -l 443 < /tmp/southbound-pipe | nc <server-ip> 443 > /tmp/southbound-pipe  

Here's how that CLI incantation works:

netcat and pipes and redirection, oh my!

So, rather than acting as an Ethernet bridge (layer 2), our MITM is now operating on the byte stream, somewhere around layer 5 (don't think too hard about this).

Can the client or server detect these shenanigans?
  • Both sides will believe they're talking to the usual IP address (the client because of NAT trickery; the server because all connections appear to come from the gateway router).
  • The client will see impossibly fast TCP round-trip times, because the MITM is physically close. This will likely not be noticed.
  • Both sides will likely experience different incoming IP TTL values. Again, not likely noticed.
  • Finally, at the TCP layer it is likely that our MITM box will present different TCP behavior and options than the original client and server, but these will likely be interoperable and go unnoticed except via pcap analysis.

So, about that byte stream... What's in it anyway? Here's how to see inside:

 # save northbound and southbound data to files  
 nc -l 443 < /tmp/southbound-pipe | tee /tmp/client-data | nc <server-ip> 443 | tee /tmp/server-data > /tmp/southbound-pipe  
   
 # ...or...  
   
 # print northbound and southbound data to the terminal  
 nc -l 443 < /tmp/southbound-pipe | tee /dev/fd/2 | nc <server-ip> 443 | tee /dev/fd/2 > /tmp/southbound-pipe  


If the service is running on TCP/443 as in this example, we're probably going to be disappointed when we look at the intercepted data. Even though we've MITM'ed the TCP bytestream, the TLS session riding on it remains intact, so we're MITMing and relaying an encrypted byte stream.
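
You can confirm as much by peeking at the first bytes of the captured streams: a TLS session opens with a handshake record, which begins 0x16 0x03. (Reusing the capture file from the tee example above):

 # Expect something like 16 03 01 ... at the start: a TLS handshake  
 # record (the ClientHello), rather than anything readable.  
 head -c 8 /tmp/client-data | hexdump -C  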

We need to go deeper. If we have a certificate (and private key) trusted by the client device, we can do that by using openssl s_client and openssl s_server in place of nc (the -quiet flags keep each tool's session chatter out of the relayed byte stream):


 mkfifo /tmp/cleartext-pipe  
 openssl s_server -quiet -cert cert.pem -key key.pem -port 443 < /tmp/cleartext-pipe | tee /tmp/client-data | openssl s_client -quiet -connect <server-ip>:443 | tee /tmp/server-data > /tmp/cleartext-pipe  

Will the client or server notice now? Because we're terminating TLS, there's a whole new layer (keys, certificates, ciphers, etc...) where these shenanigans can be noticed and/or lead to problems.

Do you need to physically MITM things like this? Probably not. Launching an ARP poisoning attack would likely have led to the same result, but this approach is a little more reliable and definitely more interesting.
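
For comparison, the ARP poisoning version might look something like this sketch (dsniff's arpspoof, using the addresses from the diagrams above; you'd also need to enable IP forwarding so the diverted traffic still reaches the gateway):

 # Enable forwarding, then convince The Thing (192.168.1.50) that our  
 # MAC address belongs to the gateway (192.168.1.1)  
 sysctl -w net.ipv4.ip_forward=1  
 arpspoof -i eth0 -t 192.168.1.50 192.168.1.1  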


1 Subject to performance limitations of your Linux bridge, I guess. Don't go trying this on 100Gb/s links :)

Friday, February 15, 2019

SSH to all of the serial ports

This is just a quick-and-dirty script for logging into every serial port on an Opengear box, one in each tab of the macOS Terminal.

Used it just recently because I couldn't remember where a device console was connected.

Don't change mouse focus while it's running: It'll wind up dumping keystrokes into the wrong window.

for i in $(seq 48)
do
  # Opengear console servers expose serial port N for direct SSH access
  # on TCP port 3000+N
  port=$(expr 3000 + $i)
  sshcmd="ssh -p $port terminalserver"
  # Open a new Terminal tab (Cmd-T), type the ssh command into it, then
  # press Return (key code 36)
  osascript \
    -e 'tell application "Terminal" to activate' \
    -e 'tell application "System Events" to tell process "Terminal" to keystroke "t" using command down' \
    -e "tell application \"System Events\" to tell process \"Terminal\" to keystroke \"$sshcmd\"" \
    -e "tell application \"System Events\" to tell process \"Terminal\" to key code 36"
done


Leaving it here in case somebody (probably me) finds it useful in the future.

Monday, January 21, 2019

Cannot connect the virtual device ... because no corresponding device is available on the host

Recently I've been building some VM templates on my MacBook and launching instances of them in VMware. Each time it produced the following error:

Cannot connect the virtual device sata0:1 because no corresponding device is available on the host.


Either button in the accompanying Yes/No dialog caused the guest to boot up. The "No" button ensured that it booted without error on subsequent reboots, while choosing "Yes" allowed me to enjoy the error with each power-on of the guest.

Sata0 is, of course, a (virtual) disk controller, and device 1 is an optical drive. I knew that much, but the exact meaning of the error wasn't clear to me, and googling didn't lead to a great explanation.

I wasn't expecting there to be a "corresponding device ... available on the host" because the host has neither a SATA controller nor an optical drive, and no such hardware should be required for my use case. So what did the error mean?

It turns out that I was producing the template (a .ova file) with the optical drive "connected" (VMware term) to ... something. The issue isn't actually the lack of a device on the host; it's that there's no ISO file "inserted" into the virtual drive.

Here's the relevant stanza from the template's .ovf file:

      <Item>
        <rasd:AddressOnParent>1</rasd:AddressOnParent>
        <rasd:AutomaticAllocation>true</rasd:AutomaticAllocation>
        <rasd:Caption>cdrom1</rasd:Caption>
        <rasd:Description>CD-ROM Drive</rasd:Description>
        <rasd:ElementName>cdrom1</rasd:ElementName>
        <rasd:InstanceID>7</rasd:InstanceID>
        <rasd:Parent>5</rasd:Parent>
        <rasd:ResourceType>15</rasd:ResourceType>
      </Item>

The problem here is the AutomaticAllocation element. AutomaticAllocation is VMware's "connect at boot" feature, which really boils down to "insert a disk into this virtual drive". Without specifying a backend device and/or file, it can't be connected.

Deleting the element, or setting it to "false" fixes the problem.
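
In other words, the repaired stanza carries:

        <rasd:AutomaticAllocation>false</rasd:AutomaticAllocation>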

For reference, here's what the rest of that stanza means:
  • AddressOnParent: 1 <- This makes it sata0:1 instead of sata0:0
  • InstanceID: 7 <- This device is the 8th hardware device defined for the guest
  • Parent: 5 <- This device is a child of the hardware device at instance ID #5 (the sata0 controller)
  • ResourceType: 15 <- This is an optical drive