Saturday, December 14, 2019

Physically man-in-the-middling an IoT device with Linux Bridge

This is a quick writeup of how I did some analysis of an IoT device (The Thing) by physically inserting a Linux box into the network path between The Thing and the network service it consumed. The approach described here involves being physically close to the target system, but it should work equally well1 anywhere there's an Ethernet link along the path between The Thing and its server.


First, the topology: The Thing is attached to an Ethernet switch and is part of the 192.168.1.0/24 subnet. We'll be physically inserting ourselves into the path of the red cable in this diagram.

Initial setup

The first step is to get a dual-homed Linux box into the path. I used an Ubuntu 18.04 machine with the following netplan configuration:

 network:
   version: 2
   renderer: networkd
   ethernets:
     eth0:
       dhcp4: no
     eth1:
       dhcp4: no
   bridges:
     br0:
       addresses: [192.168.1.2/24]
       gateway4: 192.168.1.1
       interfaces:
         - eth0
         - eth1


This configuration defines an internal software-based bridge for handling traffic between The Thing and the switch. Additionally, it creates an IP interface for the Linux host to communicate with neighbors attached to the bridge (everybody on 192.168.1.0/24). The Thing's TCP connection to the server is uninterrupted, even with the MITM box cabled inline like this:

MITM box with software bridge deployment

Now traffic to and from The Thing flows through the Linux machine. Just... Like... Any other switch. Not super useful. Yet.
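Before going further, it's worth confirming the bridge is actually forwarding. A quick sanity check, assuming the interface names from the netplan config above:

 # Apply the netplan config, then confirm both ports have joined the bridge
 netplan apply
 bridge link show

 # Watch The Thing's frames flow through the bridge
 tcpdump -i br0 -nn ether host <mac-of-The-Thing>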

We'll need some NAT rules:

 # The first rule is an ebtables (like iptables, but for bridged/L2 traffic)  
 # policy that rewrites frames containing northbound traffic. It's looking for:  
 #  frames arriving on eth0 (-i eth0)  
 #  frames containing IPv4 traffic (-p IPv4)  
 #  frames sourced from The Thing (-s <mac-of-The-Thing>)  
 #  frames containing packets destined for the server (--ip-destination <server-ip>)  
 #  
 # Frames matching all of those requirements get their destination MAC rewritten
 # (--to-destination <mac-of-br0>) for delivery to the local IP subsystem
 # (this box) rather than to the gateway router.

 ebtables -t nat -A PREROUTING -i eth0 -p IPv4 -s <mac-of-The-Thing> --ip-destination <server-ip> -j dnat --to-destination <mac-of-br0>  

 # The second rule is an iptables rule that rewrites northbound packets.
 # It's looking for:  
 #  packets arriving on the br0 IP interface (due to previous rule's dMAC rewrite)  
 #  packets destined for the server's IP address  
 #  
 # Packets matching those requirements get their header rewritten so that they're
 # directed to a local network service, rather than the intended server on the  
 # Internet.  

 iptables -t nat -A PREROUTING -i br0 -d <server-ip> -j DNAT --to-destination 192.168.1.2  

 # The final rule modifies southbound traffic created by the MITM system so that  
 # it appears to have come from the real server on the Internet. It's an iptables  
 # rule looking for:  
 #  packets leaving the br0 IP interface  
 #  packets destined for The Thing (192.168.1.50, in this example)
 #  
 # Packets matching those requirements get their header rewritten so that they  
 # appear to have been created by the real server on the Internet.  

 iptables -t nat -A POSTROUTING -o br0 -d 192.168.1.50 -j SNAT --to-source <server-ip>  
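
 # None of these commands produce output when they succeed, so it's handy
 # to list the rules along with their counters and confirm that frames and
 # packets are actually matching (a sanity check, not part of the MITM itself):

 ebtables -t nat -L --Lc
 iptables -t nat -L -n -v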


With the rules installed, the traffic situation looks like this:

 
NAT fools The Thing into talking (and listening) to the MITM


At this point, the NAT rules have broken the application because now, when the client tries to establish a connection to the server, it winds up talking to whatever's listening on the Linux box. Probably there's no listener there, so the client's [SYN] segment sent toward the server (and intercepted by the Linux box) provokes the MITM to respond with a [RST] segment.

We need to create a listener to accept connections from The Thing, a client to connect to the real server, and then stitch these two processes together to get the application running again.

If the client is sending HTTP traffic, we could use a proxy like Burp/ZAP/Fiddler to do that job. But what if it's not HTTP traffic? Or if we compulsively do things the hard way? The simplest proxy we can build here consists of back-to-back instances of netcat. For example, if the client is connecting to the server on TCP/443, we'd do:

 # Create a pipe for southbound data:  
   
 mkfifo /tmp/southbound-pipe
   
 # Start two nc instances to perform MITM byte stream relay  
 # between The Thing and the real server:  
   
 nc -l 443 < /tmp/southbound-pipe | nc <server-ip> 443 > /tmp/southbound-pipe  

Here's how that CLI incantation works:

netcat and pipes and redirection, oh my!

So, rather than acting as an Ethernet bridge (layer 2), our MITM is now operating on the byte stream, somewhere around layer 5 (don't think too hard about this).
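
One practical wrinkle: nc exits when the connection closes, so a single invocation relays exactly one TCP session. If The Thing reconnects periodically, a minimal fix is to wrap the relay in a loop:

 # Restart the relay after each connection closes, so The Thing
 # can reconnect without manual intervention
 while true; do
   nc -l 443 < /tmp/southbound-pipe | nc <server-ip> 443 > /tmp/southbound-pipe
 done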

Can the client or server detect these shenanigans?
  • Both sides will believe they're talking to the usual IP address (the client because of NAT trickery; the server because all connections appear to come from the gateway router).
  • The client will see impossibly fast TCP round-trip times, because the MITM is physically close. This will likely go unnoticed.
  • Both sides will likely see different incoming IP TTL values than before. Again, not likely noticed.
  • Finally, at the TCP layer, our MITM box will likely present different TCP behavior and options than the original client and server did, but these will likely be interoperable and go unnoticed except via pcap analysis (see the capture sketch below).
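
To eyeball those TTL and TCP-option differences yourself, capture on each side of the MITM and compare the SYN segments. A sketch (interface names from the topology above; the port, as elsewhere in this post, is just an example):

 # -v prints each packet's IP TTL; the SYN segments carry the TCP options
 tcpdump -i eth0 -nn -v 'tcp port 443'

 # ...then compare against the server-facing side:
 tcpdump -i eth1 -nn -v 'tcp port 443'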

So, about that byte stream... What's in it anyway? Here's how to see inside:

 # save northbound and southbound data to files  
 nc -l 443 < /tmp/southbound-pipe | tee /tmp/client-data | nc <server-ip> 443 | tee /tmp/server-data > /tmp/southbound-pipe  
   
 # ...or...  
   
 # print northbound and southbound data to the terminal  
 nc -l 443 < /tmp/southbound-pipe | tee /dev/fd/2 | nc <server-ip> 443 | tee /dev/fd/2 > /tmp/southbound-pipe  
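
If the payload turns out to be a binary protocol (common for IoT gadgets), the captured files are easier to read as a hex dump. For example:

 # Review captured northbound data as a hex dump
 xxd /tmp/client-data | less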


If the service is running on TCP/443 as in this example, we're probably going to be disappointed when we look at the intercepted data. Even though we've MITM'ed the TCP byte stream, the TLS session riding on it remains intact, so we're MITMing and relaying an encrypted byte stream.

We need to go deeper. If we have a certificate (and private key) trusted by the client device, we can do that by using openssl s_client and openssl s_server in place of nc:


 # Note: -quiet keeps openssl's handshake/status chatter out of the relayed byte stream
 mkfifo /tmp/cleartext-pipe
 openssl s_server -quiet -cert cert.pem -key key.pem -port 443 < /tmp/cleartext-pipe | tee /tmp/client-data | openssl s_client -quiet -connect <server-ip>:443 | tee /tmp/server-data > /tmp/cleartext-pipe
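
The cert.pem/key.pem pair above is doing all the heavy lifting. If The Thing doesn't validate the server's certificate at all (not rare in IoT land), a throwaway self-signed certificate may be enough. A sketch; the CN value is whatever name the client expects to see:

 # Generate a throwaway self-signed cert + key (only useful if The Thing
 # doesn't actually validate the server's certificate)
 openssl req -x509 -newkey rsa:2048 -nodes -days 30 -subj "/CN=<server-hostname>" -keyout key.pem -out cert.pem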

Will the client or server notice now? Because we're terminating TLS, there's a whole new layer (keys, certificates, ciphers, etc...) where these shenanigans can be noticed and/or lead to new opportunities and problems.

Do you need to physically MITM things like this? Probably not. Launching an ARP poisoning attack would likely have led to the same result, but this approach is a little more reliable and definitely more interesting.
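
For comparison, the ARP poisoning version of this (using arpspoof from the dsniff package; the interface name here is an assumption) would look something like the following, though you'd still need the iptables NAT rules and a listener to complete the picture:

 # Convince The Thing that we are the gateway (192.168.1.1)
 arpspoof -i eth0 -t 192.168.1.50 192.168.1.1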


1 Subject to performance limitations of your Linux bridge, I guess. Don't go trying this on 100Gb/s links :)
