We had to jump through some hoops to extend IPMI's single-system view of the world into our multi-node architecture. Everything here works reasonably well, but you have to jump through all of the hoops that the product engineers lined up for us, and that's exactly why it's confusing.
The build of ipmitool that ships with OS X (2.5b1) doesn't support the Moonshot's double-bridged topology, so I'm using the one that ships with macports (1.8.12). To check whether your version of ipmitool is compatible, run ipmitool -h and look to see whether it supports both the single-bridge (-b, -t) and double-bridge (-B, -T) command line options. If it does, then it's probably okay.
Using IPMI over the network with a regular rack server is pretty straightforward. You specify the device by name or IP, the user credentials and the command/query you want to run. That's about it. Such a command might look like this:
ipmitool -H <IPMI_IP> -U <user> -P <password> -I lanplus chassis identify force
The command above turns on the beacon LED on a server. Most of the options here are obvious. The -I lanplus specifies that we intend to speak over the LAN to a remote host, rather than use IPMI features that may be accessible from within the running OS on the machine. I won't use the -P <password> option in subsequent examples; instead I use -E, which tells ipmitool to read the password from the IPMI_PASSWORD environment variable.
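Since every command in this post repeats the same connection flags, it can help to wrap them once. This is just a sketch: IPMI_HOST is a hypothetical variable holding the chassis manager's address, and IPMI_PASSWORD must be exported for -E to find it.

```shell
# A tiny wrapper so the connection flags don't need repeating on every
# command. IPMI_HOST is a hypothetical variable holding the chassis
# manager's address; -E makes ipmitool read the password from the
# IPMI_PASSWORD environment variable instead of the command line.
moonshot() {
  ipmitool -H "$IPMI_HOST" -U Administrator -E -I lanplus "$@"
}
```

With IPMI_HOST and IPMI_PASSWORD exported, the beacon example above becomes `moonshot chassis identify force`.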
Moonshot is quite a bit more complicated than a typical rack mount server. Here's a diagram of the topology from the HP iLO Chassis Management IPMI User Guide:
|Moonshot IPMI Topology|
First, let's get an inventory from the perspective of the Zone MC:
$ ipmitool -H <IPMI_IP> -EU Administrator -I lanplus sdr list all
ZoMC             | Static MC @ 20h    | ok
254              | Log FRU @FEh f0.60 | ok
IPMB0 Phys Link  | 0x00               | ok
ChasMgmtCtlr1    | Static MC @ 44h    | ok
PsMgmtCtlr1      | Dynamic MC @ 52h   | ok
PsMgmtCtlr2     | Dynamic MC @ 54h   | ok
PsMgmtCtlr3     | Dynamic MC @ 56h   | ok
PsMgmtCtlr4     | Dynamic MC @ 58h   | ok
CaMC            | Static MC @ 82h    | ok
CaMC            | Static MC @ 84h    | ok
CaMC            | Static MC @ 86h    | ok
<snip>
Switch MC       | Static MC @ 68h    | ok
Switch MC       | Static MC @ 6Ah    | ok
From this, we can see that the Zone MC, Chassis MC, and first power supply MC are all at the addresses we'd expect based on having reviewed HP's drawing. Additionally, we can see the addresses of the remaining power supplies, the switches, and the cartridges (I snipped the output after the first three cartridges).
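The cartridge controllers in that listing land at 0x82, 0x84, 0x86, stepping by two per slot. If that pattern holds across the chassis (an observed pattern on my part, not something I've seen documented), a 1-based slot number maps to a CaMC address like this:

```shell
# Map a 1-based cartridge slot to its cartridge controller (CaMC) address,
# assuming slots step by two starting at 0x82. This is an inference from
# the sdr output above, not a documented guarantee.
cart_addr() {
  printf '0x%02X\n' $(( 0x80 + 2 * $1 ))
}
```

For example, `cart_addr 1` prints 0x82, matching the first CaMC in the inventory.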
You can learn more about each of those discovered devices with:
ipmitool -H <IPMI_IP> -EU Administrator -I lanplus fru print
I've not yet found a way to map the cartridge and switch MCs to physical slot numbers other than by flipping the beacon LEDs on and off, or by inspecting serial numbers. The picmg addrinfo command looks like it should do the job, but I haven't managed to relate its output to physical cartridge and switch slots.
Okay, there's one more thing to note in the table above: the IPMB0 bus to which all of our downstream controllers are attached is channel 0x00. We need to know the channel number because these controllers potentially have many interfaces; when sending bridged commands, we must supply both the channel number and the target address.
So, now we've got everything we need in order to flip on the beacon LED at cartridge #1:
ipmitool -H <IPMI_IP> -EU Administrator -I lanplus -b 0 -t 0x82 chassis identify force
Yes, the command is chassis identify, but it doesn't illuminate the chassis LED. That's because the command is executing within the context of a cartridge controller. The command above should light the LED on cartridge #1.
Cool, so we're now talking through the Zone MC to the individual cartridges, switches and power supplies! But what about the servers? Moonshot supports multiple servers per cartridge, so we're still one hop away. That's why we need double bridging.
Double bridged commands work the same as single bridged, except that we have to specify the channel number and target address at each of two layers. The first hop is specified with -B and -T, second hop with -b and -t.
First, we need to get the layout of a cartridge controller. We'll run the sdr list all command again, but bridge it through to the cartridge in slot 1:
$ ipmitool -H <IPMI_IP> -EU Administrator -I lanplus -b 0 -t 0x82 sdr list all
01-Front Ambient | 27 degrees C       | ok
02-CPU           | 0 degrees C        | ok
03-DIMM 1        | 26 degrees C       | ok
04-DIMM 2       | 26 degrees C       | ok
05-DIMM 3       | 28 degrees C       | ok
06-DIMM 4       | 27 degrees C       | ok
07-HDD Zone     | 27 degrees C       | ok
08-Top Exhaust  | 26 degrees C       | ok
09-CPU Exhaust  | 27 degrees C       | ok
CaMC            | Static MC @ 82h    | ok
SnMC            | Static MC @ 72h    | ok
SnMC 1          | Log FRU @01h c1.62 | ok
This is a single-node cartridge (m300 cartridges are all I've got to play with), but, like the quad-node cartridges, it still requires a bridging hop to reach its server. The SnMC at 0x72 refers to the lone server on this cartridge; I assume that multi-node cartridges would list several SnMC resources here.
Unfortunately, when the sdr list all command is run against the cartridge controller, it doesn't reveal anything about the downstream transit channel like it did when we ran it against the Zone MC. The channel number we need for the second bridge hop is 7. It's documented in chapter 3 of the HP iLO Chassis Management IPMI User Guide.
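With both hops known, the double-bridge flag layout can be wrapped in a small helper. A sketch, using the channel numbers established above (0 for the Zone MC hop, 7 for the cartridge hop); IPMI_HOST is a hypothetical variable for the chassis manager's address:

```shell
# Issue a double-bridged command to a node: -B/-T cross from the Zone MC
# to the cartridge controller (channel 0), and -b/-t cross from the
# cartridge controller to the node (channel 7).
node_cmd() {  # node_cmd <cartridge_addr> <node_addr> <command...>
  local cart="$1" node="$2"
  shift 2
  ipmitool -H "$IPMI_HOST" -U Administrator -E -I lanplus \
    -B 0 -T "$cart" -b 7 -t "$node" "$@"
}
```

For example, `node_cmd 0x82 0x72 chassis power status` asks the node on cartridge #1 for its power state.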
So, putting this all together, we'll set node 1 on cartridge 1 to boot from its internal HDD, and then set it to boot just once via PXE:
$ ipmitool -H <IPMI_IP> -EU Administrator -B 0 -T 0x82 -b 7 -t 0x72 -I lanplus chassis bootdev disk options=persistent
$ ipmitool -H <IPMI_IP> -EU Administrator -B 0 -T 0x82 -b 7 -t 0x72 -I lanplus chassis bootdev pxe
Some Other Useful Commands
Zone MC commands:
- lan print
- sel list
- chassis status
- chassis identify (lights the LED for 15 seconds)
- chassis identify <duration> (0 turns off the LED)
- chassis identify force (lights the LED permanently)
Single-bridged cartridge and switch commands:
- chassis status
- chassis identify (all variants above)
Double-bridged node commands:
- chassis power status
- chassis power on
- chassis power off
- chassis power cycle (only works when node power is on)
- sol activate (connects you to the node console via Serial-Over-LAN)
- sol deactivate (kills an active sol session)
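Commands like these are easy to loop across cartridges. A sketch that checks power state on the first three nodes, assuming the 0x82/0x84/0x86 CaMC addressing from the inventory earlier and a single node at 0x72 per cartridge (true for my m300s; multi-node cartridges would differ), with IPMI_HOST again a hypothetical variable:

```shell
# Ask each of the first three cartridges' nodes for its power state.
# Assumes CaMC addresses step by two from 0x82 and a single node per
# cartridge at 0x72 (m300-style cartridges).
all_power_status() {
  local slot addr
  for slot in 1 2 3; do
    addr=$(printf '0x%02X' $(( 0x80 + 2 * slot )))
    ipmitool -H "$IPMI_HOST" -U Administrator -E -I lanplus \
      -B 0 -T "$addr" -b 7 -t 0x72 chassis power status
  done
}
```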
Gotchas
I've found that the web interface doesn't indicate beacon LED status when IPMI sets it to expire (the default behavior).
Attempts to use the Virtual Serial Port from the iLO command-line fail when an IPMI SOL session is active. The iLO CLI prompts you to "acquire" the session, but this fails too.
Setting node boot and power options too quickly (one after the other) seems to cause them to fail.
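Since those failures look timing-related, one workaround is to space consecutive commands to the same node with a pause. A sketch, where the 5-second delay is an arbitrary cushion rather than a documented minimum interval, and IPMI_HOST is a hypothetical variable:

```shell
# Set a node to PXE-boot on its next boot, pause, then power-cycle it.
# The sleep guards against the back-to-back failure described above;
# 5 seconds is a guess, not a documented minimum.
pxe_once_and_cycle() {  # pxe_once_and_cycle <cartridge_addr> <node_addr>
  ipmitool -H "$IPMI_HOST" -U Administrator -E -I lanplus \
    -B 0 -T "$1" -b 7 -t "$2" chassis bootdev pxe
  sleep 5
  ipmitool -H "$IPMI_HOST" -U Administrator -E -I lanplus \
    -B 0 -T "$1" -b 7 -t "$2" chassis power cycle
}
```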
Node boot order settings configured via IPMI while node power is off work, but the iLO command line doesn't recognize that they've happened until the node is powered on.
Version 1.40 of the Moonshot chassis manager firmware introduced support for Operator-class users who are restricted to viewing and manipulating only a subset of cartridges. This has been handy in a development environment, but there are a couple of gotchas to using IPMI capabilities as an Operator user.
The first gotcha is that the user needs to explicitly declare the intended privilege level by passing -L OPERATOR on the ipmitool command line. I'm not clear on why the privilege level can't be inferred at the chassis manager by looking at the passed credentials, but apparently it cannot.
The second gotcha: By default, the SOL capability requires ADMINISTRATOR class privilege to operate. You can see this by sending the sol info command via ipmitool as an ADMINISTRATOR class user. This requirement seems odd to me: OPERATORs are allowed to interact with the virtual serial port through the SSH interface without any additional configuration.
It is possible to allow OPERATOR users to use the IPMI SOL capability by changing the required privilege level. Do that by sending sol set privilege-level operator via ipmitool with ADMINISTRATOR credentials.
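Combining the two gotchas into one sketch: an Administrator lowers the SOL privilege requirement once, after which an Operator can open the console by declaring -L OPERATOR. I'm assuming the same double-bridged addressing as the earlier examples; IPMI_HOST and the user name operator1 are hypothetical.

```shell
# One-time setup, run with Administrator credentials: allow OPERATOR-level
# users to use SOL on this node.
enable_operator_sol() {
  ipmitool -H "$IPMI_HOST" -U Administrator -E -I lanplus \
    -B 0 -T 0x82 -b 7 -t 0x72 sol set privilege-level operator
}

# Thereafter, an Operator-class user (operator1 is hypothetical) must
# still declare the privilege level explicitly with -L OPERATOR.
operator_console() {
  ipmitool -H "$IPMI_HOST" -U operator1 -E -L OPERATOR -I lanplus \
    -B 0 -T 0x82 -b 7 -t 0x72 sol activate
}
```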