The uses of this particular rack I'll cover in future entries - this is about how I made the rack itself, with help from friends (Steve McIntyre & Andy Simpkins). A common theme is making allowances for using dev kit boards - ready-to-rack ARM servers are not here yet. My aim was to have 4, possibly 6, ARM dev kit boards running services from home, so there was no need for a standard 42U rack; a 9U rack is enough. Hence:
To run various other services around the house (firewall, file server etc.), a microserver was also necessary:
I chose to mount that on a bookcase beneath the wall-mounted rack, as it kept all the cables at the bottom of the rack itself. The microserver needed a second gigabit network card fitted to cope with being a firewall as well. If you do the same, ensure you find a suitable card with a low-profile bracket - some are described as low profile but do not package the low-profile bracket, only a low-profile card and a full-height bracket.
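With two network cards fitted, the firewall role comes down to a NAT ruleset on the microserver. The following is only an illustrative sketch in iptables-restore format - the interface names (eth0 facing the wall socket, eth1 facing the rack switches) and the policies are assumptions to adapt, not my exact configuration:

```
# /etc/iptables/rules.v4 - illustrative only; eth0 = WAN, eth1 = rack LAN
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# Masquerade rack traffic heading out through the WAN interface
-A POSTROUTING -o eth0 -j MASQUERADE
COMMIT
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Accept replies, anything from the rack side, and loopback
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -i lo -j ACCEPT
# Forward rack traffic out to the WAN, and the replies back
-A FORWARD -i eth1 -o eth0 -j ACCEPT
-A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```

Load it with `iptables-restore < /etc/iptables/rules.v4` and remember to enable `net.ipv4.ip_forward` in sysctl.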
Intel EXPI9301CTBLK PRO1000 network card - note the low-profile bracket in the pack.
The first of the dev kit requirements comes from the lack of boards which can be racked directly, so shelves are going to be needed - along with something to stop the boards wandering across the shelf when the cables are adjusted; velcro pads in my case.
The second requirement is that dev kit boards are notorious for not rebooting cleanly. This is nothing to do with the image being run: the board just doesn't cut power, or doesn't come back after cutting power. Sometimes this is down to early revisions of the board; sometimes the board pulls enough power through the USB serial converter to remain alive. Whatever the cause, it won't reboot without user involvement, so a remotely controllable PDU becomes necessary. New units tend to be expensive and/or only suitable for larger racks, but I managed to find an older 8-port APC unit, something like:
(Don't be surprised if that becomes a dead link - search for APC Smart Slot Master Switch Power Distribution Unit).
Talking of power, I'm looking to use SATA drives with these boards, and the boards themselves come with a variety of wall-wart plugs or USB cables, so a number of IEC sockets are needed - not the usual plugs:
or, for devices which genuinely need 2A to boot (use the 1A for attached SATA or leave empty):
Check the power output rating of the USB plugs used to connect to the mains as well - many are 1A or less. Keep the right plug for the right board.
Power is also going to be a problem if, like me, you want to run SATA drives off boards which support SATA. The lack of a standard case means that ATX power is awkward, so check out some cheap SATA enclosures to get a SATA connector with USB power.
I am using these enclosures (prices seem to have risen since I obtained them):
Along with these:
eSATA to SATA Serial External Shielded Cable 1m, because the i.MX53 boards have SATA connectors but the enclosure exports eSATA. Whilst this might seem awkward, the merit of having both eSATA and simple USB power on one enclosure is not to be underestimated. (Avoid the stiffer black cables - space will get tight inside the rack.)
Naturally, a 2.5 inch SATA drive is going to be needed for each enclosure; I'm using HDDs but SSDs are also an option.
Also, consider simple 2 or 3 way fused adaptors so that the board and the SATA drive can be powered from a single PDU port; this makes rebooting much simpler if the board uses a power supply with an integrated plug rather than being powered over USB.
Now to the networking (2 x 8 port was cheaper than 1 x 16):
Don't forget the Cat5 cables too - you'll want lots of short cables (1m or shorter) inside the rack and a few longer ones going to the microserver and the wall socket. I used 8 x 1m.
Naturally, on the floor below your rack you are going to put a UPS, so the PDU then needs to draw from the UPS via IEC plugs instead of standard mains sockets. I decided to use a 1m 6-gang extension cable with an IEC plug - it was the only bit of wiring I had to do, and even those are available ready-made if you want to do it that way.
Depending on the board, you may need your own serial-to-USB converters, and you'll certainly need some powered USB hubs.
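With several converters plugged into the microserver, the /dev/ttyUSB* numbering can change every time the hubs re-enumerate. One way to keep the right console on the right board is a udev rule matching each converter's serial number - a sketch only, with placeholder serial numbers and names (read the real attributes with `udevadm info -a -n /dev/ttyUSB0`):

```
# /etc/udev/rules.d/99-rack-serial.rules - example only
# Match each USB serial converter by its unique serial number so each
# board's console is always at the same path, however the hubs enumerate.
SUBSYSTEM=="tty", ATTRS{serial}=="FTABC123", SYMLINK+="rack/board-01"
SUBSYSTEM=="tty", ATTRS{serial}=="FTABC124", SYMLINK+="rack/board-02"
```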
I'm using a wall-mounted 9U rack, so I also needed a masonry drill and four heavy-duty M8 masonry bolts. The rack comes with a mounting plate which needs to be bolted to the wall, but nothing else is included. This step is much easier with someone else to take the weight of the rack as it is guided into the brackets on the mounting plate - the bracket may need a little persuasion so that the bolt heads do not get in the way during mounting. Once mounted, the holes in the back of the rack allow for plenty of room; it's just getting to that point.
The rack has side panels which unlatch easily, making maintenance straightforward. The glass door can easily be reversed to open from the opposite side. However, the key in the glass door is largely useless: the expectation is that the units in the rack are attached at the front, but dev boards on shelves are not going to be 'protected' by a key in the front door. The key therefore ends up being little more than a handle for the glass door.
OK. If you've got this far, it's a case of putting things together:
Yes, you really do want one. Fine, do without the premium one, but the economy one will save you a lot of (physical) pain.
At this stage, it becomes clear that the normal 19 inch server rack shelves don't leave a lot of room at the back of the rack for the cables - and there are a lot of cables.
Each board has power, a USB serial connection and network; each SATA enclosure has power too. The PDU has a power lead of its own, and the network switches need power and an uplink cable each.
I positioned the supports in the rack as far forward as possible and attached the shelves to give enough room for the PDU on the base of the rack, the network switches resting on top and the extension bar (with the heavier, stiffer cables) at the back. (I need to bring my shelves another one or two positions further forward: there is barely enough room for one cable between the shelf and the back of the rack, and that makes moving boards around harder than it needs to be.)
The PDU defaults to enabling all ports at startup, so connect to it over telnet and turn off the ports before connecting anything, then set its network interface to match the rest of the lab. (I'm using a 10. range and the PDU was originally set to use 192.168.1.)
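Once boards start wedging, typing the same telnet session gets old quickly, so it can be worth scripting the outlet switching. The sketch below is an assumption-heavy illustration, not a tested driver: the MasterSwitch telnet interface is menu-driven and varies by firmware, so the login ("apc"/"apc" is the usual factory default) and the menu choices in `pdu_commands` will need adapting to your unit:

```python
import socket

def pdu_commands(outlet, action):
    """Build the (assumed) line-by-line telnet responses to switch one outlet.

    The "apc" login is the common factory default; the numeric menu
    choices are placeholders - walk your own unit's menus to find them.
    """
    if action not in ("on", "off"):
        raise ValueError("action must be 'on' or 'off'")
    if not 1 <= outlet <= 8:
        raise ValueError("this unit has 8 outlets")
    return ["apc", "apc", str(outlet), "1" if action == "on" else "2", "yes"]

def switch_outlet(host, outlet, action):
    # Drive the telnet session: wait for each prompt, then send the next line.
    with socket.create_connection((host, 23), timeout=5) as s:
        for line in pdu_commands(outlet, action):
            s.recv(1024)                      # consume the prompt
            s.sendall(line.encode() + b"\r\n")
```

Rebooting a wedged board then becomes `switch_outlet("10.0.0.10", 3, "off")` followed by an `"on"` - again, the address and outlet number are examples.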
That's about it as far as the hardware setup is concerned. It's just time to label up each board, note down which PDU port feeds which device and which serial-to-USB converter appears as which device on the microserver, and check the power. My initial problem with one board was traced to its inability to power SATA off the on-board USB port, even when using the provided 2A power supply. That meant adding a standard mains adaptor to feed both the SATA power and the board power off the one PDU port - there is little point powering off the board but not the SATA drive, or vice versa.
I haven't totalled up the expenditure, but the biggest expenses were the microserver and the wall-mounted rack. Don't underestimate how much it will cost to buy six IEC plugs and various USB serial converters, or how much you may spend on peripheral items.
There is quite a lot of room on the two shelves for more boards; what will limit the deployment in this rack is space for the cables, especially power. The shorter the power cables, the easier it will be to maintain the devices in the rack. It might be worth looking at a 12U rack, if only to give plenty of space for cables.
Once I've got the software working, I'll describe what this rack will be doing ... it's got something to do with Debian, ARM, Linux and tests but you've probably already guessed that much ...