A Pi cluster parts list

Previously, I talked about some limitations of building RPi clusters generally.

This time, I’m gonna cut to the chase and present my current, partially complete build.

My basic design unit for the build was a W6.5” x L4.5” block consisting of six Pis bolted together using 11mm M2.5 standoffs and a USB power bar. The underlying hardware costs almost nothing and is available at your hardware store of choice, and there are plenty of acrylic sled variations to be had which fit into such stacks.

The Pis themselves are all model 3 B+s, sourced from wherever you can find cheap Pis and SD cards. In price shopping I found that Amazon’s listings for Pis were more expensive than those from other resellers. I wound up going with CanaKit for mine.

For the case, I used a Nanuk 915. Nanuk is a lower-pricepoint Pelican alternative, and the 915’s internal dimensions (L13.8” x W9.3” x H6.2”) happen to fit two of these 6.5” x 4.5” blocks side-by-side with a little room to spare.

For power, I’m using a single consolidated and switched 12v rail. I fabricated it myself using some basic terminal hardware (a switch and barrel jack socket embedded in lexan, fronting a terminal block), but there are really no surprises there. The transformer I’m using is a 12V @ 20A / 240W monster, spec’d to potentially run ten Pis, the switch, and the display at once.
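As a sanity check on the transformer sizing, here’s a quick back-of-envelope budget. The per-device draws here are my own assumptions (the Pi figure comes from the 2.5A @ 5V recommended supply; the switch and display numbers are rough guesses), not measurements from the build:

```python
# Rough 12v rail power budget for the case. All draws below are
# assumed, not measured.
PI_PEAK_W = 12.5   # assumed peak per Pi 3 B+ (2.5A @ 5V, ignoring conversion losses)
SWITCH_W = 10      # rough guess for the network switch
DISPLAY_W = 20     # rough guess for the display
SUPPLY_W = 240     # the 12V @ 20A transformer

def headroom(n_pis: int) -> float:
    """Watts left on the 240W supply with n_pis running flat out."""
    return SUPPLY_W - (n_pis * PI_PEAK_W + SWITCH_W + DISPLAY_W)

print(headroom(10))  # ten Pis plus switch and display -> 85.0W to spare
```

Under those assumptions, even a fully populated case of ten Pis plus peripherals stays comfortably under the supply’s rating.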

For USB hubs to power the Pis, I’m using a Sabrent 60W USB Hub, which runs off of a 12v supply. This is important, because it let me standardize the entire case on a single 12v source rail shared between the current USB hub (and a future second one), the networking switch, and the display.

I will note that it’s important to use short USB cables so that the hub packs well to the “vine” of 5 Pis. I managed to find some 6” micro USB cables which worked fine, but I think you could get that down to about 4”. Or just give up on the USB hub entirely and go with a backplane, which is what I’d probably do were I to build all this again.
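For a sense of why a hub serves roughly five Pis, here’s the current math. These are rated and recommended figures, not measurements: the hub’s 60W rating and the 2.5A-per-Pi recommended supply. At the full recommended current the hub only strictly covers four Pis, so five per hub is banking on real-world draw sitting well below 2.5A, which it generally does:

```python
# USB hub current budget, using rated/recommended figures only.
HUB_W = 60.0            # Sabrent hub's total power rating
USB_V = 5.0             # USB port voltage
PI_RECOMMENDED_A = 2.5  # recommended supply current per Pi 3 B+

hub_amps = HUB_W / USB_V                    # total amps across all ports
pis_at_full_spec = hub_amps // PI_RECOMMENDED_A

print(hub_amps)          # 12.0 amps total
print(pis_at_full_spec)  # 4.0 Pis if every one drew the full 2.5A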

For the switch, I went with a NETGEAR 16-Port managed switch (GS116E). I specifically chose the cheapest switch I could find with support for VLAN trunking, and which again ran off a 12v source, so I could get the entire case down to a single transformer.

The real problem with choosing the switch is I wasn’t able to find one shallow enough to fit in the 6.2” depth of the 915 case, let alone when a standard barrel jack is hanging out the back. My “solution” to this was to shell the switch and run it as a bare board, replacing the barrel jack with a soldered pigtail to the rail.

Because I’m working with about a quarter of an inch to spare in this case, all the network cables were hand-made and hand-trunked to the switch.

The last addition to the case was honestly the cheapest screen I could find. The downside to this particular display turned out to be that its wiring is right-hand sided (I would have preferred left) and, critically, that all its I/O buttons are rear-facing. This meant not only did I have to drill a VESA mount into the case, but I also had to manufacture a stand-off plate so the buttons weren’t permanently depressed, and put pass-through holes in the case, to which I added some wire snips as button extensions so the controls remained usable with the display mounted.

All told, my cost on the build so far is about $600. If I add the other five Pis, that goes up by about $200. Considering I previously wrote about spending about $800 apiece for three AMD boxes, having a whole portable twelve-host network for the price of a single server isn’t shabby at all.
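Tallying that up, using only the approximate figures above:

```python
# Ballpark cost comparison; every number here is approximate.
spend_so_far = 600    # the build as it stands
remaining_pis = 200   # roughly, for the other five Pis
amd_box = 800         # per-box cost from the earlier AMD build
amd_boxes = 3

cluster_total = spend_so_far + remaining_pis
print(cluster_total)        # 800 -- one AMD box's worth
print(amd_box * amd_boxes)  # 2400 for the three-box setup
```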

Really the only unsolved problem with this case is cooling. Finding low-profile 12v fans has so far proved troublesome, since fans in this size class are mostly built for the 3v or 5v headers on small boards, and the 915’s mere 6.2” of depth doesn’t leave a ton of space for fans underneath the Pi stack.

I will also note that, installed in a dense bolted unit, it’s difficult to get individual Pis in and out, which is an operation you need rather often when setting the whole thing up. Were I to do this again, I’d definitely consider a sled-based design in which removing single Pis was easy work.

But this is what I’ve got and I rather like it! Fingers crossed we get some Intel (compatible) hardware that fits the Pi form factor so I can run a hardware mix one of these days.

Thanks to @krainboltgreene for reminding me that I never actually wrote any of this up.