DIY 5 Node Cluster of Raspberry Pi 3s
Building a Raspberry Pi 3 cluster for under £100 (£250 including five RPi3s).

Inspired by all the great Raspberry Pi projects out there, I thought I’d try designing and building something simple myself. The launch of the Raspberry Pi 3 in March 2016 got me enthusiastic about building my very own cluster of Pi’s (a “bramble”).

The completed cluster measures 146.4mm (w) × 151mm (h) × 216mm (d) and weighs 1.4kg (5.6 × 5.9 × 8.3", 50oz).
My design is fairly similar to that of the commercial Pico Cluster (USA) who sell a fully-finished 5 node cluster of Raspberry Pi 2s for US$499. Once you add on international shipping, import tax, VAT, etc. it becomes a rather expensive £488.30 here in the UK...
3D design in SketchUp
Originally I made my design in 3D using the free version of SketchUp, and built rough templates of the Pi’s, network switch, USB hub, sockets, etc. However, exporting this to a 2D design in SVG/DXF format, ready for laser cutting, proved very messy. I tried using Simon Beard’s SVG plugin for SketchUp, but my SketchUp file had all sorts of rubbish in it which caused problems, and the resulting SVG files had to be laboriously cleaned up by hand in Inkscape. I might have been better off designing in 2D with Inkscape, and then using SketchUp to make a 3D model to check component spacing?
2D design in Inkscape
I used the free Inkscape application for 2D design, ready for exporting to the laser cutter. Each colour is a different pass of the laser, at different power/speed levels, so the green lines are cut first to make holes for ports/screws/ventilation, pink are extra cuts to help extract delicate parts, orange is text/lines that are etched and finally blue cuts the outside of each panel.
Download files for laser cutting on two 600×400×3mm sheets (the largest I could cut):
Prototypes laser-cut in 3mm MDF
Using exactly the same design files in the laser cutter, I made a couple of prototype cases in 3mm MDF (which is cheaper than acrylic). The first version (left) fitted together and worked, but it was very tight getting all the cables routed, and there was no ventilation at all. The final design (right) is narrower and shorter, but much deeper. Each case panel has annotations etched into it to make it easier to join the panels in the correct order, and plenty of vents were added. The external ports were moved, the internal cable routing was much improved, and the case lid now has half the number of elastic clips.

Each case took two 600×400×3mm sheets, as unfortunately the design doesn’t quite fit on a single sheet.
Laser cutting was done at Access Space in Sheffield, with their 40W laser. The final case was cut in 3mm extruded acrylic (perspex), and took about 30 minutes. The design could be further optimised for a more efficient cutting order. Extruded (rather than cast) acrylic has a fine thickness tolerance, which is required for the elastic clips to work.
If you are in Sheffield, Hindleys is a good supplier for a wide range of acrylic sheets.
Case clipping system
There are various solutions for joining case panels together (glue, screws, etc.) but I was particularly impressed with this ‘elastic clip’ designed by Patrick Fenner of Deferred Procrastination in Liverpool. It enables a remarkably strong case to be made without any extra parts, which somehow seems more ‘elegant’. Full details of his clip design, including downloadable SVG files, are at: Laser-Cut Elastic-Clips.

If you want to make your own case/box using these elastic clips, I’ve written a simple web application that creates downloadable plans for any size of case in SVG format.
My prototype case originally had 8 elastic clips holding the lid on, which was completely over-the-top (as well as needing at least 3 hands to fit!). I’ve replaced 4 of the clips with a simple tab instead, which works well.
Power, temperature & cooling
At idle, the entire system of five RPi3s, network switch and USB hub sips a mere 9W, and at 100% load it still only uses 22W in total. There is the possibility of further reducing the power requirements by disabling WiFi, Bluetooth and HDMI?

The USB hub can supply up to 60W (2.4A per Pi), which is more than enough for a couple of power-hungry external devices to be plugged into the USB ports.

Using:

```
vcgencmd measure_temp
```

to measure the SoC core temperature, the cluster idles at 45°C (113°F) with passive cooling.
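To check all five nodes at once, a quick loop over SSH does the job. This is only a sketch: it assumes the Pi’s are reachable by the hostnames rpi1 to rpi5 and that key-based SSH is set up for the default pi user.

```bash
#!/bin/bash
# Print the SoC temperature of every node in the cluster.
# rpi1..rpi5 are assumed hostnames; adjust to match your own setup.
for node in rpi1 rpi2 rpi3 rpi4 rpi5; do
    echo -n "$node: "
    ssh pi@"$node" vcgencmd measure_temp
done
```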
At 100% load, using:

```
sysbench --test=cpu --cpu-max-prime=200000 --num-threads=4 run &
```

the SoC core temperatures reach 80°C (176°F) after 7 minutes. At this point the SoCs automatically throttle down their clock speed to avoid overheating. They can safely run long-term at this temperature, but you don’t get maximum performance.
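To watch the throttling happen, you can sample the core temperature and ARM clock speed while sysbench is running. A minimal sketch, run locally on one of the Pi’s:

```bash
#!/bin/bash
# Log SoC temperature and ARM core clock every 10 seconds.
# On an RPi3 the clock should read 1200 MHz until the SoC nears 80°C, then drop.
while true; do
    temp=$(vcgencmd measure_temp)
    clock=$(vcgencmd measure_clock arm)
    echo "$(date +%T)  $temp  $clock"
    sleep 10
done
```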
What about active cooling? You can optionally strap on a 92mm fan if you are going to run the cluster at high load for extended periods.
Exactly the same case design should work with Raspberry Pi 2s, which run much cooler than the model 3 and so shouldn’t need any active cooling.
Design choices
- RPi3 vs RPi2 – the RPi3 is shiny and new (and much faster), but actually for a cluster there probably isn’t much advantage: a Pi cluster isn’t built for high performance, you probably won’t use WiFi/Bluetooth on the RPi3, and an RPi2 runs cooler & uses less power.
- Horizontal vs Vertical-mount – every other cluster I’ve seen stacks the Pi’s vertically on top of each other, but mounting them horizontally on rails should give much better airflow across the SoCs (for passive cooling). I had hoped I could design something without any fans.
- Nylon nuts vs PCB spacers – running the horizontal rails through (unthreaded) nylon spacers would have been quicker than moving all the nylon nuts into place, but spacers are more expensive and I would probably have needed to cut them down to an exact size. The nuts are quite fiddly to set up (or to swap out a faulty Pi, etc.), but they do add extra rigidity to the design.
- Gigabit vs 100Mbps switch – the Pi network ports are only 100Mbps, but using a 1000Mbps switch means there is no bottleneck if all the Pi’s are simultaneously saturating their links, e.g. talking to an outside network such as a NAS (see the iperf3 sketch after this list).
- Switch 5V vs 12V – a network switch can be powered from a spare 5V USB port instead of needing a separate 12V supply.
- Eco switch vs Standard – reduces power used when network ports aren’t connected, or aren’t currently active.
- Beefy USB hub vs Standard – many USB hubs don’t provide enough current to reliably power an RPi3. Each Pi can draw up to a maximum of 2.5A, but will actually draw much less in this cluster without extra USB and GPIO accessories.
- External LAN x2 vs x1 – with 2 sockets you can chain multiple clusters together, and still have 1 socket to plug into a WAN, NAS, etc.
- Elastic case clips vs Screws – more elegant? a bit quicker to build and very slightly cheaper.
- Glue hub vs Screw – unfortunately this USB hub has no mounting holes, and I didn’t want to drill into the metal heatsink, so a few drops of superglue to attach brass standoffs seemed like the easiest solution for attaching it to a case panel.
- Hub underneath Pi’s? – not ideal, as the hub itself generates some heat that has to go somewhere. This was a compromise because I wanted short, neat USB cables.
- Pi Heatsink vs None – should help with dissipating the heat, and it is the large SoC chip that generates the most heat.
- Case Fan vs None – the design allows for either: passive cooling through the case lid vents, or strap on a 92mm fan if you are going to run the cluster at high load for extended periods.
- Right-angled HDMI vs Standard – HDMI cables are too thick to bend easily, and a standard straight connector would require a taller case to fit.
- Transparent case vs Opaque – so you can see all the Pi-goodness inside. Obvs.
- Rainbow cables vs Boring grey – just because!
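If you want to confirm the switch really isn’t a bottleneck, a simple throughput test between pairs of Pi’s works well. A rough sketch, assuming iperf3 is installed on each node (apt-get install iperf3) and the nodes are reachable as rpi1 and rpi2:

```bash
# On one node, start an iperf3 server:
iperf3 -s

# On another node, run a 30 second test against it.
# Each RPi3 should get close to its 100Mbps link speed (around 94Mbit/sec).
iperf3 -c rpi1 -t 30
```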
Building the Pi Cluster
- Remove network switch case (2 small screws)
- Remove USB hub case (tricky – it needs to be carefully prised open, there are no screws)
- Cut the C7 plug power cable to a ~35cm length of flex and solder it onto the C8 screw mount socket (it doesn’t matter which wire goes to which pin). I couldn’t source a ready-made cable with the right-angle C7 plug that I wanted
- Attach the network switch to the case base, using 4× 6mm brass spacers + bolts + nuts. This only fits one way around
- Screw the 2 external LAN ports to the inside of the case back (the panel with “AC100-240V” etched on the outside)
- Bolt the C8 socket to the case back, secured with 2 nuts
- Clip case back to case base (marked C+D)
- Plug the external LAN ports into network switch ports 1+3 (no room to use 1+2)
- Plug the Pi LAN cables into network switch ports 4–8 (be very careful if removing these – it is easy to break the tiny plastic clips on the switch ports)
- Fit 4× 6mm brass standoffs to the underside of the case shelf, using screws
- Superglue the metal top of the USB hub to the brass standoffs
- Plug the USB cables and C7 power plug into the USB hub (the power port on the hub is fragile, so support it with one hand while plugging in/out)
- Plug the USB hub into the network switch DC power
- Clip case shelf to case back (L+K)
- Attach the 5 self-adhesive heatsinks onto the SoC chip of each Pi
- Slide the 5 Pi’s onto the 4 threaded rods, using 48 nuts to secure them (this might be faster with the rod in an electric screwdriver?). Leave a 30mm space at the left end, and space each Pi 25mm apart. The LAN+USB ends of the Pi’s point towards RF+MA
- Attach the case sides to the Pi rods (EF+AB), secured with 8 metal nuts. One case side has extra vents cut for cooling the USB hub (NB: this photo shows 170mm rods, which stick out of the side of the case, rather than the correct 150mm rods)
- Route the HDMI cable through the case
- Clip case sides to case bottom (E+F, A+B)
- Screw the external dual USB socket to the case front
- Bolt the external HDMI socket to the case front, secured with 2 nuts
- Plug the 2 external USB leads into any Pi (or 2 separate Pi’s)
- Plug the LAN cables into the Pi’s
- Plug the microUSB cables into the Pi’s
- Plug the HDMI cable + right-angle adaptor into any Pi
- Clip case front to case bottom (G+H)
- Clip case front to case shelf (I+J)
- Clip case lid to case sides, front & back (M+N+O+P+Q+R+S+T)
- Attach the self-adhesive rubber feet to the underside of the base
Bill of materials
Most of these parts were sourced from individual sellers on Amazon or eBay, which of course racks up the postage charges. If there were enough demand, it would be cheaper to bulk buy the parts and have a kit with everything you need to build the cluster.
Item | Price |
---|---|
Edimax ES-5800G V3 Gigabit Ethernet Switch | £18.45 |
Coloured micro USB cables (5 from a 10 pack) | £3.19 |
Coloured Cat6 LAN cables (5 from an 8 pack) | £7.93 |
Anear 60W 6 port USB charger | £14.99 |
M3 steel hex nuts (4 from a 5 pack) | £1.10 |
M3 steel screws, 8mm (20 pack) | £1.70 |
M3 steel screws, 14mm (4 from a 5 pack) | £1.35 |
2 pin C8 screw mount socket (1 from a 6 pack) | £1.32 |
1m right-angle C7 cable (cut to length) | £1.76 |
M3 brass female standoffs, 6mm (8 from a 10 pack) | £0.99 |
M2.5 steel threaded bar, 150mm + nuts (4 from a 5 pack) | £5.30 |
0.5m HDMI male to female panel mount (inc. bolts) | £2.39 |
Dual USB female socket to male cable | £1.75 |
RJ45 male to female screw mount (2 pack) | £1.98 |
Raspberry Pi heatsinks, 5mm (5 pack) | £1.09 |
USB to DC 5V 1A 5.5/2.1mm plug, 80cm | £3.49 |
M2.5 nylon hex nuts (48 from a 50 pack) | £3.90 |
3mm extruded clear perspex, 600×400mm (2 pack) | £10.64 |
Laser cutting charge | n/a |
HDMI 270 degree adaptor | £1.87 |
Polyurethane rubber feet (4 from a 12 pack) | £2.99 |
Subtotal inc. P&P | £88.18 |
Raspberry Pi 3 (5 pack) | £152.50 |
Kingston Class 10 16GB microSDHC cards (5 pack) | £16.95 |
Total inc. P&P | £257.63 |
Future improvements/ideas
- Redesign & simplify case to share the same design as my PINE A64+ cluster, removing the need to solder a custom power cable and reducing the cost slightly
- Disable WiFi/Bluetooth/HDMI for reduced power consumption?
- Increase network I/O by adding a USB gigabit ethernet adaptor to each Pi in the cluster? This almost triples the network throughput from 94Mbits/sec to 272Mbits/sec per Pi
- Overclock the microSD reader for higher performance: this nearly doubles the Kingston card speed from 20MByte/sec to 38MByte/sec
- Add a button on the case to safely shut down the entire cluster? Perhaps wired to a GPIO pin which is monitored by each Pi3 (see the sketch below)
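For the shutdown button, a minimal sketch of the watcher each Pi could run is below. It uses the sysfs GPIO interface and must run as root (e.g. started from /etc/rc.local); the pin number and the wiring (button between the pin and GND, with a pull-up so the pin reads high until pressed) are assumptions, not part of the build above.

```bash
#!/bin/bash
# Watch a GPIO pin and shut the node down cleanly when the button is pressed.
# GPIO 17 is just an example; a pull-up is assumed so the pin normally reads 1.
PIN=17

# Export the pin via the sysfs GPIO interface and set it as an input
echo "$PIN" > /sys/class/gpio/export
echo in > /sys/class/gpio/gpio$PIN/direction

# Poll once a second; when the button pulls the pin low, shut down
while true; do
    if [ "$(cat /sys/class/gpio/gpio$PIN/value)" = "0" ]; then
        shutdown -h now
    fi
    sleep 1
done
```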
Clusters of other Single Board Computers
So far I’ve also built:
- DIY 5 Node Cluster of Raspberry Pi 3s
- 40-core ARM cluster using the NanoPC-T3
- 5 Node Cluster of Orange Pi Plus 2e
- Bargain 5 Node Cluster of PINE A64+
- ClusterHAT with 4× Raspberry Pi Zero
- 96-core ARM supercomputer using the NanoPi-Fire3
I’d like to build a small cluster of all the current crop of sub-$100 ARM SBCs, comparing their different features, with detailed benchmarks (e.g. the Odroid C2/XU4 and the Banana Pi M3). Please email me if you’d like to send boards for review.
Video
Featured on episode #039 of the Bilge Tank by Pimoroni:
Software to run on a cluster?
or... What is it for?? Education, training, blah, blah... well, personally I’m just running stock Raspbian on each Pi for now, and I’m going to experiment with things like load-balanced web/database servers.
Running Docker on ARM on each node looks like an excellent way of controlling the cluster.
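As a starting point, something like this gets the same container running on every node. It’s only a sketch: it assumes Docker’s convenience install script supports your Raspbian image, that the nodes are reachable as rpi1 to rpi5, and my-arm-webapp is a placeholder for whatever ARM-built image you actually want to run.

```bash
#!/bin/bash
# Install Docker and launch the same (placeholder) container on each node over SSH.
for node in rpi1 rpi2 rpi3 rpi4 rpi5; do
    ssh pi@"$node" "curl -sSL https://get.docker.com | sudo sh"
    ssh pi@"$node" "sudo docker run -d -p 80:80 my-arm-webapp"
done
```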
Nick Smith, April 2016.