World's Biggest Raspberry Pi Supercomputer

Friends in Oracle Developer Marketing came up with the crazy idea of building the world's biggest Raspberry Pi cluster. The biggest before was about 700 nodes, so 1024 seemed like a good number to aim for to beat that. I got talked into helping design it and work out the power and networking.

The cluster ended up being covered by over 50 publications and websites.


Hardware Design

We decided to use standard 19″ server racks for mounting all the Pis. After a lot of design iterations we managed to pack 21 Raspberry Pis into 2U of space using custom 3D-printed brackets threaded onto 8020 aluminum extrusion. We paired two of those trays with a 48-port network switch sandwiched between them, making a bank of 42 Pis. We fit six banks into three of the four racks and seven into the fourth, which gave us:

(6+6+6+7)*42 = 1050 Raspberry Pis

Which was slightly over our 1024 goal but meant we had a few spares in case some broke for any reason.
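For anyone who wants to play with the layout numbers, here is a tiny sketch of the same arithmetic, using only the figures quoted above:

```python
# Sanity check of the rack layout described above.
PIS_PER_TRAY = 21              # 21 Pis per 2U tray of 3D-printed brackets
TRAYS_PER_BANK = 2             # two trays sandwiching one 48-port switch
BANKS_PER_RACK = [6, 6, 6, 7]  # six banks in three racks, seven in the fourth

pis_per_bank = PIS_PER_TRAY * TRAYS_PER_BANK      # 42
total_pis = pis_per_bank * sum(BANKS_PER_RACK)    # 1050

print(pis_per_bank, total_pis)  # 42 1050
```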

Original Design

The original idea was to make it look like an old English police box, as it was the right shape and seemed fitting with the Raspberry Pi being British as well. I put together a mockup in SketchUp to help sell the concept to management. The design for mounting the Pis, and the density, improved a lot from that initial concept.


Power Distribution

We used the Raspberry Pi 3 B+ as the Pi 4 had not been released when we were doing the purchasing. Its power requirement is 1-2 A depending on load, so at 1024 boards that is potentially around 2000 A at 5 V, which is over 10 kW of power before you get to network switches, cooling, and so on. For the best density we found 60-port USB chargers that could deliver 60 A, or 300 W. We used one for each bank of 42 Pis, which gave us about 1.5 A on average per Pi. We thought that should be enough if we shut down unused components like HDMI to save whatever power we could.
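As a rough sketch, the power budget works out like this (nothing here beyond simple arithmetic on the figures above):

```python
# Back-of-the-envelope power budget using the numbers from the post.
PI_COUNT = 1024
AMPS_PER_PI_MAX = 2.0        # worst-case draw of a Pi 3 B+ under load
VOLTS = 5.0

CHARGER_AMPS = 60.0          # 60-port USB charger rated for 60 A (300 W at 5 V)
PIS_PER_CHARGER = 42         # one charger per bank of 42 Pis

worst_case_amps = PI_COUNT * AMPS_PER_PI_MAX        # 2048 A
worst_case_watts = worst_case_amps * VOLTS          # 10240 W, i.e. over 10 kW
avg_amps_per_pi = CHARGER_AMPS / PIS_PER_CHARGER    # ~1.43 A available per Pi

print(worst_case_amps, worst_case_watts, round(avg_amps_per_pi, 2))
```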

Networking

We needed a lot of networking, and all the 48-port switches were managed. We also wanted it to be as fast as we could get at a reasonable cost, so I went with Ubiquiti networking gear, as the design was really cool and the value for money was impressive. We ended up with:

  • 25 x 48-port 1 Gb switches with 10 Gb uplinks.
  • 2 x 16-port 10 Gb switches for the backbone.
  • A security gateway to do NAT so the cluster could have private IPs.
  • A Cloud Key, which ran the network management software.

It was a bit of a learning curve to get it all set up and configured. The biggest challenge was that with more than 256 hosts we needed a subnet bigger than a class C. We did not want multiple subnets because we wanted to be able to UDP multicast to all the Pis. It is surprising how hard it is to get everything to run on a single class B subnet with mask 255.255.0.0.
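To make the subnet sizing concrete, here is a small Python sketch. The specific prefixes, multicast group, and port are illustrative only, not what we actually ran:

```python
import ipaddress
import socket
import struct

# Why a class C (/24) was not enough for 1000+ hosts: usable host counts.
print(ipaddress.ip_network("192.168.1.0/24").num_addresses - 2)   # 254
print(ipaddress.ip_network("172.16.0.0/16").num_addresses - 2)    # 65534

# Minimal sketch of the kind of UDP multicast send that motivated keeping
# every Pi on one flat subnet (group address and port are made up here).
GROUP, PORT = "239.1.1.1", 5005

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# TTL of 1 keeps the datagram on the local network segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", 1))
sock.sendto(b"hello, cluster", (GROUP, PORT))
```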

It's amazing how much physical effort it took to wire. We were super lucky to have amazing help from a group of students from 42 school. Take the network cables as an example: each one came in a plastic bag with two twist ties. If you say it took two minutes to unwrap each one and plug it into the right place, then that alone is 36 hours of tedious work.