Clustering the CM4 as much as possible #192
VTHMgNPipola asked this question in General
I've wanted to build a rack-mountable cluster of Raspberry Pis for quite some time. A few days ago I emailed Jeff with questions about it, since he had released a video on Uptime Lab's 1U CM4 cluster blades, and he directed me here.
First, what is actually necessary in such a device? Uptime Lab's blade has an HDMI port, USB ports, and built-in NVMe, but I don't think anyone will use those while the blade is inside the enclosure, except for debugging and testing (the NVMe, yes, but a whole NVMe drive for a single CM4 seems like a waste). An SD card slot seems nice to have, but I'm not sure, since I don't know whether anyone actually uses the Lite version of the CM4.
Then there's networking and power distribution. I want to use a 3.5" HDD-tray-like form factor to hold the compute modules, and I can probably fit 4 or 5 of them on each tray with adequate airflow, but that's 48 or 60 1GbE connections per 2U of rack space, which is a lot. Because of that, I want to integrate a switch on each blade that exposes two 2.5GbE uplinks. For power distribution, each blade would connect to a 12 V backplane and convert to 5 V internally.
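To sanity-check those numbers, here's a quick back-of-the-envelope sketch. The tray count per 2U follows from the 48/60-port figures above; the per-module power draw and DC-DC efficiency are my own assumptions (the CM4 is often quoted around 7 W peak, and ~90% is a plausible buck-converter efficiency), not figures from any datasheet:

```python
# Back-of-the-envelope check for one 2U shelf of CM4 trays.
# Assumptions (mine, not from the post or a datasheet):
#   - 12 trays per 2U, like a 3.5" HDD backplane
#   - ~7 W peak draw per CM4, varies with load and peripherals
#   - ~90% efficient 12V -> 5V buck converter per tray

TRAYS_PER_2U = 12
WATTS_PER_CM4 = 7.0       # assumed peak per module
BUCK_EFFICIENCY = 0.90    # assumed DC-DC conversion efficiency

for cm4s_per_tray in (4, 5):
    total_cm4s = TRAYS_PER_2U * cm4s_per_tray
    gbe_ports = total_cm4s                       # one 1GbE port per CM4
    load_5v_w = cm4s_per_tray * WATTS_PER_CM4    # load on the 5 V rail, per tray
    input_12v_w = load_5v_w / BUCK_EFFICIENCY    # power pulled from the backplane
    input_12v_a = input_12v_w / 12.0
    print(f"{cm4s_per_tray} CM4s/tray: {gbe_ports} x 1GbE per 2U, "
          f"{input_12v_w:.1f} W ({input_12v_a:.1f} A) per tray from the 12 V backplane")
```

That reproduces the 48/60 port counts and suggests each tray only pulls around 3 A from the 12 V backplane, which seems manageable for a backplane connector.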
The problem with integrating a switch on each blade is that, since I only recently started working with electronics and low-level computer hardware, I don't know whether it can work that way, and even if it did, the switch IC would need enough bandwidth to support every CM4 at full speed. And then there's cost: a BCM53112 costs $25 in single quantities, supports only four 1GbE ports, and is out of stock, plus over $20 for two 2.5GbE MagJacks. If it does work, though, the benefit is that even without a 2.5GbE switch you could reach 4 or 5 CM4s over a single Ethernet connection, and you'd cut the number of connections for all the compute modules at least in half (you could even stack two clusters with a 48-port switch in the middle and connect it with 100 Gbps links to somewhere else if you really wanted to).
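The bandwidth side of that trade-off can be quantified as an oversubscription ratio, and the cost side as switch-hardware dollars per module. This sketch uses only the figures from the post (single 1GbE per CM4, two 2.5GbE uplinks, $25 switch IC, $20 of MagJacks):

```python
# Downlink vs uplink bandwidth for a per-tray switch,
# plus switch-hardware cost amortized per CM4.
DOWNLINK_GBPS = 1.0       # one 1GbE NIC per CM4
UPLINK_GBPS = 2 * 2.5     # two 2.5GbE uplinks per tray
SWITCH_IC_COST = 25.0     # BCM53112, single quantity (from the post)
MAGJACKS_COST = 20.0      # two 2.5GbE MagJacks (from the post)

for cm4s_per_tray in (4, 5):
    downstream = cm4s_per_tray * DOWNLINK_GBPS
    ratio = downstream / UPLINK_GBPS
    cost_per_cm4 = (SWITCH_IC_COST + MAGJACKS_COST) / cm4s_per_tray
    print(f"{cm4s_per_tray} CM4s: {downstream:.0f} Gb/s down vs {UPLINK_GBPS:.0f} Gb/s up "
          f"-> {ratio:.2f}:1 oversubscription, ${cost_per_cm4:.2f} of switch hardware per CM4")
```

At 4 or 5 modules per tray the downlinks (4 or 5 Gb/s) never exceed the 5 Gb/s of uplink, so the uplinks themselves aren't the bottleneck; the open question is whether an affordable switch IC with a non-blocking fabric and 2.5GbE-capable ports exists.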
What are your opinions on this? Do you know of any company that can supply such Ethernet switch ICs at a lower cost, or where I could look for them? Any ideas on how to fit even more compute modules into a smaller space? Opinions on anything else?
Sorry for the lengthy post and bad English; thanks xD