Details on them? I may be interested as well.
Icecold wrote: ↑Sat Sep 11, 2021 11:26 am
That should be the correct bios page. Maybe that specific board doesn't support it? I did get a message from my WTB post on [H] from somebody that said "I have a handful of full-size AsRock Rack server mb's that are socket 2011v3. They all support bifurcation." so I'm in the process of trying to buy at least one of those to see how it works out.
It seems like a total crapshoot TBH. The setting is there on some boards you wouldn't expect it on but missing from others. I appreciate you checking though.
I'm still waiting on more info but I'll let you know.
The guy on [H] got back to me, and they're actually Supermicro boards, not ASRock. I'm waiting to confirm they do support bifurcation before I buy at least one.
Maybe, but I initially saw this setting talked about all the way back in the Intel Sandy Bridge days for getting an extra PCIe slot, so I'm really not sure.
2 x of these: https://www.supermicro.com/en/products/ ... d/X10SRM-F
1 x of these: https://www.supermicro.com/en/products/ ... d/X10SRL-F
1 x of these: https://www.supermicro.com/en/products/ ... d/X10SRi-F
The only one that I think even has a shot at working for this purpose is the last one. I might buy it just to see. This stuff gets fairly confusing when you have to consider how many PCIe lanes are available, etc.
Edit - that one only comes up to 4 x8 slots and 1 x4 slot with bifurcation. I may still buy it.
I think I probably just need to order the EPYC stuff @Skillz linked earlier in this thread for the next build, though, if I don't want to be limited to 4 or 5 video cards.
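(I figure once a card is actually sitting in a bifurcated slot, I can at least confirm what link width it negotiated from Linux with sudo lspci -vv and checking the LnkSta line under each GPU - assuming I'm reading the man page right.)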
Does a plebeian i7 support registered DIMMs like this server board requires? And does this board accept an i7 in the first place?
Added in 3 minutes 17 seconds:
I ran a 1080Ti + 1080 system for a while, and the different GPU performance and how the boinc client coped with it bothered me. (Well, I could have worked around that by using dedicated boinc client instances for the different GPUs. But I didn't, back then.)
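(For anyone who wants to try the multiple-instances route: as far as I remember, the client can be started more than once if each extra instance gets its own data directory and is told that multiple clients are allowed, roughly
boinc --dir /var/lib/boinc_gpu1 --allow_multiple_clients --gui_rpc_port 31417
and then an <ignore_nvidia_dev>0</ignore_nvidia_dev> line in the <options> section of that instance's cc_config.xml hides the other card from it. The directory and port number are just made-up examples; I haven't run it in exactly this form.)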
I didn't even think about the ECC issue. That CPU doesn't support ECC. I purchased it from somebody as a combo deal, and as far as I know he booted it up with that CPU in it. Maybe that motherboard can use non-ECC RAM and it's just not documented. I do have a couple of E5-2670 v3s I rarely use that I could use instead if need be, and then move the i7 over to the machine I pull one of those from. That's a good point as well about the differing GPU performance being annoying to deal with in terms of bunkering, etc. I should probably try my best to make the builds all use the same GPU, if I have enough of those GPUs available.
StefanR5R wrote: ↑Mon Sep 13, 2021 3:56 pm
Does a plebeian i7 support registered DIMMs like this server board requires? And does this board accept an i7 in the first place?
Added in 3 minutes 17 seconds:
I ran a 1080Ti + 1080 system for a while, and the different GPU performance and how the boinc client coped with it bothered me. (Well, I could have worked around that by using dedicated boinc client instances for the different GPUs. But I didn't, back then.)
Well, hopefully that's the case. Presence of ECC support in the DIMMs is not the issue, but rather a) whether the HSW-E (as opposed to -EP) can deal with registered DIMMs (I don't know, never looked into that… or maybe you have to plug in unregistered DIMMs then, even though this board is specified for RDIMMs and LRDIMMs?), and b) whether the mainboard's BIOS accepts this CPU in the first place. For sure, if this CPU with its higher clocks works, then that's preferable for a GPU build.
I wonder how necessary it is to ensure the RAM is running in quad channel. I probably should just for the sake of reducing any potential bottlenecks. I should be able to get this up and running with 3 GPUs at least relatively soon, and then I think the bifurcation risers I ordered will take the longest since they're shipping from overseas.
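(From what I've read, once it's built I can check whether all four channels actually have a DIMM in them from Linux with sudo dmidecode -t memory, which should list each slot and what's populated - I'll verify that rather than just trusting that I seated them in the right slots.)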
Am I correct in assuming, with this having a BMC and IPMI, that I will be able to access BIOS settings, choose my OS on dual boot, etc., across the network without physical access to it? If so, that would be incredibly useful. I've never actually used that feature before, so I wasn't 100% sure.
I'm going to have to give this some further thought in terms of how many GPUs to use per build. I hadn't really thought of the annoyance of managing multiple cards with different performance on the same system, and I have a pretty mixed bag in terms of card types.
Yes, but you need to set that up. I've never done it, but that's what it's for, yes.
Icecold wrote: ↑Tue Sep 14, 2021 7:39 pm
Am I correct in assuming, with this having a BMC and IPMI, that I will be able to access BIOS settings, choose my OS on dual boot, etc., across the network without physical access to it? If so, that would be incredibly useful. I've never actually used that feature before, so I wasn't 100% sure.
Edit - an example on Linux to view the onboard sensors: sudo ipmitool sensor
But more convenient is the remote GUI through the BMC's ethernet. Just enter the BMC's IPv4 or IPv6 address into a web browser. By default, the BMC pulls its IPv4 address from your DHCP server if you have one. But you can also configure a static IPv4 address for the BMC in the BIOS (alternatively, in the remote GUI if you found the dynamic address).
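If the OS on the host is already running, the BMC's LAN settings can also be changed from there with ipmitool. On the boards I have seen, the BMC LAN interface is channel 1, but treat that as an assumption and check with the print command first:
sudo ipmitool lan print 1
sudo ipmitool lan set 1 ipsrc static
sudo ipmitool lan set 1 ipaddr 192.168.1.50
sudo ipmitool lan set 1 netmask 255.255.255.0
sudo ipmitool lan set 1 defgw ipaddr 192.168.1.1
The 192.168.1.x addresses are of course just placeholders for your own network.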
The remote GUI requires a username and password. On older Supermicro boards, it's ADMIN and ADMIN. On newer boards, it's ADMIN and a string of 10 capital letters which should be printed on one of the stickers on the board. Although the previous owner could have changed that.
Once you are in the remote GUI, you can also view the BMC's VGA video through a feature called iKVM console with an HTML5 capable web browser, as well as pass keyboard and mouse events to it. This lets you access the BIOS screen on boot, as well as the normal OS desktop after boot.
You actually need IPMI or the remote GUI of the BMC if you want to check many of the onboard hardware sensors (fan speeds, some onboard temperatures, voltages).
The BMC is always on and accessible, as long as the PSU provides 5V standby power to the board. You can use the BMC to power on and power off the rest of the computer.
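For example, from another machine on the network with ipmitool installed (and assuming the default ADMIN account is still in place):
ipmitool -I lanplus -H <bmc-address> -U ADMIN -P <password> chassis power status
ipmitool -I lanplus -H <bmc-address> -U ADMIN -P <password> chassis power on
ipmitool -I lanplus -H <bmc-address> -U ADMIN -P <password> chassis power soft
"power soft" asks the OS for a clean shutdown, while "power off" cuts power immediately.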
Server board manufacturers generally require a paid license for a subset of the BMC features. But Supermicro enables all features (except for some management functions which are useful in larger organizations) without a paid license.
Should I be concerned with pulling too much power from the PCIe slots? The best configuration from a bus bandwidth perspective would have me running 2-3 cards with unpowered risers and 2 more with powered risers.
PS, that's only true if the BIOS, or the OS respectively, is configured to use the BMC as the graphics card. Obviously the BMC doesn't capture video from other video outputs.
This could be a concern. Workstation mainboards which are specifically designed for multi-GPU use often have separate 8-pin/Molex/SATA power input connectors which prop up the power rails of the PCIe slots.
If you are not sure of what the mainboard can really supply, maybe go with powered risers only.
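Another option, at least with NVIDIA cards on Linux, is to lower each card's power limit, e.g. sudo nvidia-smi -i 0 -pl 180 (the allowed range is shown by nvidia-smi -q -d POWER; 180 W is just an example). That lowers the total draw, though I don't know how the card splits the reduction between the slot and the PCIe power connectors, so I wouldn't rely on it instead of powered risers.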