Notiusweb wrote: Tutor, I like the MBD X10DRX myself out of the two. For me, any time something has USB 3.0 it's a big deal, as I use external drives.
I have seen pics of boards in these forums (and Bitcoin forums) and they are typically open, with the GPUs very close by; they look like true mining or rendering configs. Mine looks like a standard tower on a desk, with wires running out the back to the GPUs down below. I am wondering if I could still have 13 GPUs without them running at 1x speed.
I could be completely wrong here, but just drawing from my own experience: if the boards you listed are using PCIe lanes (not slots) to handle storage, then one thing that would come into play for consideration is:
MBD X9DRX's
6. 8x SATA2 and 2x SATA3 ports
vs
MBD X10DRX's
6. 10x SATA3 (6Gbps); RAID 0, 1, 5, 10
In my case the board had something similar, LSI and SATA, where I was able to disable a storage controller to free up lanes. But if it had been all LSI, for example, I would have had no choice but to use those lanes to handle my storage. I think that where a board handles storage natively, without PCIe lanes, all good. But if any storage rides on PCIe lanes, you'd want the liberty of sacrificing it to reclaim those lanes. So it jumped out at me that, should any storage use PCIe lanes, you could disable the SATA2 or SATA3 controller independently of one another in the case of the MBD X9DRX, whereas the MBD X10DRX would be all or none (unless its 10 ports can be deactivated in some split way). In fact, I am now more inclined to weigh a board's overall lane allocatability rather than just its PCIe slot count, although usually I think more slots signal higher yield. Just thoughts, nothing definitive as far as insight. Certainly has nothing to do with Best Practices... more like Best Hacking-ces.
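On the question of whether cards have actually dropped to 1x: on a Linux box with the pciutils package installed, you can read the negotiated link width straight from the OS instead of guessing. This is just a rough sketch, assuming NVIDIA GPUs (PCI vendor ID 10de); LnkCap is what the card can do, LnkSta is what it actually negotiated:

```shell
# For each NVIDIA GPU, print its maximum link capability (LnkCap)
# next to the link width/speed it is actually running at (LnkSta).
# Run as root for complete -vv output on some systems.
for dev in $(lspci -d 10de: | awk '{print $1}'); do
    echo "== GPU at $dev =="
    lspci -s "$dev" -vv | grep -E 'LnkCap:|LnkSta:'
done
# A card on a x1 riser typically shows "Width x1" on the LnkSta line
# even while LnkCap still reports "Width x16".
```

If LnkSta matches LnkCap, the card trained at full width; a mismatch points at the riser, cable, or slot wiring rather than the card itself.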
I would have had no hesitation in getting two X10DRXs if I had already had CPUs for them, or if I hadn't already had E5 v1/v2 CPUs as I did in the case of the X9DRX (and I forgot to mention this earlier, but I also had 96 GB of unused DDR3 RAM for each X9DRX). So outfitting two X10DRXs would have cost me many thousands of dollars for equivalent, suitable RAM and CPUs. Moreover, working with 24 other systems makes my money dearer to me. At no point did I mean to imply that anyone else shouldn't prefer or purchase the X10DRX. That's why I stated: "When it comes down to GPU rendering there's not much difference between the MBD X9DRX and the MBD X10DRX; so let your CPU, memory and SATA needs and relative costs and availability break the tie. See, in particular, the bolded differences and underlined similarity:" I admit that by mentioning SATA storage only, rather than just saying "storage", my language was under-inclusive and left out other forms of storage such as USB.
I'm not afraid of snakes and have had many of them as pets throughout my life. So, knowing that background information, it shouldn't come as a surprise to anyone that your and my snake dens (resulting from our being really just mere beginners on the path to true GPU monsterdom) don't bother me in the least. Our builds truly are still works in progress. Also, remember that when you start with something, in your case {literally} a standard tower that is now on a desk, it's hard to just trash it - reminds me of the extra CPUs and RAM that I wasn't using [and still wouldn't be using if I had purchased two X10DRXs]. If your case looked like anything else, you'd be a magician or wasteful. My late mother's favorite saying when I was a child was "waste not - want not." Wasn't she an early ecological fanatic? However, by all means tackle the aesthetics when you feel the need. But always remember, and take pride in, the fact that YOU BUILT IT, and that in the course of doing it you learned a lot more than you knew at the start.
One of the things that I always do is read the manual for a system/motherboard/other component that piques my interest before I decide to purchase the thing. In this post [ viewtopic.php?f=40&t=43597&start=200#p241271 ], regarding tackling IO space issues, I suggested that the first thing one should do is study his or her system's block diagram showing the layout of the system's CPU(s), PCIe slots, DMI points, other features/resources/peripherals and their connections ... . I've found it best to download manuals as PDFs so that they can be easily searched. The X10DRX and X9DRX manuals show that all of the persistent storage travels over the DMI link of each motherboard. A file search for "LSI" in the manuals for both of these motherboards didn't turn up anything. The diagrams for each of these two motherboards show that nothing is connected to the PCIe lanes of either of them other than what you connect to them. PCIe slot 11 is, however, connected to the CPU by a DMI link [ https://en.wikipedia.org/wiki/Direct_Media_Interface ], which is similar to, but not the same as, a PCIe lane. Slot 11 is the one that I intend to populate with one of my 4-port eSATA cards to connect one of my 20-terabyte external quad 5TB hard drive arrays. Among the many other things the manuals cover, they show how to disable the SATA controllers completely or per port, and how to disable the USB controllers.
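Once a board is actually built up, you can cross-check the manual's block diagram against what the OS enumerates. A minimal sketch, again assuming a Linux box with pciutils installed (the exact tree will of course differ per board):

```shell
# Print the PCI device tree: devices under the CPU root ports appear
# on separate branches from those sitting behind the chipset/DMI.
lspci -tv

# List just the SATA/AHCI controllers with their bus addresses,
# so you can locate them on the tree printed above.
lspci | grep -i -E 'sata|ahci'
```

Comparing those bus addresses against the tree makes it obvious whether storage is hanging off the DMI (as the X9DRX/X10DRX manuals indicate) or consuming CPU PCIe lanes.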
Resources:
X10DRX - http://www.supermicro.com/products/moth ... x10drx.cfm
X9DRX - http://www.supermicro.com/products/moth ... drx_-f.cfm