HCI - Storage Spaces Direct - Two nodes

Jean-Charles PELLE
Occasional Contributor

Find a powerful, economical HCI solution that is suitable for small and medium environments.


The idea:

1. A small hyperconverged server per node: 4 hot-swap 2.5" NVMe drives + dual processors + 4x 10 Gbps networking (RDMA support) + 2 M.2 SSDs for the OS, etc.

[Image: 1.GIF]

 

----------------------------

 

2. Appropriately sized hardware for Storage Spaces Direct

 

[Image: plan_stocl.png]

 

---------------------------------

 

3. Configure the quorum of this small cluster with a witness in Azure.
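For step 3, the Azure-based quorum is the Cloud Witness feature introduced in Windows Server 2016. A minimal sketch; the cluster name, storage account name, and access key are placeholders:

```powershell
# Configure an Azure Cloud Witness as the quorum witness for the 2-node cluster.
# "mystorageaccount" and the key are placeholders: create a general-purpose Azure
# Storage account first and copy one of its access keys.
Set-ClusterQuorum -Cluster "S2D-Cluster" `
                  -CloudWitness `
                  -AccountName "mystorageaccount" `
                  -AccessKey "<storage-account-access-key>"

# Verify the witness configuration:
Get-ClusterQuorum -Cluster "S2D-Cluster"
```

With only two nodes, a witness is essential: without it, losing either node costs the cluster quorum.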

 

 ---

Your opinion on the solution? :)

 

 

4 Replies

Hyper-Converged Infrastructure (HCI) is all the rage these days, but this doesn't make it a universal solution. If you can afford it, great. My only problem with two nodes is that you lose half your storage capacity. Well, not lose, because you get redundancy, but at 4 nodes or higher you start to get better storage efficiency.
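To put numbers on that efficiency point, a back-of-the-envelope sketch (the 20 TB figure is illustrative; the percentages are approximate published guidance for S2D resiliency types):

```powershell
# Rough usable capacity for 20 TB of raw capacity, by resiliency type.
$rawTB = 20

# Two-way mirror (the only option at 2 nodes) stores 2 copies: 50% efficiency.
"Two-way mirror (2 nodes):    {0} TB usable (50%)"      -f ($rawTB / 2)

# Three-way mirror (3+ nodes) stores 3 copies: ~33% efficiency.
"Three-way mirror (3+ nodes): {0:N1} TB usable (~33%)"  -f ($rawTB / 3)

# Dual parity (4+ nodes): roughly 50% at 4 nodes, improving toward ~80% at scale.
"Dual parity (4+ nodes):      {0}+ TB usable (50-80%)"  -f ($rawTB / 2)
```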

Here's a nice and short story about something similar to your configuration: Project Kepler.

If you need more links on the topic (with tutorials etc), ping me.

Cheers. Emanuel

I did consider a 4-node infrastructure, but that solution exceeds my budget.

Processors, memory, network cards, Windows Server licenses: everything must be doubled compared to a 2-node solution.

For now, the loss of storage capacity remains the more acceptable trade-off.

With this solution, I think I can demonstrate that two nodes in production are not necessarily riskier, as long as they are prepared and configured correctly.

 

That said, the sizing choices must match the production environment.

 

In my case, the environment looks like this:

1st node: SRV-DC, SRV-FILE, SRV-DA, SRV-BD, SRV-APP-v, SRV-APP, etc.

2nd node: SRV-ADM, SRV-WSUS, SRV-PRTG, SRV-TEST, SRV-VEEAM, etc.

If one of the nodes goes down, I'm confident performance will remain excellent.

 

NVMe technology, coupled with fast network interfaces, should not only ensure production performance but also be the mainstay of my backup system!

It's also a perfect technology for Veeam's WAN Accelerator cache.

My backup system is itself another interesting project, since I'm giving up on the notion of archiving and dedicating myself to long-term backup!

Yes, I just want to be sure that in 10 years my saved data will be quickly usable. That is only rarely the case with classical archiving.

 

For the Kepler-47 project, the idea of using Thunderbolt technology intrigues me (real-world performance, stability); I wouldn't say no to a little more info on the subject. :)

The quality/cost/delivery (QCD) ratio could be excellent.

I'd really like more info!

 

Thanks for your answer, Emanuel.

I look forward to your documentation.

 

Jean-Charles

Well, quite a lengthy post; I'll try to address some of the points you brought up.

Beancounters are an important factor to take into consideration, too. As long as you can afford to get the redundant storage, you should be fine. However, unless a single node can sustain the entire load of all the VMs you won’t be able to do things like patching during production hours. Reading your message a second time, I noticed you covered this. :)

"I was thinking of installing at least one local (not clustered) DC as a VM on one of the nodes (possibly two, as I have four in total)." Unless you have another DC beyond those two, it will be a lengthy and major pain to restore in case you have an AD outage. Try to keep at least one DC VM that is only a DC and nothing else (and DNS, of course). If it carries many roles, it'll take a lot of time to restore and clean up everything.

I am not a storage expert, but backup to disk is definitely faster and less painful. I don't know about agents for specific apps (which may provide faster and more granular restore options for a database or CRM or whatever), but at least for Windows infrastructure, Windows Server Backup is good and reliable. Since it no longer supports tape, a bunch of disks would be great (I plan to use the old storage for that). However, if you plan to ditch the tapes, make sure you're using a more sophisticated file system (one that can handle bit rot, for instance). If you still have a tape library lying around, you could still do a less frequent backup of the things that really matter.

Moving on to Kepler: it's not my project, so I cannot give you more numbers than what I've read in the post.

 

As I’ve struggled a lot with S2D (I’m not a storage guy, and not a great networking guy, either), this step-by-step article on a 2-node HCI helped me a lot to clarify some fuzzy concepts.

https://www.tech-coffee.net/2-node-hyperconverged-cluster-with-windows-server-2016/
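For reference, the broad strokes of such a 2-node S2D build in PowerShell (a hedged sketch; the cluster, node, and volume names are placeholders, and the walkthrough above covers the details, including the RDMA networking):

```powershell
# Hypothetical names throughout; run from one of the two nodes.
$nodes = "NODE1", "NODE2"

# 1. Validate the hardware for S2D before building anything.
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory",
             "Network", "System Configuration"

# 2. Create the cluster without claiming any storage yet.
New-Cluster -Name "S2D-Cluster" -Node $nodes -NoStorage

# 3. Enable Storage Spaces Direct; this claims the local NVMe drives into a pool.
Enable-ClusterStorageSpacesDirect

# 4. Carve a mirrored cluster-shared volume for the VMs.
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "VMs" `
           -FileSystem CSVFS_ReFS -Size 4TB
```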

And now for some more performance-related links. As some of them are from vendors, they may be useful but take them with the proverbial grain of salt:

https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/performance-history

https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/subsystem/storage-spaces-direct/

https://www.micron.com/about/blogs/2017/september/microsoft-storage-spaces-direct-is-an-io-performance-beast

Also, https://youtu.be/raeUiNtMk0E?t=274 for some numbers.

There was another post from a Microsoft PFE; I remember the last picture was a screenshot showing a 1 GB/s copy between nodes, but I couldn’t find the link.

 

I hope you found at least some of these bits helpful.

Cheers. Emanuel

 


Hi people,

 

After hours and hours of research, I think I've found an HCI solution that fits everyone, or at least SMEs.

I've settled on SuperMicro because their solutions are more flexible.

I suggest paying attention to the cost of the NVMe SSDs.
There is no clear explanation for their sharp price hikes!
For example, take the reference "Intel DC P4500 2 TB NVMe (SSDPE2KX020T701)": about three weeks ago this reference was 30% cheaper!

 

HYPER-CONVERGED INFRASTRUCTURE

SuperMicro server (ref. SYS-1029U-TN10RT, qty 2):
- Dual Socket P (LGA 3647) | Intel® Xeon® Scalable Processors
- Memory 2666/2400/2133 MHz ECC DDR4 SDRAM | 2666 ECC DDR4 NVDIMM
- Intel® C621 chipset
- 2x SATA DOM / SuperDOM
- 2x 10GBase-T ports via AOC-URN6-i2XT | Intel® X540 Dual Port 10GBase-T
- IPMI 2.0 with virtual media over LAN and KVM-over-LAN support
- ASPEED AST2500 BMC graphics
- 10 hot-swap 2.5" drive bays; 10 NVMe (4 hybrid ports)
- 8x heavy-duty fans
- 2x 1000 W redundant power supplies | Titanium Level (96%)
- 2x passive CPU heat sinks
- 2x rack mount rails

CONFIGURATION OPTIONS

SuperMicro TPM chip: TPM 2.0 crypto-processor with Infineon 9670 controller (ref. AOM-TPM-9670H, qty 2)
Processor: Intel® Xeon® Silver 4116 | 16.5 MB L3, 12 cores, 24 threads | LGA 3647 | 2.1 GHz (ref. BX806734116, qty 4)
Memory: Crucial DDR4 | 8 GB | Reg ECC | rank 1 | 2666 MHz | CL19 (ref. CT8G4RFS8266, qty 32)
Storage: Intel SSD 2.5" DC P4500 2 TB NVMe | read 3.3 GB/s, write 1.2 GB/s | 515,000 IOPS | 5-year warranty (ref. SSDPE2KX020T701, qty 10)
Internal M.2 NVMe card: Supermicro PCIe add-on card for up to two NVMe SSDs (ref. AOC-SLG3-2M2, qty 2)
M.2 SSD: Intel Pro 7600p 256 GB M.2 PCIe NVMe (ref. SSDPEKKF256G8X1, qty 4)
Ethernet card: Mellanox ConnectX-4 Lx 2x 25 Gbps RDMA over Converged Ethernet (RoCE) SFP28 (ref. AOC-S25G-m2S, qty 2)
SFP cable: SuperMicro 1.5 m 25GbE SFP28 to SFP28, passive (ref. CBL-NTWK-0944-MS28C15M, qty 2)
SuperDOM SSD: SuperMicro 64 GB, mSATA, SATA III, 520 MB/s, 6 Gbit/s (ref. SSD-DM064-SMCMVN1, qty 4)
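As a sanity check on this bill of materials, the usable capacity of the ten P4500 drives (five per node) under a two-way mirror works out as follows. A back-of-the-envelope sketch that ignores the pool's reserve capacity:

```powershell
# 2 nodes x 5 x 2 TB NVMe drives; a two-way mirror keeps 2 copies of everything.
$nodes         = 2
$drivesPerNode = 5
$driveTB       = 2

$rawTB    = $nodes * $drivesPerNode * $driveTB   # 20 TB raw
$usableTB = $rawTB / 2                           # ~10 TB usable before reserve
"{0} TB raw -> ~{1} TB usable" -f $rawTB, $usableTB
```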

 

If you need clarification on this config, let me know!
