CPU and vCPU: it stays difficult to figure out.


Hi,

After a lot of reading and searching, the internet still hasn't revealed the truth to me.

I keep struggling with CPU provisioning.

 

In this example (a real case) I set up Windows Server 2019 as a Hyper-V host.

It has 2 processors with 12 cores each, so I have 24 cores in total (Task Manager reports 24 cores and 48 logical processors).

After lots of reading and searching, "the internet says" that 1:4 is a good ratio for calculating vCPUs. Since I have 24 cores, I assume I have 96 vCPUs available.
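That 1:4 figure is a consolidation ratio for planning, not a hard resource: it estimates how many vCPUs can reasonably be assigned across all VMs on the host combined. A minimal sketch of the arithmetic (the 4:1 ratio here is an assumption that depends entirely on the workload):

```python
# Oversubscription budget: how many vCPUs can be assigned across ALL VMs
# on this host together. The ratio is a planning guideline, not a per-VM limit.
physical_cores = 24   # 2 sockets x 12 cores, from the post
ratio = 4             # assumed 4:1 vCPU-to-core consolidation ratio

vcpu_budget = physical_cores * ratio
print(vcpu_budget)    # 96 vCPUs, spread over all VMs together
```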

 

The setup is as follows: Server 2019 as the Hyper-V layer, with one VM installed on it, which is the VDI server. In that VDI server (VM) I installed and configured all my VDI clients, computers 1 to 15.

 

The idea was to keep 1 core (4 vCPUs at the 1:4 ratio) for the Hyper-V layer and give the remaining vCPUs to the VDI server: 4 vCPUs for the layer and 92 vCPUs for the VDI server. That way we would have enough vCPUs on the VDI server to give to our VDI clients.

 

And that's where our idea falls apart. If we try to give 92 vCPUs to the VDI server, it does not work; we can only give a maximum of 48 vCPUs (which is the same as the number of logical processors)?

 

1 Reply
This is the Windows Virtual Desktop forum section, which means it is related to the Azure VDI solution only. If you want a straight answer from Microsoft, you should probably try a forum section dedicated to Hyper-V.

The user/CPU ratio is a variable that changes depending on the user workload and the app deployment type, such as session host vs. VDI. For a VDI scenario I would probably bet on a 1:1 or 1:2 ratio; it all depends on the workload. The only recommendation I can give for the ratio is: test your workload.
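As a rough sizing check under those hedged ratios (the numbers are assumptions taken from the question: 15 VDI clients on a 24-core host, with a per-desktop allocation picked for illustration):

```python
clients = 15
vcpus_per_client = 2   # assumed per-desktop allocation; tune to your workload

for ratio in (1, 2):   # the 1:1 and 1:2 vCPU-to-core ratios from the reply
    cores_needed = clients * vcpus_per_client / ratio
    print(f"1:{ratio} ratio -> {cores_needed:.0f} physical cores needed")
# At 1:1 that is 30 cores (more than the host's 24); at 1:2 it is 15.
```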

In this configuration you have 24 cores and 48 logical processors (12 cores and 24 threads per socket). Hyper-V may present hyper-threads as cores, but they are not; that is where the logical number 48 comes from, and a single VM cannot be given more vCPUs than the host has logical processors.
If you want to do CPU overcommitting, you will probably need to change the scheduler type to Classic on Server 2019.
See:
https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/manage-hyper-v-scheduler-types
and
https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/manage/about-hyper-v-scheduler-type-selection