Hi Randy,
This is the way the engine works irrespective of whether it runs on bare metal, a VM, or a container. Having HT (or not) influences how many schedulers the DB engine sees, as does CPU affinity. The logic described here accounts for visible schedulers only (see the status column in sys.dm_os_schedulers; usable schedulers will show as VISIBLE ONLINE). So whether HT is on or off, bare metal or VM, the DB engine sees what's VISIBLE ONLINE and works with that number.

About NUMA and Soft-NUMA: for the DB engine there is currently no distinction between seeing a HW NUMA node or a Soft-NUMA node. After Soft-NUMA is enabled and the nodes are formed, the same logic I mentioned in the post is in effect: the DB engine will always try to assign schedulers from the same NUMA node for (child) task execution, to keep NUMA locality. The coordinating worker thread may be placed in a different NUMA node.

Regarding NUMA, there are other scalability considerations for having Soft-NUMA enabled on large NUMA hardware, such as the additional I/O Completion and Lazy Writer threads. You can see more about that here: https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/soft-numa-sql-server
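For reference, here's a quick sketch of the sys.dm_os_schedulers check I mentioned — counting the usable (VISIBLE ONLINE) schedulers the engine sees per node. This is just an illustrative query; run it against your own instance to see how HT, affinity, or Soft-NUMA changes the counts:

```sql
-- Usable schedulers per node as the DB engine sees them.
-- Only VISIBLE ONLINE schedulers run user tasks; offline or
-- hidden schedulers are excluded by the WHERE clause.
SELECT parent_node_id,
       COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE'
GROUP BY parent_node_id
ORDER BY parent_node_id;
```

With Soft-NUMA enabled you'll see more (smaller) nodes listed, which is exactly what the scheduler-assignment logic in the post then works against.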