SQL Query Test - CPU Recommendations?

I am running the PDT on our new 10.2.200.5 server, which is currently a test environment. On the SQL Query Test, I am getting between 4400 ms and 5000 ms. I have cleared all other fails except that the manager/manager account exists and the recovery model is set to Simple. It is a VMware virtual server, there are no other virtual machines running on this host, and C-States are disabled.
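Side note on the recovery-model fail: if PDT is flagging Simple, you can check and change it in T-SQL. A quick sketch (the database name `EpicorERP` below is a placeholder; substitute your actual ERP database, and make sure you have log backups scheduled before switching to Full):

```sql
-- List the current recovery model for every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch the ERP database to the Full recovery model.
-- 'EpicorERP' is a placeholder name; Full requires regular
-- transaction log backups or the log file will grow unchecked.
ALTER DATABASE EpicorERP SET RECOVERY FULL;
```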

The host is several years old and is running a Xeon E5630 @ 2.53 GHz. Would this be the most likely problem?

We are preparing to replace our 3 hosts, and the initial configuration has Xeon Silver 4110 8-core 2.1 GHz processors. There are lots more options, and I am wondering if one of the hosts could be beefed up to get the best performance for Epicor. Any recommendations?

Are you running on a Cisco Blade System?

I have noticed that the best performance for Epicor / MRP comes from a higher-GHz CPU. But then you drop in core count.

Intel Xeon E5-2667v2 3.3GHz, 25M Cache, 8.0GT/s QPI, Turbo, HT, 8C, 130W or better

I get about 1800 ms on my SQL Query Test.

It is not a blade system. They are Lenovo hosts and that is what is being recommended again. I was surprised when I saw the slower CPU speeds. I will take a look at the options. 8 cores matches what we have now and won’t screw up my Microsoft licensing.

Check out the latest HW Sizing Guide: https://epicweb.epicor.com/resources/MRCCustomers/Epicor-ERP-Hardware-Sizing-Guide-WP-ENS.pdf#search=hardware%20sizing

Intel® Xeon® E5-2667 v4 3.2GHz, 25M Cache, 9.60GT/s QPI, Turbo, HT, 8 Cores, 135W

Your Query Test could improve if you get SSD drives or at least 15K SAS. Not sure if you use NetApp.

In my experience, it's always been the case that Epicor is far more sensitive to CPU clock speed than to the number of cores. This was ESPECIALLY true for E9, slightly less so for E10. Over time the server code base is probably being reworked to take advantage of the many cores available, but there are likely still plenty of code paths that are single-threaded.

@Bart_Elia has probably posted a detailed description in the past if you search. If he hasn’t, then standby as I’m sure he will accept the challenge of explaining… :slight_smile:

waits for @Bart_Elia

Blocking? :wink:

Thanks for the input. I forgot that the HW sizing guide included recommendations. I kicked that recommendation back to our supplier to find out options. The Lenovo line only includes the silver, gold, and platinum Xeon processors which get very expensive for the higher clock speeds.

Everything runs off of our Tegile hybrid SAN. Running the Network Diagnostic, I get 0.395 average server time and 0.129 average network time.

Sheesh, I have a day job :wink:

I actually am going to do that horrible "it depends" developer thing here. The variety of data loads we are seeing in the Cirrus upgrade and telemetry data is actually quite stunning to me. We really have a diverse workload across the customer base, so there is rarely a one-size-fits-all answer to the many tweaks. For example, PDT might rate something as medium importance, but if the data load is high in that area it becomes critical.

We do a fair number of SQL calls - more than I would prefer, MUCH fewer than E9. I usually defer to @aidacra and a couple of other folks (Raj, Israel) you might bump into at Insights who are responsible for PDT and all the guidance behind it.