In the previous article I looked at the performance of different mathematical operations and found that C#'s Decimal operations take around 10 times as long as the same operations on Double. I also found that the performance profiles of different processors vary greatly. So, I decided to take a look at several different processors and see what other interesting things I could find.
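The benchmark code itself isn't shown in this excerpt, so as a rough sketch of the kind of timing loop involved (the class name, method names, and iteration count here are my own, not the article's actual test harness):

```csharp
using System;
using System.Diagnostics;

class BenchSketch
{
    const int Iterations = 10_000_000;

    // Time a dependent chain of Double additions.
    public static long TimeDoubleAdd()
    {
        double acc = 0.0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            acc += 1.000001;
        sw.Stop();
        GC.KeepAlive(acc); // use the result so the loop isn't dead code
        return sw.ElapsedMilliseconds;
    }

    // Same loop with Decimal, which is typically several times slower.
    public static long TimeDecimalAdd()
    {
        decimal acc = 0.0m;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
            acc += 1.000001m;
        sw.Stop();
        GC.KeepAlive(acc);
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        Console.WriteLine($"double add:  {TimeDoubleAdd()} ms");
        Console.WriteLine($"decimal add: {TimeDecimalAdd()} ms");
    }
}
```

The exact ratio you see will depend heavily on the processor, which is the whole point of the experiments below.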
To start with, I ran the code on my Intel Compute Stick to see how the Atom processor performed. It actually put in a solid and relatively flat performance, similar to the Core i7 we looked at last time. Here are the addition results:
And the results for multiplication operations were:
Note that multiplication beats addition yet again. I believe I know why, but I will save the explanation for later, when I dig even deeper into the underlying code that is being generated. As a hint: take a look at the multiplication tests; I believe the difference is an artifact of the test rather than an actual instruction-speed difference.
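I'll save the real explanation for when I look at the generated code, but as a purely hypothetical illustration of how a test's structure, rather than instruction speed, can skew results: a loop whose operations all depend on the previous result measures full per-operation latency, while a loop with independent accumulators lets the CPU overlap operations in flight. The code below is my own sketch, not the article's actual tests.

```csharp
using System;
using System.Diagnostics;

class ArtifactSketch
{
    const int N = 10_000_000;

    // Dependent chain: each add must wait for the previous result,
    // so this measures the full latency of every operation.
    public static double DependentChain()
    {
        double acc = 0.0;
        for (int i = 0; i < N; i++)
            acc += 0.0000001;
        return acc;
    }

    // Independent accumulators: the CPU can keep several multiplies
    // in flight at once, so time-per-operation can come out *lower*
    // even if a single multiply has higher latency than an add.
    public static double IndependentOps()
    {
        double a = 1.0, b = 1.0, c = 1.0, d = 1.0;
        for (int i = 0; i < N / 4; i++)
        {
            a *= 1.0000001;
            b *= 1.0000001;
            c *= 1.0000001;
            d *= 1.0000001;
        }
        return a + b + c + d;
    }

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        double r1 = DependentChain();
        Console.WriteLine($"dependent:   {sw.ElapsedMilliseconds} ms (result {r1})");

        sw.Restart();
        double r2 = IndependentOps();
        Console.WriteLine($"independent: {sw.ElapsedMilliseconds} ms (result {r2})");
    }
}
```

If the multiplication tests happened to allow more instruction-level parallelism than the addition tests, multiplication could look faster without actually being faster per instruction.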
To get another architecture without running out and buying a new computer, I need to get a bit creative. I am an Azure head, so while looking at the processors available on the Virtual Machines I noticed that the Lsv2-series runs on the AMD EPYC™ 7551 processor, which would be interesting. So, I created an L8s-v2 in the East US 2 region, ssh'd in, used the information from my Installing .NET Core article to install .NET Core, and sftp'd the code over. I ran the test, downloaded the results, and deleted the VM (a $464.26-a-month burn rate is more than I want to mess around with). The results were...interesting. The addition results were:
And the multiplication results were:
The multiplication results were right in line with the addition results, and the Decimal multiplication actually took longer than the addition! That is the first time the results have come out like that, so we need to dig in and figure out what is going on.
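One fact worth keeping in mind while digging in: Double maps to an 8-byte IEEE 754 value with dedicated hardware add/multiply instructions, while Decimal is a 16-byte struct whose arithmetic is implemented in software by the runtime. Software routines can have very different relative costs on different microarchitectures, which would help explain why the EPYC's profile looks nothing like the Intel parts'. A quick sketch confirming the layout difference (these are real BCL members, but the program itself is just my illustration):

```csharp
using System;

class DecimalLayout
{
    static void Main()
    {
        // Double: 8 bytes, with native CPU instructions for + and *.
        Console.WriteLine(sizeof(double));    // 8

        // Decimal: 16 bytes (a 96-bit integer plus sign and scale bits);
        // its + and * compile to calls into software routines in the
        // runtime rather than single CPU instructions.
        Console.WriteLine(sizeof(decimal));   // 16

        // The operators are backed by methods like decimal.Add/Multiply.
        decimal x = 1.5m, y = 2.5m;
        Console.WriteLine(decimal.Add(x, y)); // 4.0
    }
}
```

So when Decimal's relative performance jumps around between processors, we may really be benchmarking the runtime's software arithmetic, not the silicon.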