[Mono-dev] difference in performance between mono OSX and linux?
jonathan.shore at gmail.com
Sat Jan 21 19:28:19 UTC 2012
I am running a program that does millions of transactions on FIFO queues, etc. Running it on my OSX box (a dual-CPU / 8-core Xeon E5520) and on a linux box with a Core i7-950, I found a significant performance difference which is difficult to explain.
On a single-threaded run, the OSX-based E5520 took 21 seconds to process 8 million "transactions" whereas the linux box took only 8 seconds. The i7-950 is only marginally faster (~10%) than the E5520 in typical benchmarks.
Both runs were done with -llvm enabled and with the same garbage collector. There are probably 16 - 32 million small objects created during the course of this evaluation, but generally with good locality, so they should be easy to GC.
The OSX-based mono performance for this app is 3x slower than the linux performance, whereas for typical programs, say in C, the difference between the two machines would be about 10%. Both boxes have significant memory (in fact the OSX box has more).
Finally, the OSX version is mono 2.10.8 32-bit and the linux version is 2.10.6 64-bit. I doubt there is a performance regression between 2.10.6 and 2.10.8, so I am thinking it is more likely an implementation difference between OSX and linux? I would also think that 32-bit vs 64-bit would not make all that much difference?
So I am wondering whether there are differences in implementation between mono on these platforms that could account for a significant performance difference?