[Vm-dev] Virtualization effects on the Squeak VM?

Andreas Raab andreas.raab at gmx.de
Sun Mar 1 19:04:25 UTC 2009


Hi -

In some of my recent load testing I noticed some interesting behavior 
when running Squeak (specifically the Unix VM) in a virtualized 
environment. I don't know if anyone else does that, but I'd be 
interested in sharing information about it.

First the setting: the measurements I've taken so far were running 
RHEL 5.2 on VMware Player and Server hosted under Windows and comparing 
this to a bare-metal server (ESX[i] is on the list for next week). BTW, 
for those of you raising your eyebrows about running under Windows: this 
setup is purely driven by customer demand, so we're just trying to 
provide people with realistic numbers about what they can expect. So far 
the most interesting artifacts were the following:

1) Time millisecondClockValue and its use of gettimeofday. It turns out 
that in our setting, Time millisecondClockValue takes about 10% of total 
execution time (we use it for measuring throughput, so it does get called 
*a lot*). We will work around this by adding a low-res clock that gets 
updated without calling gettimeofday on every read, but it was 
interesting to notice how much time goes into that.
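
For the curious, the idea is roughly the following (just a sketch in C, 
not the actual VM code; the names and the 5ms interval are placeholders 
I made up): a periodic timer refreshes a cached millisecond value, and 
the clock query just reads the cache.

  /* Sketch of a cached low-res millisecond clock: a SIGALRM handler
     refreshes the cached value every few milliseconds so readers avoid
     a gettimeofday call per query. All names are hypothetical. */
  #include <signal.h>
  #include <sys/time.h>

  static volatile unsigned long lowResMsecs;   /* cached clock value */

  static void updateLowResClock(int sig)
  {
      struct timeval tv;
      (void)sig;
      gettimeofday(&tv, 0);
      lowResMsecs = tv.tv_sec * 1000UL + tv.tv_usec / 1000;
  }

  /* Install a 5ms interval timer that keeps the cache fresh. */
  static void startLowResClock(void)
  {
      struct itimerval it;
      struct sigaction sa;

      sa.sa_handler = updateLowResClock;
      sigemptyset(&sa.sa_mask);
      sa.sa_flags = SA_RESTART;
      sigaction(SIGALRM, &sa, 0);

      it.it_interval.tv_sec = 0;
      it.it_interval.tv_usec = 5000;
      it.it_value = it.it_interval;
      setitimer(ITIMER_REAL, &it, 0);
  }

  /* Cheap replacement for the per-query gettimeofday path. */
  unsigned long ioLowResMSecs(void)
  {
      return lowResMsecs;
  }

With something like that in place the per-query cost drops to a memory 
read; the obvious trade-off is that the clock is only as fresh as the 
timer interval.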

2) Socket write calls. My recent comments about socket write behavior 
turned out to be a virtualization effect. Running the benchmark both 
virtualized and non-virtualized shows a 3-4x difference in how much time 
is spent in socket writes (30% virtualized vs. 8% bare metal). Keep in 
mind that this is running under load, so we're making roughly 50k socket 
write requests per second at a total bandwidth of 10-20 Mbps.
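
If anyone wants to check for the effect outside of Squeak, a trivial 
write() loop along these lines should do (just a sketch; the host, port, 
and payload size are placeholders picked to roughly match our load 
profile, and you need something listening on the other end, e.g. netcat):

  /* Rough microbenchmark: measures wall-clock time spent in write()
     on a connected TCP socket, 50 bytes per call, to compare a
     virtualized host against bare metal. Sketch only; error handling
     is minimal and the address/port are placeholders. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <sys/time.h>
  #include <unistd.h>

  int main(void)
  {
      char payload[50];
      struct sockaddr_in addr;
      struct timeval t0, t1;
      double inWrite = 0.0;
      int i, fd = socket(AF_INET, SOCK_STREAM, 0);

      memset(payload, 'x', sizeof(payload));
      memset(&addr, 0, sizeof(addr));
      addr.sin_family = AF_INET;
      addr.sin_port = htons(9999);                   /* placeholder port */
      addr.sin_addr.s_addr = inet_addr("127.0.0.1"); /* placeholder host */
      if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
          perror("connect");
          return 1;
      }

      for (i = 0; i < 500000; i++) {                 /* ~10s at 50k/s */
          gettimeofday(&t0, 0);
          if (write(fd, payload, sizeof(payload)) < 0) break;
          gettimeofday(&t1, 0);
          inWrite += (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
      }
      printf("time spent in write(): %.3f seconds over %d calls\n", inWrite, i);
      close(fd);
      return 0;
  }

Note that the gettimeofday calls in there aren't free under 
virtualization either (see point 1 above), so take the absolute numbers 
with a grain of salt; the relative difference between hosts should still 
show up.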

Has anyone seen similar effects and can confirm them? Are there other 
gotchas that people have observed when running virtualized servers?

I suspect that the socket write issue is really the result of VMware 
interfacing with Windows; if anyone can share some insights on that, I'd 
appreciate it.

Cheers,
   - Andreas


