Steve - UWM has been running with virtualized mailbox servers since last April. After our upgrade to version 6, our account density (approximately 13,000 accounts per mailbox server) and usage pattern created a bottleneck in the MySQL database.
We had been using servers with 16 cores and 32GB of RAM, configured entirely as boot-from-SAN (we have a [Dell] Compellent SAN). We transitioned each of those physical machines to three virtual machines, each configured to respect a NUMA boundary. After increasing the memory in each server to 48GB, we had NUMA nodes that were 4 cores and 16GB.
We put a 4-core, 12GB VM in each node, initially starting with three but with the capacity to add a fourth when growth warrants it. Nodes with those resources were sized for approximately 6,000 users at our usage level, and that number seems fairly accurate: the initial signs of the bottleneck begin to show up as accounts approach it.
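The capacity arithmetic behind that layout can be sketched as a quick back-of-the-envelope check. The per-VM user count is our observed figure for our workload, not a general Zimbra guideline, so treat the numbers as placeholders:

```shell
#!/bin/sh
# Rough capacity check: one 4-core/12GB VM per NUMA node, ~6,000 users each.
# These figures reflect our usage level; substitute your own measurements.
USERS_PER_VM=6000
VMS_NOW=3      # VMs deployed per host today
VMS_MAX=4      # one VM per NUMA node once growth warrants it
echo "current: $(( USERS_PER_VM * VMS_NOW )) users/host"
echo "at max:  $(( USERS_PER_VM * VMS_MAX )) users/host"
```

That puts a fully populated host well above the ~13,000 accounts the old physical layout handled, which is where the "roughly double the projected capacity" claim below comes from.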
We connected the storage via VMware's NPIV mode -- I'm a little disappointed in how it's currently implemented by VMware, but it does allow us to manage storage at a "per mailbox server" level even though the machines are VM containers. The goal was to continue to manage storage at the SAN level and not add the abstraction of VMDKs into the mix. In theory, we would be able to boot the mailbox servers from any server (physical or virtual) that we can map the volumes to, which at design time was one of the critical DR requirements. (Not sure it is so much today.)
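As a small illustration (not our exact procedure), the virtual WWPNs that NPIV presents can be inspected from a Linux system through the standard sysfs Fibre Channel class; this is one way to confirm which fabric identities a mailbox server is using for zoning and volume mapping:

```shell
#!/bin/sh
# List Fibre Channel host ports (physical HBAs and NPIV virtual ports)
# visible to a Linux system. Each port_name is the WWPN the SAN sees,
# i.e. the identity used for zoning and volume mapping.
# Prints nothing (and exits cleanly) if no FC hosts are present.
for h in /sys/class/fc_host/host*; do
  [ -e "$h" ] || continue                   # glob didn't match: no FC hosts
  printf '%s: WWPN %s\n' "${h##*/}" "$(cat "$h/port_name")"
done
```

Because the WWPNs travel with the VM rather than the hypervisor host, the SAN-side volume mappings don't have to change when a mailbox server moves between containers.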
At this point, we have not virtualized other portions of the infrastructure -- mostly because there's no performance need. It would be next on our list for Zimbra, but it's not being actively pursued because our campus is about to start an RFP process for email and calendar functionality.
At this point, I'm on the hardware/infrastructure side of the house, so I don't know all the application details any more ... but if you'd like to have a more detailed conversation about our experience, I can probably arrange such a conversation.
In general, we did it because we *had* to reduce density, and we were able to double our [projected] account capacity utilizing the same hardware -- so I would declare it a success.
Steve.

On 16-Nov-11 6:23 PM, Steve Hillman wrote:
Hi folks,

It's been a while since we've visited server hardware discussions on this list, and in that time I think virtualization has advanced to the point where it could be practical to run Zimbra in a VM, if only to gain benefits like fast recovery time and vMotion operations for physical hardware maintenance. We're weighing that decision now for our site. We just upgraded to new hardware with 12 cores and 192GB RAM for each server, with 4 mailbox servers in total (for about 60,000 accounts), but we're thinking that might be more than we need and we'd have enough headroom to insert VMware between the hardware and the OS.

I'm curious whether any larger sites have taken the plunge yet. If so, did you stick with similarly sized mailbox servers or did you opt for more servers with fewer users on each? And what did you do with your storage -- did you convert to VMDKs and let VMware manage the back-end storage, or did you import the LUNs directly into the guest OS (as Fibre Channel or iSCSI LUNs)?
--
Steven Premeau, Operations Support Supervisor & Data Center Manager
University Information Technology Services
University of Wisconsin-Milwaukee
Office: EMS EB68 | Phone: 414 229-3806 | Cell: 414 416-5421
E-mail: spremeau@uwm.edu