
Re: [MirageOS-devel] Memory requirements for a typical Mirage OS VM

On 24 Feb 2014, at 10:48, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:

> On 02/23/2014 03:50 PM, Anil Madhavapeddy wrote:
>> On 23 Feb 2014, at 12:57, Lars Kurth <lars.kurth@xxxxxxx> wrote:
>>> On 22/02/2014 22:29, Richard Mortier wrote:
>>>> On 22 Feb 2014, at 22:13, Julian Chesterfield 
>>>> <julian.chesterfield@xxxxxxxxx> wrote:
>>>>> So hosting the website image will require more ram than a minimal image.
>>>> yes; hence my question about what "typical" means :)
>>> The background for the question was whether the event channel improvements 
>>> in Xen 4.4 (and thus the capability to run a lot of smaller VMs on one 
>>> host) will benefit Mirage OS and others. That argument hinges on Mirage OS 
>>> (and similar) having significantly smaller memory footprints than your 
>>> traditional VM.
>>> I guess "typical" means "memory requirements for the type of workloads 
>>> Mirage OS is aiming to target".
>> They definitely will have a big positive impact.  Our overall goal is to get 
>> an equivalent number of MirageOS VMs running as you can get distinct 
>> processes running on a single Unix host.  If most are idle (e.g. just brief 
>> amount of traffic) and we are using modern 64-core machines, then we've 
>> estimated that we'd be able to get to 10000 VMs without too many problems, with 
>> these problems rearing their head:
> Lars, I think you're missing part of the question: Matt Wilson's question (re 
> our press release) was whether event channel scalability will have a benefit 
> to MirageOS, OSV, and others *in public clouds*.  At least a few years ago, 
> the assumption was that most public clouds would be using massive amounts of 
> rather inexpensive machines; maybe 8 cores at the most.
> So yes, for 64+core machines, event channels will obviously be a scalability 
> limit.  But is it really even useful to try to run >1000 actual servers on an 
> 8-core box?  Even if you have enough memory for them all, do you have enough 
> CPU?
> Of course the default size of physical servers in the cloud may have changed; 
> maybe public clouds now have 64-core boxes.  But given the person who asked 
> the question, I'm inclined to think it hasn't changed much.

I don't think it's helpful to speculate on whether public clouds have 8- or 
64-core machines without further data.  Instead, it's worth considering all the 
steps required to support unikernels on the public cloud *with the same levels 
of isolation that Xen provides* (i.e. not containers).

When you consider it like this, Xen has been getting progressively better with 
every release since 1.0.  Off the top of my head:

- device model stub domains (spread cpu load)
- numa affinity (reduce memory pressure of many domain communication)
- cpupool (to gang schedule same-customer unikernels?)
- memory page sharing + swap (still HVM only as I understand it, but still 
usable by a unikernel guest)

...and event channel scalability joins this list by taking pressure off the 
scheduler.  Scaling is about finding the right balance of resources, after all, 
not a single big-bang feature.
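To make the event channel point concrete, here is a rough back-of-envelope sketch. The ABI limits are as documented for Xen (the pre-4.4 2-level scheme allows 4096 event channels for a 64-bit domain; the 4.4 FIFO ABI raises that to 131072); the per-guest channel count is an illustrative assumption, since a real guest's count depends on its vifs and vbds.

```python
# Back-of-envelope: how many guests can dom0 service before it runs
# out of event channels?  ABI limits are from the Xen documentation;
# the per-guest figure is an assumption for illustration.

TWO_LEVEL_LIMIT_64BIT = 4096   # pre-4.4 2-level ABI, 64-bit dom0
FIFO_LIMIT = 131072            # Xen 4.4 FIFO event channel ABI

# A minimal guest typically terminates several channels in dom0:
# console, xenstore, plus one per vif/vbd.  Assume 4 as a round figure.
CHANNELS_PER_GUEST = 4

def max_guests(dom0_channel_limit, per_guest=CHANNELS_PER_GUEST):
    """Rough upper bound on guests before dom0 exhausts event channels."""
    return dom0_channel_limit // per_guest

print(max_guests(TWO_LEVEL_LIMIT_64BIT))  # 1024: well short of 10000 VMs
print(max_guests(FIFO_LIMIT))             # 32768: headroom restored
```

Under these assumptions the old 2-level ABI caps dom0 at roughly a thousand guests, an order of magnitude below the 10000-VM target discussed above, while the FIFO ABI leaves comfortable headroom.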

MirageOS-devel mailing list


