Re: [Xen-devel] [PATCH v1 1/4] xen: add real time scheduler rt
Hi Dario,

2014-09-05 5:36 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
> On gio, 2014-09-04 at 11:30 -0400, Meng Xu wrote:
>> 2014-09-04 10:27 GMT-04:00 Dario Faggioli <dario.faggioli@xxxxxxxxxx>:
>> > > For instance, I can put, in an SMP guest, two real-time
>> > > applications with different timing requirements, and pin each one
>> > > to a different (v)cpu (I mean pin *inside* the guest). At this
>> > > point, I'd like for each vcpu to have a set of RT scheduling
>> > > parameters, at the Xen level, that matches the timing
>> > > requirements of what's running inside.
>> > >
>> > > This may not look so typical in a server/cloud environment, but
>> > > can happen (at least in my experience) in a mobile/embedded env.
>> >
>> > But to play devil's advocate for a minute here:
>>
>> Hehe, please, be my guest! :-D :-D
>>
>> > couldn't you just put them in two different single-vcpu VMs then?
>>
>> Well, let me give a simpler example:
>> Suppose we have three tasks in one VM; each task has period 4ms and
>> budget 6ms (its utilization is 2/3).
>>
> You mean budget=4ms and period=6ms, don't you? :-)

Right. My mistake. Thank you for the correction! :-)

>> If all three tasks start execution at the same time, we can use two
>> full-capacity vcpus (200% capacity cpu resource) to schedule them.
>> However, if you want to use two VMs instead, each of which has one
>> full-capacity vcpu (100% capacity cpu), we cannot schedule these
>> three tasks, because a task cannot (well, at least not easily)
>> migrate from one VM to another.
>>
> But... In this case, in the former configuration (1 VM with 2 vcpus),
> each vcpu would (or at least can) have the same bandwidth of 100%,
> i.e., the same parameters... or am I missing something?
>
> What we're trying to assess here is the usefulness of the possibility
> of setting _different_ parameters (and hence different pcpu bandwidth)
> for each vcpu.

I see. I used the simple example to show why it is not always a good
idea to spread programs across several single-vcpu VMs. (This is also
the reason why global scheduling is better than partitioned scheduling
in many cases.) The simple example I made is not a good one to show the
usefulness of setting _different_ parameters for each vcpu. The example
you raised is the good one. :-)
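To make the numbers above concrete (with Dario's correction applied:
budget = 4ms, period = 6ms, so each task has utilization 4/6 = 2/3):
the three tasks together need 2.0 cpus' worth of capacity, which two
full-capacity vcpus can supply to a scheduler that may migrate tasks;
but in any static split across two single-vcpu VMs, some vcpu must host
two tasks, i.e. utilization 4/3 > 1. The standalone program below
(illustrative only, not part of the patch) enumerates all partitions of
the three tasks onto two vcpus to confirm this:

/*
 * Standalone sketch (not Xen code): check the example numerically.
 * Three tasks, each with budget 4ms and period 6ms => utilization 2/3.
 * Total utilization is 2.0, i.e. two full-capacity vcpus' worth, but
 * no static partition onto two 100%-capacity single-vcpu VMs keeps
 * every vcpu at utilization <= 1.
 */
#include <stdio.h>

int main(void)
{
    const double u = 4.0 / 6.0;   /* per-task utilization: budget/period */
    const int ntasks = 3;
    int mask, i, feasible = 0;

    printf("total utilization: %.2f (two full vcpus' worth)\n",
           ntasks * u);

    /* Try every assignment of the 3 tasks to vcpu0/vcpu1 (2^3 cases). */
    for (mask = 0; mask < (1 << ntasks); mask++) {
        double u0 = 0.0, u1 = 0.0;

        for (i = 0; i < ntasks; i++) {
            if (mask & (1 << i))
                u1 += u;
            else
                u0 += u;
        }
        if (u0 <= 1.0 && u1 <= 1.0)
            feasible = 1;   /* never reached: one vcpu always gets >= 4/3 */
    }

    printf("feasible static split onto two 100%% vcpus? %s\n",
           feasible ? "yes" : "no");
    return 0;
}

This is exactly the migration point being made: a global scheduler that
can migrate tasks between the two vcpus may, in principle (e.g. with a
pfair-style optimal scheduler), use the full 200% capacity, while a
partitioned setup across two VMs cannot.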
> Also, it looks like you're assuming to have a real-time scheduler
> inside the VM, which may or may not be the case.
>
>> This is just a simple example; we could of course have an example
>> like this where the vcpus are not full-capacity vcpus. :-)
>>
> Yeah, well, perhaps it's a bit too simple. :-D
>
> Don't get me wrong, I continue thinking per-vcpu params is something
> we really want, it's just the example I'm not sure I'm getting/liking.

Sorry for the confusion. My example aims at a different goal, as I
explained above. :-P

> I still think the example of multiple, concurrent and strictly related
> activities having different timing requirements to be a really
> sensible one. In fact, in that case, especially if one does not have a
> real-time scheduler inside the guest, mapping those requirements on
> the Xen scheduler is the easier (only?) way to port the app from
> baremetal to virtual machine!

Right! I think this may be the easiest way, if they don't have a
real-time scheduler inside guest domains.

> PS. BTW, Meng, can you use plain text emails when sending to the list?

Ah. It seems that I have been using another email format for a long
time, and it must have "tortured" you for a long time. I'm really sorry
for that, since plain text is a rule for the mailing list. :-( Thank
you very much for letting me know. If this one is not a plain text
email, please let me know. (I checked that it's plain text by sending
the email to myself, but just in case. :-) )

Thank you again for your advice!

Meng

-----------
Meng Xu
PhD Student in Computer and Information Science
University of Pennsylvania

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
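For readers wondering what Dario's suggestion of mapping per-task
timing requirements onto per-vcpu Xen parameters ended up looking like:
the sketch below uses the xl interface for this scheduler as it later
landed (RTDS, with per-VCPU parameter support added in later Xen
releases; the tool syntax in this v1 patch may differ). "vm1" is a
hypothetical domain, and period/budget are given in microseconds.

# Give each vcpu of vm1 its own reservation, matching the app pinned
# to it inside the guest.
xl sched-rtds -d vm1 -v 0 -p 10000 -b 5000   # vcpu 0: 5ms budget / 10ms period
xl sched-rtds -d vm1 -v 1 -p 4000 -b 2000    # vcpu 1: 2ms budget / 4ms period
xl sched-rtds -d vm1 -v all                  # list the per-vcpu parameters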