Re: [Xen-devel] How can Xen trigger a context switch in an HVM guest domain?
Could you add a timer alongside the CR3 watching, to prevent one task from exhausting too much CPU time without ever changing CR3?

- James (Song Wei)

XiaYubin wrote:
> Hi, George,
>
> Thank you for your reply. Actually, I'm looking for a generic
> mechanism of cooperative scheduling. Independence from the guest OS
> would make such a mechanism more convincing and practical, just as
> the balloon driver does.
>
> Maybe you are wondering why I asked such a weird question, so let me
> describe it in more detail. My current work is based on "Task-aware
> VM scheduling", which was published at VEE'09. By monitoring CR3
> changes at the VMM level, Xen can gather information about tasks'
> CPU consumption and identify CPU hogs and I/O tasks. The task-aware
> mechanism therefore offers a more fine-grained scheduler than the
> original VCPU-level scheduler, since a VCPU may run CPU hogs and I/O
> tasks in a mixed fashion.
>
> Imagine there are n VMs. One of them, named mix-VM, runs two tasks:
> cpuhog and iotask (network). The other VMs, named CPU-VMs, run just
> cpuhog. All VMs use PV drivers (the GPLPV driver for Windows).
>
> Here is what is supposed to happen when iotask receives a network
> packet: the NIC raises an IRQ, which passes to Xen, then domain-0
> sends an inter-domain event to mix-VM, which is likely to be in the
> run queue. Xen then schedules it to run immediately and sets its
> state to preempting-state. Right after that, the mix-VM *should*
> schedule iotask to process the incoming packet, and then schedule
> cpuhog after processing. When CR3 changes to cpuhog's, Xen knows
> that the mix-VM has finished its I/O processing (here we assume that
> the priority of cpuhog is usually lower than iotask's in most OSes),
> and schedules the mix-VM out, ending its preempting-state. The
> mix-VM can therefore preempt other VMs to process I/O ASAP, while
> keeping the preempting time as short as possible to preserve
> fairness. The point is: cpuhog should not run in preempting-state.
>
> However, a problem arises when the mix-VM sends packets. When iotask
> sends a large amount of data (using TCP), it blocks and waits to be
> woken up after the guest kernel has sent all the data, which may be
> split into thousands of TCP packets. The mix-VM receives an ACK
> packet every time it sends a packet, which makes it enter
> preempting-state. Note that at this moment the CR3 of mix-VM is
> cpuhog's (as the only running process). After the guest kernel
> processes the ACK packet and sends the next packet, it switches to
> user mode, which means cpuhog gets to run in preempting-state. The
> point is: as there is no CR3 change, Xen gets no chance to run.
>
> One way is to add a hook at the user/kernel mode switch, so Xen can
> catch the moment when cpuhog gets to run. However, this costs too
> much. Another way is to force a VM to schedule when it enters
> preempting-state. It will then trap to Xen when CR3 is changed, and
> Xen can end its preempting-state when it schedules cpuhog to run.
> That's why I want to trigger a guest context switch from Xen. I
> don't really care *which* process it switches to; I just want to
> give Xen a chance to run. The point is: is there a better/simpler
> way to solve this problem?
>
> Hope I described the problem clearly. And would you please give more
> details about the idea of a "reschedule event channel"? Thanks!
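To make the timer suggestion at the top of this reply concrete, here is a minimal standalone C model of the idea, not Xen code: when a VCPU is boosted into preempting-state, arm a deadline; if no CR3 change is observed before it expires, drop the boost so a CPU hog cannot keep running at boosted priority. All names (`vcpu_state`, `PREEMPT_BUDGET_NS`, the callbacks) and the 500us budget are hypothetical illustrations, not anything from the Xen source.

```c
/*
 * Standalone sketch of the CR3 watchdog idea (not Xen code).
 * A boosted VCPU loses its boost either on the next observed CR3 write
 * or when the watchdog deadline passes without one.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PREEMPT_BUDGET_NS (500 * 1000)  /* 500us budget, arbitrary */

struct vcpu_state {
    bool     preempting;    /* boosted to process I/O */
    uint64_t deadline_ns;   /* when the boost expires */
    uint64_t last_cr3;      /* last CR3 value seen by the VMM */
};

/* Called when an I/O event boosts the VCPU (e.g. an incoming ACK). */
static void enter_preempting(struct vcpu_state *v, uint64_t now_ns)
{
    v->preempting  = true;
    v->deadline_ns = now_ns + PREEMPT_BUDGET_NS;
}

/* Called on every intercepted CR3 write: I/O task done, cpuhog is next. */
static void on_cr3_write(struct vcpu_state *v, uint64_t new_cr3)
{
    if (v->preempting && new_cr3 != v->last_cr3)
        v->preempting = false;      /* deboost, as in the VEE'09 scheme */
    v->last_cr3 = new_cr3;
}

/* Called from a periodic VMM timer: the watchdog suggested above. */
static void on_watchdog_tick(struct vcpu_state *v, uint64_t now_ns)
{
    if (v->preempting && now_ns >= v->deadline_ns)
        v->preempting = false;      /* ran too long without a CR3 change */
}

int main(void)
{
    struct vcpu_state v = { .last_cr3 = 0x1000 };

    enter_preempting(&v, 0);                  /* ACK arrives, VCPU boosted   */
    on_watchdog_tick(&v, 100 * 1000);         /* 100us later: within budget  */
    printf("preempting after 100us: %d\n", v.preempting);        /* 1 */

    on_watchdog_tick(&v, 600 * 1000);         /* 600us later: budget spent   */
    printf("preempting after 600us: %d\n", v.preempting);        /* 0 */

    enter_preempting(&v, 700 * 1000);
    on_cr3_write(&v, 0x2000);                 /* guest switches to cpuhog    */
    printf("preempting after CR3 change: %d\n", v.preempting);   /* 0 */
    return 0;
}
```

The timer expiry also sidesteps the "no CR3 change, Xen gets no chance to run" problem in the quoted message, since the watchdog itself gives Xen a point at which to deboost, at the cost of letting cpuhog run boosted for at most one budget.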
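The quoted message also leans on the VEE'09 task-aware accounting: charging the CPU time between two CR3 writes to the outgoing address space and classifying tasks from it. Below is a similar standalone C model, again not Xen code; `task_acct`, `HOG_THRESHOLD_NS` and the lack of window aging are hypothetical simplifications (a real scheme would reset or decay the counters per accounting window).

```c
/*
 * Standalone sketch of CR3-based task accounting (not Xen code).
 * Each intercepted CR3 write charges the elapsed time to the outgoing
 * address space; a task is treated as a CPU hog once its charge exceeds
 * an arbitrary threshold.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_TASKS        16
#define HOG_THRESHOLD_NS (8 * 1000 * 1000)  /* 8ms, arbitrary */

struct task_acct {
    uint64_t cr3;      /* address space identifying the guest task */
    uint64_t cpu_ns;   /* CPU time charged so far                  */
};

static struct task_acct tasks[MAX_TASKS];
static int ntasks;
static uint64_t cur_cr3, last_switch_ns;

static struct task_acct *lookup(uint64_t cr3)
{
    for (int i = 0; i < ntasks; i++)
        if (tasks[i].cr3 == cr3)
            return &tasks[i];
    if (ntasks == MAX_TASKS)
        return NULL;
    tasks[ntasks].cr3 = cr3;
    tasks[ntasks].cpu_ns = 0;
    return &tasks[ntasks++];
}

/* Called on every intercepted CR3 write (a VM exit for an HVM guest). */
static void on_cr3_write(uint64_t new_cr3, uint64_t now_ns)
{
    struct task_acct *t = lookup(cur_cr3);
    if (t)
        t->cpu_ns += now_ns - last_switch_ns;  /* charge the outgoing task */
    cur_cr3 = new_cr3;
    last_switch_ns = now_ns;
}

static const char *classify(uint64_t cr3)
{
    struct task_acct *t = lookup(cr3);
    return (t && t->cpu_ns >= HOG_THRESHOLD_NS) ? "cpuhog" : "iotask";
}

int main(void)
{
    on_cr3_write(0x1000, 0);              /* iotask runs briefly  */
    on_cr3_write(0x2000, 200 * 1000);     /* cpuhog runs for 10ms */
    on_cr3_write(0x1000, 10200 * 1000);
    printf("0x1000 -> %s\n", classify(0x1000));   /* iotask */
    printf("0x2000 -> %s\n", classify(0x2000));   /* cpuhog */
    return 0;
}
```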