RE: [Xen-users] RE: Co-scheduling HVM domains...
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Roger Cruz
> Sent: 27 March 2007 21:46
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Subject: RE: [Xen-users] RE: Co-scheduling HVM domains...
>
> Mark,
>
> 0) You are right, I was not subscribed to the list when I sent the
> first email.
>
> 1) I've heard of something called stub domains that may do something
> similar to case 1. I'm trying to get more info on it. Don't know if
> it even works for Windows HVMs.

Stub domains have nothing to do with the problem you're trying to solve.

> 2) We'll have a lot more domain pairs than CPUs. The pairs run
> applications that are independent of each other, so batching requests
> is not an option at this point. But we do know that the two domains
> in each pair will communicate (a client-server app) and will use a
> shared-memory construct to pass messages back and forth (no IP
> connection, just a proprietary comm. protocol using shared memory).
> There are latency issues involved in getting to those packets, so if
> we could guarantee that when the client makes a request, the server
> will get it (almost) immediately, then we're golden. So I'm
> interested in tweaking the scheduling algorithm to guarantee that the
> server can get on the CPU right after the client runs.

This will not be easy to solve on Xen. The schedulers currently
available for Xen are not aware of any relationships between guests, so
there is no "gang scheduling" in Xen today. It's probably possible to
rewrite the scheduler to understand groups of guests (and if you do
this, you should make it able to handle groups of any size, not just
pairs of guests - or you'll have NO chance of getting the scheduler
accepted by the Xen folks, I would think, and then you're forced to
maintain your own scheduler forever). I think that would be a very
interesting challenge.
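To make the idea concrete, here is a toy sketch in plain C - this is
NOT Xen code, and the domain names, group ids and the 4-domain/2-CPU
setup are all invented for illustration - of what "the scheduler
understands groups" could mean: when one member of a group is picked,
its peers are preferred for the remaining CPUs in the same slice.

    /*
     * Toy model only, not hypervisor code.  A real implementation
     * would live inside Xen's scheduler (e.g. xen/common/sched_credit.c)
     * and would have to deal with vCPUs, credit accounting and run
     * queue locking; none of that is shown here.
     */
    #include <stdio.h>

    #define NDOMS 4
    #define NCPUS 2

    struct dom {
        const char *name;   /* made-up domain names */
        int group;          /* same group id = run together */
        int runnable;
    };

    static struct dom doms[NDOMS] = {
        { "A",  0, 1 },     /* client */
        { "A'", 0, 1 },     /* its server: same group as A */
        { "B",  1, 1 },
        { "B'", 1, 1 },
    };

    static int already(const int *a, int n, int i)
    {
        for (int k = 0; k < n; k++)
            if (a[k] == i)
                return 1;
        return 0;
    }

    /* One time slice: seed with one domain, prefer its group peers
     * for the remaining CPUs, then fall back to any other runnable
     * domain. */
    static void schedule_slice(int first)
    {
        int assigned[NCPUS], n = 0;

        assigned[n++] = first;
        for (int i = 0; i < NDOMS && n < NCPUS; i++)
            if (doms[i].runnable && !already(assigned, n, i) &&
                doms[i].group == doms[first].group)
                assigned[n++] = i;      /* co-schedule the peer */
        for (int i = 0; i < NDOMS && n < NCPUS; i++)
            if (doms[i].runnable && !already(assigned, n, i))
                assigned[n++] = i;      /* ordinary fallback */

        for (int c = 0; c < n; c++)
            printf("cpu%d runs %-2s  ", c, doms[assigned[c]].name);
        printf("\n");
    }

    int main(void)
    {
        for (int slice = 0; slice < NDOMS; slice++)
            schedule_slice(slice);
        return 0;
    }

Running it shows A and A' (and B and B') always landing on the two
CPUs in the same slice. Until something like that exists in the
hypervisor, pinning each pair to its own CPU, as Mark suggested below
(with something like "xm vcpu-pin <domain> <vcpu> <cpu>"), is the
pragmatic option.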
-- Mats

> Thank you
> Roger R. Cruz
>
> > I'm resending this to the list because I don't believe it made it
> > through the first time.
>
> If you're not subscribed there's a delay before somebody comes along
> and manually allows your mail.
>
> > I have a multi-processor, multi-core environment where I will be
> > running an application in one HVM domain A (Windows 2003) and
> > another app in another HVM domain A' (Windows as well). There will
> > be multiple instances of this pair combination. For performance
> > reasons, I would like to find out if there is any way to control
> > the scheduling of the paired domains, such that
> >
> > 1) if domain A is scheduled on physical CPU 1, domain A' is also
> > scheduled at the same time on CPU 2, or
>
> There's not a simple way of arranging this.
>
> > 2) if domain A is scheduled on a physical CPU, domain A' is the
> > next domain to be scheduled on that CPU, even if domain B was the
> > next legal owner of the time slice.
>
> If you can dedicate a CPU per domain pair then you can just pin each
> domain pair to a different CPU. This should approximate the
> behaviour you want. Even if you pin several pairs to a CPU it should
> bound the latency somewhat... however...
>
> What's your application? If requests between your domains can be
> batched up somehow, then I'd expect performance to be OK without
> special configuration.
>
> Cheers,
> Mark
>
> --
> Dave: Just a question. What use is a unicycle with no seat? And no
> pedals!
> Mark: To answer a question with a question: What use is a skateboard?
> Dave: Skateboards have wheels.
> Mark: My wheel has a wheel!

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users