
Re: [Xen-devel] [PATCH 08/17] vmx: nest: L1 <-> L2 context switch



At 11:31 +0100 on 21 May (1274441514), Qing He wrote:
> On Fri, 2010-05-21 at 17:19 +0800, Tim Deegan wrote:
> > At 14:49 +0100 on 20 May (1274366991), Qing He wrote:
> > > I mean, the code doesn't seem to be organized well, partly because
> > > there are many different states to cover, and some tricks are used to
> > > work with the current code; vmx_set_host_env would be a good example
> > > of that kind of trick. Do you have any suggestions for a better code
> > > organization?
> > 
> > TBH I expect that any implementation of this is going to be messy.  It's
> > a big interface and there are too many special cases.  
> > 
> > The only thing that strikes me is that you seem to do a full translation
> > of the vvmcs on every vmentry.  Would it be possible (since we already
> > have to intercept every vmread/vmwrite) to keep the svmcs in sync all
> > the time?
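
(For illustration only: "a full translation of the vvmcs on every vmentry"
amounts to roughly the loop below.  This is a hedged sketch, not the patch's
code -- the field list is truncated, and vvmcs_read() is a hypothetical
accessor for the virtual VMCS kept in L1 guest memory; __vmwrite() and
ARRAY_SIZE() are the usual Xen primitives.

    /* Copy every relevant field from the virtual VMCS into the shadow
     * VMCS before entering L2.  Assumes the svmcs is already the
     * current VMCS on this CPU. */
    static const unsigned long vvmcs_to_svmcs_fields[] = {
        GUEST_CR0, GUEST_CR3, GUEST_CR4,
        GUEST_RSP, GUEST_RIP, GUEST_RFLAGS,
        /* ... remaining guest-state and control fields ... */
    };

    static void sync_vvmcs_to_svmcs(void *vvmcs)
    {
        unsigned int i;

        for ( i = 0; i < ARRAY_SIZE(vvmcs_to_svmcs_fields); i++ )
            __vmwrite(vvmcs_to_svmcs_fields[i],
                      vvmcs_read(vvmcs, vvmcs_to_svmcs_fields[i]));
    }

The question discussed below is whether this per-vmentry copy can be
avoided by keeping the svmcs up to date incrementally.)
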
> 
> I don't think it's a good idea to change the svmcs at vmread/vmwrite
> time, because
>   1. that means 2 additional vmclears and 2 additional vmptrlds for
>      every vmread/vmwrite

Yes, I guess it does. :(

>   2. it makes things like pv vmcs impossible
>   3. vmread/vmwrite is supposed to be a simple access; changing the
>      svmcs at these points doesn't look right
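
(On point 1 above: the two extra vmclear/vmptrld pairs come from having to
make the svmcs the current VMCS just to touch one field, and then restore
the vcpu's own VMCS afterwards.  A rough sketch, with svmcs_maddr() and
l1_vmcs_maddr() as hypothetical helpers returning the machine addresses of
the two VMCSes; __vmpclear(), __vmptrld() and __vmwrite() are the existing
Xen primitives:

    static void vmwrite_to_svmcs(struct vcpu *v,
                                 unsigned long field, unsigned long value)
    {
        __vmpclear(l1_vmcs_maddr(v));   /* 1st extra vmclear */
        __vmptrld(svmcs_maddr(v));      /* 1st extra vmptrld */

        __vmwrite(field, value);        /* the one field L1 asked to write */

        __vmpclear(svmcs_maddr(v));     /* 2nd extra vmclear */
        __vmptrld(l1_vmcs_maddr(v));    /* 2nd extra vmptrld */
    }

That overhead would be paid on every emulated vmread/vmwrite.)
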
> 
> I did consider a bitmap-based solution, to update only the fields
> that have been written. However, it needs to define a new encoding
> and is purely an optimization, so I'd like to just leave it as a TODO
> for the moment.
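
(A hedged sketch of that bitmap idea -- every name below is hypothetical,
not taken from the patch series.  The architectural VMCS field encodings
are sparse, which is why a new, dense numbering is needed for the bitmap:

    #define NVMCS_NR_FIELDS 256                 /* dense index space */

    /* Hypothetical helpers, defined elsewhere: map a sparse architectural
     * encoding to a dense index and back, and access the virtual VMCS
     * that lives in L1 guest memory. */
    unsigned int field_to_index(unsigned long encoding);
    unsigned long index_to_field(unsigned int index);
    unsigned long vvmcs_read(void *vvmcs, unsigned long encoding);
    void vvmcs_write(void *vvmcs, unsigned long encoding, unsigned long val);

    struct nvmcs {
        void *vvmcs;                            /* virtual VMCS */
        DECLARE_BITMAP(dirty, NVMCS_NR_FIELDS);
    };

    /* Emulated vmwrite: update the vvmcs and remember which field moved. */
    static void nvmcs_vmwrite(struct nvmcs *n,
                              unsigned long encoding, unsigned long value)
    {
        vvmcs_write(n->vvmcs, encoding, value);
        __set_bit(field_to_index(encoding), n->dirty);
    }

    /* On L2 vmentry: copy only the dirty fields into the svmcs. */
    static void nvmcs_flush_dirty(struct nvmcs *n)
    {
        unsigned int i;

        for ( i = find_first_bit(n->dirty, NVMCS_NR_FIELDS);
              i < NVMCS_NR_FIELDS;
              i = find_next_bit(n->dirty, NVMCS_NR_FIELDS, i + 1) )
            __vmwrite(index_to_field(i),
                      vvmcs_read(n->vvmcs, index_to_field(i)));

        bitmap_zero(n->dirty, NVMCS_NR_FIELDS);
    }

With that in place, the per-vmentry cost scales with the number of fields
L1 actually touched instead of the full VMCS.)
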

Fair enough. 

Cheers, 

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

