
Re: [Xen-devel] [PATCH for-4.5 v6 00/16] Xen VMware tools support



On Wed, Sep 24, 2014 at 01:19:38PM -0400, Don Slutz wrote:
> On 09/23/14 08:30, Ian Campbell wrote:
> >On Mon, 2014-09-22 at 13:19 -0400, Don Slutz wrote:
> 
> [snip]
> 
> >>>I was only responding to the part of your comment in parentheses. :-)
> >>>
> >>>I suppose in large part it would depend on what the hypercalls were
> >>>actually doing; I'd have to go back and look at them to say if they
> >>>need to be in Xen or whether they could be passed on to qemu.
> >>>
> >>Clearly it is possible to pass the VCPU registers to QEMU, but that is
> >>currently not done.
> >I think there's an existing hypercall to get/set the state for a vcpu,
> >perhaps it is too heavyweight to be used here though.
> 
> Yes, very heavyweight.
> 
> >An alternative would be a semantically higher-level I/O req which took a
> >guest pointer to a key, a guest pointer to the buffer, etc., without
> >needing the registers themselves.
> 
> I am looking at adding a new I/O req type for this.  It turns out that
> for vmware_port you need to pass six 32-bit values both ways, and
> I can overlap the .addr, .data, .count and .size fields for this.  The
> other option is to increase the size of struct ioreq, which I am assuming
> is not the way to go since it would reduce the max number of vcpus
> as long as "struct shared_iopage" is limited to 1 page.
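> 
> To make that concrete, this is roughly what I mean by overlapping the
> fields (just a sketch; I am assuming the six values are the guest's
> eax/ebx/ecx/edx/esi/edi, and the final layout may well change):
> 
>     /* ioreq_t is from xen/include/public/hvm/ioreq.h:
>      * addr (64) + data (64) + count (32) + size (32) = 192 bits,
>      * which is exactly six 32-bit registers, so nothing has to grow. */
>     static void vmware_regs_to_ioreq(uint32_t eax, uint32_t ebx,
>                                      uint32_t ecx, uint32_t edx,
>                                      uint32_t esi, uint32_t edi,
>                                      ioreq_t *p)
>     {
>         p->addr  = (uint64_t)eax | ((uint64_t)ebx << 32);
>         p->data  = (uint64_t)ecx | ((uint64_t)edx << 32);
>         p->count = esi;
>         p->size  = edi;
>     }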
> 
> "guest pointer to a key and a guest pointer to the buffer" is not how
> this works.  The data is all passed up to 4 bytes at a time, one IN per
> chunk.  A string (which is what a guestinfo access looks like) is passed
> as a length, and then 4 bytes of the string per IN. (I am not trying to
> say this is good.)
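> 
> For reference, the guest side of sending one string looks roughly like
> this (backdoor_in() and the MSG_* names below are made-up placeholders;
> only the port and magic are the well-known backdoor values):
> 
>     #include <stdint.h>
> 
>     #define VMW_PORT   0x5658        /* standard VMware backdoor I/O port  */
>     #define VMW_MAGIC  0x564d5868    /* 'VMXh' magic the guest puts in eax */
> 
>     /* Placeholder helper: each call stands for one IN to VMW_PORT with
>      * the arguments loaded into the usual backdoor registers. */
>     extern void backdoor_in(uint16_t port, uint32_t magic,
>                             uint32_t cmd, uint32_t arg);
>     enum { MSG_SENDSIZE, MSG_SENDPAYLOAD };   /* placeholder command numbers */
> 
>     static void send_string(const char *s, uint32_t len)
>     {
>         uint32_t i, j, word;
> 
>         backdoor_in(VMW_PORT, VMW_MAGIC, MSG_SENDSIZE, len);  /* length first */
>         for (i = 0; i < len; i += 4) {                        /* then 4 bytes per IN */
>             word = 0;
>             for (j = 0; j < 4 && i + j < len; j++)
>                 word |= (uint32_t)(unsigned char)s[i + j] << (8 * j);
>             backdoor_in(VMW_PORT, VMW_MAGIC, MSG_SENDPAYLOAD, word);
>         }
>     }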
> 
> 
> 
> >>   So a new version of QEMU would also be needed to go this way.
> >>None of the proposed features need any data from QEMU, so I do not
> >>think this makes sense.
> >The concern is that it is adding a load of complex-looking string and
> >pointer manipulation stuff to the hypervisor, the sort of thing which
> >often leads to security vulnerabilities.
> >
> >So that would be better done outside of Xen itself if possible; if a
> >qemu update is the price for that, then it doesn't seem so bad to me.
> 
> I have yet to come up with a good reason not to move the
> VMware port RPC code into QEMU.  I will be looking to do that for
> Xen 4.6 & QEMU 2.3.
> 
> 
> Related to that, the code to connect Xen to QEMU so that Xen can
> use any VMware support in QEMU is not that complex.  So adding
> the Xen part in place of patches 8, 9, 10, 11, 12, 14, 15 and 16
> looks doable.  This would allow X to use the VMware mouse
> code (which is in both qemu-xen and qemu-xen-traditional).  I have
> found this to be a great improvement when using a GUI in a guest
> where the network speeds are not that fast.  I had planned
> on adjusting the Xen to QEMU connector code for 4.6.
> 
> Also there is a good chance that the QEMU part could be upstreamed
> to QEMU 2.2 (and backported to Xen's QEMU tree) for 4.5.
> 
> Now since I did not include this code sooner, would I need a release
> exception to include the Xen to QEMU connector code?

Yes, but without having seen the patches beforehand it might be
a bit too late, as they would be brand-new patches.

> 
> 
> One thing related to this: should I also change qemu-xen-traditional
> to handle the new I/O req type, or only send it when using qemu-xen?

Just qemu-xen.
> 
> It is simple to allow a new QEMU to build against both pre-4.5 and
> post-4.5 Xen.  I do not have a good way to check that a QEMU binary has
> this support, but I can state in the docs that enabling vmware_port does
> require a QEMU with this support.
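> 
> E.g. the QEMU side can simply be guarded on what the installed Xen
> headers provide, something like (the names here are placeholders for
> whatever the final patches end up using):
> 
>     /* Compile the handler only when the installed Xen headers (4.5+)
>      * define the new req type; the same QEMU source still builds
>      * cleanly against a pre-4.5 Xen. */
>     #ifdef IOREQ_TYPE_VMWARE_PORT
>     static void handle_vmware_port_ioreq(ioreq_t *req)
>     {
>         /* unpack the six 32-bit values from addr/data/count/size,
>          * run the VMware port emulation, pack the results back */
>     }
>     #endif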

And that would mean that, for users to take advantage of it, they would
need to update their QEMU version, right?

> 
> 
>     -Don Slutz
> 
> >Ian.
> >
> 
