
[Xen-devel] Re: Prototype to use QEMU for PV guest framebuffer



Daniel P. Berrange wrote:
As many of us are all too painfully aware, we have two completely different
VNC server implementations for paravirt vs fullyvirt Xen guests: the former
based on libvncserver, the latter integrated into QEMU. There are many new
and interesting ideas being tried out in the VNC server space, particularly
wrt virtualization, and having to implement them all twice is not very
desirable. The libvncserver code is also terrible - many bad thread race
conditions that are near impossible to diagnose, let alone solve :-(

At the summit there were a couple of suggestions: one was to take the VNC
server code from QEMU and splice it into the xenfb daemon; another was to
take the QEMU VNC code and put it into a library that can be used by both
QEMU & xenfb. Neither of those is a particularly appealing prospect.

Then Anthony Liguori suggested a 3rd approach, which was to add a new QEMU
machine type to the x86 target, simply disable all the emulated hardware,
and just make use of QEMU's infrastructure for the VNC server, select()
event loop and display state.

This sounded very intriguing, so I did a little experimentation today doing
exactly that. First of all I've copied tools/xenfb/xenfb.c into the
qemu/hw/xenfb.c file. Next I added a qemu/hw/xen.c file to provide the
implementation of the QEMU machine type - called 'xenpv'.

All the interesting code (all 100 lines of it) is in hw/xen.c in the
xen_init_pv method, which is called when using the '-M xenpv' command
line arg. The first thing this does is get the domain ID - to avoid
touching QEMU's command line args, I just use an environment variable,
XEN_DOMID, to pass this in.
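
In sketch form, the hw/xen.c wiring looks something like this (the machine
init prototype is trimmed down here since it differs between QEMU versions,
and xen_get_domid() is just an illustrative helper name - the real code is
in the attached patch):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read the target domain ID from the environment rather than adding
     * a new command line arg to QEMU. */
    static int xen_get_domid(void)
    {
        const char *val = getenv("XEN_DOMID");
        if (!val) {
            fprintf(stderr, "xenpv: XEN_DOMID environment variable not set\n");
            exit(1);
        }
        return atoi(val);
    }

    /* Invoked for '-M xenpv'; the real signature is the usual
     * QEMUMachineInitFunc one from the QEMU headers. */
    static void xen_init_pv(/* ... QEMU machine init args ... */)
    {
        int domid = xen_get_domid();

        /* ... attach to domid's PVFB frontend, register console/input
         *     handlers, hook fds into the event loop (see below) ... */
    }

    /* Registered alongside the existing x86 machine types. */
    QEMUMachine xenpv_machine = {
        "xenpv",
        "Xen paravirtualised guest",
        xen_init_pv,
    };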

The main_loop() in qemu/vl.c assumes there is always one CPU created in
the VM, but for our purposes we're not doing any CPU emulation, merely
using the event loop/display state. So to work around this assumption I
create a single CPUState * instance and permanently disable it by setting
'cpu->hflags = HF_HALTED_MASK'. That seems to do the job.
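
i.e. the whole workaround amounts to something like this (cpu_init() and
HF_HALTED_MASK are the x86 target's names in current CVS - treat it as a
sketch; any other way of parking the CPU permanently would do just as well):

    /* Create the one CPU that main_loop() insists on, then park it
     * forever so no instructions are ever emulated. */
    CPUState *cpu;

    cpu = cpu_init();
    cpu->hflags = HF_HALTED_MASK;   /* permanently halted */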

Next up, we attach to the guest PVFB frontend - just using the existing
code for this - qemu/hw/xenfb.c (formerly xen-unstable/tools/xenfb/xenfb.c).

Now we register a QEMU graphical console, a mouse event receiver and a
keyboard event receiver, and add the xenstored and event channel file
handles to QEMU's event loop.

Finally, we initialize QEMU's display state to match the PVFB framebuffer
config (i.e. 800x600x32).
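
In sketch form these registration steps look something like the following
(all the xenfb_* callback and helper names are illustrative, and the exact
prototypes have shifted between QEMU snapshots, so check console.h / vl.h
against whatever tree you build):

    /* Graphical console: QEMU's refresh timer drives xenfb_screen_update() */
    graphic_console_init(ds, xenfb_screen_update, xenfb_screen_invalidate,
                         NULL /* no screen dump */, xenfb);

    /* Input: QEMU hands us key/mouse events, we forward them to the frontend */
    qemu_add_kbd_event_handler(xenfb_key_event, xenfb);
    qemu_add_mouse_event_handler(xenfb_mouse_event, xenfb, 0 /* relative */);

    /* Event loop: wake up when xenstore or the event channel is readable */
    qemu_set_fd_handler(xenfb_xenstore_fd(xenfb), xenfb_dispatch_store,
                        NULL, xenfb);
    qemu_set_fd_handler(xenfb_evtchn_fd(xenfb), xenfb_dispatch_channel,
                        NULL, xenfb);

    /* Match the PVFB framebuffer geometry (800x600x32 for now) */
    ds->dpy_resize(ds, 800, 600);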

Pushing mouse & keyboard events through from QEMU to the PVFB frontend is
trivial. The only bit I'm unhappy about is that QEMU can't access the
guest framebuffer directly. The DisplayState * struct has its own copy
of the framebuffer - allocated by the VNC or SDL impls in QEMU - and
so whenever the guest framebuffer changes, we have to memcpy() the data
from the guest into the QEMU framebuffer.
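
When the frontend reports a dirty rectangle, that amounts to roughly the
following (field names are illustrative, and this assumes for the moment
that the two formats happen to match at 32bpp):

    /* Copy the dirty rows from the guest's shared framebuffer into QEMU's
     * DisplayState copy and tell QEMU what changed. */
    static void xenfb_copy_rect(struct xenfb *xenfb, DisplayState *ds,
                                int x, int y, int w, int h)
    {
        int line;

        for (line = y; line < y + h; line++) {
            memcpy(ds->data + line * ds->linesize + x * 4,
                   xenfb->pixels + line * xenfb->row_size + x * 4,
                   w * 4);
        }
        ds->dpy_update(ds, x, y, w, h);   /* push the dirty rect to VNC/SDL */
    }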

The DisplayState driver (VNC or SDL) chooses what the depth and format of this ds->data buffer should be. The PVFB expects a certain format so you would have to do more than just a memcpy(). You'll have to be able to translate between the xenfb format and the ds->data format.
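
Something along these lines per scan line would cover the depths the VNC
and SDL drivers actually pick - just a sketch, assuming the xenfb side
stays 32bpp XRGB:

    /* Translate one row of 32bpp XRGB xenfb pixels into whatever format
     * the DisplayState driver chose.  Only the common depths are shown;
     * names are illustrative. */
    static void xenfb_copy_row(DisplayState *ds, uint8_t *dst,
                               const uint32_t *src, int width)
    {
        int x;

        switch (ds->depth) {
        case 32:
            memcpy(dst, src, width * 4);          /* formats already match */
            break;
        case 16:
            for (x = 0; x < width; x++) {         /* XRGB8888 -> RGB565 */
                uint32_t p = src[x];
                ((uint16_t *)dst)[x] = ((p >> 8) & 0xf800)
                                     | ((p >> 5) & 0x07e0)
                                     | ((p >> 3) & 0x001f);
            }
            break;
        default:
            break;                                /* 8/24bpp left out here */
        }
    }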

This is very cool stuff though! Since QEMU already supports VNC and SDL, we don't need to maintain any additional pvfb backends.

Regards,

Anthony Liguori

Still, this is no worse than what the HVM guests already do. It's
probably not too hard to change the QEMU impl of VNC / SDL to use the
guest framebuffer directly if we did a little re-factoring. I wanted to
keep it simple for now & not change any of the upstream QEMU code.

The attached patch is against the current upstream QEMU CVS code, not
Xen's ioemu, since I wanted to work against a pristine QEMU codebase &
avoid any potential weird interactions with HVM stuff added to ioemu.
The diff is

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

