Re: [Xen-devel] Re: [Qemu-devel] [PATCH 0/7] merge some xen bits into qemu
Samuel Thibault wrote:
> Samuel Thibault, on Wed 06 Aug 2008 15:25:26 +0100, wrote:
>> Pushing the cleaning changes to Xen first can be done and would entail
>> much easier to tackle breakage, and the merge back from qemu would then
>> be trivial, why not doing so?
>
> You didn't answer that part.  Really, my only concern is about having
> things tested.  Isn't it possible for instance to just merge the backend
> core (and console/xenfb updates) in Ian's tree first for a start?

http://kraxel.fedorapeople.org/patches/qemu-xen/

I didn't touch the build system; it is even more scary than the qemu one
alone.  I just set CONFIG_XEN unconditionally.

I also largely left vl.c as-is, so xend shouldn't need any changes.  The
-domid switch sets an additional (redundant) variable, to keep the amount
of changes as small as possible for now.  Also -name and -domain-name are
aliased; both set qemu_name and domain_name.

In upstream qemu, xenpv support is a runtime switch for the normal qemu;
the xen patches leave the qemu-dm target in place.

The framebuffer driver probably has some performance regressions.  Fixing
those depends on the display patches being pushed upstream.

> Then you can push your code to qemu, I guess that could be fine, as you
> said xen will not need to use e.g. the block and net backends.

The blk and net backends are not there (yet).  But they should be a no-op
for xen anyway, as long as you don't wire up anything in xend to put them
in use.  For the net backend it probably wouldn't be that useful.  The
block backend should be a good replacement for blktap though, and maybe
it can save you the effort of porting the blktap kernel driver to the
pv_ops kernel.

cheers,
  Gerd

--
http://kraxel.fedorapeople.org/xenner/
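To illustrate the -name / -domain-name aliasing mentioned above, here is a
minimal sketch of how both switches could feed the same pair of variables in
vl.c's option handling.  Only qemu_name and domain_name come from the mail;
the helper and the option constants in the comment are hypothetical, not the
actual patch code.

    /* Minimal sketch (not the actual patch): both -name and -domain-name
     * funnel into one helper so qemu_name and domain_name stay in sync. */
    #include <stdlib.h>
    #include <string.h>

    static const char *qemu_name;   /* generic qemu instance name */
    static char *domain_name;       /* xen guest domain name */

    static void set_guest_name(const char *optarg)
    {
        qemu_name = optarg;
        free(domain_name);
        domain_name = strdup(optarg);   /* own copy; xen code may free it */
    }

    /* In the option-parsing switch, both cases would then collapse to:
     *
     *     case QEMU_OPTION_name:
     *     case QEMU_OPTION_domain_name:
     *         set_guest_name(optarg);
     *         break;
     */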