
Re: [Xen-devel] gross qemu behavior

On Fri, 28 Mar 2014, Paolo Bonzini wrote:
> On 28/03/2014 18:52, Stefano Stabellini wrote:
> > > This is a thorny issue; fixing this behavior is not going to be trivial:
> > > 
> > > - The hypervisor/libxc does not currently expose a
> > >   xc_domain_remove_from_physmap function.
> > > 
> > > - QEMU works by allocating memory regions at the end of the guest
> > >   physmap and then moving them at the right place.
> > > 
> > > - QEMU can destroy a memory region and in that case we could free the
> > >   memory and remove it from the physmap, however that is NOT what QEMU
> > >   does with the vga ROM. In that case it calls
> > >   memory_region_del_subregion, so we can't be sure that the ROM won't be
> > >   mapped again, therefore we cannot free it. We need to move it
> > >   somewhere else, hence the problem.
> Right; QEMU cannot know either whether the ROM will be mapped again (examples
> include "cd /sys/bus/pci/devices/0000:00:03.0 && echo 1 > rom && cat rom" or a
> warm reset).
> > > But fortunately we don't actually need to add the VGA ROM to the guest
> > > physmap for it to work, QEMU can trap and emulate. In fact even today we
> > > are not mapping it at the right place anyway, see xen_set_memory:
> But how can you execute from the VGA ROM then?

I don't know; I guess we don't. In that case, why does it work today?

> Also, how do you migrate its contents?

That would not work either; we would have to re-initialize it in QEMU on
the receiving end.

> And how is VGA different from say an iPXE ROM?

iPXE is read into memory by hvmloader.

> It would be nice if QEMU could just special case pc.ram (which has
> block->offset == 0), and use the normal method to allocate other RAM regions.
> But I'm afraid that would require some changes in the Xen toolstack as well
> (for migration, for example) and I'm not sure how you could execute from PCI
> 
> Paolo

Xen-devel mailing list