
Re: [Xen-devel] QEMU commit 04bf2526ce breaks use of xen-mapcache



On Tue, 25 Jul 2017, Paolo Bonzini wrote:
> ----- Original Message -----
> > From: "Stefano Stabellini" <sstabellini@xxxxxxxxxx>
> > To: "Paolo Bonzini" <pbonzini@xxxxxxxxxx>
> > Cc: "Anthony PERARD" <anthony.perard@xxxxxxxxxx>, "Stefano Stabellini" 
> > <sstabellini@xxxxxxxxxx>,
> > xen-devel@xxxxxxxxxxxxx, qemu-devel@xxxxxxxxxx
> > Sent: Tuesday, July 25, 2017 8:08:21 PM
> > Subject: Re: QEMU commit 04bf2526ce breaks use of xen-mapcache
> > 
> > On Tue, 25 Jul 2017, Paolo Bonzini wrote:
> > > > Hi,
> > > > 
> > > > Commit 04bf2526ce (exec: use qemu_ram_ptr_length to access guest ram)
> > > > started using qemu_ram_ptr_length() instead of qemu_map_ram_ptr().
> > > > That results in calling xen_map_cache() with lock=true, but this
> > > > mapping is never invalidated.
> > > > So QEMU uses more and more RAM until it stops working one way or
> > > > another (it crashes if the host has little RAM, otherwise it stops
> > > > emulating without crashing).
> > > > 
> > > > I don't know if calling xen_invalidate_map_cache_entry() in
> > > > address_space_read_continue() and address_space_write_continue() is the
> > > > right answer.  Is there something better to do?
> > > 
> > > I think it's correct for dma to be true... maybe add a lock argument to
> > > qemu_ram_ptr_length, so that address_space_{read,write}_continue can
> > > pass 0 and everyone else passes 1?
> > 
> > I think that is a great suggestion. That way, the difference between
> > locked mappings and unlocked mappings would be explicit, rather than
> > relying on callers to use qemu_map_ram_ptr for unlocked mappings and
> > qemu_ram_ptr_length for locked mappings. And there aren't that many
> > callers of qemu_ram_ptr_length, so adding a parameter wouldn't be an
> > issue.
> 
> Thanks---however, after re-reading xen-mapcache.c, dma needs to be false
> for unlocked mappings.
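
For concreteness, the change being suggested would look roughly like
this (only a sketch, untested and abbreviated; only the
block->offset == 0 case is shown):

  static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
                                   hwaddr *size, bool lock)
  {
      ...
      if (xen_enabled() && ram_block->host == NULL) {
          /* forward the caller's choice to the map cache; as you say,
           * dma should be false for unlocked mappings, so the same
           * value can be passed for both the lock and dma arguments */
          return xen_map_cache(addr, *size, lock, lock);
      }
      ...
  }

with address_space_{read,write}_continue passing lock=false and every
other caller passing lock=true.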

If there is a DMA operation already in progress, it means that we'll
already have a locked mapping for it.

When address_space_write_continue is called, it in turn calls
qemu_map_ram_ptr or qemu_ram_ptr_length(unlocked). If the start and
size of the requested mapping match those of the previously created
locked mapping, then a pointer to the locked mapping is returned.

If they don't match, a new unlocked mapping will be created and a
pointer to it will be returned. (Arguably the algorithm could be
improved so that a new mapping is not created if the address and size
are contained within the locked mapping. This is a missing optimization
today.)
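
To illustrate just the matching rule (a toy model, not the real
xen-mapcache code; the Entry struct and toy_map_cache() are made-up
names):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>

  /* one cache entry: guest address, size, and whether it is locked */
  typedef struct Entry {
      uint64_t addr, size;
      bool locked;
      struct Entry *next;
  } Entry;

  static Entry *entries;

  /* Reuse an existing entry only on an exact addr/size match; anything
   * else gets a brand new entry with the requested lock state, even if
   * the request would fit inside an existing locked entry (the missing
   * optimization mentioned above). */
  static Entry *toy_map_cache(uint64_t addr, uint64_t size, bool lock)
  {
      Entry *e;
      for (e = entries; e; e = e->next) {
          if (e->addr == addr && e->size == size) {
              return e;
          }
      }
      e = calloc(1, sizeof(*e));
      e->addr = addr;
      e->size = size;
      e->locked = lock;
      e->next = entries;
      entries = e;
      return e;
  }

  int main(void)
  {
      toy_map_cache(0x1000, 0x1000, true);              /* locked (DMA) */
      Entry *a = toy_map_cache(0x1000, 0x1000, false);  /* same range  */
      Entry *b = toy_map_cache(0x1000, 0x800, false);   /* sub-range   */
      printf("exact match reuses the locked entry: %d\n", a->locked);
      printf("sub-range gets a new unlocked entry: %d\n", b->locked);
      return 0;
  }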

It doesn't matter if a new unlocked mapping is created, or if the locked
mapping is returned, because the pointer returned by
qemu_ram_ptr_length(unlocked) is only used to do the memcpy, and never
again. So I don't think this is a problem.
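
For reference, the RAM path in address_space_write_continue boils down
to (abbreviated from memory from exec.c, so worth double-checking):

  ptr = qemu_ram_ptr_length(mr->ram_block, addr1, &l);
  memcpy(ptr, buf, l);
  invalidate_and_set_dirty(mr, addr1, l);

The pointer is not stored anywhere, so nothing can touch the mapping
after the memcpy.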
