
Re: [Xen-devel] [PATCH] x86/ioreq server: Fix DomU reboot couldn't work when using p2m_ioreq_server p2m_type



>>> On 04.05.17 at 00:15, <xiong.y.zhang@xxxxxxxxx> wrote:
> 'commit 1679e0df3df6 ("x86/ioreq server: asynchronously reset
> outstanding p2m_ioreq_server entries")' calls
> p2m_change_entry_type_global(), which sets entry.recalc=1. After
> that, get_entry() on a p2m_ioreq_server entry returns the
> p2m_ram_rw type.
> But 'commit 6d774a951696 ("x86/ioreq server: synchronously reset
> outstanding p2m_ioreq_server entries when an ioreq server unmaps")'
> assumes get_entry() on such an entry still returns the
> p2m_ioreq_server type, and only then resets p2m_ioreq_server
> entries. Since that assumption doesn't hold, the synchronous reset
> doesn't work: ioreq.entry_count stays larger than zero after an
> ioreq server unmaps, and ultimately the DomU cannot reboot.
> 
> This patch makes get_entry() return the p2m_ioreq_server type
> instead of p2m_ram_rw as long as the ioreq_server entries haven't
> actually been rewritten. The actual type change happens in the
> recalc function.

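For readers less familiar with the p2m code, here is a minimal
stand-alone model of the behaviour the patch description refers to.
The type names mirror Xen's p2m_type_t values, but the struct and
function are made up purely for illustration - this is not the actual
Xen code:

/*
 * Minimal stand-alone model of the behaviour described above -- NOT
 * the actual Xen p2m code.  pte_model and resolve_type_model() are
 * invented names for illustration only.
 */
#include <stdbool.h>
#include <stdio.h>

typedef enum {
    p2m_ram_rw,
    p2m_ioreq_server,
} p2m_type_t;

struct pte_model {
    p2m_type_t type;  /* type currently recorded in the entry */
    bool recalc;      /* set by the global type change, cleared once
                         the entry has actually been rewritten */
};

/* Type a lookup reports while a recalculation is still pending. */
static p2m_type_t resolve_type_model(const struct pte_model *e,
                                     bool patched)
{
    /*
     * Current behaviour: a stale p2m_ioreq_server entry with
     * entry.recalc set is already reported as p2m_ram_rw.
     * Patched behaviour: keep reporting p2m_ioreq_server until the
     * entry is really rewritten by the recalc path.
     */
    if ( e->recalc && e->type == p2m_ioreq_server && !patched )
        return p2m_ram_rw;
    return e->type;
}

int main(void)
{
    struct pte_model e = { .type = p2m_ioreq_server, .recalc = true };

    printf("current: %d, patched: %d\n",
           resolve_type_model(&e, false), resolve_type_model(&e, true));
    return 0;
}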
I think this is the wrong solution to the problem: get_entry() is
supposed to return the new type when a type change has been done
but hasn't yet been pushed through the page table hierarchy. One
option I can see would be to add a new flag to p2m_query_t,
allowing the currently recorded type to be retrieved instead of
the mandated active one. Another might be to relax
p2m_finish_type_change()'s old type check, accepting that this
would lead to unnecessary calls to p2m_change_type_one(). It
may be possible to avoid some of the extra overhead by e.g.
also looking at the retrieved order - p2m_ioreq_server pages
can only be order-0 right now, so higher order pages could be
skipped.
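
A rough sketch of that second option - entirely untested, locking
omitted, the function name invented, and the get_entry hook
signature only approximated from memory rather than taken from any
particular tree:

static void relaxed_finish_type_change(struct domain *d,
                                       gfn_t first_gfn,
                                       unsigned long max_nr)
{
    struct p2m_domain *p2m = p2m_get_hostp2m(d);
    unsigned long gfn = gfn_x(first_gfn), last = gfn + max_nr;

    while ( gfn < last )
    {
        unsigned int order;
        p2m_type_t t;
        p2m_access_t a;

        /* Query the current (possibly already recalculated) type. */
        p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, &order, NULL);

        if ( order != PAGE_ORDER_4K )
        {
            /*
             * p2m_ioreq_server pages can only be order-0 right now,
             * so skip the whole higher-order range.
             */
            gfn = (gfn | ((1UL << order) - 1)) + 1;
            continue;
        }

        /*
         * Relaxed old-type check: attempt the change even though the
         * reported type may already read as p2m_ram_rw because of
         * the pending recalc, accepting some unnecessary
         * p2m_change_type_one() calls.
         */
        p2m_change_type_one(d, gfn, p2m_ioreq_server, p2m_ram_rw);

        ++gfn;
    }
}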

Jan

