
Re: [Xen-devel] [PATCH v4] x86: add p2m_mmio_write_dm





On 12/1/2014 8:13 PM, Tim Deegan wrote:
At 11:17 +0000 on 01 Dec (1417429027), Jan Beulich wrote:
On 01.12.14 at 11:30, <tim@xxxxxxx> wrote:
At 09:32 +0000 on 01 Dec (1417422746), Jan Beulich wrote:
On 01.12.14 at 09:49, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
To my understanding, pages with p2m_ram_ro are not supposed to be
modified by the guest. So in __hvm_copy(), when the p2m type of a page is
p2m_ram_ro, no copy will occur.
However, for our usage, we just want this page to be write protected, so
that our device model can be triggered to do some emulation. The content
written to this page is not supposed to be dropped. This way, if the
guest subsequently performs a read of this page, it will still see the
value it expects.
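
To illustrate the intended flow (the helper names below are made up for
illustration only, they are not actual device model code): the write
traps, Xen forwards it to the device model as an ioreq, and the device
model both performs the emulation and commits the value, so a later read
by the guest sees exactly what was written.

/* ioreq_t is the request structure from xen/include/public/hvm/ioreq.h.
 * emulate_device_write() and commit_to_backing_page() are made-up names
 * standing in for whatever the device model actually does. */
static void handle_write_dm_ioreq(const ioreq_t *req)
{
    /* 1. Perform whatever emulation the trapped write is meant to trigger. */
    emulate_device_write(req->addr, req->data, req->size);

    /* 2. Commit the value to the page backing the gfn, so that a
     *    subsequent guest read returns the written data (unlike
     *    p2m_ram_ro, where the write would simply be discarded). */
    commit_to_backing_page(req->addr, req->data, req->size);
}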

__hvm_copy() is only a helper function, and doesn't write to
mmio_dm space either; instead its (indirect) callers would invoke
hvmemul_do_mmio() upon seeing it return HVMCOPY_bad_gfn_to_mfn.
The question hence is about the apparent inconsistency
resulting from writes to ram_ro being dropped here but getting
passed to the DM by hvm_hap_nested_page_fault(). Tim - is
that really intentional?

No - and AFAICT it shouldn't be happening.  It _is_ how it was
implemented originally, because it involved fewer moving parts and
didn't need to be efficient (and after all, writes to entirely missing
addresses go to the device model too).

But the code was later updated to log and discard writes to read-only
memory (see 4d8aa29 from Trolle Selander).
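
(For reference, the resulting logic in __hvm_copy() looks roughly like
this today; paraphrased from memory rather than quoted verbatim:)

    if ( flags & HVMCOPY_to_guest )
    {
        if ( p2mt == p2m_ram_ro )
        {
            /* Write to read-only RAM: log it (avoiding repeated messages
             * for the same page) and drop the data. */
            static unsigned long lastpage;
            if ( xchg(&lastpage, gfn) != gfn )
                gdprintk(XENLOG_DEBUG, "guest attempted write to read-only"
                         " memory page. gfn=%#lx, mfn=%#lx\n",
                         gfn, page_to_mfn(page));
        }
        else
        {
            memcpy(p, buf, count);
            paging_mark_dirty(curr->domain, page_to_mfn(page));
        }
    }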

Early versions of p2m_ram_ro were documented in the internal headers as
sending the writes to the DM, but the public interface (HVMMEM_ram_ro)
has always said that writes are discarded.

Hmm, so which way do you recommend resolving the inconsistency
then - match what the public interface says or what the apparent
original intention for the internal type was? Presumably we need to
follow the model mandated by the public interface, and hence require
the new type to be introduced.

Sorry, I was unclear -- there isn't an inconsistency; both internal
and public headers currently say that writes are discarded and AFAICT
that is what the code does.

But yes, we ought to follow the established hypercall interface, and
so we need the new type.

During this bit of archaeology I realised that either this new type
should _not_ be made part of P2M_RO_TYPES, or, better, we need a new
class of P2M types (P2M_DISCARD_WRITE_TYPES, say) that should be used
for these paths in emulate_gva_to_mfn() and __hvm_copy(), containing
just p2m_ram_ro and p2m_grant_map_ro.

And I suppose in that latter case the new type could be made part
of P2M_RO_TYPES()?

Yes indeed, as P2M_RO_TYPES is defined as "must have the _PAGE_RW bit
clear in their PTEs".


Thanks Tim.
The following is my understanding of P2M_RO_TYPES and your comments.
Please correct me if I got anything wrong:
1> P2M_RO_TYPES currently bears two meanings: one is "the _PAGE_RW bit is clear in the PTE"; the other is that write operations are discarded;
2> We had better define another class to bear the second meaning (a sketch of what I mean follows).
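
Put concretely, my understanding of the suggestion is something like the
following, using the existing p2m_to_mask()/P2M_RO_TYPES pattern in
xen/include/asm-x86/p2m.h (just my sketch with the names suggested
above; the final patch may of course look different):

#define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)        \
                                 | p2m_to_mask(p2m_grant_map_ro))

#define p2m_is_discard_write(_t) (p2m_to_mask(_t) &             \
                                  P2M_DISCARD_WRITE_TYPES)

/* p2m_mmio_write_dm would still be added to P2M_RO_TYPES (its PTEs
 * have _PAGE_RW clear) but not to P2M_DISCARD_WRITE_TYPES, so its
 * writes reach the device model instead of being dropped. */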

Also some questions about the new p2m class, say P2M_DISCARD_WRITE_TYPES, and the new predicate, say p2m_is_discard_write:
1> You mentioned that emulate_gva_to_mfn() and __hvm_copy() should discard the write ops, yet I also noticed many other places using p2m_is_readonly, or only the "p2mt == p2m_ram_ro" check (in __hvm_copy/__hvm_clear). Among all those other places, are there any that should also use p2m_is_discard_write? (A rough sketch of what I mean for this path follows below.)
2> Are writes to p2m_grant_map_ro pages also supposed to be discarded? Does (or should) the handling of this type of page go through __hvm_copy()/__hvm_clear()?
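
For question 1, what I had in mind is roughly the following shape for
the write path in __hvm_copy()/__hvm_clear(), using the predicate
sketched above (only a sketch, not a patch; unmapping and cleanup are
omitted):

    if ( flags & HVMCOPY_to_guest )
    {
        if ( p2m_is_discard_write(p2mt) )
        {
            /* p2m_ram_ro and p2m_grant_map_ro: drop the write (and
             * possibly log it), as the public interface mandates. */
        }
        else if ( p2mt == p2m_mmio_write_dm )
        {
            /* The new type: fail the copy so the caller forwards the
             * write to the device model via hvmemul_do_mmio(). */
            return HVMCOPY_bad_gfn_to_mfn;   /* cleanup omitted here */
        }
        else
        {
            memcpy(p, buf, count);
            paging_mark_dirty(curr->domain, page_to_mfn(page));
        }
    }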

I'm new to this area, so sorry for the messy questions. :)

Yu




Cheers,

Tim.





 

