
Re: [Xen-devel] [v7][RFC][PATCH 01/13] xen: RMRR fix

On 2014/10/27 17:41, Jan Beulich wrote:
On 27.10.14 at 03:00, <tiejun.chen@xxxxxxxxx> wrote:
On 2014/10/24 18:52, Jan Beulich wrote:
On 24.10.14 at 09:34, <tiejun.chen@xxxxxxxxx> wrote:
5. Before we take real device assignment, any access to RMRR may issue
ept_handle_violation because of p2m_access_n. Then we just call
update_guest_eip() to return.

I.e. ignore such accesses? Why?

Yes. Such an illegal access isn't allowed, but it's enough to ignore it without further protection or punishment.

Or what handling would you suggest here?

If the access is illegal, inject a fault to the guest or kill it, unless you

By "kill" do you mean crashing the domain? That seems rather radical, doesn't it? So I guess it's better to inject a fault.

But what kind of fault would you prefer?

can explain why ignoring such an access is correct/necessary (e.g.
I could see this being the equivalent of an access to a memory region
the address of which is not being decoded by any component in a
physical system).

Now in our case we add a rule:
   - if p2m_access_n is set we also set this mapping.

Does that not conflict with eventual use mem-access makes of this [...]

Do you mean what happens after we reset these ranges to p2m_access_rw? We already reserve these ranges, so the guest shouldn't access them at all. And if a guest still maliciously accesses them, the device may not work correctly.

In our case, we always initialize these RMRR ranges with p2m_access_n to make sure we can intercept any illegal access to them, until we reset them with p2m_access_rw via set_identity_p2m_entry(d, base_pfn, p2m_access_rw).

This restates what the patch does but doesn't answer the question.

Or Yang,

Could you reply to this? I guess I'm still misunderstanding Jan's question.


