
Re: [Xen-devel] [v5][PATCH 10/16] tools: introduce some new parameters to set rdm policy

On 2015/7/7 22:40, Ian Jackson wrote:
> Chen, Tiejun writes ("Re: [v5][PATCH 10/16] tools: introduce some new parameters to set rdm policy"):
>> On 2015/7/7 21:26, Ian Jackson wrote:
>>> Is "none" not "hoping the user can ignore the problem" ?

>> It's impossible, since the hypervisor or tools can't prevent a VM
>> from accessing RDM. So as I said earlier, "none" is only suitable for
>> two cases:
>>
>> #1. The devices don't own any RDM
>> #2. The guest OS doesn't access RDM
>>
>> Compared to other cases, these two are more common in real-world
>> usage. So we'd like to keep "none" as the default.

> I have read your 00/ description, and these two emails:
> I have also reread the documentation you provide in this patch.
>
> I'm afraid I still don't understand why it is safe for the default to
> be `none'.  My view is that the default setting should avoid a
> possibility of memory corruption or system malfunction.

RMRR is used when passing through a device, and in that case it requires this sort of 1:1 mapping. Moreover, an RMRR is always marked as RESERVED in the e820 table, so originally no VM can create these mappings, right? So as I said, if the devices you're trying to pass through don't own any RDM, or you don't pass through any devices at all, there's no memory corruption.
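Concretely, the knobs under discussion look like this in a domain config (a sketch using the option names from this v5 series' docs; the exact spelling may differ in the final version, and the PCI address is just a made-up example):

```
# xl.cfg sketch -- option names follow this v5 series' docs and may
# not match the final merged syntax; 01:00.0 is an example BDF only.

# Global policy: consult all host RDMs, but only warn on a conflict
# with guest address space instead of failing domain creation:
rdm = "type=host,reserve=relaxed"

# The default being debated -- no RDM checking at all:
# rdm = "type=none"

# Per-device override: fail if this device's RDM conflicts:
pci = [ '01:00.0,rdm_reserve=strict' ]
```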

> Your description in the document says this:
>
>   +"none" is the default value and it means we don't check any reserved
>   +regions and then all rdm policies would be ignored. Guest just works
>   +as before and the conflict of RDM and guest address space wouldn't
>   +be handled, and then this may result in the associated device not
>   +being able to work or even crash the VM. So if you're assigning this
>   +kind of device, this option is not recommended unless you can make
>   +sure any conflict doesn't exist.

> So you do not recommend the use of `none', however you make it the
> default.

> I'm afraid also that I don't quite understand the interaction between
> none-vs-host on the one hand and strict-vs-relaxed on the other.  The
> documentation would suggest that the only difference between the two
> is that the latter may print some extra warning messages.  But the
> code appears to do a lot of work to move guest memory about, when
> type=none is specified.

> Also I don't understand this:
>
>> It's impossible, since the hypervisor or tools can't prevent a VM
>> from accessing RDM. So as I said earlier, "none" is only suitable for
>> two cases,

> Perhaps I am missing something here.
>
> The hypervisor can obviously prevent a VM from accessing RDM by not
> setting up a mapping for it.  The problem is then that the VM might
> try to make the access anyway, and then crash or malfunction.  But
> presumably the VM can be instructed via the E820 or some such not to
> access these regions.

As I said above, we need to create these mappings 1:1.

> For a VM which has been given passthrough access to a device which
> does DMA things are more complicated but again I think the hypervisor
> and tools should be able to deny accesses using the iommu tables.

RMRR is a special case: we have to set these mappings up 1:1. This is why we introduce these patches, to make sure RMRRs don't overlap normal RAM and MMIO.

> But then I also don't understand why your comment "the hypervisor or
> tools can't prevent from accessing RDM by a VM" explains why "none" is
> a good default.

I mean, if you don't set these mappings, those devices can't work at all and may even crash the VM, as with IGD passthrough. But as I'm also saying, in most cases we don't pass through any devices at all, and devices that own RDM are very rare. So why shouldn't we make 'none' the default in Xen?

> Sorry if I'm being dense.


That's always fine with me, but I just wonder: is this a good time to start seeking another, *optional*, approach that overturns the current design and implementation? Unless you're very sure we're doing something wrong. I notice you were CCed when we posted the associated design.


Xen-devel mailing list


