
Re: [Xen-devel] [RFC][v2][PATCH 00/14] Fix RMRR

On 2015/5/22 17:46, Jan Beulich wrote:
On 22.05.15 at 11:35, <tiejun.chen@xxxxxxxxx> wrote:
As you know, all devices are owned by Dom0 before any DomU is created, right? Do we allow Dom0 to keep owning one device of a group while another device in the same group is assigned?

Clearly not, or - just like anything else putting the security of a system
at risk - only at explicit host admin request.

You're right.

After discussing internally, we intend to handle this simply for now, since a shared RMRR is a rare case in our experience. Furthermore, Xen has no existing API to assign such a group of devices directly, and it does not even identify them; currently we always assign devices one by one, right? This means a complete solution would need extra work on identification, atomicity, hotplug and so on. That would touch the hypervisor and the tools at the same time, so it is difficult to finish in time for 4.6.

So could we do this separately?

#1. Phase 1 to 4.6

#1.1. Do a simple implementation

We simply refuse the assignment when the device being assigned belongs to such a group, like this:

@@ -2291,6 +2291,16 @@ static int intel_iommu_assign_device(
              PCI_BUS(bdf) == bus &&
              PCI_DEVFN2(bdf) == devfn )
+            if ( rmrr->scope.devices_cnt > 1 )
+            {
+                ret = -EPERM;
+                printk(XENLOG_G_ERR VTDPREFIX
+                       " cannot assign device with shared RMRR [%"PRIx64",%"PRIx64"] for Dom%d (%d)\n",
+                       rmrr->base_address, rmrr->end_address,
+                       d->domain_id, ret);
+                reassign_device_ownership(d, hardware_domain, devfn, pdev);
+                break;
+            }
             ret = rmrr_identity_mapping(d, 1, rmrr, flag);
             if ( ret )
Note this is just draft code to show the idea. I'm also wondering whether we should introduce a flag to bypass this check, so that we still have an approach to restore the original behavior.

#1.2. Post a design

We'd like to post a preliminary design to the Xen community to work out a better solution.

#2. Phase 2 after 4.6

Once the design is settled, we will start writing patches to address this completely.

Any thoughts?

