
Re: [Xen-devel] [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps




On 2014/11/13 11:09, Chen, Tiejun wrote:
On 2014/11/12 16:55, Jan Beulich wrote:
On 12.11.14 at 04:05, <tiejun.chen@xxxxxxxxx> wrote:
I don't see any feedback on this point, so I think you still prefer that we
do all the checks in the callback function.

As a draft this looks reasonable, but there are various bugs to be
dealt with along with cosmetic issues (I'll point out the former, but
I'm tired of pointing out the latter once again - please go back to
earlier reviews of patches to refresh e.g. what types to use for
loop variables).

I tried to address this, but obviously we have to pass each 'bdf' to the
callback functions,

Yes, but at the generic IOMMU layer this shouldn't be named "bdf",
but something more neutral (maybe "id"). And you again lost the
segment there.

@@ -36,9 +40,24 @@ static int get_reserved_device_memory(xen_pfn_t start,
         if ( rdm.start_pfn != start || rdm.nr_pages != nr )
             return -ERANGE;

-        if ( __copy_to_compat_offset(grdm->map.buffer, grdm->used_entries,
-                                     &rdm, 1) )
-            return -EFAULT;
+        if ( d->arch.hvm_domain.pci_force )
+        {
+            if ( __copy_to_compat_offset(grdm->map.buffer, grdm->used_entries,
+                                         &rdm, 1) )
+                return -EFAULT;
+        }
+        else
+        {
+            for ( i = 0; i < d->arch.hvm_domain.num_pcidevs; i++ )
+            {
+                pt_bdf = PCI_BDF2(d->arch.hvm_domain.pcidevs[i].bus,
+                                  d->arch.hvm_domain.pcidevs[i].devfn);
+                if ( pt_bdf == bdf )
+                    if ( __copy_to_compat_offset(grdm->map.buffer,
+                                                 grdm->used_entries,
+                                                 &rdm, 1) )
+                        return -EFAULT;

I think d->arch.hvm_domain.pcidevs[] shouldn't contain duplicates,
and hence there's no point continuing the loop if you found a match.


I took another look at this. It seems we shouldn't just check the bdf since,
as you know, one entry may be covered by multiple different BDFs, so we have
to filter them so that each entry is kept only once. Instead, I would really
prefer that we check what to expose before we issue the hypercall.
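
To make that filtering intention clearer, I'm thinking of it roughly as a
small predicate like the sketch below. This is just an illustration rather
than the posted patch; the helper name is made up and the field names simply
follow the snippets in this thread:

/*
 * Return true once any device assigned to the domain matches this RMRR
 * unit, so the unit is reported at most once even if several BDFs sit
 * in its device scope.
 */
static bool_t rmrr_is_assigned_to(const struct acpi_rmrr_unit *rmrr,
                                  const struct domain *d)
{
    unsigned int i, j;

    for ( i = 0; i < d->arch.hvm_domain.num_pcidevs; i++ )
    {
        u16 bdf = PCI_BDF2(d->arch.hvm_domain.pcidevs[i].bus,
                           d->arch.hvm_domain.pcidevs[i].devfn);

        if ( rmrr->segment != d->arch.hvm_domain.pcidevs[i].seg )
            continue;

        for ( j = 0; j < rmrr->scope.devices_cnt; j++ )
            if ( rmrr->scope.devices[j] == bdf )
                return 1;    /* the first match is enough */
    }

    return 0;
}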

Even if we eventually reorder that sequence, this only changes the approach used to get the BDF. Are you fine with this subsequent change?

@@ -894,18 +894,55 @@ int platform_supports_x2apic(void)
     return cpu_has_x2apic && ((dmar_flags & mask) == ACPI_DMAR_INTR_REMAP);
 }

-int intel_iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt)
+int intel_iommu_get_reserved_device_memory(iommu_grdm_t *func, struct domain *d,
+                                           void *ctxt)
 {
     struct acpi_rmrr_unit *rmrr;
-    int rc = 0;
+    int rc = 0, seg_check = 1;
+    unsigned int i, j;
+    u16 id, bdf;

-    list_for_each_entry(rmrr, &acpi_rmrr_units, list)
+    if ( d->arch.hvm_domain.pci_force )
     {
-        rc = func(PFN_DOWN(rmrr->base_address),
-                  PFN_UP(rmrr->end_address) - PFN_DOWN(rmrr->base_address),
-                  ctxt);
-        if ( rc )
-            break;
+        list_for_each_entry(rmrr, &acpi_rmrr_units, list)
+        {
+            rc = func(PFN_DOWN(rmrr->base_address),
+                      PFN_UP(rmrr->end_address) - PFN_DOWN(rmrr->base_address),
+                      ctxt);
+            if ( rc )
+                break;
+        }
+    }
+    else
+    {
+        list_for_each_entry(rmrr, &acpi_rmrr_units, list)
+        {
+            for ( i = 0; i < d->arch.hvm_domain.num_pcidevs && seg_check; i++ )
+            {
+                if ( rmrr->segment == d->arch.hvm_domain.pcidevs[i].seg )
+                {
+                    bdf = PCI_BDF2(d->arch.hvm_domain.pcidevs[i].bus,
+                                   d->arch.hvm_domain.pcidevs[i].devfn);
+                    for ( j = 0; j < rmrr->scope.devices_cnt && seg_check; j++ )
+                    {
+                        id = rmrr->scope.devices[j];
+                        if ( bdf == id )
+                        {
+                            rc = func(PFN_DOWN(rmrr->base_address),
+                                      PFN_UP(rmrr->end_address) -
+                                        PFN_DOWN(rmrr->base_address),
+                                      ctxt);
+                            if ( rc )
+                                return rc;
+                            /* Hit this RMRR entry, so report it only once. */
+                            seg_check = 0;
+                        }
+                    }
+                }
+            }
+            /* Go on to the next RMRR entry. */
+            seg_check = 1;
+        }
     }

     return rc;

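On top of that, I assume the generic IOMMU layer would just thread the domain
through to the vendor hook, along the lines of the sketch below. Again this is
only a sketch: I'm assuming iommu_get_ops() and a new
.get_reserved_device_memory hook here, and the exact names are open:

int iommu_get_reserved_device_memory(iommu_grdm_t *func, struct domain *d,
                                     void *ctxt)
{
    const struct iommu_ops *ops = iommu_get_ops();

    /* Nothing to report if the platform doesn't implement the hook. */
    if ( !iommu_enabled || !ops->get_reserved_device_memory )
        return 0;

    return ops->get_reserved_device_memory(func, d, ctxt);
}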

BTW, I already pinged Yang locally to look into the possibility of reordering
the sequence of device assignment and memory population on the IOMMU side.

Yang and Kevin,

Any comments on this requirement?

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
