
Re: [Xen-devel] [v7][RFC][PATCH 06/13] hvmloader/ram: check if guest memory is out of reserved device memory maps



On 2014/11/18 16:01, Jan Beulich wrote:
On 18.11.14 at 04:08, <tiejun.chen@xxxxxxxxx> wrote:
Here I tried to implement what you want. Note I just picked two key
fragments, since the others are not a big deal.

#1:

@@ -898,14 +898,25 @@ int intel_iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt)
   {
       struct acpi_rmrr_unit *rmrr;
       int rc = 0;
+    unsigned int i;
+    u32 id;
+    u16 bdf;

       list_for_each_entry(rmrr, &acpi_rmrr_units, list)
       {
-        rc = func(PFN_DOWN(rmrr->base_address),
-                  PFN_UP(rmrr->end_address) - PFN_DOWN(rmrr->base_address),
-                  ctxt);
-        if ( rc )
-            break;
+        for (i = 0; (bdf = rmrr->scope.devices[i]) &&
+                    i < rmrr->scope.devices_cnt && !rc; i++)
+        {
+            id = PCI_SBDF(rmrr->segment, bdf);
+            rc = func(PFN_DOWN(rmrr->base_address),
+                               PFN_UP(rmrr->end_address) -
+                                PFN_DOWN(rmrr->base_address),
+                               id,
+                               ctxt);
+            if ( rc < 0 )
+                return rc;
+        }
+        rc = 0;
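
For context, the extra argument passed to func() above means the iommu_grdm_t
callback type named in the hunk header has to gain a device identifier as well.
A minimal sketch of what that typedef could look like; the parameter types and
names are an assumption, not quoted from this series:

/* Sketch only: a plausible callback type once a device id is reported.
 * Parameter types/names are assumed, not taken from the actual patch. */
typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr,
                         u32 id, void *ctxt);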

Getting close - the main issue is that (as previously mentioned) you
should avoid open-coding for_each_rmrr_device(). It also doesn't

Sorry, are you referring to these lines?

>> +        for (i = 0; (bdf = rmrr->scope.devices[i]) &&
>> +                    i < rmrr->scope.devices_cnt && !rc; i++)

So without looking up devices[i], how can we call func() for each sbdf as you mentioned?
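
For reference, for_each_rmrr_device() already performs that devices[] lookup
internally, so a caller only names rmrr, bdf and the index. Roughly how the
iterator looks; this is a reconstruction of the macro in
xen/drivers/passthrough/vtd/dmar.h, so treat the details as an assumption:

/* Reconstruction, not a verbatim quote: walk every RMRR unit and, within
 * each, every device scope entry, handing the loop body rmrr, bdf and idx. */
#define for_each_rmrr_device(rmrr, bdf, idx)                \
    list_for_each_entry(rmrr, &acpi_rmrr_units, list)       \
        for ( idx = 0; (bdf = rmrr->scope.devices[idx]) &&  \
                  idx < rmrr->scope.devices_cnt; idx++ )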

look like you really need the local variable 'id'.

Okay, I can pass PCI_SBDF(rmrr->segment, bdf) directly.
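
Putting both remarks together, a minimal sketch of the resulting helper,
assuming the for_each_rmrr_device() iterator and leaving aside any special
treatment of positive return values from func():

int intel_iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt)
{
    struct acpi_rmrr_unit *rmrr;
    unsigned int i;
    u16 bdf;

    /* The iterator walks acpi_rmrr_units and each unit's scope.devices[],
     * so no open-coded inner loop is needed. */
    for_each_rmrr_device ( rmrr, bdf, i )
    {
        /* Pass the (segment, bdf) pair directly; no local 'id' variable. */
        int rc = func(PFN_DOWN(rmrr->base_address),
                      PFN_UP(rmrr->end_address) -
                      PFN_DOWN(rmrr->base_address),
                      PCI_SBDF(rmrr->segment, bdf),
                      ctxt);

        if ( rc < 0 )
            return rc;
    }

    return 0;
}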

Thanks
Tiejun

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

