
Re: [PATCH V3 11/13] HV/IOMMU: Enable swiotlb bounce buffer for Isolation VM





On 8/20/2021 2:11 AM, Michael Kelley wrote:
  }
+
+/*
+ * hv_map_memory - map memory to extra space in the AMD SEV-SNP Isolation VM.
+ */
+void *hv_map_memory(void *addr, unsigned long size)
+{
+       unsigned long *pfns = kcalloc(size / HV_HYP_PAGE_SIZE,
+                                     sizeof(unsigned long), GFP_KERNEL);
+       void *vaddr;
+       int i;
+
+       if (!pfns)
+               return NULL;
+
+       for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
+               pfns[i] = virt_to_hvpfn(addr + i * HV_HYP_PAGE_SIZE) +
+                       (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+       vaddr = vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
+       kfree(pfns);
+
+       return vaddr;
+}
This function is manipulating page tables in the guest VM.  It is not involved
in communicating with Hyper-V, or passing PFNs to Hyper-V.  The pfn array
contains guest PFNs, not Hyper-V PFNs.  So it should use PAGE_SIZE
instead of HV_HYP_PAGE_SIZE, and similarly PAGE_SHIFT and virt_to_pfn().
If this code were ever to run on ARM64 in the future with a PAGE_SIZE other
than 4 Kbytes, using PAGE_SIZE would still be the correct choice.

OK. Will update with PAGE_SIZE.
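[ Incorporating that feedback, the function would look roughly like the
  sketch below -- untested, and assuming virt_to_pfn() is usable in this
  file; the guest PFNs are offset by the shared GPA boundary exactly as
  in the original patch, only the page-size macros change: ]

/*
 * hv_map_memory - map memory above the shared GPA boundary in an
 * AMD SEV-SNP Isolation VM.  The pfn array holds guest PFNs, so
 * guest PAGE_SIZE / PAGE_SHIFT / virt_to_pfn() are used throughout.
 */
void *hv_map_memory(void *addr, unsigned long size)
{
	unsigned long *pfns = kcalloc(size / PAGE_SIZE,
				      sizeof(unsigned long), GFP_KERNEL);
	void *vaddr;
	int i;

	if (!pfns)
		return NULL;

	for (i = 0; i < size / PAGE_SIZE; i++)
		pfns[i] = virt_to_pfn(addr + i * PAGE_SIZE) +
			(ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);

	vaddr = vmap_pfn(pfns, size / PAGE_SIZE, PAGE_KERNEL_IO);
	kfree(pfns);

	return vaddr;
}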



+void __init hyperv_iommu_swiotlb_init(void)
+{
+       unsigned long bytes;
+
+       /*
+        * Allocate the Hyper-V swiotlb bounce buffer early in boot
+        * in order to reserve a large contiguous region of memory.
+        */
+       hyperv_io_tlb_size = 256 * 1024 * 1024;
A hard-coded size here seems problematic. The memory size of
Isolated VMs can vary by orders of magnitude.  I see that
xen_swiotlb_init() uses swiotlb_size_or_default(), which at least
pays attention to the value specified on the kernel boot line.

Another example is sev_setup_arch(), which in the native case sets
the size to 6% of main memory, with a max of 1 Gbyte.  This is
the case that's closer to Isolated VMs, so doing something
similar could be a good approach.


Yes, agree. It's better to keep the bounce buffer sizing consistent with AMD SEV.
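[ The sev_setup_arch() heuristic described above amounts to roughly the
  calculation below.  This is a standalone userspace demo, not kernel
  code; the 6% and 1 GByte figures are the ones quoted in the review
  comment, and swiotlb_size_for() is a hypothetical helper name: ]

#include <stdio.h>

/*
 * Sketch of the sev_setup_arch()-style sizing heuristic:
 * 6% of main memory, capped at 1 GByte.
 */
static unsigned long swiotlb_size_for(unsigned long total_mem)
{
	unsigned long size = total_mem * 6 / 100;
	unsigned long max = 1UL << 30;	/* 1 GByte cap */

	return size < max ? size : max;
}

int main(void)
{
	/* a 4 GByte VM gets ~245 MBytes of bounce buffer */
	printf("%lu\n", swiotlb_size_for(4UL << 30));
	/* a 64 GByte VM hits the 1 GByte cap */
	printf("%lu\n", swiotlb_size_for(64UL << 30));
	return 0;
}

[ Compared with the hard-coded 256 MBytes, this scales down for small
  VMs and is bounded for large ones, while swiotlb_size_or_default()
  would still let the kernel boot line override it. ]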



 

