
Re: [Resend RFC PATCH V4 09/13] x86/Swiotlb/HV: Add Swiotlb bounce buffer remap function for HV IVM



Please split the swiotlb changes into a separate patch from the
consumer.

>  }
> +
> +/*
> + * hv_map_memory - map memory to extra space in the AMD SEV-SNP Isolation VM.
> + */
> +unsigned long hv_map_memory(unsigned long addr, unsigned long size)
> +{
> +     unsigned long *pfns = kcalloc(size / HV_HYP_PAGE_SIZE,
> +                                   sizeof(unsigned long),
> +                                   GFP_KERNEL);
> +     unsigned long vaddr;
> +     int i;
> +
> +     if (!pfns)
> +             return (unsigned long)NULL;
> +
> +     for (i = 0; i < size / HV_HYP_PAGE_SIZE; i++)
> +             pfns[i] = virt_to_hvpfn((void *)addr + i * HV_HYP_PAGE_SIZE) +
> +                     (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
> +
> +     vaddr = (unsigned long)vmap_pfn(pfns, size / HV_HYP_PAGE_SIZE,
> +                                     PAGE_KERNEL_IO);
> +     kfree(pfns);
> +
> +     return vaddr;

This seems to miss a 'select VMAP_PFN'.  But more importantly I don't
think this actually works.  Various DMA APIs do expect a struct page
backing, so how is this going to work with say dma_mmap_attrs or
dma_get_sgtable_attrs?
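
To make the struct page problem concrete: dma_common_get_sgtable() boils
down to roughly the following (paraphrased from kernel/dma/ops_helpers.c,
not the exact code), and virt_to_page() on an address returned by
vmap_pfn() is simply bogus:

int dma_common_get_sgtable(struct device *dev, struct sg_table *sgt,
		void *cpu_addr, dma_addr_t dma_addr, size_t size,
		unsigned long attrs)
{
	/* assumes cpu_addr sits in the linear map - not true for vmap_pfn() */
	struct page *page = virt_to_page(cpu_addr);
	int ret;

	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
	if (!ret)
		sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
	return ret;
}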

> +static unsigned long __map_memory(unsigned long addr, unsigned long size)
> +{
> +     if (hv_is_isolation_supported())
> +             return hv_map_memory(addr, size);
> +
> +     return addr;
> +}
> +
> +static void __unmap_memory(unsigned long addr)
> +{
> +     if (hv_is_isolation_supported())
> +             hv_unmap_memory(addr);
> +}
> +
> +unsigned long set_memory_decrypted_map(unsigned long addr, unsigned long size)
> +{
> +     if (__set_memory_enc_dec(addr, size / PAGE_SIZE, false))
> +             return (unsigned long)NULL;
> +
> +     return __map_memory(addr, size);
> +}
> +
> +int set_memory_encrypted_unmap(unsigned long addr, unsigned long size)
> +{
> +     __unmap_memory(addr);
> +     return __set_memory_enc_dec(addr, size / PAGE_SIZE, true);
> +}

Why this obfuscation into all kinds of strange helpers?  Also I think
we want an ops vector (or alternative calls) instead of the random
if checks here.
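
E.g. something like this (just a sketch with made-up names, and using the
pointer-based signatures suggested further down), so that callers never
see a hypervisor check at all:

struct mem_crypt_ops {
	void *(*map_decrypted)(void *vaddr, unsigned long size);
	void (*unmap_decrypted)(void *vaddr);
};

static void *direct_map_decrypted(void *vaddr, unsigned long size)
{
	return vaddr;		/* no remapping needed */
}

static void direct_unmap_decrypted(void *vaddr)
{
}

static const struct mem_crypt_ops direct_crypt_ops = {
	.map_decrypted		= direct_map_decrypted,
	.unmap_decrypted	= direct_unmap_decrypted,
};

/* the Hyper-V init code would point this at its own ops instead */
static const struct mem_crypt_ops *crypt_ops = &direct_crypt_ops;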

> + * @vstart:  The virtual start address of the swiotlb memory pool. The swiotlb
> + *           memory pool may be remapped in the memory encrypted case and store

Normally we'd call this vaddr or cpu_addr.

> -     set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> -     memset(vaddr, 0, bytes);
> +     mem->vstart = (void *)set_memory_decrypted_map((unsigned long)vaddr, bytes);

Please always pass kernel virtual addresses as pointers.

And I think these APIs might need better names, e.g.

arch_dma_map_decrypted and arch_dma_unmap_decrypted.

Also these will need fallback versions for non-x86 architectures that
currently use memory encryption.
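
Something like the below for the generic side, as a rough sketch only
(the __weak no-op defaults are one way to provide the fallback; actual
placement and naming to be sorted out):

/*
 * Default no-op implementations: architectures that have to remap the
 * decrypted memory (like the Hyper-V isolation VM case with its shared
 * GPA boundary) override these.
 */
void * __weak arch_dma_map_decrypted(void *vaddr, unsigned long bytes)
{
	return vaddr;
}

void __weak arch_dma_unmap_decrypted(void *vaddr, unsigned long bytes)
{
}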
