
Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU



Honestly, I don't have an AMD machine to test my code - I just wrote it for
completeness' sake. I based my code on deallocate_next_page_table() in the
same file.

I agree that the map/unmap can be easily avoided.
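
For instance (untested), gating the recursive call on the next level should do
it - then the function is never entered with level <= 1, and the map/unmap of
a leaf table never happens:

    /* Untested: only descend into tables we will actually walk. */
    if ( present && (next_table_maddr != 0) && (next_level > 1) )
        amd_dump_p2m_table_level(
            maddr_to_page(next_table_maddr), level - 1, address);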

Someone more familiar with AMD IOMMU might be able to comment more.

Thanks,
Santosh

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@xxxxxxxx] 
Sent: Tuesday, August 07, 2012 8:52 AM
To: Santosh Jodh
Cc: wei.wang2@xxxxxxx; xiantao.zhang@xxxxxxxxx; xen-devel; Tim (Xen.org)
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU

>>> On 07.08.12 at 16:49, Santosh Jodh <santosh.jodh@xxxxxxxxxx> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c     Thu Aug 02 11:49:37 2012 +0200
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c     Tue Aug 07 07:46:14 2012 -0700
> @@ -22,6 +22,7 @@
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/paging.h>
> +#include <xen/softirq.h>
>  #include <asm/hvm/iommu.h>
>  #include <asm/amd-iommu.h>
>  #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
> +{
> +    u64 address;
> +    void *table_vaddr, *pde;
> +    u64 next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", 
> page_to_maddr(pg));
> +        return;
> +    }
> +
> +    if ( level > 1 )

As long as the top level call below can never pass <= 1 here and the recursive 
call gets gated accordingly, I don't see why you do it differently here than 
for VT-d, resulting in both unnecessarily deep indentation and a pointless 
map/unmap pair around the conditional.
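
Roughly along these lines (an untested sketch only - it reuses the helpers
from the patch, assumes amd_dump_p2m_table() never passes paging_mode <= 1,
and the next_level > 1 gate would need double checking against the AMD PDE
format):

static void amd_dump_p2m_table_level(struct page_info* pg, int level, u64 gpa)
{
    u64 address;
    void *table_vaddr, *pde;
    u64 next_table_maddr;
    int index, next_level, present;
    u32 *entry;

    /* Callers never pass level <= 1, so map unconditionally. */
    table_vaddr = __map_domain_page(pg);
    if ( table_vaddr == NULL )
    {
        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));
        return;
    }

    for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
    {
        if ( !(index % 2) )
            process_pending_softirqs();

        pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
        next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
        entry = (u32*)pde;

        next_level = get_field_from_reg_u32(entry[0],
                                            IOMMU_PDE_NEXT_LEVEL_MASK,
                                            IOMMU_PDE_NEXT_LEVEL_SHIFT);
        present = get_field_from_reg_u32(entry[0],
                                         IOMMU_PDE_PRESENT_MASK,
                                         IOMMU_PDE_PRESENT_SHIFT);
        if ( !present )
            continue;

        address = gpa + amd_offset_level_address(index, level);

        /* Gated recursion: never enter a level 1 table, which is what
         * makes the level > 1 wrapper (and the map/unmap around it)
         * in the patch unnecessary. */
        if ( next_table_maddr && next_level > 1 )
            amd_dump_p2m_table_level(maddr_to_page(next_table_maddr),
                                     level - 1, address);

        printk("gfn: %-16lx  mfn: %-16lx\n", address, next_table_maddr);
    }

    unmap_domain_page(table_vaddr);
}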

Jan

> +    {
> +        for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +        {
> +            if ( !(index % 2) )
> +                process_pending_softirqs();
> +
> +            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +            entry = (u32*)pde;
> +
> +            next_level = get_field_from_reg_u32(entry[0],
> +                                                IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                                IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +            present = get_field_from_reg_u32(entry[0],
> +                                             IOMMU_PDE_PRESENT_MASK,
> +                                             IOMMU_PDE_PRESENT_SHIFT);
> +
> +            address = gpa + amd_offset_level_address(index, level);
> +            if ( (next_table_maddr != 0) && (next_level != 0)
> +                && present )
> +            {
> +                amd_dump_p2m_table_level(
> +                    maddr_to_page(next_table_maddr), level - 1, address);
> +            }
> +
> +            if ( present )
> +            {
> +                printk("gfn: %-16lx  mfn: %-16lx\n",
> +                       address, next_table_maddr);
> +            }
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table ) 
> +        return;
> +
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
> +}
> +
>  const struct iommu_ops amd_iommu_ops = {
>      .init = amd_iommu_domain_init,
>      .dom0_init = amd_iommu_dom0_init,





 

