
Re: [Xen-ia64-devel][PATCH][VTD] small patches for VTD



On Thu, Oct 23, 2008 at 10:03:55AM +0800, Xu, Anthony wrote:
> Isaku Yamahata wrote:
> > On Wed, Oct 22, 2008 at 05:35:36PM +0800, Xu, Anthony wrote:
> >> The new one,
> >>
> >>
> >>>> -    if (phy_pte.ma != VA_MATTR_NATPAGE)
> >>>> +    /* if a device is assigned to a domain through VTD, the MMIO
> >>>> +     * for this device needs to retain the UC attribute
> >>>> +     */
> >>>> +    if (phy_pte.ma == VA_MATTR_WC)
> >>>>          phy_pte.ma = VA_MATTR_WB;
> >>>>
> >>>
> >>> Doesn't this break the intention of c/s 15134:466f71b1e831?
> >>> To be honest, I'm not sure. Kyouya or Akio, do you have any
> >>> comments?
> >>>
> >> This section is not included; it needs confirmation from Kyouya or Akio.
> >>
> >> The patches for mm.c are not included;
> >> I'll send them out as a separate patch.
> >
> > Sounds good. The stuff in mm.c seems very tough.
> > However, the following patch still touches mm.c.
> > Did you accidentally forget to remove it?
> 
> I didn't remove all of the small mm.c patches;
> I just removed the difficult part, which is related to atomic operations.
> 
> Please check in this patch first; I have tested it by booting a Linux guest.

There is a race:
guest_physmap_{add, remove}_page() can be called simultaneously
for a PV domain.
The p2m table and the IOMMU table are updated without any lock,
so they can become inconsistent with each other.
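Something like the following is what I mean by updating both tables
under one lock (just a sketch, untested; d->physmap_lock and
p2m_set_entry() are made-up names standing in for the real p2m update):

    /* Sketch only -- identifiers are illustrative, not the real Xen
     * API.  The point is that the p2m update and the IOMMU update must
     * sit inside one critical section. */
    static void physmap_add_locked(struct domain *d,
                                   unsigned long gpfn, unsigned long mfn)
    {
        spin_lock(&d->physmap_lock);    /* hypothetical per-domain lock */
        p2m_set_entry(d, gpfn, mfn);    /* p2m table update ... */
        iommu_map_page(d, gpfn, mfn);   /* ... and IOMMU table update */
        spin_unlock(&d->physmap_lock);  /* remove() would take the same
                                           lock, keeping both consistent */
    }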

thanks,

> >> diff -r 02c8733e2d91 xen/arch/ia64/vmx/viosapic.c
> >> --- a/xen/arch/ia64/vmx/viosapic.c    Wed Oct 22 17:20:15 2008 +0900
> >> +++ b/xen/arch/ia64/vmx/viosapic.c    Wed Oct 22 17:08:32 2008 +0800
> >> @@ -121,6 +121,13 @@
> >>                   redir_num, vector);
> >>          return;
> >>      }
> >> +    if ( iommu_enabled )
> >> +    {
> >> +        spin_unlock(&viosapic->lock);
> >> +        hvm_dpci_eoi(current->domain, redir_num, &viosapic->redirtbl[redir_num]);
> >> +        spin_lock(&viosapic->lock);
> >> +    }
> >> +
> >>      service_iosapic(viosapic);
> >>      spin_unlock(&viosapic->lock);
> >>  }
> >
> > viosapic->isr and irr must be handled atomically,
> > so unlocking and then locking again breaks that requirement.
> > (I haven't looked at the viosapic code very closely,
> > so I may be wrong.)
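To illustrate what I mean, here is one possible shape (an untested
sketch; deferring the callback is only my suggestion, and
need_dpci_eoi is a made-up local): record the EOI while holding the
lock and call hvm_dpci_eoi() only after the final unlock, so isr/irr
are never observed half-updated:

    /* Sketch only, untested.  Instead of dropping the lock in the
     * middle, remember the EOI and deliver it after the final unlock. */
    static void viosapic_update_EOI_sketch(struct viosapic *viosapic,
                                           int redir_num)
    {
        int need_dpci_eoi = 0;

        spin_lock(&viosapic->lock);
        if ( iommu_enabled )
            need_dpci_eoi = 1;          /* don't unlock here */
        service_iosapic(viosapic);      /* isr/irr handled under the lock */
        spin_unlock(&viosapic->lock);

        if ( need_dpci_eoi )
            hvm_dpci_eoi(current->domain, redir_num,
                         &viosapic->redirtbl[redir_num]);
    }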
> >
> >
> >> diff -r 02c8733e2d91 xen/arch/ia64/xen/mm.c
> >> --- a/xen/arch/ia64/xen/mm.c  Wed Oct 22 17:20:15 2008 +0900
> >> +++ b/xen/arch/ia64/xen/mm.c  Wed Oct 22 17:08:32 2008 +0800
> >> @@ -1427,6 +1427,8 @@
> >>      if (mfn == INVALID_MFN) {
> >>          // clear pte
> >>          old_pte = ptep_get_and_clear(mm, mpaddr, pte);
> >> +        if(!pte_mem(old_pte))
> >> +            return;
> >>          mfn = pte_pfn(old_pte);
> >>      } else {
> >>          unsigned long old_arflags;
> >> @@ -1463,6 +1465,13 @@
> >>      perfc_incr(zap_domain_page_one);
> >>      if(!mfn_valid(mfn))
> >>          return;
> >> +
> >> +    {
> >> +        int i, j;
> >> +        j = 1 << (PAGE_SHIFT-PAGE_SHIFT_4K);
> >> +        for(i = 0 ; i < j; i++)
> >> +            iommu_unmap_page(d, (mpaddr>>PAGE_SHIFT)*j + i);
> >> +    }
> >>
> >>      page = mfn_to_page(mfn);
> >>      BUG_ON((page->count_info & PGC_count_mask) == 0);
> >> @@ -2844,10 +2853,16 @@
> >>  __guest_physmap_add_page(struct domain *d, unsigned long gpfn,
> >>                           unsigned long mfn)
> >>  {
> >> +    int i, j;
> >> +
> >>      set_gpfn_from_mfn(mfn, gpfn);
> >>      smp_mb();
> >>      assign_domain_page_replace(d, gpfn << PAGE_SHIFT, mfn,
> >>                                 ASSIGN_writable | ASSIGN_pgc_allocated);
> >> +    j = 1 << (PAGE_SHIFT-PAGE_SHIFT_4K);
> >> +    for(i = 0 ; i < j; i++)
> >> +        iommu_map_page(d, gpfn*j + i, mfn*j + i);
> >> +
> >>  }
> >>
> >>  int
> >
> > The same loop logic appears twice. Would introducing a helper function help?
> 

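Regarding the duplicated loop, something like this is what I have in
mind (an untested sketch; the helper name is made up). Since PAGE_SHIFT
on ia64 is larger than PAGE_SHIFT_4K, each guest page covers several 4K
IOMMU pages, and both call sites repeat the same conversion:

    /* Untested sketch; the helper name is made up.  Maps or unmaps all
     * 4K IOMMU pages backing one ia64 page. */
    static void __iommu_map_domain_page(struct domain *d, unsigned long gpfn,
                                        unsigned long mfn, int map)
    {
        int i;
        int j = 1 << (PAGE_SHIFT - PAGE_SHIFT_4K); /* 4K pages per page */

        for (i = 0; i < j; i++) {
            if (map)
                iommu_map_page(d, gpfn * j + i, mfn * j + i);
            else
                iommu_unmap_page(d, gpfn * j + i);
        }
    }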
-- 
yamahata

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

