
Re: [Xen-devel] [PATCH v2 07/13] iommu: Make decision about needing IOMMU for hardware domains in advance



>>> On 07.12.17 at 14:50, <olekstysh@xxxxxxxxx> wrote:
> On Thu, Dec 7, 2017 at 10:57 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>> On 06.12.17 at 20:23, <olekstysh@xxxxxxxxx> wrote:
>>> On Wed, Dec 6, 2017 at 7:01 PM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>>>>> On 25.07.17 at 19:26, <olekstysh@xxxxxxxxx> wrote:
>>>>> @@ -175,37 +182,6 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
>>>>>          return;
>>>>>
>>>>>      register_keyhandler('o', &iommu_dump_p2m_table, "dump iommu p2m table", 0);
>>>>> -    d->need_iommu = !!iommu_dom0_strict;
>>>>> -    if ( need_iommu(d) && !iommu_use_hap_pt(d) )
>>>>> -    {
>>>>> -        struct page_info *page;
>>>>> -        unsigned int i = 0;
>>>>> -        int rc = 0;
>>>>> -
>>>>> -        page_list_for_each ( page, &d->page_list )
>>>>> -        {
>>>>> -            unsigned long mfn = page_to_mfn(page);
>>>>> -            unsigned long gfn = mfn_to_gmfn(d, mfn);
>>>>> -            unsigned int mapping = IOMMUF_readable;
>>>>> -            int ret;
>>>>> -
>>>>> -            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
>>>>> -                 ((page->u.inuse.type_info & PGT_type_mask)
>>>>> -                  == PGT_writable_page) )
>>>>> -                mapping |= IOMMUF_writable;
>>>>> -
>>>>> -            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
>>>>> -            if ( !rc )
>>>>> -                rc = ret;
>>>>> -
>>>>> -            if ( !(i++ & 0xfffff) )
>>>>> -                process_pending_softirqs();
>>>>> -        }
>>>>> -
>>>>> -        if ( rc )
>>>>> -            printk(XENLOG_WARNING "d%d: IOMMU mapping failed: %d\n",
>>>>> -                   d->domain_id, rc);
>>>>> -    }
>>>>>
>>>>>      return hd->platform_ops->hwdom_init(d);
>>>>>  }
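
For reference, the work done by the removed hunk, factored out as a
standalone helper, looks roughly like the sketch below. This is only an
illustration: the helper name is made up, hd is assumed to come from
dom_iommu(d) as in the surrounding function, and whether the series keeps,
moves or drops this loop is not visible in this excerpt.

    static int __hwdom_init hwdom_iommu_map_pages(struct domain *d)
    {
        const struct domain_iommu *hd = dom_iommu(d);
        struct page_info *page;
        unsigned int i = 0;
        int rc = 0;

        page_list_for_each ( page, &d->page_list )
        {
            unsigned long mfn = page_to_mfn(page);
            unsigned long gfn = mfn_to_gmfn(d, mfn);
            unsigned int mapping = IOMMUF_readable;
            int ret;

            /* Writable page types (or pages with no type refs) get RW. */
            if ( ((page->u.inuse.type_info & PGT_count_mask) == 0) ||
                 ((page->u.inuse.type_info & PGT_type_mask)
                  == PGT_writable_page) )
                mapping |= IOMMUF_writable;

            /* Record the first error, but keep mapping the rest. */
            ret = hd->platform_ops->map_pages(d, gfn, mfn, 0, mapping);
            if ( !rc )
                rc = ret;

            /* Yield periodically; the hardware domain may own many pages. */
            if ( !(i++ & 0xfffff) )
                process_pending_softirqs();
        }

        return rc;
    }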
>>>>
>>>> Just to double check - this change was tested on x86 Dom0, at
>>>> least PV (for PVH I'd at least expect that you've done some static
>>>> code analysis to make sure this doesn't put in further roadblocks)?
>>>
>>> I am afraid I didn't get the second part of this sentence.
>>
>> Understandably, since I've broken grammar in the course of
>> re-phrasing a number of times before sending. Dom0 PVH isn't
>> complete at this point, so I can't ask you to actually test it. But
>> I want to be reasonably certain that the change you make won't
>> further complicate this enabling work (you may want to also Cc
>> Roger on future versions of the patch for this very reason), all the
>> more so since on AMD we've been unconditionally using non-shared
>> page tables for quite some time. (In fact I see chances that the
>> change might actually reduce the time it takes to set up PVH Dom0,
>> especially when a sufficiently large chunk of memory is being
>> handed to it.)
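
For illustration of what "in advance" means here: instead of flipping
d->need_iommu only in iommu_hwdom_init() and then walking the whole page
list, the decision would already be in place while the hardware domain's
memory is being populated, so the ordinary p2m/IOMMU plumbing mirrors the
mappings as they are created. A minimal sketch, assuming a hypothetical
early hook; the actual placement chosen by the series is not shown in this
excerpt:

    /* Hypothetical early hook, called before dom0's memory is populated. */
    void __hwdom_init iommu_hwdom_decide_strictness(struct domain *d)
    {
        if ( !iommu_enabled )
            return;

        /*
         * With this set early, map/unmap paths see need_iommu(d) == true
         * for a strict-mode hardware domain from the very beginning.
         */
        d->need_iommu = !!iommu_dom0_strict;
    }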
> 
> As I understand it, the current patch was tested on x86 with a PV dom0
> (thanks for doing that),

This sounds as if you believe I would have tested anything. I
certainly didn't (or at least I don't recall), and never meant to.

> but wasn't
> tested with PVH dom0 since the latter wasn't ready. And there is some
> ongoing activity on bringing up PVH dom0, which the current patch may
> affect in a negative way (complicate, break, etc.).
> 
> Which pending patch(es) or parts of the already existing x86 code
> should I pay special attention to?

The question is not so much pending patches, but making sure your
changes don't adversely affect what's already in the tree. Beyond
that I'll defer to Roger.

> Sorry for the possibly naive question, but what should be done on my
> side for this patch to be accepted, apart from addressing the comments
> on the patch itself?

You will want (need) to assess the impact of your changes on
code paths you can't possibly test.

Jan

