
RE: [Xen-ia64-devel][PATCH] add one page attribute to indicate this page is physical IO page


  • To: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>
  • From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
  • Date: Fri, 17 Oct 2008 15:43:00 +0800
  • Accept-language: en-US
  • Cc: xen-ia64-devel <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Fri, 17 Oct 2008 00:43:32 -0700
  • List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
  • Thread-index: AckwJylmsk/YM1XHQGOiN/n2/v0RNgAAuRAg
  • Thread-topic: [Xen-ia64-devel][PATCH] add one page attribute to indicate this page is physical IO page

Isaku Yamahata wrote:
> On Fri, Oct 17, 2008 at 02:41:59PM +0800, Xu, Anthony wrote:
>> Isaku Yamahata wrote:
>>> Thanks for the explanation.
>>> I took a look at the big patch which you sent privately.
>>> Please correct me if I'm wrong because I haven't reviewed
>>> it line by line.
>>>
>>> It seems that _PAGE_DIRECT_IO isn't necessary.
>>> The only users of _PAGE_DIRECT_IO (or ASSIGN_direct_io) are the
>>> XEN_DOMCTL_memory_mapping and XEN_DOMCTL_ioport_mapping hypercalls,
>>> which are used to assign MMIO pages or port I/O pages to a domain.
>>> I guess that you had hit page reference count issues in
>>> mm_teardown_pte(), so you added the _PAGE_DIRECT_IO flag to avoid that.
>>> Correct?
>>
>>
>> I did not encounter a reference count issue.
>>
>> The issue I encountered is that when Xen destroys a domain with a
>> device assigned via VT-d, Xen tries to free all memory pages,
>> including the device MMIO pages, which of course are not normal
>> memory pages and cannot be freed.
>>
>>
>> -    while (mmio_start <= mmio_end) {
>> -        (void)__assign_domain_page(d, mmio_start, mach_start, ASSIGN_nocache);
>> +    while (mach_start < mach_end) {
>> +        __assign_domain_page(d, mmio_start,
>> +                mach_start, ASSIGN_nocache | ASSIGN_direct_io);
>>
>>
>> The old code used __assign_domain_page to assign MMIO pages to the
>> guest with the ASSIGN_nocache flag. However, pte_mem treats an
>> ASSIGN_nocache page as a normal page, so I added _PAGE_DIRECT_IO to
>> indicate a new memory page type (MMIO).
>> Xen can then avoid freeing MMIO pages.
>>
>>
>> -#define pte_mem(pte)   (!(pte_val(pte) & _PAGE_IO) && !pte_none(pte))
>> +#define pte_mem(pte)   (!(pte_val(pte) & _PAGE_DIRECT_IO) &&         \
>> +       !(pte_val(pte) & _PAGE_IO) && !pte_none(pte))
>>
>
> We are talking about mm_teardown_pte(). Are we on the same page?
>
> Isn't such a case detected by mfn_valid()? I suppose the above
> statement is saying no.
> Assuming so, there is a corresponding struct page_info, but the mfn
> isn't RAM. So such a page should be detected by is_iomem_page(mfn).
> However, the current implementation of is_iomem_page() may not work,
> because we don't have such pages owned by DOMID_IO.
> So the thing to do is to make those pages owned by DOMID_IO, so that
> is_iomem_page() works correctly, and then modify pte_mem() to call
> is_iomem_page().

I understand your point.
Since we already have DOMID_IO to indicate an MMIO page,
_PAGE_DIRECT_IO is not needed.
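
Roughly, I expect the reworked check to look something like the sketch
below. This is only an untested sketch of the direction: the
is_iomem_page() here mirrors the x86 helper of the same name, and it
assumes the MMIO page_info structures have already been handed over to
DOMID_IO (dom_io) when the range is first assigned.

    /* Untested sketch: mirrors x86's is_iomem_page().  A page counts
     * as iomem if its mfn has no struct page_info at all, or if the
     * page_info is owned by DOMID_IO. */
    static inline int is_iomem_page(unsigned long mfn)
    {
        return !mfn_valid(mfn) ||
               (page_get_owner(mfn_to_page(mfn)) == dom_io);
    }

    /* pte_mem() can then key off page ownership instead of a new
     * _PAGE_DIRECT_IO bit: */
    #define pte_mem(pte)                                  \
        (!pte_none(pte) && !(pte_val(pte) & _PAGE_IO) &&  \
         !is_iomem_page(pte_pfn(pte)))

With that, mm_teardown_pte() should skip MMIO pages naturally, since
pte_mem() no longer reports them as normal memory.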

I'll rebuild the patch.

Thanks,
Anthony


>
> thanks,
>
>>
>> Thanks,
>> Anthony
>>
>>
>>>
>>> However, this isn't the correct approach.
>>> ia64 does reference counting through the p2m table. That's a
>>> significant difference from x86. So you have to attack the problem
>>> directly instead of working around it.
>>>
>>> - The hypercalls, XEN_DOMCTL_memory_mapping and
>>>   XEN_DOMCTL_ioport_mapping, don't check whether the mfn is
>>>   appropriate to assign to a given domain. At least the VMM should
>>>   check it. The x86 port seems to have the same issue.
>>>
>>> - I expected that mfn_valid(mmio page or port io page) would return
>>>   false. If that is correct, _PAGE_DIRECT_IO won't be necessary, I
>>>   suppose. However, you seem to have hit a case where mfn_valid()
>>>   returned true. (Yes, it depends on the memory and I/O layout, so
>>>   it's safest to assume so anyway.) Hmm, in such a case the
>>>   corresponding struct page_info isn't used on ia64. In the x86
>>>   case, such page_info structures are owned by DOMID_IO. So by
>>>   making those page_info structures owned by DOMID_IO during the
>>>   initialization sequence, we can detect such cases.
>>>
>>> thanks,
>>>
>>> On Thu, Oct 16, 2008 at 06:23:01PM +0800, Xu, Anthony wrote:
>>>> Isaku Yamahata wrote:
>>>>> Hi Anthony.
>>>>>
>>>>> I guess you are working on VT-d support for IA64.
>>>>> You've sent out those patches independently; they seem to be
>>>>> preparation for VT-d support.
>>>>>
>>>>> However, it is very difficult to guess how you are planning to
>>>>> use those modifications eventually, so it's very difficult to
>>>>> review those patches. At this moment I can comment only on patch
>>>>> style, which isn't essential.
>>>>>
>>>>> If you already have working patches, could you please send them
>>>>> as a series of patches? (Even un-cleaned patches would help my
>>>>> understanding.) If not, could you provide an overview of the
>>>>> design, or something similar that helps me understand the big
>>>>> picture and how the VT-d patches will be implemented?
>>>>
>>>> Yes, I already have working patches, but some of them depend on
>>>> x86-side patches, and some x86-side patches depend on ia64-side
>>>> patches.
>>>> It is difficult to send them all out at once,
>>>> so I tried to send out the patches that don't touch the x86 side
>>>> first.
>>>>
>>>>
>>>>>
>>>>> At this moment the following points come to mind. (Random
>>>>> thoughts.)
>>>>> - One of the Xen/IA64 features is a lockless P2M table, unlike
>>>>>   the x86 case. I think it would be very difficult to keep the
>>>>>   VT-d translation table consistent with the p2m and m2p tables
>>>>>   without a lock.
>>>>
>>>> Because the p2m and m2p tables do not change, there is no need to
>>>> maintain consistency. If page flipping is used by a PV driver,
>>>> VT-d can't work; the x86 side has the same issue. Xen doesn't know
>>>> when the VT-d page table can safely be changed, because the page
>>>> table may be in use by the VT-d engine.
>>>>
>>>>>
>>>>> - What scope are you aiming for?
>>>>>   Currently x86 supports VT-d for VMM protection, dom0, PV domU,
>>>>>   and HVM with ballooning.
>>>> Ballooning changes the VT-d page table, which carries some risk:
>>>> the VT-d engine may be using the VT-d page table at that moment.
>>>>
>>>>
>>>>>   On the other hand, ia64 doesn't support ballooning for HVM
>>>>>   because the p2m for an HVM domain is assumed to be read-only.
>>>>>   How about MSI(-X)?
>>>> If the host uses MSI and there is one interrupt per function, we
>>>> can use the IOAPIC to emulate the MSI interrupt.
>>>> There is a potential issue here, because this turns an
>>>> edge-triggered interrupt into a level-triggered interrupt.
>>>> But if both host and guest use MSI, there is no issue.
>>>>
>>>> If the host uses MSI (or MSI-X) and there is more than one
>>>> interrupt per function, it is difficult to use the IOAPIC to
>>>> emulate MSI.
>>>>
>>>> Anthony
>>>>
>>>>
>>>>>
>>>>> thanks,
>>>>>
>>>>> On Sun, Sep 28, 2008 at 12:48:48PM +0800, Xu, Anthony wrote:
>>>>>> Add the _PAGE_DIRECT_IO page attribute to indicate that a page
>>>>>> is a physical IO page.
>>>>>>
>>>>>> Signed-off-by: Anthony Xu <anthony.xu@xxxxxxxxx>
>>>>>
>>>>>

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel


 

