
Re: [Xen-devel] [PATCH 1 of 2] After preparing a page for page-in, allow immediate fill-in of the page contents



> Hi,
>
> This looks good to me.  I think it needs two more things to make it
> correct (as well as the tools patch 2/2):
>  - an update to the xenpaging tool to use the new interface; and
Sure, I have it ready, and will definitely cc Olaf for his ack on that one.
>  - a possible update to the paging state machine --- after all, if the
>    prep call allocates the page and fills its contents, do we need
>    any more stages on the page-in path?
I am somewhat torn about this. The pager may want to do a set of loads
and then fire off many vcpu unpauses in a batched fashion (which is
possible with the patches I submitted later).

This isn't a necessity for correctness, though. And we still need the
resume kick for cases like a guest accessing a page that has not been
paged out yet (p2m_ram_paging_out).
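
For illustration, the batched pattern I have in mind looks roughly like
this (pseudocode; the function names are hypothetical stand-ins for the
pager-side calls, not the actual libxc API):

```
/* Phase 1: prep each gfn and fill its contents in one call */
for ( i = 0; i < nr_pages; i++ )
    prep_and_load(domid, gfn[i], buffer[i]);

/* Phase 2: kick all waiting vcpus in one batch */
for ( i = 0; i < nr_pages; i++ )
    resume(domid, gfn[i]);
```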

Andres

>
> One more comment below:
>
> At 16:52 -0500 on 29 Nov (1322585560), Andres Lagar-Cavilla wrote:
>> diff -r 4ee6d40edc2c -r 0ce71e5bfaac xen/arch/x86/mm/p2m.c
>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -974,14 +974,43 @@ void p2m_mem_paging_populate(struct doma
>>   * mfn if populate was called for gfn which was nominated but not
>>   * evicted. In this case only the p2mt needs to be forwarded.
>>   */
>> -int p2m_mem_paging_prep(struct domain *d, unsigned long gfn)
>> +int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer)
>>  {
>>      struct page_info *page;
>>      p2m_type_t p2mt;
>>      p2m_access_t a;
>> -    mfn_t mfn;
>> +    mfn_t mfn, buf_mfn = _mfn(INVALID_MFN);
>>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>> -    int ret;
>> +    int ret, page_extant = 1;
>> +    void *buf_map = NULL;
>> +
>> +    /* Map buffer page, if any, and get a reference */
>> +    if ( buffer )
>> +    {
>> +        l1_pgentry_t l1e;
>> +        unsigned long buf_gfn;
>> +        p2m_type_t buf_p2mt;
>> +
>> +        if ( (buffer & (PAGE_SIZE - 1)) ||
>> +             (!access_ok(buffer, PAGE_SIZE)) )
>> +            return -EINVAL;
>> +
>> +        guest_get_eff_l1e(current, buffer, &l1e);
>> +        buf_gfn = l1e_get_pfn(l1e);
>> +        buf_mfn = get_gfn(current->domain, buf_gfn,
>> +                            &buf_p2mt);
>> +
>> +        if ( likely( mfn_valid(buf_mfn) &&
>> +                     p2m_is_ram(buf_p2mt) ) )
>> +        {
>> +            get_page(mfn_to_page(buf_mfn), current->domain);
>> +            buf_map = map_domain_page(mfn_x(buf_mfn));
>> +            put_gfn(current->domain, buf_gfn);
>> +        } else {
>> +            put_gfn(current->domain, buf_gfn);
>> +            return -EINVAL;
>> +        }
>> +    }
>
> We could maybe avoid all this mechanism by doing a copy_from_user() of
> the buffer contents directly into the new page, instead of an explicit
> map-and-memcpy().
>
> Cheers,
>
> Tim.
>
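
For reference, Tim's copy_from_user() suggestion would shrink the
mapping logic above to roughly the following (a pseudocode sketch, not
a tested patch; the exact mapping helpers would need checking against
the tree):

```
/* Copy the page contents straight from the guest-virtual buffer into
 * the newly allocated page, letting copy_from_user() do the access
 * checks instead of the explicit get_gfn/map_domain_page dance. */
if ( buffer )
{
    void *dst = __map_domain_page(page);   /* the freshly allocated page */
    if ( copy_from_user(dst, (void *)buffer, PAGE_SIZE) )
        ret = -EFAULT;                     /* partial copy from guest */
    unmap_domain_page(dst);
}
```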



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

