Re: [Xen-devel] [PATCH 2 of 4] xenpaging: use wait queues
> On Thu, Dec 01, Andres Lagar-Cavilla wrote:
>
>> > + spin_lock_init(&d->arch.hvm_domain.gfn_lock);
>> Probably gfn_lock should be replaced with a name a bit more expressive.
>
> Ok, perhaps paged_gfn_queue_lock?
>
>> That notwithstanding, it seems this lock only gets used in the mm layer.
>> If so, it should be declared as an mm_lock_t in arch/x86/mm-locks.h, and
>> subject to ordering constraints.
>
> Is that really needed? The lock is not taken recursively. What would be
> the benefit of the mm_lock_t in this context? That's not clear to me.
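For reference, declaring it in mm-locks.h would presumably follow the
existing pattern there (e.g. nestedp2m_lock); a sketch only, names are
illustrative and not from the patch:

    /* Sketch: the position in mm-locks.h encodes the ordering
     * constraint relative to the other mm locks, and the field
     * would become an mm_lock_t instead of a spinlock_t. */
    declare_mm_lock(paged_gfn_queue)
    #define paged_gfn_queue_lock(d) \
        mm_lock(paged_gfn_queue, &(d)->arch.hvm_domain.gfn_lock)
    #define paged_gfn_queue_unlock(d) \
        mm_unlock(&(d)->arch.hvm_domain.gfn_lock)
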
>
>> Both gfn_lock and the list of wait queues could be collapsed in a single
>> struct, so as to decrease pressure on the size of struct domain.
>
> The global lock allows adjusting the number of vcpus, but yes, maybe it
> can be moved into p2m_mem_paging_queue_head.
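Collapsing the two might look roughly like this (an illustrative sketch,
not taken from the patch):

    struct p2m_mem_paging_queue_head {
        spinlock_t lock;        /* replaces d->arch.hvm_domain.gfn_lock */
        struct list_head list;  /* per-vcpu p2m_mem_paging_queue entries */
        unsigned int max;       /* adjusted when the number of vcpus changes */
    };
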
>
>
>> > +static struct p2m_mem_paging_queue *p2m_mem_paging_get_queue(struct domain *d, unsigned long gfn)
>> > +{
>> > +    struct p2m_mem_paging_queue_head *h;
>> > +    struct p2m_mem_paging_queue *q, *q_match, *q_free;
>> > +
>> > +    h = d->arch.hvm_domain.gfn_queue;
>> > +    q_match = q_free = NULL;
>> > +
>> > +    spin_lock(&d->arch.hvm_domain.gfn_lock);
>> > +
>> > +    list_for_each_entry(q, &h->list, list) {
>> "Hashing" the gfn ( i.e. gfn & ((1 << order_of_vcpus) - 1) ) and
>> starting
>> the linear scan from there would shortcut the common case of finding a
>> queued gfn.
>
> I will think about that, good point. Keeping the individual
> p2m_mem_paging_queue* in a flat list could be done.
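For illustration, with the queues kept in a flat array sized to a power
of two, the hashed starting point could look roughly like this (a sketch,
names are illustrative):

    static struct p2m_mem_paging_queue *
    queue_lookup(struct p2m_mem_paging_queue *q, unsigned int nr,
                 unsigned long gfn)
    {
        unsigned int i, start = gfn & (nr - 1);

        /* Start the scan at hash(gfn); a queued gfn is then
         * usually found on the first probe. */
        for ( i = 0; i < nr; i++ )
        {
            unsigned int idx = (start + i) & (nr - 1);
            if ( q[idx].gfn == gfn )
                return &q[idx];
        }
        return NULL;
    }
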
>
>> Checking the p2m type of the gfn for paging in would make the search
>> O(1) for the case in which the gfn is not queued.
>
> What type check do you mean? These functions are only called from
> within get_gfn_type_access() if p2m_is_paging.
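If the idea is to test for the page-in types before walking the queue
at all, a sketch might be:

    /* Sketch: a gfn can only have a queued waiter while it is in the
     * page-in path, so skip the scan for all other paging types. */
    if ( p2mt != p2m_ram_paging_in_start && p2mt != p2m_ram_paging_in )
        return NULL;
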
>
>> > +/* Returns 0 if the gfn is still paged */
>> > +static int p2m_mem_paging_get_entry(mfn_t *mfn,
>> > +                struct p2m_domain *p2m, unsigned long gfn,
>> > +                p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
>> > +                unsigned int *page_order)
>> > +{
>> > +    *mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order);
>> This will be called in the wake up routine, and it will be racy against
>> any concurrent p2m updates.
>
> How is this different from the current code in get_gfn_type_access()?
> Is get_gfn_type_access() protected by any lock? I thought that only
> p2m_query is allowed to take the p2m_lock?
I proposed using get_gfn* not realizing that it would walk back into
the wait code, my bad. Point is, if you don't perform a lock-protected
access here then it'll have to be updated later.
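Roughly (a sketch, assuming the p2m_lock/p2m_unlock helpers from
mm-locks.h):

    p2m_lock(p2m);
    *mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order);
    /* Re-check the type under the lock so a concurrent p2m update
     * cannot slip in between the lookup and the decision. */
    paged = p2m_is_paging(*t);
    p2m_unlock(p2m);
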
And ...
>
>> > -    /* Evict will fail now, tag this request for pager */
>> > -    if ( p2mt == p2m_ram_paging_out )
>> > -        req.flags |= MEM_EVENT_FLAG_EVICT_FAIL;
>> > +    /* Forward the state only if gfn is in page-out path */
>> > +    if ( p2mt == p2m_ram_paging_out || p2mt == p2m_ram_paged ) {
>> > +        /* Ignore foreign requests to allow mmap in pager */
>> > +        if ( mfn_valid(mfn) && p2mt == p2m_ram_paging_out && v->domain == d ) {
>> > +            /* Restore gfn because it is needed by guest before evict */
>> > +            set_p2m_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>> > +                          paging_mode_log_dirty(d) ? p2m_ram_logdirty : p2m_ram_rw, a);
>> No event here to tell the pager to cancel its work?
>
> The pager is in the process of writing the gfn to disk after successful
> nomination. Its next step is the eviction, which will now fail because
> the p2mt is not p2m_ram_paging_out anymore. It is already prepared for
> that failure.
> The previous MEM_EVENT_FLAG_EVICT_FAIL was not needed in the first
> place.
Yes it was! The biggest source of overhead is I/O. Tell the pager to stop
I/O early. Otherwise, you've done the page-out I/O pointlessly, and only
find out when you try to finalize the eviction.
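On the pager side that would look roughly like this (a sketch;
cancel_pageout() is illustrative, not an existing xenpaging helper):

    /* In the pager's event loop: abandon the in-flight page-out as
     * soon as the event is flagged, instead of completing the write
     * and only failing later at evict time. */
    if ( req.flags & MEM_EVENT_FLAG_EVICT_FAIL )
    {
        cancel_pageout(paging, req.gfn);  /* stop the I/O early */
        continue;
    }
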
Andres
>
>> You're not just adding wait queues, but also short-cutting the state
>> machine of paging, which, whilst delicate, works quite well right now.
>> You should definitely split that into two patches if possible.
>
> I think that can be done, yes.
>
>> And please keep in mind that there are pagers other than xenpaging, so
>> interface churn is a big headache for everyone, if unavoidable.
>
> There will be a few changes to the interface in the near future to
> make it more robust.
>
>
>> > mem_event_put_req_producers(d, &d->mem_event->paging);
>> These (mem_event_put_req_producers, foreign_producers) are the kinds of
>> things that are really confusing in the ring management side right now.
>
> You are right, the names should be changed in the patches for mem_event.
>
> Olaf
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel