
Re: [Xen-devel] [PATCH] x86/mm: Improve ring management for memory events. Do not lose guest events


  • To: "Olaf Hering" <olaf@xxxxxxxxx>
  • From: "Andres Lagar-Cavilla" <andres@xxxxxxxxxxxxxxxx>
  • Date: Fri, 13 Jan 2012 07:14:12 -0800
  • Cc: andres@xxxxxxxxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, tim@xxxxxxx, adin@xxxxxxxxxxxxxx
  • Delivery-date: Fri, 13 Jan 2012 15:13:34 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

> On Wed, Jan 11, Andres Lagar-Cavilla wrote:
>
> A few comments:
>
>> -static int mem_event_disable(struct mem_event_domain *med)
>> +static int mem_event_ring_available(struct mem_event_domain *med)
>>  {
>> -    unmap_domain_page(med->ring_page);
>> -    med->ring_page = NULL;
>> +    int avail_req = RING_FREE_REQUESTS(&med->front_ring);
>> +    avail_req -= med->target_producers;
>> +    avail_req -= med->foreign_producers;
>>
>> -    unmap_domain_page(med->shared_page);
>> -    med->shared_page = NULL;
>> +    BUG_ON(avail_req < 0);
>> +
>> +    return avail_req;
>> +}
>> +
>
> mem_event_ring_available() should return unsigned since the values it
> provides can only be positive. The function itself enforces this.

Yup.
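Roughly, the unsigned variant would read like the sketch below (just a
sketch, not compile-tested):

    static unsigned int mem_event_ring_available(struct mem_event_domain *med)
    {
        int avail_req = RING_FREE_REQUESTS(&med->front_ring);

        avail_req -= med->target_producers;
        avail_req -= med->foreign_producers;

        /* The BUG_ON still enforces that the subtraction never goes
         * negative, so the value handed back always fits the unsigned
         * return type. */
        BUG_ON(avail_req < 0);

        return avail_req;
    }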
>
>> -void p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn)
>> +int p2m_mem_paging_drop_page(struct domain *d, unsigned long gfn)
>>  {
>> -    struct vcpu *v = current;
>>      mem_event_request_t req;
>>
>> -    /* Check that there's space on the ring for this request */
>> -    if ( mem_event_check_ring(d, &d->mem_event->paging) == 0)
>> -    {
>> -        /* Send release notification to pager */
>> -        memset(&req, 0, sizeof(req));
>> -        req.flags |= MEM_EVENT_FLAG_DROP_PAGE;
>> -        req.gfn = gfn;
>> -        req.vcpu_id = v->vcpu_id;
>> +    /* We allow no ring in this unique case, because it won't affect
>> +     * correctness of the guest execution at this point.  If this is the only
>> +     * page that happens to be paged-out, we'll be okay..  but it's likely the
>> +     * guest will crash shortly anyways. */
>> +    int rc = mem_event_claim_slot(d, &d->mem_event->paging);
>> +    if ( rc < 0 )
>> +        return rc;
>>
>> -        mem_event_put_request(d, &d->mem_event->paging, &req);
>> -    }
>> +    /* Send release notification to pager */
>> +    memset(&req, 0, sizeof(req));
>> +    req.type = MEM_EVENT_TYPE_PAGING;
>> +    req.gfn = gfn;
>> +    req.flags = MEM_EVENT_FLAG_DROP_PAGE;
>> +
>> +    mem_event_put_request(d, &d->mem_event->paging, &req);
>> +    return 0;
>>  }
>
> p2m_mem_paging_drop_page() should remain void because the caller has
> already done its work, making it not restartable. Also, it is only called
> when a gfn is in paging state, which I'm sure cannot happen without a
> ring.

Well, the rationale is that returning an error code can only help, should
new error conditions arise. Keep in mind that the pager and the ring can
disappear at any time, so ENOSYS can still happen.
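For illustration only (this is not the actual call site), a caller could
then do something along these lines instead of silently losing the
notification when the ring or pager has gone away:

    int rc = p2m_mem_paging_drop_page(d, gfn);
    if ( rc < 0 )
        gdprintk(XENLOG_WARNING,
                 "gfn %lx: drop-page event not sent (rc %d)\n", gfn, rc);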
>
> And quilt says:
> Warning: trailing whitespace in lines 167,254 of
> xen/arch/x86/mm/mem_event.c
> Warning: trailing whitespace in line 168 of xen/common/memory.c
> Warning: trailing whitespace in line 1127 of xen/arch/x86/mm/p2m.c
>
quilt ... the good times.

I'll refresh and add your Signed-off-by to cover the portions of the work
that originate from your end. Is that ok?

Thanks,
Andres
>
> Olaf
>



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

