
Re: [Xen-devel] [PATCH 2 of 5] Improve ring management for memory events. Do not lose guest events



> On Tue, Nov 29, Andres Lagar-Cavilla wrote:
>
>> @@ -133,31 +185,95 @@ static int mem_event_disable(struct mem_
>>      return 0;
>>  }
>>
>> -void mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
>> +static inline int mem_event_ring_free(struct domain *d, struct mem_event_domain *med)
>>  {
>> +    int free_requests;
>> +
>> +    free_requests = RING_FREE_REQUESTS(&med->front_ring);
>> +    if ( unlikely(free_requests < d->max_vcpus) )
>> +    {
>> +        /* This may happen during normal operation (hopefully not often). */
>> +        gdprintk(XENLOG_INFO, "mem_event request slots for domain %d: %d\n",
>> +                               d->domain_id, free_requests);
>> +    }
>> +
>> +    return free_requests;
>> +}
>> +
>> +/* Return values
>> + * zero: success
>> + * -ENOSYS: no ring
>> + * -EAGAIN: ring is full and the event has not been transmitted.
>> + *          Only foreign vcpus get EAGAIN
>> + * -EBUSY: guest vcpu has been paused due to ring congestion
>> + */
>> +int mem_event_put_request(struct domain *d, struct mem_event_domain *med, mem_event_request_t *req)
>> +{
>> +    int ret = 0;
>> +    int foreign = (d != current->domain);
>
>> +    /*
>> +     * We ensure that each vcpu can put at least *one* event -- because some
>> +     * events are not repeatable, such as dropping a page.  This will ensure no
>> +     * vCPU is left with an event that they must place on the ring, but cannot.
>> +     * They will be paused after the event is placed.
>> +     * See large comment below in mem_event_unpause_vcpus().
>> +     */
>> +    if ( !foreign && mem_event_ring_free(d, med) < d->max_vcpus )
>> +    {
>> +        mem_event_mark_and_pause(current, med);
>> +        ret = -EBUSY;
>> +    }
>>
>>      mem_event_ring_unlock(med);
>>
>>      notify_via_xen_event_channel(d, med->xen_port);
>> +
>> +    return ret;
>
>
> What will happen if the guest has more vcpus than r->nr_ents in the ring
> buffer? To me it looks like no event can be placed into the ring and
> -EBUSY is returned instead.

MAX_HVM_VCPUS sits at 128 right now. I haven't checked with a compile, but
that probably means we would need a two-page ring. And then, when 1024-CPU
hosts arrive and we grow MAX_HVM_VCPUS, we would have to grow the ring size
again.
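
To put rough numbers on that, here is a small standalone model of the
ring-sizing arithmetic (a sketch, not Xen code). The header and entry sizes
are assumptions for illustration only; the real values come from
xen/io/ring.h and the size of the mem_event request/response union. The
round-down step mirrors what __RD32() does when the ring is laid out over a
fixed buffer.

    /*
     * Standalone sketch of the ring-sizing arithmetic.  SRING_HDR and
     * ENTRY_SIZE are assumed values for illustration only.
     */
    #include <stdio.h>

    #define RING_PAGE_SIZE 4096u
    #define SRING_HDR        64u  /* assumed space for producer/consumer fields */
    #define ENTRY_SIZE       48u  /* assumed size of the request/response union */

    /* Round down to a power of two, as __RD32() in xen/io/ring.h does. */
    static unsigned int rd_pow2(unsigned int x)
    {
        while ( x & (x - 1) )
            x &= x - 1;
        return x;
    }

    static unsigned int nr_ents(unsigned int ring_bytes)
    {
        return rd_pow2((ring_bytes - SRING_HDR) / ENTRY_SIZE);
    }

    int main(void)
    {
        printf("one-page ring: %u entries\n", nr_ents(RING_PAGE_SIZE));     /* 64 */
        printf("two-page ring: %u entries\n", nr_ents(2 * RING_PAGE_SIZE)); /* 128 */
        return 0;
    }

With those assumed sizes, one page tops out below 128 entries and two pages
reach exactly 128, which is why reserving one slot per vCPU collides with
MAX_HVM_VCPUS.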

Or, we could limit the constraint to the number of online vcpus, which
would get somewhat tricky for vcpu hot-plugging.
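
For what it's worth, the online-vCPU variant could look something like the
untested sketch below; it is not part of the patch and simply leans on the
usual helpers (for_each_vcpu, VPF_down). A vCPU coming online between this
count and the ring check could still leave us a slot short, which is exactly
the hot-plug wrinkle.

    /*
     * Untested sketch, not part of the patch: bound the per-vCPU slot
     * reservation by the number of online vCPUs instead of d->max_vcpus.
     */
    #include <xen/sched.h>

    static unsigned int mem_event_online_vcpus(struct domain *d)
    {
        struct vcpu *v;
        unsigned int online = 0;

        for_each_vcpu ( d, v )
            if ( !test_bit(_VPF_down, &v->pause_flags) )
                online++;

        return online;
    }

The put path would then compare mem_event_ring_free() against this count
rather than d->max_vcpus.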

I can fix that separately, once there is a decision on which way to go re
ring management.
Andres

>
> Olaf
>
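
As an aside for readers skimming the archive: the return-value contract
documented in the hunk quoted above can be sketched as follows. This is a
standalone illustration with placeholder names (handle_put_result and the
action enum), not Xen code; the -EBUSY case follows the patch comment that
the guest vcpu is paused after its event is placed.

    /*
     * Standalone illustration of the documented return values of
     * mem_event_put_request(); the names here are placeholders.
     */
    #include <errno.h>
    #include <stdio.h>

    enum action { DONE, DROP, RETRY_LATER, VCPU_PAUSED };

    static enum action handle_put_result(int rc)
    {
        switch ( rc )
        {
        case 0:       return DONE;         /* event is on the ring */
        case -ENOSYS: return DROP;         /* no ring set up for this domain */
        case -EAGAIN: return RETRY_LATER;  /* foreign vcpu, ring full, event
                                            * not transmitted; try again */
        case -EBUSY:  return VCPU_PAUSED;  /* guest vcpu: event placed, vcpu
                                            * paused until the ring drains */
        default:      return DROP;
        }
    }

    int main(void)
    {
        static const char *names[] =
            { "DONE", "DROP", "RETRY_LATER", "VCPU_PAUSED" };
        int codes[] = { 0, -ENOSYS, -EAGAIN, -EBUSY };
        unsigned int i;

        for ( i = 0; i < sizeof(codes) / sizeof(codes[0]); i++ )
            printf("rc = %d -> %s\n", codes[i],
                   names[handle_put_result(codes[i])]);

        return 0;
    }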



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

