
Re: [Xen-devel] [PATCH] mem_event: use wait queue when ring is full



On Fri, Dec 16, Andres Lagar-Cavilla wrote:

> >> And both should use wait queues in extreme cases in which a guest
> >> vcpu with a single action generates multiple memory events. Given
> >> that when we hit a border condition the guest vcpu will place one
> >> event and be flagged VPF_mem_event_paused (or whatever that flag is
> >> named), if a guest vcpu generates another event when flagged,
> >> that's our cue for putting the vcpu on a wait queue.
> >
> > An extra flag is not needed.
> Can you elaborate? Which flag is not needed? And why?

The flag you mentioned in your earlier reply, VPF_mem_event_paused.
Since the vcpu is already preempted, an additional pause flag would
gain nothing.
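The overall pattern under discussion — block a producer on a wait queue
while the ring is full, and wake it once the consumer frees a slot — can
be sketched generically. This is a minimal illustration using pthreads
as a stand-in for Xen's wait queues; none of the names below are Xen's
actual API.

```c
#include <pthread.h>
#include <assert.h>

#define RING_SLOTS 4

/* Occupied ring slots; illustrative stand-in for the mem_event ring. */
static unsigned int used;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t space = PTHREAD_COND_INITIALIZER;

/* A producer (guest vcpu, in the Xen analogy) placing a request. */
void place_request(void)
{
    pthread_mutex_lock(&lock);
    /* Equivalent of putting the vcpu on a wait queue: sleep until
     * the consumer has freed at least one slot. */
    while (used == RING_SLOTS)
        pthread_cond_wait(&space, &lock);
    used++;
    pthread_mutex_unlock(&lock);
}

/* The consumer (mem_event handler) draining one request. */
void consume_request(void)
{
    pthread_mutex_lock(&lock);
    assert(used > 0);
    used--;
    /* Wake one waiting producer, like unpausing a blocked vcpu. */
    pthread_cond_signal(&space);
    pthread_mutex_unlock(&lock);
}
```

The point of the thread's argument is visible here: the blocked producer
needs no extra flag of its own, because being asleep on the wait queue
already encodes its state.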

> > +    /*
> > +     * Configure ring accounting:
> > +     * Each guest vcpu should be able to place at least one request.
> > +     * If there are more vcpus than available slots in the ring, not all
> > +     * vcpus can place requests in the ring anyway.  A minimum (arbitrary)
> > +     * number of foreign requests will be allowed in this case.
> > +     */
> > +    if ( d->max_vcpus < RING_SIZE(&med->front_ring) )
> > +        med->max_foreign = RING_SIZE(&med->front_ring) - d->max_vcpus;
> > +    if ( med->max_foreign < 13 )
> > +        med->max_foreign = 13;
> Magic number! Why?

Yes, it is an arbitrary number of slots reserved for foreign requests.
Which value is correct? 1? 5? 10?
1 is probably closest to the goal of 'let each vcpu place at least one
request'.

> More generally, does this patch apply on top of a previous patch? What's
> the context here?

As I said, it's on top of v6 of my patch. I will send out the full patch
later, but I won't be able to actually test the newer version this year.

Olaf

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

