
Re: [Xen-devel] [PATCH][RFC] consider vcpu-pin weight on Credit Scheduler TAKE2


  • To: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>
  • From: Emmanuel Ackaouy <ackaouy@xxxxxxxxx>
  • Date: Wed, 27 Jun 2007 13:46:38 +0200
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 27 Jun 2007 04:44:38 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

I think this patch is too large and intrusive in the common paths.
I understand the problem you are trying to fix. I don't think it is
serious enough to call for such a large change. The accounting
code is already tricky enough, don't you think? If you reduce the
scope of the problem you're addressing, I think we should be
able to get a much smaller, cleaner, and more robust change in place.

There are many different scenarios where pinning screws with
the configured weights. Have you considered them all?

For example:

VCPU0.0:0-1, VCPU0.1:1-2 weight 256
VCPU1.0:0-2, VCPU1.1:0-2 weight 512

Does your patch deal with cases when there are multiple
domains with multiple VCPUs each and not all sharing the
same cpu affinity mask? I'm not even sure myself what
should happen in some of these situations...
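
To make the ambiguity concrete, here is a toy calculation (standalone C, not
from the patch; splitting each VCPU's share of its domain's weight evenly over
the pcpus in its affinity mask is just one plausible interpretation) of the
per-pcpu weight totals for the example above:

/* Toy illustration only: split each VCPU's share of its domain's
 * weight evenly over the pcpus in its affinity mask, then sum per
 * pcpu.  This is one possible interpretation, not what the patch
 * actually does. */
#include <stdio.h>

int main(void)
{
    double pcpu_weight[3] = { 0.0, 0.0, 0.0 };

    /* Domain 0, weight 256, two VCPUs: masks {0,1} and {1,2}. */
    for ( int c = 0; c <= 1; c++ ) pcpu_weight[c] += 256.0 / 2 / 2;
    for ( int c = 1; c <= 2; c++ ) pcpu_weight[c] += 256.0 / 2 / 2;

    /* Domain 1, weight 512, two VCPUs, both with mask {0,1,2}. */
    for ( int v = 0; v < 2; v++ )
        for ( int c = 0; c <= 2; c++ )
            pcpu_weight[c] += 512.0 / 2 / 3;

    for ( int c = 0; c < 3; c++ )
        printf("pcpu%d: %.2f\n", c, pcpu_weight[c]);

    return 0;
}

Under that reading pcpu1 ends up carrying more aggregate weight than pcpu0
and pcpu2, and it is not obvious whether domain 1 should still get exactly
twice domain 0's time on every pcpu.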

I argue that the general problem isn't important to solve. The
interesting problem is a small subset: When a set of physical
CPUs are set aside for a specific group of domains, setting
weights for those domains should behave as expected. For
example, on an 8-way host, you could set aside 2 CPUs for
development work and assign different weights to domains
running in that dev group. You would expect the weights to
work normally.

The best way to do this, though, is not to screw around with
weights and credit when VCPUs are pinned. The cleanest
modification is to run distinct credit schedulers: one for the dev
group on its 2 CPUs, and one for the rest of the host.

You could probably achieve this in a much smaller patch which
would include administrative interfaces for creating and destroying
these dynamic CPU partition groups as well as assigning domains to
them.
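
Very roughly, and purely as a sketch of the direction I mean (all of the
names and fields below are made up for illustration, not an existing Xen
interface), a per-group scheduler instance could look like:

/* Hypothetical sketch only: these structures and functions do not
 * exist in the Xen tree; they just illustrate "one credit scheduler
 * per CPU partition group". */
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t cpumask_t;             /* simplified: one bit per pcpu */

struct csched_group {
    cpumask_t    cpus;                  /* pcpus owned by this group    */
    unsigned int weight_total;          /* sum of member domain weights */
    /* per-group runqueues, accounting timer, etc. would live here      */
};

/* Create a partition group owning the pcpus in 'cpus'.  Domains
 * assigned to the group have their weights compared only against each
 * other, so weights behave as expected inside the group. */
struct csched_group *csched_group_create(cpumask_t cpus)
{
    struct csched_group *g = calloc(1, sizeof(*g));
    if ( g != NULL )
        g->cpus = cpus;
    return g;
}

int main(void)
{
    /* e.g. an 8-way host: pcpus 0-1 for the dev group, 2-7 for the rest */
    struct csched_group *dev  = csched_group_create(0x03);
    struct csched_group *rest = csched_group_create(0xfc);
    free(dev);
    free(rest);
    return 0;
}

The point is that each group does its accounting over its own pcpus only,
so pinning inside one group never distorts the weights of domains outside it.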

On Jun 27, 2007, at 9:58, Atsushi SAKAI wrote:

Hi, Keir

This is TAKE2 of the patch to make the credit scheduler
consider vcpu-pin weights.
http://lists.xensource.com/archives/html/xen-devel/2007-06/msg00359.html

The differences from the previous version are:
1) Coding style cleanup.
2) Skip the loop for unused vcpu-pin counts.
3) Remove the pin_count == 1 case from the multiple-pin loop;
   pin_count == 1 is now handled by a separate loop.

Signed-off-by: Atsushi SAKAI <sakaia@xxxxxxxxxxxxxx>

And one question: does this patch need the following tune-up,
which reduces the nested sort loop to a single pass?

From the following:

-  /* sort weight */
-  for(j=0;j<pin_count;j++)
-  {
-      sortflag = 0;
-      for(k=1;k<pin_count;k++)
-      {
-          if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
-          {
-              sortflag = 1;
-              pcpu_id_handle  = pcpu_id_list[k-1];
-              pcpu_id_list[k-1] = pcpu_id_list[k];
-              pcpu_id_list[k]   = pcpu_id_handle;
-          }
-      }
-      if( sortflag == 0)break;
-  }

To the following:

+     /* sort weight */
+     for(k=1;k<pin_count;k++)
+     {
+          if ( pcpu_weight[pcpu_id_list[k-1]] > pcpu_weight[pcpu_id_list[k]] )
+          {
+              pcpu_id_handle  = pcpu_id_list[k-1];
+              pcpu_id_list[k-1] = pcpu_id_list[k];
+              pcpu_id_list[k]   = pcpu_id_handle;
+              if (k > 1) k -= 2;
+           }
+     }
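
For reference, here is a small standalone check (plain C, the weights below
are made-up values, nothing Xen-specific) of the single-loop variant: the
step back after a swap makes it behave like a gnome sort, so it produces the
same ordering of pcpu_id_list by pcpu_weight as the original two-loop bubble
sort.

/* Standalone check of the single-loop sort proposed above.  The
 * weights are arbitrary test values. */
#include <stdio.h>

#define PIN_COUNT 5

static void sort_single_loop(int *list, const int *weight, int n)
{
    for ( int k = 1; k < n; k++ )
    {
        if ( weight[list[k-1]] > weight[list[k]] )
        {
            int tmp   = list[k-1];
            list[k-1] = list[k];
            list[k]   = tmp;
            if ( k > 1 )
                k -= 2;        /* with the loop's k++ this steps back one */
        }
    }
}

int main(void)
{
    int pcpu_weight[PIN_COUNT]  = { 300, 100, 512, 256, 128 };
    int pcpu_id_list[PIN_COUNT] = { 0, 1, 2, 3, 4 };

    sort_single_loop(pcpu_id_list, pcpu_weight, PIN_COUNT);

    for ( int k = 0; k < PIN_COUNT; k++ )
        printf("pcpu %d  weight %d\n",
               pcpu_id_list[k], pcpu_weight[pcpu_id_list[k]]);

    return 0;
}

It drops the outer loop and the sortflag, but the worst case is still
O(pin_count^2); the saving is mainly code size, and pin_count is small anyway.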


Thanks
Atsushi SAKAI


<vcpupinweight0627.patch>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

