
[Xen-devel] Bunching of hypercalls/Xenbus


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Peter Teoh <htmldeveloper@xxxxxxxxx>
  • Date: Sun, 26 Aug 2007 08:16:21 +0800
  • Delivery-date: Thu, 30 Aug 2007 02:58:08 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Apologies for the new questions - please enlighten me.
 
In the traditional Linux kernels, we have the delayed I/O concept to improve performance.  A disk block I/O request is bunched together with the previous block request whenever possible, for performance reasons.
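
For concreteness, here is a toy sketch of that merging idea in C - this is not the real block-layer code, just an illustration of folding a new request into the previous one when the sectors are contiguous:

/* Toy sketch (not the actual block layer): merge a new request into
 * the previous one when the sectors are contiguous, so the device
 * sees one larger request instead of two. */
struct blk_req {
    unsigned long sector;   /* starting sector */
    unsigned long nsect;    /* number of sectors */
};

/* Returns 1 if 'next' was merged into 'prev', 0 otherwise. */
static int try_back_merge(struct blk_req *prev, const struct blk_req *next)
{
    if (prev->sector + prev->nsect == next->sector) {
        prev->nsect += next->nsect;   /* back-merge */
        return 1;
    }
    return 0;
}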
 
Analogously, given the large overhead of making hypercalls, is it possible that we can do the same?  That is, the core instructions would still be executed in their original order, but the overheads of making the multiple VM exits/entries would all be bunched together and paid once.  The hypercalls would necessarily be coming from different CPUs, right?  Could a further improvement be made by relaxing the atomicity requirements of the CPU instruction that triggered the VM exit condition (at least sometimes)?  If so, then it may be possible to bunch together hypercalls from the same CPU as well.
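
As far as I can tell, Xen's public interface already has a multicall operation that amortises this transition cost.  Below is a rough sketch of how a guest might queue calls and flush them in one trap.  struct multicall_entry and HYPERVISOR_multicall() are the real interface; the batching wrapper around them (mc_queue_call/mc_flush, MC_BATCH) is hypothetical, and the header paths are from my memory of the pv-ops tree:

/* Sketch only: queue several hypercalls and trap into Xen once. */
#include <xen/interface/xen.h>    /* struct multicall_entry */
#include <asm/xen/hypercall.h>    /* HYPERVISOR_multicall() */

#define MC_BATCH 8
static struct multicall_entry mc_queue[MC_BATCH];
static int mc_count;

static void mc_flush(void)
{
    if (mc_count == 0)
        return;
    /* One guest->hypervisor transition for up to MC_BATCH calls. */
    HYPERVISOR_multicall(mc_queue, mc_count);
    mc_count = 0;
}

static void mc_queue_call(unsigned long op,
                          unsigned long arg0, unsigned long arg1)
{
    struct multicall_entry *mc = &mc_queue[mc_count++];
    mc->op = op;
    mc->args[0] = arg0;
    mc->args[1] = arg1;
    if (mc_count == MC_BATCH)
        mc_flush();   /* queue full: pay the exit cost once */
}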
 
Similarly, for the Xenbus state-transition machine: can we improve inter-domain communication performance by not necessarily satisfying every Xenbus request immediately, all the time?  Can it be done for both the PV and HVM scenarios?
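
Something like the following is what I have in mind - purely speculative: xenbus_switch_state() is the real driver API, but the deferred wrapper around it (switch_state_deferred) is hypothetical, just to show how back-to-back transitions could collapse into a single xenstore write:

/* Speculative sketch: record only the most recently requested state
 * and publish it once from deferred work, so intermediate states
 * requested in quick succession are skipped. */
#include <linux/workqueue.h>
#include <xen/xenbus.h>

static struct xenbus_device *pending_dev;
static enum xenbus_state pending_state;

static void publish_state(struct work_struct *work)
{
    /* Only the latest requested state reaches the store. */
    xenbus_switch_state(pending_dev, pending_state);
}
static DECLARE_WORK(state_work, publish_state);

static void switch_state_deferred(struct xenbus_device *dev,
                                  enum xenbus_state state)
{
    pending_dev = dev;
    pending_state = state;
    schedule_work(&state_work);   /* repeated calls coalesce */
}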
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

