
[Xen-ia64-devel] Re: [PATCH 0/3][IA64] Accelerate IDE PIO on HVM/IA64

Hi Keir,

For disk read, the new buffering mechanism works as follows:

 1. Xen traps a single load instruction which performs PIO.
    (FYI, IA64 has no IN/OUT instructions; I/O ports are memory-mapped.)
+2. Xen checks whether the data is already buffered.
+3. If the data is buffered and is not the last word in the buffer,
    xen just returns the data to the guest.
 4. Otherwise, xen makes an i/o request to qemu as usual,
    and the guest blocks.
 5. Qemu receives the i/o request and prepares the block of data that
    is buffered in qemu.
+6. Qemu copies the block to a newly added shared page,
    "buffered_pio_page". That is, it exposes qemu's buffered
    data to xen.
 7. Qemu returns the single data on i/o request to xen as usual.
 8. Xen resumes the guest.

The lines beginning with + above are the newly added mechanism.
In the common case only steps 1-3 are repeated, so transactions
between xen and qemu are drastically reduced.

Disk write works almost the same way; the difference is the
direction of the copy. Note that qemu does nothing until its
buffer becomes full, so the transaction to qemu can be deferred.


Keir Fraser writes:
 > On 27/2/07 09:34, "Kouya SHIMURA" <kouya@xxxxxxxxxxxxxx> wrote:
 > > The basic idea is to add a buffering mechanism in a hypervisor.
 > > I know this approach is not sophisticated. But there is no other
 > > good way in IA64 which has no string instructions like x86's.
 > > 
 > > This patchset is indispensable to support windows/ia64 on HVM
 > > since installing windows and crash dumping is terribly slow.
 > Can you explain how this new code works? As I understand it the problem is
 > that each PIO instruction decoded by Xen and propagated to qemu only
 > transfers a single word of data. How does this new buffering mechanism work
 > around this?
 >  -- Keir

Xen-ia64-devel mailing list
