[Xen-devel] qemu write cacheing and DMA IDE writes
I've been doing some merge work between tools/ioemu and qemu upstream.
I came across this commit:

  changeset:   11209:9bb6c1c1890a07885265bbc59f4dbb660312974e
  date:        Sun Aug 20 23:59:34 2006 +0100
  files:       [...]
  description:
  [qemu] hdparm tunable IDE write cache for HVM

  qemu 0.8.2 has a flush callback to the storage backends, so now it is
  possible to implement hdparm tunable IDE write cache enable/disable
  for guest domains, allowing people to pick speed or data consistency
  on a case by case basis.

  As an added benefit, really large LBA48 IOs will now no longer be
  broken up into smaller IOs on the host side.

  From: Rik van Riel <riel@xxxxxxxxxx>
  Signed-off-by: Christian Limpach <Christian.Limpach@xxxxxxxxxxxxx>

However, there seems to me to be a bug in it: it does not take effect
for DMA writes, which are handled by a separate set of functions.
Since most guest operating systems will be using (emulated) DMA, the
result is that we advertise configurable write cacheing but in most
cases always cache.

Implementing configurable write cacheing for DMA would be possible,
but it would mean introducing new complications: either arranging to
call an aio fsync, or passing the cacheing flag down into the
underlying block implementations.  Also, according to the ATA spec,
the `turn off write cache' command must also flush the cache, and
that wasn't done.  (A rough sketch of the intended semantics is at
the end of this mail.)

My question is: given how long this has been like this, do we care?
It seems likely that HVM guests switching from emulated IDE to PV
drivers may make use of some of the flushing facilities, but I'm not
aware of the details.

My options wrt the qemu merge are to drop all of these related
changes, to retain what we have but leave DMA transfers always
cached, or to fix it properly.

Ian.
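For concreteness, here is the sketch referred to above.  It uses plain
POSIX calls rather than the actual hw/ide.c code, and the flag and
function names (write_cache_enabled, disk_write_pio, and so on) are
made up for illustration; it is only meant to show what the guest's
write-cache setting is supposed to mean and where the DMA path
currently diverges.

  #include <unistd.h>
  #include <stdint.h>

  /* Illustrative only: a stand-in for the emulated drive's state. */
  struct disk {
      int fd;                  /* backing file or device */
      int write_cache_enabled; /* guest-visible write cache setting */
  };

  /* What the PIO write path effectively does with the patch applied:
   * if the guest has disabled the write cache, every completed write
   * is followed by a flush (write-through behaviour). */
  static int disk_write_pio(struct disk *d, int64_t sector,
                            const void *buf, int nsectors)
  {
      if (pwrite(d->fd, buf, nsectors * 512, sector * 512) != nsectors * 512)
          return -1;
      if (!d->write_cache_enabled)
          return fdatasync(d->fd);
      return 0;
  }

  /* What the DMA completion path does today: no flush, so DMA writes
   * are always cached regardless of the guest's setting. */
  static int disk_write_dma(struct disk *d, int64_t sector,
                            const void *buf, int nsectors)
  {
      if (pwrite(d->fd, buf, nsectors * 512, sector * 512) != nsectors * 512)
          return -1;
      return 0;   /* missing: the same flush as in the PIO path */
  }

  /* SET FEATURES subcommand "disable write cache": per the ATA spec,
   * turning the cache off must also flush whatever is in it. */
  static int disk_disable_write_cache(struct disk *d)
  {
      d->write_cache_enabled = 0;
      return fdatasync(d->fd);   /* this flush was also missing */
  }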