[MirageOS-devel] rough thoughts on mirage block servers
Hi,
I did a bit of work tidying up the "xen-disk" userspace app which allows you to attach a synthetic block device to a xen guest. In particular I functorised it over the V1_LWT.BLOCK interface, so any other mirage block device can be used as the backing store for the synthetic device. I tested this by implementing V1_LWT.BLOCK in the ocaml-vhd library, which allows the synthetic device to be backed by a vhd-format file on the xen host. So far so good!
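For reference, the functorisation looks roughly like this -- a minimal sketch where only the V1_LWT.BLOCK signature is real and the module and request names are made up for illustration:

    (* Sketch only: the server is parameterised over any implementation
       of V1_LWT.BLOCK, so the backing store can be mirage-block-unix,
       ocaml-vhd or anything else satisfying the signature. *)
    module Server (B : V1_LWT.BLOCK) = struct
      (* Forward one guest request to the backing store, turning the
         polymorphic-variant result into an Lwt failure on error. *)
      let handle (b : B.t) request =
        let open Lwt in
        (match request with
         | `Read (sector, bufs)  -> B.read b sector bufs
         | `Write (sector, bufs) -> B.write b sector bufs)
        >>= function
        | `Ok ()   -> return ()
        | `Error _ -> fail (Failure "backend I/O error")
    end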
While doing a bit of performance optimisation, I hit a bit of a snag. Currently a mirage app using a block device is encouraged to issue BLOCK requests in parallel. For example, a filesystem would probably write all the data blocks in parallel, Lwt.bind on the results, and then update a metadata pointer in a final write to make the new data live; in effect the Lwt.bind acts like a 'barrier', forbidding I/O re-ordering across it. In the "xen-disk" app I receive queues of requests from the VM and then issue them serially -- unsurprisingly the performance is poor. I think I need to create a library which can operate on these queues of requests, identify conflicts (e.g. reads following writes to the same sectors), and parallelise everything else as much as possible.
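Concretely, I imagine something like the following (all names made up -- a sketch of the idea, not an existing API): two requests conflict if they touch overlapping sectors and at least one is a write; mutually conflict-free requests are grouped into batches, each batch is issued in parallel, and the batches are sequenced so there is an implicit barrier between them:

    type op = Read | Write

    type request = {
      op : op;
      sector : int64;   (* first sector touched *)
      length : int64;   (* number of sectors *)
    }

    (* Do the half-open sector ranges [sector, sector+length) overlap? *)
    let overlaps a b =
      Int64.compare a.sector (Int64.add b.sector b.length) < 0
      && Int64.compare b.sector (Int64.add a.sector a.length) < 0

    (* Requests conflict if they overlap and at least one is a write:
       this covers read-after-write, write-after-read, write-after-write. *)
    let conflicts a b =
      (a.op = Write || b.op = Write) && overlaps a b

    (* Greedily partition the queue into batches whose members are
       mutually conflict-free; a conflicting request starts a new batch,
       so ordering between conflicting requests is preserved. *)
    let into_batches requests =
      List.rev_map List.rev
        (List.fold_left
           (fun batches r ->
              match batches with
              | current :: rest when not (List.exists (conflicts r) current) ->
                (r :: current) :: rest
              | _ -> [ r ] :: batches)
           [] requests)

    (* Issue each batch in parallel; Lwt_list.iter_s sequences the
       batches, acting as the barrier between them. *)
    let process issue batches =
      Lwt_list.iter_s (fun batch -> Lwt.join (List.map issue batch)) batches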
The mirage-block-unix implementation is also pretty terrible, since it serialises everything again. We should probably open the file from parallel threads, or switch to a library like aio. I don't know whether the request-paralleliser should know about any kind of maximum queue depth from the server side, or whether it should just take all the I/O it can get.
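One cheap way to get parallelism on Unix might be to open the same file several times and dispatch requests across the resulting pool of descriptors, e.g. via Lwt_pool. Lwt_pool and the Lwt_unix calls are real; everything else below is illustrative and not mirage-block-unix's actual code:

    (* Open [path] up to [n] times; the kernel can then service up to
       n requests concurrently. *)
    let make_pool path n =
      Lwt_pool.create n (fun () ->
        Lwt_unix.openfile path [ Unix.O_RDWR ] 0o644)

    (* Read [len] bytes at byte offset [off] on any free descriptor.
       Lwt_pool hands each descriptor to one user at a time, so the
       lseek/read pair cannot race against another request. *)
    let pread pool ~off ~len =
      Lwt_pool.use pool (fun fd ->
        let open Lwt in
        Lwt_unix.LargeFile.lseek fd off Unix.SEEK_SET >>= fun _ ->
        let buf = Bytes.create len in
        Lwt_unix.read fd buf 0 len >>= fun n ->
        return (Bytes.sub buf 0 n))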
Ideas or suggestions welcome! Sorry the above was a bit more of a stream-of-consciousness than a coherent picture :-)

Cheers,
Dave