Re: [Xen-devel] Re: questions about the block backend/frontend driver
Thank you very much for your reply.
2008/9/9 Konrad Rzeszutek <konrad@xxxxxxxxxxxxxxx>
I ran the blktrace program and could get detailed information about disk I/O read/write requests. However, blktrace is a tool that reports request-queue operations to user space.
> > Yuming
> >
> > 2008/9/8 Yuming fang <fangyuming.leo@xxxxxxxxx>
> > >
> > > Hi, Everyone,
> > >
> > > I am trying to understand the code of the block backend/frontend driver.
> > > I know the blkback and blkfront communicate with each other through an
> > > event channel and a buffer ring. But there are some questions I could
> > > not understand.
> > >
> > > 1. When dom0 receives one disk request from domU1 and another disk
> > > request from domU2 simultaneously, how are these two disk requests
> > > pushed into the Linux kernel I/O scheduler? How does Xen sort them
> > > before pushing them into the Linux kernel I/O scheduler?
>
> That depends on which elevator you have. Xen does not sort them, just issues
> them.

Yeah, after your explanation, I read the code of blkback.c and understand some of it. In the Xen 3.3 version, I find there is a \linux-2.6.18-xen-3.3.0\block\ directory, which includes elevator.c and the Linux I/O scheduling algorithm files (cfq-iosched.c, deadline-iosched.c, and so on). Do these files decide the disk I/O scheduling algorithm of the Linux xen0 kernel? And if I want to add a different disk I/O scheduling algorithm in Xen 3.3, could I add it in this directory (\linux-2.6.18-xen-3.3.0\block\)?
I think it uses the functions unplug_queue(blkif_t *blkif) and plug_queue(blkif_t *blkif, struct bio *bio) (in \linux-2.6.18-xen-3.3.0\drivers\xen\blkback\blkback.c) to process the requests. Is my understanding right?
Thanks
Yuming

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel