
Re: [Xen-devel] [PATCH v2-resend 22/30] libxl: ocaml: event management [and 1 more messages]

Rob Hoes writes ("[Xen-devel] [PATCH v2-resend 22/30] libxl: ocaml: event management"):

+/* Event handling */
+short Poll_val(value event)
+{
+	short res = 0;
+	switch (Int_val(event)) {
+	case 0: res = POLLIN; break;
+	case 1: res = POLLPRI; break;
+	}
+	return res;
+}
+
+short Poll_events_val(value event_list)
+{
+	short events = 0;
+	while (event_list != Val_emptylist) {
+		events |= Poll_val(Field(event_list, 0));
+		event_list = Field(event_list, 1);
+	}
+	return events;
+}

On 12/11/13 14:56, Ian Jackson wrote:

This is quite striking.  You're converting a bitfield into a linked
list of consed enums.  Does ocaml really not have something more
resembling a set-of-small-enum type, represented as a bitfield ?

The result is going to be a lot of consing every time libxl scratches
its nose.  In some cases very frequently.  For example, if we're
running the bootloader and copying input and output back and forth,
we're using the datacopier machinery in libxl_aoutils.c.  That
involves enabling the fd writeability callback on each output fd,
every time data is read from the input fd, and then disabling the
writeability callback every time the data has been written.  So one
fd register/deregister pair for every lump of console output.  There
are probably other examples.

Unfortunately there's no direct support for bitfields in OCaml's heap data representation. The common pattern is to convert bitfields into lists of constructors, e.g. a variant type with one constructor per flag, and a set of flags represented as a list of those constructors.


On the positive side, the GC is optimised specifically for the case of short-lived small objects, since this is what you get when you write a compiler or a theorem prover. An allocation in the minor heap is simply a pointer bump, and the trash is chucked out pretty often. The rule of thumb is that anything which has the allocation profile of a compiler or a theorem prover usually works pretty well :-)

I think if we're allocating a (shortish) list per "lump" of console I/O we're probably ok since I assume we're allocating and deallocating bigger buffers for the console data anyway. For higher throughput channels (vchan, network, disk etc) I'd go for larger, statically-allocated pools of buffers for the data and use a bigger lump-size to amortize the cost of the metadata handling.


Xen-devel mailing list
