[Xen-changelog] Instead of queuing messages when the control channels are full, xcs just
# HG changeset patch
# User kaf24@xxxxxxxxxxxxxxxxxxxx
# Node ID 5a86ab4c9b79a70963ca5460ef2eba039aa0517e
# Parent  f5a5e61f038ef7a4eb9180be94cbaba2487822a6

Instead of queuing messages when the control channels are full, xcs just
does nothing (see ctrl_interface.c:ctrl_chan_write_request()).  The
following patch throttles the rate at which consoled writes data to xcs.
With the current values, you get a responsive console while avoiding data
corruption in most scenarios.

I'm able to get pretty far in my regression test with this patch.  With
higher throttle values I'm able to get even further (but the console
becomes painfully slow).  I implemented proper control channel queuing in
xenctld in VM-Tools and it's pretty nasty stuff.  This should prevent
corruption for most users until we can get rid of xcs.

Regards,

Anthony Liguori

Signed-off-by: Anthony Liguori <aliguori@xxxxxxxxxx>

diff -r f5a5e61f038e -r 5a86ab4c9b79 tools/consoled/io.c
--- a/tools/consoled/io.c	Fri Aug 5 09:00:50 2005
+++ b/tools/consoled/io.c	Fri Aug 5 09:01:30 2005
@@ -288,6 +288,7 @@
 	fd_set readfds, writefds;
 	int ret;
 	int max_fd = -1;
+	int num_of_writes = 0;
 
 	do {
 		struct domain *d;
@@ -312,6 +313,17 @@
 		}
 
 		ret = select(max_fd + 1, &readfds, &writefds, 0, &tv);
+		if (tv.tv_sec == 1 && (++num_of_writes % 100) == 0) {
+			/* FIXME */
+			/* This is a nasty hack.  xcs does not handle the
+			   control channels filling up well at all.  We'll
+			   throttle ourselves here since we do proper
+			   queueing to give the domains a shot at pulling out
+			   the data.  Fixing xcs is not worth it as it's
+			   going away */
+			tv.tv_usec = 1000;
+			select(0, 0, 0, 0, &tv);
+		}
 		enum_domains();
 
 		if (FD_ISSET(xcs_data_fd, &readfds)) {
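
For readers skimming the archive, here is a small standalone sketch of the
throttling technique the patch relies on: every Nth write in a hot select()
loop, pause briefly by calling select() with empty fd sets and a timeout.
The constants (THROTTLE_EVERY, PAUSE_USEC) and the simulated write loop are
illustrative assumptions for this sketch, not part of consoled or xcs.

/*
 * Minimal sketch of pacing writes with a select()-based pause.
 * Not consoled code; constants and the fake write loop are made up.
 */
#include <stdio.h>
#include <sys/select.h>
#include <sys/time.h>

#define THROTTLE_EVERY 100   /* pause once per 100 writes (illustrative) */
#define PAUSE_USEC     1000  /* pause length in microseconds (illustrative) */

static void throttle_pause(void)
{
	struct timeval tv = { 0, PAUSE_USEC };

	/* select() with no fd sets is a portable sub-second sleep. */
	select(0, NULL, NULL, NULL, &tv);
}

int main(void)
{
	int num_of_writes = 0;
	int i;

	for (i = 0; i < 1000; i++) {
		/* ... pretend we just pushed a message onto a busy channel ... */

		if ((++num_of_writes % THROTTLE_EVERY) == 0) {
			/* Give the consumer a chance to drain the channel
			   before we queue up more data. */
			throttle_pause();
		}
	}

	printf("issued %d writes, pausing every %d\n",
	       num_of_writes, THROTTLE_EVERY);
	return 0;
}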