Re: [PATCH 03/29] tools/xenlogd: connect to frontend
On 01.11.23 20:21, Jason Andryuk wrote:
> On Wed, Nov 1, 2023 at 5:34 AM Juergen Gross <jgross@xxxxxxxx> wrote:
>> Add the code for connecting to frontends to xenlogd.
>>
>> Signed-off-by: Juergen Gross <jgross@xxxxxxxx>
>>
>> diff --git a/tools/xenlogd/xenlogd.c b/tools/xenlogd/xenlogd.c
>> index 792d1026a3..da0a09a122 100644
>> --- a/tools/xenlogd/xenlogd.c
>> +++ b/tools/xenlogd/xenlogd.c
>> +static void connect_device(device *device)
>> +{
>> +    unsigned int val;
>> +    xenevtchn_port_or_error_t evtchn;
>> +
>> +    val = read_frontend_node_uint(device, "version", 0);
>> +    if ( val != 1 )
>> +        return connect_err(device, "frontend specifies illegal version");
>> +    val = read_frontend_node_uint(device, "num-rings", 0);
>> +    if ( val != 1 )
>> +        return connect_err(device, "frontend specifies illegal ring number");
>
> Linux uses 2 rings (XEN_9PFS_NUM_RINGS), and it doesn't connect when
> max-rings is less than that.
>
>     max_rings = xenbus_read_unsigned(dev->otherend, "max-rings", 0);
>     if (max_rings < XEN_9PFS_NUM_RINGS)
>         return -EINVAL;
>
> new_device() writes max-rings as 1. So this works for mini-os, but
> not Linux. I'm not requesting you to change it - just noting it.

Thanks for the note. I'll change it to allow more rings.
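Probably something like the following (just a sketch - MAX_RING_COUNT and
device->num_rings are made up here and don't exist in the patch yet), with
the event-channel-<n> and ring-ref<n> handling below then done in a loop
per ring:

    /* Hypothetical backend limit, to be written to "max-rings", too. */
    #define MAX_RING_COUNT   4

    val = read_frontend_node_uint(device, "num-rings", 0);
    if ( val < 1 || val > MAX_RING_COUNT )
        return connect_err(device, "frontend specifies illegal ring number");
    device->num_rings = val;

With new_device() then writing MAX_RING_COUNT instead of 1 to the max-rings
node, a Linux frontend requiring XEN_9PFS_NUM_RINGS (2) could connect, too.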
>> +
>> +    val = read_frontend_node_uint(device, "event-channel-0", 0);
>> +    if ( val == 0 )
>> +        return connect_err(device, "frontend specifies illegal evtchn");
>> +    evtchn = xenevtchn_bind_interdomain(xe, device->domid, val);
>> +    if ( evtchn < 0 )
>> +        return connect_err(device, "could not bind to event channel");
>> +    device->evtchn = evtchn;
>> +
>> +    val = read_frontend_node_uint(device, "ring-ref0", 0);
>> +    if ( val == 0 )
>> +        return connect_err(device, "frontend specifies illegal grant for ring");
>> +    device->intf = xengnttab_map_grant_ref(xg, device->domid, val,
>> +                                           PROT_READ | PROT_WRITE);
>> +    if ( !device->intf )
>> +        return connect_err(device, "could not map interface page");
>> +    device->ring_order = device->intf->ring_order;
>> +    if ( device->ring_order > 9 || device->ring_order < 1 )
>> +        return connect_err(device, "frontend specifies illegal ring order");
>> +    device->ring_size = XEN_FLEX_RING_SIZE(device->ring_order);
>> +    device->data.in = xengnttab_map_domain_grant_refs(xg,
>> +                                                      1 << device->ring_order,
>> +                                                      device->domid,
>> +                                                      device->intf->ref,
>> +                                                      PROT_READ | PROT_WRITE);
>> +    if ( !device->data.in )
>> +        return connect_err(device, "could not map ring pages");
>> +    device->data.out = device->data.in + device->ring_size;
>> +
>> +    if ( pthread_create(&device->thread, NULL, io_thread, device) )
>> +        return connect_err(device, "could not start I/O thread");
>> +    device->thread_active = true;
>> +
>> +    write_backend_state(device, XenbusStateConnected);
>> +}
>> +
>> @@ -122,6 +669,11 @@ int main(int argc, char *argv[])
>>      int syslog_mask = LOG_MASK(LOG_WARNING) | LOG_MASK(LOG_ERR) |
>>                        LOG_MASK(LOG_CRIT) | LOG_MASK(LOG_ALERT) |
>>                        LOG_MASK(LOG_EMERG);
>> +    char **watch;
>> +    struct pollfd p[2] = {
>> +        { .events = POLLIN, .revents = POLLIN },
>
> Are you intentionally setting revents to enter the loop initially?
> Shouldn't the watch registration trigger it to fire anyway?

I don't remember where I got this from. Probably I really wanted to use
the first loop iteration already for processing the first response.

I think I can drop setting revents.
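I.e. just:

    struct pollfd p[2] = {
        { .events = POLLIN },
        { .events = POLLIN }
    };

The first loop iteration would then find revents clear, fall through to
poll(), and be woken up right away by the watch event which xenstore
generates when the watch is registered.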
>> +        { .events = POLLIN }
>> +    };
>>
>>      umask(027);
>>
>>      if ( getenv("XENLOGD_VERBOSE") )
>> @@ -134,9 +686,26 @@ int main(int argc, char *argv[])
>>
>>      xen_connect();
>>
>> +    if ( !xs_watch(xs, "backend/xen_9pfs", "main") )
>> +        do_err("xs_watch() in main thread failed");
>> +    p[0].fd = xs_fileno(xs);
>> +    p[1].fd = xenevtchn_fd(xe);
>> +
>> +    scan_backend();
>> +
>>      while ( !stop_me )
>>      {
>> -        sleep(60);
>> +        while ( (p[0].revents & POLLIN) &&
>> +                (watch = xs_check_watch(xs)) != NULL )
>> +        {
>> +            handle_watch(watch[XS_WATCH_PATH], watch[XS_WATCH_TOKEN]);
>> +            free(watch);
>> +        }
>> +
>> +        if ( p[1].revents & POLLIN )
>> +            handle_event();
>> +
>> +        poll(p, 2, 10000);
>
> Can you just use an infinite timeout and rely on the signal
> interrupting the system call?

Yes, probably.
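One thing to keep in mind when doing that (sketch below, close_handler is
a made-up name, stop_me is the existing flag): the SIGTERM handler needs
to be installed via sigaction() with sa_flags left at 0, i.e. without
SA_RESTART, so poll() really returns -1 with errno set to EINTR instead
of being restarted transparently, letting the loop re-evaluate stop_me:

    static void close_handler(int sig)
    {
        stop_me = true;
    }

    struct sigaction act = { 0 };

    act.sa_handler = close_handler;    /* sa_flags == 0: no SA_RESTART. */
    sigaction(SIGTERM, &act, NULL);

    ...
        if ( poll(p, 2, -1) < 0 && errno != EINTR )
            do_err("poll() failed");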

Juergen