Re: [Xen-devel] [PATCH v5 04/15] argo: init, destroy and soft-reset, with enable command line opt
On Mon, Jan 21, 2019 at 9:55 AM Roger Pau Monné <roger.pau@xxxxxxxxxx> wrote:
>
> On Mon, Jan 21, 2019 at 01:59:44AM -0800, Christopher Clark wrote:
> > Initialises basic data structures and performs teardown of argo state
> > for domain shutdown.
> >
> > Inclusion of the Argo implementation is dependent on CONFIG_ARGO.
> >
> > Introduces a new Xen command line parameter 'argo': bool to enable/disable
> > the argo hypercall. Defaults to disabled.
> >
> > New headers:
> > public/argo.h: with definitions of addresses and ring structure, including
> > indexes for atomic update for communication between domain and hypervisor.
> >
> > xen/argo.h: to expose the hooks for integration into domain lifecycle:
> > argo_init: per-domain init of argo data structures for domain_create.
> > argo_destroy: teardown for domain_destroy and the error exit
> > path of domain_create.
> > argo_soft_reset: reset of domain state for domain_soft_reset.
> >
> > Adds a new field to struct domain: struct argo_domain *argo;
> >
> > In accordance with recent work on _domain_destroy, argo_destroy is
> > idempotent. It will tear down: all rings registered by this domain, all
> > rings where this domain is the single sender (ie. specified partner,
> > non-wildcard rings), and all pending notifications where this domain is
> > awaiting signal about available space in the rings of other domains.
> >
> > A count will be maintained of the number of rings that a domain has
> > registered in order to limit it below the fixed maximum limit defined here.
> >
> > Macros are defined to verify the internal locking state within the argo
> > implementation. The macros are ASSERTed on entry to functions to validate
> > and document the required lock state prior to calling.
> >
> > The hash function for the hashtables that hold ring state is derived from
> > the string hashing function djb2 (http://www.cse.yorku.ca/~oz/hash.html)
> > by Daniel J. Bernstein. Basic testing with a limited number of domains and
> > ports has shown reasonable distribution for the table size.
> >
> > The software license on the public header is the BSD license, standard
> > procedure for the public Xen headers. The public header was originally
> > posted under a GPL license at [1]:
> > https://lists.xenproject.org/archives/html/xen-devel/2013-05/msg02710.html
> >
> > The following ACK by Lars Kurth is to confirm that only people being
> > employees of Citrix contributed to the header files in the series posted at
> > [1] and that thus the copyright of the files in question is fully owned by
> > Citrix. The ACK also confirms that Citrix is happy for the header files to
> > be published under a BSD license in this series (which is based on [1]).
> >
> > Signed-off-by: Christopher Clark <christopher.clark6@xxxxxxxxxxxxxx>
> > Acked-by: Lars Kurth <lars.kurth@xxxxxxxxxx>
> > Reviewed-by: Ross Philipson <ross.philipson@xxxxxxxxxx>
>
> Thanks.
>
> Reviewed-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
>
> I've got some nits below, but it's purely cosmetic changes to make the
> code cleaner.
>
> > ---
> >
> > diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> > index d39bcee..93f41bc 100644
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -182,6 +182,21 @@ Permit Xen to use "Always Running APIC Timer" support on compatible hardware
> >  in combination with cpuidle.
> >  This option is only expected to be useful for developers wishing Xen to
> >  fall back to older timing methods on newer hardware.
> >
> > +### argo
> > + = List of [ <bool> ]
> > +
> > +Controls for the Argo hypervisor-mediated interdomain communication service.
> > +
> > +The functionality that this option controls is only available when Xen has been
> > +compiled with the build setting for Argo enabled in the build configuration.
> > +
> > +Argo is an interdomain communication mechanism, where Xen acts as the central
> > +point of authority. Guests may register memory rings to receive messages,
> > +query the status of other domains, and send messages by hypercall, all subject
> > +to appropriate auditing by Xen.
> > +
> > +* An overall boolean acts as a global control. Argo is disabled by default.
>
> I'm not sure it's worth adding a list item for the boolean, I would
> just add the "Argo is disabled by default" to the first paragraph.

ack, thanks, done.

> [...]
> > +static struct argo_ring_info *
> > +find_ring_info(const struct domain *d, const struct argo_ring_id *id)
> > +{
> > +    struct list_head *cursor, *bucket;
> > +
> > +    ASSERT(LOCKING_Read_rings_L2(d));
> > +
> > +    /* List is not modified here. Search and return the match if found. */
> > +    bucket = &d->argo->ring_hash[hash_index(id)];
> > +
> > +    for ( cursor = bucket->next; cursor != bucket; cursor = cursor->next )
>
> Why are you open-coding list_for_each here?

see response to cover letter

> You might also consider using list_for_each_entry, so that you can
> avoid the list_entry call below.

Ack, have done that.

> > +    {
> > +        struct argo_ring_info *ring_info =
> > +            list_entry(cursor, struct argo_ring_info, node);
> > +        const struct argo_ring_id *cmpid = &ring_info->id;
> > +
> > +        if ( cmpid->aport == id->aport &&
> > +             cmpid->domain_id == id->domain_id &&
> > +             cmpid->partner_id == id->partner_id )
> > +        {
> > +            argo_dprintk("found ring_info for ring(%u:%x %u)\n",
> > +                         id->domain_id, id->aport, id->partner_id);
> > +            return ring_info;
> > +        }
> > +    }
> > +    argo_dprintk("no ring_info for ring(%u:%x %u)\n",
> > +                 id->domain_id, id->aport, id->partner_id);
> > +
> > +    return NULL;
> > +}
> [...]
> > +static void
> > +pending_remove_all(const struct domain *d, struct argo_ring_info *ring_info)
> > +{
> > +    struct list_head *ring_pending = &ring_info->pending;
> > +    struct pending_ent *ent;
> > +
> > +    ASSERT(LOCKING_L3(d, ring_info));
> > +
> > +    /* Delete all pending notifications from this ring's list. */
> > +    while ( !list_empty(ring_pending) )
>
> Nit: you could use list_first_entry_or_null that joins the list_empty
> and list_entry calls.

There are no existing users of list_first_entry_or_null anywhere in Xen,
and applying it to this loop results in an assignment within the while
condition, which also appears to be a very rare construct within Xen, so
I just used the list_for_each_entry_safe macro.

> > +    {
> > +        ent = list_entry(ring_pending->next, struct pending_ent, node);
> > +
> > +        /* For wildcard rings, remove each from their wildcard list too. */
> > +        if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
> > +            wildcard_pending_list_remove(ent->domain_id, ent);
> > +        list_del(&ent->node);
> > +        xfree(ent);
> > +    }
> > +    ring_info->npending = 0;
> > +}
> > +
> > +static void
> > +wildcard_rings_pending_remove(struct domain *d)
> > +{
> > +    struct list_head *wildcard_head;
> > +
> > +    ASSERT(LOCKING_Write_L1);
> > +
> > +    /* Delete all pending signals to the domain about wildcard rings. */
> > +    wildcard_head = &d->argo->wildcard_pend_list;
> > +
> > +    while ( !list_empty(wildcard_head) )
> > +    {
> > +        struct pending_ent *ent =
> > +            list_entry(wildcard_head->next, struct pending_ent, node);
>
> Same here regarding the usage of list_first_entry_or_null.

list_for_each_entry_safe.

> > +
> > +        /*
> > +         * The ent->node deleted here, and the npending value decreased,
> > +         * belong to the ring_info of another domain, which is why this
> > +         * function requires holding W(L1):
> > +         * it implies the L3 lock that protects that ring_info struct.
> > +         */
> > +        ent->ring_info->npending--;
> > +        list_del(&ent->node);
> > +        list_del(&ent->wildcard_node);
> > +        xfree(ent);
> > +    }
> > +}
> [...]
> > +static void
> > +domain_rings_remove_all(struct domain *d)
> > +{
> > +    unsigned int i;
> > +    struct argo_ring_info *ring_info;
> > +
> > +    ASSERT(LOCKING_Write_rings_L2(d));
> > +
> > +    for ( i = 0; i < ARGO_HASHTABLE_SIZE; ++i )
> > +    {
> > +        struct list_head *bucket = &d->argo->ring_hash[i];
> > +
> > +        while ( !list_empty(bucket) )
> > +        {
> > +            ring_info = list_entry(bucket->next, struct argo_ring_info, node);
>
> list_first_entry_or_null
>
> > +            ring_remove_info(d, ring_info);
> > +        }
> > +    }
> > +    d->argo->ring_count = 0;
> > +}
> > +
> > +/*
> > + * Tear down all rings of other domains where src_d domain is the partner.
> > + * (ie. it is the single domain that can send to those rings.)
> > + * This will also cancel any pending notifications about those rings.
> > + */
> > +static void
> > +partner_rings_remove(struct domain *src_d)
> > +{
> > +    unsigned int i;
> > +    struct argo_send_info *send_info;
> > +    struct argo_ring_info *ring_info;
> > +    struct domain *dst_d;
> > +
> > +    ASSERT(LOCKING_Write_L1);
> > +
> > +    for ( i = 0; i < ARGO_HASHTABLE_SIZE; ++i )
> > +    {
> > +        struct list_head *cursor, *bucket = &src_d->argo->send_hash[i];
> > +
> > +        /* Remove all ents from the send list. Take each off their ring list. */
> > +        for ( cursor = bucket->next; cursor != bucket; cursor = cursor->next )
>
> Another open-coded version of list_for_each, see my comments on the
> instances above.

list_for_each_entry_safe

> > +        {
> > +            send_info = list_entry(cursor, struct argo_send_info, node);
>
> send_info should be defined here to reduce its scope.

send_info is now the iteration variable of the list_for_each_entry_safe
macro, so its declaration is retained at its current position since it is
required there.

> > +
> > +            dst_d = get_domain_by_id(send_info->id.domain_id);
> > +            if ( dst_d && dst_d->argo )
> > +            {
> > +                ring_info = find_ring_info(dst_d, &send_info->id);
>
> ring_info should be defined here.

ack, done.
> > +                if ( ring_info )
> > +                {
> > +                    ring_remove_info(dst_d, ring_info);
> > +                    dst_d->argo->ring_count--;
> > +                }
> > +                else
> > +                    ASSERT_UNREACHABLE();
> > +            }
> > +            else
> > +                ASSERT_UNREACHABLE();
> > +
> > +            if ( dst_d )
> > +                put_domain(dst_d);
> > +
> > +            list_del(&send_info->node);
> > +            xfree(send_info);
> > +        }
> > +    }
> > +}

thanks,

Christopher

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
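[Editor's note: as a concrete illustration of the list_for_each_entry suggestion
discussed above, here is a sketch of how the find_ring_info() lookup could read
once the open-coded cursor loop is replaced. It is reconstructed from the v5 hunk
quoted in the review, not taken from the committed code, and assumes Xen's
xen/list.h macros, which mirror the Linux ones.]

/*
 * Sketch only: find_ring_info() reworked to use list_for_each_entry,
 * removing the explicit cursor variable and the list_entry() call.
 * Based on the v5 code quoted above; the committed version may differ.
 */
static struct argo_ring_info *
find_ring_info(const struct domain *d, const struct argo_ring_id *id)
{
    struct argo_ring_info *ring_info;
    struct list_head *bucket;

    ASSERT(LOCKING_Read_rings_L2(d));

    /* List is not modified here. Search and return the match if found. */
    bucket = &d->argo->ring_hash[hash_index(id)];

    list_for_each_entry(ring_info, bucket, node)
    {
        const struct argo_ring_id *cmpid = &ring_info->id;

        if ( cmpid->aport == id->aport &&
             cmpid->domain_id == id->domain_id &&
             cmpid->partner_id == id->partner_id )
        {
            argo_dprintk("found ring_info for ring(%u:%x %u)\n",
                         id->domain_id, id->aport, id->partner_id);
            return ring_info;
        }
    }
    argo_dprintk("no ring_info for ring(%u:%x %u)\n",
                 id->domain_id, id->aport, id->partner_id);

    return NULL;
}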
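[Editor's note: similarly, a sketch of one of the deletion loops after switching
from the while/!list_empty() pattern to list_for_each_entry_safe, the choice
Christopher describes above. The safe variant keeps a lookahead pointer so the
current entry can be unlinked and freed while walking the list. Again this is an
illustration against the quoted v5 code; the second iterator name "next" is
chosen here for the example.]

/*
 * Sketch only: pending_remove_all() using list_for_each_entry_safe.
 * "next" holds the following entry so that "ent" can be removed from
 * the list and freed inside the loop body without breaking iteration.
 */
static void
pending_remove_all(const struct domain *d, struct argo_ring_info *ring_info)
{
    struct pending_ent *ent, *next;

    ASSERT(LOCKING_L3(d, ring_info));

    /* Delete all pending notifications from this ring's list. */
    list_for_each_entry_safe(ent, next, &ring_info->pending, node)
    {
        /* For wildcard rings, remove each from their wildcard list too. */
        if ( ring_info->id.partner_id == XEN_ARGO_DOMID_ANY )
            wildcard_pending_list_remove(ent->domain_id, ent);
        list_del(&ent->node);
        xfree(ent);
    }
    ring_info->npending = 0;
}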
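[Editor's note: the scoping nits on partner_rings_remove() combine with the same
macro change: send_info stays declared at function scope because it becomes the
iteration variable, while dst_d and ring_info can move into the loop body as
requested. The following is an approximation of what the quoted v5 code might
become, not the committed version; "next" is again a name chosen for the sketch.]

/*
 * Sketch only: partner_rings_remove() after the review feedback.
 * send_info remains at function scope as the list_for_each_entry_safe
 * iterator; dst_d and ring_info are narrowed to the loop body.
 */
static void
partner_rings_remove(struct domain *src_d)
{
    unsigned int i;
    struct argo_send_info *send_info, *next;

    ASSERT(LOCKING_Write_L1);

    for ( i = 0; i < ARGO_HASHTABLE_SIZE; ++i )
    {
        struct list_head *bucket = &src_d->argo->send_hash[i];

        /* Remove all ents from the send list. Take each off their ring list. */
        list_for_each_entry_safe(send_info, next, bucket, node)
        {
            struct domain *dst_d = get_domain_by_id(send_info->id.domain_id);

            if ( dst_d && dst_d->argo )
            {
                struct argo_ring_info *ring_info =
                    find_ring_info(dst_d, &send_info->id);

                if ( ring_info )
                {
                    ring_remove_info(dst_d, ring_info);
                    dst_d->argo->ring_count--;
                }
                else
                    ASSERT_UNREACHABLE();
            }
            else
                ASSERT_UNREACHABLE();

            if ( dst_d )
                put_domain(dst_d);

            list_del(&send_info->node);
            xfree(send_info);
        }
    }
}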
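[Editor's note: the commit message states that the ring-state hash is derived
from djb2. For readers unfamiliar with it, this is the classic string hash; the
exact derivation over the argo_ring_id fields in hash_index() lives in the Argo
implementation and is not reproduced here. A minimal, standalone sketch:]

#include <stdio.h>

/*
 * djb2 string hash by Daniel J. Bernstein
 * (http://www.cse.yorku.ca/~oz/hash.html): start from 5381 and fold in
 * each byte as hash = hash * 33 + c.  Argo's hash_index() is described
 * as derived from this scheme; how it mixes the argo_ring_id fields and
 * reduces the result to ARGO_HASHTABLE_SIZE is defined in the Argo code.
 */
static unsigned long djb2(const unsigned char *str)
{
    unsigned long hash = 5381;
    int c;

    while ( (c = *str++) != 0 )
        hash = ((hash << 5) + hash) + c; /* hash * 33 + c */

    return hash;
}

int main(void)
{
    printf("djb2(\"argo\") = %lu\n", djb2((const unsigned char *)"argo"));
    return 0;
}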