
Re: [PATCH RFC 3/4] xen: add new stable control hypercall


  • To: Juergen Gross <jgross@xxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Thu, 25 Nov 2021 10:38:41 +0100
  • Cc: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, "Daniel P. Smith" <dpsmith@xxxxxxxxxxxxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Delivery-date: Thu, 25 Nov 2021 09:39:08 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 25.11.2021 07:55, Juergen Gross wrote:
> On 22.11.21 16:39, Jan Beulich wrote:
>> On 14.09.2021 14:35, Juergen Gross wrote:
>>> @@ -103,6 +104,43 @@ void domain_reset_states(void)
>>>       rcu_read_unlock(&domlist_read_lock);
>>>   }
>>>   
>>> +int domain_get_dom_state_changed(struct xen_control_changed_domain *info)
>>> +{
>>> +    unsigned int dom;
>>> +    struct domain *d;
>>> +
>>> +    while ( (dom = find_first_bit(dom_state_changed, DOMID_MASK + 1)) <
>>> +            DOMID_FIRST_RESERVED )
>>
>> As per my comment on the earlier patch - the use of DOMID_MASK + 1 vs
>> DOMID_FIRST_RESERVED is quite puzzling here.
> 
> Okay, will change that.
> 
>>
>>> +    {
>>> +        d = rcu_lock_domain_by_id(dom);
>>> +
>>> +        if ( test_and_clear_bit(dom, dom_state_changed) )
>>> +        {
>>> +            info->domid = dom;
>>> +            if ( d )
>>> +            {
>>> +                info->state = XEN_CONTROL_CHANGEDDOM_STATE_EXIST;
>>> +                if ( d->is_shut_down )
>>> +                    info->state |= XEN_CONTROL_CHANGEDDOM_STATE_SHUTDOWN;
>>> +                if ( d->is_dying == DOMDYING_dead )
>>> +                    info->state |= XEN_CONTROL_CHANGEDDOM_STATE_DYING;
>>> +                info->unique_id = d->unique_id;
>>> +
>>> +                rcu_unlock_domain(d);
>>> +            }
>>> +
>>> +            return 0;
>>
>> With rapid creation of short lived domains, will the caller ever get to
>> see information on higher numbered domains (if, say, it gets "suitably"
>> preempted within its own environment)? IOW shouldn't there be a way for
>> the caller to specify a domid to start from?
> 
> I'd rather have a local variable for the last reported domid and start
> from that.

Well, it probably doesn't matter much to have yet one more aspect making
this a single-consumer-only interface.

>>> +/*
>>> + * XEN_CONTROL_OP_get_state_changed_domain
>>> + *
>>> + * Get information about a domain having changed state and reset the state
>>> + * change indicator for that domain. This function is usable only by a
>>> + * domain having registered the VIRQ_DOM_EXC event (normally Xenstore).
>>> + *
>>> + * arg: XEN_GUEST_HANDLE(struct xen_control_changed_domain)
>>> + *
>>> + * Possible return values:
>>> + * 0: success
>>> + * <0 : negative Xen errno value
>>> + */
>>> +#define XEN_CONTROL_OP_get_state_changed_domain     1
>>> +struct xen_control_changed_domain {
>>> +    domid_t domid;
>>> +    uint16_t state;
>>> +#define XEN_CONTROL_CHANGEDDOM_STATE_EXIST     0x0001  /* Domain is existing. */
>>> +#define XEN_CONTROL_CHANGEDDOM_STATE_SHUTDOWN  0x0002  /* Shutdown finished. */
>>> +#define XEN_CONTROL_CHANGEDDOM_STATE_DYING     0x0004  /* Domain dying. */
>>> +    uint32_t pad1;           /* Returned as 0. */
>>> +    uint64_t unique_id;      /* Unique domain identifier. */
>>> +    uint64_t pad2[6];        /* Returned as 0. */
>>
>> I think the padding fields have to be zero on input, not just on return.
> 
> I don't see why this would be needed, as this structure is only ever
> copied to the caller, so "on input" just doesn't apply here.
> 
>> Unless you mean to mandate them to be OUT only now and forever. I also
> 
> The whole struct is OUT only.

Right now, yes. I wouldn't exclude "pad1" to become a flags field,
controlling some future behavioral aspect of the operation.

>> wonder how the trailing padding plays up with the version sub-op: Do we
>> really need such double precaution?
> 
> I can remove it.
> 
>> Also - should we use uint64_aligned_t here?
> 
> Yes.

But you realize this isn't straightforward, as the type isn't
available in plain C89 (nor C99)? That's why it's presently used
only in tools-only interfaces, and the respective headers are
excluded from the "is ANSI compatible" checking (memory.h and
hvm/dm_op.h have special but imo crude "precautions").

Jan




 

