
Re: [PATCH v2 18/26] xen/domctl: wrap xsm_getdomaininfo() with CONFIG_MGMT_HYPERCALLS



On Mon, 29 Sep 2025, Stefano Stabellini wrote:
> On Sun, 28 Sep 2025, Jan Beulich wrote:
> > On 26.09.2025 21:24, Stefano Stabellini wrote:
> > > On Thu, 25 Sep 2025, Penny, Zheng wrote:
> > >>> -----Original Message-----
> > >>> From: Jan Beulich <jbeulich@xxxxxxxx>
> > >>> Sent: Friday, September 26, 2025 2:53 PM
> > >>> To: Penny, Zheng <penny.zheng@xxxxxxx>
> > >>> Cc: Huang, Ray <Ray.Huang@xxxxxxx>; Daniel P. Smith
> > >>> <dpsmith@xxxxxxxxxxxxxxxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx; 
> > >>> Stabellini,
> > >>> Stefano <stefano.stabellini@xxxxxxx>; Andryuk, Jason
> > >>> <Jason.Andryuk@xxxxxxx>
> > >>> Subject: Re: [PATCH v2 18/26] xen/domctl: wrap xsm_getdomaininfo() with
> > >>> CONFIG_MGMT_HYPERCALLS
> > >>>
> > >>> On 26.09.2025 06:41, Penny, Zheng wrote:
> > >>>>> -----Original Message-----
> > >>>>> From: Jan Beulich <jbeulich@xxxxxxxx>
> > >>>>> Sent: Thursday, September 25, 2025 10:29 PM
> > >>>>>
> > >>>>> On 25.09.2025 11:41, Penny, Zheng wrote:
> > >>>>>>> -----Original Message-----
> > >>>>>>> From: Jan Beulich <jbeulich@xxxxxxxx>
> > >>>>>>> Sent: Thursday, September 11, 2025 9:30 PM
> > >>>>>>>
> > >>>>>>> On 10.09.2025 09:38, Penny Zheng wrote:
> > >>>>>>>> --- a/xen/include/xsm/xsm.h
> > >>>>>>>> +++ b/xen/include/xsm/xsm.h
> > >>>>>>>> @@ -55,8 +55,8 @@ struct xsm_ops {
> > >>>>>>>>      void (*security_domaininfo)(struct domain *d,
> > >>>>>>>>                                  struct xen_domctl_getdomaininfo *info);
> > >>>>>>>>      int (*domain_create)(struct domain *d, uint32_t ssidref);
> > >>>>>>>> -    int (*getdomaininfo)(struct domain *d);
> > >>>>>>>>  #ifdef CONFIG_MGMT_HYPERCALLS
> > >>>>>>>> +    int (*getdomaininfo)(struct domain *d);
> > >>>>>>>>      int (*domctl_scheduler_op)(struct domain *d, int op);
> > >>>>>>>>      int (*sysctl_scheduler_op)(int op);
> > >>>>>>>>      int (*set_target)(struct domain *d, struct domain *e);
> > >>>>>>>> @@ -234,7 +234,11 @@ static inline int xsm_domain_create(
> > >>>>>>>>
> > >>>>>>>>  static inline int xsm_getdomaininfo(xsm_default_t def, struct domain *d)
> > >>>>>>>>  {
> > >>>>>>>> +#ifdef CONFIG_MGMT_HYPERCALLS
> > >>>>>>>>      return alternative_call(xsm_ops.getdomaininfo, d);
> > >>>>>>>> +#else
> > >>>>>>>> +    return -EOPNOTSUPP;
> > >>>>>>>> +#endif
> > >>>>>>>>  }
> > >>>>>>>
> > >>>>>>> This is in use by a Xenstore sysctl and a Xenstore domctl. The
> > >>>>>>> sysctl is hence already broken with the earlier series. Now the
> > >>>>>>> domctl is also being screwed up. I don't think MGMT_HYPERCALLS
> > >>>>>>> really ought to extend to any operations available to other than
> > >>>>>>> the core toolstack. That's the Xenstore ones here, but also the
> > >>>>>>> ones used by qemu (whether run in Dom0 or a stubdom).
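
(For readers without the tree at hand: the two users in question are
presumably XEN_SYSCTL_getdomaininfolist and XEN_DOMCTL_getdomaininfo. A rough
sketch of the domctl side follows, with the special domain lookup and error
handling elided; it is an illustration, not the exact source, but it shows
why the -EOPNOTSUPP fallback bites:)

    case XEN_DOMCTL_getdomaininfo:
        /*
         * The hook is invoked with XSM_XS_PRIV, i.e. a Xenstore stub
         * domain -- not only the control domain -- is expected to be able
         * to issue this sub-op.  With the hook compiled down to
         * -EOPNOTSUPP, it now fails for every caller.
         */
        ret = xsm_getdomaininfo(XSM_XS_PRIV, d);
        if ( ret )
            break;

        getdomaininfo(d, &op->u.getdomaininfo);
        copyback = true;
        break;
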
> > >>>>>>
> > >>>>>> Maybe it is not only limited to the core toolstack. In
> > >>>>>> dom0less/hyperlaunched scenarios, hypercalls are strictly limited.
> > >>>>>> QEMU is also limited to the pvh machine type and to very
> > >>>>>> restricted functionality (only acting as a backend for a few
> > >>>>>> virtio-pci devices). @Andryuk, Jason @Stabellini, Stefano: am I
> > >>>>>> understanding our scenario for upstream here correctly and
> > >>>>>> thoroughly?
> > >>>>>> Tracing the code, if Xenstore is created as a stub domain, it
> > >>>>>> requires the getdomaininfo domctl to acquire related info. Sorry,
> > >>>>>> I haven't found how it is called in QEMU...
> > >>>>>
> > >>>>> It's not "it"; it's different ones. First and foremost I was thinking
> > >>>>> of
> > >>>>>  * XEN_DOMCTL_ioport_mapping
> > >>>>>  * XEN_DOMCTL_memory_mapping
> > >>>>>  * XEN_DOMCTL_bind_pt_irq
> > >>>>>  * XEN_DOMCTL_unbind_pt_irq
> > >>>>> but there may be others (albeit per the dummy xsm_domctl() this is
> > >>>>> the full set). As a general criterion, anything using XSM_DM_PRIV
> > >>>>> checking can in principle be called by qemu.
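
(Side note on the XSM_DM_PRIV criterion, for anyone following along without
the source: in simplified form the dummy policy's default action behaves
roughly like the sketch below. The function name here is made up, and the
real helper, xsm_default_action() in xen/include/xsm/dummy.h, carries extra
speculation hardening, so treat this purely as an illustration.)

    static int sketch_default_action(xsm_default_t action,
                                     struct domain *src, struct domain *target)
    {
        /* The control domain may perform any of the actions below. */
        if ( is_control_domain(src) )
            return 0;

        switch ( action )
        {
        case XSM_HOOK:      /* unrestricted */
            return 0;

        case XSM_TARGET:    /* a domain acting on itself, or ... */
            if ( src == target )
                return 0;
            /* fall through */
        case XSM_DM_PRIV:   /* ... the device model (qemu) serving the target */
            if ( target && src->target == target )
                return 0;
            return -EPERM;

        case XSM_XS_PRIV:   /* the Xenstore domain, or the device model */
            if ( is_xenstore_domain(src) ||
                 (target && src->target == target) )
                return 0;
            return -EPERM;

        case XSM_PRIV:      /* the control domain only (handled above) */
        default:
            return -EPERM;
        }
    }

So for the four domctls listed above the dummy policy deliberately lets the
device model through, not just the core toolstack, which is why compiling
their hooks out wholesale is problematic.
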
> > >>>>>
> > >>>>
> > >>>> Understood.
> > >>>> I assume that they are all for device passthrough. We are not
> > >>>> accepting device passthrough via the core toolstack in
> > >>>> dom0less/hyperlaunched scenarios. Jason has developed device
> > >>>> passthrough through device tree, accepting only "statically
> > >>>> configured" passthrough in the dom0less/hyperlaunched scenario.
> > >>>> While it is still internal, it may be the only accepted way to do
> > >>>> device passthrough in the dom0less/hyperlaunched scenario.
> > >>>
> > >>> Right, but no matter what your goals, the upstream contributions
> > >>> need to be self-consistent. I.e. not (risk to) break other
> > >>> functionality. (Really the four domctl-s mentioned above might
> > >>> better have been put elsewhere, e.g. as dm-ops. Moving them may be
> > >>> an option here.)
> > >>
> > >> Understood.
> > >> I'll move them all to dm-ops.
> > > 
> > > Hi Penny, Jan, I advise against this.
> > > 
> > > I think it is clear that there are open questions on how to deal with
> > > the safety scenarios. I briefly mentioned some of the issues last week
> > > at Xen Summit. One example is the listdomains hypercall that should be
> > > available to the control domain. We cannot resolve all problems with
> > > this patch series. I think we should follow a simpler plan:
> > > 
> > > 1) introduce CONFIG_MGMT_HYPERCALLS the way this patch series does,
> > >    removing all domctls and sysctls
> > > 
> > > 2) make further adjustments, such as making available the listdomains
> > >    hypercall and/or the hypercalls listed by Jan as a second step after
> > >    it
> > 
> > I'm going to be okay-ish with that as long as the help text of the Kconfig
> > option clearly mentions those extra pitfalls.
> 
> +0

Ahah, I mistyped this :-)

I meant +1, in the sense that I am happy with the idea of the Kconfig help
text clearly mentioning the pitfalls.



 

