
RE: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()



> -----Original Message-----
> From: Oleksandr <olekstysh@xxxxxxxxx>
> Sent: 09 December 2020 20:36
> To: paul@xxxxxxx
> Cc: 'Jan Beulich' <jbeulich@xxxxxxxx>; 'Oleksandr Tyshchenko' 
> <oleksandr_tyshchenko@xxxxxxxx>;
> 'Stefano Stabellini' <sstabellini@xxxxxxxxxx>; 'Julien Grall' 
> <julien@xxxxxxx>; 'Volodymyr Babchuk'
> <Volodymyr_Babchuk@xxxxxxxx>; 'Andrew Cooper' <andrew.cooper3@xxxxxxxxxx>; 
> 'George Dunlap'
> <george.dunlap@xxxxxxxxxx>; 'Ian Jackson' <iwj@xxxxxxxxxxxxxx>; 'Wei Liu' 
> <wl@xxxxxxx>; 'Julien Grall'
> <julien.grall@xxxxxxx>; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Subject: Re: [PATCH V3 17/23] xen/ioreq: Introduce domain_has_ioreq_server()
> 
> 
> Hi Paul.
> 
> 
> >>>>>> On 30.11.2020 11:31, Oleksandr Tyshchenko wrote:
> >>>>>>> --- a/xen/include/xen/ioreq.h
> >>>>>>> +++ b/xen/include/xen/ioreq.h
> >>>>>>> @@ -55,6 +55,20 @@ struct ioreq_server {
> >>>>>>>         uint8_t                bufioreq_handling;
> >>>>>>>     };
> >>>>>>>     +/*
> >>>>>>> + * This should only be used when d == current->domain and it's not
> >>>>>>> paused,
> >>>>>> Is the "not paused" part really relevant here? Besides it being rare
> >>>>>> that the current domain would be paused (if so, it's in the process
> >>>>>> of having all its vCPU-s scheduled out), does this matter at all?
> >>>>> No, it isn't relevant, I will drop it.
> >>>>>
> >>>>>
> >>>>>> Apart from this the patch looks okay to me, but I'm not sure it
> >>>>>> addresses Paul's concerns. Iirc he had suggested to switch back to
> >>>>>> a list if doing a swipe over the entire array is too expensive in
> >>>>>> this specific case.
> >>>>> We would like to avoid doing any extra work in
> >>>>> leave_hypervisor_to_guest() if possible.
> >>>>> But not only there: the logic that checks/sets the
> >>>>> mapcache_invalidation variable could also be avoided if a domain
> >>>>> doesn't use an IOREQ server...
> >>>> Are you OK with this patch (the common part of it)?
> >>> How much of a performance benefit is this? The array is small, so simply
> >>> counting the non-NULL entries should be pretty quick.
> >> I didn't measure how much this call costs. In our system we run three
> >> domains. The emulator is in DomD only, so I would like to avoid calling
> >> vcpu_ioreq_handle_completion() for every Dom0/DomU vCPU if there is no
> >> real need to do it.
> > This is not relevant to the domain that the emulator is running in; it
> > concerns the domains which the emulator is servicing. How many of those
> > are there?
> Err, yes, I wasn't precise in my example.
> A single emulator is running in DomD and servicing DomU. So with the
> helper in place, vcpu_ioreq_handle_completion() gets called only for
> DomU vCPUs (as expected).
> Without the optimization, vcpu_ioreq_handle_completion() gets called
> for _all_ vCPUs, which is an unnecessary extra action for Dom0 and DomD
> vCPUs.
> 
> 
> >
> >> On Arm, vcpu_ioreq_handle_completion() is called with IRQs enabled, so
> >> the call is accompanied by a corresponding irq_enable/irq_disable pair.
> >> These unneeded actions could be avoided by using this simple one-line
> >> helper...
> >>
> > The helper may be one line but there is more to the patch than that. I
> > still think you could just walk the array in the helper rather than
> > keeping a running occupancy count.
> 
> OK, is the implementation below close to what you propose? If yes, I
> will update the helper and drop the nr_servers variable.
> 
> bool domain_has_ioreq_server(const struct domain *d)
> {
>      const struct ioreq_server *s;
>      unsigned int id;
> 
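>      /*
>       * The iterator skips unallocated slots, so entering the loop body
>       * at all means at least one IOREQ server is registered.
>       */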
>      FOR_EACH_IOREQ_SERVER(d, id, s)
>          return true;
> 
>      return false;
> }

Yes, that's what I had in mind.

  Paul
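
A minimal sketch of how such a helper could gate the completion handling on
the Arm exit path discussed above. The surrounding function name
(check_for_vcpu_work()) and the exact return-value convention of
vcpu_ioreq_handle_completion() are assumptions for illustration, not quoted
from the series:

/*
 * Illustrative sketch, not the actual patch.  It assumes the Arm exit
 * path has a check_for_vcpu_work()-style hook reached from
 * leave_hypervisor_to_guest(), and that vcpu_ioreq_handle_completion()
 * returns false while the vCPU still has IOREQ work pending.  The real
 * code would additionally live under CONFIG_IOREQ_SERVER.
 */
static bool check_for_vcpu_work(void)
{
    struct vcpu *v = current;

    /*
     * Skip the completion check (and the IRQ enable/disable around it)
     * for domains that have no registered IOREQ server, e.g. Dom0 and
     * DomD in the example above.
     */
    if ( domain_has_ioreq_server(v->domain) )
    {
        bool handled;

        local_irq_enable();
        handled = vcpu_ioreq_handle_completion(v);
        local_irq_disable();

        if ( !handled )
            return true; /* More work to do before returning to the guest. */
    }

    return false;
}

With that guard, vCPUs of domains without a registered IOREQ server (Dom0 and
DomD in the example above) skip both the call and the IRQ enable/disable
around it, which is exactly the saving described in the thread.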

> 
> --
> Regards,
> 
> Oleksandr Tyshchenko
