
Re: [Xen-devel] [PATCH v4 4/8] ioreq-server: on-demand creation of ioreq server


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Tue, 8 Apr 2014 10:11:07 +0000
  • Accept-language: en-GB, en-US
  • Cc: "Keir \(Xen.org\)" <keir@xxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Tue, 08 Apr 2014 10:11:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPToYLsncwi1nxxkGqosSybHXFf5sF8KKAgAGNTnD//+OzAIAAJeHA
  • Thread-topic: [PATCH v4 4/8] ioreq-server: on-demand creation of ioreq server

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 08 April 2014 10:51
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> Subject: RE: [PATCH v4 4/8] ioreq-server: on-demand creation of ioreq server
> 
> >>> On 08.04.14 at 11:35, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
> >> > @@ -645,13 +643,68 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
> >> >      spin_unlock(&s->lock);
> >> >  }
> >> >
> >> > -static int hvm_create_ioreq_server(struct domain *d, domid_t domid)
> >> > +static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
> >> >  {
> >> > -    struct hvm_ioreq_server *s;
> >> > +    struct list_head *entry, *next;
> >> >
> >> > -    s = xzalloc(struct hvm_ioreq_server);
> >> > -    if ( !s )
> >> > -        return -ENOMEM;
> >> > +    spin_lock(&s->lock);
> >> > +
> >> > +    list_for_each_safe ( entry, next, &s->ioreq_vcpu_list )
> >> > +    {
> >> > +        struct hvm_ioreq_vcpu *sv = container_of(entry,
> >> > +                                                 struct hvm_ioreq_vcpu,
> >> > +                                                 list_entry);
> >>
> >> list_for_each_entry_safe() avoids the need for the explicit use of
> >> container_of(), making the code easier to read.
> >>
> >
> > It also expands the scope of sv.
> 
> Which does no harm afaict. "entry" (which effectively is "sv") has the
> same wider scope already.
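
For reference, a sketch of how that loop reads with
list_for_each_entry_safe() (names taken from the patch above; the loop
body is abbreviated to the list manipulation):

    struct hvm_ioreq_vcpu *sv, *next;

    spin_lock(&s->lock);

    list_for_each_entry_safe ( sv, next, &s->ioreq_vcpu_list, list_entry )
    {
        /* sv is already the containing hvm_ioreq_vcpu, so no explicit
         * container_of() is needed. */
        list_del(&sv->list_entry);
        xfree(sv);
    }

    spin_unlock(&s->lock);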
> 
> >> > @@ -4570,6 +4724,18 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> >> >              case HVM_PARAM_ACPI_S_STATE:
> >> >                  a.value = d->arch.hvm_domain.is_s3_suspended ? 3 : 0;
> >> >                  break;
> >> > +            case HVM_PARAM_IOREQ_PFN:
> >> > +            case HVM_PARAM_BUFIOREQ_PFN:
> >> > +            case HVM_PARAM_BUFIOREQ_EVTCHN: {
> >> > +                domid_t domid;
> >> > +
> >> > +                /* May need to create server */
> >> > +                domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
> >> > +                rc = hvm_create_ioreq_server(d, domid);
> >>
> >> Pretty odd that you do this on reads, but not on writes. What's the
> >> rationale behind this?
> >>
> >
> > The default server does not actually need to be there until something (i.e.
> > QEMU) looks for it by reading one of these params. In future I hope that
> > QEMU can be modified to use the explicit ioreq server creation API and we can
> > eventually drop the default server.
> 
> Oh, okay - the writer of these values just drops them in without
> caring what Xen (or another consumer) does with them?
> 

Yes, that's right. setup_guest() in xc_hvm_build_x86.c writes the pfns in,
along with the other specials (like store, etc.). In the case of these,
nothing actually needs to happen until QEMU goes looking for what the tools
wrote.
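
For illustration, the two sides then look roughly like this (a sketch
only; the special_pfn()/SPECIALPAGE_* names follow xc_hvm_build_x86.c
but the exact page indices don't matter here):

    /* Toolstack: setup_guest() drops the magic pfns into the params */
    xc_set_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN,
                     special_pfn(SPECIALPAGE_IOREQ));
    xc_set_hvm_param(xch, dom, HVM_PARAM_BUFIOREQ_PFN,
                     special_pfn(SPECIALPAGE_BUFIOREQ));

    /* QEMU: the first read of one of these params is what now triggers
     * hvm_create_ioreq_server() for the default server */
    unsigned long ioreq_pfn;
    xc_get_hvm_param(xch, dom, HVM_PARAM_IOREQ_PFN, &ioreq_pfn);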

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

