Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for ioreq server



> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@xxxxxxxxxxxxx]
> Sent: 06 July 2015 14:28
> To: Paul Durrant; George Dunlap
> Cc: Yu Zhang; xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich; Andrew
> Cooper; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for
> ioreq server
> 
> On 07/06/2015 02:09 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> >> George Dunlap
> >> Sent: 06 July 2015 13:50
> >> To: Paul Durrant
> >> Cc: Yu Zhang; xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich;
> >> Andrew Cooper; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> >> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES
> >> for ioreq server
> >>
> >> On Mon, Jul 6, 2015 at 1:38 PM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> wrote:
> >>>> -----Original Message-----
> >>>> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> >>>> George Dunlap
> >>>> Sent: 06 July 2015 13:36
> >>>> To: Yu Zhang
> >>>> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich; Andrew
> >>>> Cooper; Paul Durrant; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> >>>> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the
> >>>> MAX_NR_IO_RANGES for ioreq server
> >>>>
> >>>> On Mon, Jul 6, 2015 at 7:25 AM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
> >>>> wrote:
> >>>>> MAX_NR_IO_RANGES is used by ioreq server as the maximum
> >>>>> number of discrete ranges to be tracked. This patch changes
> >>>>> its value to 8k, so that more ranges can be tracked on next
> >>>>> generation of Intel platforms in XenGT. Future patches can
> >>>>> extend the limit to be toolstack tunable, and MAX_NR_IO_RANGES
> >>>>> can serve as a default limit.
> >>>>>
> >>>>> Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
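
For reference, the change under discussion amounts to a one-line bump of a
constant. A minimal sketch of the diff, assuming the header path and a
pre-patch value of 256 (neither is stated in this thread):

    --- a/xen/include/asm-x86/hvm/domain.h
    +++ b/xen/include/asm-x86/hvm/domain.h
    -#define MAX_NR_IO_RANGES  256
    +#define MAX_NR_IO_RANGES  8192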
> >>>>
> >>>> I said this at the Hackathon, and I'll say it here:  I think this is
> >>>> the wrong approach.
> >>>>
> >>>> The problem here is not that you don't have enough memory ranges.
> >>>> The problem is that you are not tracking memory ranges, but
> >>>> individual pages.
> >>>>
> >>>> You need to make a new interface that allows you to tag individual
> >>>> gfns as p2m_mmio_write_dm, and then allow one ioreq server to get
> >>>> notifications for all such writes.
> >>>>
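
Concretely, the interface George is proposing might look something like the
sketch below. All names here are hypothetical illustrations (only the
p2m_mmio_write_dm page type itself is real); the point is the shape of the
interface, not a concrete patch:

    /* Hypothetical toolstack call: tag a single gfn as p2m_mmio_write_dm
     * so that guest writes to it trap to the hypervisor. */
    int xc_hvm_map_gfn_to_write_dm(xc_interface *xch, domid_t domid,
                                   uint64_t gfn);

    /* Hypothetical dispatch on the hypervisor's write-fault path: no
     * range lookup at all, just the page type and the one ioreq server
     * that registered for write_dm notifications. */
    if ( p2mt == p2m_mmio_write_dm )
        return forward_write_to_server(d->write_dm_server, gfn, &ioreq);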
> >>>
> >>> I think that is conflating things. It's quite conceivable that more
> >>> than one ioreq server will handle write_dm pages. If we had enough
> >>> types to have two page types per server then I'd agree with you, but
> >>> we don't.
> >>
> >> What's conflating things is using an interface designed for *device
> >> memory ranges* to instead *track writes to gfns*.
> >
> > What's the difference? Are you asserting that all device memory ranges
> > have read side effects and therefore write_dm is not a reasonable
> > optimization to use? I would not want to make that assertion.
> 
> Using write_dm is not the problem; it's having thousands of memory
> "ranges" of 4k each that I object to.
> 
> Which is why I suggested adding an interface to request updates to gfns
> (by marking them write_dm), rather than abusing the io range interface.
> 

And it's the assertion that use of write_dm will only be relevant to gfns, and
that all such notifications need only go to a single ioreq server, that I have
a problem with. Whilst the use of io ranges to track gfn updates is, I agree,
not ideal, I think the overloading of write_dm is not a step in the right
direction.

  Paul

>  -George
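
The crux of Paul's objection is visible in how a write_dm fault is routed
today: each ioreq server owns its own rangesets, so a single page type can
be shared by several emulators and disambiguated per gfn. A simplified
sketch follows; rangeset_contains_singleton() is a real Xen primitive, but
the structures and field names here are illustrative, not the real code:

    struct ioreq_server {
        struct list_head entry;
        struct rangeset  *mem_ranges;   /* gfn ranges this server claimed */
    };

    /* Roughly what hvm_select_ioreq_server() does for a trapped write:
     * walk the servers and hand the ioreq to whichever one claimed the
     * gfn.  This is also where MAX_NR_IO_RANGES bites when the "ranges"
     * are thousands of individual 4k pages. */
    static struct ioreq_server *select_server(struct domain *d,
                                              unsigned long gfn)
    {
        struct ioreq_server *s;

        list_for_each_entry ( s, &d->ioreq_server_list, entry )
            if ( rangeset_contains_singleton(s->mem_ranges, gfn) )
                return s;

        return d->default_ioreq_server; /* fall back to default emulator */
    }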
