
Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for ioreq server

> -----Original Message-----
> From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> George Dunlap
> Sent: 06 July 2015 13:36
> To: Yu Zhang
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org); Jan Beulich; Andrew Cooper;
> Paul Durrant; Kevin Tian; zhiyuan.lv@xxxxxxxxx
> Subject: Re: [Xen-devel] [PATCH v2 1/2] Resize the MAX_NR_IO_RANGES for
> ioreq server
> On Mon, Jul 6, 2015 at 7:25 AM, Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
> wrote:
> > MAX_NR_IO_RANGES is the maximum number of discrete ranges an
> > ioreq server can track. This patch raises its value to 8k, so
> > that more ranges can be tracked for XenGT on next-generation
> > Intel platforms. Future patches can make the limit a toolstack
> > tunable, with MAX_NR_IO_RANGES serving as the default.
> >
> > Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
> I said this at the Hackathon, and I'll say it here:  I think this is
> the wrong approach.
> The problem here is not that you don't have enough memory ranges.  The
> problem is that you are not tracking memory ranges, but individual
> pages.
> You need to make a new interface that allows you to tag individual
> gfns as p2m_mmio_write_dm, and then allow one ioreq server to get
> notifications for all such writes.

I think that is conflating things. It's quite conceivable that more than one 
ioreq server will handle write_dm pages. If we had enough types to have two 
page types per server then I'd agree with you, but we don't.


>  -George