
Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 16 February 2016 10:34
> To: Paul Durrant
> Cc: Andrew Cooper; George Dunlap; Ian Campbell; Ian Jackson; Stefano
> Stabellini; Wei Liu; Kevin Tian; Zhiyuan Lv; Zhang Yu; 
> xen-devel@xxxxxxxxxxxxx;
> George Dunlap; Keir (Xen.org)
> Subject: RE: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> max_wp_ram_ranges.
> 
> >>> On 16.02.16 at 09:50, <Paul.Durrant@xxxxxxxxxx> wrote:
> >>  -----Original Message-----
> >> From: Tian, Kevin [mailto:kevin.tian@xxxxxxxxx]
> >> Sent: 16 February 2016 07:23
> >> To: Paul Durrant; George Dunlap
> >> Cc: Jan Beulich; George Dunlap; Wei Liu; Ian Campbell; Andrew Cooper;
> >> Zhang Yu; xen-devel@xxxxxxxxxxxxx; Stefano Stabellini; Lv, Zhiyuan; Ian
> >> Jackson; Keir (Xen.org)
> >> Subject: RE: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> >> max_wp_ram_ranges.
> >>
> >> > From: Paul Durrant [mailto:Paul.Durrant@xxxxxxxxxx]
> >> > Sent: Friday, February 05, 2016 7:24 PM
> >> >
> >> > > -----Original Message-----
> >> > > From: dunlapg@xxxxxxxxx [mailto:dunlapg@xxxxxxxxx] On Behalf Of
> >> > > George Dunlap
> >> > > Sent: 05 February 2016 11:14
> >> > > To: Paul Durrant
> >> > > Cc: Jan Beulich; George Dunlap; Kevin Tian; Wei Liu; Ian Campbell;
> >> Andrew
> >> > > Cooper; Zhang Yu; xen-devel@xxxxxxxxxxxxx; Stefano Stabellini;
> >> > > zhiyuan.lv@xxxxxxxxx; Ian Jackson; Keir (Xen.org)
> >> > > Subject: Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter
> >> > > max_wp_ram_ranges.
> >> > >
> >> > > On Fri, Feb 5, 2016 at 9:24 AM, Paul Durrant <Paul.Durrant@xxxxxxxxxx>
> >> > > wrote:
> >> > > > Utilizing the default server is a backwards step. GVT-g would have
> >> > > > to use the old HVM_PARAM mechanism to cause its emulator to become
> >> > > > the default. I think a more appropriate mechanism would be for
> >> > > > p2m_mmio_write_dm to become something like 'p2m_ioreq_server_write'
> >> > > > and then have a hypercall to allow it to be mapped to a particular
> >> > > > ioreq server.
> >> > > > Obviously only one could claim it but, with a p2t, the bit could be
> >> > > > re-purposed to simply mean 'go look in the p2t' for more information
> >> > > > and then the p2t could be structured to allow emulations to be
> >> > > > steered to one of many ioreq servers (for read and/or write
> >> > > > emulation).
> >> > >
> >> > > Right; I had in mind that Xen would allow at any given time a max of N
> >> > > ioreq servers to register for mmio_write_dm ranges, first-come
> >> > > first-served; with 'N' being '1' to begin with.  If a second ioreq
> >> > > server requested mmio_write_dm functionality, it would get -EBUSY.
> >> > > This would allow their current setup (one qemu dm which doesn't do
> >> > > mmio_write_dm, one xengt dm which does) to work without needing to
> >> > > worry any more about how many pages might need to be tracked (either
> >> > > for efficiency or correctness).
> >> > >
> >> > > We could then extend this to some larger number (4 seems pretty
> >> > > reasonable to me) either by adding an extra 3 types, or by some other
> >> > > method (such as the one Paul suggests).
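
(Purely for illustration -- none of the names below come from the series --
the first-come first-served claim George describes, with 'N' fixed at 1,
could be as small as the sketch that follows.)

/* Single-slot, first-come first-served claim for the write-dm type:
 * the first ioreq server to register wins, a second attempt gets
 * -EBUSY.  Illustrative only; all names are invented. */
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t ioservid_t;      /* as in Xen's public ioreq interface */

static bool write_dm_claimed;
static ioservid_t write_dm_owner;

static int claim_write_dm(ioservid_t id)
{
    if ( write_dm_claimed )
        return -EBUSY;            /* slot already taken */
    write_dm_owner = id;
    write_dm_claimed = true;
    return 0;
}

Raising 'N' later would just turn the single slot into a small array.
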
> >> >
> >> > I think it would be best to do away with the 'write dm' name though. I
> >> > would like to see it be possible to steer reads+writes, as well as
> >> > writes (and maybe just reads?) to a particular ioreq server based on
> >> > type information. So maybe we just call the existing type
> >> > 'p2m_ioreq_server' and then, in the absence of a p2t, hardcode this to
> >> > go to whichever emulator makes the new TBD hypercall.
> >> > I think we need a proper design at this point. Given that it's Chinese
> >> > New Year maybe I'll have a stab in Yu's absence.
> >> >
> >>
> >> Hi, Paul, what about your progress on this?
> >>
> >> My feeling is that we do not need a new hypercall to explicitly claim
> >> whether an ioreq server wants to handle write requests. It can be
> >> marked implicitly when a specific page is requested for write-protection
> >> through a specific ioreq channel, and then that ioreq server will claim
> >> the attribute automatically.
> >
> > Hi Kevin,
> >
> > Is there a hypercall to do that? Maybe I'm missing something, but I was
> > under the impression that the only way to set write protection was via
> > HVMOP_set_mem_type, and that does not carry an ioreq server id.
> >
> > I'm afraid I have made little progress due to the distractions of trying
> > to get some patches into Linux, but my thoughts are around replacing
> > HVM_mmio_write_dm with something like HVM_emulate_0 (i.e. the zero-th
> > example of a type that requires emulation, to be followed by others in
> > future) and then adding a hypercall along the lines of
> > HVMOP_map_mem_type_to_ioreq_server, which will take an ioreq server id,
> > a type, and flags saying whether it wishes to handle reads and/or writes
> > to that type.
> >
> > Thoughts (anyone)?
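
(To make that concrete, here is a sketch of what the argument structure
might look like -- the layout, field names and flag names are guesses
rather than a defined interface.)

/* Hypothetical argument for an HVMOP_map_mem_type_to_ioreq_server
 * hypercall: an ioreq server id, the memory type being claimed, and
 * flags selecting read and/or write emulation.  Illustrative only. */
#include <stdint.h>

typedef uint16_t domid_t;         /* as in Xen's public headers */
typedef uint16_t ioservid_t;

#define XEN_IOREQ_MEM_ACCESS_READ   (1u << 0)
#define XEN_IOREQ_MEM_ACCESS_WRITE  (1u << 1)

struct xen_hvm_map_mem_type_to_ioreq_server {
    domid_t domid;        /* IN - target domain */
    ioservid_t id;        /* IN - ioreq server claiming the type */
    uint16_t type;        /* IN - memory type, e.g. an HVM_emulate_0 equivalent */
    uint16_t pad;
    uint32_t flags;       /* IN - XEN_IOREQ_MEM_ACCESS_READ and/or ..._WRITE */
};
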
> 
> I think as a general idea also allowing reads to be intercepted is
> nice, but would incur quite a few changes which we don't currently
> have a user for. Hence I'd suggest making the public interface
> ready for that without actually implementing hypervisor support.
> 

Well, we need some form of hypervisor support to replace what's already there.

I'd envisaged that setting the HVM_emulate_0 type on a page would mean nothing 
until an ioreq server claims it (i.e. the page stays as r/w RAM). When the ioreq 
server makes the claim, the EPT is changed according to whether reads and/or 
writes are wanted, and the fault handler then steers transactions to the (single 
at the moment) ioreq server. I'll need to code up a PoC to make sure I'm not 
barking up the wrong tree though.
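
(An entirely illustrative sketch of that behaviour -- all of the names
below are invented, and this is not code from the series.)

/* Once a server has claimed the HVM_emulate_0 type, pages of that type
 * lose read and/or write permission in the EPT and the resulting faults
 * are steered to that server; until then they behave as ordinary RAM. */
#include <stdbool.h>
#include <stdint.h>

typedef uint16_t ioservid_t;

#define CLAIM_READ   (1u << 0)
#define CLAIM_WRITE  (1u << 1)

struct emulate_0_claim {
    bool claimed;
    uint32_t flags;               /* CLAIM_READ and/or CLAIM_WRITE */
    ioservid_t owner;
};

/* EPT permissions to install for a page of this type. */
static void emulate_0_ept_access(const struct emulate_0_claim *c,
                                 bool *allow_read, bool *allow_write)
{
    *allow_read  = !c->claimed || !(c->flags & CLAIM_READ);
    *allow_write = !c->claimed || !(c->flags & CLAIM_WRITE);
}

/* Called from the EPT violation handler: returns true and the owning
 * server if the access should be emulated, false to treat as RAM. */
static bool emulate_0_steer(const struct emulate_0_claim *c, bool is_write,
                            ioservid_t *server)
{
    if ( !c->claimed )
        return false;
    if ( !(c->flags & (is_write ? CLAIM_WRITE : CLAIM_READ)) )
        return false;
    *server = c->owner;
    return true;
}

If a p2t is added later, the steering function would return a per-page
owner from the p2t instead of the single claimant.
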

  Paul

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

