Re: [Xen-devel] [PATCH v3 3/3] tools: introduce parameter max_wp_ram_ranges.
> -----Original Message-----
[snip]
> > > > I'm afraid I have made little progress due to the distractions of
> > > > trying to get some patches into Linux, but my thoughts are around
> > > > replacing HVM_mmio_write_dm with something like HVM_emulate_0
> > > > (i.e. the zeroth example of a type that requires emulation, to be
> > > > followed by others in future) and then adding a hypercall along
> > > > the lines of HVMOP_map_mem_type_to_ioreq_server which will take an
> > > > ioreq server id, a type, and flags saying whether it wishes to
> > > > handle reads and/or writes to that type.
> > > >
> > > > Thoughts (anyone)?
> > >
> > > I think as a general idea also allowing reads to be intercepted is
> > > nice, but it would incur quite a few changes which we don't
> > > currently have a user for. Hence I'd suggest making the public
> > > interface ready for that without actually implementing hypervisor
> > > support.
> > >
> >
> > Well, we need some form of hypervisor support to replace what's
> > already there.
> >
> > I'd envisaged that setting the HVM_emulate_0 type on a page would
> > mean nothing until an
>
> For "mean nothing", what is the default policy then if the guest
> happens to access it before any ioreq server claims it?
>

My thoughts were that, since no specific emulation has yet been requested
(because no ioreq server has yet claimed the type), the default policy is
to treat it as r/w RAM, as I said below. This is because I think the only
legal type transitions should be from HVMMEM_ram_rw to HVMMEM_emulate_0
and back again.

> > ioreq server claims it (i.e. it stays as r/w RAM), but when the ioreq
> > server makes the claim the EPT is changed according to whether reads
> > and/or writes are wanted, and then the fault handler steers
> > transactions to the (single, at the moment) ioreq server. I'll need
> > to code up a PoC to make sure I'm not barking up the wrong tree
> > though.
> >
>
> Curious: is there any reason why we must have an HVM_emulate_0
> placeholder first, and why we can't allow an ioreq server to claim any
> existing type?

Which type were you thinking of? Remember that the ioreq server would be
claiming *all* pages of that type.

> Thinking about XenGT usage, I cannot envisage when a page should be
> set to HVM_emulate_0 first. The write-protection operation is dynamic,
> according to guest page table operations, upon which we'll directly
> jump to the claim phase...

I don't imagine that things would happen that way round in the common
case. For XenGT I'd expect the ioreq server to immediately claim
HVMMEM_emulate_0 and then set that type on any page that it wants to trap
accesses on (which means that I am assuming the same emulation - i.e.
write accesses only - is desired for all pages... but I think that is
indeed the case).

>
> btw, does this design consider the case where multiple ioreq servers
> may claim the same page?

Yes, it does, and there are currently insufficient page types to allow
any more than a single ioreq server to claim a type. My plan is that, in
future, we can add a p2t mapping table to allow for more types and then
introduce HVMMEM_ioreq_1, HVMMEM_ioreq_2, etc.

> For example, different usages may both want to capture write requests
> on the same set of pages (say XenGT selectively write-protects a
> subset of pages due to shadow GTT, while another agent wants to
> monitor all guest writes to any guest memory page).

Monitoring is a different thing altogether. Emulation is costly and not
something you'd want to use for that purpose. If you want to monitor
writes then log-dirty already exists for that purpose.
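To make the interface a little more concrete, something along the lines
of the sketch below is what I have in mind. To be clear, every name,
field width and constant here is a placeholder of mine rather than a
settled ABI; the real thing will be in the design doc:

    /*
     * Provisional sketch only -- struct name, field layout and flag
     * names are placeholders, not a settled ABI.
     */
    #include <stdint.h>

    typedef uint16_t ioservid_t;

    /* Which accesses to the claimed type the ioreq server wants. */
    #define HVMOP_IOREQ_MEM_ACCESS_READ   (1u << 0)
    #define HVMOP_IOREQ_MEM_ACCESS_WRITE  (1u << 1)

    /* Argument for a hypothetical HVMOP_map_mem_type_to_ioreq_server. */
    struct xen_hvm_map_mem_type_to_ioreq_server {
        ioservid_t id;    /* ioreq server making (or dropping) the claim */
        uint16_t   type;  /* e.g. HVMMEM_emulate_0 */
        uint32_t   flags; /* OR of the ACCESS flags above; 0 drops the claim */
    };

The expected usage, for XenGT at least, would be a single claim of
HVMMEM_emulate_0 with the write flag at start of day, with individual
gfns then flipped between HVMMEM_ram_rw and HVMMEM_emulate_0 via the
existing set-mem-type path as the shadow GTTs require.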
>
> Thanks
> Kevin

I hope my explanation helped. I think things will be clearer once I've
had a chance to actually put together a design doc and hack up a PoC
(probably only for EPT at first).

Paul

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel