
Re: [RFC PATCH v2 1/2] xen/memory : Add a stats_table resource type



On Thu, Feb 23, 2023 at 01:42:08PM +0100, Jan Beulich wrote:
> On 23.02.2023 13:16, Matias Ezequiel Vara Larsen wrote:
> > On Fri, Feb 17, 2023 at 03:10:53PM +0100, Jan Beulich wrote:
> >> On 17.02.2023 10:29, Matias Ezequiel Vara Larsen wrote:
> >>> On Fri, Feb 17, 2023 at 09:57:43AM +0100, Jan Beulich wrote:
> >>>> On 17.02.2023 09:50, Matias Ezequiel Vara Larsen wrote:
> >>>>> On Wed, Dec 14, 2022 at 08:56:57AM +0100, Jan Beulich wrote:
> >>>>>> On 14.12.2022 08:29, Jan Beulich wrote:
> >>>>>>> On 07.10.2022 14:39, Matias Ezequiel Vara Larsen wrote:
> >>>>>>>> +static int stats_vcpu_alloc_mfn(struct domain *d)
> >>>>>>>> +{
> >>>>>>>> +    struct page_info *pg;
> >>>>>>>> +
> >>>>>>>> +    pg = alloc_domheap_page(d, MEMF_no_refcount);
> >>>>>>>
> >>>>>>> The ioreq and vmtrace resources are also allocated this way, but
> >>>>>>> they're HVM-specific. The one here being supposed to be VM-type
> >>>>>>> independent, I'm afraid such pages will be accessible by an
> >>>>>>> "owning" PV domain (it'll need to guess the MFN, but that's no
> >>>>>>> excuse).
> >>>>>>
> >>>>>> Which might be tolerable if it then can't write to the page. That would
> >>>>>> require "locking" the page r/o (from guest pov), which ought to be
> >>>>>> possible by leveraging a variant of what share_xen_page_with_guest()
> >>>>>> does: It marks pages PGT_none with a single type ref. This would mean
> >>>>>> ...
> >>>>>>
> >>>>>>>> +    if ( !pg )
> >>>>>>>> +        return -ENOMEM;
> >>>>>>>> +
> >>>>>>>> +    if ( !get_page_and_type(pg, d, PGT_writable_page) ) {
> >>>>>>
> >>>>>> ... using PGT_none here. Afaict this _should_ work, but we have no
> >>>>>> precedent of doing so in the tree, and I may be overlooking something
> >>>>>> which prevents that from working.
> >>>>>>
> >>>>>
> >>>>> I do not fully understand this. I checked share_xen_page_with_guest()
> >>>>> and I think you're talking about doing something like this for each
> >>>>> allocated page to set them r/o from a PV guest pov:
> >>>>>
> >>>>> pg->u.inuse.type_info = PGT_none;
> >>>>> pg->u.inuse.type_info |= PGT_validated | 1;
> >>>>> page_set_owner(pg, d); // not sure if this is needed
> >>>>>
> >>>>> Then, I should use PGT_none instead of PGT_writable_page in
> >>>>> get_page_and_type(). Am I right?
> >>>>
> >>>> No, if at all possible you should avoid open-coding anything. As said,
> >>>> simply passing PGT_none to get_page_and_type() ought to work (again, as
> >>>> said, unless I'm overlooking something). share_xen_page_with_guest()
> >>>> can do what it does because the page isn't owned yet. For a page with
> >>>> owner you may not fiddle with type_info in such an open-coded manner.
> >>>>
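
If I follow, the suggestion then amounts to no more than the hunk below. A
sketch only, not tested: the failure path (put_page_alloc_ref() and -ENODATA)
is my assumption, and the remainder of the function would stay as in the
original patch.

    static int stats_vcpu_alloc_mfn(struct domain *d)
    {
        struct page_info *pg;

        pg = alloc_domheap_page(d, MEMF_no_refcount);
        if ( !pg )
            return -ENOMEM;

        /*
         * Take a single type ref with PGT_none, so the owning (PV)
         * domain cannot obtain a writable mapping of the page.
         */
        if ( !get_page_and_type(pg, d, PGT_none) )
        {
            /* Assumption: drop the allocation ref again on failure. */
            put_page_alloc_ref(pg);
            return -ENODATA;
        }

        /* ... rest as in the original patch ... */
        return 0;
    }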
> >>>
> >>> Thanks. I got the following bug when passing PGT_none:
> >>>
> >>> (XEN) Bad type in validate_page 0 t=0000000000000001 c=8040000000000002
> >>> (XEN) Xen BUG at mm.c:2643
> >>
> >> The caller of the function needs to avoid the call not only for writable
> >> and shared pages, but also for this new case of PGT_none.
> > 
> > Thanks. If I understand correctly, _get_page_type() needs to avoid calling
> > validate_page() when type = PGT_none.
> 
> Yes.
> 
> > For the writable and shared pages, this
> > is avoided by setting nx |= PGT_validated. Am I right?
> 
> Well, no, I wouldn't describe it like that. The two (soon three) types not
> requiring validation simply set the flag without calling validate_page().
> 
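
In code terms, I take that to be a third case next to the existing two,
roughly as in the sketch below. This is the shape only, reusing the names
from _get_page_type() (page, type, nx, rc, preemptible), not the literal
mm.c logic:

    /* Sketch: where _get_page_type() decides whether validation is needed. */
    if ( unlikely(!(nx & PGT_validated)) )
    {
        /*
         * Writable, shared and (with this change) "none" pages are valid
         * by construction and need no validation pass.
         */
        if ( type == PGT_writable_page || type == PGT_shared_page ||
             type == PGT_none )
            nx |= PGT_validated;
        else
            rc = validate_page(page, type, preemptible);
    }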

I see, thanks. I added a check along those lines in _get_page_type() to set
the flag without calling validate_page() for the PGT_none type. However, I
think I am missing something when releasing the pages: I am triggering the
following BUG() when issuing put_page_and_type():
 
(XEN) Xen BUG at mm.c:2698

This is in devalidate_page(). I guess the call to devalidate_page() also needs
to be avoided for PGT_none. I was also wondering whether put_page_and_type() is
required at all in this case.
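
For what it's worth, my reading of _put_page_type() is that PGT_none, being
numerically 0, satisfies the "(nx & PGT_type_mask) <= PGT_l4_page_table" test
there, so dropping the last type ref funnels the page through
_put_final_page_type() / devalidate_page() as if it were a page table. If
that reading is right, something along the lines of the sketch below would
avoid it. Again the shape only, not a tested patch; it relies on unsigned
underflow to exclude PGT_none from the L1..L4 range:

    /* Sketch: in _put_page_type(), when the last type ref goes away. */
    if ( unlikely(((nx & PGT_type_mask) - PGT_l1_page_table) <=
                  (PGT_l4_page_table - PGT_l1_page_table)) &&
         likely(nx & (PGT_validated|PGT_partial)) )
    {
        /*
         * Only real page-table types (L1..L4) reach devalidation now;
         * PGT_none (0) wraps below the range and is skipped.
         */
        rc = _put_final_page_type(page, x, preemptible, ptpg);
    }

put_page_and_type() itself would still seem to be wanted on the teardown
path, to drop the type ref taken at allocation time, unless I am misreading
the refcounting.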

Matias