
Re: [Xen-devel] [PATCH RFC v1 56/74] xen/pvshim: add grant table operations



On Mon, Jan 08, 2018 at 10:19:39AM -0700, Jan Beulich wrote:
> >>> On 04.01.18 at 14:06, <wei.liu2@xxxxxxxxxx> wrote:
> > @@ -30,11 +31,17 @@
> >  #include <asm/guest.h>
> >  #include <asm/pv/mm.h>
> >  
> > +#include <compat/grant_table.h>
> 
> Interesting: The event channel patch gave me the impression that
> it is not intended to deal with 32-bit guests.

AFAICT the event channel code didn't need any explicit compat handling.
That's not the case with grant tables, however: some of the structures
embed guest handles, whose size differs between 32-bit and 64-bit
guests.
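
For illustration, a simplified sketch of the two layouts (field names
from the public headers, types approximated; not the verbatim
definitions):

    /* Native (64-bit guest) layout: the embedded guest handle
     * is 8 bytes wide. */
    struct gnttab_setup_table {
        domid_t  dom;                            /* IN  */
        uint32_t nr_frames;                      /* IN  */
        int16_t  status;                         /* OUT */
        XEN_GUEST_HANDLE(xen_pfn_t) frame_list;
    };

    /* Compat (32-bit guest) layout: the handle shrinks to 4 bytes,
     * so the two structs cannot be used interchangeably and the
     * shim has to translate. */
    struct compat_gnttab_setup_table {
        domid_t  dom;
        uint32_t nr_frames;
        int16_t  status;
        COMPAT_HANDLE(compat_pfn_t) frame_list;
    };

The event channel structures, by contrast, only contain fixed-width
integers, so the same layout works for both guest widths and no
translation is needed.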

> > @@ -360,6 +367,173 @@ void pv_shim_inject_evtchn(unsigned int port)
> >      }
> >  }
> >  
> > +long pv_shim_grant_table_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) uop,
> > +                            unsigned int count, bool compat)
> > +{
> > +    struct domain *d = current->domain;
> > +    long rc = 0;
> > +
> > +    if ( count != 1 )
> > +        return -EINVAL;
> > +
> > +    switch ( cmd )
> > +    {
> > +    case GNTTABOP_setup_table:
> > +    {
> > +        struct gnttab_setup_table nat;
> > +        struct compat_gnttab_setup_table cmp;
> > +        unsigned int i;
> > +
> > +        if ( unlikely(compat ? copy_from_guest(&cmp, uop, 1)
> > +                             : copy_from_guest(&nat, uop, 1)) ||
> > +             unlikely(compat ? !compat_handle_okay(cmp.frame_list,
> > +                                                   cmp.nr_frames)
> > +                             : !guest_handle_okay(nat.frame_list,
> > +                                                  nat.nr_frames)) )
> > +        {
> > +            rc = -EFAULT;
> > +            break;
> > +        }
> > +        if ( compat )
> > +#define XLAT_gnttab_setup_table_HNDL_frame_list(d, s)
> > +                XLAT_gnttab_setup_table(&nat, &cmp);
> > +#undef XLAT_gnttab_setup_table_HNDL_frame_list
> > +
> > +        nat.status = GNTST_okay;
> > +
> > +        spin_lock(&grant_lock);
> > +        if ( !nr_grant_list )
> > +        {
> > +            struct gnttab_query_size query_size = {
> > +                .dom = DOMID_SELF,
> > +            };
> > +
> > +            rc = xen_hypercall_grant_table_op(GNTTABOP_query_size,
> > +                                              &query_size, 1);
> > +            if ( rc )
> > +            {
> > +                spin_unlock(&grant_lock);
> > +                break;
> > +            }
> > +
> > +            ASSERT(!grant_frames);
> > +            grant_frames = xzalloc_array(unsigned long,
> > +                                         query_size.max_nr_frames);
> 
> Hmm, such runtime allocations (especially when the amount can
> be large) are a fundamental problem. I think this needs setting
> up before the guest is started.

The shim already sets aside some memory for its own usage. The
allocation could be moved to some shim startup function, but it will
likely have to be freed and allocated again on migration, since the
number of grant table frames can change when migrating from one host
to another.
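
Roughly along these lines (a rough sketch, helper name hypothetical;
it reuses the query-size call and the grant_frames/nr_grant_list state
from the patch):

    /* Hypothetical setup helper, run before the guest starts and
     * again after migration, since max_nr_frames can differ between
     * hosts.  Replaces the lazy allocation in the hypercall path. */
    static int pv_shim_setup_grant_frames(void)
    {
        struct gnttab_query_size qs = { .dom = DOMID_SELF };
        int rc;

        rc = xen_hypercall_grant_table_op(GNTTABOP_query_size, &qs, 1);
        if ( rc )
            return rc;

        spin_lock(&grant_lock);
        xfree(grant_frames);                  /* drop any stale state */
        grant_frames = xzalloc_array(unsigned long, qs.max_nr_frames);
        nr_grant_list = grant_frames ? qs.max_nr_frames : 0;
        spin_unlock(&grant_lock);

        return grant_frames ? 0 : -ENOMEM;
    }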

> > +    {
> > +        struct gnttab_query_size op;
> > +        int rc;
> > +
> > +        if ( unlikely(copy_from_guest(&op, uop, 1)) )
> > +        {
> > +            rc = -EFAULT;
> > +            break;
> > +        }
> > +
> > +        rc = xen_hypercall_grant_table_op(GNTTABOP_query_size, &op, count);
> > +        if ( rc )
> > +            break;
> > +
> > +        if ( copy_to_guest(uop, &op, 1) )
> 
> __copy_to_guest() (assuming this copying in and out is necessary
> in the first place).

I guess the copy could be avoided by passing uop directly to the
hypercall instead of bouncing it through op?
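
I.e. something like this (untested sketch; it assumes the PV guest's
buffer is directly accessible from the shim's context, and it pokes at
the handle's raw pointer for brevity):

    case GNTTABOP_query_size:
        /* Forward the guest buffer straight to L0 Xen, avoiding the
         * copy in/out.  Only valid if the guest mapping is reachable
         * from the shim; error handling elided. */
        rc = xen_hypercall_grant_table_op(GNTTABOP_query_size,
                                          uop.p, count);
        break;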

> > +        {
> > +            rc = -EFAULT;
> > +            break;
> > +        }
> > +
> > +        break;
> > +    }
> > +    default:
> > +        rc = -ENOSYS;
> 
> -EOPNOTSUPP again please. Plus - what about other sub-ops?

They are not yet implemented. I think this is the bare minimum needed
to boot a PV DomU; we can expand it later on.

Thanks, Roger.
