
Re: [Xen-devel] [RFC PATCH 7/8]: PVH: grant changes



On Thu, 2012-08-16 at 02:06 +0100, Mukesh Rathor wrote:
> diff --git a/drivers/xen/grant-table.c b/drivers/xen/grant-table.c
> index 0bfc1ef..2430133 100644
> --- a/drivers/xen/grant-table.c
> +++ b/drivers/xen/grant-table.c
> @@ -974,14 +974,19 @@ static void gnttab_unmap_frames_v2(void)
>  static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>  {
>       struct gnttab_setup_table setup;
> -     unsigned long *frames;
> +     unsigned long *frames, start_gpfn;
>       unsigned int nr_gframes = end_idx + 1;
>       int rc;
>  
> -     if (xen_hvm_domain()) {
> +     if (xen_hvm_domain() || xen_pvh_domain()) {
>               struct xen_add_to_physmap xatp;
>               unsigned int i = end_idx;
>               rc = 0;
> +
> +             if (xen_hvm_domain())
> +                     start_gpfn = xen_hvm_resume_frames >> PAGE_SHIFT;
> +             else
> +                     start_gpfn = virt_to_pfn(gnttab_shared.addr);

I wonder why the HVM case doesn't already use
virt_to_pfn(gnttab_shared.addr) since it appears to set
gnttab_shared.addr:
                gnttab_shared.addr = ioremap(xen_hvm_resume_frames,
                                                PAGE_SIZE * max_nr_gframes);

Perhaps the result of ioremap isn't amenable to virt_to_pfn (I can never
remember off hand whether that only works for direct-mapped addresses).
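
For what it's worth, the distinction I'm half-remembering is roughly the
following (purely my own sketch, not part of the patch; grant_area_pfn is
a made-up name):

        /* virt_to_pfn(v) boils down to PFN_DOWN(__pa(v)), and __pa() is only
         * meaningful for addresses in the kernel's direct (linear) mapping.
         * ioremap() returns an address in the vmalloc/ioremap area, so
         * feeding that to virt_to_pfn() would yield a bogus pfn. */
        static unsigned long grant_area_pfn(void *addr)
        {
                BUG_ON(!virt_addr_valid(addr)); /* false for ioremap()ed addresses */
                return virt_to_pfn(addr);
        }

So if gnttab_shared.addr came from ioremap() in the HVM case, reusing
virt_to_pfn() there presumably wouldn't be safe.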

>               /*
>                * Loop backwards, so that the first hypercall has the largest
>                * index, ensuring that the table will grow only once.
> @@ -990,7 +995,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
>                       xatp.domid = DOMID_SELF;
>                       xatp.idx = i;
>                       xatp.space = XENMAPSPACE_grant_table;
> -                     xatp.gpfn = (xen_hvm_resume_frames >> PAGE_SHIFT) + i;
> +                     xatp.gpfn = start_gpfn + i;
>                       rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp);
>                       if (rc != 0) {
>                               printk(KERN_WARNING
> @@ -1053,7 +1058,7 @@ static void gnttab_request_version(void)
>       int rc;
>       struct gnttab_set_version gsv;
>  
> -     if (xen_hvm_domain())
> +     if (xen_hvm_domain() || xen_pvh_domain())

Does something stop PVH from using v2?

Is it a hypervisor implementation thing? If so, it might be better for
GNTTABOP_set_version to explicitly fail the set attempt (the same may
well apply to HVM, for all I know).
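
To be concrete, the alternative I have in mind is roughly this (only a
sketch on my part, assuming the hypervisor would fail or downgrade a v2
request it can't honour):

        static void gnttab_request_version(void)
        {
                int rc;
                struct gnttab_set_version gsv;

                /* Always ask for v2 and let the hypervisor refuse, rather than
                 * hard-coding v1 for HVM/PVH in the guest. */
                gsv.version = 2;
                rc = HYPERVISOR_grant_table_op(GNTTABOP_set_version, &gsv, 1);
                if (rc < 0 || gsv.version != 2) {
                        /* v2 refused (or downgraded): fall back to v1. */
                        gsv.version = 1;
                        HYPERVISOR_grant_table_op(GNTTABOP_set_version, &gsv, 1);
                }
                /* ... version-specific setup continues as in the existing code ... */
        }

That way the policy lives in one place in the hypervisor instead of being
duplicated per guest type here.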

>               gsv.version = 1;
>       else
>               gsv.version = 2;
> @@ -1081,13 +1086,24 @@ static void gnttab_request_version(void)
>  int gnttab_resume(void)
>  {
>       unsigned int max_nr_gframes;
> +     char *kmsg="Failed to kmalloc pages for pv in hvm grant frames\n";
>  
>       gnttab_request_version();
>       max_nr_gframes = gnttab_max_grant_frames();
>       if (max_nr_gframes < nr_grant_frames)
>               return -ENOSYS;
>  
> -     if (xen_pv_domain())
> +     /* PVH note: xen will free existing kmalloc'd mfn in
> +      * XENMEM_add_to_physmap */
> +     if (xen_pvh_domain() && !gnttab_shared.addr) {
> +             gnttab_shared.addr =
> +                     kmalloc(max_nr_gframes * PAGE_SIZE, GFP_KERNEL);
> +             if ( !gnttab_shared.addr ) {
> +                     printk(KERN_WARNING "%s", kmsg);

Why this construct instead of just the string literal?
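
i.e. just (illustrating what I mean):

                if (!gnttab_shared.addr) {
                        printk(KERN_WARNING
                               "Failed to kmalloc pages for pv in hvm grant frames\n");
                        return -ENOMEM;
                }

which also does away with the kmsg variable entirely.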

> +                     return -ENOMEM;
> +             }
> +     }
> +     if (xen_pv_domain() || xen_pvh_domain())
>               return gnttab_map(0, nr_grant_frames - 1);
>  
>       if (gnttab_shared.addr == NULL) {





 

