
Re: [Xen-devel] [PATCH 8/8] xen: Switch parameter in get_page_from_gfn to use typesafe gfn



> -----Original Message-----
> From: Julien Grall [mailto:julien.grall@xxxxxxx]
> Sent: 06 November 2018 19:15
> To: sstabellini@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: Julien Grall <julien.grall@xxxxxxx>; Andrew Cooper
> <Andrew.Cooper3@xxxxxxxxxx>; George Dunlap <George.Dunlap@xxxxxxxxxx>; Ian
> Jackson <Ian.Jackson@xxxxxxxxxx>; Jan Beulich <jbeulich@xxxxxxxx>; Konrad
> Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>; Tim (Xen.org) <tim@xxxxxxx>; Wei
> Liu <wei.liu2@xxxxxxxxxx>; Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>;
> Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>; Brian Woods
> <brian.woods@xxxxxxx>; Paul Durrant <Paul.Durrant@xxxxxxxxxx>; Jun
> Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian <kevin.tian@xxxxxxxxx>;
> Julien Grall <julien.grall@xxxxxxx>
> Subject: [PATCH 8/8] xen: Switch parameter in get_page_from_gfn to use typesafe gfn
> 
> No functional change intended.
> 
> Only reasonable clean-ups are done in this patch. The rest will use _gfn
> for the time being.
> 
> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
> ---
>  xen/arch/arm/guestcopy.c             |  2 +-
>  xen/arch/arm/mm.c                    |  2 +-
>  xen/arch/x86/cpu/vpmu.c              |  2 +-
>  xen/arch/x86/domain.c                | 12 ++++++------
>  xen/arch/x86/domctl.c                |  6 +++---
>  xen/arch/x86/hvm/dm.c                |  2 +-
>  xen/arch/x86/hvm/domain.c            |  2 +-
>  xen/arch/x86/hvm/hvm.c               |  9 +++++----
>  xen/arch/x86/hvm/svm/svm.c           |  8 ++++----
>  xen/arch/x86/hvm/viridian/viridian.c | 24 ++++++++++++------------
>  xen/arch/x86/hvm/vmx/vmx.c           |  4 ++--
>  xen/arch/x86/hvm/vmx/vvmx.c          | 12 ++++++------
>  xen/arch/x86/mm.c                    | 24 ++++++++++++++----------
>  xen/arch/x86/mm/p2m.c                |  2 +-
>  xen/arch/x86/mm/shadow/hvm.c         |  6 +++---
>  xen/arch/x86/physdev.c               |  3 ++-
>  xen/arch/x86/pv/descriptor-tables.c  |  5 ++---
>  xen/arch/x86/pv/emul-priv-op.c       |  6 +++---
>  xen/arch/x86/pv/mm.c                 |  2 +-
>  xen/arch/x86/traps.c                 | 11 ++++++-----
>  xen/common/domain.c                  |  2 +-
>  xen/common/event_fifo.c              | 12 ++++++------
>  xen/common/memory.c                  |  4 ++--
>  xen/common/tmem_xen.c                |  2 +-
>  xen/include/asm-arm/p2m.h            |  6 +++---
>  xen/include/asm-x86/p2m.h            | 11 +++++++----
>  26 files changed, 95 insertions(+), 86 deletions(-)
> 
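
(Aside for anyone not steeped in the typesafe wrappers: gfn_t comes from
Xen's TYPE_SAFE() machinery, which in debug builds boils down to roughly
the sketch below. This is simplified; the real definitions are generated
by TYPE_SAFE(unsigned long, gfn), and release builds collapse gfn_t to a
plain integer typedef:

    /* Simplified sketch of Xen's typesafe gfn wrapper. */
    typedef struct { unsigned long gfn; } gfn_t;
    #define _gfn(x)   ((gfn_t){ .gfn = (x) })  /* wrap a raw frame number */
    #define gfn_x(g)  ((g).gfn)                /* unwrap it again */

So after this patch, callers of get_page_from_gfn() must pass a gfn_t
rather than a raw unsigned long, and accidentally passing an address
where a frame number is expected no longer compiles in debug builds.)
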
[snip]
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 5d00256aaa..a7419bd444 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -317,7 +317,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
>      {
>          if ( c->cr0 & X86_CR0_PG )
>          {
> -            page = get_page_from_gfn(v->domain, c->cr3 >> PAGE_SHIFT,
> +            page = get_page_from_gfn(v->domain, gaddr_to_gfn(c->cr3),
>                                       NULL, P2M_ALLOC);
>              if ( !page )
>              {
> @@ -2412,9 +2412,9 @@ nsvm_get_nvmcb_page(struct vcpu *v, uint64_t vmcbaddr)
>          return NULL;
> 
>      /* Need to translate L1-GPA to MPA */
> -    page = get_page_from_gfn(v->domain,
> -                            nv->nv_vvmcxaddr >> PAGE_SHIFT,
> -                            &p2mt, P2M_ALLOC | P2M_UNSHARE);
> +    page = get_page_from_gfn(v->domain,
> +                             gaddr_to_gfn(nv->nv_vvmcxaddr >> PAGE_SHIFT),

Don't you need to lose the '>> PAGE_SHIFT' now? gaddr_to_gfn() already
does the address-to-frame-number conversion, so as written nv_vvmcxaddr
ends up shifted twice.
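
Presumably this should mirror the svm_vmcb_restore() hunk above, i.e.
something like:

    page = get_page_from_gfn(v->domain, gaddr_to_gfn(nv->nv_vvmcxaddr),
                             &p2mt, P2M_ALLOC | P2M_UNSHARE);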

  Paul

> +                             &p2mt, P2M_ALLOC | P2M_UNSHARE);
>      if ( !page )
>          return NULL;
> 
