Re: [PATCH v2 5/5] x86/HVM: drop redundant access splitting



On Tue, Oct 01, 2024 at 10:50:25AM +0200, Jan Beulich wrote:
> With all paths into hvmemul_linear_mmio_access() coming through
> linear_{read,write}(), there's no need anymore to split accesses at
> page boundaries there. Leave an assertion, though.
> 
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> ---
> v2: Replace ASSERT() by a safer construct.
> 
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1084,7 +1084,7 @@ static int hvmemul_linear_mmio_access(
>  {
>      struct hvm_vcpu_io *hvio = &current->arch.hvm.hvm_io;
>      unsigned long offset = gla & ~PAGE_MASK;
> -    unsigned int chunk, buffer_offset = gla - start;
> +    unsigned int buffer_offset = gla - start;
>      struct hvm_mmio_cache *cache = hvmemul_find_mmio_cache(hvio, start, dir,
>                                                             buffer_offset);
>      paddr_t gpa;
> @@ -1094,13 +1094,17 @@ static int hvmemul_linear_mmio_access(
>      if ( cache == NULL )
>          return X86EMUL_UNHANDLEABLE;
>  
> -    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
> +    if ( size > PAGE_SIZE - offset )

FWIW, I find this easier to read as `size + offset > PAGE_SIZE`, which
is the same condition used in linear_{read,write}().
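
Both spellings agree as long as offset < PAGE_SIZE (guaranteed here by
offset = gla & ~PAGE_MASK) and size + offset can't wrap, which holds
since the callers already cap size at a page's worth.  A quick
standalone sketch, hypothetical and not patch code:

  #include <assert.h>

  #define PAGE_SIZE 4096u

  /* Hypothetical helper, not patch code: under the preconditions
   * above, the two spellings of the page-crossing test agree. */
  static int crosses_page(unsigned int size, unsigned int offset)
  {
      assert(offset < PAGE_SIZE);
      assert((size > PAGE_SIZE - offset) == (size + offset > PAGE_SIZE));
      return size + offset > PAGE_SIZE;
  }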

Preferably with that adjusted:

Acked-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>

Thanks, Roger.
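
For context: as the description notes, all paths reach
hvmemul_linear_mmio_access() through linear_{read,write}(), which
already carve an access into per-page chunks; that is what makes the
inner split redundant.  A rough sketch of that caller-side pattern,
hypothetical and not the actual Xen code:

  #include <stdint.h>

  #define PAGE_SIZE 4096u

  /* Hypothetical per-chunk handler standing in for
   * hvmemul_linear_mmio_access(): it may now assume the access does
   * not cross a page boundary. */
  typedef int (*chunk_fn)(uint64_t gla, void *buf, unsigned int size);

  /* Rough sketch of caller-side splitting, in the spirit of
   * linear_{read,write}() (not the actual implementation). */
  static int split_access(uint64_t gla, void *buf, unsigned int size,
                          chunk_fn fn)
  {
      while ( size )
      {
          unsigned int offset = gla & (PAGE_SIZE - 1);
          unsigned int chunk = PAGE_SIZE - offset;
          int rc;

          if ( chunk > size )
              chunk = size;

          /* chunk + offset <= PAGE_SIZE: no page crossing possible. */
          rc = fn(gla, buf, chunk);
          if ( rc )
              return rc;

          gla += chunk;
          buf = (uint8_t *)buf + chunk;
          size -= chunk;
      }

      return 0;
  }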