
Re: [Xen-devel] [PATCH v6 16/16] x86/hvm: track large memory mapped accesses by buffer offset



>>> On 03.07.15 at 18:25, <paul.durrant@xxxxxxxxxx> wrote:
> @@ -635,13 +605,41 @@ static int hvmemul_phys_mmio_access(
>      return rc;
>  }
>  
> +static struct hvm_mmio_cache *hvmemul_find_mmio_cache(
> +    struct hvm_vcpu_io *vio, unsigned long gla, uint8_t dir)
> +{
> +    unsigned int i;
> +    struct hvm_mmio_cache *cache;
> +
> +    for ( i = 0; i < vio->mmio_cache_count; i ++ )
> +    {
> +        cache = &vio->mmio_cache[i];
> +
> +        if ( gla == cache->gla &&
> +             dir == cache->dir )
> +            return cache;
> +    }
> +
> +    i = vio->mmio_cache_count++;
> +    BUG_ON(i == ARRAY_SIZE(vio->mmio_cache));
> +
> +    cache = &vio->mmio_cache[i];
> +    memset(cache, 0, sizeof (*cache));
> +
> +    cache->gla = gla;
> +    cache->dir = dir;
> +
> +    return cache;
> +}

There's still a weakness here (and it would equally apply if you used physical
addresses): multiple reads to the same address, within a single instruction's
emulation, may not return the same result. I don't think this needs to be
addressed here, but adding a comment clarifying that this case isn't handled
correctly would help future readers understand the state of affairs more easily.
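
Perhaps something along these lines (exact wording at your discretion, of course):

    /*
     * Note that the case of multiple reads from the same address
     * (within a single instruction's emulation) returning different
     * results isn't handled correctly here.
     */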

As to the BUG_ON() - I think this would be better as a domain_crash(),
just in case we overlooked some exotic instruction accessing more than
3 memory locations.
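
Something along these lines, perhaps (the caller would then of course need
to cope with a NULL return, e.g. by failing the access with
X86EMUL_UNHANDLEABLE):

    i = vio->mmio_cache_count;
    if ( i == ARRAY_SIZE(vio->mmio_cache) )
    {
        /* Shouldn't happen, but don't bring down the host if it does. */
        domain_crash(current->domain);
        return NULL;
    }
    vio->mmio_cache_count = i + 1;

    cache = &vio->mmio_cache[i];
    memset(cache, 0, sizeof(*cache));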

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

