
Re: [Xen-devel] possible grant table issue


  • To: Stefan Berger <stefanb@xxxxxxxxxx>
  • From: Christopher Clark <christopher.w.clark@xxxxxxxxx>
  • Date: Wed, 13 Jul 2005 02:16:10 -0700
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Wed, 13 Jul 2005 09:14:58 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>

Hi Stefan

Here's how it works:

maptrack_head is the index of the next free entry in the maptrack
array. A handle is just an index into the array.
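
For concreteness, here is a minimal sketch of the state involved. This is
not the exact Xen code: field widths are illustrative, other members of the
grant table structure are omitted, and the entry type name and array-size
constant are just for this sketch.

    /* Simplified model of the map-tracking state: an array of entries
     * plus the index of the next free slot. */
    typedef struct {
        unsigned int   ref_and_flags; /* free entry: next free index << MAPTRACK_REF_SHIFT */
        unsigned short domid;         /* in-use entry: remote domain id */
    } maptrack_entry_t;

    typedef struct {
        maptrack_entry_t *maptrack;      /* NR_MAPTRACK_ENTRIES entries */
        unsigned int      maptrack_head; /* index of the next free entry */
        unsigned int      map_count;     /* entries currently in use */
    } grant_table_t;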
                                                                                
Ignoring the shift, grant_table->maptrack array entries that are not in use
contain the index of the next free entry, forming a list and
terminating with a sentinel value. When entries are freed, the array
contents are overwritten in order to add the entry to the free list.
                                                                                
Thus the grant_table->maptrack[index] entries are initialised to an increasing
series starting from 1 at index 0, and maptrack_head = 0. The sentinel
value is the number of elements in the array.
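
In code, the free list behaves roughly like this. This is a cut-down sketch
built on the types above, not the real implementation: locking and error
handling are dropped, and the MAPTRACK_REF_SHIFT / NR_MAPTRACK_ENTRIES
values are stand-ins.

    #define MAPTRACK_REF_SHIFT   1    /* stand-in value for the sketch */
    #define NR_MAPTRACK_ENTRIES  256  /* stand-in for the real array size */

    /* Entry i initially points at i + 1; the last entry holds the
     * sentinel value NR_MAPTRACK_ENTRIES. */
    static void init_maptrack(grant_table_t *t)
    {
        int i;
        for ( i = 0; i < NR_MAPTRACK_ENTRIES; i++ )
            t->maptrack[i].ref_and_flags = (i + 1) << MAPTRACK_REF_SHIFT;
        t->maptrack_head = 0;
        t->map_count = 0;
    }

    /* Pop the head of the free list; the popped index is the handle. */
    static int get_maptrack_handle(grant_table_t *t)
    {
        unsigned int h = t->maptrack_head;
        if ( h == NR_MAPTRACK_ENTRIES )   /* sentinel: no free entries */
            return -1;
        t->maptrack_head = t->maptrack[h].ref_and_flags >> MAPTRACK_REF_SHIFT;
        t->map_count++;
        return h;
    }

    /* Push the handle back onto the head of the free list (this is the
     * code you quoted below). */
    static void put_maptrack_handle(grant_table_t *t, int handle)
    {
        t->maptrack[handle].ref_and_flags = t->maptrack_head << MAPTRACK_REF_SHIFT;
        t->maptrack_head = handle;
        t->map_count--;
    }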
                                                                                
So the sequence you describe will result in:
                                                                                
get_maptrack_handle(t)
=> maptrack_head is now 1
=> returns handle of 0
=> t->maptrack_entry[0] in use
                                                                                
get_maptrack_handle(t)
=> maptrack_head is now 2
=> returns handle of 1
=> t->maptrack_entry[1] in use
                                                                                
get_maptrack_handle(t)
=> maptrack_head is now 3
=> returns handle of 2
=> t->maptrack_entry[2] in use
                                                                                
put_maptrack_handle(t, 2)
=> t->maptrack_entry[2] points to next free index 3
=> maptrack_head is now 2
                                                                                
get_maptrack_handle(t)
=> maptrack_head is now 3
=> returns handle of 2
=> t->maptrack_entry[2] in use

So you shouldn't get a handle of 0 back when grabbing a handle
immediately after freeing handle 2; you should get handle 2 again.
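
A trivial driver for the sketch above reproduces that sequence; the
malloc/printf scaffolding is only there to make it runnable on its own:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        grant_table_t t;

        t.maptrack = malloc(NR_MAPTRACK_ENTRIES * sizeof(*t.maptrack));
        if ( t.maptrack == NULL )
            return 1;
        init_maptrack(&t);

        printf("%d\n", get_maptrack_handle(&t));  /* prints 0 */
        printf("%d\n", get_maptrack_handle(&t));  /* prints 1 */
        printf("%d\n", get_maptrack_handle(&t));  /* prints 2 */
        put_maptrack_handle(&t, 2);
        printf("%d\n", get_maptrack_handle(&t));  /* prints 2 again, not 0 */

        free(t.maptrack);
        return 0;
    }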

Christopher


On 7/12/05, Stefan Berger <stefanb@xxxxxxxxxx> wrote:
> Hello!
> 
> Attached is a patch that dumps some debugging output for the block
> interface backend. The reason I am posting this patch is the somewhat
> strange assignment of the handles returned from
> HYPERVISOR_grant_table_op. I am stopping short of calling it a bug,
> because I don't know the code well enough, but when looking at the
> hypervisor code I see a place where I doubt that it is right. In
> particular, one should try the following:
> 
> Create user domains that use the block interfaces.
> 
> 1st user domain will be assigned handle 0x0. - should be ok
> 2nd user domain will be assigned handle 0x1. - should be ok
> 3rd user domain will be assigned handle 0x2. - should be ok
> 
> (handle numbers have obviously been increasing so far)
> 
> bring down 3rd user domain - freed handle will be 0x2 - should be ok
> 
> create 3rd user domain again - will be assigned handle 0x0 - this is not
> what I would expect.
> 
> (the code that's causing this is called when handle 0x2 was freed:
> 
>     static inline void
>     put_maptrack_handle(
>         grant_table_t *t, int handle)
>     {
>         t->maptrack[handle].ref_and_flags = t->maptrack_head << MAPTRACK_REF_SHIFT;
>         t->maptrack_head = handle;
>                            ^^^^^^
>         t->map_count--;
>     }
> )
> 
> 
> 
> Now when I look at xen/common/grant_table.c I see how the handles are
> used in:
> 
> 
> static int
> __gnttab_map_grant_ref(
>     gnttab_map_grant_ref_t *uop,
>     unsigned long *va)
> {
>         [...] // much omitted
> 
>     if ( 0 <= ( rc = __gnttab_activate_grant_ref( ld, led, rd, ref,
>                                                   dev_hst_ro_flags,
>                                                   host_virt_addr,
>                                                   &frame)))
>     {
>         /*
>          * Only make the maptrack live _after_ writing the pte, in case we
>          * overwrite the same frame number, causing a maptrack walk to
>          * find it
>          */
>         ld->grant_table->maptrack[handle].domid = dom;
>                                   ^^^^^^
>         ld->grant_table->maptrack[handle].ref_and_flags
>                                   ^^^^^^
>             = (ref << MAPTRACK_REF_SHIFT) |
>               (dev_hst_ro_flags & MAPTRACK_GNTMAP_MASK);
> 
>         (void)__put_user(frame, &uop->dev_bus_addr);
> 
>         if ( dev_hst_ro_flags & GNTMAP_host_map )
>             *va = host_virt_addr;
> 
>         (void)__put_user(handle, &uop->handle);
> 
> 
> I think this newly assigned handle of '0' (for the re-created 3rd user
> domain) is overwriting a previously assigned array entry for the first
> user domain. Could someone who knows this code please have a look? All of
> this is happening in the domain where the blockdevice backend is located.
> 
>    Stefan
> 
> 
> Signed-off-by: Stefan Berger <stefanb@xxxxxxxxxx>
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

