
Re: [Xen-devel] [PATCH v11 2/3] Differentiate IO/mem resources tracked by ioreq server



>>> On 22.01.16 at 04:20, <yu.c.zhang@xxxxxxxxxxxxxxx> wrote:
> @@ -2601,6 +2605,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>          type = (p->type == IOREQ_TYPE_PIO) ?
>                  HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>          addr = p->addr;
> +        if ( type == HVMOP_IO_RANGE_MEMORY )
> +        {
> +             ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
> +                                          &p2mt, P2M_UNSHARE);

It seems to me I asked this before: why P2M_UNSHARE instead
of just P2M_QUERY? (This could surely be fixed up while committing,
the more so since I've already done some cleanup here, but I'd like
to understand it before it goes in.)
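For reference, the lookup as the question suggests it would be just the flag changed (assuming no unsharing side effect is wanted on this path):

```
             /* query only -- don't break CoW sharing just to read the type */
             ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
                                          &p2mt, P2M_QUERY);
```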

> +             if ( p2mt == p2m_mmio_write_dm )
> +                 type = HVMOP_IO_RANGE_WP_MEM;
> +
> +             if ( ram_page )
> +                 put_page(ram_page);
> +        }
>      }
>  
>      list_for_each_entry ( s,
> @@ -2642,6 +2656,11 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>              }
>  
>              break;
> +        case HVMOP_IO_RANGE_WP_MEM:
> +            if ( rangeset_contains_singleton(r, PFN_DOWN(addr)) )
> +                return s;

Considering you've got p2m_mmio_write_dm above - can this
validly return false here?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

