
Re: [Xen-devel] [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages



>>> On 06.12.17 at 08:50, <chao.gao@xxxxxxxxx> wrote:
> One 4K-byte page contains at most 128 'ioreq_t' structures. In order to
> remove the vCPU count constraint imposed by a single IOREQ page, bump the
> number of IOREQ pages to 4. With this patch, multiple pages can be used as
> IOREQ pages.

In case I didn't say so before - I'm opposed to simply changing the upper limit
here. Please make sure any number of vCPU-s can be supported by the new
code (it looks like it mostly does already, so it may be mainly the description
which needs changing). If we want to impose an upper limit, this shouldn't
affect the ioreq page handling code at all.
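
To put numbers on the description (mine, not the patch's): the public ABI's
ioreq_t is 32 bytes, so one 4K page holds 4096 / 32 = 128 of them and 4 pages
would cover 512 vCPUs. As a sketch, with IOREQ_NUM_PER_PAGE as used further
down in the patch:

/*
 * Illustration only: the per-page capacity follows from the ABI, and the
 * number of pages a domain needs follows from d->max_vcpus alone, so the
 * page handling code itself needs no fixed upper bound.
 */
#define IOREQ_NUM_PER_PAGE (PAGE_SIZE / sizeof(ioreq_t)) /* 4096 / 32 = 128 */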

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -64,14 +64,24 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
>              continue; \
>          else
>  
> +/* Iterate over all ioreq pages */
> +#define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
> +    for ( (i) = 0, iorp = s->ioreq; (i) < (s)->ioreq_page_nr; (i)++, iorp++ )

You're going too far with parenthesization here: just like iorp, i is required
to be a simple identifier anyway, so it doesn't need parentheses. On the other
hand you parenthesize s only once.
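
I.e. something along these lines (a sketch of the parenthesization only,
reusing the patch's names):

/*
 * s may be an arbitrary expression, so parenthesize every use of it; i and
 * iorp have to be plain identifiers anyway, so leave them bare.
 */
#define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
    for ( i = 0, iorp = (s)->ioreq; i < (s)->ioreq_page_nr; i++, iorp++ )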

> +static ioreq_t *get_ioreq_fallible(struct hvm_ioreq_server *s, struct vcpu *v)

What is "fallible"? And why such a separate wrapper anyway? But I guess a lot
of this will need to change anyway with Paul's recent changes. I'm therefore not
going to look in close detail at any of this.

>  static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>  {
> -    hvm_unmap_ioreq_gfn(s, &s->ioreq);
> +    int i;

Taking this as an example - please use unsigned int in all cases where the
value can't go negative.
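
For this particular function that would mean something like the below (the
body is reconstructed from the quoted line plus the existing bufioreq
unmapping, so details may differ from the actual patch):

static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
{
    unsigned int i;               /* loop index can never go negative */
    struct hvm_ioreq_page *iorp;

    FOR_EACH_IOREQ_PAGE(s, i, iorp)
        hvm_unmap_ioreq_gfn(s, iorp);
    hvm_unmap_ioreq_gfn(s, &s->bufioreq);
}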

> @@ -688,8 +741,15 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
>      INIT_LIST_HEAD(&s->ioreq_vcpu_list);
>      spin_lock_init(&s->bufioreq_lock);
>  
> -    s->ioreq.gfn = INVALID_GFN;
> +    FOR_EACH_IOREQ_PAGE(s, i, iorp)
> +        iorp->gfn = INVALID_GFN;
>      s->bufioreq.gfn = INVALID_GFN;
> +    s->ioreq_page_nr = (d->max_vcpus + IOREQ_NUM_PER_PAGE - 1) /
> +                       IOREQ_NUM_PER_PAGE;

DIV_ROUND_UP() - please don't open-code things.
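
I.e., with the names from the quoted hunk, simply:

    s->ioreq_page_nr = DIV_ROUND_UP(d->max_vcpus, IOREQ_NUM_PER_PAGE);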

> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -279,6 +279,12 @@
>  #define XEN_HVM_MCA_CAP_LMCE   (xen_mk_ullong(1) << 0)
>  #define XEN_HVM_MCA_CAP_MASK   XEN_HVM_MCA_CAP_LMCE
>  
> -#define HVM_NR_PARAMS 39
> +/*
> + * Number of pages that are reserved for default IOREQ server. The base PFN
> + * is set via HVM_PARAM_IOREQ_PFN.
> + */
> +#define HVM_PARAM_IOREQ_PAGES 39

Why is this needed? It can be derived from the vCPU count permitted for the
domain.

Jan


