
Re: [Xen-devel] [PATCH v5 1/1] Add mmio_hole_size



>>> On 01.10.14 at 18:33, <dslutz@xxxxxxxxxxx> wrote:
> On 09/30/14 09:15, George Dunlap wrote:
>> On Thu, Sep 11, 2014 at 5:20 PM, Don Slutz <dslutz@xxxxxxxxxxx> wrote:
>>> @@ -237,26 +243,49 @@ void pci_setup(void)
>>>           pci_writew(devfn, PCI_COMMAND, cmd);
>>>       }
>>>
>>> -    /*
>>> -     * At the moment qemu-xen can't deal with relocated memory regions.
>>> -     * It's too close to the release to make a proper fix; for now,
>>> -     * only allow the MMIO hole to grow large enough to move guest memory
>>> -     * if we're running qemu-traditional.  Items that don't fit will be
>>> -     * relocated into the 64-bit address space.
>>> -     *
>>> -     * This loop now does the following:
>>> -     * - If allow_memory_relocate, increase the MMIO hole until it's
>>> -     *   big enough, or until it's 2GiB
>>> -     * - If !allow_memory_relocate, increase the MMIO hole until it's
>>> -     *   big enough, or until it's 2GiB, or until it overlaps guest
>>> -     *   memory
>>> -     */
>>> -    while ( (mmio_total > (pci_mem_end - pci_mem_start))
>>> -            && ((pci_mem_start << 1) != 0)
>>> -            && (allow_memory_relocate
>>> -                || (((pci_mem_start << 1) >> PAGE_SHIFT)
>>> -                    >= hvm_info->low_mem_pgend)) )
>>> -        pci_mem_start <<= 1;
>>> +    if ( mmio_hole_size )
>>> +    {
>>> +        uint64_t max_ram_below_4g = (1ULL << 32) - mmio_hole_size;
>>> +
>>> +        if ( max_ram_below_4g > HVM_BELOW_4G_MMIO_START )
>>> +        {
>>> +            printf("max_ram_below_4g=0x"PRIllx
>>> +                   " too big for mmio_hole_size=0x"PRIllx
>>> +                   " has been ignored.\n",
>>> +                   PRIllx_arg(max_ram_below_4g),
>>> +                   PRIllx_arg(mmio_hole_size));
>>> +        }
>>> +        else
>>> +        {
>>> +            pci_mem_start = max_ram_below_4g;
>>> +            printf("pci_mem_start=0x%lx (was 0x%x) for mmio_hole_size=%lu\n",
>>> +                   pci_mem_start, HVM_BELOW_4G_MMIO_START,
>>> +                   (long)mmio_hole_size);
>>> +        }
>>> +    }
>>> +    else
>>> +    {
>>> +        /*
>>> +         * At the moment qemu-xen can't deal with relocated memory regions.
>>> +         * It's too close to the release to make a proper fix; for now,
>>> +         * only allow the MMIO hole to grow large enough to move guest memory
>>> +         * if we're running qemu-traditional.  Items that don't fit will be
>>> +         * relocated into the 64-bit address space.
>>> +         *
>>> +         * This loop now does the following:
>>> +         * - If allow_memory_relocate, increase the MMIO hole until it's
>>> +         *   big enough, or until it's 2GiB
>>> +         * - If !allow_memory_relocate, increase the MMIO hole until it's
>>> +         *   big enough, or until it's 2GiB, or until it overlaps guest
>>> +         *   memory
>>> +         */
>>> +        while ( (mmio_total > (pci_mem_end - pci_mem_start))
>>> +                && ((pci_mem_start << 1) != 0)
>>> +                && (allow_memory_relocate
>>> +                    || (((pci_mem_start << 1) >> PAGE_SHIFT)
>>> +                        >= hvm_info->low_mem_pgend)) )
>>> +            pci_mem_start <<= 1;
>>> +    }
>> I don't think these need to be disjoint.  There's no reason you
>> couldn't use the configured size as the starting default, and then
>> allow the code to make it bigger for guests which allow that.
> 
> The support for changing mmio_hole_size is still "missing" from QEMU,
> so this code only works with qemu-traditional.  I think Jan said
> back on v1 or v2 (sorry, e-mail issues) that since the hole size is
> explicitly configured, the automatic resizing code should be disabled.

Because it didn't seem like you would want to properly take care
of both cases together (iirc the fact that the configured hole size
could be other than a power of 2 introduced a conflict with the
current resizing logic). I.e. doing one or the other is a suitable
first step imo, but with room for improvement.
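
Just to illustrate what I mean (a rough, untested sketch rather than a
concrete proposal - the helper name and parameters below are made up,
only the constants mirror hvmloader's): seed the hole from
mmio_hole_size when it is set, and then let it keep growing on demand
by doubling the hole size itself with an explicit 2GiB cap, so the
logic no longer relies on pci_mem_start being a power of two the way
the current shift-based loop does.

#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

#define PAGE_SHIFT              12
#define HVM_BELOW_4G_MMIO_START 0xf0000000u   /* default 256MiB hole */

/*
 * Hypothetical helper, not the patch's code: pick the low MMIO hole
 * start, seeding it from a configured hole size when one is given and
 * growing it on demand afterwards, instead of treating the two cases
 * as mutually exclusive.
 */
static uint32_t choose_pci_mem_start(uint64_t mmio_hole_size,
                                     uint64_t mmio_total,
                                     uint32_t pci_mem_end,
                                     int allow_memory_relocate,
                                     uint64_t low_mem_pgend)
{
    uint64_t hole = mmio_hole_size ? mmio_hole_size
                                   : (1ULL << 32) - HVM_BELOW_4G_MMIO_START;
    uint32_t pci_mem_start = (uint32_t)((1ULL << 32) - hole);

    while ( mmio_total > (uint64_t)(pci_mem_end - pci_mem_start) )
    {
        uint64_t new_hole  = hole * 2;
        uint64_t new_start = (1ULL << 32) - new_hole;

        if ( new_hole > (1ULL << 31) )                    /* cap at 2GiB */
            break;
        if ( !allow_memory_relocate &&
             (new_start >> PAGE_SHIFT) < low_mem_pgend )  /* would hit RAM */
            break;

        hole = new_hole;
        pci_mem_start = (uint32_t)new_start;
    }

    return pci_mem_start;
}

int main(void)
{
    /* Example: 768MiB configured hole, 1GiB of BARs to place. */
    uint32_t start = choose_pci_mem_start(768ULL << 20, 1ULL << 30,
                                          0xfc000000u, 1, 0x80000ULL);
    printf("pci_mem_start = 0x%08"PRIx32"\n", start);
    return 0;
}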

>>> --- a/tools/libxc/xenguest.h
>>> +++ b/tools/libxc/xenguest.h
>>> @@ -244,12 +244,23 @@ struct xc_hvm_build_args {
>>>   int xc_hvm_build(xc_interface *xch, uint32_t domid,
>>>                    struct xc_hvm_build_args *hvm_args);
>>>
>>> +int xc_hvm_build_with_hole(xc_interface *xch, uint32_t domid,
>>> +                           struct xc_hvm_build_args *args,
>>> +                           uint64_t mmio_hole_size);
>>> +
>>>   int xc_hvm_build_target_mem(xc_interface *xch,
>>>                               uint32_t domid,
>>>                               int memsize,
>>>                               int target,
>>>                               const char *image_name);
>>>
>>> +int xc_hvm_build_target_mem_with_hole(xc_interface *xch,
>>> +                                      uint32_t domid,
>>> +                                      int memsize,
>>> +                                      int target,
>>> +                                      const char *image_name,
>>> +                                      uint64_t mmio_hole_size);
>> Why on earth do we need all of these extra functions?  Particularly
>> ones like xc_hvm_build_target_mem_with_hole(), which isn't even called
>> by anyone, AFAICT?
> 
> int xc_hvm_build_target_mem(xc_interface *xch,
>                             uint32_t domid,
>                             int memsize,
>                             int target,
>                             const char *image_name)
> {
>      return xc_hvm_build_target_mem_with_hole(xch, domid, memsize, target,
>                                               image_name, 0);
> }
> 
> So it is called...

You're kidding, aren't you? If this is the only caller, we can very well
do without the ..._with_hole() one.
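
For comparison, a minimal sketch (illustrative only - struct and field
names are made up and the structure is cut down, this is not the
actual libxc interface) of how the hole size could instead ride along
in the existing build arguments, so that no new entry points are
needed at all:

#include <stdint.h>

/*
 * Sketch of an alternative: carry the requested hole size inside the
 * existing build arguments and let a zero value mean "keep today's
 * default behaviour".
 */
struct xc_hvm_build_args_sketch {
    uint64_t mem_size;          /* guest memory in bytes */
    uint64_t mem_target;        /* balloon target in bytes */
    uint64_t mmio_hole_size;    /* low MMIO hole in bytes; 0 = default */
    const char *image_file_name;
};

/* A caller wanting a 1GiB hole would then just set one field: */
void example_caller(void)
{
    struct xc_hvm_build_args_sketch args = {
        .mem_size        = 512ULL << 20,
        .mem_target      = 512ULL << 20,
        .mmio_hole_size  = 1ULL << 30,          /* 1GiB */
        .image_file_name = "hvmloader",
    };
    (void)args;   /* would be handed to the build call as today */
}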

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel