
Re: [Xen-devel] [RFC Patch] Support for making an E820 PCI hole in toolstack (xl + xm)


  • To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>, <stefano.stabellini@xxxxxxxxxxxxx>, <gianni.tedesco@xxxxxxxxxx>
  • From: Keir Fraser <keir@xxxxxxx>
  • Date: Sat, 13 Nov 2010 07:40:30 +0000
  • Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, bruce.edge@xxxxxxxxx
  • Delivery-date: Fri, 12 Nov 2010 23:41:58 -0800
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-topic: [Xen-devel] [RFC Patch] Support for making an E820 PCI hole in toolstack (xl + xm)

Why doesn't the guest punch its own hole, by relocating RAM above 4GB?
That's what all HVM guests do (in hvmloader).

 -- Keir

On 12/11/2010 23:08, "Konrad Rzeszutek Wilk" <konrad.wilk@xxxxxxxxxx> wrote:

> Hey guys,
> 
> Attached is an RFC patch for making a PCI hole in PV guests. This allows
> PV guests(*) with 4GB or more of memory to work properly, with or without
> PCI passthrough cards.
> 
> Previously the Linux kernel could not allocate the PCI region below the
> 4GB boundary because that range was entirely System RAM, and you would see
> this:
> 
> [    0.000000] PM: Registered nosave memory: 00000000000a0000 - 0000000000100000
> [    0.000000] PCI: Warning: Cannot find a gap in the 32bit address range
> [    0.000000] PCI: Unassigned devices with 32bit resource registers may break!
> [    0.000000] Allocating PCI resources starting at 100100000 (gap: 100100000:400000)
> 
> 
> This patchset punches a PCI hole in the E820 map and fills the P2M
> accordingly, so that you now see (*):
> [    0.000000] Allocating PCI resources starting at a0000000 (gap: a0000000:60000000)
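For illustration only, the E820 split the patchset produces can be sketched in a few lines of standalone C. This is a hypothetical miniature, not the actual libxc code; `struct mini_e820` and `make_pci_hole` are names made up for the sketch:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

/* Hypothetical miniature e820 entry: just an address and a size. */
struct mini_e820 { uint64_t addr, size; };

/* Split total_pages of guest RAM around a hole that starts at
 * hole_start_pfn and ends at the 4GB boundary (PFN 0x100000); RAM that
 * would have fallen in the hole reappears at 4GB. Returns the number of
 * map entries produced. */
static int make_pci_hole(struct mini_e820 map[2],
                         uint64_t total_pages, uint64_t hole_start_pfn)
{
    if ( !hole_start_pfn || total_pages <= hole_start_pfn )
    {
        /* No hole requested, or the guest fits entirely below it. */
        map[0].addr = 0;
        map[0].size = total_pages << PAGE_SHIFT;
        return 1;
    }
    map[0].addr = 0;
    map[0].size = hole_start_pfn << PAGE_SHIFT;           /* low RAM */
    map[1].addr = (uint64_t)0x100000 << PAGE_SHIFT;       /* 4GB boundary */
    map[1].size = (total_pages - hole_start_pfn) << PAGE_SHIFT; /* relocated RAM */
    return 2;
}
```

For a 5GB guest with the hole at 3GB (PFN 0xc0000), this yields a 3GB entry at 0 and a 2GB entry at 4GB, matching the two-entry E820 shown further down.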
> 
> It adds a new guest config file option, "pci_hole". The user can specify
> the start PFN, such as '0xc0000', or, when using xl, '1', which will
> automatically figure out the start of the PCI hole.
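As a quick sanity check on that example value (a throwaway sketch; 0xc0000 is just the example PFN from the paragraph above):

```c
#include <assert.h>
#include <stdint.h>

/* A PFN counts 4KB pages, so the hole's start address is pfn << 12, and
 * the hole runs from there up to the 4GB boundary at PFN 0x100000. */
static uint64_t pfn_to_addr(uint64_t pfn)
{
    return pfn << 12;
}

static uint64_t hole_bytes(uint64_t start_pfn)
{
    return (0x100000ULL - start_pfn) << 12;
}
```

So pci_hole=0xc0000 puts the hole at the 3GB mark and makes it 1GB wide.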
> 
> *: This option requires support in the Linux kernel to actually deal with
> the two entries in the E820 map and a P2M space filled with ~0.
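The P2M shape the kernel has to cope with can be pictured with a small standalone sketch (hypothetical and simplified relative to the xc_dom_x86.c changes below; INVALID_MFN here is simply ~0, and the identity mapping stands in for the real MFN values):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define INVALID_MFN (~0UL)
#define FOUR_GB_PFN 0x100000UL

/* Fill a p2m array in the three passes the patch uses: identity below the
 * hole, INVALID_MFN across the hole, identity again from 4GB upward. The
 * caller must supply FOUR_GB_PFN + total_pages - pci_hole entries. */
static void fill_p2m(unsigned long *p2m, unsigned long total_pages,
                     unsigned long pci_hole)
{
    unsigned long pfn;

    for ( pfn = 0; pfn < pci_hole; pfn++ )        /* pass 1: low RAM */
        p2m[pfn] = pfn;
    for ( ; pfn < FOUR_GB_PFN; pfn++ )            /* pass 2: the hole */
        p2m[pfn] = INVALID_MFN;
    for ( ; pfn < FOUR_GB_PFN + total_pages - pci_hole; pfn++ )
        p2m[pfn] = pfn;                           /* pass 3: above 4GB */
}
```

Everything between the hole start and 4GB carries ~0, which is exactly what the guest kernel patches have to tolerate.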
> 
> 
> The patches (draft, not ready for upstream) for the Linux kernel to
> support this are available at:
> 
> git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git devel/e820-hole
> 
> With all of these patches, the E820 of a Linux guest given 4GB (or more)
> of memory looks like this (2.6.37-rc1+devel/e820-hole):
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 00000000a0000000 (usable)
> [    0.000000]  Xen: 0000000100000000 - 0000000160800000 (usable)
> 
> compared to (2.6.36)
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000100000000 (usable)
> 
> and (2.6.37-rc1):
> [    0.000000]  Xen: 0000000000000000 - 00000000000a0000 (usable)
> [    0.000000]  Xen: 00000000000a0000 - 0000000000100000 (reserved)
> [    0.000000]  Xen: 0000000000100000 - 0000000100800000 (usable)
> 
> Regarding the patches I am attaching here, what is the magic incantation
> to make the indentation match the style guide for the tools/libxc
> directory? The tab spacing is a bit off (I think).
> 
> I've tested this so far only on 64-bit guests, and I am quite sure the
> toolstack needs some extra care for 32-bit guests.
> 
> But please take a look and give feedback.
> 
> diff --git a/tools/libxc/xc_dom.h b/tools/libxc/xc_dom.h
> --- a/tools/libxc/xc_dom.h
> +++ b/tools/libxc/xc_dom.h
> @@ -91,6 +91,8 @@ struct xc_dom_image {
>  
>      /* physical memory */
>      xen_pfn_t total_pages;
> +    /* Start of the PCI hole; extends up to 4GB. */
> +    xen_pfn_t pci_hole;
>      struct xc_dom_phys *phys_pages;
>      int realmodearea_log;
>  
> diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
> --- a/tools/libxc/xc_dom_core.c
> +++ b/tools/libxc/xc_dom_core.c
> @@ -715,17 +715,22 @@ int xc_dom_update_guest_p2m(struct xc_do
>      uint32_t *p2m_32;
>      uint64_t *p2m_64;
>      xen_pfn_t i;
> +    size_t tot_pages;
>  
>      if ( !dom->p2m_guest )
>          return 0;
>  
> +    tot_pages = dom->total_pages;
> +    if ( dom->pci_hole )
> +        tot_pages += (0x100000 - dom->pci_hole);
> +
>      switch ( dom->arch_hooks->sizeof_pfn )
>      {
>      case 4:
>          DOMPRINTF("%s: dst 32bit, pages 0x%" PRIpfn "",
> -                  __FUNCTION__, dom->total_pages);
> +                  __FUNCTION__, tot_pages);
>          p2m_32 = dom->p2m_guest;
> -        for ( i = 0; i < dom->total_pages; i++ )
> +        for ( i = 0; i < tot_pages; i++ )
>              if ( dom->p2m_host[i] != INVALID_P2M_ENTRY )
>                  p2m_32[i] = dom->p2m_host[i];
>              else
> @@ -733,9 +738,9 @@ int xc_dom_update_guest_p2m(struct xc_do
>          break;
>      case 8:
>          DOMPRINTF("%s: dst 64bit, pages 0x%" PRIpfn "",
> -                  __FUNCTION__, dom->total_pages);
> +                  __FUNCTION__, tot_pages);
>          p2m_64 = dom->p2m_guest;
> -        for ( i = 0; i < dom->total_pages; i++ )
> +        for ( i = 0; i < tot_pages; i++ )
>              if ( dom->p2m_host[i] != INVALID_P2M_ENTRY )
>                  p2m_64[i] = dom->p2m_host[i];
>              else
> diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
> --- a/tools/libxc/xc_dom_x86.c
> +++ b/tools/libxc/xc_dom_x86.c
> @@ -406,6 +406,15 @@ static int alloc_magic_pages(struct xc_d
>  {
>      size_t p2m_size = dom->total_pages * dom->arch_hooks->sizeof_pfn;
>  
> +    if ( dom->pci_hole && (dom->total_pages > dom->pci_hole) )
> +    {
> +        size_t p2m_pci_hole_size = (0x100000 - dom->pci_hole) *
> +            dom->arch_hooks->sizeof_pfn;
> +
> +        DOMPRINTF("%s: Expanding P2M to include PCI hole (%ld->%ld)",
> +                  __FUNCTION__, p2m_size, p2m_size + p2m_pci_hole_size);
> +        p2m_size += p2m_pci_hole_size;
> +    }
>      /* allocate phys2mach table */
>      if ( xc_dom_alloc_segment(dom, &dom->p2m_seg, "phys2mach", 0, p2m_size) )
>          return -1;
> @@ -712,6 +721,7 @@ int arch_setup_meminit(struct xc_dom_ima
>  {
>      int rc;
>      xen_pfn_t pfn, allocsz, i, j, mfn;
> +    size_t p2m_size;
>  
>      rc = x86_compat(dom->xch, dom->guest_domid, dom->guest_type);
>      if ( rc )
> @@ -723,8 +733,13 @@ int arch_setup_meminit(struct xc_dom_ima
>          if ( rc )
>              return rc;
>      }
> +    p2m_size = dom->total_pages;
>  
> -    dom->p2m_host = xc_dom_malloc(dom, sizeof(xen_pfn_t) * dom->total_pages);
> +    if ( dom->pci_hole && (dom->total_pages > dom->pci_hole) )
> +        p2m_size += (0x100000 - dom->pci_hole);
> +
> +    DOMPRINTF("Allocating %ld bytes for P2M", p2m_size * sizeof(xen_pfn_t));
> +    dom->p2m_host = xc_dom_malloc(dom, sizeof(xen_pfn_t) * p2m_size);
>      if ( dom->superpages )
>      {
>          int count = dom->total_pages >> SUPERPAGE_PFN_SHIFT;
> @@ -750,21 +765,66 @@ int arch_setup_meminit(struct xc_dom_ima
>      }
>      else
>      {
> -        /* setup initial p2m */
> -        for ( pfn = 0; pfn < dom->total_pages; pfn++ )
> -            dom->p2m_host[pfn] = pfn;
> -        
> -        /* allocate guest memory */
> -        for ( i = rc = allocsz = 0;
> -              (i < dom->total_pages) && !rc;
> -              i += allocsz )
> +        /* For PCI mapping, stick INVALID_MFN in the PCI hole. */
> +        if ( dom->pci_hole && (dom->total_pages > dom->pci_hole) )
>          {
> -            allocsz = dom->total_pages - i;
> -            if ( allocsz > 1024*1024 )
> -                allocsz = 1024*1024;
> -            rc = xc_domain_populate_physmap_exact(
> -                dom->xch, dom->guest_domid, allocsz,
> -                0, 0, &dom->p2m_host[i]);
> +            /* Set up the initial p2m in three passes. */
> +            for ( pfn = 0; pfn < dom->pci_hole; pfn++ )
> +                dom->p2m_host[pfn] = pfn;
> +
> +            xc_dom_printf(dom->xch, "%s: 0x0->0x%lx has PFNs.",
> +                          __FUNCTION__, pfn);
> +            xc_dom_printf(dom->xch, "%s: 0x%lx -> 0x%x has INVALID_MFN",
> +                          __FUNCTION__, pfn, 0x100000);
> +            for ( ; pfn < 0x100000; pfn++ )
> +                dom->p2m_host[pfn] = INVALID_MFN;
> +
> +            for ( ; pfn < 0x100000 + dom->total_pages - dom->pci_hole; pfn++ )
> +                dom->p2m_host[pfn] = pfn;
> +            xc_dom_printf(dom->xch, "%s: 0x%x -> 0x%lx has PFNs.",
> +                          __FUNCTION__, 0x100000, pfn);
> +
> +            /* Allocate guest memory in two passes. */
> +            for ( i = rc = allocsz = 0;
> +                  (i < dom->pci_hole) && !rc;
> +                  i += allocsz )
> +            {
> +                allocsz = dom->pci_hole - i;
> +                xc_dom_printf(dom->xch, "%s: Populating M2P 0x%lx->0x%lx",
> +                              __FUNCTION__, i, i + allocsz);
> +                rc = xc_domain_populate_physmap_exact(
> +                    dom->xch, dom->guest_domid, allocsz, 0, 0,
> +                    &dom->p2m_host[i]);
> +            }
> +            for ( i = 0x100000, allocsz = rc = 0;
> +                  (i < (0x100000 + dom->total_pages - dom->pci_hole)) && !rc;
> +                  i += allocsz )
> +            {
> +                allocsz = (dom->total_pages - dom->pci_hole) - (i - 0x100000);
> +                if ( allocsz > 1024*1024 )
> +                    allocsz = 1024*1024;
> +                xc_dom_printf(dom->xch, "%s: Populating M2P 0x%lx->0x%lx",
> +                              __FUNCTION__, i, i + allocsz);
> +                rc = xc_domain_populate_physmap_exact(
> +                    dom->xch, dom->guest_domid, allocsz, 0, 0,
> +                    &dom->p2m_host[i]);
> +            }
> +            xc_dom_printf(dom->xch, "%s: Done with PCI populate physmap",
> +                          __FUNCTION__);
> +        } else {
> +                /* setup initial p2m */
> +                for ( pfn = 0; pfn < dom->total_pages; pfn++ )
> +                    dom->p2m_host[pfn] = pfn;
> +                
> +                /* allocate guest memory */
> +                for ( i = rc = allocsz = 0;
> +                      (i < dom->total_pages) && !rc;
> +                      i += allocsz )
> +                {
> +                    allocsz = dom->total_pages - i;
> +                    if ( allocsz > 1024*1024 )
> +                        allocsz = 1024*1024;
> +                    rc = xc_domain_populate_physmap_exact(
> +                        dom->xch, dom->guest_domid, allocsz,
> +                        0, 0, &dom->p2m_host[i]);
> +                }
>          }
>      }
>  
> diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
> --- a/tools/libxc/xc_domain.c
> +++ b/tools/libxc/xc_domain.c
> @@ -481,16 +481,25 @@ int xc_domain_pin_memory_cacheattr(xc_in
>  #include "xc_e820.h"
>  int xc_domain_set_memmap_limit(xc_interface *xch,
>                                 uint32_t domid,
> -                               unsigned long map_limitkb)
> +                               unsigned long map_limitkb,
> +                               xen_pfn_t pci_hole_start)
>  {
>      int rc;
> +    uint64_t delta_kb;
> +    size_t e820_sz;
>      struct xen_foreign_memory_map fmap = {
>          .domid = domid,
>          .map = { .nr_entries = 1 }
>      };
>      DECLARE_HYPERCALL_BUFFER(struct e820entry, e820);
>  
> -    e820 = xc_hypercall_buffer_alloc(xch, e820, sizeof(*e820));
> +    delta_kb = map_limitkb - ((uint64_t)pci_hole_start << 2);
> +    if ( pci_hole_start && (delta_kb > 0) )
> +        e820_sz = sizeof(*e820) * 2;
> +    else
> +        e820_sz = sizeof(*e820);
> +
> +    e820 = xc_hypercall_buffer_alloc(xch, e820, e820_sz);
>  
>      if ( e820 == NULL )
>      {
> @@ -502,6 +511,16 @@ int xc_domain_set_memmap_limit(xc_interf
>      e820->size = (uint64_t)map_limitkb << 10;
>      e820->type = E820_RAM;
>  
> +    if ( pci_hole_start && (delta_kb > 0) )
> +    {
> +        fmap.map.nr_entries++;
> +        e820[0].size = (uint64_t)pci_hole_start << 12;
> +
> +        e820[1].type = E820_RAM;
> +        e820[1].addr = (uint64_t)0x100000 << 12; /* addr is in PFNs...     */
> +        e820[1].size = (uint64_t)delta_kb << 10; /* ... while this is kB. */
> +    }
> +
>      set_xen_guest_handle(fmap.map.buffer, e820);
>  
>      rc = do_memory_op(xch, XENMEM_set_memory_map, &fmap, sizeof(fmap));
> @@ -513,7 +532,8 @@ int xc_domain_set_memmap_limit(xc_interf
>  #else
>  int xc_domain_set_memmap_limit(xc_interface *xch,
>                                 uint32_t domid,
> -                               unsigned long map_limitkb)
> +                               unsigned long map_limitkb,
> +                               xen_pfn_t pci_hole_start)
>  {
>      PERROR("Function not implemented");
>      errno = ENOSYS;
> diff --git a/tools/libxc/xenctrl.h b/tools/libxc/xenctrl.h
> --- a/tools/libxc/xenctrl.h
> +++ b/tools/libxc/xenctrl.h
> @@ -913,7 +913,8 @@ int xc_domain_setmaxmem(xc_interface *xc
>  
>  int xc_domain_set_memmap_limit(xc_interface *xch,
>                                 uint32_t domid,
> -                               unsigned long map_limitkb);
> +                               unsigned long map_limitkb,
> +                               xen_pfn_t pci_hole_start);
>  
>  int xc_domain_set_time_offset(xc_interface *xch,
>                                uint32_t domid,
> diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
> --- a/tools/libxl/libxl.h
> +++ b/tools/libxl/libxl.h
> @@ -392,6 +392,7 @@ int libxl_device_disk_getinfo(libxl_ctx
>                                libxl_device_disk *disk, libxl_diskinfo *diskinfo);
>  int libxl_cdrom_insert(libxl_ctx *ctx, uint32_t domid, libxl_device_disk *disk);
>  
> +int libxl_find_pci_hole(uint32_t *start_pfn);
>  /*
>   * Make a disk available in this domain. Returns path to a device.
>   */
> diff --git a/tools/libxl/libxl.idl b/tools/libxl/libxl.idl
> --- a/tools/libxl/libxl.idl
> +++ b/tools/libxl/libxl.idl
> @@ -110,6 +110,7 @@ libxl_domain_build_info = Struct("domain
>                                          ])),
>                   ("pv", "!%s", Struct(None,
>                                         [("slack_memkb", uint32),
> +                                        ("pci_hole_start", uint32),
>                                          ("bootloader", string),
>                                          ("bootloader_args", string),
>                                          ("cmdline", string),
> diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
> --- a/tools/libxl/libxl_dom.c
> +++ b/tools/libxl/libxl_dom.c
> @@ -71,7 +71,8 @@ int libxl__build_pre(libxl_ctx *ctx, uin
>      xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + LIBXL_MAXMEM_CONSTANT);
>      xc_domain_set_memmap_limit(ctx->xch, domid,
>              (info->hvm) ? info->max_memkb :
> -            (info->max_memkb + info->u.pv.slack_memkb));
> +            (info->max_memkb + info->u.pv.slack_memkb),
> +            (info->hvm) ? 0 : info->u.pv.pci_hole_start);
>      xc_domain_set_tsc_info(ctx->xch, domid, info->tsc_mode, 0, 0, 0);
>      if ( info->disable_migrate )
>          xc_domain_disable_migrate(ctx->xch, domid);
> @@ -181,6 +182,8 @@ int libxl__build_pv(libxl_ctx *ctx, uint
>              }
>          }
>      }
> +    if ( info->u.pv.pci_hole_start )
> +        dom->pci_hole = info->u.pv.pci_hole_start;
>  
>      dom->flags = flags;
>      dom->console_evtchn = state->console_port;
> diff --git a/tools/libxl/libxl_pci.c b/tools/libxl/libxl_pci.c
> --- a/tools/libxl/libxl_pci.c
> +++ b/tools/libxl/libxl_pci.c
> @@ -1066,3 +1066,51 @@ int libxl_device_pci_shutdown(libxl_ctx
>      free(pcidevs);
>      return 0;
>  }
> +
> +#define MAX_LINE 300
> +int libxl_find_pci_hole(uint32_t *start_pfn)
> +{
> +    FILE *fp;
> +    char *s;
> +    char buf[MAX_LINE];
> +    int ret = -ENODEV;
> +    long int pci_hole_phys;
> +
> +    *start_pfn = 0;
> +    fp = fopen("/proc/iomem", "r");
> +    if (!fp)
> +        return ret;
> +
> +    while (1) {
> +        s = fgets(buf, MAX_LINE, fp);
> +        if (!s)
> +            break;
> +        if (strlen(buf) < 1)
> +            continue;
> +        if (buf[strlen(buf)-1] == '\n')
> +            buf[strlen(buf)-1] = '\0';
> +        s = strchr(buf, 'P');
> +        if (!s)
> +            continue;
> +        if (strncmp(s, "PCI", 3) == 0) {
> +            if (buf[0] == ' ')
> +                continue;
> +            s = strchr(buf, '-');
> +            if (!s)
> +                break;
> +            s[0] = '\0';
> +            pci_hole_phys = strtol(buf, NULL, 16);
> +            if (!pci_hole_phys)
> +                break;
> +            /* Ignore holes below 16MB. */
> +            if (pci_hole_phys < 0x1000000)
> +                continue;
> +            *start_pfn = pci_hole_phys >> 12;
> +            fprintf(stderr, "The PCI hole starts at PFN 0x%x\n", *start_pfn);
> +            ret = 0;
> +            break;
> +        }
> +    }
> +    fclose(fp);
> +    return ret;
> +}
> diff --git a/tools/libxl/xl_cmdimpl.c b/tools/libxl/xl_cmdimpl.c
> --- a/tools/libxl/xl_cmdimpl.c
> +++ b/tools/libxl/xl_cmdimpl.c
> @@ -1078,6 +1078,14 @@ skip_vfb:
>      if (!xlu_cfg_get_long (config, "pci_power_mgmt", &l))
>          pci_power_mgmt = l;
>  
> +    if (!xlu_cfg_get_long (config, "pci_hole", &l)) {
> +        if (l == 1) {
> +            uint32_t pfn_start = 0;
> +            if (!libxl_find_pci_hole(&pfn_start))
> +                b_info->u.pv.pci_hole_start = pfn_start;
> +        } else
> +            b_info->u.pv.pci_hole_start = l;
> +    }
>      if (!xlu_cfg_get_list (config, "pci", &pcis, 0, 0)) {
>          int i;
>          d_config->num_pcidevs = 0;
> diff --git a/tools/python/xen/lowlevel/xc/xc.c b/tools/python/xen/lowlevel/xc/xc.c
> --- a/tools/python/xen/lowlevel/xc/xc.c
> +++ b/tools/python/xen/lowlevel/xc/xc.c
> @@ -458,6 +458,7 @@ static PyObject *pyxc_linux_build(XcObje
>      unsigned int mem_mb;
>      unsigned long store_mfn = 0;
>      unsigned long console_mfn = 0;
> +    int pci_hole_start = 0;
>      PyObject* elfnote_dict;
>      PyObject* elfnote = NULL;
>      PyObject* ret;
> @@ -467,14 +468,16 @@ static PyObject *pyxc_linux_build(XcObje
>                                  "console_evtchn", "image",
>                                  /* optional */
>                                  "ramdisk", "cmdline", "flags",
> -                                "features", "vhpt", "superpages", NULL };
> -
> -    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiiis|ssisii", kwd_list,
> +                                "features", "vhpt", "superpages",
> +                                "pci_hole", NULL };
> +
> +    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iiiis|ssisiii", kwd_list,
>                                        &domid, &store_evtchn, &mem_mb,
>                                        &console_evtchn, &image,
>                                        /* optional */
>                                        &ramdisk, &cmdline, &flags,
> -                                      &features, &vhpt, &superpages) )
> +                                      &features, &vhpt, &superpages,
> +                                      &pci_hole_start) )
>          return NULL;
>  
>      xc_dom_loginit(self->xc_handle);
> @@ -486,6 +489,8 @@ static PyObject *pyxc_linux_build(XcObje
>  
>      dom->superpages = superpages;
>  
> +    dom->pci_hole = pci_hole_start;
> +
>      if ( xc_dom_linux_build(self->xc_handle, dom, domid, mem_mb, image,
>                              ramdisk, flags, store_evtchn, &store_mfn,
>                              console_evtchn, &console_mfn) != 0 ) {
> @@ -1659,11 +1664,13 @@ static PyObject *pyxc_domain_set_memmap_
>  {
>      uint32_t dom;
>      unsigned int maplimit_kb;
> -
> -    if ( !PyArg_ParseTuple(args, "ii", &dom, &maplimit_kb) )
> +    int pci_hole_start = 0;
> +
> +    if ( !PyArg_ParseTuple(args, "ii|i", &dom, &maplimit_kb, &pci_hole_start) )
>          return NULL;
>  
> -    if ( xc_domain_set_memmap_limit(self->xc_handle, dom, maplimit_kb) != 0 )
> +    if ( xc_domain_set_memmap_limit(self->xc_handle, dom, maplimit_kb,
> +                                    pci_hole_start) != 0 )
>          return pyxc_error_to_exception(self->xc_handle);
>      
>      Py_INCREF(zero);
> @@ -2661,6 +2668,7 @@ static PyMethodDef pyxc_methods[] = {
>        "Set a domain's physical memory mappping limit\n"
>        " dom [int]: Identifier of domain.\n"
>        " map_limitkb [int]: .\n"
> +      " pci_hole_start [int]: PFN for start of PCI hole (optional).\n"
>        "Returns: [int] 0 on success; -1 on error.\n" },
>  
>  #ifdef __ia64__
> diff --git a/tools/python/xen/xend/XendConfig.py b/tools/python/xen/xend/XendConfig.py
> --- a/tools/python/xen/xend/XendConfig.py
> +++ b/tools/python/xen/xend/XendConfig.py
> @@ -241,6 +241,7 @@ XENAPI_CFG_TYPES = {
>      'suppress_spurious_page_faults': bool0,
>      's3_integrity' : int,
>      'superpages' : int,
> +    'pci_hole' : int,
>      'memory_sharing': int,
>      'pool_name' : str,
>      'Description': str,
> @@ -422,6 +423,7 @@ class XendConfig(dict):
>              'target': 0,
>              'pool_name' : 'Pool-0',
>              'superpages': 0,
> +            'pci_hole': 0,
>              'description': '',
>          }
>          
> @@ -2135,6 +2137,9 @@ class XendConfig(dict):
>              image.append(['args', self['PV_args']])
>          if self.has_key('superpages'):
>              image.append(['superpages', self['superpages']])
> +        if self.has_key('pci_hole'):
> +            image.append(['pci_hole', self['pci_hole']])
>  
>          for key in XENAPI_PLATFORM_CFG_TYPES.keys():
>              if key in self['platform']:
> @@ -2179,6 +2184,10 @@ class XendConfig(dict):
>          val = sxp.child_value(image_sxp, 'superpages')
>          if val is not None:
>              self['superpages'] = val
> +
> +        val = sxp.child_value(image_sxp, 'pci_hole')
> +        if val is not None:
> +            self['pci_hole'] = val
>          
>          val = sxp.child_value(image_sxp, 'memory_sharing')
>          if val is not None:
> diff --git a/tools/python/xen/xend/image.py b/tools/python/xen/xend/image.py
> --- a/tools/python/xen/xend/image.py
> +++ b/tools/python/xen/xend/image.py
> @@ -84,6 +84,7 @@ class ImageHandler:
>  
>      ostype = None
>      superpages = 0
> +    pci_hole = 0
>      memory_sharing = 0
>  
>      def __init__(self, vm, vmConfig):
> @@ -711,6 +712,7 @@ class LinuxImageHandler(ImageHandler):
>          self.vramsize = int(vmConfig['platform'].get('videoram',4)) * 1024
>          self.is_stubdom = (self.kernel.find('stubdom') >= 0)
>          self.superpages = int(vmConfig['superpages'])
> +        self.pci_hole = int(vmConfig['pci_hole'])
>  
>      def buildDomain(self):
>          store_evtchn = self.vm.getStorePort()
> @@ -729,6 +731,7 @@ class LinuxImageHandler(ImageHandler):
>          log.debug("features       = %s", self.vm.getFeatures())
>          log.debug("flags          = %d", self.flags)
>          log.debug("superpages     = %d", self.superpages)
> +        log.debug("pci_hole       = %d", self.pci_hole)
>          if arch.type == "ia64":
>              log.debug("vhpt          = %d", self.vhpt)
>  
> @@ -742,7 +745,8 @@ class LinuxImageHandler(ImageHandler):
>                                features       = self.vm.getFeatures(),
>                                flags          = self.flags,
>                                vhpt           = self.vhpt,
> -                              superpages     = self.superpages)
> +                              superpages     = self.superpages,
> +                              pci_hole       = self.pci_hole)
>  
>      def getBitSize(self):
>          return xc.getBitSize(image    = self.kernel,
> @@ -774,7 +778,6 @@ class LinuxImageHandler(ImageHandler):
>          args = args + ([ "-M", "xenpv"])
>          return args
>  
> -
>  class HVMImageHandler(ImageHandler):
>  
>      ostype = "hvm"
> @@ -1065,7 +1068,7 @@ class X86_Linux_ImageHandler(LinuxImageH
>          # set physical mapping limit
>          # add an 8MB slack to balance backend allocations.
>          mem_kb = self.getRequiredMaximumReservation() + (8 * 1024)
> -        xc.domain_set_memmap_limit(self.vm.getDomid(), mem_kb)
> +        xc.domain_set_memmap_limit(self.vm.getDomid(), mem_kb, self.pci_hole)
>          rc = LinuxImageHandler.buildDomain(self)
>          self.setCpuid()
>          return rc
> diff --git a/tools/python/xen/xm/create.py b/tools/python/xen/xm/create.py
> --- a/tools/python/xen/xm/create.py
> +++ b/tools/python/xen/xm/create.py
> @@ -680,6 +680,11 @@ gopts.var('superpages', val='0|1',
>             fn=set_int, default=0,
>             use="Create domain with superpages")
>  
> +gopts.var('pci_hole', val='0x<XXX>|0',
> +           fn=set_int, default=0,
> +           use="""Create domain with a PCI hole. The value is the PFN of the
> +           start of the PCI hole. Usually that is 0xc0000.""")
> +
>  def err(msg):
>      """Print an error to stderr and exit.
>      """
> @@ -770,6 +775,9 @@ def configure_image(vals):
>          config_image.append(['args', vals.extra])
>      if vals.superpages:
>          config_image.append(['superpages', vals.superpages])
> +    if vals.pci_hole:
> +        config_image.append(['pci_hole', vals.pci_hole])
>  
>      if vals.builder == 'hvm':
>          configure_hvm(config_image, vals)
> diff --git a/tools/python/xen/xm/xenapi_create.py b/tools/python/xen/xm/xenapi_create.py
> --- a/tools/python/xen/xm/xenapi_create.py
> +++ b/tools/python/xen/xm/xenapi_create.py
> @@ -285,6 +285,8 @@ class xenapi_create:
>                  vm.attributes["s3_integrity"].value,
>              "superpages":
>                  vm.attributes["superpages"].value,
> +            "pci_hole":
> +                vm.attributes["pci_hole"].value,
>              "memory_static_max":
>                  get_child_node_attribute(vm, "memory", "static_max"),
>              "memory_static_min":
> @@ -697,6 +699,8 @@ class sxp2xml:
>              = str(get_child_by_name(config, "s3_integrity", 0))
>          vm.attributes["superpages"] \
>              = str(get_child_by_name(config, "superpages", 0))
> +        vm.attributes["pci_hole"] \
> +            = str(get_child_by_name(config, "pci_hole", 0))
>          vm.attributes["pool_name"] \
>              = str(get_child_by_name(config, "pool_name", "Pool-0"))
>  
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel


