
Re: [Xen-devel] [PATCH 1/3] xen/tools: Add 64 bits big bar support



>>> On 16.08.12 at 12:48, "Hao, Xudong" <xudong.hao@xxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 15.08.12 at 08:54, Xudong Hao <xudong.hao@xxxxxxxxx> wrote:
>> > --- a/tools/firmware/hvmloader/config.h    Tue Jul 24 17:02:04 2012 +0200
>> > +++ b/tools/firmware/hvmloader/config.h    Thu Jul 26 15:40:01 2012 +0800
>> > @@ -53,6 +53,10 @@ extern struct bios_config ovmf_config;
>> >  /* MMIO hole: Hardcoded defaults, which can be dynamically expanded. */
>> >  #define PCI_MEM_START       0xf0000000
>> >  #define PCI_MEM_END         0xfc000000
>> > +#define PCI_HIGH_MEM_START  0xa000000000ULL
>> > +#define PCI_HIGH_MEM_END    0xf000000000ULL
>> 
>> With such hard-coded values, this is hardly meant to be anything
>> more than an RFC, is it? These values should not exist in the first
>> place, and the variables below should be determined from VM
>> characteristics (best would presumably be to allocate them top
>> down from the end of physical address space, making sure you
>> don't run into RAM).

No comment on this part?
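
To illustrate what I mean, something along these lines (just a rough,
untested sketch; whether hvm_info->high_mem_pgend, PAGE_SHIFT, and
BUG() are really the right things to use in hvmloader here, and how an
overlap with RAM should actually be dealt with, I leave open):

static uint64_t pci_high_mem_start, pci_high_mem_end;

static void determine_high_mmio_window(uint64_t high_mmio_needed)
{
    uint32_t eax, ebx, ecx, edx, phys_bits;
    uint64_t ram_end;

    /* CPUID leaf 0x80000008: EAX[7:0] = physical address width. */
    asm volatile ( "cpuid"
                   : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
                   : "0" (0x80000008) );
    phys_bits = eax & 0xff;

    /* End of the RAM placed above 4GB (assumed to be what
     * hvm_info->high_mem_pgend describes, in page frames). */
    ram_end = (uint64_t)hvm_info->high_mem_pgend << PAGE_SHIFT;

    /* Allocate the window top down from the end of the guest
     * physical address space, keeping it 1GB aligned for example. */
    pci_high_mem_end = 1ULL << phys_bits;
    pci_high_mem_start = (pci_high_mem_end - high_mmio_needed) &
                         ~((1ULL << 30) - 1);

    /* Must not run into RAM; a real implementation would have to
     * handle this rather than just give up. */
    if ( pci_high_mem_start < ram_end )
        BUG();
}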

>> > @@ -133,23 +142,35 @@ void pci_setup(void)
>> >          /* Map the I/O memory and port resources. */
>> >          for ( bar = 0; bar < 7; bar++ )
>> >          {
>> > +            bar_sz_upper = 0;
>> >              bar_reg = PCI_BASE_ADDRESS_0 + 4*bar;
>> >              if ( bar == 6 )
>> >                  bar_reg = PCI_ROM_ADDRESS;
>> >
>> >              bar_data = pci_readl(devfn, bar_reg);
>> > +            is_64bar = !!((bar_data & (PCI_BASE_ADDRESS_SPACE |
>> > +                         PCI_BASE_ADDRESS_MEM_TYPE_MASK)) ==
>> > +                         (PCI_BASE_ADDRESS_SPACE_MEMORY |
>> > +                         PCI_BASE_ADDRESS_MEM_TYPE_64));
>> >              pci_writel(devfn, bar_reg, ~0);
>> >              bar_sz = pci_readl(devfn, bar_reg);
>> >              pci_writel(devfn, bar_reg, bar_data);
>> > +
>> > +            if (is_64bar) {
>> > +                bar_data_upper = pci_readl(devfn, bar_reg + 4);
>> > +                pci_writel(devfn, bar_reg + 4, ~0);
>> > +                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
>> > +                pci_writel(devfn, bar_reg + 4, bar_data_upper);
>> > +                bar_sz = (bar_sz_upper << 32) | bar_sz;
>> > +            }
>> > +            bar_sz &= (((bar_data & PCI_BASE_ADDRESS_SPACE) ==
>> > +                        PCI_BASE_ADDRESS_SPACE_MEMORY) ?
>> > +                       0xfffffffffffffff0 :
>> 
>> This should be a proper constant (or the masking could be
>> done earlier, in which case you could continue to use the
>> existing PCI_BASE_ADDRESS_MEM_MASK).
>> 
> 
> So the PCI_BASE_ADDRESS_MEM_MASK can be defined as 'ULL'.

I'd recommend not touching existing variables. As said before,
by re-ordering the code you can still use the constant as-is.
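
I.e. something along these lines (sketch only, not compile-tested;
bar_sz would become a uint64_t, but the masking happens while only the
low dword is populated, so PCI_BASE_ADDRESS_MEM_MASK can remain the
existing 32-bit constant):

            pci_writel(devfn, bar_reg, ~0);
            bar_sz = pci_readl(devfn, bar_reg);
            pci_writel(devfn, bar_reg, bar_data);

            /* Mask while only the low 32 bits are populated. */
            bar_sz &= ((bar_data & PCI_BASE_ADDRESS_SPACE) ==
                       PCI_BASE_ADDRESS_SPACE_MEMORY) ?
                      PCI_BASE_ADDRESS_MEM_MASK :
                      (PCI_BASE_ADDRESS_IO_MASK & 0xffff);

            if ( is_64bar )
            {
                bar_data_upper = pci_readl(devfn, bar_reg + 4);
                pci_writel(devfn, bar_reg + 4, ~0);
                bar_sz_upper = pci_readl(devfn, bar_reg + 4);
                pci_writel(devfn, bar_reg + 4, bar_data_upper);
                /* Merge in the upper half only after masking. */
                bar_sz |= (uint64_t)bar_sz_upper << 32;
            }
            bar_sz &= ~(bar_sz - 1);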

>> > @@ -193,10 +212,14 @@ void pci_setup(void)
>> >          pci_writew(devfn, PCI_COMMAND, cmd);
>> >      }
>> >
>> > -    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
>> > -            ((pci_mem_start << 1) != 0) )
>> > +    while ( mmio_total > (pci_mem_end - pci_mem_start) && pci_mem_start )
>> 
>> The old code here could remain in place if ...
>> 
>> >          pci_mem_start <<= 1;
>> >
>> > +    if (!pci_mem_start) {
>> 
>> .. the condition here would get changed to the one used in the
>> first part of the while above.
>> 
>> > +        bar64_relocate = 1;
>> > +        pci_mem_start = PCI_MIN_MMIO_ADDR;
>> 
>> Which would then also make this assignment (and the
>> constant) unnecessary.
>> 
> 
> Cool, I'll leave the old code, and just add
> 
> if (pci_mem_start = PCI_MIN_MMIO_ADDR)
>     bar64_relocate = 1;

No, that's not correct. It's the other half of the while()'s condition
that you need to re-check here.
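
I.e. (sketch):

    while ( (mmio_total > (pci_mem_end - pci_mem_start)) &&
            ((pci_mem_start << 1) != 0) )
        pci_mem_start <<= 1;

    /* Re-check the first half of the condition: if the hole below
     * 4GB still isn't large enough, relocate 64-bit BARs high. */
    if ( mmio_total > (pci_mem_end - pci_mem_start) )
        bar64_relocate = 1;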

Jan

