
[Xen-devel] RE: PCI DMA Limitations (Stephen Donnelly)



You can change "#define CONFIG_DMA_BITSIZE 30" in xen/include/asm-x86/config.h
and "#define DEFAULT_IO_TLB_DMA_BITS 30" in
linux-2.6-xen-sparse/arch/i386/kernel/swiotlb.c to change its size.
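
For illustration only, this is what those two lines would look like if you
raised the window from 30 to, say, 31 bits (31 is just an example value; keep
the two definitions in step with each other):

    /* xen/include/asm-x86/config.h (hypervisor side) */
    #define CONFIG_DMA_BITSIZE        31    /* was 30: 2^30 = 1GB -> 2^31 = 2GB */

    /* linux-2.6-xen-sparse/arch/i386/kernel/swiotlb.c (dom0 kernel side) */
    #define DEFAULT_IO_TLB_DMA_BITS   31    /* keep consistent with the hypervisor */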



                                                                                
                                                 
From:       xen-devel-request@xxxxxxxxxsource.com
Sent by:    xen-devel-bounces@xxxxxxxxxsource.com
To:         xen-devel@xxxxxxxxxxxxxxxxxxx
Cc:
Subject:    RE: PCI DMA Limitations (Stephen Donnelly)
Date:       2007-03-26 14:12
Reply-To:   xen-devel



Send Xen-devel mailing list submissions to
           xen-devel@xxxxxxxxxxxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
           http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel
or, via email, send a message with subject or body 'help' to
           xen-devel-request@xxxxxxxxxxxxxxxxxxx

You can reach the person managing the list at
           xen-devel-owner@xxxxxxxxxxxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Xen-devel digest..."


Today's Topics:

   1. Re: question about gmfn_to_mfn() (tgh)
   2. Re: question about machine-to-physic table and phy-to-machine table (tgh)
   3. Re: RFC: [0/2] Remove netloop by lazy copying in netback
      (Herbert Xu)
   4. Re: memsize for HVM save/restore (Zhai, Edwin)
   5. PCI DMA Limitations (Stephen Donnelly)
   6. Xen error:no memory allocted to domU (Hao Sun)
   7. question about reboot VM (tgh)


----------------------------------------------------------------------

Message: 1
Date: Mon, 26 Mar 2007 09:09:10 +0800
From: tgh <tianguanhua@xxxxxxxxxx>
Subject: Re: [Xen-devel] question about gmfn_to_mfn()
To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Cc: Guy Zana <guy@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Message-ID: <46071D36.8070506@xxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8; format=flowed

Thank you all for the replies.

In the HVM case, Xen maintains the guest OS's p2m table, while in the
paravirt case the guest maintains its own p2m. Is that right?

Then, in the paravirt case, if a VM's maximum memory size is 512M, it is
allocated 256M by "xm mem-set", and it may only use 128M for running its OS
and applications, how large are its v2p table (or is it a v2m table? I am not
sure) and its p2m table? And what about the guest OS's mem_map size: is it
sized for 512M, 256M, 128M, or something else?

Another point of confusion is how the guest OS maintains its p2m table
(Linux has a v2p table, but no p2m table), and how these tables work in
practice.

could you help me
Thanks in advance








Keir Fraser wrote:
> GMFN is guest machine frame number. It equals GPFN for fully-translated
> guests (e.g., HVM guests). It equals MFN for ordinary PV guests, which
> maintain their own p2m translation table.
>
>  -- Keir
>
>
> On 23/3/07 06:56, "Guy Zana" <guy@xxxxxxxxxxxx> wrote:
>
>
>> mfn = machine frame number; it is an index to a page in the real memory of
>> the system.
>> gmfn = guest's machine frame number, and it is sometimes called gpfn or
>> just pfn.
>> Guests have a translation table between their own virtualized
>> pseudo-physical memory and the real machine memory -> this is exactly what
>> gmfn_to_mfn does.
>> Pfn is a generic term that might be used in all kinds of situations, so you
>> should understand it from the context.
>>
>> Guy.
>>
>>
>>> -----Original Message-----
>>> From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> [mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of tgh
>>> Sent: Friday, March 23, 2007 5:31 AM
>>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>>> Subject: [Xen-devel] question about gmfn_to_mfn()
>>>
>>> hi
>>> I read the code of the balloon part and I am confused about the
>>> meaning and function of "mfn = gmfn_to_mfn(d, gmfn);".
>>> What is gmfn and what is mfn? And in "#define gmfn_to_mfn(_d,
>>> gpfn)  mfn_x(sh_gfn_to_mfn(_d, gpfn))"
>>> it seems that gmfn and gpfn are the same, or what is the
>>> trick in it?
>>>
>>> I am confused about it
>>>
>>> could you help me
>>> Thanks in advance
>>>
>>>
>>> _______________________________________________
>>> Xen-devel mailing list
>>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>>> http://lists.xensource.com/xen-devel
>>>
>>>
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
>>
>
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>
>
>




------------------------------

Message: 2
Date: Mon, 26 Mar 2007 09:39:56 +0800
From: tgh <tianguanhua@xxxxxxxxxx>
Subject: Re: [Xen-devel] question about machine-to-physic table         and
           phy-to-machine table
To: Daniel Stodden <stodden@xxxxxxxxxx>
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Message-ID: <4607246C.1060709@xxxxxxxxxx>
Content-Type: text/plain; charset=UTF-8; format=flowed

Thank you for your reply

>> I am confused about the meaning and function of the machine-to-physical
>> address
>>
>
> it *is* confusing, admittedly. in my understanding, one reason for
> 'm2p'/'p2m' being used is that guest operating systems, most prominently
> linux, have always been using 'pfn' for 'page frame number' and the like
> when referring to 'physical' memory. now you need some kind of
> distinction in the paravirtual guest case, because those oses will deal
> with both.
>
In the paravirt case, the guest OS works with its own MFNs, which is why it
needs the m2p and p2m tables. Is that right?
I am confused about how the guest OS maintains its virt-to-physical and
physical-to-machine mappings: in Linux there is only the v2p mapping, so how
does the guest OS maintain its p2m mapping? And when a virtual address is put
into the MMU, does the CPU hardware translate it into a machine address or
into the guest's physical address?

I am confused about it

could you help me
Thanks in advance
> that host memory becoming a non-contiguous, non-physical one clearly
> doesn't justify to substitute the names all across the kernel codebase.
> equally, you could not name it virtual or similar in the vmm, because
> the term 'virtual' has obviously been allocated elsewhere.
>
> so host memory became 'machine' memory. in a different universe, it
> might have rather been the actual 'physical' one. or 'host' memory.
> virtual machine memory got a 'p' like in both 'pseudo-physical' and/or
> 'pfn' and i suppose turned for a significant number of people into
> 'physical' at some point. which is largely misleading.
>
> regards,
> daniel
>
>




------------------------------

Message: 3
Date: Mon, 26 Mar 2007 12:19:47 +1000
From: Herbert Xu <herbert@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] RFC: [0/2] Remove netloop by lazy copying in
           netback
To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Cc: Xen Development Mailing List <xen-devel@xxxxxxxxxxxxxxxxxxx>
Message-ID: <20070326021947.GA10672@xxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=us-ascii

On Sun, Mar 25, 2007 at 01:27:04PM +0100, Keir Fraser wrote:
>
> So we're back to the problem of doing this switch when Xen is doing the p2m
> translation (as on ia64 for example). On x86 we have a XENMEM_add_to_physmap
> hypercall. This could be generalised to other architectures and extended.
> For example, we could add a XENMAPSPACE_gpfn -- which would mean take the
> 'thing' currently mapped at the specified gpfn and map it at the new gpfn
> location instead. I'd certainly personally rather see add_to_physmap()
> extended than add extra single-purpose crap to the grant-table interfaces.

I've had a look at this and it seems that

1) We don't have the underlying operations suited for this.

We need something that can replace a p2m entry atomically and more
importantly swap two p2m entries rather than setting one and unmapping
the other.  The former is because we can't easily process p2m page
faults in the guest.  The latter is because we still need to unmap the
grant table entry after this operation so we have to keep the entry
around.

This is actually one of the reasons I picked the grant tables interface
originally in that we could unmap it at the same time rather than doing
a full swap followed by an unmap.

So are you OK with adding underlying operations that allow a full swap
of two p2m entries? This would then be used as follows in translated mode:

           a) new_addr = alloc_page
           b) memcpy(new_addr, addr, len)
           c) p2m_swap(__pa(new_addr), __pa(addr))
           d) grant_unmap(__pa(new_addr))

2) I'm unsure what you want me to do for non-translated mode, i.e., x86.

Are you happy with the new grant table operation or do you want to follow
a swap mode as above? The swapping code would look like:

           a) new_addr = alloc_page
           b) memcpy(new_addr, addr, len)
           c) pte_swap(new_addr, addr)
           d) grant_unmap(new_addr)
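
Either way, a very rough C sketch of that guest-side sequence might look like
the following (a sketch only: p2m_swap()/pte_swap() and grant_unmap() are the
operations proposed in this thread, not existing interfaces, and the page
allocation is just a stand-in; kernel context, includes and error paths
omitted):

    /* Sketch: lazy-copy a granted page, then swap it into place and unmap
     * the grant.  p2m_swap() and grant_unmap() are hypothetical here. */
    static int lazy_copy_granted_page(void *addr, size_t len)
    {
            void *new_addr = (void *)__get_free_page(GFP_ATOMIC);  /* a) */

            if (new_addr == NULL)
                    return -ENOMEM;

            memcpy(new_addr, addr, len);             /* b) copy the payload      */
            p2m_swap(__pa(new_addr), __pa(addr));    /* c) swap the two entries  */
            grant_unmap(__pa(new_addr));             /* d) now drop the grant    */
            return 0;
    }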

Cheers,
--
Visit Openswan at http://www.openswan.org/
Email: Herbert Xu ~{PmV>HI~} <herbert@xxxxxxxxxxxxxxxxxxx>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



------------------------------

Message: 4
Date: Mon, 26 Mar 2007 11:13:18 +0800
From: "Zhai, Edwin" <edwin.zhai@xxxxxxxxx>
Subject: [Xen-devel] Re: memsize for HVM save/restore
To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Ewan Mellor <ewan@xxxxxxxxxxxxx>,
           "Zhai,         Edwin" <edwin.zhai@xxxxxxxxx>
Message-ID: <20070326031318.GZ21485@xxxxxxxxxxxxxxxxxxxxxx>
Content-Type: text/plain; charset=us-ascii

On Sat, Mar 24, 2007 at 02:18:44PM +0000, Keir Fraser wrote:
> On 24/3/07 11:37, "Zhai, Edwin" <edwin.zhai@xxxxxxxxx> wrote:
>
> > But then qemu broke, because it also requires the memsize to locate the
> > shared page. We can't use the previous method, as it requires a lot of
> > changes in qemu.
>
> Doesn't your new 'general layout' patch put the PFNs of xenstore, ioreq,
> buffered_ioreq in the saved image, and restore in xc_hvm_restore? Qemu-dm

yes,

> should obtain the addresses via HVMOP_get_param.
>
> You do not need the memsize parameter.

I don't think so.
Besides locating PFNs, memsize is also used in QEMU for other purposes, such
as bitmap allocation, device init and map_foreign*. So memsize is a must for
qemu init.

See the following code in xc_hvm_build:

    if ( v_end > HVM_BELOW_4G_RAM_END )
        shared_page_nr = (HVM_BELOW_4G_RAM_END >> PAGE_SHIFT) - 1;
    else
        shared_page_nr = (v_end >> PAGE_SHIFT) - 1;

So it's impossible to recover memsize from the saved PFNs when restoring a
big-memory guest.
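
For example (figures purely illustrative, with HVM_BELOW_4G_RAM_END sitting
just below 4GB):

    memsize = 2GB -> v_end below the boundary -> shared_page_nr encodes v_end,
                     so memsize can be recovered
    memsize = 6GB -> v_end above the boundary -> shared_page_nr =
                     (HVM_BELOW_4G_RAM_END >> PAGE_SHIFT) - 1
    memsize = 8GB -> exactly the same shared_page_nr as the 6GB case

Every guest above the boundary saves the same PFN, so the saved PFN alone
cannot distinguish a 6GB guest from an 8GB one.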


>
>  -- Keir
>

--
best rgds,
edwin



------------------------------

Message: 5
Date: Mon, 26 Mar 2007 15:35:24 +1200
From: "Stephen Donnelly" <sfdonnelly@xxxxxxxxx>
Subject: [Xen-devel] PCI DMA Limitations
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Message-ID:
           <5f370d430703252035i62091c0drebc7e375703c5ca7@xxxxxxxxxxxxxx>
Content-Type: text/plain; charset="iso-8859-1"

I've been reading the XenLinux code from 3.0.4 and would appreciate
clarification of the limitations on PCI DMA under Xen. I'm considering how
to deal with a peripheral that requires large DMA buffers.

All 'normal Linux' PCI DMA from driver domains (e.g. dom0) goes through the
SWIOTLB code via a restricted window, e.g. when booting:

Software IO TLB enabled:
 Aperture:     64 megabytes
 Kernel range: 0xffff880006ea2000 - 0xffff88000aea2000
 Address size: 30 bits
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)

The size of the aperture is configurable when the XenLinux kernel boots. The
maximum streaming DMA allocation (via dma_map_single) is limited by
IO_TLB_SIZE to 128 slabs * 4k = 512kB. Synchronisation is explicit via
dma_sync_single and involves the CPU copying pages via these 'bounce
buffers'. Is this correct?
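
For context, the streaming pattern being described is the standard Linux DMA
API sequence below (a minimal sketch; the device, direction and length are
placeholders, and dma_sync_single_for_cpu is the modern spelling of the
dma_sync_single step mentioned above):

    #include <linux/dma-mapping.h>

    /* Streaming-DMA sketch: a mapping larger than the swiotlb limit described
     * above (128 slabs * 4k = 512kB) cannot be satisfied in one piece. */
    static int stream_from_device(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t bus = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

            /* ... hand 'bus' to the device and wait for the DMA to finish ... */

            /* The sync is where the CPU copies data out of the bounce buffer. */
            dma_sync_single_for_cpu(dev, bus, len, DMA_FROM_DEVICE);

            dma_unmap_single(dev, bus, len, DMA_FROM_DEVICE);
            return 0;
    }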

If the kernel is modified by increasing IO_TLB_SIZE, will this allow larger
mappings, or is there a matching limitation in the hypervisor?

Coherent mappings via dma_alloc_coherent exchange VM pages for contiguous
low hypervisor pages. The allocation size is limited by MAX_CONTIG_ORDER = 9
in xen_create_contiguous_region to 2^9 * 4k = 2MB?
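
And the coherent case, again only as a sketch (the 2MB request is chosen to
sit exactly at the MAX_CONTIG_ORDER limit just mentioned; anything larger
would be the failing case):

    #include <linux/dma-mapping.h>

    /* Coherent-DMA sketch: returns a CPU pointer and fills *bus with the
     * device-visible address of a physically contiguous 2MB buffer. */
    static void *alloc_big_coherent(struct device *dev, dma_addr_t *bus)
    {
            return dma_alloc_coherent(dev, 2 * 1024 * 1024, bus, GFP_KERNEL);
    }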

Is it possible to increase MAX_CONTIG_ORDER in a guest OS unilaterally, or
is there a matching limitation in the hypervisor? I didn't see any options
to Xen to configure the amount of memory reserved for coherent DMA
mappings.

Is there a simpler/more direct way to provide DMA access to large buffers in
guest VMs? I was curious about how RDMA cards (e.g. Infiniband) are
supported; are they required to use DAC and scatter-gather in some way?

Thanks,
Stephen.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
http://lists.xensource.com/archives/html/xen-devel/attachments/20070326/4ca54f50/attachment.html


------------------------------

Message: 6
Date: Mon, 26 Mar 2007 12:03:17 +0800
From: Hao Sun <sunhao@xxxxxxxxxx>
Subject: [Xen-devel] Xen error:no memory allocted to domU
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Message-ID:

<OFDC147108.E414FB0E-ON482572AA.00123D5A-482572AA.00163161@xxxxxxxxxx>
Content-Type: text/plain; charset="gb2312"

Hi,
    I installed xen-unstable.hg on SLES10. I created a domU config file
"vm1" as shown below:

disk = [ 'file:/etc/xen/images/vm1,hda,w' ]
memory = 128
vcpus = 2
builder = 'linux'
name = 'vm1'
vif = [ 'mac=00:19:3e:43:04:02' ]
localtime = 0
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
extra = ' TERM=xterm'
bootloader = '/usr/lib/xen/boot/domUloader.py'
bootentry = '/dev/hda2:/boot/vmlinuz-xen,/boot/initrd-xen'

An error occurred when I tried to create a domU using this config file.
After I entered "xm create -c vm1", the process paused.
I logged in on another console and found that domU vm1's state is "p" in
"xm list". The memory allocated to domU vm1 is zero, which I think is the
reason it is paused. My dom0 has 1.8G of memory. I don't know why no memory
has been allocated to the domU.

Has anyone run into this problem? Please give me some suggestions, thanks!


Best Regards, Sun Hao(孙皓)
E-mail: sunhao@xxxxxxxxxx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
http://lists.xensource.com/archives/html/xen-devel/attachments/20070326/68b843b1/attachment.htm


------------------------------

Message: 7
Date: Mon, 26 Mar 2007 14:12:33 +0800
From: tgh <tianguanhua@xxxxxxxxxx>
Subject: [Xen-devel] question about reboot VM
To: xen-devel@xxxxxxxxxxxxxxxxxxx
Message-ID: <46076451.8000002@xxxxxxxxxx>
Content-Type: text/plain; charset=GB2312

hi
I am trying to understand "xm reboot" for a VM, but I am confused by the
Python code. I could not find which C function or code is called by the
Python code when rebooting.

could you help me
Thanks in advance




------------------------------

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


End of Xen-devel Digest, Vol 25, Issue 215
******************************************


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel

 

