[Xen-devel] RE: Xen/ia64
There is an implementation of Xen/ia64 hypercalls already. See:

http://article.gmane.org/gmane.comp.emulators.xen.devel/4734

This isn't cast in concrete... if there is a better way, we can change it. But let me clarify what is there:

- There is a default break value that Xen/ia64 interprets as a hypercall. This is a per-domain variable, so if a future version of Linux uses this break value to do something else, it can be changed (e.g. as a domain launch parameter).
- If the break value does not match, the break is reflected to the executing domain. If it does match, r2 contains the hypercall number.
- The only hypercalls implemented so far are for firmware (EFI/SAL/PAL) emulation. These are really "hyperthunks", since domain loading generates stubs that contain the break instructions.
- The parameter interface depends on the hypercall number; e.g. PAL hypercalls pass parameters in different registers than SAL hypercalls. Most Xen/x86 calls pass parameters in memory. It would be nice if Xen/ia64 could take advantage of the register stack to pass all parameters in registers, but if this hurts portability (e.g. for frontend/backend driver code) between Xen/x86 and Xen/ia64, it's probably not worth it.
- Right now there is very limited support for Xen/ia64 to access domain memory (it's only used for fetching opcodes for privop emulation). I have a patch that fixes this, but there's a bug that I haven't tracked down yet, so I haven't promoted it to -unstable. Once that code is in, we can try out hypercalls that pass parameters via domain memory. (The patch also implements two test hypercalls that fetch and zero out privop counters and pass a long text string back to domain0.)

Linux/ia64 implements something called "fsyscall" (fast system call) for certain system calls, which uses the "epc" instruction instead of break. There are many restrictions on fsyscalls, but they may work for Xen/ia64 for certain simple hypercalls.
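To make the break-based scheme above concrete, a guest-side hypercall stub might look roughly like this. This is a sketch only: the break immediate 0x1000, the HYPERCALL_NR name, and the r8 return convention are illustrative assumptions, not the actual Xen/ia64 ABI (the real default break value and return convention are whatever the code at the gmane link above defines):

    // Hypothetical break-based hypercall stub (sketch, assumed conventions).
    // Per the description above, r2 carries the hypercall number and Xen
    // compares the break immediate against the domain's break value.
    mov r2 = HYPERCALL_NR   // hypercall number in r2 (HYPERCALL_NR is made up)
    ;;
    break 0x1000            // 0x1000 is an assumed per-domain break value;
                            // a non-matching value is reflected to the domain
    ;;
    // return value assumed in r8, mirroring the Linux/ia64 syscall convention

Since the break value is per-domain, a domain builder could in principle pick a different immediate at launch time and the stub would be generated to match.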
But since that is strictly a performance improvement, let's try to stick to break instructions for hypercalls at first.

Dan

P.S. I will be unable to access email until March 1.

> -----Original Message-----
> From: Håvard Bjerke [mailto:Havard.Bjerke@xxxxxxxxxxx]
> Sent: Friday, February 18, 2005 3:15 AM
> To: Magenheimer, Dan (HP Labs Fort Collins)
> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxxx
> Subject: Xen/ia64
>
> Has anyone given any thought to how hypercalls should be
> implemented in Xen/ia64?
>
> In Xen/x86 they are basically syscalls, only with interrupt
> vector 0x82 instead of 0x80. So it's a matter of pushing the
> registers onto the stack, loading the hypercall number (long)
> and arguments (5x long) into the registers, and interrupting
> with 'int 0x82':
>
>     __asm__ __volatile__ (
>         "pushl %%ebx; pushl %%ecx; pushl %%edx; pushl %%esi; pushl %%edi; "
>         "movl 4(%%eax),%%ebx ;"
>         "movl 8(%%eax),%%ecx ;"
>         "movl 12(%%eax),%%edx ;"
>         "movl 16(%%eax),%%esi ;"
>         "movl 20(%%eax),%%edi ;"
>         "movl (%%eax),%%eax ;"
>         TRAP_INSTR "; "    // = int 0x82
>         "popl %%edi; popl %%esi; popl %%edx; popl %%ecx; popl %%ebx"
>         : "=a" (ret) : "0" (&hypercall) : "memory" );
>
> However, in Linux/ia64 a syscall is made with a break instruction:
>
>     mov r15 = NR    // the syscall number; r15 is a scratch register
>     break 0x100000
>     [...]
>
> What's the ideal way to do a hypercall in Xen/ia64? Simply
> use 'break 0x100001'? Or is 0x100001 reserved for something
> else in Linux/ia64?
>
> Håvard
>
> --
> Håvard K. F. Bjerke
> http://www.idi.ntnu.no/~havarbj/