Re: [Xen-devel] [PATCH] x86emul: support clzero
>>> On 24.09.15 at 13:59, <andrew.cooper3@xxxxxxxxxx> wrote:
> On 24/09/15 09:02, Jan Beulich wrote:
>>>>> On 23.09.15 at 19:37, <andrew.cooper3@xxxxxxxxxx> wrote:
>>> On 22/09/15 14:06, Jan Beulich wrote:
>>>> ... in anticipation of this possibly going to get used by guests for
>>>> basic things like memset() or clearing of pages.
>>>>
>>>> Since the emulation doesn't use clzero itself, checking the guest's
>>>> CPUID for the feature to be exposed is (intentionally) being avoided
>>>> here. All that's required is sensible guest side data for the clflush
>>>> line size.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
>>> Where have you found this instruction? Googling, I have found a
>>> presentation talking about it being new in the new AMD Zen cores, but I
>>> still can't locate any technical documentation on the matter.
>> Sadly no technical documentation so far, despite me pinging for it
>> after the respective binutils patch
>> (https://sourceware.org/git/gitweb.cgi?p=binutils-gdb.git;a=commitdiff;h=029f3522619e8b77a7b848be23f4c13e50087d8b)
>> got posted and went in.
>
> While I don't see an obvious issue with your patch, I can't claim to
> have reviewed it without some documentation to refer to.

Understood. Depending on the actual semantics, the patch may allow the
instruction to be used (emulated) in more cases than on actual hardware,
which I don't see as an issue. That's mainly because "cache line" is
undefined for memory types not using the cache: the instruction may not
do what one might expect on WC or UC memory (which is what I've been
trying to find out since said binutils posting), but I'm pretty certain
its behavior there would at best be undefined, and us giving it defined
behavior would not violate that.

Jan
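For illustration, a minimal model of the semantics being assumed here -
clzero zeroing the whole cache line containing the address in rAX, with
the line size taken from the guest's CLFLUSH line size
(CPUID.1:EBX[15:8] * 8 bytes) - could look as follows. This is a sketch
only, not the actual patch; the function name and the direct use of
memset() are illustrative.

#include <stdint.h>
#include <string.h>

/*
 * Illustrative model of the assumed clzero semantics (no public
 * documentation is cited in the thread): zero the full cache line
 * containing 'addr'.  'line_size' stands in for the guest's CLFLUSH
 * line size in bytes and must be a power of two.
 */
void clzero_model(void *addr, unsigned int line_size)
{
    uintptr_t base = (uintptr_t)addr & ~((uintptr_t)line_size - 1);

    /*
     * On WB memory this matches the expected effect; on WC/UC memory
     * the real instruction's behavior is, per the discussion above,
     * likely undefined, so a defined zeroing here would not conflict.
     */
    memset((void *)base, 0, line_size);
}

In the emulator itself the zeroing would presumably go through the guest
memory write hooks rather than a direct memset(), with the line size read
from the guest's CPUID data - hence the remark that only "sensible guest
side data for the clflush line size" is required.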