
Re: [Xen-devel] [PATCH v4 01/12] x86: infrastructure to allow converting certain indirect calls to direct ones





On 02/10/2018 14:35, Andrew Cooper wrote:
On 02/10/18 14:28, Julien Grall wrote:
Hi Andrew,

On 02/10/2018 14:21, Andrew Cooper wrote:
On 02/10/18 11:12, Jan Beulich wrote:
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -66,6 +66,10 @@
     #define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))
   +#define count_va_arg_(dot, a1, a2, a3, a4, a5, a6, a7, a8, x, ...) x
+#define count_va_arg(args...) \
+    count_va_arg_(., ## args, 8, 7, 6, 5, 4, 3, 2, 1, 0)

This particular bit of review split out for obvious reasons.

We already have __count_args() in the ARM SMCCC infrastructure.  Please
can we dedup that (broken out into a separate patch) rather than
introducing a competing version.

The ARM version is buggy.  It is off-by-two in the base case, and
doesn't compile if fewer than two parameters are passed.  This version
functions correctly, but should be named with a plural.

The ARM version is *not* buggy. What the ARM version counts is the
number of parameters for the SMCCC function, *not* the number of
parameters for the call.

This matches the SMCCC where the first parameter is always the
function ID, the last parameter is always the result structure.

The code will end up being less readable with a generic count. I am
planning to Nack anything trying to use a generic count in the SMCCC
code.

Yes it is buggy, because it doesn't behave as the name suggests.

If you mean for the infrastructure to have SMCCC-specific behaviour, the
macros should be named suitably, to avoid interfering with common code.

Well, the code comes from Linux. Feel free to send a patch to fix the name in Linux and Xen.
Cheers,

--
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
