
Re: [PATCH] build/xen: fix symbol generation with LLVM LD


  • To: Jan Beulich <jbeulich@xxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Fri, 6 May 2022 17:35:24 +0200
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Fri, 06 May 2022 15:35:41 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Fri, May 06, 2022 at 03:31:12PM +0200, Roger Pau Monné wrote:
> On Fri, May 06, 2022 at 02:56:56PM +0200, Jan Beulich wrote:
> > On 05.05.2022 16:21, Roger Pau Monne wrote:
> > > --- a/xen/include/xen/compiler.h
> > > +++ b/xen/include/xen/compiler.h
> > > @@ -125,10 +125,11 @@
> > >  #define __must_be_array(a) \
> > >  #define __must_be_array(a) \
> > >    BUILD_BUG_ON_ZERO(__builtin_types_compatible_p(typeof(a), typeof(&a[0])))
> > >  
> > > -#ifdef CONFIG_CC_HAS_VISIBILITY_ATTRIBUTE
> > > -/* Results in more efficient PIC code (no indirections through GOT or PLT). */
> > > -#pragma GCC visibility push(hidden)
> > > -#endif
> > > +/*
> > > + * Results in more efficient PIC code (no indirections through GOT or PLT)
> > > + * and is also required by some of the assembly constructs.
> > > + */
> > > +#pragma GCC visibility push(protected)
> > >  
> > >  /* Make the optimizer believe the variable can be manipulated arbitrarily. */
> > >  #define OPTIMIZER_HIDE_VAR(var) __asm__ ( "" : "+g" (var) )
> > 
> > This has failed my pre-push build test, with massive amounts of errors
> > about asm() constraints in the alternative call infrastructure. This
> > was with gcc 11.3.0.
> 
> Hm, great. I guess I will have to use protected with clang and hidden
> with gcc then, for lack of a better solution.
> 
> I'm slightly confused as to why my godbolt example:
> 
> https://godbolt.org/z/chTnMWxeP
> 
> Seems to work with gcc 11 then.  I will have to investigate a bit I
> think.

So it seems the problem is specifically with constructs like:

void (*foo)(void);

void test(void)
{
    asm volatile (".long %[addr]" :: [addr] "i" (&(foo)));
}

See:

https://godbolt.org/z/TYqeGdWsn

AFAICT gcc will consider the function pointer foo to go through the
GOT/PLT redirection table, while clang will not.  I think gcc's
behavior is correct, because in theory foo could be set from a
different module: protected visibility only guarantees that references
to locally defined symbols cannot be overridden, but it makes no such
guarantee for external ones.

I don't really see a good way to fix this, other than setting
different visibilities based on the compiler: clang would use
protected and gcc would use hidden.  I think it's unlikely to have a
toolchain setup that uses gcc as the compiler and LLVM LD as the
linker, which would be the problematic configuration, and even in that
case it's kind of a cosmetic issue with symbol resolution; the binary
output from the linker would still be correct.

Let me know if that seems acceptable.

Thanks, Roger.



 

