
Re: [Xen-devel] [PATCH 1/3] [not-for-unstable] xen/arm: vgic-v3: Delay the initialization of the domain information


  • To: Julien Grall <julien.grall@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>
  • From: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
  • Date: Sat, 29 Sep 2018 00:38:49 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx, shameerali.kolothum.thodi@xxxxxxxxxx, andre.przywara@xxxxxxx
  • Delivery-date: Fri, 28 Sep 2018 23:39:15 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 28/09/18 21:35, Julien Grall wrote:
>
>
> On 09/28/2018 12:11 AM, Stefano Stabellini wrote:
>> On Wed, 26 Sep 2018, Julien Grall wrote:
>>> Hi Stefano,
>>>
>>> On 09/25/2018 09:45 PM, Stefano Stabellini wrote:
>>>> On Tue, 4 Sep 2018, Andrew Cooper wrote:
>>>>> On 04/09/18 20:35, Julien Grall wrote:
>>>>>> Hi,
>>>>>>
>>>>>> On 09/04/2018 08:21 PM, Julien Grall wrote:
>>>>>>> A follow-up patch will require knowing the number of vCPUs when
>>>>>>> initializing the vGICv3 domain structure. However, this information
>>>>>>> is not available at domain creation. It is only known once
>>>>>>> XEN_DOMCTL_max_vcpus is called for that domain.
>>>>>>>
>>>>>>> In order to have the maximum number of vCPUs available, delay the
>>>>>>> domain part of the vGICv3 initialization until the first vCPU of
>>>>>>> the domain is initialized.
>>>>>>>
>>>>>>> Signed-off-by: Julien Grall <julien.grall@xxxxxxx>
>>>>>>>
>>>>>>> ---
>>>>>>>
>>>>>>> Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>>>>>>
>>>>>>> This is nasty, but I can't find a better way for Xen 4.11 and
>>>>>>> older. This is not necessary for unstable, as the number of vCPUs
>>>>>>> is known at domain creation.
>>>>>>>
>>>>>>> Andrew, I have CCed you to ask whether you have a better idea of
>>>>>>> where to place this call on Xen 4.11 and older.
>>>>>>
>>>>>> I just noticed that d->max_vcpus is initialized after
>>>>>> arch_domain_create. So without this patch on Xen 4.12, it will
>>>>>> not work.
>>>>>>
>>>>>> This is getting nastier because arch_domain_init is the one that
>>>>>> initializes the value returned by dom0_max_vcpus. So I am not
>>>>>> entirely sure what to do here.
>>>>>
>>>>> The positioning after arch_domain_create() is unfortunate, but I
>>>>> couldn't manage better with ARM's current behaviour and Jan's
>>>>> insistence that the allocation of d->vcpu was common.  I'd prefer if
>>>>> the dependency could be broken and the allocation moved earlier.
>>>>>
>>>>> One option might be to have an arch_check_domainconfig() (or
>>>>> similar?) which is called very early on and can sanity check the
>>>>> values, including cross-checking the vgic and max_vcpus settings?
>>>>> It could even be responsible for mutating
>>>>> XEN_DOMCTL_CONFIG_GIC_NATIVE into the correct real value.
>>>>>
>>>>> As for your patch here, it's a gross hack, but it's probably the
>>>>> best that can be done.
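
For illustration, a minimal sketch of what such an early hook could look
like.  Everything here is made up for the example (the hook does not exist
in the tree, and its signature, the resolve_host_gic_version() helper and
the GICv2 8-vCPU check are assumptions); only XEN_DOMCTL_CONFIG_GIC_NATIVE
comes from the suggestion above:

/* Illustrative sketch only -- not existing Xen code. */
static int arch_check_domainconfig(struct xen_arch_domainconfig *cfg,
                                   unsigned int max_vcpus)
{
    /* Resolve the "native" placeholder into a concrete GIC version. */
    if ( cfg->gic_version == XEN_DOMCTL_CONFIG_GIC_NATIVE )
        cfg->gic_version = resolve_host_gic_version(); /* assumed helper */

    /* Cross-check the vGIC model against the requested vCPU count. */
    if ( cfg->gic_version == XEN_DOMCTL_CONFIG_GIC_V2 && max_vcpus > 8 )
        return -EINVAL;

    return 0;
}
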
>>>>
>>>> *Sighs*
>>>> If that is what we have to do, it is as ugly as hell, but that is what
>>>> we'll do.
>>>
>>> This is the best we can do with the current code base. I think it
>>> would be worth reworking the code to make it nicer. I will add it to
>>> my TODO list.
>>>
>>>>
>>>> My only suggestion to marginally improve it would be instead of:
>>>>
>>>>> +    if ( v->vcpu_id == 0 )
>>>>> +    {
>>>>> +        rc = vgic_v3_real_domain_init(d);
>>>>> +        if ( rc )
>>>>> +            return rc;
>>>>> +    }
>>>>
>>>> to check on d->arch.vgic.rdist_regions instead:
>>>>
>>>>         if ( d->arch.vgic.rdist_regions == NULL )
>>>>         {
>>>>             rc = vgic_v3_real_domain_init(d);
>>>>             if ( rc )
>>>>                 return rc;
>>>>         }
>>>
>>> I would prefer to keep v->vcpu_id == 0 just in case we end up
>>> re-ordering the allocation in the future.
>>
>> I was suggesting checking on (rdist_regions == NULL) exactly because of
>> potential re-ordering, in case in the future we end up calling
>> vcpu_vgic_init differently and somehow vcpu_init(vcpu1) is done before
>> vcpu_init(vcpu0). Ideally we would like a way to check that
>> vgic_v3_real_domain_init has already been called, and I thought
>> rdist_regions == NULL could do just that...
>
> What I meant by re-ordering is that we could manage to allocate the
> re-distributors before the vCPUs are created but still need
> vgic_v3_real_domain_init for other purposes.
>
> But vCPU initialization is potentially another issue.
>
> Anyway, both ways have drawbacks. Yet I still prefer checking on the
> vCPU. It is less likely that vCPU0 will not be the first one initialized.

With the exception of the idle domain, all vcpus are strictly allocated
in packed ascending order.  Loads of other stuff will break if that
changed, so I wouldn't worry about it.

Furthermore, there is no obvious reason for this behaviour to ever change.
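
To make the option under discussion concrete, here is a minimal sketch of
the guard as it could sit in the per-vCPU vGIC init path.  The wrapper
name is made up for the example; only vgic_v3_real_domain_init() and the
d->arch.vgic.rdist_regions field come from the patch and the thread above.

/* Illustrative sketch only -- not the actual patch. */
static int vgic_v3_vcpu_init_sketch(struct vcpu *v)
{
    struct domain *d = v->domain;
    int rc;

    /*
     * Perform the domain-wide initialization lazily, once the number of
     * vCPUs is known.  Keyed off the first vCPU here; the alternative is
     * to test whether the re-distributor regions were already allocated:
     *     if ( d->arch.vgic.rdist_regions == NULL )
     */
    if ( v->vcpu_id == 0 )
    {
        rc = vgic_v3_real_domain_init(d);
        if ( rc )
            return rc;
    }

    /* Per-vCPU initialization would continue here. */
    return 0;
}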

~Andrew

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel