
Re: [Xen-devel] [PATCH 4/8] x86/SVM: Add vcpu scheduling support for AVIC


  • To: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, "Natarajan, Janakarajan" <jnataraj@xxxxxxx>, Janakarajan Natarajan <Janakarajan.Natarajan@xxxxxxx>, xen-devel@xxxxxxxxxxxxx
  • From: Boris Ostrovsky <boris.ostrovsky@xxxxxxxxxx>
  • Date: Thu, 19 Apr 2018 19:04:19 -0400
  • Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Wei Liu <wei.liu2@xxxxxxxxxx>, Suravee Suthikulpanit <suravee.suthikulpanit@xxxxxxx>, George Dunlap <George.Dunlap@xxxxxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Ian Jackson <ian.jackson@xxxxxxxxxxxxx>, Julien Grall <julien.grall@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>
  • Delivery-date: Thu, 19 Apr 2018 23:03:38 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Openpgp: preference=signencrypt

On 04/19/2018 02:18 PM, Andrew Cooper wrote:
> On 19/04/18 16:54, Natarajan, Janakarajan wrote:
>> On 4/13/2018 12:57 PM, Andrew Cooper wrote:
>>> On 04/04/18 00:01, Janakarajan Natarajan wrote:
>>>> @@ -63,6 +64,54 @@ avic_get_physical_id_entry(struct svm_domain *d, unsigned int index)
>>>>       return &d->avic_physical_id_table[index];
>>>>   }
>>>>
>>>> +static void avic_vcpu_load(struct vcpu *v)
>>>> +{
>>>> +    unsigned long tmp;
>>>> +    struct arch_svm_struct *s = &v->arch.hvm_svm;
>>>> +    int h_phy_apic_id;
>>>> +    struct avic_physical_id_entry *entry = (struct avic_physical_id_entry *)&tmp;
>>>> +
>>>> +    ASSERT(!test_bit(_VPF_blocked, &v->pause_flags));
>>>> +
>>>> +    /*
>>>> +     * Note: APIC ID = 0xff is used for broadcast.
>>>> +     *       APIC ID > 0xff is reserved.
>>>> +     */
>>>> +    h_phy_apic_id = cpu_data[v->processor].apicid;
>>>> +    ASSERT(h_phy_apic_id < AVIC_PHY_APIC_ID_MAX);
>>>> +
>>>> +    tmp = read_atomic((u64*)(s->avic_last_phy_id));
>>>> +    entry->host_phy_apic_id = h_phy_apic_id;
>>>> +    entry->is_running = 1;
>>>> +    write_atomic((u64*)(s->avic_last_phy_id), tmp);
>>> What is the purpose of s->avic_last_phy_id ?
>>>
>>> As far as I can tell, it is always an unchanging pointer into the
>>> physical ID table, which is only ever updated synchronously in current
>>> context.
>>>
>>> If so, I don't see why it needs any of these hoops to be jumped though.
>> s->avic_last_phy_id is used to quickly access the entry in the table.
>>
>> When the code was pushed for Linux, memory barriers were used and it
>> was suggested that atomic operations be used instead to ensure compiler
>> ordering. The same is done here.
> Ok - summing up a conversation on IRC, and some digging around the manual.
>
> Per VM, there is a single Physical APIC Table, which lives in a 4k
> page.  This table is referenced by the VMCB, and read by hardware when
> processing guest actions.
>
> The contents of this table are a list of 64-bit entries:
>
> struct __packed avic_physical_id_entry {
>     u64 host_phy_apic_id  : 8;
>     u64 res1              : 4;
>     u64 bk_pg_ptr_mfn     : 40;
>     u64 res2              : 10;
>     u64 is_running        : 1;
>     u64 valid             : 1;
> };
>
> which are indexed by guest APIC_ID.
>
> AMD hardware allows writes to the APIC_ID register, but OSes don't do
> this in practice (writes are discarded on some hardware, and the register
> is strictly read-only in x2apic).  The implementation in Xen is to crash
> the domain if we see a write here, and that is reasonable behaviour
> which I don't expect to change going forwards.
>
> As a result, the layout of the Physical APIC Table is fixed based on the
> APIC assignment during domain creation.  Also, the bk_pg_ptr_mfn and its
> valid bit (valid) are set up during construction, and remain unchanged
> for the lifetime of the domain.
>
> The only fields which change at runtime are host_phy_apic_id and its
> valid bit (is_running), and these change on vcpu context switch.
>
> Therefore, on ctxt_switch_from(), we want a straight __clear_bit() on
> e->is_running to signify that the vcpu isn't allocated to a pcpu.
>
> On ctxt_switch_to(), we want a simple
>
> e->host_phy_apic_id = this_pcpu_apic_id;
> smp_wmb();
> __set_bit(e->is_running);
>
> which guarantees that the host physical apic id field is valid and up to
> date, before hardware sees it being reported as valid.  As these changes
> are only made in current context, there are no other ordering or
> atomicity concerns.
>
> This table is expected to live in regular WB RAM, and the manual has no
> comment/reference to requiring special accesses.  Therefore, I'm
> moderately confident that the above ordering is sufficient for correct
> behaviour, and no explicitly atomic actions are required.
>
> Thoughts/comments/suggestions?


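To make sure I'm reading the load/unload sequence the same way, here is a
rough (untested) sketch; avic_vcpu_entry() is just a stand-in for however
the per-vcpu entry ends up being looked up:

/* ctxt_switch_to() path: publish the pcpu apic id, then mark running. */
static void avic_vcpu_load(struct vcpu *v)
{
    struct avic_physical_id_entry *e = avic_vcpu_entry(v);

    e->host_phy_apic_id = cpu_data[v->processor].apicid;

    /* The apic id must be visible before hardware can see is_running. */
    smp_wmb();
    e->is_running = 1;
}

/* ctxt_switch_from() path: the vcpu is no longer allocated to a pcpu. */
static void avic_vcpu_unload(struct vcpu *v)
{
    struct avic_physical_id_entry *e = avic_vcpu_entry(v);

    e->is_running = 0;
}

Both are only ever called in current context, so plain stores plus the one
barrier should be all that's needed.
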
The entry can also be written as a single raw 64-bit value (I think you
suggested in one of the reviews to make it a union with a uint64_t).
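
Something along these lines (the typedef name is only for illustration;
the field layout is taken from your summary above):

typedef union avic_physical_id_entry {
    u64 raw;
    struct __packed {
        u64 host_phy_apic_id  : 8;
        u64 res1              : 4;
        u64 bk_pg_ptr_mfn     : 40;
        u64 res2              : 10;
        u64 is_running        : 1;
        u64 valid             : 1;
    };
} avic_physical_id_entry_t;

Then anything which really does want a whole-entry access can go through
->raw, without the u64 casts in the patch above.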

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

