
Re: [Xen-devel] [PATCH 3/5] x86/AMD: make C-state handling independent of Dom0



On Fri, Jun 21, 2019 at 12:37:47AM -0600, Jan Beulich wrote:
> >>> On 19.06.19 at 17:54, <Brian.Woods@xxxxxxx> wrote:
> > On Wed, Jun 19, 2019 at 12:20:40AM -0600, Jan Beulich wrote:
> >> >>> On 18.06.19 at 19:22, <Brian.Woods@xxxxxxx> wrote:
> >> > On Tue, Jun 11, 2019 at 06:42:33AM -0600, Jan Beulich wrote:
> >> >> >>> On 10.06.19 at 18:28, <andrew.cooper3@xxxxxxxxxx> wrote:
> >> >> > On 23/05/2019 13:18, Jan Beulich wrote:
> >> >> >> TBD: Can we set local_apic_timer_c2_ok to true? I can't seem to find 
> >> >> >> any
> >> >> >>      statement in the BKDG / PPR as to whether the LAPIC timer 
> >> >> >> continues
> >> >> >>      running in CC6.
> >> >> > 
> >> >> > This ought to be easy to determine.  Given the description of CC6
> >> >> > flushing the cache and power gating the core, I'd say there is a
> >> >> > reasonable chance that the LAPIC timer stops in CC6.
> >> >> 
> >> >> But "reasonable chance" isn't enough for my taste here. And from
> >> >> what you deduce, the answer to the question would be "no", and
> >> >> hence simply no change to be made anywhere. (I do think though
> >> >> that it's more complicated than this, because iirc much also depends
> >> >> on what the firmware actually does.)
> >> > 
> >> > The LAPIC timer never stops on the current platforms (Naples and
> >> > Rome).  This comes from a knowledgeable HW engineer, so it can be
> >> > relied on.
> >> 
> >> Thanks - I've taken note to set the variable accordingly then.
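
(In case it helps, I'd expect the change to look something like the
below -- the exact condition and placement are of course up to you, and
I'm only guessing at how the vendor/family check would be spelled in
Xen:)

    /*
     * Sketch only: the LAPIC timer keeps ticking in CC6 on Naples/Rome
     * (Fam17h), so the C2 restriction can be relaxed there.
     */
    if ( boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
         boot_cpu_data.x86 == 0x17 )
        local_apic_timer_c2_ok = true;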
> >> 
> >> >> >> TBD: We may want to verify that HLT indeed is configured to enter 
> >> >> >> CC6.
> >> >> > 
> >> >> > I can't actually spot anything which talks about HLT directly.  The
> >> >> > closest I can spot is CFOH (cache flush on halt) which is an
> >> >> > auto-transition from CC1 to CC6 after a specific timeout, but the
> >> >> > wording suggests that mwait would also take this path.
> >> >> 
> >> >> Well, I had come across a section describing how HLT can be
> >> >> configured to be the same action as the I/O port read from one
> >> >> of the three ports involved in C-state management
> >> >> (CStateBaseAddr+0...2). But I can't seem to find this again.
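
(For reference, the I/O-port based entry being talked about here is
just a byte read from one of those ports; roughly the below, though the
MSR index is from my reading of the Fam17h PPR rather than anything in
this thread, so double check it:)

    /*
     * Rough sketch of SystemIO style C-state entry: a read from
     * CStateBaseAddr+N requests the corresponding C-state.
     */
    static void io_cstate_enter(unsigned int offset /* 0 ... 2 */)
    {
        uint64_t base;

        rdmsrl(0xC0010073, base);        /* MSRC001_0073, CStateBaseAddr */
        inb((uint16_t)base + offset);    /* the read itself enters the C-state */
    }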
> >> >> 
> >> >> As to MWAIT behaving the same, I don't think I can spot proof
> >> >> of your interpretation or proof of Brian's.
> >> > 
> >> > It's not really documented clearly.  I got my information from the HW
> >> > engineers.  I've already posted what information I know so I won't
> >> > repeat it.
> >> 
> >> At least a pointer to where you had stated this would have been
> >> nice. Iirc there's no promotion into CC6 in that case, in contrast
> >> to Andrew's reading of the doc.
> > 
> > &mwait_v1_patchset
> 
> Hmm, I've looked through the patch descriptions there again, but I
> can't find any explicit statement to the effect of there being no
> promotion into deeper states when using MWAIT.
> 
> Jan

https://lists.xenproject.org/archives/html/xen-devel/2019-02/msg02007.html

Since you're under NDA, I can send you the email I received from the HW
engineers, but as a basic recap:

If the HW is configured to use CC6 for HLT (CC6 is enabled, plus some
other NDA bits which get OR'd with firmware settings, so you can only
functionally turn CC6-on-HLT off but can't make sure it's on), then the
flow is:
1) HLT
2) CFOH timer expires
3) caches are flushed, etc.
4) CC6

This can be interrupted, though.  The HW engineer said that while they
aren't the same (as IO-based C-states), they end up at the same place.
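
From the software side none of that is visible, by the way: Xen just
executes HLT and the promotion to CC6 (steps 2-4 above) happens in
hardware whenever the firmware has CC6/CFOH enabled.  Roughly (a sketch
only, using the helper names I'd expect from the current tree):

    /*
     * Illustrative idle handler: the HLT below may be promoted all the
     * way to CC6 by the cache-flush-on-halt mechanism described above;
     * software can't tell the difference.
     */
    static void hlt_idle(void)
    {
        local_irq_disable();
        if ( cpu_is_haltable(smp_processor_id()) )
            safe_halt();                 /* sti; hlt */
        else
            local_irq_enable();
    }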

The whole reason HLT was selected for use in my patches is that we
can't look at the _CST table from Xen, and HLT is always safe to use
even if CC6 is disabled in the BIOS (which we can't tell).  At this
point I'm repeating the conversation we had on my v1 patch set.  If you
need any further info, let me know.
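
To put the HLT choice in code terms, what I mean is roughly the below --
a hypothetical sketch, not what your series actually does, with pm_idle
as the hook name I'd expect:

    /*
     * Hypothetical fallback: with no Dom0-provided _CST data, HLT (the
     * hlt_idle() sketch above) is the only entry method known to be
     * safe, whether or not the BIOS left CC6 enabled.
     */
    static void __init amd_default_idle_setup(void)
    {
        if ( pm_idle )          /* C-state data already set up */
            return;

        pm_idle = hlt_idle;     /* always safe, even with CC6 off */
    }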

-- 
Brian Woods


 

