Re: [PATCH] xen/arm: acpi: Support memory reserve configuration table



On Tue, 6 Sept 2022 at 09:17, Leo Yan <leo.yan@xxxxxxxxxx> wrote:
>
> Hi Marc,
>
> On Tue, Sep 06, 2022 at 07:27:17AM +0100, Marc Zyngier wrote:
> > On Tue, 06 Sep 2022 03:52:37 +0100,
> > Leo Yan <leo.yan@xxxxxxxxxx> wrote:
> > >
> > > On Thu, Aug 25, 2022 at 10:40:41PM +0800, Leo Yan wrote:
> > >
> > > [...]
> > >
> > > > > > But I still cannot form a clear picture of how the GIC RD
> > > > > > tables play a role in supporting para-virtualization or
> > > > > > passthrough mode.
> > > > >
> > > > > I am not sure what you are actually asking. The pending tables
> > > > > are just memory you give to the GICv3 to record the state of
> > > > > the interrupts.
> > > >
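
(Side note to make the above concrete for other readers: the pending
table is plain RAM that the OS hands to the redistributor. A minimal,
hypothetical per-CPU sketch -- the mmio_* and alloc_zeroed_pages
helpers are made up, the register offsets are from the GICv3 spec, and
the cacheability/shareability fields are omitted -- would look like:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical MMIO and allocator helpers; not from any real driver. */
extern void mmio_write64(uint64_t addr, uint64_t val);
extern uint32_t mmio_read32(uint64_t addr);
extern void mmio_write32(uint64_t addr, uint32_t val);
extern uint64_t alloc_zeroed_pages(size_t size, size_t align); /* PA */

#define GICR_CTLR              0x0000
#define GICR_PROPBASER         0x0070
#define GICR_PENDBASER         0x0078
#define GICR_CTLR_ENABLE_LPIS  (1U << 0)

void rdist_enable_lpis(uint64_t rd_base, uint64_t prop_pa,
                       unsigned int idbits)
{
        /* One pending bit per interrupt ID, 64KiB-aligned. */
        uint64_t pend_pa = alloc_zeroed_pages((1ULL << idbits) / 8,
                                              0x10000);

        /*
         * Hand both tables to the redistributor.  From this point on
         * the GIC reads and writes this memory behind the CPU's back,
         * so it must stay valid for as long as LPIs are enabled --
         * and the architecture allows EnableLPIs to be one-way, which
         * is why the memory has to survive a kexec.
         */
        mmio_write64(rd_base + GICR_PROPBASER, prop_pa | (idbits - 1));
        mmio_write64(rd_base + GICR_PENDBASER, pend_pa);
        mmio_write32(rd_base + GICR_CTLR,
                     mmio_read32(rd_base + GICR_CTLR) |
                     GICR_CTLR_ENABLE_LPIS);
}

Whether that OS is Xen, a bare-metal Linux, or a guest on a virtual
GICv3 makes no difference to this contract.)
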
> > > > To be more specific: Xen has its own RD pending table, and we
> > > > can use this pending table to set the state of SGIs/PPIs/LPIs
> > > > for a specific CPU interface. Xen works as the hypervisor, so it
> > > > saves and restores the pending table according to the VM context
> > > > being switched in, right?
> > > >
> > > > On the other hand, what is the purpose of the Linux kernel's
> > > > GIC RD pending table? Is it only used for nested virtualisation?
> > > > I mean, if the Linux kernel's GIC RD pending table is not used
> > > > by the drivers in Dom0 or DomU, then it is pointless to pass it
> > > > from the primary kernel to the secondary kernel; as a result, we
> > > > would not need to reserve persistent memory for the pending
> > > > table in this case.
> > >
> > > I haven't received further confirmation from Marc; anyway, I
> > > tried to cook up a kernel patch to silence the kernel oops [1].
> >
> > What sort of confirmation do you expect from me? None of what you
> > write above makes much sense in the face of the architecture.
>
> Okay, I think I have two questions for you:
>
> - The first question is whether we really need to reserve persistent
>   memory for the RD pending table and configuration table when the
>   Linux kernel runs in a Xen domain.
>
> - If the answer to the first question is no, i.e. it is not necessary
>   to reserve the RD pending table and configuration table under Xen,
>   then what is the right way to get rid of the kernel oops?
>
> IIUC, you are considering the general flow from the architecture's
> point of view, so you would prefer Xen to implement the EFI stub so
> that it complies with the general EFI boot sequence, right?
>
> If the conclusion is to change Xen to support the EFI stub, then that
> would be fine with me, and I will hold off and leave it to the Xen
> developers to work on.
>

As I mentioned before, proper EFI boot support in Xen would be nice.
*However*, I don't think it makes sense to go through all the trouble
of implementing that just to shut up a warning that doesn't affect Xen
to begin with.
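
For context, the warning being silenced comes out of the GICv3
driver's table-reservation path. From memory -- this is a simplified
sketch, not a verbatim quote; see drivers/irqchip/irq-gic-v3-its.c
and drivers/firmware/efi/efi.c for the real code -- the shape is
roughly:

static int gic_reserve_range(phys_addr_t addr, unsigned long size)
{
        /* Ask EFI to record the range in the MEMRESERVE table. */
        if (efi_enabled(EFI_CONFIG_TABLES))
                return efi_mem_reserve_persistent(addr, size);

        return 0;
}

/* ... and the callers do WARN_ON(gic_reserve_range(addr, size)); */

efi_mem_reserve_persistent() fails when no LINUX_EFI_MEMRESERVE config
table was ever installed -- which is the case when Xen boots dom0 with
EFI config tables but without running the EFI stub that installs it --
and the WARN_ON() turns that failure into the splat Leo is seeing.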


> > > [1] 
> > > https://lore.kernel.org/lkml/20220906024040.503764-1-leo.yan@xxxxxxxxxx/T/#u
> >
> > I'm totally baffled by the fact that you're trying to add extra
> > hacks to Linux just to paper over some of Xen's own issues.
>
> I have one last question: why does the kernel reserve the RD pending
> table and configuration table for kexec? As we know, the primary
> kernel and the secondary kernel use separate memory regions,

This is only true for kdump, not for kexec in general.
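
To spell out why the reservation matters for plain kexec: the next
kernel is free to allocate any page the previous kernel owned,
including the pages the redistributors are still writing pending bits
into, and EnableLPIs cannot reliably be cleared again. The
LINUX_EFI_MEMRESERVE table is just a linked list of [base, size]
ranges that every later kernel in the kexec chain must keep its hands
off; roughly (again from memory, see include/linux/efi.h for the real
definition):

struct linux_efi_memreserve {
        int             size;   /* allocated entries in this chunk */
        atomic_t        count;  /* entries currently in use        */
        phys_addr_t     next;   /* PA of the next chunk, if any    */
        struct {
                phys_addr_t     base;
                phys_addr_t     size;
        } entry[];
};

The kexec'd kernel reserves every listed range early in boot, and the
GICv3 driver, on finding EnableLPIs already set, reads the table
addresses back from GICR_PROPBASER/GICR_PENDBASER and keeps using
them instead of allocating fresh ones.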

> this means there is no race condition where the secondary kernel
> modifies the tables while the GIC accesses them, as long as the
> secondary kernel allocates new pages for the RD tables. So the only
> potential issue I can imagine is that the secondary kernel sets up a
> new RD pending table and configuration table, which might be
> inconsistent with the rest of the RDs in the system.
>
> Could you confirm if my understanding is correct or not?
>
> Sorry for the noise and the many questions. I understand this is a
> complex and difficult topic, and it is very likely that I lack
> sufficient knowledge in this area; that is exactly what I want to
> learn from this discussion and from you :-)
>
> Thanks,
> Leo
