Re: [PATCH v1 19/27] xen/riscv: emulate guest writes to virtual APLIC MMIO
- To: Jan Beulich <jbeulich@xxxxxxxx>
- From: Oleksii Kurochko <oleksii.kurochko@xxxxxxxxx>
- Date: Mon, 20 Apr 2026 17:02:13 +0200
- Cc: Romain Caritey <Romain.Caritey@xxxxxxxxxxxxx>, Alistair Francis <alistair.francis@xxxxxxx>, Connor Davis <connojdavis@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Michal Orzel <michal.orzel@xxxxxxx>, Julien Grall <julien@xxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
- Delivery-date: Mon, 20 Apr 2026 15:02:27 +0000
- List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
On 4/16/26 3:19 PM, Jan Beulich wrote:
On 14.04.2026 18:04, Oleksii Kurochko wrote:
On 4/2/26 4:18 PM, Jan Beulich wrote:
On 10.03.2026 18:08, Oleksii Kurochko wrote:
+static int cf_check vaplic_emulate_store(const struct vcpu *vcpu,
+ unsigned long addr, uint32_t value)
+{
+ struct vaplic *vaplic = to_vaplic(vcpu->domain->arch.vintc);
+ struct aplic_priv *priv = vaplic->base.info->private;
+ uint32_t offset = addr & APLIC_REG_OFFSET_MASK;
See ./CODING_STYLE as to uses of fixed-width types.
+ unsigned long aplic_addr = addr - priv->paddr_start;
+ const uint32_t *auth_irq_bmp = vcpu->domain->arch.vintc->private;
+
+ switch ( offset )
+ {
+ case APLIC_SETIP_BASE ... APLIC_SETIP_LAST:
And (taking this just as an example) any misaligned accesses falling in this
range are fine?
Do you mean something like 0x1C02 instead of 0x1C00 or 0x1C04?
Yes.
+ /*
+ * As sourcecfg register starts from 1:
+ * 0x0000 domaincfg
+ * 0x0004 sourcecfg[1]
+ * 0x0008 sourcecfg[2]
+ * ...
+ * 0x0FFC sourcecfg[1023]
+ * It is necessary to calculate an interrupt number by substracting
Nit: subtracting
+ * of APLIC_DOMAINCFG instead of APLIC_SOURCECFG_BASE.
+ */
+ if ( !AUTH_IRQ_BIT(regval_to_irqn(offset - APLIC_DOMAINCFG)) )
+ /* interrupt not enabled, ignore it */
Throughout the series: Please adhere to ./CODING_STYLE.
+ return 0;
+
+ break;
And any value is okay to write?
No, it should be in the range
[APLIC_SOURCECFG_SM_INACTIVE, APLIC_SOURCECFG_SM_LEVEL_LOW].
I will add the check before break:
if ( value > APLIC_SOURCECFG_SM_LEVEL_LOW )
{
gdprintk(XENLOG_WARNING,
"value(%u) is incorrect for sourcecfg register\n",
value);
value = APLIC_SOURCECFG_SM_INACTIVE;
}
And why would writing APLIC_SOURCECFG_SM_INACTIVE be any better, when
that's not what the guest wanted? Simply ignore such writes, unless the
spec mandates specific behavior for out-of-range values?
The spec doesn't mandate specific behavior for out-of-range values, but I
thought it would be better to make the IRQ inactive instead of just ignoring
the write, so it cannot somehow affect a potential occurrence of this
interrupt.
+ case APLIC_TARGET_BASE ... APLIC_TARGET_LAST:
+ struct vcpu *target_vcpu = NULL;
+
+ /*
+ * Look at vaplic_emulate_load() for explanation why
+ * APLIC_GENMSI is substracted.
+ */
There's no vaplic_emulate_load() - how can I go look there?
It is introduced in the next patch.
As before - it should be possible to review patch series strictly
sequentially. Further, what if this patch gets committed, and the other
gets delayed by several months?
Got you, I will re-order patches.
+ if ( !AUTH_IRQ_BIT(regval_to_irqn(offset - APLIC_GENMSI)) )
+ /* interrupt not enabled, ignore it */
+ return 0;
+
+ for ( int i = 0; i < vcpu->domain->max_vcpus; i++ )
unsigned int
+ {
+ struct vcpu *v = vcpu->domain->vcpu[i];
+
+ if ( v->vcpu_id == (value >> APLIC_TARGET_HART_IDX_SHIFT) )
+ {
+ target_vcpu = v;
+ break;
+ }
+ }
+
+ ASSERT(target_vcpu);
What guarantees the pointer to be non-NULL? The incoming value can be
arbitrary, afaict.
I didn't understand your point. It is just checking that target_vcpu has
been found. If, after the for() loop, target_vcpu is still NULL, then
something is wrong in Xen.
If that's true, then the assertion is fine to have. I can't help the
impression though that a guest could pick a value such that you can't
possibly find the target vCPU. Asserting on guest controlled input is
not okay, as was said several times before.
I will then call domain_crash() when target_vcpu is NULL, since the value is
incorrect; I missed that the guest could supply a wrong value.
+ if ( !(vaplic->regs.domaincfg & APLIC_DOMAINCFG_DM) )
+ {
+ vaplic_dm_update_target(cpuid_to_hartid(target_vcpu->processor),
+ &value);
+ }
+ else
+ vaplic_update_target(priv->imsic_cfg,
+ vcpu_guest_file_id(target_vcpu),
+ cpuid_to_hartid(target_vcpu->processor),
+ &value);
I'm struggling with the naming here: When DM is clear, a function with "dm"
in the name is called.
It means direct (delivery) mode. Maybe it is better to put dm at the end
of the function name? Or is it better to change it to something else?
Without a better understanding of what is wanted, all I can say is that
calling something with "dm" in its name when the condition says it's not
"dm" is confusing.
Basically it should be the following. If domaincfg.DM (where DM is the
delivery mode according to the spec) is 0, then the APLIC works in direct
delivery mode; if the DM bit is 1, then MSI delivery mode is used.
So just for clarity I will rename:
- vaplic_dm_update_target -> vaplic_ddm_update_target
- vaplic_update_target -> vaplic_mdm_update_target
Or maybe just s/ddm/direct/ and s/mdm/msi/ in the function names would be
better still.
+ default:
+ panic("%s: unsupported register offset: %#x\n", __func__, offset);
Crashing the host for the guest doing something odd? It's odd that the function
only ever returns 0 anyway - it could simply return an error here (if the
intention is to not ignore such writes).
But maybe it is a legal offset and we really want to support it?
Still not a reason to crash the entire host?
Agree, a domain crash will be more than enough.
Thanks.
~ Oleksii