Re: [Xen-devel] [PATCH V3 12/29] x86/vvtd: Add MMIO handler for VVTD
On Thu, Sep 21, 2017 at 11:01:53PM -0400, Lan Tianyu wrote:
> From: Chao Gao <chao.gao@xxxxxxxxx>
>
> This patch adds VVTD MMIO handler to deal with MMIO access.
>
> Signed-off-by: Chao Gao <chao.gao@xxxxxxxxx>
> Signed-off-by: Lan Tianyu <tianyu.lan@xxxxxxxxx>
> ---
> xen/drivers/passthrough/vtd/vvtd.c | 91 ++++++++++++++++++++++++++++++++++++++
> 1 file changed, 91 insertions(+)
>
> diff --git a/xen/drivers/passthrough/vtd/vvtd.c b/xen/drivers/passthrough/vtd/vvtd.c
> index c851ec7..a3002c3 100644
> --- a/xen/drivers/passthrough/vtd/vvtd.c
> +++ b/xen/drivers/passthrough/vtd/vvtd.c
> @@ -47,6 +47,29 @@ struct vvtd {
> struct page_info *regs_page;
> };
>
> +/* Setting viommu_verbose enables debugging messages of vIOMMU */
> +bool __read_mostly viommu_verbose;
> +boolean_runtime_param("viommu_verbose", viommu_verbose);
> +
> +#ifndef NDEBUG
> +#define vvtd_info(fmt...) do { \
> + if ( viommu_verbose ) \
> + gprintk(XENLOG_G_INFO, ## fmt); \
If you use gprintk you should use XENLOG_INFO; the '_G_' variants are
only used with plain printk.
> +} while(0)
> +#define vvtd_debug(fmt...) do { \
> + if ( viommu_verbose && printk_ratelimit() ) \
Not sure why you need printk_ratelimit(); XENLOG_G_DEBUG is already
rate-limited.
> + printk(XENLOG_G_DEBUG fmt); \
Any reason why vvtd_info uses gprintk and here you use printk?
> +} while(0)
> +#else
> +#define vvtd_info(fmt...) do {} while(0)
> +#define vvtd_debug(fmt...) do {} while(0)
No need for 'fmt...'; plain '...' will suffice, since you are discarding
the parameters anyway.
> +#endif
> +
> +struct vvtd *domain_vvtd(struct domain *d)
> +{
> + return (d->viommu) ? d->viommu->priv : NULL;
Unneeded parentheses around d->viommu.
Also, it seems wrong to call domain_vvtd with !d->viommu. So I think
this helper should just be removed, and d->viommu->priv fetched
directly.
> +}
> +
> static inline void vvtd_set_reg(struct vvtd *vtd, uint32_t reg, uint32_t value)
> {
> vtd->regs->data32[reg/sizeof(uint32_t)] = value;
> @@ -68,6 +91,73 @@ static inline uint64_t vvtd_get_reg_quad(struct vvtd *vtd, uint32_t reg)
> return vtd->regs->data64[reg/sizeof(uint64_t)];
> }
>
> +static int vvtd_in_range(struct vcpu *v, unsigned long addr)
> +{
> + struct vvtd *vvtd = domain_vvtd(v->domain);
> +
> + if ( vvtd )
> + return (addr >= vvtd->base_addr) &&
> + (addr < vvtd->base_addr + PAGE_SIZE);
So the register set covers a PAGE_SIZE, but hvm_hw_vvtd_regs only
covers from 0 to 1024B, it seems like there's something wrong here...
> + return 0;
> +}
> +
> +static int vvtd_read(struct vcpu *v, unsigned long addr,
> + unsigned int len, unsigned long *pval)
> +{
> + struct vvtd *vvtd = domain_vvtd(v->domain);
> + unsigned int offset = addr - vvtd->base_addr;
> +
> + vvtd_info("Read offset %x len %d\n", offset, len);
> +
> + if ( (len != 4 && len != 8) || (offset & (len - 1)) )
What value does hardware return when performing unaligned reads or
reads with wrong size?
Here you return with pval not set, which is dangerous.
> + return X86EMUL_OKAY;
> +
> + if ( len == 4 )
> + *pval = vvtd_get_reg(vvtd, offset);
> + else
> + *pval = vvtd_get_reg_quad(vvtd, offset);
...yet here you don't check for offset < 1024.
> +
> + return X86EMUL_OKAY;
> +}
> +
> +static int vvtd_write(struct vcpu *v, unsigned long addr,
> + unsigned int len, unsigned long val)
> +{
> + struct vvtd *vvtd = domain_vvtd(v->domain);
> + unsigned int offset = addr - vvtd->base_addr;
> +
> + vvtd_info("Write offset %x len %d val %lx\n", offset, len, val);
> +
> + if ( (len != 4 && len != 8) || (offset & (len - 1)) )
> + return X86EMUL_OKAY;
> +
> + if ( len == 4 )
> + {
> + switch ( offset )
> + {
> + case DMAR_IEDATA_REG:
> + case DMAR_IEADDR_REG:
> + case DMAR_IEUADDR_REG:
> + case DMAR_FEDATA_REG:
> + case DMAR_FEADDR_REG:
> + case DMAR_FEUADDR_REG:
> + vvtd_set_reg(vvtd, offset, val);
Hm, so you are using a full page when you only care for 6 4B
registers? Seems like quite a waste of memory.
Thanks, Roger.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel