
Re: [PATCH V1 02/16] xen/ioreq: Make x86's IOREQ feature common




On 14.09.20 17:17, Jan Beulich wrote:

Hi Jan

On 10.09.2020 22:21, Oleksandr Tyshchenko wrote:
---
  MAINTAINERS                     |    8 +-
  xen/arch/x86/Kconfig            |    1 +
  xen/arch/x86/hvm/dm.c           |    2 +-
  xen/arch/x86/hvm/emulate.c      |    2 +-
  xen/arch/x86/hvm/hvm.c          |    2 +-
  xen/arch/x86/hvm/io.c           |    2 +-
  xen/arch/x86/hvm/ioreq.c        | 1425 +--------------------------------------
  xen/arch/x86/hvm/stdvga.c       |    2 +-
  xen/arch/x86/hvm/vmx/vvmx.c     |    3 +-
  xen/arch/x86/mm.c               |    2 +-
  xen/arch/x86/mm/shadow/common.c |    2 +-
  xen/common/Kconfig              |    3 +
  xen/common/Makefile             |    1 +
  xen/common/ioreq.c              | 1410 ++++++++++++++++++++++++++++++++++++++
This suggests it was almost the entire file which got moved. It would
be really nice if you could convince git to show the diff here, rather
than removal and addition of 1400 lines.
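Good point. I will try to re-generate the patch with git's copy detection enabled, e.g. something like (assuming the file split stays as it is in this version):

    git format-patch -C --find-copies-harder -1

which, I think, should let common/ioreq.c show up as a copy of xen/arch/x86/hvm/ioreq.c with only the real differences in the diff.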

Additionally I wonder whether what's left in the original file wouldn't
better become inline functions now. If this was done in the previous
patch, the change would be truly a rename then, I think.
Last time, when trying to make something inline in the arch files for the RFC series (I don't remember exactly what it was for), I got completely stuck with build issues due to the header (inter-)dependencies, which I failed to resolve.
Anyway, I got your point and will try to experiment with that.
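Just to check I understood the direction: the idea would be that the few bits remaining on the x86 side end up as static inlines in the arch header, so that the .c movement becomes a pure rename. A rough sketch of the shape (the hook name below is made up, only for illustration):

    /* include/asm-x86/hvm/ioreq.h -- sketch only */
    static inline bool arch_hvm_ioreq_destroy(struct domain *d)
    {
        /* Nothing arch-specific left to do in this sketch. */
        return true;
    }

with common/ioreq.c calling such hooks where the x86-only code used to be.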



--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -19,41 +19,12 @@
  #ifndef __ASM_X86_HVM_IOREQ_H__
  #define __ASM_X86_HVM_IOREQ_H__
-bool hvm_io_pending(struct vcpu *v);
-bool handle_hvm_io_completion(struct vcpu *v);
-bool is_ioreq_server_page(struct domain *d, const struct page_info *page);
+#include <asm/hvm/emulate.h>
+#include <asm/hvm/hvm.h>
+#include <asm/hvm/vmx/vmx.h>
Are all three really needed here? Especially the last one strikes me as
odd.

We can leave only #include <asm/hvm/emulate.h> here and move #include <asm/hvm/vmx/vmx.h> to x86/hvm/ioreq.c.
Also #include <asm/hvm/hvm.h> could be dropped.
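So the includes in the x86 header would shrink to just:

    #include <asm/hvm/emulate.h>

and x86/hvm/ioreq.c would pick up the VMX dependency locally:

    #include <asm/hvm/vmx/vmx.h>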



--- /dev/null
+++ b/xen/include/xen/ioreq.h
@@ -0,0 +1,82 @@
+/*
+ * ioreq.h: Hardware virtual machine assist interface definitions.
+ *
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __IOREQ_H__
+#define __IOREQ_H__
__XEN_IOREQ_H__ please.

ok
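So the guard will become:

    #ifndef __XEN_IOREQ_H__
    #define __XEN_IOREQ_H__
    ...
    #endif /* __XEN_IOREQ_H__ */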



+#include <xen/sched.h>
+
+#include <asm/hvm/ioreq.h>
Is this include really needed here (i.e. by the code further down in
the file, and hence by everyone including this header), or rather
just in a few specific .c files?
I think just in a few specific .c files, which are x86/hvm/ioreq.c and common/ioreq.c now, and several other files later on (x86/hvm/dm.c, arm/io.c, etc.).
Shall I include that header in these files instead?
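For example (just to confirm what I have in mind), common/ioreq.c would then do:

    #include <xen/ioreq.h>
    #include <asm/hvm/ioreq.h>

and similarly x86/hvm/ioreq.c (and later x86/hvm/dm.c, arm/io.c, etc.) would include asm/hvm/ioreq.h directly, instead of getting it via xen/ioreq.h.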



+#define GET_IOREQ_SERVER(d, id) \
+    (d)->arch.hvm.ioreq_server.server[id]
arch.hvm.* feels like a layering violation when used in this header.
Got it. The only reason GET_IOREQ_SERVER is here is the inline get_ioreq_server(). I will make it non-inline and move both to common/ioreq.c. I assume this layering violation issue also applies to hvm_domain_has_ioreq_server(), introduced in
[PATCH V1 11/16] xen/ioreq: Introduce hvm_domain_has_ioreq_server()
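For the GET_IOREQ_SERVER part, what I have in mind for common/ioreq.c is roughly (based on the code as it is in this patch):

    #define GET_IOREQ_SERVER(d, id) \
        (d)->arch.hvm.ioreq_server.server[id]

    static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
                                                     unsigned int id)
    {
        if ( id >= MAX_NR_IOREQ_SERVERS )
            return NULL;

        return GET_IOREQ_SERVER(d, id);
    }

so at least the macro and the arch.hvm reference disappear from the common header.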


--
Regards,

Oleksandr Tyshchenko




 

