
Re: [Minios-devel] Some considerations of ARM Unikraft supports




> -----Original Message-----
> From: Simon Kuenzer [mailto:simon.kuenzer@xxxxxxxxx]
> Sent: February 8, 2018 17:29
> To: Wei Chen <Wei.Chen@xxxxxxx>; Julien Grall <julien.grall@xxxxxxxxxx>;
> Costin Lupu <costin.lup@xxxxxxxxx>
> Cc: Felipe Huici <Felipe.Huici@xxxxxxxxx>; Kaly Xin <Kaly.Xin@xxxxxxx>; Shijie
> Huang <Shijie.Huang@xxxxxxx>; Florian Schmidt <Florian.Schmidt@xxxxxxxxx>; nd
> <nd@xxxxxxx>; minios-devel@xxxxxxxxxxxxx
> Subject: Re: [Minios-devel] Some considerations of ARM Unikraft supports
> 
> 
> 
> On 08.02.2018 06:00, Wei Chen wrote:
> > Hi Simon,
> >
> >> -----Original Message-----
> >> From: Simon Kuenzer [mailto:simon.kuenzer@xxxxxxxxx]
> >> Sent: February 7, 2018 19:36
> >> To: Wei Chen <Wei.Chen@xxxxxxx>; Julien Grall <julien.grall@xxxxxxxxxx>;
> >> Costin Lupu <costin.lup@xxxxxxxxx>
> >> Cc: Felipe Huici <Felipe.Huici@xxxxxxxxx>; Kaly Xin <Kaly.Xin@xxxxxxx>;
> >> Shijie Huang <Shijie.Huang@xxxxxxx>; Florian Schmidt <Florian.Schmidt@xxxxxxxxx>;
> >> nd <nd@xxxxxxx>; minios-devel@xxxxxxxxxxxxx
> >> Subject: Re: [Minios-devel] Some considerations of ARM Unikraft supports
> >>
> >>
> >>
> >> On 07.02.2018 07:16, Wei Chen wrote:
> >>> Hi Simon,
> >>>
> >>>> -----Original Message-----
> >>>> From: Simon Kuenzer [mailto:simon.kuenzer@xxxxxxxxx]
> >>>> Sent: February 7, 2018 0:34
> >>>> To: Wei Chen <Wei.Chen@xxxxxxx>; Julien Grall <julien.grall@xxxxxxxxxx>
> >>>> Cc: Felipe Huici <Felipe.Huici@xxxxxxxxx>; Kaly Xin <Kaly.Xin@xxxxxxx>;
> >>>> Shijie Huang <Shijie.Huang@xxxxxxx>; Florian Schmidt <Florian.Schmidt@xxxxxxxxx>;
> >>>> Costin Lupu <costin.lup@xxxxxxxxx>; nd <nd@xxxxxxx>; minios-devel@xxxxxxxxxxxxx
> >>>> Subject: Re: [Minios-devel] Some considerations of ARM Unikraft supports
> >>>>
> >>>> Hi Wei,
> >>>>
> >>>> On 06.02.2018 08:58, Wei Chen wrote:
> >>>>> Hi Simon,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Simon Kuenzer [mailto:simon.kuenzer@xxxxxxxxx]
> >>>>>> Sent: February 6, 2018 0:21
> >>>>>> To: Wei Chen <Wei.Chen@xxxxxxx>; Julien Grall <julien.grall@xxxxxxxxxx>
> >>>>>> Cc: Felipe Huici <Felipe.Huici@xxxxxxxxx>; Kaly Xin <Kaly.Xin@xxxxxxx>;
> >>>>>> Shijie Huang <Shijie.Huang@xxxxxxx>; Florian Schmidt <Florian.Schmidt@xxxxxxxxx>;
> >>>>>> Costin Lupu <costin.lup@xxxxxxxxx>; nd <nd@xxxxxxx>; minios-devel@xxxxxxxxxxxxx
> >>>>>> Subject: Re: [Minios-devel] Some considerations of ARM Unikraft supports
> >>>>>>
> >>>>>> Hi Wei, hi Julien,
> >>>>>>
> >>>>>> thanks a lot for discussing this already, I put my comments inline.
> >>>>>>
> >>>>>> On 05.02.2018 08:22, Wei Chen wrote:
> >>>>>>> Hi Julien,
> >>>>>>>
> >>>>>>> Thanks for your comments!
> >>>>>>> Replies inline.
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Julien Grall [mailto:julien.grall@xxxxxxxxxx]
> >>>>>>>> Sent: February 2, 2018 18:43
> >>>>>>>> To: Wei Chen <Wei.Chen@xxxxxxx>; Simon Kuenzer <simon.kuenzer@xxxxxxxxx>
> >>>>>>>> Cc: Felipe Huici <Felipe.Huici@xxxxxxxxx>; Kaly Xin <Kaly.Xin@xxxxxxx>;
> >>>>>>>> Shijie Huang <Shijie.Huang@xxxxxxx>; Florian Schmidt <Florian.Schmidt@xxxxxxxxx>;
> >>>>>>>> Costin Lupu <costin.lup@xxxxxxxxx>; nd <nd@xxxxxxx>; minios-devel@xxxxxxxxxxxxx
> >>>>>>>> Subject: Re: [Minios-devel] Some considerations of ARM Unikraft supports
> >>>>>>>>
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> On 02/02/18 09:10, Wei Chen wrote:
> >>>>>>>>> This week I am trying to boot Unikraft on the ARM64/KVM platform. In
> >>>>>>>>> this process I have collected some considerations and written a simple
> >>>>>>>>> proposal:
> >>>>>>>>>
> >>>>>>>>> My first target is to enable Unikraft on ARM64+KVM, so this proposal
> >>>>>>>>> focuses on ARM64+KVM. But the goal of ARM support is to enable Unikraft
> >>>>>>>>> on ARM32/ARM64-based hypervisors (ARM32/64 KVM, ARM64 Xen, etc.). So we
> >>>>>>>>> have to keep the current multi-arch framework and reuse common code like
> >>>>>>>>> virtual drivers across ARM32/ARM64.
> >>>>>>>>>
> >>>>>>>>> 1. Modify the folders for multi-architecture support
> >>>>>>>>>          1.1. Add an arm64 folder to unikraft/arch:
> >>>>>>>>>                  unikraft----arch----arm
> >>>>>>>>>                                |-----x86_64
> >>>>>>>>>                                |-----arm64 <-- New
> >>>>>>>>>
> >>>>>>>>>               The above folders contain architecture-specific Makefiles,
> >>>>>>>>>               Config files, compiler flags and some code. In most cases,
> >>>>>>>>>               these files are exclusive, so we'd better keep each
> >>>>>>>>>               architecture in a standalone folder. This also avoids
> >>>>>>>>>               making too many changes to the Unikraft Makefile.
> >>>>>>>>>
> >>>>>>>>>               If we add arm64 to unikraft/arch/arm, we have to do more
> >>>>>>>>>               ARCH comparisons in the Makefile:
> >>>>>>>>>                  unikraft----arch----arm----arm32
> >>>>>>>>>                            |      |-----arm64 <-- New
> >>>>>>>>>                            |
> >>>>>>>>>                            |-----x86_64
> >>>>>>>>>               Before: $(UK_BASE)/arch/$(ARCH)/Makefile.uk
> >>>>>>>>>               After:  $(UK_BASE)/arch/arm/$(ARCH)/Makefile.uk
> >>>>>>>>>               This change is complex, so we'd better add the arm64
> >>>>>>>>>               folder directly to unikraft/arch.
> >>>>>>>>
> >>>>>>>> Except for the assembly code, most of the C code should be very similar
> >>>>>>>> between ARM64 and ARM32. So it might make more sense to have a directory
> >>>>>>>> arch/arm with sub-folders arm32 and arm64.
> >>>>>>>>
> >>>>>>>
> >>>>>>> This is one option I had considered. But this will add a new variable
> >>>>>>> (VENDOR) to the make scripts, e.g.: $(UK_BASE)/arch/$(VENDOR)/$(ARCH)/Makefile.uk
> >>>>>>> And currently, only architecture-dependent code is placed in the $(ARCH)
> >>>>>>> folder. For example, the arm folder contains some files for the arm32 math
> >>>>>>> library. These files can only be used for arm32.
> >>>>>>
> >>>>>> What is this vendor variable about? Is it something that applies to a
> >>>>>> specific silicon? Is it required to add subfolders for it?
> >>>>>>
> >>>>>
> >>>>> Yes, it applies to a specific silicon. But "VENDOR" is not very accurate
> >>>>> here. I have reconsidered it, because x86 is not a "VENDOR" and not all x86
> >>>>> chips belong to Intel. Maybe "FAMILY" is better.
> >>>>>
> >>>>> If we really have some common C code for ARM32/64, I agree to add subfolders
> >>>>> for it.
> >>>>>
> >>>>> unikraft----arch----arm----arm32    ARM family: arm32 and arm64 architectures
> >>>>>                 |       |-----arm64
> >>>>>                 |
> >>>>>                 |------x86----i386
> >>>>>                         |-----x86_64  x86 family: i386 and x86_64 architectures
> >>>>>
> >>>>
> >>>> Sorry, I forgot to mention that you should also add only code here which:
> >>>> 1) ...is exposed to the user with an interface in include/uk/arch/*
> >>>> 2) ...works with all platforms (including linuxu, which is special).
> >>>>       So for instance, you should not add code that uses privileged
> >>>>       instructions that cannot be executed in Linux userspace. If a
> >>>>       different implementation is needed, it is a hint that this
> >>>>       functionality needs to be moved to the platform API
> >>>>       (include/uk/plat/*), as the sketch below illustrates.
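> >>>>
> >>>> To make that concrete, here is a rough sketch (the function and file
> >>>> names are made up for illustration, not the actual Unikraft API):
> >>>>
> >>>> /* include/uk/plat/lcpu.h (sketch): common declaration */
> >>>> void ukplat_lcpu_halt(void);
> >>>>
> >>>> /* plat/kvm/arm/lcpu.c (sketch): a privileged instruction is fine on KVM */
> >>>> void ukplat_lcpu_halt(void)
> >>>> {
> >>>>         __asm__ __volatile__("wfi");
> >>>> }
> >>>>
> >>>> /* plat/linuxu/lcpu.c (sketch): linuxu cannot execute wfi from userspace,
> >>>>  * so it blocks in a system call instead */
> >>>> #include <unistd.h>
> >>>>
> >>>> void ukplat_lcpu_halt(void)
> >>>> {
> >>>>         pause();
> >>>> }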
> >>>>
> >>>
> >>> Ahh, I understand now. Thanks for your explanation.
> >>>
> >>>> I had a discussion with Costin, and we were thinking of placing code
> >>>> that is shared by multiple platforms (but not by all, or is not
> >>>> architecture code) in plat/common/arm/* and plat/common/arm/arm64/*.
> >>>> Your platforms libs would include the source files from this directory.
> >>>>
> >>>> Subdirectories (for e.g., timer, GIC) are fine. What do you think? If
> >>>> you agree we will put a commit that introduces a structure to the
> >>>> staging branch.
> >>>>
> >>>
> >>> I think this idea is good. But the example here is not very accurate ; )
> >>> Once the "drivers" folder has been introduced, I still want to move the
> >>> timer and GIC into it.
> >>>
> >>
> >> Hum. You are right, we should probably distinguish which drivers get
> >> bundled into the platform libraries and which drivers are a selectable
> >> option and stay as independent libraries. This is not clear at all yet.
> >>
> >> What would you guys think if we do the following:
> >>
> >> plat/common/arm/* <-- code that is shared among multiple ARM platform
> >>                         libs (probably includes bare essential drivers
> >>                         like interrupt controllers and timers for
> >>                         scheduling)
> >> plat/common/x86/* <-- same for x86 platform libs
> >> plat/common/drivers/* <-- device and bus drivers that are going to be
> >>                             built as individual libraries
> >>                             (e.g., NIC, block device drivers)
> >> plat/common/drivers/include/* <-- Include folder for driver APIs that
> >>                                     depend on each other (for example:
> >>                                     PCI bus so that e1000 works with
> >>                                     pcifront but also linuxu's VFIO-based
> >>                                     pci bus)
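> >>
> >> A platform lib could then pull the shared code in from its Makefile.uk
> >> roughly like this (only a sketch; the variable and file names are
> >> assumptions):
> >>
> >> # pick shared ARM code out of plat/common instead of duplicating it
> >> LIBKVMPLAT_SRCS-$(ARCH_ARM_64) += $(UK_PLAT_BASE)/common/arm/gic-v2.c
> >> LIBKVMPLAT_SRCS-$(ARCH_ARM_64) += $(UK_PLAT_BASE)/common/arm/arm64/timer.c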
> >>
> >
> > It looks good.
> >
> >> Note that the NET or BLOCK device APIs (that are implemented by
> >> individual drivers) should be defined by libraries in lib/ (e.g.,
> >> lib/uknet, lib/ukblock); network stacks would then use uknet for doing
> >> networking I/O, and VFSs would use ukblock.
> >>
> >> The structure of the drivers folder is still not clear though. How
> >> should we organize its substructure? Would something similar to
> >> Linux's drivers folder make sense? I think people might be most familiar
> >> with this.
> >>
> >
> > I am OK with reusing Linux's drivers structure.
> >
> >> If we have this, each of the platform Config.uk's would list only a
> >> subset of drivers that they can work with (e.g., pcifront on the Xen
> >> platform lib only).
> >> We also have to figure out how we handle Makefile.uk's and Config.uk's
> >> for a driver library. Probably we need global switches for each driver
> >> that can be enabled by one or multiple platforms (see the Config.uk
> >> sketch below). A new menu item (either in the root or platform structure)
> >> should appear that lists only enabled drivers and allows us to configure
> >> each of them individually. The platform's Linker.uk would then need to
> >> include the dependent, compiled driver library objects in the final
> >> linking.
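> >>
> >> Something like this, as a sketch (the option names are invented):
> >>
> >> config DRIVER_VIRTIO_NET
> >>         bool "virtio network driver"
> >>         default n
> >>         depends on PLAT_KVM || PLAT_XEN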
> >>
> >> @Wei, Costin: What do you think? Does this make sense to you?
> >> The best might be to go just with this and see if it fits our needs.
> >> If not, we restructure it afterwards.
> >>
> >
> > Ok, I agree to go with this first. If it doesn't fit, we can restructure it ASAP.
> >
> 
> Great, I will use this structure for KVM x86, too.
> I will send out a patch that introduces this new structure and ask
> you guys for review.
> 

Ok. I will collate the discussions we have had these days, and then re-send
a refined proposal ; )


> >>>>>>>
> >>>>>>> If some C code is very similar between arm32 and arm64, I think this code
> >>>>>>> would be very similar between arm and x86 too. We can place such code in
> >>>>>>> Unikraft/lib.
> >>>>>>>
> >>>>>>> The above 2 options would affect the common framework, so I still want to
> >>>>>>> get some comments from Simon.
> >>>>>>
> >>>>>> I welcome this discussion because one of the exercises of Unikraft's 0.2
> >>>>>> release is to figure out how to do the right split.
> >>>>>> I am okay with changing the arch folder substructure if we can already
> >>>>>> foresee that it will make sense. In such a case, I would also like to
> >>>>>> apply the same principle to the x86 architecture folder.
> >>>>>>
> >>>>>> The idea of architecture libraries is that they contain code which is
> >>>>>> specific to the CPU but the same across all target platforms
> >>>>>> (xen, kvm, linux). We were originally expecting this to be mostly
> >>>>>> assembly code, but we might be wrong with our original assumption. So,
> >>>>>> if you foresee any common C code for 32- and 64-bit ARM that would be
> >>>>>> duplicated otherwise, we should use a single arm folder instead.
> >>>>>>
> >>>>>
> >>>>> Sorry, about "use a single arm folder instead": does it mean we don't add
> >>>>> any subfolders to the arm or x86 folder? Like the following?
> >>>>>
> >>>>> unikraft----arch----arm
> >>>>>                 |
> >>>>>                 |------x86
> >>>>>
> >>>>
> >>>> Sorry, I wasn't clear. I meant:
> >>>> arch/arm/*
> >>>>
> >>>> with specific code in:
> >>>>
> >>>> arch/arm/arm32
> >>>> arch/arm/arm64
> >>>>
> >>>
> >>> Thanks for your clarification, I got it now.
> >>>
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>          1.2. Add arm64 to unikraft/include/uk/arch
> >>>>>>>>>
> >>>>>>>>>          1.3. Add arm64 KVM platform code to unikraft/plat/kvm/arm, and
> >>>>>>>>>               use the Makefile to select objects for the correct
> >>>>>>>>>               architecture:
> >>>>>>>>>
> >>>>>>>>>               ifeq ($(ARCH_X86_64),y)
> >>>>>>>>>                  LIBKVMPLAT_SRCS-y += $(LIBKVMPLAT_BASE)/x86/entry64.S
> >>>>>>>>>                  LIBKVMPLAT_SRCS-y += $(LIBKVMPLAT_BASE)/x86/cpu_x86_64.c
> >>>>>>>>>               else ifeq ($(ARCH_ARM_64),y)
> >>>>>>>>>                  LIBKVMPLAT_SRCS-y += $(LIBKVMPLAT_BASE)/arm/entry64.S
> >>>>>>>>>                  LIBKVMPLAT_SRCS-y += $(LIBKVMPLAT_BASE)/arm/cpu_arm64.c
> >>>>>>>>>               else ifeq ($(ARCH_ARM_32),y)
> >>>>>>>>>                  LIBKVMPLAT_SRCS-y += $(LIBKVMPLAT_BASE)/arm/entry.S
> >>>>>>>>>                  LIBKVMPLAT_SRCS-y += $(LIBKVMPLAT_BASE)/arm/cpu_arm.c
> >>>>>>>>>               endif
> >>>>>>>>>
> >>>>>>>>>          1.4. Add a "drivers" folder to unikraft/
> >>>>>>>>>               This is because we may have some virtual device drivers
> >>>>>>>>>               that can be shared among platforms. For example, we can
> >>>>>>>>>               reuse the virtual UART, timer and GIC drivers across
> >>>>>>>>>               arm32/arm64 KVM/Xen.
> >>>>>>
> >>>>>> Is it okay for you to wait a bit with the driver folder? I am currently
> >>>>>> working on PCI for x86 KVM and I figured that Unikraft needs a mechanism
> >>>>>> to select drivers for devices (and maybe buses) individually for each
> >>>>>> platform. But drivers are still something that depends on the platform.
> >>>>>> For instance, Xen could reuse the same PCI drivers with pcifront, Linux
> >>>>>> with VFIO, but a third platform might not support PCI at all.
> >>>>>>
> >>>>>> Because of this, I am currently considering introducing a folder in
> >>>>>> plat: e.g., plat/common/drivers/pci/virtio-net. What do you guys think?
> >>>>>>
> >>>>>
> >>>>> That's quite good, I will wait for it : )
> >>>>>
> >>>>>>>>>
> >>>>>>>>> 2. Bootloader
> >>>>>>>>>          2.1. Because of the BIOS, x86 uses multiboot to load the kernel
> >>>>>>>>>               on Linux-KVM QEMU. But on ARM platforms, we can skip EFI
> >>>>>>>>>               and boot from the virtual machine's RAM base address. So we
> >>>>>>>>>               can place _libkvmplat_entry at the CPU's reset entry via
> >>>>>>>>>               the linker script. On the ARM64 platform, the default
> >>>>>>>>>               virtual machine CPU model is Cortex-A15.
> >>>>>>>>
> >>>>>>>> Cortex A15 does not support 64-bit. So how come it is the default
> >>>>>>>> virtual machine CPU model for ARM64?
> >>>>>>>>
> >>>>>>>
> >>>>>>>     From the code, if we don't specify any cpumodel, mach-virt's default
> >>>>>>> cpumodel will be set to "cortex-a15". But you're right: if we use
> >>>>>>> cortex-a15 by default, we can't run any 64-bit image. That was my mistake.
> >>>>>>> We have to set the correct cpumodel (cortex-a53/a57 or host) on the
> >>>>>>> command line to make a 64-bit image work. But mach-virt still uses the
> >>>>>>> a15memmap and a15irqmap.
> >>>>>>>
> >>>>>>>
> >>>>>>>> But likely, you want to expose the same MIDR as the underlying CPU. So
> >>>>>>>> if an erratum has to be implemented in Unikraft, it will be able to
> >>>>>>>> know about it.
> >>>>>>>>
> >>>>>>>
> >>>>>>> Exposing the underlying CPU's MIDR to the guest depends on the hypervisor.
> >>>>>>> Unikraft itself doesn't know whether this MIDR is the same as the
> >>>>>>> underlying CPU's or not. And actually, no matter what cpumodel the
> >>>>>>> hypervisor is emulating, the code runs on the physical CPU directly; we
> >>>>>>> don't emulate CPU instructions. If we run Unikraft on a cortex-a53 host
> >>>>>>> CPU, we can compile the image with GCC's Cortex-A53 erratum workaround
> >>>>>>> flags.
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>               plat/kvm/arm/link64.ld:
> >>>>>>>>>               ENTRY(_libkvmplat_entry)
> >>>>>>>>>               SECTIONS {
> >>>>>>>>>                   . = 0x40000000;
> >>>>>>>>>
> >>>>>>>>>                   /* Code */
> >>>>>>>>>                   _stext = .;
> >>>>>>>>>
> >>>>>>>>>                   .text :
> >>>>>>>>>                   {
> >>>>>>>>>                       *(.text)
> >>>>>>>>>                       *(.text.*)
> >>>>>>>>>                   }
> >>>>>>>>>
> >>>>>>>>>                   _etext = .;
> >>>>>>>>>                   ...
> >>>>>>>>>               }
> >>>>>>>>>
> >>>>>>>>>          2.2. Use the fixed physical addresses of the PL011 UART, timer
> >>>>>>>>>               and GIC, so we can skip device tree parsing.
> >>>>>>>>
> >>>>>>>> What promises you that the PL011, timer and GIC will always be at the
> >>>>>>>> same address?
> >>>>>>>
> >>>>>>> My original idea was that we select a fixed machine (mach-virt) for
> >>>>>>> Unikraft to run on. In this case, the memory map is fixed.
> >>>>>>>
> >>>>>>>> Or do you expect the user to hack unikraft build system to set
> >>>>>>>> the address?
> >>>>>>>>
> >>>>>>>
> >>>>>>> In my opinion, yes. Why should we parse the device tree and increase our
> >>>>>>> boot time and footprint?
> >>>>>>>
> >>>>>>
> >>>>>> Sorry for my stupid question: Would this hardcode the guest device
> >>>>>> configuration that you would need to set with KVM? I mean, how are
> >>>>>> network (or other) devices handed over to the guest? If yes, I am
> >>>>>> concerned that Unikraft is getting difficult to use on ARM. I would
> >>>>>> rather prefer to provide a configuration option where users can
> >>>>>> disable scanning the device tree, so the image expects devices at
> >>>>>> hardcoded places.
> >>>>>
> >>>>> While I was writing this proposal, I hadn't considered so many devices. I
> >>>>> just considered some platform devices like the interrupt controller, timer
> >>>>> and UART. At that moment, I preferred to hardcode. But now I think parsing
> >>>>> the device tree is better, because the virtual net/block devices are
> >>>>> configured dynamically for a VM.
> >>>>>
> >>>>
> >>>> Good. Unikraft already includes libfdt. You should probably use it for
> >>>> the parsing and make the platform libraries depend on it (see the arm32
> >>>> platforms). Roughly along these lines:
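> >>>>
> >>>> /* Sketch of locating the PL011 via libfdt (illustration only; error
> >>>>  * handling and the surrounding glue are omitted, and the variable
> >>>>  * names are made up): */
> >>>> #include <stdint.h>
> >>>> #include <libfdt.h>
> >>>>
> >>>> static uint64_t pl011_base;
> >>>>
> >>>> static void find_pl011(const void *fdt)
> >>>> {
> >>>>         int offs, len;
> >>>>         const uint64_t *reg;
> >>>>
> >>>>         offs = fdt_node_offset_by_compatible(fdt, -1, "arm,pl011");
> >>>>         if (offs < 0)
> >>>>                 return; /* no PL011 in this DTB */
> >>>>
> >>>>         reg = fdt_getprop(fdt, offs, "reg", &len);
> >>>>         if (reg && len >= (int) sizeof(*reg))
> >>>>                 pl011_base = fdt64_to_cpu(reg[0]);
> >>>> }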
> >>>>
> >>>>>>
> >>>>>>>> At least from Xen PoV, the memory layout is not part of the ABI and a
> >>>>>>>> guest should rely on the DT for getting the correct addresses.
> >>>>>>>>
> >>>>>>>
> >>>>>>> I understand your concern. It's not part of the ABI, so the addresses can
> >>>>>>> change between boards.
> >>>>>>>
> >>>>>>> I think we must make a tradeoff between flexibility and deployment density
> >>>>>>> (boot time and footprint).
> >>>>>>>
> >>>>>>
> >>>>>> If this makes sense for you: I prefer having the most flexible behavior as
> >>>>>> the default and providing configuration options in Config.uk to switch
> >>>>>> features off individually. I think Unikraft should hand such tradeoff
> >>>>>> questions over to unikernel builders.
> >>>>>>
> >>>>>
> >>>>> That would be good.
> >>>>>
> >>>>
> >>>> Perfect ;-)
> >>>>
> >>>>>>>>>          2.3. Set up exception traps.
> >>>>>>>>>
> >>>>>>>>> 3. Support a single CPU.
> >>>>>>
> >>>>>> This is fine for the first version. The other platforms also just
> >>>>>> support a single CPU for now.
> >>>>>>
> >>>>>>>>>
> >>>>>>>>> 4. Support multiple threads.
> >>>>>>>>>          4.1. Implement GIC interrupt controller drivers. If we don't
> >>>>>>>>>               specify the GIC version in QEMU's parameters, the default
> >>>>>>>>>               GIC will be detected by kvm_arm_vgic_probe. Most ARM hosts
> >>>>>>>>>               use GICv2, GICv3 or GICv4, and QEMU provides GICv2 and
> >>>>>>>>>               GICv3 emulators. For best compatibility, we have to
> >>>>>>>>>               implement GICv2 and GICv3 drivers without MSI/MSI-X
> >>>>>>>>>               support. This means we don't need to implement gicv2m or
> >>>>>>>>>               gicv3-its for Unikraft at this time. A taste of what the
> >>>>>>>>>               GICv2 driver has to do is sketched below.
> >>>>>>>>>          4.2. Implement the ARMv8 virtual timer driver.
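> >>>>>>>>>
> >>>>>>>>>               For illustration, a minimal GICv2 sketch (register offset
> >>>>>>>>>               per the GICv2 spec; the distributor base is an assumption,
> >>>>>>>>>               using QEMU virt's map as an example instead of the DT):
> >>>>>>>>>
> >>>>>>>>>               #include <stdint.h>
> >>>>>>>>>
> >>>>>>>>>               #define GIC_DIST_BASE     0x08000000UL /* assumed */
> >>>>>>>>>               #define GICD_ISENABLER(n) (GIC_DIST_BASE + 0x100 + 4 * (n))
> >>>>>>>>>
> >>>>>>>>>               static void gic_enable_irq(uint32_t irq)
> >>>>>>>>>               {
> >>>>>>>>>                       /* each GICD_ISENABLERn covers 32 interrupt lines */
> >>>>>>>>>                       volatile uint32_t *reg = (volatile uint32_t *)
> >>>>>>>>>                                       GICD_ISENABLER(irq / 32);
> >>>>>>>>>                       *reg = 1U << (irq % 32);
> >>>>>>>>>               }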
> >>>>>>>>>
> >>>>>>
> >>>>>> Please contact Costin about what is required from Unikraft's scheduler
> >>>>>> API. I CC'ed him.
> >>>>>>
> >>>>>
> >>>>> Thanks, I will contact Costin when I start to implement this driver.
> >>>>>
> >>>>>>>>> 5. Set up a 1:1 mapping page table between physical and virtual memory.
> >>>>>>>>>          5.1. Configure the MMU
> >>>>>>>>>          5.2. Create page tables with 1GB or 2MB blocks, e.g. along the
> >>>>>>>>>               lines of the sketch below
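> >>>>>>>>>
> >>>>>>>>>               /* Sketch of one identity-mapped 1GB block in an AArch64
> >>>>>>>>>                * level-1 table (descriptor bits per the ARMv8 ARM; that
> >>>>>>>>>                * MAIR index 0 means normal memory is an assumption): */
> >>>>>>>>>               #include <stdint.h>
> >>>>>>>>>
> >>>>>>>>>               #define PTE_BLOCK     0x1UL        /* block descriptor */
> >>>>>>>>>               #define PTE_AF        (1UL << 10)  /* access flag */
> >>>>>>>>>               #define PTE_ATTR_IDX0 (0UL << 2)   /* MAIR index 0 */
> >>>>>>>>>
> >>>>>>>>>               static uint64_t l1_table[512]
> >>>>>>>>>                               __attribute__((aligned(4096)));
> >>>>>>>>>
> >>>>>>>>>               static void map_1gb_identity(uint64_t pa)
> >>>>>>>>>               {
> >>>>>>>>>                       /* level-1 index: address bits [38:30] */
> >>>>>>>>>                       l1_table[(pa >> 30) & 0x1ff] =
> >>>>>>>>>                               (pa & ~((1UL << 30) - 1)) |
> >>>>>>>>>                               PTE_ATTR_IDX0 | PTE_AF | PTE_BLOCK;
> >>>>>>>>>               }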
> >>>>>>>>>
> >>>>>>
> >>>>>> Good.
> >>>>>>
> >>>>>>>>> 6. Implement PSCI interface to support machine shutdown.
> >>>>>>>>
> >>>>>>>> FWIW, system_off only exists from PSCI 0.2 onwards.
> >>>>>>>>
> >>>>>>>
> >>>>>>> It seems psci-0.2 is the default PSCI version of mach-virt with KVM.
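> >>>>>>>
> >>>>>>> As a sketch, SYSTEM_OFF then boils down to a single call (function ID
> >>>>>>> per the PSCI 0.2 spec; KVM guests normally use the HVC conduit):
> >>>>>>>
> >>>>>>> #define PSCI_0_2_FN_SYSTEM_OFF 0x84000008U
> >>>>>>>
> >>>>>>> static void psci_system_off(void)
> >>>>>>> {
> >>>>>>>         register unsigned long x0 __asm__("x0") =
> >>>>>>>                         PSCI_0_2_FN_SYSTEM_OFF;
> >>>>>>>
> >>>>>>>         __asm__ __volatile__("hvc #0" : "+r" (x0) : : "memory");
> >>>>>>>         /* does not return on success */
> >>>>>>> }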
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>> 7. Network, block and other I/O devices?
> >>>>>>>>>         Should we port virtual device drivers like virtio-net and
> >>>>>>>>>         pv-net from KVM and Xen?
> >>>>>>
> >>>>>> After we agree on how Unikraft should include drivers, we can start
> >>>>>> porting them. Is KVM on ARM using virtio-net, too? Is there a virtual
> >>>>>> PCI bus attached?
> >>>>>
> >>>>> Yes, KVM on ARM is using virtio-net too. The virtio-net device is connected
> >>>>> to a virtio-mmio bus. But there is an ECAM PCI host controller emulator too.
> >>>>>
> >>>>
> >>>> How are other devices attached? For instance, block devices. I remember
> >>>> we have SD card emulation. Maybe we need another bus driver that uses
> >>>> FDT later to make them work in Unikraft.
> >>>>
> >>>
> >>> By default, all virtio devices attach to the virtio-mmio bus. PCI
> >>> passthrough devices can be connected to the ECAM PCI host emulator. So if we
> >>> want to support ARM PCI passthrough, we have to implement an ECAM PCI host
> >>> driver for Unikraft.
> >>>
> >>> If you want to add an SD card controller to the VM, the controller may attach
> >>> to the platform bus or a simple-bus:
> >>>           SD_MMC_1@B000000 {         ===>> attach SD MMC to platform bus
> >>>                  compatible = "SD1...";
> >>>           }
> >>>
> >>>           platform@c000000 {
> >>>                   compatible = "qemu,platform", "simple-bus";
> >>>                   ranges = <0x0 0x0 0xc000000 0x2000000>;
> >>>                   interrupt-parent = <0x8001>;
> >>>                   #address-cells = <0x1>;
> >>>                   #size-cells = <0x1>;
> >>>
> >>>                   SD_MMC_2@c003000 { ===>> attach SD MMC to simple bus
> >>>                          compatible = "SD2...";
> >>>                   }
> >>>           };
> >>>
> >>> Both of the above buses are very simple. We should implement them for
> >>> Unikraft. But I am not sure what "SD card emulation" means here: is it an SD
> >>> card controller emulator for the guest or just a block device? If it's a
> >>> block device, why should we care whether it is an SD card or not?
> >>>
> >>>
> >>
> >> Hey, thanks for the clarification. For your question: maybe I used the
> >> wrong words. I meant the SD card reader entries in the dtb that are used
> >> for attaching block devices to the guest - and emulated by QEMU. Is this
> >> way of attaching block devices the default way for ARM?
> >>
> >
> > QEMU can emulate lots of ARM machines (Raspberry Pi, Samsung Exynos, virt,
> > etc.). The machine "virt" emulates a virtual board; it is a stripped-down,
> > minimalist platform. Virtio is the default configuration: all block devices
> > attach to the VM via virtio-scsi. But if we select a machine like the
> > Raspberry Pi, QEMU emulates the real Raspberry Pi board, and block devices
> > attach to the VM via an SDHC host controller. For our use case, I think we
> > should always use the "virt" machine, like other projects already used in
> > the cloud. So I think we don't need to implement an SDHC controller driver
> > to support block devices.
> >
> 
> Sounds reasonable. I agree. Thanks!
> 
> >>>>>>
> >>>>>>>>
> >>>>>>>> There is no emulation provided on Xen, so you would need PV drivers to
> >>>>>>>> get access to the network/block devices.
> >>>>>>
> >>>>>> This is fine ;-).
> >>>>>>
> >>>>>>>
> >>>>>>> Yes, I have the same opinion as you 😊
> >>>>>>>
> >>>>>>>
> >>>>>>>>
> >>>>>>>> Cheers,
> >>>>>>>>
> >>>>>>>> --
> >>>>>>>> Julien Grall
> >>>>>>
> >>>>>> Thanks,
> >>>>>>
> >>>>>> Simon
> >>>>
> >>>> Thanks,
> >>>>
> >>>> Simon