
Re: [Stratos-dev] Enabling hypervisor agnosticism for VirtIO backends



On Wed, 15 Sep 2021, Trilok Soni wrote:
> On 9/14/2021 8:29 PM, Stefano Stabellini wrote:
> > On Tue, 14 Sep 2021, Trilok Soni wrote:
> > > On 9/13/2021 4:51 PM, Stefano Stabellini via Stratos-dev wrote:
> > > > On Mon, 6 Sep 2021, AKASHI Takahiro wrote:
> > > > > > the second is how many context switches are involved in a
> > > > > > transaction.
> > > > > > Of course, as with all things, there is a trade-off. Things
> > > > > > involving the very tightest latency would probably opt for a bare
> > > > > > metal backend, which I think would imply hypervisor knowledge in
> > > > > > the backend binary.
> > > > > 
> > > > > In the configuration phase of a virtio device, latency won't be a
> > > > > big issue.
> > > > > For device operations (i.e. reads/writes to block devices), if we
> > > > > can resolve the 'mmap' issue, as Oleksandr is proposing right now,
> > > > > the only remaining question is how efficiently we can deliver
> > > > > notifications to the opposite side. Right?
> > > > > And this is a very common problem whatever approach we take.
> > > > > 
> > > > > Anyhow, if we do care about latency in my approach, most of the
> > > > > virtio-proxy-related code can be re-implemented as just a stub (or
> > > > > shim?) library, since the protocols are defined as RPCs.
> > > > > In that case, however, we would lose the benefit of providing a
> > > > > "single binary" BE.
> > > > > (I know this is an arguable requirement, though.)
> > > > 
> > > > In my experience, latency, performance, and security are far more
> > > > important than providing a single binary.
> > > > 
> > > > In my opinion, we should optimize for the best performance and security,
> > > > then be practical on the topic of hypervisor agnosticism. For instance,
> > > > a shared source with a small hypervisor-specific component, with one
> > > > implementation of the small component for each hypervisor, would provide
> > > > a good enough hypervisor abstraction. It is good to be hypervisor
> > > > agnostic, but I wouldn't go extra lengths to have a single binary. I
> > > > cannot picture a case where a BE binary needs to be moved between
> > > > different hypervisors and a recompilation is impossible (BE, not FE).
> > > > Instead, I can definitely imagine detailed requirements on IRQ latency
> > > > having to be lower than 10us or bandwidth higher than 500 MB/sec.
> > > > 
> > > > Instead of virtio-proxy, my suggestion is to work together on a common
> > > > project and common source with others interested in the same problem.
> > > > 
> > > > I would pick something like kvmtool as a basis. It doesn't have to
> > > > be kvmtool, and kvmtool specifically is GPL-licensed, which is
> > > > unfortunate because a BSD-style license would ease integration with
> > > > Zephyr and other RTOSes.
> > > > 
> > > > As long as the project is open to working together on multiple
> > > > hypervisors and deployment models then it is fine. For instance, the
> > > > shared source could be based on OpenAMP kvmtool [1] (the original
> > > > kvmtool likely prefers to stay small and narrow-focused on KVM). OpenAMP
> > > > kvmtool was created to add support for hypervisor-less virtio but they
> > > > are very open to hypervisors too. It could be a good place to add a Xen
> > > > implementation, a KVM fatqueue implementation, a Jailhouse
> > > > implementation, etc. -- work together toward the common goal of a single
> > > > BE source (not binary) supporting multiple different deployment models.
> > > 
> > > I have my reservations about using "kvmtool" for any development here.
> > > "kvmtool" can't be used in products; it is just a tool for developers.
> > > 
> > > The benefit of solving the problem w/ rust-vmm is that some of the
> > > crates from this project can be used in a real product. Alex has
> > > mentioned that "rust-vmm" today has some KVM-specific bits, but the
> > > rust-vmm community is already discussing removing or reorganizing them
> > > in such a way that other hypervisors can fit in.
> > > 
> > > Microsoft has a Hyper-V implementation w/ cloud-hypervisor which uses
> > > some of the rust-vmm components, and they have shown interest in
> > > adding Hyper-V support to the "rust-vmm" project as well. I don't know
> > > the current progress, but they have proven it in the
> > > "cloud-hypervisor" project.
> > > 
> > > "rust-vmm" project's license will work as well for most of the project
> > > developments and I see that "CrosVM" is shipping in the products as well.
> > 
> > Most things in open source start as a developer's tool before they
> > become part of a product :)
> 
> Agree, but I had an offline discussion with one of the active developers
> of kvmtool, and the confidence in using it in a product was nowhere near
> what we expected during our evaluation. The same goes for QEMU; one of
> the biggest problems was the number of security issues against QEMU's
> huge codebase.

That is fair, but it is important to recognize that these are *known*
security issues.

Does rust-vmm have a security process and a security response team? I
tried googling for it but couldn't find relevant info.

QEMU is a very widely used and very well inspected codebase. It has a
mailing list to report security issues and a security process. As a
consequence we know of many vulnerabilities affecting the code base.
As far as I am aware, rust-vmm has not yet been inspected with the same
level of attention or by the same number of security researchers.

That said, it is of course undeniable that QEMU's larger size implies a
higher number of security issues. But for this project, we wouldn't be
using the whole of QEMU; we would narrow it down to a build with only a
few relevant pieces. I imagine the total LOC count would still be higher,
but the number of relevant security vulnerabilities would be only a small
fraction of the QEMU total.
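For illustration, such a narrowed build could start from a configure invocation along these lines (flag availability varies by QEMU version, so check ./configure --help against your tree; the target choice here is just an example):

```shell
# Hypothetical sketch: build QEMU with a single target and without the
# default set of device models, so only explicitly enabled pieces are
# compiled in.
./configure \
    --target-list=aarch64-softmmu \
    --without-default-devices \
    --disable-user \
    --disable-docs
make -j"$(nproc)"
```

The security-relevant surface then shrinks roughly with the set of device models actually linked into the binary, which is the point being made above.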

 
> > I am concerned about how "embeddable" rust-vmm is going to be. Do you
> > think it would be possible to run it against an RTOS together with other
> > apps written in C?
> 
> I don't see any limitations in rust-vmm. For example, I am confident that
> we can port a rust-vmm based backend to QNX as the host OS, and the same
> goes w/ Zephyr as well. Some work is needed, but nothing fundamentally
> blocks it. We should be able to run it w/ Fuchsia as well with some
> effort.
 
That's good to hear.
