
Re: [MirageOS-devel] How to support a larger disk with hvt + kv_ro=direct (pass-through)?



Hi,

`mirage configure -t hvt --kv_ro=direct` is equivalent to `mirage configure -t hvt 
--kv_ro=crunch` in the current implementation, as shown in the source code:
https://github.com/mirage/mirage/blob/f2c9347efff552e452b35d980fa9eb1520dc2de3/lib/mirage_impl_kv.ml#L49
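To make the effect concrete, here is a minimal config.ml sketch in the mirage 3 DSL 
(a simplified approximation of what mirage-skeleton's device-usage/kv_ro does, not a 
verbatim copy of it):

  (* config.ml: expose the directory "t" as a read-only key/value store.
     With `mirage configure -t unix --kv_ro=direct` this resolves to the
     pass-through backend, so "t" is read from the host at run time;
     with `-t hvt --kv_ro=direct` it currently resolves to crunch, so the
     contents of "t" are baked into the unikernel image at build time. *)
  open Mirage

  let data = generic_kv_ro "t"

  let main = foreign "Unikernel.Main" (kv_ro @-> job)

  let () = register "kv_ro" [ main $ data ]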

The communication paths currently available between the unikernel and the host OS 
layers with Solo5-hvt are the "block device", the "network device" and the "console device".
Supporting what you want would require additional modifications in the Solo5-hvt 
layer.
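
If the immediate goal is just a larger, host-visible store on hvt, the block device 
path is the one that already exists: attach a disk image to the unikernel and read it 
through the block interface. A minimal config.ml sketch, assuming the mirage 3 DSL 
(the image name "storage.img" and the module name "Unikernel.Main" are only examples, 
and the solo5-hvt flag used to attach the image at launch depends on the Solo5 version):

  (* config.ml: pass a host file to the unikernel as a block device.
     For -t unix the file is opened directly; for -t hvt the image is
     attached through the solo5-hvt tender when the unikernel starts. *)
  open Mirage

  let img = block_of_file "storage.img"

  let main = foreign "Unikernel.Main" (block @-> job)

  let () = register "block_example" [ main $ img ]

If I remember correctly, mirage can also put a FAT- or tar-formatted key/value store 
on top of such a block device, which gives a host-editable store without touching 
Solo5 itself.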

Kind regards,

--
Takayuki Imada


On 3/27/19 1:44 AM, Hiroshi Doyu wrote:
Hello,

When configured as unix+kv_ro=direct, the unikernel seems to read a file from disk dynamically [1].
But hvt+kv_ro=direct seems to use a static ramdisk (crunch?) [2].

How can hvt do a similar pass-through to a file, as "unix+kv_ro=direct" does?

[1]
$ mirage configure -t unix --kv_ro=direct && make depend && make
$ ./kv_ro
2019-03-26 16:31:28 +00:00: INF [application] foo
$ echo -n "hello" > t/secret
$ ./kv_ro
2019-03-26 16:32:15 +00:00: INF [application] hello

[2]
$ mirage configure -t hvt --kv_ro=direct && make depend && make
$ sudo ./solo5-hvt kv_ro.hvt
2019-03-26 16:34:19 -00:00: INF [application] foo
$ echo -n "hello" > t/secret
$ sudo ./solo5-hvt kv_ro.hvt
2019-03-26 16:35:11 -00:00: INF [application] foo

The code change to mirage-skeleton's device-usage/kv_ro used above:
https://github.com/ehirdoy/mirage-skeleton/commit/bff6688e075ffec67723d5663a984263bffa8563


_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/mirageos-devel
