
Re: [MirageOS-devel] Building a sample File storage app





On Fri, Feb 12, 2016 at 1:14 PM, Thomas Leonard <talex5@xxxxxxxxx> wrote:
On 12 February 2016 at 05:56, Vanshdeep Singh <kansi13@xxxxxxxxx> wrote:
> Hi,
> I tried running my implementation directly on Xen and the performance was
> much better (no idea why).
> But I have run into new issues,
> - Tried creating a disk of size 1000Mb using "fat create disk.img 102400KiB"
> and it returned
> "fat: unimplemented" even though the disk was created.

Boot_sector.format_of_clusters will choose FAT32 if it needs 65527 or
more clusters. However, it appears that only FAT16 is implemented. I'm
not sure what changes are required for FAT32.
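Roughly, the selection works like this (a sketch of the kind of logic
involved, not the actual Boot_sector.format_of_clusters source; 4085 is
the usual FAT12/FAT16 cutoff from the spec and may differ in ocaml-fat):

  type fat_format = FAT12 | FAT16 | FAT32

  let format_of_clusters clusters =
    if clusters < 4085 then FAT12        (* tiny volumes *)
    else if clusters < 65527 then FAT16  (* the only case implemented *)
    else FAT32                           (* -> "fat: unimplemented" *)

So any disk big enough to need 65527 clusters falls into the FAT32 branch
and fails.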

For testing, you could format it with mkfs (mkfs -t fat disk.img), but
I guess you'll have the same problem using it.

I was able to successfully create and run a 1GB FAT disk using mkfs.fat.

> Then I tried running the image on Xen and got the following error:
> Fatal error: exception Fs.Make(B)(M).Fs_error(_)
> Raised at file "src/core/lwt.ml", line 789, characters 22-23

Mirage error reporting really needs sorting out. For now, you could
use Printexc.register_printer in fs.ml to tell it how to display the
error as something other than "_".
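Something like this (a minimal sketch; the real constructor and error type
in fs.ml will differ, so adjust the pattern to match):

  (* placeholder for whatever exception the FS functor actually raises *)
  exception Fs_error of string

  let () =
    Printexc.register_printer (function
      | Fs_error msg -> Some ("Fs_error: " ^ msg)
      | _ -> None)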

> - I also tried uploading a file of around 30MiB onto a disk.img of
> size 100MiB. The upload hung
> after writing about 4MB of data.

> Any suggestions on how to deal with the above situations?

Was it spinning (high CPU load shown in "xl list") or waiting for
something (idle)?

If spinning, you can grab a stack trace to find out where:

 http://www.brendangregg.com/blog/2016-01-27/unikernel-profiling-from-dom0.html

If it's waiting for something, annotate your main thread with
MProf.Trace.should_resolve and compile with tracing on. When you view
the trace, your thread (which never finishes) will be shown in red and
you can follow the yellow arrows to discover what it was waiting for.
See:

 https://mirage.io/wiki/profiling
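
For example (a sketch only; handle_upload stands in for whatever your main
thread is called):

  let start http fs =
    let t = handle_upload http fs in   (* your main, never-finishing thread *)
    MProf.Trace.should_resolve t;      (* flag it for the trace viewer *)
    t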

Both of these techniques may be useful for finding performance problems too.

I have tried to narrow the problem down and it turns out that the code gets
stuck at Fs.write: if I comment out the Fs.write call, all the data is successfully
received and iterated over with Lwt_stream.iter_s. But when I do write with Fs.write,
the first 3 to 4 page buffers are written successfully and then it hangs. I tried to profile
the VM using mirage-trace-viewer but there was not much I could understand. I am
attaching the results in case you can see and suggest something.
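
For reference, the write path looks roughly like this (names simplified,
not my exact code):

  open Lwt.Infix

  module Save (FS : V1_LWT.FS) = struct
    (* write each page buffer from the upload body at an increasing offset *)
    let save_stream fs path (body : Cstruct.t Lwt_stream.t) =
      let file_offset = ref 0 in
      Lwt_stream.iter_s
        (fun buf ->
           FS.write fs path !file_offset buf >>= function
           | `Ok () ->
             file_offset := !file_offset + Cstruct.len buf;
             Lwt.return_unit
           | `Error _ -> Lwt.fail_with "FS.write failed")
        body
  end

It is one of these FS.write calls that never returns after the first few
buffers.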

Note: I was trying to upload a 30MB file which I could copy onto the disk using the "fat add"
command, but when I tried uploading it and writing it to the disk, the Fs.write call wouldn't
return after writing a few page buffers.

About the files: trace.ctf 1-5 show the incremental trace of the VM as I upload the
30MB file.


> Regards,
> Vansh
>
>
> On Thu, Feb 11, 2016 at 8:11 PM, Vanshdeep Singh <kansi13@xxxxxxxxx> wrote:
>>
>> Hi Thomas,
>> I have chosen to implement the disk in FAT format. Drawing inspiration from
>> your code, I
>> have tried to do the disk write operations, but instead of V1_LWT.BLOCK I
>> have chosen to
>> go with V1_LWT.FS for its API. However, the write performance I get
>> is very poor:
>> it takes more than 11 seconds to upload a 67KB file. The file itself is uploaded
>> quickly, but the time
>> taken to write it to disk is long, hence the delay.
>>
>> Much of my implementation is similar to this code
>>
>> https://github.com/0install/0repo-queue/blob/master/upload_queue.ml#L159-L172
>> the difference comes in flush_page_buffer. Since I am using V1_LWT.FS,
>> I use
>> the FS.write call to write the data to the disk, i.e.
>>
>>>
>>> let buffered_data = Cstruct.sub page_buffer 0 !page_buffer_offset in
>>>
>>> Fs.write fs path !file_offset buffered_data
>>
>>
>>
>> How can I improve the performance?
>>
>> Note: I am testing this using --unix
>>
>>
>> Regards,
>> Vansh
>>
>> On Sun, Feb 7, 2016 at 11:28 PM, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>>
>>> On 6 February 2016 at 20:48, Vanshdeep Singh <kansi13@xxxxxxxxx> wrote:
>>> > Hi,
>>> > I am trying to build a sample file storage web app and I need some
>>> > directions
>>> > on how to approach it; in particular I am trying to figure out how to do
>>> > storage.
>>> > Currently, I am drawing my insight from here and here (irmin). Any kind
>>> > of
>>> > suggestion
>>> > would be really helpful.
>>> >
>>> > NOTE: files of any size could be uploaded so I am aiming at streaming
>>> > uploads/downloads.
>>>
>>> Hi Vansh,
>>>
>>> Currently, FAT is the only supported file-system on Mirage/Xen:
>>>
>>>   https://github.com/mirage/ocaml-fat
>>>
>>> If your needs are simpler then you could also implement your own
>>> scheme. The file queue example you linked just stores the files
>>> sequentially on the disk, which is fine for a queue.
>>>
>>> If you want to help build something better (e.g. to support Irmin),
>>> the ocaml-btree project is under development:
>>>
>>>
>>> http://lists.xenproject.org/archives/html/mirageos-devel/2016-01/msg00059.html
>>>
>>>
>>> --
>>> Dr Thomas Leonard    http://roscidus.com/blog/
>>> GPG: DA98 25AE CAD0 8975 7CDA BD8E 0713 3F96 CA74 D8BA
>>
>>
>



--
Dr Thomas Leonard    http://roscidus.com/blog/
GPG: DA98 25AE CAD0 8975 7CDA BD8E 0713 3F96 CA74 D8BA

Attachment: trace.ctf1
Description: Binary data

Attachment: trace.ctf2
Description: Binary data

Attachment: trace.ctf3
Description: Binary data

Attachment: trace.ctf4
Description: Binary data

Attachment: trace.ctf5
Description: Binary data

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel

 

