
Re: [MirageOS-devel] Building a sample File storage app



I'm working on the tracing part, but here's an interesting observation: if I don't increment the file_offset
after writing each page_buffer, then all of the data is uploaded; it's only when I do increment the
file_offset after each page_buffer write that it gets stuck after writing 3-4 MB of data.
Further, I tried creating a FAT16 disk with a logical sector size of 4096, i.e.
mkfs.fat -F 16 -S 4096 -C disk.img 1024000
Using this disk I was able to write almost 30 MB of data, after which it started to hang again.
I suspect it hangs in write_to_location while allocating new sectors. Any ideas on this?
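
For reference, here is roughly what my flush step looks like (a simplified sketch; error handling and the surrounding Lwt_stream code are trimmed, and the exact Fs.write error variants are from memory):

    (* Flush the filled part of the page buffer at the current file offset,
       then advance the offset by the number of bytes just written. *)
    open Lwt

    let flush_page_buffer fs path page_buffer page_buffer_offset file_offset =
      let buffered_data = Cstruct.sub page_buffer 0 !page_buffer_offset in
      Fs.write fs path !file_offset buffered_data >>= function
      | `Ok () ->
          (* This increment is the step that seems to trigger the hang;
             without it the upload "completes" (presumably just rewriting
             the same offset over and over). *)
          file_offset := !file_offset + !page_buffer_offset;
          page_buffer_offset := 0;
          return ()
      | `Error _ ->
          fail (Failure "Fs.write failed")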


On Sun, Feb 14, 2016 at 3:38 AM, Thomas Leonard <talex5@xxxxxxxxx> wrote:
On 13 February 2016 at 07:38, Vanshdeep Singh <kansi13@xxxxxxxxx> wrote:
>
>
> On Fri, Feb 12, 2016 at 1:14 PM, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>
>> On 12 February 2016 at 05:56, Vanshdeep Singh <kansi13@xxxxxxxxx> wrote:
>> > Hi,
>> > I tried running my implementation directly on Xen and the performance
>> > was much better (no idea why).
>> > But I have run into new issues:
>> > - I tried creating a disk of size 1000 MB using "fat create disk.img
>> > 102400KiB" and it returned "fat: unimplemented", even though the disk
>> > was created.
>>
>> Boot_sector.format_of_clusters will choose FAT32 if it needs 65527 or
>> more clusters. However, it appears that only FAT16 is implemented. I'm
>> not sure what changes are required for FAT32.
>>
>> For testing, you could format it with mkfs (mkfs -t fat disk.img), but
>> I guess you'll have the same problem using it.
>
> I was able to successfully create and run a 1 GB FAT disk using mkfs.fat
>>
>>
>> > Then I tried running the image on Xen and got the following error,
>> > Fatal error: exception Fs.Make(B)(M).Fs_error(_)
>> > Raised at file "src/core/lwt.ml", line 789, characters 22-23
>>
>> Mirage error reporting really needs sorting out. For now, you could
>> use Printexc.register_printer in fs.ml to tell it how to display the
>> error as something other than "_".
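>>
>> For example, a minimal sketch of such a printer (Fs_error is the exception
>> from fs.ml; string_of_error is a placeholder for whatever pretty-printer
>> your error type actually has):
>>
>>   let () = Printexc.register_printer (function
>>     | Fs_error err -> Some ("Fs_error: " ^ string_of_error err)
>>     | _ -> None)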
>>
>> > - I also tried uploading a file of around 30 MiB onto a disk.img of
>> > size 100 MiB. The upload hung after writing 4 MB of data.
>>
>> > Any suggestions on how to deal with the above situations?
>>
>> Was it spinning (high CPU load shown in "xl list") or waiting for
>> something (idle)?
>>
>> If spinning, you can grab a stack trace to find out where:
>>
>>
>> http://www.brendangregg.com/blog/2016-01-27/unikernel-profiling-from-dom0.html
>>
>> If it's waiting for something, annotate your main thread with
>> MProf.Trace.should_resolve and compile with tracing on. When you view
>> the trace, your thread (which never finishes) will be shown in red and
>> you can follow the yellow arrows to discover what it was waiting for.
>> See:
>>
>>   https://mirage.io/wiki/profiling
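>>
>> (A rough sketch of what that looks like, with placeholder names: enable
>> tracing in config.ml and mark the long-running thread in the unikernel.)
>>
>>   (* config.ml -- "storage" and main are placeholders for your own job *)
>>   let tracing = mprof_trace ~size:1000000 ()
>>   let () = register ~tracing "storage" [ main ]
>>
>>   (* unikernel.ml -- handle_upload is a placeholder for your handler;
>>      if this thread never resolves it shows up red in the trace viewer *)
>>   let t = handle_upload request in
>>   MProf.Trace.should_resolve t;
>>   t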
>>
>> Both of these techniques may be useful for finding performance problems
>> too.
>
> I have tried to narrow down the problem and it turns out that the code
> gets stuck at Fs.write: if I comment out the Fs.write call, all the data is
> successfully received and iterated over using Lwt_stream.iter_s. But when I
> try to write using Fs.write, the first 3 to 4 page buffers are successfully
> written and then it hangs. I tried to profile the VM using
> mirage-trace-viewer but there was not much I could understand. I am
> attaching the results in case you can see and suggest something.

The first two traces seem to be mostly networking stuff. It might be
worth simplifying the test case so the unikernel just writes test data
directly (or reads a small request and writes it many times).
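
For example, a minimal write-only test might look something like this (a
sketch of the V1_LWT.FS API from memory, untested):

    (* Sketch: write a 4 KiB page repeatedly at increasing offsets, so any
       hang can be reproduced without the HTTP/networking path involved. *)
    open Lwt

    module Main (FS : V1_LWT.FS) = struct
      let start fs =
        let buf = Io_page.get 1 |> Io_page.to_cstruct in  (* one 4 KiB page *)
        Cstruct.memset buf 0x61;                          (* fill with 'a' *)
        FS.create fs "test.dat" >>= fun _ ->   (* ignore the result in this sketch *)
        let rec loop offset remaining =
          if remaining = 0 then return ()
          else
            FS.write fs "test.dat" offset buf >>= function
            | `Ok () -> loop (offset + Cstruct.len buf) (remaining - 1)
            | `Error _ -> fail (Failure "FS.write failed")
        in
        loop 0 2048   (* 2048 * 4 KiB = 8 MiB, past the point where it hangs *)
    end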

The third doesn't have many labels, so it might be mirage-block-xen
stuff. I see I started adding trace events, but never got around to
submitting a PR:

 https://github.com/talex5/mirage-block-xen/tree/tracing

(trace labels have no cost when compiling without tracing, so it would
be good to have more!)

The last two traces show the unikernel constantly waking up and then
immediately sleeping again without doing anything. Very odd. It might be
worth adding some trace labels around here:

 https://github.com/mirage/mirage-platform/blob/dfd00d518570c074b4e9b36a59472f5e7354df5f/xen/lib/main.ml#L62

> Note: I was trying to upload a 30 MB file which I could copy onto the disk
> using the "fat add" command, but when I tried uploading and writing to the
> disk, the Fs.write call wouldn't return after writing a few page buffers.
>
> About the files: trace.ctf 1-5 show the incremental traces of the VM when I
> upload the 30 MB file.
>
>
>>
>>
>> > Regards,
>> > Vansh
>> >
>> >
>> > On Thu, Feb 11, 2016 at 8:11 PM, Vanshdeep Singh <kansi13@xxxxxxxxx>
>> > wrote:
>> >>
>> >> Hi Thomas,
>> >> I have chosen to implement the disk in FAT format. Drawing inspiration
>> >> from your code, I have tried to do the disk writing operations, but
>> >> instead of V1_LWT.BLOCK I have chosen to go with V1_LWT.FS because of
>> >> the API. However, the write performance I get is very poor: it takes
>> >> more than 11 seconds to upload a 67 KB file. The file is uploaded
>> >> quickly, but the time taken to write it to disk is long, hence the delay.
>> >>
>> >> Much of my implementation is similar to this code
>> >>
>> >>
>> >> https://github.com/0install/0repo-queue/blob/master/upload_queue.ml#L159-L172
>> >> the difference comes in flush_page_buffer. Since I am using V1_LWT.FS,
>> >> I use the FS.write call to write the data to the disk, i.e.
>> >>
>> >>>
>> >>> let buffered_data = Cstruct.sub page_buffer 0 !page_buffer_offset in
>> >>> Fs.write fs path !file_offset buffered_data
>> >>
>> >>
>> >>
>> >> How can I improve the performance?
>> >>
>> >> Note: I am testing this using --unix
>> >>
>> >>
>> >> Regards,
>> >> Vansh
>> >>
>> >> On Sun, Feb 7, 2016 at 11:28 PM, Thomas Leonard <talex5@xxxxxxxxx>
>> >> wrote:
>> >>>
>> >>> On 6 February 2016 at 20:48, Vanshdeep Singh <kansi13@xxxxxxxxx>
>> >>> wrote:
>> >>> > Hi,
>> >>> > I am trying to build a sample file storage web app and I need some
>> >>> > directions on how to approach it; in particular, I am trying to
>> >>> > figure out how to do the storage.
>> >>> > Currently, I am drawing my insight from here and here (Irmin). Any
>> >>> > kind of suggestion would be really helpful.
>> >>> >
>> >>> > NOTE: files of any size could be uploaded so I am aiming at
>> >>> > streaming
>> >>> > uploads/downloads.
>> >>>
>> >>> Hi Vansh,
>> >>>
>> >>> Currently, FAT is the only supported file-system on Mirage/Xen:
>> >>>
>> >>>   https://github.com/mirage/ocaml-fat
>> >>>
>> >>> If your needs are simpler then you could also implement your own
>> >>> scheme. The file queue example you linked just stores the files
>> >>> sequentially on the disk, which is fine for a queue.
>> >>>
>> >>> If you want to help build something better (e.g. to support Irmin),
>> >>> the ocaml-btree project is under development:
>> >>>
>> >>>
>> >>>
>> >>> http://lists.xenproject.org/archives/html/mirageos-devel/2016-01/msg00059.html
>> >>>
>> >>>
>> >>> --
>> >>> Dr Thomas Leonard    http://roscidus.com/blog/
>> >>> GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA
>> >>
>> >>
>> >
>>
>>
>>
>> --
>> Dr Thomas Leonard    http://roscidus.com/blog/
>> GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA
>
>



--
Dr Thomas Leonard    http://roscidus.com/blog/
GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel

 

