
Re: [MirageOS-devel] An HTTP server with TLS



On 7 January 2015 at 17:35, Dave Scott <Dave.Scott@xxxxxxxxxx> wrote:
>
>> On 7 Jan 2015, at 17:12, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>
>> On 7 January 2015 at 10:56, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>>> On 7 Jan 2015, at 10:45, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>>>
>>>> On 7 January 2015 at 10:42, Anil Madhavapeddy <anil@xxxxxxxxxx> wrote:
>>>>> On 5 Jan 2015, at 09:53, Thomas Leonard <talex5@xxxxxxxxx> wrote:
>>>>>>
>>>>>> I'd like to add TLS to my Mirage web server. What's the best way to do 
>>>>>> this?
>>>>>>
>>>>>> My Unikernel.Main functor currently takes a (H : Cohttp_lwt.Server)
>>>>>> argument. I see that main.ml configures this using:
>>>>>>
>>>>>> module Conduit1 = Conduit_mirage.Make(Stackv41)(Vchan1)
>>>>>> module Http1 = HTTP.Make(Conduit1)
>>>>>>
>>>>>> Can conduit deal with TLS for me? The conduit docs say "The reason
>>>>>> this library exists is to provide a degree of abstraction from the
>>>>>> precise SSL library used", which suggests that it should.
>>>>>
>>>>> Conduit_mirage doesn't support this yet -- just Conduit_lwt_unix.
>>>>> Before adding it in, I was waiting for xentropyd and the C bindings
>>>>> to work, which should all be in the trees.  If we could now get a
>>>>> mirage-skeleton example of a manual SSL server using the TCP/IP
>>>>> stack directly, then the Conduit_mirage version won't be too far
>>>>> behind.
>>>>
>>>> tls/mirage/example has a direct example that works on Xen. I'm going
>>>> to look at getting HTTPS support working now, unless you want to do it
>>>> first.
>>>>
>>>
>>> Go for it!  I'm taking a shot at pulling the OCaml runtime out of
>>> mirage-platform at the moment.
>>
>> OK. Could someone clarify the buffer-alignment rules for me again?
>>
>> V1.mli says:
>>
>> module type NETWORK = sig
>>  type page_aligned_buffer
>>  (** Abstract type for a page-aligned memory buffer *)
>>
>> and
>>
>> module type ETHIF = sig
>>  type buffer
>>  (** Abstract type for a memory buffer that may not be page aligned *)
>>
>> tcpip's ethif.ml just passes the (non-aligned) buffer straight through
>> to Netif, which seems wrong.
>>
>> V1_LWT restricts the types with:
>>
>> module type NETWORK = NETWORK
>>   with type page_aligned_buffer = Io_page.t
>>
>> module type ETHIF = ETHIF
>>   with type buffer = Cstruct.t
>>
>> io-page is a bit vague about what an Io_page.t is:
>>
>> type t = (char, Bigarray.int8_unsigned_elt, Bigarray.c_layout) 
>> Bigarray.Array1.t
>> (** Type of memory blocks. *)
>>
>> Io_page.get n returns "a memory block of [n] pages", so an Io_page.t
>> isn't a single page of memory.
>>
>> The actual problem I'm seeing with TLS on Xen is:
>>
>> Invalid page: offset=2920, length=1245
>>
>> This comes from Netif. The underlying buffer is page-aligned
>> (it's allocated by Tls_mirage.conv_io), so I assume tcpip is splitting
>> it at an unfortunate point.
>>
>> It appears it was working before because HTTP_IO buffers its writes
>> using tcpip's Channel module, which batches them into single IO pages.
>> With TLS, these page-sized chunks don't go directly to TCP, but go
>> via TLS instead.
>>
>> So:
>>
>> 1. What does "page-aligned memory buffer" really mean?
>
> It's a bit of a mess atm :)
>
> I think we need to write down our alignment requirements somewhere. I assume
> they all come from the low-level drivers i.e. the higher-level layers don't
> really care (is that true?)
>
> Skimming through the netfront code I think that the protocol allows you to
> grant a page and provide an offset within it, so you don't need to align
> everything. You do need to split requests that cross page boundaries though.
> One wrinkle is that if you don't trust the network backend (say it's in
> a driver domain with a dodgy wifi driver and has been compromised) then
> you may not want to grant a page which happens to also contain some secret
> data as well as your payload, since the untrustworthy backend can ignore the
> offset and read the whole thing. Thinking about it, I suppose that would be
> the driver-domain equivalent of Heartbleed: leaking random (Cstruct) buffers
> on every packet.

It might be worth having Netif just copy everything into a pool of
pre-shared pages. As well as improving security, that would save the
time spent granting and revoking pages.

Interestingly, it wouldn't add any performance overhead in this case
because copying the data in Netif would simply avoid the need for a
similar copy in TLS.
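
Roughly the shape I have in mind (just a sketch; the Pool module,
write_via_pool and xmit below are all invented, not anything that
exists in netif.ml today):

(* Sketch only: a hypothetical pool of pre-granted single pages that
   Netif could copy outgoing frames into, instead of granting the
   caller's buffers on every transmit. *)
module Pool : sig
  type t
  val create : pages:int -> t
  val with_page : t -> (Cstruct.t -> 'a Lwt.t) -> 'a Lwt.t
end = struct
  (* free list of single, page-aligned pages *)
  type t = Cstruct.t list ref

  let create ~pages =
    let rec alloc n acc =
      if n = 0 then acc
      else alloc (n - 1) (Io_page.to_cstruct (Io_page.get 1) :: acc)
    in
    ref (alloc pages [])

  let with_page t f =
    match !t with
    | [] -> Lwt.fail_with "page pool exhausted"
    | page :: rest ->
      t := rest;
      Lwt.finalize
        (fun () -> f page)
        (fun () -> t := page :: !t; Lwt.return_unit)
end

(* Transmit by copying [frame] into a pooled page rather than granting
   whatever buffer it happens to live in.  [xmit] stands for the
   low-level send operation; assumes the frame fits in one page. *)
let write_via_pool pool xmit (frame : Cstruct.t) =
  Pool.with_page pool (fun page ->
      Cstruct.blit frame 0 page 0 (Cstruct.len frame);
      xmit (Cstruct.sub page 0 (Cstruct.len frame)))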

The current path the data takes when downloading from my file queue is:

- Block.read reads the data into a (multi-page) Io_page.
- I call Cstruct.to_string to copy the data into strings for
Cohttp_lwt_body.of_stream.
- cohttp writes each string to Channel.write_string, which allocates
some Io_pages and copies the string into those in page-sized chunks.
- Channel flushes the list of io-page-backed cstructs to conduit,
which forwards them to Tls_mirage.
- TLS doesn't know they're io-pages, so it allocates a new multi-page
Io_page and copies into that (conv_io).
- TLS passes the multi-page Io_page into conduit again, which forwards
it to TCP.
- TCP makes a series of MTU-sized views onto the data and passes this
list of cstructs to IP.
- IP adds an IP header buffer to the front of this list and forwards to Ethif.
- Ethif forwards to Netif.
- Netif forwards those views that lie within the first page of their
underlying buffer to Xen and rejects the rest.
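
For reference, the splitting that Ethif or Netif would have to do is
roughly the following. This is only a sketch of the idea, not code from
tcpip, and it assumes the underlying buffer is itself page-aligned:

(* Sketch: split a Cstruct view into fragments that never cross a page
   boundary of the underlying buffer, so each fragment can be granted
   as a single (page, offset, length) triple.  Hypothetical helper; it
   relies on the underlying Bigarray starting on a page boundary, which
   is what gnttab_stubs.c checks. *)
let page_size = 4096

let split_at_page_boundaries (buf : Cstruct.t) : Cstruct.t list =
  let rec go acc buf =
    if Cstruct.len buf = 0 then List.rev acc
    else
      (* how far into its current page this view starts *)
      let off_in_page = buf.Cstruct.off mod page_size in
      let n = min (page_size - off_in_page) (Cstruct.len buf) in
      go (Cstruct.sub buf 0 n :: acc) (Cstruct.shift buf n)
  in
  go [] buf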

As a first step, would it be worth changing Io_page to have separate
types "a single page of RAM" and "a sequence of pages"? This seems to
be causing some confusion.
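
Something like this is what I mean (the names are invented; it's a
sketch of the distinction, not a concrete API proposal):

module type IO_PAGE = sig
  type page      (* exactly one page of RAM, page-aligned *)
  type pages     (* a contiguous, page-aligned run of pages *)

  val get_page   : unit -> page
  val get_pages  : int -> pages
  val pages_of   : pages -> page list   (* split a run into single pages *)
  val to_cstruct : pages -> Cstruct.t
end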

> IIRC the blkfront code expects the sectors to be page aligned.
>
>
>> gnttab_stubs.c checks that the underlying Bigarray starts on a page boundary.
>>
>> netif.ml checks that the cstruct's off + len <= page_size.
>
> I think we discovered experimentally that netback didn't like it if it
> crossed a boundary.
>
>>
>> So from this, it seems that a "page-aligned memory buffer" means a
>> page-aligned buffer no larger than a page.
>>
>>
>> 2. Should Ethif split requests that cross page boundaries into
>> multiple requests to Netif? Or do the APIs need changing?
>>
>>
>> 3. Where should buffering happen? Between HTTP and TLS (as now), or
>> between TLS and TCP?
>>
>>
>> 4. Should we propagate buffer sizes backwards somehow, so that TCP can
>> suggest to TLS to send data as TCP-sized chunks of data within a
>> single Io_page?
>
> Sometimes the drivers are able to process pages which you've allocated
> yourself, while in other cases they'd prefer to do the allocation from a
> fixed pool.
> For example blkfront with persistent grants, blkfront in userspace and vchan.
> Perhaps we need to add some kind of allocator abstraction?
>
> Cheers,
> Dave
>
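
An allocator abstraction might be as small as something like this
(again just a sketch; the names are invented):

(* A driver that wants buffers to come from its own fixed pool
   implements this itself, while drivers that accept any page-aligned
   memory could use a default Io_page-backed instance. *)
module type ALLOCATOR = sig
  type buffer = Cstruct.t
  val alloc : int -> buffer      (* a writable buffer of at least that many bytes *)
  val release : buffer -> unit   (* hand it back to the pool, if there is one *)
end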



-- 
Dr Thomas Leonard        http://0install.net/
GPG: 9242 9807 C985 3C07 44A6  8B9A AE07 8280 59A5 3CC1
GPG: DA98 25AE CAD0 8975 7CDA  BD8E 0713 3F96 CA74 D8BA

_______________________________________________
MirageOS-devel mailing list
MirageOS-devel@xxxxxxxxxxxxxxxxxxxx
http://lists.xenproject.org/cgi-bin/mailman/listinfo/mirageos-devel

 

