
Re: [Xen-devel] [PATCH] sHype access control architecture for Xen



On 6/21/05, Reiner Sailer <sailer@xxxxxxxxxxxxxx> wrote:
> This E-mail contains the sHype access control architecture
> for inclusion into the Xen hypervisor (xeno-unstable.bk).
> This is a follow-up on earlier postings:
> http://lists.xensource.com/archives/html/xen-devel/2005-04/msg00864.html
> 
> The *_xen.diff patch includes the core sHype access control
> architecture. Default is the NULL-policy.
> 
> The *_tools.diff patch includes the necessary additions to the
> tools directory:
>   a) adding support for an additional VM configuration parameter
>   b) adding basic policy management support into tools/policy
> 
> The default setting is the NULL policy. After patching in the diff-
> files, you should see no change in behavior. Please refer to the
> attached shype4xen_readme.txt file for instructions on how to
> activate and experiment with sHype.
> 
> While we have added support for saving and restoring security
> information when saving and restoring domains, the architecture
> currently supports save/restore only on the same hypervisor system
> running the same sHype policy. Future versions will include more
> flexible support for save/restore/migration.
> 
> Our group will submit a java-based policy translation tool for sHype to
> this mailing list today as well. This tool takes as input an XML-based
> description of user-defined sHype policies and translates them into a
> binary policy format that can be loaded into sHype.

Any plan to write the tool in another language, not Java? I guess not
many people (including me) are willing to install Java on their system.

Since Python is already used in Xen, I think it is a good candidate.
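For example, a minimal Python sketch of such a translator might look like the following. The XML schema, the `translate_policy` function, the `POLICY_MAGIC` constant, and the binary layout are all invented for illustration; the real sHype policy format is whatever the patch and the Java tool define.

```python
# Hypothetical sketch of an XML -> binary policy translator in Python.
# Schema and binary layout are made up for illustration only.
import struct
import xml.etree.ElementTree as ET

POLICY_MAGIC = 0x26021979  # invented magic number for this sketch


def translate_policy(xml_text):
    """Parse a toy XML policy and pack it into a binary blob."""
    root = ET.fromstring(xml_text)
    labels = [lbl.get("name").encode() for lbl in root.findall("label")]
    # Header: magic, format version, number of labels (network byte order).
    blob = struct.pack("!IHH", POLICY_MAGIC, 1, len(labels))
    # Each label: 2-byte length prefix followed by the raw name bytes.
    for name in labels:
        blob += struct.pack("!H", len(name)) + name
    return blob


example = """<policy>
  <label name="dom0"/>
  <label name="banking"/>
</policy>"""

blob = translate_policy(example)
```

Parsing and packing like this is all standard library, so there would be nothing extra to install.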

I will play with the code and give some feedback.

regards,
aq

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

