I want to write some Python scripts to create an "Appliance" with VirtualBox. However, I can't find any documentation on making calls to VBoxService.exe. Well, I've found things that work from OUTSIDE the machine, but nothing that works from inside the machine.
Does anyone know anything about this? If there's a library for another language like C I'd be okay with it, though Python would be heavily preferred.
Consider using libvirt. The VirtualBox support is bleeding-edge (not in any release, may not even be in source control yet, but is available as a set of patches on the mailing list) -- but this single API, available for C, Python and several other languages, lets you control virtual machines and images running in Qemu/KVM, Xen, LXC (Linux Containers), UML (User-Mode Linux), OpenVZ and others.
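To give a flavour of the Python bindings, here is a minimal sketch (assuming a reasonably recent libvirt with its Python module installed; the connection URI "vbox:///session" targets the VirtualBox driver, while "qemu:///system" would target qemu/KVM):

import libvirt

conn = libvirt.open("vbox:///session")  # VirtualBox driver; adjust the URI to your backend
try:
    for dom in conn.listAllDomains():
        state, _reason = dom.state()
        running = (state == libvirt.VIR_DOMAIN_RUNNING)
        print(dom.name(), "running" if running else "not running")
finally:
    conn.close()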
I build and administer virtual appliances (in an automated QA context) using libvirt with the qemu/KVM backend, and it meets my needs very well.
libvirt can be configured to allow remote access (such as controlling or querying VBoxService or libvirtd from within one of the VMs, which you appear to want to do -- though I question the wisdom and utility), with numerous authentication and transport options available.
[Caveat: libvirt principally targets Unix-like operating systems; it can be built for win32, but YMMV]
I would like to develop some Python 3.6 software. The problem is that the software would run on hundreds of uniquely configured build environments that may or may not have Python installed and have no access to the internet or PyPI. The machines are a mix of Windows and SUSE. It's important not to mess with the build environment, so I would like to package my software with an isolated Python environment that contains all the dependencies.
I'm having difficulty finding a solution that meets my criteria.
I've come across Python virtual environments, but they do not include an interpreter and are not really intended to be copied around.
Another person on Stack Overflow recommended PEX; it looks perfect but does not seem to be compatible with Windows.
I have also thought about making the software a statically linked binary using Cython. But again, to my knowledge this still requires the correct Python to be installed and has to use pure Python.
https://pyoxidizer.readthedocs.io/en/latest/comparisons.html has a comparison of various solutions in this space. It looks like, if you need a cross-platform solution that doesn't require the target systems to be pre-configured (e.g. with a particular version of Python pre-installed), your options are PyInstaller, PyOxidizer and Docker.
PyInstaller is more established, while PyOxidizer claims to have faster startup.
I'd expect Docker to be the least problematic if you have complex dependencies. It must be preinstalled on the target systems, but the build environments will probably already have it installed. Obviously it comes with more overhead.
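As a rough sketch of the PyInstaller route (the entry point app.py is only an illustrative name, and PyInstaller must be installed in the build environment), the build can be driven from Python itself:

import PyInstaller.__main__

# Bundle the (hypothetical) entry point into a single self-contained executable.
PyInstaller.__main__.run([
    "app.py",
    "--onefile",    # one executable carrying its own interpreter
    "--noconfirm",  # overwrite previous build output without prompting
])

The resulting binary ends up under dist/ and needs no Python installation on the target machine.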
To offer interactive examples about data analysis, I'd like to embed an interactive Python shell. It does not necessarily have to be a real Python shell. Users will be given tasks that they can execute in the shell. This is similar to existing tutorials, as seen on, e.g., http://www.codecademy.org, but I'd like to work with libraries that, as far as I understand, those solutions do not offer.
In order to get a real shell on the website, I think of two approaches:
I found projects like http://www.repl.it, but it seems rather difficult to include the necessary libraries like SciPy, NumPy, and Pandas. In addition, user input has to be validated and I'm not sure whether that works with those shells I found.
I could pipe the commands through a web application to a Python installation on my server, but I'm scared of using eval() on foreign, arbitrary code. Is there a safe mode for Python? I found http://www.pypy.org. Although they offer a Python sandbox, unfortunately, they do not support the libraries I need.
Alternatively, I thought of just embedding a "fake shell" that I would build to mimic the behaviour of the functions I want to explain. Of course, this would mean more work, as I would have to write a fake interface, but for now it seems to be the only possibility.
I hope that this question is not too generic; I'm looking for either a good HTML/JS library that helps me put a fake shell on my website or a library/service/software that can embed a real Python shell with the required modules installed.
There is no way to run untrusted Python safely; Python's dynamic nature allows for too many ways to break through any protective layers you could care to think of.
Instead, run each session on a new virtual machine, properly locked down (firewalled, unprivileged user), which you shut down after a hard time limit. New sessions get a new, clean virtual machine.
This isolates you from any malicious code that might run and try to break out of a sandbox; a good virtual machine is hardware-isolated by the processor from the host OS, something a Python-only layer could never achieve.
This process is sometimes called sandboxing.
You can find some good information on the Python wiki.
There are basically three options available:
machine-level mechanisms (such as a VM, as Martijn Pieters suggested)
OS-level mechanisms (such as a chroot or SELinux)
custom interpreters, such as pypy (which has sandboxing capabilities, as you mentioned), or Jython, where you may be able to use the Java security manager or applet mechanisms.
You may also want to check Restricted Python, which is especially useful for very restricted environments, but security will depend on its configuration.
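A minimal sketch of that route, assuming the RestrictedPython package is installed; what the untrusted code may actually do depends entirely on the builtins and guards you expose:

from RestrictedPython import compile_restricted, safe_globals

source = "result = 6 * 7"  # stands in for the untrusted code
byte_code = compile_restricted(source, filename="<untrusted>", mode="exec")

namespace = dict(safe_globals)  # copy, so the shared defaults are not mutated
exec(byte_code, namespace)
print(namespace["result"])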
Ultimately, your choice of solution will depend on what you want to restrict:
Filesystem access? Block everything, or allow certain directories?
Network access, such as sockets?
Arbitrary system calls?
Here's the deal. I am developing a framework whose sole users have extremely messed up python installations on their servers (Linux). They all have multiple versions of Python on their servers and their PYTHONHOME and PYTHONPATH variables are pointing to different versions.
Since my framework will require Python 2.6, I thought that a safe way to distribute my application might be to bundle a pre-compiled version of Python with it. To test this theory out, I downloaded ActivePython and bundled all the necessary files with my application. My main script uses the shebang #!/vendor/ActivePython2.6/bin/python.
So far, I have tested the framework on different server distributions and on different people's servers, and it seems to have worked with no problems (yet).
My question is: are there any problems with doing this, and are there any alternatives?
I'd recommend against it. You'll run into problems between 32-bit and 64-bit versions, between different libc versions, with noexec locations, with wrong SELinux/AppArmor profiles for custom paths, and many other potential issues...
Unless you're planning to release (and test!) a package for each separate distribution, architecture and version, I'd say you're creating problems for yourself. The alternative is, of course, to provide both versions: provide the framework alone by default and make the static Python package available in case of problems.
Is it possible to create an environment to safely run arbitrary Python scripts under Linux? The scripts will be received from untrusted people and may be too large to check manually.
A very brute-force solution is to create a virtual machine and restore its initial state after every launch of an untrusted script. (Too expensive.)
I wonder if it's possible to restrict Python from accessing the file system and interacting with other programs and so on.
Consider using a chroot jail. Not only is this very secure, well supported and tested, but it also applies to external applications you run from Python.
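A rough sketch of driving that from Python (Unix only, started as root; the jail path /srv/jail and the uid/gid 65534 are illustrative, and the jail must already contain a Python interpreter and its libraries):

import os
import subprocess

def run_jailed(script_inside_jail, jail="/srv/jail", uid=65534, gid=65534):
    def enter_jail():
        os.chroot(jail)   # confine the child process to the jail directory
        os.chdir("/")     # make sure the working directory is inside the jail
        os.setgid(gid)    # drop group privileges first...
        os.setuid(uid)    # ...then user privileges
    # Both the interpreter and the script path are resolved inside the jail.
    return subprocess.run(["/usr/bin/python3", script_inside_jail],
                          preexec_fn=enter_jail, timeout=30)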
There are 4 things you may try:
As you already mentioned, using a virtual machine or some other form of virtualisation (perhaps Solaris zones are lightweight enough?). If the script breaks the OS there, then you don't care.
Using chroot, which puts a shell session into a virtual root directory, separate from the main OS root directory.
Using systrace. Think of this as a firewall for system calls.
Using a "jail", which builds upon systrace, giving each jail it's own process table etc.
Systrace has been compromised recently, so be aware of that.
You could run Jython and use the sandboxing mechanism of the JVM. The JVM's sandboxing is very strong, well understood and more or less well documented. It will take some time to define exactly what you want to allow and what you don't want to allow, but you should be able to get very strong security from that...
On the other hand, Jython is not 100% compatible with CPython...
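A rough sketch of the hook, runnable under Jython only; the real work is writing a suitable Java policy file, which this does not show:

from java.lang import System, SecurityManager

# Once a security manager is installed, file, socket and process access are
# checked against the JVM's policy; most sensitive operations are denied
# unless the policy explicitly grants them.
System.setSecurityManager(SecurityManager())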
Try searching for "sandboxing python", e.g.:
http://wiki.python.org/moin/SandboxedPython
http://wiki.python.org/moin/How%20can%20I%20run%20an%20untrusted%20Python%20script%20safely%20(i.e.%20Sandbox)
Could you not just run it as a user who has no access to anything but the scripts in that directory?
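A rough sketch of that approach (Unix, Python 3.9+ for the user/group arguments, and the parent process needs enough privilege to switch users; "nobody"/"nogroup" are common but distribution-dependent account names, and the script path is purely illustrative):

import subprocess

result = subprocess.run(
    ["python3", "/sandbox/untrusted_script.py"],
    user="nobody", group="nogroup",  # drop to an unprivileged account
    timeout=30,                      # enforce a hard time limit
    capture_output=True,
)
print(result.returncode, result.stdout)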
I am a member of a team that is about to launch a beta of a python (Django specifically) based web site and accompanying suite of backend tools. The team itself has doubled in size from 2 to 4 over the past few weeks and we expect continued growth for the next couple of months at least. One issue that has started to plague us is getting everyone up to speed in terms of getting their development environment configured and having all the right eggs installed, etc.
I'm looking for ways to simplify this process and make it less error-prone. Both zc.buildout and virtualenv look like they would be good tools for addressing this problem, but both seem to concentrate primarily on the Python-specific issues. We have a couple of small subprojects in other languages (Java and Ruby specifically) as well as numerous Python extensions that have to be compiled natively (lxml, MySQL drivers, etc.). In fact, one of the biggest thorns in our side has been getting some of these extensions compiled against appropriate versions of the shared libraries so as to avoid segfaults, malloc errors and all sorts of similar issues. It doesn't help that out of 4 people we have 4 different development environments -- 1 Leopard on PPC, 1 Leopard on Intel, 1 Ubuntu and 1 Windows.
Ultimately what would be ideal would be something that works roughly like this, from the dos/unix prompt:
$ git clone [repository url]
...
$ python setup-env.py
...
that then does what zc.buildout/virtualenv does (copy/symlink the Python interpreter, provide a clean space to install eggs), then installs all required eggs, including any native shared-library dependencies, and installs the Ruby project, the Java project, etc.
Obviously this would be useful for both getting development environments up as well as deploying on staging/production servers.
Ideally I would like for the tool that accomplishes this to be written in/extensible via python, since that is (and always will be) the lingua franca of our team, but I am open to solutions in other languages.
So, my question then is: does anyone have any suggestions for better alternatives or any experiences they can share using one of these solutions to handle larger/broader install bases?
Setuptools may be capable of more of what you're looking for than you realize -- if you need a custom version of lxml to work correctly on MacOS X, for instance, you can put a URL to an appropriate egg inside your setup.py and have setuptools download and install that inside your developers' environments as necessary; it also can be told to download and install a specific version of a dependency from revision control.
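A rough sketch of one way to do that with setuptools' dependency_links (era-appropriate, though since deprecated in modern setuptools/pip); the project name, version pin and egg URL below are placeholders:

from setuptools import setup, find_packages

setup(
    name="ourproject",                   # placeholder project name
    version="0.1",
    packages=find_packages(),
    install_requires=["lxml==2.1.5"],    # pin the version you have tested
    dependency_links=[
        # URL of a pre-built egg for the platform that needs it
        "http://example.com/eggs/lxml-2.1.5-py2.5-macosx-10.5-i386.egg",
    ],
)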
That said, I'd lean towards using a scriptably generated virtual environment. It's pretty straightforward to build a kickstart file which installs whichever packages you depend on and then boot virtual machines (or production hardware!) against it, with puppet or similar software doing other administration (adding users, setting up services [where does your database come from?], etc). This comes in particularly handy when your production environment includes multiple machines -- just script the generation of multiple VMs within their handy little sandboxed subnet (I use libvirt+kvm for this; while kvm isn't available on all the platforms you have developers working on, qemu certainly is, or you can do as I do and have a small number of beefy VM hosts shared by multiple developers).
This gets you out of the headaches of supporting N platforms -- you only have a single virtual platform to support -- and means that your deployment process, as defined by the kickstart file and puppet code used for setup, is source-controlled and run through your QA and review processes just like everything else.
I always create a develop.py file at the top level of the project, and also have a packages directory with all of the .tar.gz files from PyPI that I want to install, plus an unpacked copy of virtualenv that is ready to run in place. All of this goes into version control. Every developer can simply check out the trunk, run develop.py, and a few moments later will have a virtual environment ready to use that includes all of our dependencies at exactly the versions the other developers are using. And it works even if PyPI is down, which is very helpful at this point in that service's history.
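A rough sketch of what such a develop.py can look like (the directory names env, virtualenv-dir and packages are illustrative, not this answer's actual layout, and on Windows the pip executable lives under Scripts rather than bin):

import glob
import os
import subprocess
import sys

HERE = os.path.dirname(os.path.abspath(__file__))
ENV = os.path.join(HERE, "env")

# Create the environment using the bundled, unpacked copy of virtualenv.
subprocess.check_call([sys.executable,
                       os.path.join(HERE, "virtualenv-dir", "virtualenv.py"), ENV])

# Install every pinned tarball shipped in packages/, entirely offline.
pip = os.path.join(ENV, "bin", "pip")
for tarball in sorted(glob.glob(os.path.join(HERE, "packages", "*.tar.gz"))):
    subprocess.check_call([pip, "install", "--no-index", tarball])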
Basically, you're looking for a cross-platform software/package installer (along the lines of apt-get/yum/etc.). I'm not sure something like that exists.
An alternative might be specifying the list of packages that need to be installed via the OS-specific package management system such as Fink or DarwinPorts for Mac OS X and having a script that sets up the build environment for the in-house code?
I have continued to research this issue since I posted the question. It looks like there are some attempts to address some of the needs I outlined, e.g. Minitage and Puppet which take different approaches but both may accomplish what I want -- although Minitage does not explicitly state that it supports Windows. Lacking any better options I will try to make either one of these or just extensive customized use of zc.buildout work for our needs, but I still feel like there must be better options out there.
You might consider creating virtual machine appliances with whatever production OS you are running, and all of the software dependencies pre-built. Code can be edited either remotely, or with a shared folder. It worked pretty well for me in a past life that had a fairly complicated development environment.
Puppet doesn't (easily) support the Win32 world either. If you're looking for a deployment mechanism and not just a "dev setup" tool, you might consider looking into ControlTier (http://open.controltier.com/), which has an open-source, cross-platform solution.
Beyond that you're looking at "enterprise" software such as BladeLogic or OpsWare and typically an outrageous pricetag for the functionality offered (my opinion, obviously).
A lot of folks have been aggressively using a combination of Puppet and Capistrano (even non-rails developers) for deployment automation tools to pretty good effect. Downside, again, is that it's expecting a somewhat homogeneous environment.