So my team and I have bought into Docker - it is fantastic for deployment and testing. My real question is how to set up a great developer experience, specifically around writing Python apps, but this question could be generalized to nodejs, Java, etc.
The problem: when writing a Python app, I really like having decent linting/autocomplete functionality. There are some really good editors out there (Atom, VSCode, PyCharm) that provide this, but most of them really want a Python install on the local disk. The real advantage of Docker is that the core language and all of the project libraries can live in the container, so reproducing all of that on the host machine just for development is a pain.
I know that PyCharm Pro does support Docker and docker-compose, but I found it quite sluggish and a lot of the test-running capabilities were broken. On top of that, I would really like something that I can commit to version control, so that the team can share the dev setup and people don't have to repeat all of the steps on their own systems.
A few ideas that I had were:
Install an editor (like Atom) in a sidecar Docker container and use X11 forwarding
Use a browser based editor such as https://c9.io/ in a container - this seems most promising
Install some agent in a dev container that could handle autocomplete/linting, etc. and connect to it from a locally running editor - I think this would be the best solution, but I also think that right now it actually doesn't exist.
Has anyone had luck setting up a more productive development environment besides just mounting volumes and editing text?
You should use an 'advanced' IDE like IntelliJ (PyCharm) and configure a remote Python SDK using SSH access to your app's Docker container (using a shared SSH key to authenticate against the app container, with a preinstalled OpenSSH server and a preconfigured authorized_keys file).
You can share this SDK configuration in your project file with all devs, so they will have this setup out of the box.
1) This ensures your IDE knows about all the Python libraries and symbols available/installed in your Docker container at runtime. It also enables you to debug remotely at the same time.
2) This ensures you have a full IDE at hand, including a lot of important additional features like the inspector, 3-way diff, and search in path. Hardly any of the browser-based IDEs will catch up with PyCharm on this point, IMHO.
Of course, as already mentioned in the comments, you need to share (i.e. mount) your code into the container. On Linux, you simply use host volume mounts from your local src folder into the container.
On OSX, you will run into performance issues when using host mounts. You might use something like http://docker-sync.io (I am biased - there are also a lot of other similar tools).
I know this is an old question, but as I stumbled across it while trying to see what other editors might offer in this space, I would like to point out Visual Studio Code's notion of a Dev Container, which seems to provide the best level of integration I've seen for this so far. I'm hoping to see this turn into an industry trend myself.
You could use x11docker.
x11docker allows you to run graphical desktop applications (and entire desktops) in Docker Linux containers.
Docker allows you to run applications in an isolated container environment. Containers need far fewer resources than virtual machines for similar tasks.
However, Docker does not provide a display server that would allow applications with a graphical user interface to run.
x11docker fills that gap. It runs an X display server on the host system and provides it to Docker containers.
Additionally, x11docker does some security setup to enhance container isolation and to avoid X security leaks. This provides a sandbox environment that fairly well protects the host system from possibly malicious or buggy software.
https://github.com/mviereck/x11docker
https://github.com/mviereck/x11docker/wiki (extensive documentation)
https://dev.to/brickpop/my-dream-come-true-launching-gui-docker-sessions-with-dx11-in-seconds-1a53
Related
For the last six months I have been working on a Python GUI application that I will use at work. Specifically, my GUI will run on a couple of supercomputer clusters that I use for work.
However, I mostly develop the software on my personal computer, and there I do not have direct access to the commands that my GUI will call, since the GUI uses subprocess to call commands that are only available on the computing cluster.
So, in order to develop the program efficiently, I often have to copy the directory containing all of the GUI's files to the cluster. Then I test my current version there, locate the bugs, fix them by editing the files on the cluster, and finally copy all of the files back to my computer, overwriting the old version.
This just seems like a bad way of doing it, but I have to be able to test my software in the environment it is made for in order to find my bugs.
Surely this is a common problem in software development... What do actual programmers do (as opposed to hobby programmers such as myself)?
Edit:
Examples of commands that are only available on the computing cluster, that I make heavy use of, are squeue, sacct, and scontrol (SLURM related commands).
Edit2:
I could mention that I tested using SSH connections from Python, but it slowed down the commands significantly, since an SSH connection had to be established for each command I wanted to run. Unless I can set up a lasting SSH session, as in logging in once when my program opens, I don't think the SSH approach will work.
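A lasting session is exactly what a library such as paramiko can give you (paramiko is my suggestion here, not something from the original post): open one connection when the GUI starts and reuse it for every SLURM command. A minimal sketch, with the host name and user name as placeholders:

import paramiko

class ClusterConnection:
    """Keeps a single SSH connection to the cluster open for the GUI's lifetime."""

    def __init__(self, host, user):
        self.client = paramiko.SSHClient()
        self.client.load_system_host_keys()
        self.client.connect(host, username=user)  # connect once, at startup

    def run(self, command):
        # Reuses the already-open connection, e.g. run("squeue -u myuser")
        stdin, stdout, stderr = self.client.exec_command(command)
        return stdout.read().decode()

    def close(self):
        self.client.close()

# cluster = ClusterConnection("login.cluster.example.com", "myuser")
# print(cluster.run("sacct --format=JobID,State"))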
Explore the concepts that make Vagrant a popular choice for developers
Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and a focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the "works on my machine" excuse a relic of the past.
Your use case is covered by a couple of Vagrant boxes that create a SLURM cluster for development purposes. A good starting point might be:
Example slurm cluster on your laptop (multiple VMs via vagrant)
If you understand and can set up your development environment with tools like Vagrant, you might next explore which options modern code editors or integrated development environments (IDEs) offer for remote development. Remote development covers some other use cases that might fit into your developer toolbox as well.
A "good enough", free and open source code editor for Python development is Visual Studio Code. According to the docs it has powerful features for remote development.
Visual Studio Code Remote Development allows you to use a container, remote machine, or the Windows Subsystem for Linux (WSL) as a full-featured development environment.
Read the docs
VS Code Remote Development
As part of an effort to make the scikit-image examples gallery interactive, I would like to build a web service that receives a Python code snippet, executes it, and provides me with the generated output image.
For safety, the Python instances launched should be sandboxed and resource controlled, so I was thinking of using LXC containers.
Is this a good way to approach the problem? If so, what is the recommended way of launching one Python VM per request?
Stefan, perhaps "Docker" could be of use? I get the impression that you could constrain the VM that the application is run in -- an example web service:
http://docs.docker.io/en/latest/examples/python_web_app/
You could try running the application on Digital Ocean, like so:
https://www.digitalocean.com/community/articles/how-to-install-and-use-docker-getting-started
[disclaimer: I'm an engineer at Continuum working on Wakari]
Wakari Enterprise (http://enterprise.wakari.io) is aiming to do exactly this, and we're hoping to back-port the functionality into Wakari Cloud (http://wakari.io), so that "published" IPython Notebooks can have some knobs on them for variable input control; they can then be "invoked" in a sandboxed state and the output handed back to the user.
However for things that exist now, you should look at Sage Notebook. A few years ago several people worked hard on a Sage Notebook Cell Server that could do exactly what you were asking for: execute small code snippets. I haven't followed it since then, but it seems it is still alive and well from a quick search:
http://sagecell.sagemath.org/?q=ejwwif
http://sagecell.sagemath.org
http://www.sagemath.org/eval.html
For the last URL, check out Graphics->Mandelbrot and you can see that Sage already has some great capabilities for UI widgets that are tied to the "cell execution".
I think Docker is the way to go for this. The instances are very lightweight, and Docker is designed to spawn hundreds of instances at a time (spin-up time is a fraction of a second, versus a couple of seconds for traditional VMs). Configured correctly, I believe it also gives you a completely sandboxed environment, so you don't have to worry about trying to sandbox Python itself. :-D
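As a rough illustration (not a hardened implementation), the web service could shell out to the docker CLI with resource limits and a timeout; the image name, the limits, and the convention that the snippet writes its figure to /work/out.png are all assumptions here:

import os
import subprocess
import tempfile

def run_snippet(code, timeout=10):
    """Run an untrusted snippet in a throwaway container and return the PNG it produced."""
    with tempfile.TemporaryDirectory() as workdir:
        with open(os.path.join(workdir, "snippet.py"), "w") as f:
            f.write(code)
        # --rm deletes the container afterwards; --network none plus the
        # memory/cpu flags limit what the snippet can do.
        subprocess.run(
            ["docker", "run", "--rm", "--network", "none",
             "--memory", "256m", "--cpus", "0.5",
             "-v", workdir + ":/work",
             "python:3", "python", "/work/snippet.py"],
            timeout=timeout, check=True,
        )
        # Assumed convention: the snippet saves its output image as /work/out.png.
        with open(os.path.join(workdir, "out.png"), "rb") as f:
            return f.read()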
I'm not sure if you really have to go as far as setting up LXC containers:
There is seccomp-nurse, a Python sandbox that leverages the seccomp feature of the Linux kernel.
Another option would be to use PyPy, which has explicit support for sandboxing out of the box.
In any case, do not use pysandbox, it is broken by design and has severe security risks.
The more I read, the worse it gets... I am starting out with Python and I cannot make up my mind about how to set up my dev environment. I want to use Python and Django to build web applications.
Ideally, I'd love to use an IDE on Win7 (which would help with tooltips and help on methods, classes, etc.) and have the web app run on a virtual Linux machine (I need Apache + MySQL). I have downloaded the TurnKey Linux appliance for Django and it seems to work fine.
So, in the end, it is unclear to me whether people here are recommending that I edit my code on the same machine where the app runs. I'd prefer to code on the Win7 machine, publish the app/files on the fly to the Linux virtual box, and then access the app via the browser.
This is the setup for my current php project at work and I think it works perfectly.
Please clarify if people normally code and run their web apps all on one machine only or not.
Thank you!
I don't know how it can be unclear to you "if people here are recommending to edit my code on the same machine where the app runs". The easy answer is no, no way, never, ever. There can be no ambiguity about that.
Edit: Ah, apologies for the misunderstanding - you clarify in the comments that you're not talking about your production environment. In that case, yes, it is a perfectly good idea - even preferable - to edit on the same machine that your development app runs on. There's no reason not to, and it makes life a whole lot easier.
Note you shouldn't really use Apache in development: it requires a lot of configuration, and doesn't automatically reload after code changes without even more configuration. Use the development server. And in case you were concerned, all of this runs perfectly well on a Windows machine.
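For reference (this is standard Django, nothing project-specific), the development server is started from the project directory with a single command; binding to 0.0.0.0 makes it reachable from a browser on the host if you do end up running it inside a VM:

python manage.py runserver 0.0.0.0:8000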
I agree with the comment. Personally, I use Aptana Studio (http://www.aptana.com/) alongside GIT to have local version control (integrates well into Aptana). From there on, it is easy to either deploy locally or push the changes to a remote GIT repo.
I am working on a project where I am quite comfortable running linux, virtualenv, pip, manage.py runserver, git and so on for back-end development. I work with a front-end developer who needs to collaborate remotely, currently via a Dropbox synced copy of the codebase (also in a git branch) on Windows. A development server on my side lets the developer see their changes semi-live.
Although this has served us fairly well so far, has anyone come across a similar working arrangement with a better setup for collaboration?
I'm mindful that the source control learning curve and the environment management overhead are potentially significant and somewhat unnecessary for front-end work (as long as I commit from time to time). I'm considering a VM-based setup such as BitNami's DjangoStack so that the front-end dev has their own server setup, but I thought I'd ask about other experiences.
I would recommend Vagrant not only for quick development setups (which it excels at), but also for sharing VM configurations, since you can publish your own Vagrantfile for your designer to use.
It relies on VirtualBox, Oracle's open source hypervisor, and is available for free on all major platforms.
I have been in a very similar situation before, Rog, where the backend was a Ruby on Rails setup running on *nix and the frontend guy needed Windows. We initially set up Windows + Apache + MySQL + git + RoR (using Cygwin and other tools), but installing our app libraries and gems eventually became a pain on the Windows setup (any time we introduced a new gem - or app, in Django terms - the setup would break on Windows). In the end we made the front-end guy work on a *nix setup.
andLinux is extremely useful in these situations: it lets you run a seamless install of Linux within a Windows 2000 setup, so the front-end guy can still use his Windows tools. It is not like a dual boot; both operating systems run at the same time. Have a look into it.
We're writing a web-based tool to configure our services, which are provided by multiple servers. This includes interface configuration, DHCP configs, etc.
With the configs in a database and views that generate the proper output, how do we send that output to the servers, or otherwise make it available to them?
I'm thinking about sending it through scp and invoking a reload command on the services through SSH. I'm also thinking about using Func to do the whole job, since it is a Python tool and will seemingly integrate well with our Python-based (Django) config tool.
Any other proposals?
I tried using Puppet for config management, mostly because of all the buzz around it. Unfortunately, I discovered (too late) that the puppetmaster scales horribly, and does not handle heterogeneous environments well. It works for tens of servers, but its inherent architecture prevents scaling.
So I switched to Cfengine 3, which has barely any noticeable performance impact and scales much better because of its distributed architecture. Also, I later discovered that Puppet is essentially an attempt to reimplement Cfengine 2, inefficiently, in Ruby. See http://verticalsysadmin.com/blog/uncategorized/relative-origins-of-cfengine-chef-and-puppet
If your setup is going to be used for something useful, and not just to play around with, go with Cfengine 3!
You can take a look at Fabric.
As an example, this is an adapted excerpt from one of my backup scripts, which starts a Mercurial server on a remote host and pushes local changesets to it:
from fabric.api import *

env.hosts = ['login@my.host.com']

def mybckp():
    run('cd ~/somedir; hg serve -a 111.222.111.222 -d')  # start Mercurial server in daemon mode
    local('hg push')  # push local changesets
To execute it, I simply type:
fab mybckp
Basically, what Fabric offers is easy and convenient SSH access to the shell of one or more (remote) hosts from inside a Python script.
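Applied to your config-distribution case, a Fabric task could push the rendered config and reload the service over the same SSH channel. This is only a sketch; the host list, file paths, and service name below are placeholders, not anything from your setup:

from fabric.api import env, put, sudo

env.hosts = ['admin@server1.example.com', 'admin@server2.example.com']

def deploy_dhcp():
    # copy the config generated by the Django views to each server
    put('generated/dhcpd.conf', '/etc/dhcp/dhcpd.conf', use_sudo=True)
    # then reload the service so it picks up the new config
    sudo('service isc-dhcp-server restart')

Running "fab deploy_dhcp" would then apply it to every host in env.hosts.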
I think you are looking for Puppet and Foreman to manage puppet (create groups of servers).
There are many ways to do this, including Chef, Bcfg2, Capistrano, etc. Puppet has the biggest "lead" right now. There is definitely a learning curve, but the results are worth it.
You could keep your servers' config files on the Puppet master (in version control). When you deploy the latest config files to the master, the Puppet clients can automatically pull them and restart services. Puppet "templates" can dynamically generate config files for each server.
Puppet has "providers" for things like packages (apt, yum) and files, as well as OS awareness.
It really depends what you're intending to do, as the question is a little vague. The other answers cover the tools available; choosing one over the other comes down to purpose.
Are you intending to manage servers, and services on those servers? If so, try Puppet, CFEngine, or some other tool for managing server configurations.
Or, more specifically, are you looking for a deployment/buildout tool that talks to servers? So that you can type in something along the lines of "mytool deploy myproject", and have your project propagate to all the servers? In which case, fabric would be the tool to use.
Generally a good configuration will consist of both anyway... but for what it's worth, from the sound of it (managing DHCP/network/etc.), Puppet's the way to go.