We're writing a web-based tool to configure services provided by multiple servers. This includes network interface configuration, DHCP configs, and so on.
With the configs stored in a database, and views that generate the proper output, how do we send that output to the servers, or otherwise make it available to them?
I'm thinking about sending it via scp and invoking a reload command on the services over ssh. I'm also considering using Func for the whole job, since it is a Python tool and should integrate nicely with our Python-based (Django) config tool.
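For illustration, the scp-and-reload idea would only be a few lines with an SSH layer like Fabric; the host names, paths and service name below are made-up placeholders, not our real setup:
# Sketch of the "push config and reload" idea; hosts, paths and the
# service name are placeholders for illustration only.
from fabric.api import env, put, sudo

env.hosts = ['admin@dhcp1.example.com', 'admin@dhcp2.example.com']  # placeholder targets

def push_dhcp_config(local_rendered_file='build/dhcpd.conf'):
    # upload the config rendered by the Django view, then reload the service
    put(local_rendered_file, '/etc/dhcp/dhcpd.conf', use_sudo=True)
    sudo('service isc-dhcp-server restart')
Func or plain paramiko would look much the same; the question is whether hand-rolling this is wise compared to an existing tool.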
Any other proposals?
I tried using Puppet for config management, mostly because of all the buzz around it. Unfortunately, I discovered (too late) that the puppetmaster scales horribly, and does not handle heterogeneous environments well. It works for tens of servers, but its inherent architecture prevents scaling.
So I switched to CFEngine 3, which has a barely noticeable performance impact and scales much better because of its distributed architecture. Also, I later discovered that Puppet is essentially an attempt to reimplement CFEngine 2, inefficiently, in Ruby. See http://verticalsysadmin.com/blog/uncategorized/relative-origins-of-cfengine-chef-and-puppet
If your setup is going to be used for something useful, not just played around with, go with CFEngine 3!
You can take a look at Fabric.
As an example, this is an adapted excerpt from one of my backup scripts; it starts a Mercurial server on a remote host and pushes local changesets to it:
from fabric.api import env, run, local

env.hosts = ['login@my.host.com']

def mybckp():
    run('cd ~/somedir; hg serve -a 111.222.111.222 -d')  # start a Mercurial server in daemon mode
    local('hg push')  # push local changesets
To execute it, I simply type:
fab mybckp
Basically, what Fabric offers is easy and convenient SSH access to the shell of one or more (remote) hosts, from inside a Python script.
I think you are looking for Puppet, plus Foreman to manage Puppet (create groups of servers, etc.).
There are many ways to do this, including Chef, Bcfg2, Capistrano, etc. Puppet has the biggest "lead" right now. There is definitely a learning curve, but the results are worth it.
You could keep your servers' config files on the Puppet master (in version control). When you deploy the latest config files to the master, the Puppet clients can automatically pull them and restart services. Puppet "templates" can dynamically generate a config file for each server.
Puppet has "providers" for things like packages (apt, yum) and files, and it has OS awareness.
It really depends on what you're intending to do, as the question is a little vague. The other answers cover the available tools; choosing one over another comes down to purpose.
Are you intending to manage servers, and services on those servers? If so, try Puppet, CFEngine, or some other tool for managing server configurations.
Or, more specifically, are you looking for a deployment/buildout tool that talks to servers? So that you can type something along the lines of "mytool deploy myproject" and have your project propagate to all the servers? In which case, Fabric would be the tool to use.
Generally a good configuration will consist of both anyway... but for what it's worth, from the sound of it (managing DHCP/network/etc.), Puppet's the way to go.
So my team and I have bought into Docker - it is fantastic for deployment and testing. My real question is how to set up a great developer experience, specifically around writing Python apps, but this question could be generalized to nodejs, Java, etc.
The problem: when writing a Python app, I really like having decent linting/autocomplete functionality. There are some really good editors out there (Atom, VS Code, PyCharm) that provide this, but most really want a Python install on the local disk. The real advantage of Docker is that the core language and all the project libraries can live in the container, so reproducing all of that on the host machine just for development is a pain.
I know that PyCharm pro does support Docker and docker-compose, but I found it quite sluggish and a lot of the test running capabilities were busted. On top of that, I really would like something that I can commit to version control so that the team can share dev setup and people don't have to repeat all of the steps for their own system.
A few ideas I had were:
Install an editor (like Atom) in a sidecar Docker container and use X11 forwarding
Use a browser based editor such as https://c9.io/ in a container - this seems most promising
Install some agent in a dev container that could handle autocomplete/linting, etc. and connect to it from a locally running editor - I think this would be the best solution, but I also think that right now it actually doesn't exist.
Has anyone had luck setting up a more productive development environment besides just mounting volumes and editing text?
You should use an 'advanced' IDE like IntelliJ (PyCharm) and configure a remote Python SDK using SSH access to your app's Docker container (using a shared SSH key to authenticate against the container, which has a preinstalled OpenSSH server and a preconfigured authorized_keys file).
You can share this SDK information in your project file with all the devs, so they will have this setup out of the box.
1) This ensures your IDE knows about all the Python libs/symbols available/installed in your Docker container at runtime. It also enables you to debug remotely at the same time.
2) This ensures you have a full IDE at hand, including a lot of important additional features like the inspector, 3-way diff, and search in path. Hardly any of the browser-based IDEs catch up with PyCharm on these points, IMHO.
Of course, as already mentioned in the comments, you need to share (i.e. mount) your code into the container. On Linux, you simply use host volume mounts from your local src folder into the container.
On OS X you will run into performance issues with host mounts. You might use something like http://docker-sync.io (I am biased; there are also a lot of other similar tools).
I know this is an old question, but as I stumbled across it while trying to see what other editors might offer in this space, I would like to point out Visual Studio Code's notion of a Dev Container, which seems to provide the best level of integration I've seen for this so far. I'm hoping to see this turn into an industry trend myself.
You could use x11docker.
x11docker allows you to run graphical desktop applications (and entire desktops) in Docker Linux containers.
Docker allows you to run applications in an isolated container environment. Containers need far fewer resources than virtual machines for similar tasks.
However, Docker does not provide a display server that would allow you to run applications with a graphical user interface.
x11docker fills the gap. It runs an X display server on the host system and provides it to Docker containers.
Additionally, x11docker does some security setup to enhance container isolation and to avoid X security leaks. This provides a sandbox environment that protects the host system fairly well from possibly malicious or buggy software.
https://github.com/mviereck/x11docker
https://github.com/mviereck/x11docker/wiki (extensive documentation)
https://dev.to/brickpop/my-dream-come-true-launching-gui-docker-sessions-with-dx11-in-seconds-1a53
We want to use continuous deployment.
We have:
all sources (Python) in a local RhodeCode (Git) server.
Jenkins for automated testing
SSH connections to the production systems (Linux).
a tool which can update servers in one command.
Now something like this should be implemented:
run the tests with Jenkins
if there is a failure: stop and mail the developers
if all tests pass: deploy
We have been in the business long enough to write some scripts to do this ourselves.
My questions:
How do you update the version numbers? You could increment them, or you could use a timestamp ...
Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?
My fear: Jenkins becomes a central server for things that are not related to testing (i.e. deployment). I think other tools like SaltStack or Ansible should be used for that. Up to now we use Fabric (a simple layer above ssh). Maybe we should switch to a central management system before starting with continuous deployment.
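To make the version-number question concrete, the timestamp variant we have in mind is nothing more than something like this (purely illustrative):
# purely illustrative: version = UTC timestamp plus short commit hash,
# so every deployment is uniquely identifiable and sortable
import subprocess
from datetime import datetime, timezone

def build_version():
    stamp = datetime.now(timezone.utc).strftime('%Y%m%d.%H%M%S')
    commit = subprocess.check_output(['git', 'rev-parse', '--short', 'HEAD']).decode().strip()
    return '{0}+{1}'.format(stamp, commit)
versus maintaining a counter somewhere that has to be incremented and committed.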
"Since we already use Jenkins, I think we do it in a script called by Jenkins. Any reason to do it with a different (better) tool?"
To answer your question: no, there aren't any big reasons not to go with Jenkins for deployment.
Pros:
You already know Jenkins (and you probably know some of the quirks)
You don't need to introduce yet another technology
You said that you want to write scripts called by Jenkins, so you can switch easily to a different system later.
Cons:
There might be better tools out there for deployment
Jenkins does not integrate all that well with change-control tools
Additional Considerations:
Do not use the same server for prod deployment and for continuous build/integration. These are two different tasks performed by two different roles, so two different permission schemes might be employed.
Use permissions wisely. I use different permission schemes for my deploy and CI servers. We have three Jenkins servers right now:
CI and deploys to uncontrolled environments (developers can play with these environments)
Deploys to controlled environments (QA environments and upwards)
Deploys to prod (yes, that's the only purpose in life of this server), with the most restrictive permission scheme
(Actually, there is a fourth server: a sandbox for Jenkins admins to play with.)
Store your deployable artifacts outside of Jenkins (and you do if I read your question correctly).
So, depending on your existing infrastructure and procedures, you decide on the tooling. Jenkins won't lock you in as long as you keep as much of the logic as possible in scripts that are merely executed by Jenkins.
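To illustrate that last point: the deploy step can stay a plain fabfile (or any script) that Jenkins merely invokes, e.g. fab deploy:20150601.1, so a later move to SaltStack or Ansible would not touch Jenkins at all. The hosts, paths and restart command below are placeholders:
# fabfile.py -- Jenkins only runs "fab deploy:<version>"; all deployment
# logic lives here, outside Jenkins itself.
# Hosts, paths and the restart command are illustrative placeholders.
from fabric.api import env, put, run, sudo

env.hosts = ['deploy@app1.example.com', 'deploy@app2.example.com']

def deploy(version):
    archive = 'dist/myapp-%s.tar.gz' % version
    put(archive, '/tmp/myapp.tar.gz')
    run('tar -xzf /tmp/myapp.tar.gz -C /srv/myapp')
    sudo('systemctl restart myapp')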
We are developing a distributed application in Python. Right now, we are about to re-organize some of our system components and deploy them on separate servers, so I'm looking to understand more about deployment for an application such as this. We will have several back-end code servers, several database servers (of different types) and possibly several front-end servers.
My question is this: what are good deployment patterns for distributed applications (in Python or in general)? How can I manage pushing code to several servers (whose IPs should be parameterized in the deployment system), pushing static files to several front ends, starting/stopping processes on the servers, and so on? We are looking for possibly an easy-to-use solution, but mostly for something that, once set up, will get out of our way and let us deploy as painlessly as possible.
To clarify: we are aware that there is no one standard solution for this particular application, but this question is rather more geared towards a guide of best practices for different types / parts of deployment than a single, unified solution.
Thanks so much! Any suggestions regarding this or other deployment / architecture pointers will be very appreciated.
It all depends on your application.
You can:
use Puppet to deploy servers,
use Fabric to remotely connect to the servers and execute specific tasks,
use pip for distributing Python modules (even non-public ones) and install dependencies,
use other tools for specific tasks (such as boto to work with the Amazon Web Services APIs, e.g. to start a new instance).
It is not always that simple and you will most likely need something customized. Just take a look at your system: it is not so "standard", so do not expect it to be handled in a "standard" way.
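That said, for the Fabric route specifically, your parameterized IPs, static files for the front ends, and start/stop tasks map naturally onto Fabric "roles". The IPs, paths and commands below are placeholders, not a recommendation:
# Sketch of Fabric roles for a distributed app; IPs, paths and commands
# are placeholders. Run e.g. "fab restart_backend push_static".
from fabric.api import env, run, roles
from fabric.contrib.project import rsync_project

env.roledefs = {
    'backend':  ['deploy@10.0.0.11', 'deploy@10.0.0.12'],
    'frontend': ['deploy@10.0.1.21'],
}

@roles('backend')
def restart_backend():
    run('supervisorctl restart myapp')  # assumes supervisord manages the processes

@roles('frontend')
def push_static():
    rsync_project(remote_dir='/var/www/myapp/static/', local_dir='static/')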
I received a project recently and I am wondering how to do something in a correct and secure manner.
The situation is the following:
There are classes to manage Linux users, MySQL users and databases, and Apache virtual hosts. They're used to automate the addition of users in a small shared-hosting environment. These classes are then used in command-line scripts to offer a nice interface to the system administrator.
I am now asked to build a simple web interface to offer a GUI to the administrator and then offer some features directly to the users (change their unix password and other daily procedures).
I don't know how to implement the web application. It will run in Apache (as the apache user), but the classes need to access files and commands that are only usable by the root user to make the necessary changes (e.g. useradd and the virtual host configuration files). When using the command-line scripts this is not a problem, as they are run as the correct user. Giving those permissions to the apache user would probably be dangerous.
What would be the best technique to allow this through the web application? I would like to use the classes directly if possible (it would be handier than calling the command-line scripts as external processes and parsing their output), but I can't see how to do this in a secure manner.
I saw existing products doing similar things (Webmin, eBox, ...), but I don't know how they work.
PS: The classes I received are simple but really badly programmed and barely commented. They are actually in PHP, but I'm planning to port them to Python. Then I'd like to use the Django framework to build the web admin interface.
Thanks and sorry if the question is not clear enough.
EDIT: I read a little bit about webmin and saw that it uses its own mini web server (called miniserv.pl). It seems like a good solution. The user running this server should then have permissions to modify the files and use the commands. How could I do something similar with Django? Use the development server? Would it be better to use something like CherryPy?
You can easily create web applications in Python using WSGI-compliant web frameworks such as CherryPy2 and templating engines such as Genshi. You can use the 'subprocess' module to manage external commands.
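For the miniserv-style idea from your edit, a rough sketch would be to serve the Django (or CherryPy) app with CherryPy's built-in WSGI server under a dedicated account that has the required rights, instead of under Apache. With a modern CherryPy (3 or later) and a hypothetical settings module, it could look like this:
# Rough sketch: serve a Django app with CherryPy's built-in WSGI server,
# run as a dedicated user (not the apache user). The settings module
# name "adminpanel.settings" is hypothetical.
import os
import cherrypy
from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'adminpanel.settings')

cherrypy.tree.graft(get_wsgi_application(), '/')
cherrypy.config.update({'server.socket_host': '127.0.0.1',
                        'server.socket_port': 8080})
cherrypy.engine.start()
cherrypy.engine.block()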
You can use sudo to give the apache user root permissions for only the specific commands/scripts your web app needs.
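On the Python side that just means shelling out through sudo for a small whitelist of commands. The helper path below is hypothetical, and a matching sudoers rule (e.g. apache ALL=(root) NOPASSWD: /usr/local/sbin/add_vhost_user) is assumed to exist; validate every argument before passing it on:
# Hypothetical example: the web app may only run this one whitelisted
# helper through sudo; the sudoers rule granting it is assumed to exist.
import re
import subprocess

def add_user(username):
    if not re.fullmatch(r'[a-z][a-z0-9_-]{0,31}', username):  # never trust web input
        raise ValueError('invalid username')
    subprocess.check_call(['sudo', '/usr/local/sbin/add_vhost_user', username])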
I have a web service to which users upload Python scripts that are then run on a server. Those scripts process files that are on the server, and I want them to be able to see only a certain part of the server's filesystem (ideally, a temporary folder into which I copy the files to be processed together with the scripts).
The server will ultimately be a Linux-based one, but if a solution is also possible on Windows it would be nice to know how.
What I thought of is creating a user with restricted access to folders of the filesystem (ultimately only the folder containing the scripts and files) and launching the Python interpreter as this user.
Can someone suggest a better alternative? Relying only on this makes me feel insecure; I would like a real sandboxing or virtual-filesystem feature where I could safely run untrusted code.
Either a chroot jail or a higher-order security mechanism such as SELinux can be used to restrict access to specific resources.
You are probably best off using a virtual machine like VirtualBox or VMware (perhaps even creating one per user/session).
That will give you some control over other resources such as memory and network, as well as disk.
The only Python environment I know of that has such features built in is the one on Google App Engine. That may be a workable alternative for you too.
This is inherently insecure software. By letting users upload scripts, you are introducing a remote code execution vulnerability. You have more to worry about than just file modification: what's stopping the Python script from accessing the network or other resources?
To solve this problem you need to use a sandbox. To better harden the system you can use a layered security approach.
The first layer, and the most important one, is a Python sandbox in which the user-supplied scripts are executed; this gives you the fine-grained limitations that you need. Then, the entire Python app should run within its own dedicated chroot. I highly recommend using grsecurity kernel hardening, which improves the strength of any chroot; for instance, a grsecurity-hardened chroot cannot be broken out of unless the attacker can rip a hole into kernel land, which is very difficult to do these days. Make sure your kernel is up to date.
The end result is that you are trying to limit the resources an attacker's script has. Layers are a proven approach to security, as long as the layers are different enough that the same attack won't break both of them. You want to isolate the script from the rest of the system as much as possible; any resources that are shared are also paths for an attacker.
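As a very rough illustration of the outer layers only (chroot, resource limits, privilege drop), a launcher for the uploaded script might look like the sketch below. It is not a sandbox by itself (it says nothing about restricting what the Python code can do), the paths and UID/GID are placeholders, and it must be started as root so that chroot and setuid are permitted:
# Outer layers only: chroot, resource limits and privilege drop around the
# user's script. Not a complete sandbox; paths and IDs are placeholders.
# Must run as root so chroot()/setuid() are allowed.
import os
import resource
import subprocess

JAIL = '/srv/jail'       # chroot with a minimal Python install inside
UNPRIV_UID = 10001       # dedicated unprivileged account
UNPRIV_GID = 10001

def lock_down():
    os.chroot(JAIL)
    os.chdir('/')
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))                # 10 s of CPU
    resource.setrlimit(resource.RLIMIT_AS, (256 << 20, 256 << 20))   # 256 MB address space
    resource.setrlimit(resource.RLIMIT_NPROC, (20, 20))              # no fork bombs
    os.setgid(UNPRIV_GID)
    os.setuid(UNPRIV_UID)  # drop root last, after the chroot

def run_untrusted(script_path_inside_jail):
    return subprocess.call(['/usr/bin/python3', script_path_inside_jail],
                           preexec_fn=lock_down, timeout=30)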