How to provision an EC2 server programmatically and configure default settings (Python)

I would like to provision an AWS EC2 server using Python and set a default user and password, among other things. The idea is that the developer selects from our site menu the items they would like installed, e.g. MySQL, Nginx, etc. When they click submit, I'm using boto to create the EC2 server, and now I would like to install the software and set the default user credentials so that they can be mailed to the user.
I would like to make the above self-service, i.e. everything is fully automated and can be customized as needed without system-engineer involvement. Note: I don't want to share the AWS key pairs; that is left to the server admins.
I'm thinking of using Fabric for the above, but it seems it would require a lot of code and configuration. Is there a recommended way to provision a Linux server and do the above? How do providers like DigitalOcean set default root passwords during creation? I would like to keep these details as unique as possible.

Use the User Data field to pass a #cloud-config configuration to the Amazon EC2 instance. Some of the things it can do include:
Define users and groups, including user passwords
Write out arbitrary files
Add yum/apt repositories
Install via Chef/Puppet
Run commands on first boot
Install arbitrary packages
See: Cloud config examples
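To tie this back to the question: since you are already launching the instance with boto, you can render the developer's menu choices into a #cloud-config document and hand it to EC2 as user data. A minimal sketch, assuming boto3 and illustrative package names, user name, and AMI/instance parameters (the commented launch call is an assumption, not code from the question):

```python
# Sketch: build a #cloud-config user-data payload from the items the
# developer picked in the menu. The package list, user name, and the
# password hash below are illustrative assumptions.

def build_cloud_config(packages, username, password_hash):
    """Render a minimal cloud-config document as a string."""
    lines = ["#cloud-config", "packages:"]
    lines += ["  - %s" % pkg for pkg in packages]
    lines += [
        "users:",
        "  - name: %s" % username,
        "    lock_passwd: false",
        "    passwd: %s" % password_hash,  # a crypt()-style hash, not plaintext
    ]
    return "\n".join(lines) + "\n"

user_data = build_cloud_config(
    ["mysql-server", "nginx"], "devuser",
    "$6$rounds=4096$salt$hashedpassword")  # placeholder hash

# With boto3 (AMI ID, type, and key name are assumptions), roughly:
# import boto3
# ec2 = boto3.client("ec2")
# ec2.run_instances(ImageId="ami-12345678", InstanceType="t2.micro",
#                   MinCount=1, MaxCount=1, UserData=user_data)
print(user_data)
```

cloud-init reads this payload on first boot, so the packages are installed and the user exists before the credentials email goes out.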

Related

Should new Python virtualenvs be created with new Linux user accounts?

I'm starting with Python 3, using Raspbian (from Debian), and using virtualenv. I understand how to create/use a virtualenv to "sandbox" different Python projects. HOWEVER, I'm a bit unclear on whether one should set up a different Linux user for each project (assuming the project/virtualenv will be used to create and then run a daemon process on the Linux box).
So when creating separate Python environments, the question is whether I should be:
creating a new Linux user account for each daemon/script I'm working on, so that both the Python virtual environment and the Python project code can live under directories owned by this user?
creating just one new non-administrator account at the beginning, and then using this account for each project/virtual environment
creating everything under the initial admin user I first log in with on Raspbian (e.g. the "pi" user) - assume NO for this option, but including it for completeness.
TL;DR: 1. no 2. yes 3. no
creating a new Linux user account for each daemon/script I'm working on, so that both the Python virtual environment and the Python project code can live under directories owned by this user?
No. It adds unnecessary complexity with no real benefit. Note that one user can be logged into multiple sessions and run multiple processes.
perhaps just create one new non-administrator account at the beginning, and then just use this account for each project/virtual environment
Yes, and use sudo from the non-admin account if/when you need to escalate privilege.
create everything under the initial admin user I first log in with on Raspbian (e.g. the "pi" user) - assume NO for this option, but including it for completeness.
No. Better to create a regular user, not run everything as root. Using a non-root administrator account would be OK, though.
It depends on what you're trying to achieve. From virtualenv's perspective you could do any of those.
#1 makes sense to me if you have multiple services that are publicly accessible and want to isolate them.
If you're running trusted code on an internal network, but don't want the dependencies clashing then #2 sounds reasonable.
Given that the Pi is often used for a specific purpose (not, say, a general-purpose desktop) and the default account goes largely unused, using that account would be fine. Make sure to change the default password.
In the general case, there is no need to create a separate account just for a virtualenv.
There can be reasons to create a separate account, but they are distinct from, and to some extent anathema to, virtual environments. (If you have a dedicated account for a service, there is no need really to put it in a virtualenv -- you might want to if it has dependencies you want to be able to upgrade easily etc, but the account already provides a level of isolation similar to what a virtualenv provides within an account.)
Reasons to use a virtual environment:
Make it easy to run things with different requirements under the same account.
Make it easy to install things for yourself without any privileges.
Reasons to use a separate account:
Fine-grained access control to privileged resources.
Properly isolating the private resources of the account.
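As a concrete illustration of option 2 above (one non-admin account, one virtualenv per project), Python 3's standard-library venv module can create each environment programmatically. The directory layout here is an assumption, not something from the question:

```python
# Sketch: one virtual environment per project, all owned by a single
# non-admin account. Directory names are illustrative; a real setup
# might use ~/envs/<project> under e.g. a "deploy" user.
import os
import tempfile
import venv

base = tempfile.mkdtemp()                # stand-in for ~/envs
env_dir = os.path.join(base, "myproject")
venv.create(env_dir, with_pip=False)     # with_pip=True also bootstraps pip

# Each environment carries its own interpreter and a pyvenv.cfg marker:
print(os.path.isfile(os.path.join(env_dir, "pyvenv.cfg")))  # True
```

The daemon for each project then runs as the same non-admin user but with that project's `bin/python`, which gives the dependency isolation without multiplying accounts.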

How to authorize Azure Python SDK on VM instance?

In AWS, you can assign a role to a VM, which then authorizes the instance when it makes queries to the AWS SDK. I am looking for similar functionality in Azure, or something that would enable me to do close to that.
I found this post which suggests that this is not possible in the way AWS does it. Are there any workarounds for this? I really don't want the system administrator to have to login to the instance and give their Azure Active Directory credentials to authorize it.
Excellent question :). I would suggest waiting a few days; we have something in progress that seems to fit your need. I created this issue for tracking.
The simplest approach would be to create Service Principal credentials for these VMs. To do that, execute a post-deployment script that installs the CLI and runs "az ad sp create-for-rbac --sdk-auth > ~/mycredentials.json". Then your SDK script can start by reading this credential file.
The "create-for-rbac" command already exists if you want to look at it (--sdk-auth is the new option coming), and you can specify all the scopes and permissions needed in this command.
(I own the Azure SDK for Python at MS)
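To make the file-reading step concrete: the --sdk-auth output is a JSON document, and the script on the VM just parses it and feeds the fields to the SDK's credential class. A minimal sketch; the sample payload mimics the --sdk-auth shape with dummy GUIDs, and the commented SDK call is an assumption about the then-current azure.common.credentials API:

```python
# Sketch: read the file written by
#   az ad sp create-for-rbac --sdk-auth > ~/mycredentials.json
# and pull out the fields the Python SDK's credential classes expect.
import json

# Dummy stand-in for the real file's contents (GUIDs are fake):
sample = """{
  "clientId": "00000000-0000-0000-0000-000000000000",
  "clientSecret": "not-a-real-secret",
  "subscriptionId": "11111111-1111-1111-1111-111111111111",
  "tenantId": "22222222-2222-2222-2222-222222222222"
}"""

creds = json.loads(sample)
# In practice:
#   import os
#   with open(os.path.expanduser("~/mycredentials.json")) as f:
#       creds = json.load(f)

# With the SDK installed (call shape is an assumption):
# from azure.common.credentials import ServicePrincipalCredentials
# credentials = ServicePrincipalCredentials(
#     client_id=creds["clientId"],
#     secret=creds["clientSecret"],
#     tenant=creds["tenantId"],
# )
print(creds["subscriptionId"])
```

Since the file lives only on the VM, the administrator never has to type Azure Active Directory credentials into the instance.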

Linux user scheme for a Django production server

I'm currently trying to set up an nginx + uWSGI server for my Django homepage. Some tutorials advise creating specific UNIX users for certain daemons, like an nginx user for the nginx daemon and so on. As I'm new to Linux administration, I thought I would just create a second user to run all the processes (nginx, uWSGI, etc.), but it turned out that I need --system users for that.
The main question is: what users would you set up for an nginx + uWSGI server, and how would you work with them? Say I have a server with a freshly installed Debian Squeeze.
Should I install all the packages and the virtual environment and set up all the directories as the root user, and then create system users to run the scripts?
I like having regular users on a system:
multiple admins show up in sudo logs -- there's nothing quite like asking a specific person why they made a specific change.
not all tasks require admin privileges, but admin-level mistakes can be more costly to repair
it is easier to manage ~/.ssh/authorized_keys if each file contains only keys from a specific user -- if you get four or five different users in the file, it's harder to manage. Small point :) but it is so easy to write cat ~/.ssh/id_rsa.pub | ssh user@remotehost "cat - > ~/.ssh/authorized_keys" -- if one must use >> instead, it's precarious. :)
But you're right, you can do all your work as root and not bother with regular user accounts.

Modify system configuration files and use system commands through web interface

I received a project recently and I am wondering how to do something in a correct and secure manner.
The situation is the following:
There are classes to manage Linux users, MySQL users and databases, and Apache virtual hosts. They're used to automate the addition of users in a small shared-hosting environment. These classes are then used in command-line scripts to offer a nice interface for the system administrator.
I am now asked to build a simple web interface to offer a GUI to the administrator and then offer some features directly to the users (change their unix password and other daily procedures).
I don't know how to implement the web application. It will run in Apache (as the apache user), but the classes need to access files and commands that are only usable by the root user to make the necessary changes (e.g. useradd and the virtual-host configuration files). When using the command-line scripts, this is not a problem as they are run under the correct user. Giving those permissions to the apache user would probably be dangerous.
What would be the best technique to allow this through the web application? I would like to use the classes directly if possible (it would be handier than calling the command-line scripts as external processes and parsing their output), but I can't see how to do this in a secure manner.
I saw existing products doing similar things (webmin, eBox, ...) but I don't know how it works.
PS: The classes I received are simple but really badly programmed and barely commented. They are actually in PHP but I'm planning to port them to python. Then I'd like to use the Django framework to build the web admin interface.
Thanks and sorry if the question is not clear enough.
EDIT: I read a little bit about webmin and saw that it uses its own mini web server (called miniserv.pl). It seems like a good solution. The user running this server should then have permissions to modify the files and use the commands. How could I do something similar with Django? Use the development server? Would it be better to use something like CherryPy?
Hello
You can easily create web applications in Python using WSGI-compliant web frameworks such as CherryPy2 and templating engines such as Genshi. You can use the subprocess module to manage external commands.
You can use sudo to give the apache user root permission for only the commands/scripts you need for your web app.
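Combining the two answers above: the web app can call a small, sudo-whitelisted helper via subprocess instead of granting the apache user broad root rights. A minimal sketch, where the helper script path and the sudoers rule are assumptions; the key points are validating the input strictly and never building a shell string from user data:

```python
# Sketch: let the web app (running as the apache user) invoke only a
# whitelisted, sudo-permitted helper. The helper path below is
# hypothetical; the sudoers entry it needs is shown in the comment.
import re
import subprocess

# Conservative POSIX-style username check before anything touches sudo:
USERNAME_RE = re.compile(r"^[a-z_][a-z0-9_-]{0,31}$")

def add_unix_user(username):
    """Run the privileged helper for a validated username only."""
    if not USERNAME_RE.match(username):
        raise ValueError("invalid username: %r" % username)
    # Requires a sudoers entry such as:
    #   apache ALL=(root) NOPASSWD: /usr/local/sbin/add_hosting_user
    return subprocess.run(
        ["sudo", "/usr/local/sbin/add_hosting_user", username],
        check=True,
    )

print(bool(USERNAME_RE.match("alice")))      # True: acceptable name
print(bool(USERNAME_RE.match("alice; rm")))  # False: rejected
```

Because the argument list is passed to subprocess without a shell, and sudo only permits that one command, a compromised web process cannot escalate beyond what the helper itself does.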

Deliver to a specific version via Inbound Mail service

I have an app that services inbound mail and I have deployed a new development version to Google App Engine. The default is currently set to the previous version.
Is there a way to specify that inbound mail should be delivered to a particular version?
This is well documented for URLs, but I can't find any reference to version support in the inbound mail service...
No, this isn't currently supported. You could write some code for your default version that routes mail to other versions via URLFetch, though.
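The routing idea above relies on App Engine's version-specific hostnames: a deployed version is reachable at "<version>-dot-<appid>.appspot.com", so the default version's mail handler can re-post the raw message there. A sketch, with the app id, version name, and handler address as illustrative assumptions:

```python
# Sketch: the default version receives the inbound-mail POST at
# /_ah/mail/<address> and forwards it to a specific version using the
# "<version>-dot-<appid>" host convention. Names are illustrative.

def version_url(app_id, version, path):
    """Build the URL of a specific deployed version's handler."""
    return "https://%s-dot-%s.appspot.com%s" % (version, app_id, path)

target = version_url("myapp", "dev",
                     "/_ah/mail/test@myapp.appspotmail.com")

# Inside the default version's handler (GAE Python runtime), roughly:
# from google.appengine.api import urlfetch
# urlfetch.fetch(target, method=urlfetch.POST,
#                payload=raw_message_bytes)
print(target)
```

The "-dot-" separator is used instead of a subdomain dot so the HTTPS wildcard certificate for appspot.com still matches.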
There is an easier way to do this than writing code that routes between different versions using URLFetch.
If you have a large body of code that is email-oriented and you need a development version, simply use one of your ten applications as the development application (version). This allows you to do things like have test-specific entities in the development application's Datastore, and you can test as much as you want running live on App Engine.
The only constraints are:
because the application has a different name, email sent from the application either needs to come from your Gmail account or you need a configuration that switches the application name
sending test email to the application will use a slightly different email address (not a big issue, I think)
you have to keep an app.yaml with a different application name
you burn another one of your ten possible apps
Most version-control systems will allow you to have the same project checked out into different directories. Once you are ready for launch (all development code is committed and testing is done), update the 'production' directory (except for app.yaml) and then deploy.
