Background:
I need to run a program on a remote server without logging into the server. The server only allows people to contact it, not to see what files are on it or to log in.
The server has a .py file on it and an installation of Python. That file contains hardcoded login credentials to a database, which I do not want users of the local machine's program to see.
I would like to contact the server, pass it arguments and request it to run the program with them.
The reason is I don't want my program on the local machine to store the login credentials for the server, so I don't want to use ssh, because that would again require some form of localized credentialing, which leaves the credentials exposed to the users at some point (and I never want them to see them).
Traditionally, when I make remote machine calls, it's to an exposed API through the requests or http.client libraries; from what I can find, this may require the socket library, which I am not very familiar with, and I couldn't seem to find examples of what I am trying to do.
Server code:
import sys
from pymongo import MongoClient
usr = 'user_login'
pwd = 'user_pass'
client = MongoClient('mongodb://' + usr + ':' + pwd + '@host:port')
db = client['some_db']
db.add_user(sys.argv[1], sys.argv[2])
Question(s):
How can I do a one way request to a server containing a script file and pass it arguments?
Is this the appropriate way to handle ensuring users cannot see admin credentials?
Question 2 is somewhat out of scope here, because the underlying problem is that the program needs to create a new user account on a database, but you must be logged in with an authorized account to do so. If I hardcoded or stored credentials in a file that the local machine could see, then a savvy user could debug the program and see what they are.
I figured putting them on a server where nobody can see the files, just ask the server to run them, would be a safe bet and ensure security.
What you need seems to be a typical HTTP endpoint. Things to read and consider:
A simple web framework, like web.py. Or you can run your program as a CGI script, in whatever language.
A config file that keeps the credentials of the MongoDB admin user. Read it every time your program runs (see the sketch after this list).
The hardest part: a proper authorization layer on top of your script. Otherwise anyone will be able to run it. At the very least, use basic HTTP auth or bearer-token auth.
Unless it's inside your tiny private trusted VPN with built-in encryption, serve your endpoint over HTTPS. Since it will be fronted by a web server anyway (either as a pass-through, or via uwsgi), just use the fact that your web server already does HTTPS (and it does, right?).
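A minimal sketch of reading such a config file, assuming a made-up path /etc/myapp/db.ini that only the server-side service user can read (the filename, section, and key names are placeholders):

from configparser import ConfigParser

config = ConfigParser()
config.read('/etc/myapp/db.ini')   # e.g. chmod 600, owned by the service user
usr = config['mongodb']['user']
pwd = config['mongodb']['password']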
a simple example
from flask import Flask, request
from pymongo import MongoClient

app = Flask(__name__)

# The admin credentials come from the config file described above --
# never from the request, and never hardcoded anywhere the client can see.
admin_usr = 'admin_login'
admin_pwd = 'admin_pass'

@app.route("/adduser")
def adduser():
    user = request.args.get("user")
    pwd = request.args.get("pwd")
    client = MongoClient('mongodb://' + admin_usr + ':' + admin_pwd + '@host:port')
    db = client['some_db']
    db.add_user(user, pwd)
    return "user inserted"

if __name__ == '__main__':
    app.run()
and then making the following request: http://localhost:5000/adduser?user=root&pwd=1234
On top of that you can use POST over SSL (HTTPS), so the data will be encrypted and transmitted in the request body instead of the URL (as with GET). That should be sufficient, security-wise.
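For the local machine's side, a hedged sketch of such a POST with the requests library; the URL and the token are placeholders for your actual endpoint and whatever auth scheme you add on the server:

import requests

resp = requests.post(
    'https://yourserver.example/adduser',            # placeholder URL
    data={'user': 'root', 'pwd': '1234'},            # sent in the body, not the URL
    headers={'Authorization': 'Bearer some-token'},  # placeholder auth header
    timeout=10,
)
print(resp.status_code, resp.text)

Note that the Flask handler above would then need methods=['POST'] on the route and would read request.form instead of request.args.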
Related
I have a flask running at domain.com
I also have another flask instance on another server running at username.domain.com
Normally a user logs in through domain.com
However, paying users are supposed to log in at username.domain.com
Using flask, how can I make sure that sessions are shared between domain.com and username.domain.com while ensuring that they will only have access to the specifically matching username.domain.com ?
I am concerned about security here.
EDIT:
Later, after reading your full question I noticed the original answer is not what you're looking for.
I've left the original at the bottom of this answer for Googlers, but the revised version is below.
Cookies are automatically sent to subdomains on a domain (in most modern browsers the domain name must contain a period (indicating a TLD) for this behavior to occur). The authentication will need to happen as a pre-processor, and your session will need to be managed from a centralised source. Let's walk through it.
To confirm, I'll proceed assuming (from what you've told me) your setup is as follows:
SERVER 1:
Flask app for domain.com
SERVER 2:
Flask app for user profiles at username.domain.com
A problem that first must be overcome is storing the sessions in a location that is accessible to both servers. Since by default sessions are stored on disk (and both servers obviously don't share the same hard drive), we'll need to do some modifications to both the existing setup and the new Flask app for user profiles.
Step one is to choose where to store your sessions. A database powered by a DBMS such as MySQL, Postgres, etc. is a common choice, but people also often choose to put them somewhere more ephemeral, such as Memcached or Redis.
The short version for choosing between these two starkly different systems breaks down to the following:
Database
Databases are readily available
It's likely you already have a database implemented
Developers usually have a pre-existing knowledge of their chosen database
Memory (Redis/Memcached/etc.)
Considerably faster
Systems often offer basic self-management of data
Doesn't incur extra load on existing database
You can find some examples of database-backed sessions in Flask here and here.
While Redis would be more difficult to set up, depending on each user's level of experience, it is the option I recommend. You can see an example of doing this here, and a sketch just below.
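To make that concrete, here is a minimal, untested sketch of a Redis-backed server-side session, assuming a local Redis instance on the default port; the key prefix and TTL are made up for illustration:

import pickle
import uuid
from datetime import timedelta

import redis
from flask import Flask
from flask.sessions import SessionInterface, SessionMixin
from werkzeug.datastructures import CallbackDict

class RedisSession(CallbackDict, SessionMixin):
    def __init__(self, initial=None, sid=None):
        def on_update(d):
            d.modified = True
        CallbackDict.__init__(self, initial, on_update)
        self.sid = sid
        self.modified = False

class RedisSessionInterface(SessionInterface):
    def __init__(self, host='localhost', port=6379, ttl=timedelta(days=1)):
        self.store = redis.StrictRedis(host=host, port=port)
        self.ttl = ttl

    def open_session(self, app, request):
        sid = request.cookies.get(app.session_cookie_name)
        if sid:
            data = self.store.get('session:' + sid)
            if data is not None:
                return RedisSession(pickle.loads(data), sid=sid)
        # No (valid) session id yet -- start a fresh session.
        return RedisSession(sid=str(uuid.uuid4()))

    def save_session(self, app, session, response):
        domain = self.get_cookie_domain(app)
        if not session:
            self.store.delete('session:' + session.sid)
            response.delete_cookie(app.session_cookie_name, domain=domain)
            return
        self.store.setex('session:' + session.sid,
                         int(self.ttl.total_seconds()),
                         pickle.dumps(dict(session)))
        # The cookie carries only the opaque session id.
        response.set_cookie(app.session_cookie_name, session.sid,
                            domain=domain, httponly=True)

app = Flask(__name__)
app.config['SESSION_COOKIE_DOMAIN'] = '.domain.com'
app.session_interface = RedisSessionInterface()

With this in place, both servers read the actual session data from the same Redis instance, and the client never holds more than a random session id.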
The rest I think is covered in the original answer, part of which demonstrates the matching of username to database record (the larger code block).
Old solution for a single Flask app
Firstly, you'll have to setup Flask to handle subdomains, this is as easy as specifying a new variable name in your config file. For example, if your domain was example.com you would append the following to your Flask configuration.
SERVER_NAME = "example.com"
You can read more about this option here.
Something quick to note here is that this will be extremely difficult (if not impossible) to test if you're just working off of localhost. As mentioned above, browsers often won't bother to send cookies to subdomains of a domain without dots in the name (a TLD). Localhost also isn't set up to allow subdomains by default in many operating systems. There are ways around this, like defining your own DNS entries, that you can look into (/etc/hosts on *UNIX, %SystemRoot%\System32\drivers\etc\hosts on Windows).
Once you've got your config ready, you'll need to define a Blueprint for a subdomain wildcard.
This is done pretty easily:
from flask import Blueprint
from flask.ext.login import current_user

# Create our Blueprint
deep_blue = Blueprint("subdomain_routes", __name__, subdomain="<username>")

# Define our route
@deep_blue.route('/')
def user_index(username):
    if not current_user.is_authenticated():
        # The user needs to log in
        return "Please log in"
    elif username != current_user.username:
        # This is not the correct user.
        return "Unauthorized"
    # It's the right user!
    return "Welcome back!"
The trick here is to make sure the __repr__ for your user object includes a username key. For eg...
class User(db.Model):
    username = db.Column(db.String)

    def __repr__(self):
        return "<User {self.id}, username={self.username}>".format(self=self)
Something to note though is the problem that arises when a username contains special characters (a space, #, ?, etc.) that don't work in a URL. For this you'll need to either enforce restrictions on the username, or properly escape the name first and unescape it when validating it.
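For the escaping route, a small sketch using the standard library (Python 3 names shown):

from urllib.parse import quote, unquote

safe = quote("john doe?", safe="")   # 'john%20doe%3F'
original = unquote(safe)             # back to 'john doe?'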
If you've got any questions or requests, please ask. Did this during my coffee break so it was a bit rushed.
You can do this with the built-in Flask sessions, which are cookie-based client-side sessions. To allow users to log in to multiple subdomains of '.domain.com', you need only specify
app.config['SESSION_COOKIE_DOMAIN'] = '.domain.com'
and the client's browser will have a session cookie that allows them to log in to every Flask instance at 'domain.com'.
This only works if every instance of Flask has the same app.secret_key
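As a minimal sketch, both instances would then share configuration along these lines (the secret value is a placeholder; keep the real one out of source control):

app.config['SECRET_KEY'] = 'the-same-secret-on-both-servers'  # placeholder value
app.config['SESSION_COOKIE_DOMAIN'] = '.domain.com'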
For more information, also see
Same Flask login session across two applications
I am building an app on Google App Engine using Flask. I am implementing Google+ login from the server-side flow described in the Python examples: https://developers.google.com/+/web/signin/server-side-flow and https://github.com/googleplus/gplus-quickstart-python/blob/master/signin.py.
Both of the examples have:
credentials = oauth_flow.step2_exchange(code)
and
session['credentials'] = credentials
storing the credentials object to the Flask session. When I run this code on my Google App Engine project, I get the error:
TypeError: <oauth2client.client.OAuth2Credentials object at 0x7f6c3c953610> is not JSON serializable
As discussed in this issue (marked WontFix), OAuth2Credentials is not designed to be JSON serializable. It has methods to_json and from_json, which could be used to store it, e.g.:
session['credentials'] = credentials.to_json()
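For completeness, the round trip with that pair would look like this, if you did choose to keep the JSON somewhere server-side:

from oauth2client.client import OAuth2Credentials

stored = credentials.to_json()                  # a JSON string
restored = OAuth2Credentials.from_json(stored)  # back to a credentials object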
However, in the same issue:
Never store a Credentials object in a cookie, it contains the applications
client id and client secret.
Perhaps I misunderstand how a Flask session object works, but from the doc:
... A session basically makes it possible to remember information from one request to another. The way Flask does this is by using a signed cookie. So the user can look at the session contents, but not modify it unless they know the secret key...
And therefore, we should not be storing a credentials object in the session, even if it is a signed cookie.
In my case, I currently only need to re-use the access token for disconnect purposes, so I can just store that.
What is the correct way to deal with this situation? Should the credentials not be stored in the session at all? Should at this point in the examples there be a comment "Securely save credentials here"?
Flask used to use pickle instead of JSON to store values in the session, and the Google example code was written with that in mind. Flask switched to a JSON-based format to reduce the impact of the server-side secret being disclosed (a hacker can hijack your process with pickle, not with JSON).
Store just the access token in your session:
session['credentials'] = credentials.access_token
You can recreate the credentials object with that token, using the AccessTokenCredentials class at a later time, should you need it again:
credentials = AccessTokenCredentials(session['credentials'], 'user-agent-value')
The AccessTokenCredentials object stores just the credentials; because it lacks the client id and client secret it cannot be used to refresh the token, however.
The user agent value is something you get to make up; it can help diagnose problems if you have access to the OAuth server logs; with Google I would not count on that so just make something up here.
"Flask by default uses the Werkzeug provided 'secure cookie' as session system. It works by pickling the session data, compressing it and base64 encoding it." - http://flask.pocoo.org/snippets/51/
In other words, flask is really weird. Anything you put in the session gets signed with the server key, sent to the client and stored in the client. The server then receives it on each subsequent request and verifies it with the same key. It also means session data will survive server reboots, because it's sitting in the client.
To improve this for my app I've used the Flask SessionInterface with CouchDB - and now the client only knows a session ID that is checked against my database, where the actual data is stored. Hurray.
Check this out, it has a few approaches to server side sessions depending what db you may be using - http://flask.pocoo.org/snippets/category/sessions/
In Google App Engine I have created my own user API, appropriately called user, so it doesn't interfere with the Google App Engine users API. Like most multiuser websites, two "versions" of the site are available to the user, depending on whether or not they are logged in. Thus I created a file called router.py with the following code:
import webapp2
from lib import user
import guest
import authorized

if user.isLoggedIn():
    app = webapp2.WSGIApplication(authorized.WSGIHandler, debug=True)
else:
    app = webapp2.WSGIApplication(guest.WSGIHandler, debug=True)
The guest and authorized modules are formatted like your conventional application script, for example:
import webapp2
import os

class MainPage(webapp2.RequestHandler):
    def get(self, _random):
        self.response.out.write('authorized: ' + _random)

WSGIHandler = [('/(.*)', MainPage)]
Thus the router file easily selects which WSGIApplication URL dispatcher to use by grabbing the WSGIHandler variable from either the guest or authorized module. However, the user must close all tabs for the router to detect a change in the isLoggedIn() function. If you log in, it does not recognize that you have done so until every tab is closed. I have two possible explanations for this:
isLoggedIn() uses os.environ['HTTP_COOKIE'] to retrieve cookies and see if a user is logged in; it then checks the cookie data against the database to make sure the cookie is valid. Possibly there is an error where the cookies on the server's end aren't being refreshed when the page is? Maybe because I'm not getting the cookies from self.request.
Is it possible that, in order to conserve frontend hours or something, Google App Engine caches the scripts in the server's memcache? I doubt it, but I am at a loss for the reason for this behavior.
Thanks in advance for the help
Edit
Upon more testing I found that, as suspected, the router.py file responded correctly and directed the user based on login state once a comment was added to it. This seems to indicate caching.
Edit 2
I have uncovered some more information on the WSGI application:
The Python runtime environment caches imported modules between requests on a single web server, similar to how a standalone Python application loads a module only once even if the module is imported by multiple files. Since WSGI handlers are modules, they are cached between requests. CGI handler scripts are only cached if they provide a main() routine; otherwise, the CGI handler script is loaded for every request.
I wonder how efficient it would be to somehow refresh the WSGI module. This would undoubtedly tax the server, but solve my problem. Again, this seems to be a partial solution.
Edit 3
Again, any attempt to randomize a comment in the router.py file is ineffective. The if statement looking for user login is completely overlooked and the WSGIApplication is set to its original state. I'm not yet sure if this is due to the module cache in the webapp2 module or thanks to the module cache on the user API. I suspect the latter.
The problem is not "caching", it is just how WSGI applications work. A WSGI process stays alive for a reasonably long period of time, and serves multiple requests during that period. The app is defined when that process is started up, and does not change until the process is renewed. You should not try to do anything dynamic or request-dependent at that point.
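An untested sketch of that idea: keep a single WSGIApplication and make the login check per request inside a handler, so nothing login-dependent is decided at import time. The MainPage names follow the guest/authorized modules shown above; instantiating handlers manually like this is an assumption about those modules' shape:

import webapp2
from lib import user
import guest
import authorized

class Dispatcher(webapp2.RequestHandler):
    def get(self, path):
        # Decide per request, not at module import time.
        if user.isLoggedIn():
            handler = authorized.MainPage(self.request, self.response)
        else:
            handler = guest.MainPage(self.request, self.response)
        handler.get(path)

app = webapp2.WSGIApplication([('/(.*)', Dispatcher)], debug=True)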
replace router.py with:
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
from lib import user
import guest
import authorized

def main():
    if user.isLoggedIn():
        run_wsgi_app(authorized.application)
    else:
        run_wsgi_app(guest.application)

if __name__ == "__main__":
    main()
Downgrading to the old webapp allows you to change the WSGI application dynamically. It's tested and works perfectly! The CGI adaptor run_wsgi_app allows the webapp to change its URL dispatch list without caching.
Aim: to create a user-friendly web interface to a Linux program, without any terrible ssh (console) stuff.
I have chosen Python + Django + Apache.
Problem: the user should log in through the browser as a Linux user, and then all of the user's requests should be served on behalf of that Linux user.
By now, the server runs as root, and when a user logs in through a browser, the server can switch from root to the required user using the Django user name:
import os
import pwd

uid = pwd.getpwnam(userName)[2]
os.setuid(uid)
and it can execute all the Django stuff on behalf of the appropriate user.
The problem is that server must be run as root!
How could I run the server normally, with the usual apache user rights, while still providing login to a Linux user through a browser? (Just get the user name and password from the HTTP POST request and log in to the appropriate user using Python.)
Update: I need to map any web user to a specific Linux user, to give him his home directory and let him execute a specific Linux program only as this specific user! I guess something like this is realized in webmin?
Possible solution: I could execute su userName, but it doesn't work without a terminal:
import subprocess

p = subprocess.Popen(["su", "test"],
                     stdout=subprocess.PIPE,
                     stdin=subprocess.PIPE,
                     stderr=subprocess.STDOUT)
suOUT = p.communicate(input="test")[0]
print suOUT
I just got:
su: must be run from a terminal
I'm not sure what the "standard" approaches are for dealing with this problem. However, here is a simple technique for environments with a small number of users that involves neither sudo nor changing the UID inside the web server (which is likely to be very problematic for concurrent access by multiple users).
Launch a daemon process for each user having access to this application. Each process should serve web requests for that user over FastCGI (substitute the protocol of your choice). Your web server should have some user-to-port-number mapping. Then redirect your gateway's requests to the proper FastCGI process based on the logon of the Django user.
Example (using internal redirects by NGINX, assuming setup with FastCGI):
User foo logs on to Django web application
User requests page /.../
Django application receives request for /.../ by user foo
Django application returns custom HTTP header X-Accel-Redirect to indicate internal redirect to /delegate/foo/.../.
NGINX finds the location /delegate/foo/ associated with a FastCGI handler on port 9000
FastCGI handler is running as user foo and grants access to stuff in home directory.
You can substitute the web server and communication protocol to combinations of your choice. I used FastCGI here because it allows to write both the gateway and the handler as Django applications. I chose NGINX because of the internal redirect feature. This prevents impersonation by direct use of /delegate/foo/.../ URLs by users other than foo.
Update
Example:
Assuming you have the flup module, you can start a FastCGI server directly using Django. To start a Django application over FastCGI under a specific user account, you can use:
sudo -u $user python /absolute/path/to/manage.py runfcgi host=127.0.0.1 port=$port
Substitute the $user for the user name and $port for a unique port for that user (no two users can share the same port).
Assuming an NGINX configuration, you can set this up like:
location /user/$user {
    internal;
    fastcgi_pass 127.0.0.1:$port;
    # additional FastCGI configuration...
}
Make sure to add one such directive for each $user and $port combination above.
Then, from your front-end Django application, you can check permissions and stuff using:
from django.contrib.auth.decorators import login_required
from django.http import HttpResponse

@login_required
def central_dispatch_view(request):
    response = HttpResponse()
    response['X-Accel-Redirect'] = '/user/' + request.user.username
    return response
Disclaimer: This is totally untested, and almost a year after the original answer, I'm not sure this is possible, mainly because the documentation on XSendFile in NGINX specifies that this should work with static files. I haven't inquired any further to know if you can actually perform an internal NGINX redirect from a FastCGI application.
Alternate solution:
A better approach might not involve internal redirects, but instead use a FastCGI authorizer. Basically, a FastCGI authorizer is a program that your web server runs before serving a request. Then you can bypass the shady internal-redirect thing and just have a FastCGI authorizer that checks whether the request accessing /user/foo/ actually came from a Django user logged in as foo. This authorizer program won't be able to run as a Django application (since this is not an HTTP request/response cycle), but you can write it using flup and access your Django settings.
You can include the wsgi user in the sudoers file, and limit the commands and arguments it can run. Why can't you use sudo?
for example:
Cmnd_Alias TRUSTED_CMDS = /bin/su johndoe /some/command, \
/bin/su janedoe /some/command
my_wsgi_user ALL = NOPASSWD: TRUSTED_CMDS
From the security perspective, you should assume the users have shell access - I think it's OK for a corporate intranet, but not for a public site.
From python/django you will be able to call ['sudo', '/bin/su', 'johndoe', '/some/command'].
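As an untested sketch, that call from Python could look like this (the paths and the user name mirror the sudoers example above):

import subprocess

# NOPASSWD in the sudoers entry means no password prompt for the wsgi user.
result = subprocess.run(['sudo', '/bin/su', 'johndoe', '/some/command'],
                        capture_output=True, text=True)
print(result.stdout)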
Another solution, if you really can't use sudo (with NOPASSWD), is to connect via ssh using the user's credentials (user, password) with paramiko.
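A minimal sketch of that approach, assuming sshd is running on the same host; the user name, password, and command are placeholders:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('localhost', username='johndoe', password='secret')
stdin, stdout, stderr = client.exec_command('/some/command')
print(stdout.read())
client.close()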
I'm attempting to implement a simple Single Sign On scenario where some of the participating servers will be windows (IIS) boxes. It looks like SPNEGO is a reasonable path for this.
Here's the scenario:
User logs in to my SSO service using his username and password. I authenticate him using some mechanism.
At some later time the user wants to access App A.
The user's request for App A is intercepted by the SSO service. The SSO service uses SPNEGO to log the user in to App A:
The SSO service hits the App A web page, gets a "WWW-Authenticate: Negotiate" response
The SSO service generates an "Authorization: Negotiate xxx" header on behalf of the user and sends it to App A. The user is now logged in to App A.
The SSO service intercepts subsequent user requests for App A, inserting the Authorization header into them before passing them on to App A.
Does that sound right?
I need two things (at least that I can think of now):
the ability to generate the "Authorization: Negotiate xxx" token on behalf of the user, preferably using Python
the ability to validate "Authorization: Negotiate xxx" headers in Python (for a later part of the project)
This is exactly what Apple does with its Calendar Server. They have a python gssapi library for the kerberos part of the process, in order to implement SPNEGO.
Look in CalendarServer/twistedcaldav/authkerb.py for the server auth portion.
The kerberos module (which is a C module) doesn't have any useful docstrings, but PyKerberos/pysrc/kerberos.py has all the function definitions.
Here's the urls for the svn trunks:
http://svn.calendarserver.org/repository/calendarserver/CalendarServer/trunk
http://svn.calendarserver.org/repository/calendarserver/PyKerberos/trunk
Take a look at the http://spnego.sourceforge.net/credential_delegation.html tutorial. It seems to be doing what you are trying to do.
I've been searching quite some time for something similar (on Linux), which has led me to this page several times, yet without an answer. So here is the solution I came up with:
The web server is an Apache with mod_auth_kerb. It has already been running in an Active Directory single sign-on setup for quite some time.
What I was already able to do before:
Using Chromium with single sign-on on Linux (with a proper krb5 setup, with a working kinit user@domain)
Having python connect and single sign on using sspi from the pywin32 package, with something like sspi.ClientAuth("Negotiate", targetspn="http/%s" % host)
The following code snippet completes the puzzle (and my needs), having Python single sign on with Kerberos on Linux (using python-gssapi):
import base64
import gssapi

# neg_value is the base64 token from the server's "WWW-Authenticate: Negotiate"
# header; host is the server's hostname.
in_token = base64.b64decode(neg_value)
service_name = gssapi.Name("HTTP@%s" % host, gssapi.C_NT_HOSTBASED_SERVICE)
spnegoMechOid = gssapi.oids.OID.mech_from_string("1.3.6.1.5.5.2")
ctx = gssapi.InitContext(service_name, mech_type=spnegoMechOid)
out_token = ctx.step(in_token)
outStr = base64.b64encode(out_token)
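As a follow-up sketch, the resulting token then goes into the HTTP header the server expects (outStr comes from the snippet above):

headers = {"Authorization": "Negotiate " + outStr.decode("ascii")}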