I've been asked to set up an FTP server using Python that different users can log in to, and which will present a different file structure depending on their login.
Part of the structure will be read-only, and another part will allow read, write, create and delete.
The file structure and files won't exist on the server; they will have to be built lazily, by querying external servers, as the user expands folders.
The server needs to mimic the FTP interface/protocol from the outside, but work completely differently internally.
I was wondering how big or difficult a job this would be, as I need to provide some kind of time scale for getting this working.
Is there anything like this out there already? Has anyone done something similar before?
Are there any obvious problems with trying to implement this kind of model?
The Twisted project would be the obvious place to start. The following example starts a simple FTP server that authenticates users against a password file but also allows anonymous access:
from twisted.protocols.ftp import FTPFactory, FTPRealm
from twisted.cred.portal import Portal
from twisted.cred.checkers import AllowAnonymousAccess, FilePasswordDB
from twisted.internet import reactor

# A portal wrapping a realm rooted at ./, accepting anonymous logins
# as well as the users listed in pass.dat
p = Portal(FTPRealm('./'),
           [AllowAnonymousAccess(), FilePasswordDB("pass.dat")])
f = FTPFactory(p)
reactor.listenTCP(21, f)  # port 21 needs root; use a high port otherwise
reactor.run()
You can easily expand from there. How you implement 'files' and 'directories' is completely up to you.
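For the virtual, per-user layout the hook is the realm: instead of FTPRealm you can give the portal your own realm that hands each authenticated user its own shell object. A minimal sketch, assuming a hypothetical VirtualShell class that implements twisted.protocols.ftp.IFTPShell (list, stat, openForReading, openForWriting, makeDirectory and so on) by querying your external servers lazily:

from zope.interface import implementer
from twisted.cred.portal import IRealm
from twisted.protocols.ftp import IFTPShell

@implementer(IRealm)
class VirtualFTPRealm(object):
    def requestAvatar(self, avatarId, mind, *interfaces):
        if IFTPShell in interfaces:
            # avatarId is the authenticated username; build its view of the
            # virtual tree here, fetching directory listings on demand
            shell = VirtualShell(avatarId)
            return IFTPShell, shell, lambda: None
        raise NotImplementedError("only FTP access is supported")

Which parts are read-only and which allow write/create/delete is then just a matter of what each per-user shell permits.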
Why Python? I mean, what does Python have to do with it? I'd look for a PAM module able to create a user-specific virtual filesystem structure on login, and if there isn't a ready-made one, consider modifying something like pam_mount:
http://pam-mount.sourceforge.net
I am building a simple app which uses the Twitter API. What do I have to do to hide my Twitter app keys? For example, if I put my program on the internet, somebody who looks at the code will see my consumer key, access token, etc. And if I don't include this information in my program, it won't work!
I'm assuming that by putting it on the internet you mean publishing your code on GitHub or similar.
In that case you should always separate code and configuration. Put your API keys in an .ini file, e.g. config.ini, then load that file from your Python program using configparser.
Add the configuration file to your .gitignore so it doesn't get added to source control.
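A minimal sketch, assuming a config.ini with a [twitter] section (the section and option names are just placeholders):

# config.ini (kept out of git):
# [twitter]
# consumer_key = ...
# access_token = ...

import configparser  # ConfigParser on Python 2

config = configparser.ConfigParser()
config.read("config.ini")
consumer_key = config.get("twitter", "consumer_key")
access_token = config.get("twitter", "access_token")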
Assuming you're running on a Unix-like system, one way to handle this is environment variables.
In your shell you can do this:
export TWITTER_API_KEY=yoursecretapikey
Note that you don't use quotes of any kind for this.
Then in your script:
import os
twitter_key = os.environ.get('TWITTER_API_KEY')
Before I pose the question, some background: I'm creating a web management tool that, among other things, allows the user to download, tail, email, and move files between predefined directories via the management panel. Many of these directories are local to the server, but some are actually located on remote hosts and accessed via SSH--however, this is transparent to the user. I've used Twisted to create a pseudo-REST API for the client to access, but since I want to avoid revealing actual server paths to the client, it requests file downloads using a POST with an arbitrary ID to the API, like this: "http://XXXX:8880/api/transfer/download"
with POST params similar to this: {"srckey":"5","srcfile":"solar2-windows-1.10.zip"}. The idea is that the client only knows the key of the directory and the filename.
Pardon the excessive background--I'm hoping it makes my question clearer. The issue I have is that I'm trying to allow users to download a copy of a file from one of the "remote" hosts via the management server that hosts the web panel, all without caching the file locally. I've used Twisted's File() object to stream large static files before, but since the file resides on another server, I'm trying to accomplish the same thing using the file object provided by Paramiko's "open()" method.
I've tried setting up a consumer/producer system similar to that used in the render methods of twisted.web.static.File, plugging in the file pointer provided by Paramiko in the appropriate places, but only the smallest text files transfer successfully--every other case causes Paramiko to throw this error:
socket.error: Socket is closed
The contents of the relevant python files are here:
serve-project.py: http://pastebin.com/YcjsQHu3
WrapSSH.py: http://pastebin.com/XaKXJwxb
In a nutshell, I'm trying to stream the data from a Paramiko SFTPFile to an HTTP client. I suspect that my approach is majorly faulty, due to my minimal familiarity with Twisted. Anyone have suggestions on a more intelligent way to accomplish this?
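(For clarity, the simplest thread-based version of what I'm after would look something like the sketch below. This is not the pastebin code; sftp_client, remote_path and request are placeholders, and error handling is elided.)

from twisted.internet import reactor, threads

CHUNK = 64 * 1024

def stream_remote_file(sftp_client, remote_path, request):
    # Read the Paramiko SFTPFile in a worker thread and hand each chunk
    # back to the reactor thread, which writes it to the HTTP request.
    def read_and_send():
        remote = sftp_client.open(remote_path, 'rb')
        try:
            while True:
                data = remote.read(CHUNK)
                if not data:
                    break
                reactor.callFromThread(request.write, data)
        finally:
            remote.close()
            reactor.callFromThread(request.finish)
    return threads.deferToThread(read_and_send)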
I'm not seeing much documentation on this. I'm trying to get an image onto the server from a URL. Ideally I'd like to keep things simple, but I'm in two minds as to whether an ImageField is the best way, or whether it's simpler to store the file on the server and serve it as a static file. I'm not uploading any files myself, so I need to fetch them in. Can anyone suggest any decent code examples before I try and reinvent the wheel?
Given a URL, say http://www.xyx.com/image.jpg, I'd like to download that image to the server and put it into a suitable location after renaming it. My question is general, as I'm looking for examples of what people have already done. So far I only see examples relating to uploading images, which doesn't apply here. This should be a simple case and I'm looking for a canonical example that might help.
This is for uploading an image from the user: Django: Image Upload to the Server
So are there any examples out there that just deal with fetching an image and storing it on the server and/or in an ImageField?
Well, just fetching an image and storing it into a file is straightforward:
import urllib2

with open('/path/to/storage/' + make_a_unique_name(), 'wb') as f:
    f.write(urllib2.urlopen(your_url).read())
Then you need to configure your Web server to serve files from that directory.
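If you'd rather go through an ImageField, the same fetched bytes can be handed to Django's file API. A rough sketch, where the Photo model and its image field are hypothetical names:

from django.core.files.base import ContentFile

data = urllib2.urlopen(your_url).read()
photo = Photo()  # your model with an ImageField named "image"
photo.image.save(make_a_unique_name() + '.jpg', ContentFile(data), save=True)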
Either way, fetching a user-supplied URL comes with security risks.
A malicious user could come along and type a URL that points nowhere. Or that points to their own evil server, which accepts your connection but never responds. This would be a typical denial of service attack.
A naive fix could be:
urllib2.urlopen(your_url, timeout=5)
But then the adversary could build a server that accepts a connection and writes out a line every second indefinitely, never stopping. The timeout doesn’t cover that.
So a proper solution is to run a task queue, also with timeouts, and a carefully chosen number of workers, all strictly independent of your Web-facing processes.
Another kind of attack is to point your server at something private. Suppose, for the sake of example, that you have an internal admin site that is running on port 8000, and it is not accessible to the outside world, but it is accessible to your own processes. Then I could type http://localhost:8000/path/to/secret/stats.png and see all your valuable secret graphs, or even modify something. This is known as server-side request forgery or SSRF, and it’s not trivial to defend against. You can try parsing the URL and checking the hostname against a blacklist, or explicitly resolving the hostname and making sure it doesn’t point to any of your machines or networks (including 127.0.0.0/8).
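A rough sketch of the "resolve and check" idea (Python 3 stdlib shown; the ipaddress backport behaves the same on Python 2). This is still not a complete defence: it doesn't handle redirects, IPv6-only hosts, or DNS answers that change between the check and the fetch.

import socket
import ipaddress
from urllib.parse import urlparse  # urlparse on Python 2

def points_somewhere_private(url):
    host = urlparse(url).hostname
    addr = ipaddress.ip_address(socket.gethostbyname(host))
    return addr.is_private or addr.is_loopback or addr.is_link_local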
Then of course, there is the problem of validating that the file you receive is actually an image, not an HTML file or a Windows executable. But this is common to the upload scenario as well.
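One common sanity check for that is to ask PIL/Pillow whether it can parse the bytes at all (a sanity check, not a security guarantee):

from PIL import Image
from io import BytesIO

def looks_like_an_image(data):
    try:
        Image.open(BytesIO(data)).verify()  # raises if PIL can't parse it
        return True
    except Exception:
        return False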
The question is: imagine that I want to create a deploy script which uses the 'fabric' deploy library and has to specify the FTP credentials of the server you want to deploy to. The idea is that I would like to store this script on our testing server, and from that server it will deploy remotely to other servers. I would like to create a user account for each developer, but I don't want to share the FTP credentials with them; rather, I want to give them only the executable. So if I create a Python executable and add it to /usr/bin, for instance, they will be able to execute it, but with a simple 'which mycommand' they can also find the source, which contains the credentials. What can I do to avoid this?
Thanks!!
If you care about security, you probably should be using scp or sftp instead. These can be set up to not require any keystrokes, while still having decent security. For more see: http://www.debian-administration.org/articles/152
However, if you really want/need to use ftp, you probably should put the credentials in a file and chmod it to mode 400: r-------- or perhaps 440: r--r-----. Embedding credentials in a script isn't a great thing.
Put the credentials in a file to which the individual developers have no access.
Create an account that DOES have access to that file but DOES NOT allow interactive logons.
Create your FTP submission program and make it runnable by this second account (a sketch of such a wrapper follows below).
Put all the developers in a group (e.g. "Devs").
Add an entry in the sudoers file to allow members of the "Devs" group to run the FTP program without additional authentication. This will be something like:
%Devs ALL=(ALL) NOPASSWD: /path/to/FTPscript
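The wrapper itself can be as simple as the sketch below, which reads the credentials from a file only the deploy account can read (the path and the section/option names are placeholders, and plain ftplib is used for the upload):

from ConfigParser import ConfigParser  # configparser on Python 3
from ftplib import FTP

config = ConfigParser()
config.read('/etc/deploy/ftp.ini')  # chmod 400, owned by the deploy account

ftp = FTP(config.get('ftp', 'host'))
ftp.login(config.get('ftp', 'user'), config.get('ftp', 'password'))
# ... STOR the build artefacts here ...
ftp.quit()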
I've got a website that I wrote in Python using CGI. This was great up until very recently, when the ability to scale became important.
I decided, because it was very simple, to use mod_python. Most of the functionality of my site is stored in a python module which I call to render the various pages. One of the CGI scripts might look like this:
#!/usr/bin/python
import mysite
mysite.init()
mysite.foo_page()
mysite.close()
and in mysite, I might have something like this:
def get_username():
    cookie = Cookie.SimpleCookie(os.environ.get("HTTP_COOKIE", ""))
    sessionid = cookie['sessionid'].value
    ip = os.environ['REMOTE_ADDR']
    # pseudocode: select username from sessions where ip = %foo and session = %bar
    username = lookup_username(ip, sessionid)  # stand-in for the real DB query
    return username
to fetch the current user's username. The problem is that this depends on os.environ getting populated when os is imported by the script (at the top of the module). Because I'm now using mod_python, the interpreter only loads this module once, and only populates os.environ once. I can't read cookies because its os module holds the environment variables of the local machine, not of the remote user's request.
I'm sure there is a way around this, but I'm not sure what it is. I tried re-importing os in the get_username function, but no dice :(.
Any thoughts?
Which version of mod_python are you using? mod_python 3.x includes a separate Cookie class to make this easier (see here).
Under earlier versions, IIRC, you can get the incoming cookies from the headers_in member of the request object.
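With mod_python 3.x it would look roughly like the sketch below. The key point is to pass the request object down instead of relying on os.environ; lookup_username stands in for your existing DB query.

from mod_python import Cookie

def get_username(req):
    cookies = Cookie.get_cookies(req)      # parsed from this request's headers
    sessionid = cookies['sessionid'].value
    ip = req.connection.remote_ip          # instead of os.environ['REMOTE_ADDR']
    return lookup_username(ip, sessionid)  # hypothetical: your existing DB lookup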