What is the easiest way to write a simple Python daemon/server-side program that, in a reasonably secure way, processes incoming messages from an email account? For example, if you have an account 'foo@bar.org' and you have the username/password for it, you want the program to be able to read the contents of incoming email and save them to a database (e.g. with sqlite) in Python. What's the best framework/library for doing this? It sounds like it might be overkill to use Django for something so simple -- can it be done purely with the Python standard library?
For accessing mailboxes there are Python's poplib (http://docs.python.org/2/library/poplib.html) and imaplib (http://docs.python.org/2/library/imaplib.html).
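As a rough illustration, fetching messages with poplib and the standard email module can look something like the sketch below (the host name and credentials are placeholders, not anything from the question):

```python
import poplib
from email import message_from_bytes  # use email.message_from_string on Python 2

# Placeholder host and credentials -- substitute your own.
POP_HOST = "pop.example.org"
USERNAME = "foo@bar.org"
PASSWORD = "secret"

def fetch_messages():
    """Return a list of email.message.Message objects from the mailbox."""
    conn = poplib.POP3_SSL(POP_HOST)
    conn.user(USERNAME)
    conn.pass_(PASSWORD)
    messages = []
    num_messages = len(conn.list()[1])
    for i in range(1, num_messages + 1):
        # retr() returns (response, list_of_line_bytes, octets)
        lines = conn.retr(i)[1]
        messages.append(message_from_bytes(b"\r\n".join(lines)))
    conn.quit()
    return messages
```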
Then there is Lamson (http://lamsonproject.org/), which is excellent not only for sending and receiving mail; it can also help you with parsing messages and detecting whether they are spam -- look into Lamson's code to see exactly what you can do with it.
Then there are many examples of Python daemons that you can run periodically to pick up mail using poplib/imaplib and then save it somewhere using SQLAlchemy, Django, or whatever; a minimal polling sketch follows.
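For instance, a bare-bones polling loop that stores message subjects and bodies in sqlite might look like this (it reuses a fetch_messages() helper like the one sketched above; the table and column names are made up for the example):

```python
import sqlite3
import time

def store_messages(db_path="mail.db", interval=300):
    """Poll the mailbox every `interval` seconds and save messages to sqlite."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS messages (subject TEXT, body TEXT)")
    while True:
        for msg in fetch_messages():  # helper sketched above
            body = msg.get_payload() if not msg.is_multipart() else ""
            conn.execute(
                "INSERT INTO messages (subject, body) VALUES (?, ?)",
                (msg.get("Subject", ""), body),
            )
        conn.commit()
        time.sleep(interval)
```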
Or you could skip the hand-rolled daemon and instead create a small Django project for all of that. Combined with Celery (https://pypi.python.org/pypi/django-celery), you can build an excellent daemonized backend for accessing a mailbox via POP or IMAP and saving the results to your own database.
I've got a question concerning Python and MySQL. I come from a PHP background and I am wondering what the workflow is like in Python. I can't find any good answers on the web and hope somebody can help me understand this. So let me quickly explain what I'm stuck with:
In PHP I did a lot of little things in combination with MySQL, meaning loading data from a database and writing to it. As long as the server on which the PHP files were stored was set up correctly, it was safe to do that. The connection details for the database, including the username, server name, password, and database name, were saved in the PHP file. Since PHP files are stored on the server and their source code is never shown to the user, the user couldn't see the authentication data used to connect to the database.
Now I am wondering how that whole concept can be transferred to Python in a secure way, so that the user can't see the authentication data in the source code.
I plan to write a Python program in which the user has to authenticate. Let's assume I created a MySQL database on a web server and the user can log in from the Python program. As soon as the user clicks the login button, a connection to the web database is made. That would mean that in my source code I need to write down the necessary data, like the username, password, database name, and server name for that specific database. Here is my question: wouldn't that mean that everybody could see that authentication data, which would be very insecure? Even if the user only has a .pyc file, he could decompile it and recover the .py source, in which he could see all that very sensitive data.
So I was wondering how to securely hide that authentication data from the user who will later use my Python program.
As a Pythoneer who worked in PHP a long time ago, I think I know what you are after.
First of all, the HTML code will not contain any database credentials unless you put them into the HTML view. An easy way of structuring what goes into the HTML views is to use a framework like Django: it handles the MVC structure of web applications and manages the connections to databases.
If you want your database credentials to be very safe, you can have your web application ask for them at startup, so they are never written down in a file. Keep them secure using KeePass or a similar password manager.
This way they are also never checked in to any version control system, which is the most common place for database passwords to leak.
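A minimal sketch of prompting for the credentials at startup, assuming a MySQLdb-style DB-API driver (the prompt wording, host, and variable names are just illustrative):

```python
import getpass

import MySQLdb  # or any other DB-API 2.0 driver

def connect_from_prompt(host, db_name):
    """Ask the operator for credentials interactively instead of hard-coding them."""
    user = input("Database user: ")                    # raw_input() on Python 2
    password = getpass.getpass("Database password: ")  # not echoed to the terminal
    return MySQLdb.connect(host=host, user=user, passwd=password, db=db_name)
```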
If you are a newbie to web app programming in Python, I would suggest following a Django tutorial; it should help you get on track.
This question is a bit far-fetched (I don't even know if the way I'm going about doing this is correct).
I have a script that gathers some information on a computer. The intent is to have that script transfer (FTP/SFTP/whatever) some data to a remote server. This script is also intended to be distributed among many people.
Is it possible to hide the remote server's username/password in the script (or perhaps even the implementation details)? I was thinking of encoding them in some way. Any suggestions?
Also, in compiled languages like Java or C, is it safe to just distribute a compiled version of the code?
Thanks.
The answer is no. You can't put the authentication details into the program and make it impossible for users to get those same authentication details. You can try to obfuscate them, but it is not possible to ensure that they cannot be read.
Compiling the code will not even obfuscate them very much.
One approach to the problem would be to implement a REST web interface and supply each distribution of the program with an API key of some sort. Then set up the program to connect to the interface over SSL using its key and post whatever information it needs there. You could then track which version is connecting from where and limit each distribution of the program to updating a restricted set of resources on the server. Furthermore, you could use server-side heuristics to guess whether an API key has leaked and block the account if that occurs.
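For example, the client side of that could look something like the following sketch (the endpoint URL, header name, and API key are hypothetical; the details depend on how you design the service):

```python
import requests

API_URL = "https://collector.example.com/api/v1/reports"  # hypothetical endpoint
API_KEY = "distribution-specific-key"                     # issued per distribution

def submit_report(payload):
    """POST collected data to the server, identifying this distribution by its key."""
    response = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": "Bearer " + API_KEY},
        timeout=10,
        # verify=True is the default, so the TLS certificate is checked.
    )
    response.raise_for_status()
    return response.json()
```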
Another way, if all of the hosts/users of the program are trusted, would be to set up user accounts on a server node and have each script authenticate with its own username and password or SSH key. Your server node would then have to restrict access based on what each user is allowed to update. Using SSH key based authentication lets you avoid leaving passwords lying around while still allowing authenticated access to your server; a sketch with Paramiko is shown below.
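If you go the SSH route, uploading over SFTP with per-user key authentication could look roughly like this (host, paths, username, and key location are placeholders):

```python
import os

import paramiko

def upload_report(local_path, remote_path,
                  host="reports.example.com",
                  username="collector",
                  key_file="~/.ssh/id_rsa"):
    """Upload a file over SFTP, authenticating with an SSH key instead of a password."""
    client = paramiko.SSHClient()
    client.load_system_host_keys()
    client.connect(host, username=username,
                   key_filename=os.path.expanduser(key_file))
    sftp = client.open_sftp()
    try:
        sftp.put(local_path, remote_path)
    finally:
        sftp.close()
        client.close()
```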
Just set the name to "username" and password to "password", and then when you give it to your friends, provision an account/credential that's only for them, and tell them to change the script and be done with it. That's the best/easiest way to do this.
To add to jmh's answer and address another part of your question: it is possible to decompile Java .class bytecode and get back almost exactly what the .java file contained, so compiling won't help you there. C is more difficult to piece back together, but again, it's certainly possible.
I sometimes compress credentials with zlib and compile the module to a .pyo file.
That only protects against "open it in an editor and press Ctrl+F", and only from non-programmers.
Sometimes I have also used PGP encryption.
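To be clear, that is obfuscation, not security; anyone who can read the code can reverse it. A minimal sketch of the idea (the credential string is obviously a placeholder):

```python
import zlib

# In practice you would paste the output of zlib.compress(b"dbuser:dbpassword")
# here as a bytes literal; compressing inline just keeps the sketch runnable.
_BLOB = zlib.compress(b"dbuser:dbpassword")

def get_credentials():
    """Recover the user/password pair -- trivially reversible by anyone with the file."""
    user, password = zlib.decompress(_BLOB).decode("ascii").split(":", 1)
    return user, password
```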
I have a Plone instance (4.2.2) running and need to automate the creation of user accounts. I'd really like to do this using an external Python script or some other Linux-based command line utility (such as "curl").
If I use curl, I can authenticate to gain access to the "##new_user" page, but I can't seem to get the right POST setup in the headers.
If I don't use curl and use a Python script instead, are there any utilities or libraries that can do this? I've tried using libraries such as Products.CMFCore.utils.getToolByName and getting the "portal_registration" - but I can't seem to get that to work in a regular script (one that has no request/context).
This script needs to run once every N minutes on the server (the user information is grabbed from an external database). I also need there to be no initial password (instead, the option to email the user so they can set their own), and I need to add the user to a pre-defined group.
Are there any suggestions -- perhaps another utility or built-in library that would better suit these requirements?
There are many scripts floating around for batch imports of users. A good one is Tom Lazar's CSV import: http://plone.org/documentation/kb/batch-adding-users . Tom's script could very easily be adapted to run as a Zope "run" script if you need to run it from the command line, say as a cron job. "Run" scripts are scripts executed via a command like "bin/client5 run my_script.py" against a Zope or ZEO client instance; a rough sketch of one is shown below.
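As a rough sketch only (it assumes a Plone site with id 'Plone', an admin user named 'admin', and an existing group 'imported-users'; check the exact tool APIs against your Plone version before relying on this):

```python
# Run with: bin/client5 run add_user.py
import transaction
from AccessControl.SecurityManagement import newSecurityManager
from Products.CMFCore.utils import getToolByName

portal = app.Plone  # `app` is provided by the run-script environment
admin = app.acl_users.getUser('admin').__of__(app.acl_users)
newSecurityManager(None, admin)

registration = getToolByName(portal, 'portal_registration')
groups = getToolByName(portal, 'portal_groups')

user_id = 'jdoe'  # in your case, pulled from the external database
registration.addMember(
    user_id,
    registration.generatePassword(),  # random password; the user resets it via email
    properties={'username': user_id,
                'email': 'jdoe@example.com',
                'fullname': 'Jane Doe'},
)
groups.addPrincipalToGroup(user_id, 'imported-users')
# registeredNotify renders a mail template and may need a real REQUEST;
# wrap `app` with Testing.makerequest.makerequest(app) if it complains.
registration.registeredNotify(user_id)
transaction.commit()
```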
However, a strategy like this is one-way. It sounds from the conversation you had with Martijn like you might be better off creating a Pluggable Authentication Service (PAS) plugin to provide a connection to your external database source. The great thing about this strategy is that you can determine very precisely which user properties come from which source (the external DB or Plone's membership properties). Docs (as MJ indicated) are at http://plone.org/documentation/manual/developer-manual/users-and-security/pluggable-authentication-service/referencemanual-all-pages .
Make sure to look and see if there is already a plugin written for your external data source. There are already plugins for many auth schemes and dbs (like LDAP and SQLAlchemy).
I have a program that I wrote in Python that collects data. I want to be able to store the data on the internet somewhere and allow another user to access it from another computer anywhere in the world with an internet connection. My original idea was to use an e-mail account, such as Gmail, to store the data by sending pickled strings to the address. Anyone could then access the account and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds, so the method fell through because of the limit Gmail puts on e-mails, among other reasons (for example, I was unable to completely delete old e-mails).
Now I want to try a different idea, but I do not know very much about network programming with Python. I want to set up a web page with essentially nothing on it. The "master" program, the program actually collecting the data, will send a pickled string to the page. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the page. It would be preferable to be able to store multiple strings, so there is no chance of the master updating while a remote is reading.
I do not know if this is a feasible task in Python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears (well, eyes in this case).
I would suggest taking a look at setting up a simple site on Google App Engine. It's free and you can build the site in Python. Then it would just be a matter of creating a simple RESTful service that you could send a POST to with your pickled data and store it in a database, plus a simple web front end onto that database.
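A very rough sketch of what that handler could look like on App Engine's Python runtime (the URL route, model, and field names are all made up for the example):

```python
import webapp2
from google.appengine.ext import ndb

class Reading(ndb.Model):
    payload = ndb.BlobProperty()                      # raw posted bytes (e.g. a pickle)
    created = ndb.DateTimeProperty(auto_now_add=True)

class StoreHandler(webapp2.RequestHandler):
    def post(self):
        # The master program POSTs its data in the request body.
        Reading(payload=self.request.body).put()
        self.response.write('ok')

    def get(self):
        # Remote programs fetch the newest stored value.
        latest = Reading.query().order(-Reading.created).get()
        self.response.write(latest.payload if latest else '')

app = webapp2.WSGIApplication([('/data', StoreHandler)])
```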
Another option in addition to what Casey already provided:
Set up a remote MySQL database somewhere with user access levels that allow remote connections. Your Python program could then simply connect to the database and INSERT the data you're trying to store centrally (e.g. through the MySQLdb or pyodbc packages). Your users could then either read the data through any client that supports MySQL, or you could write a simple front end in Python or PHP that displays the data from the database.
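For example, the master program's insert could be as small as this sketch (the host, credentials, and table layout are placeholders):

```python
import MySQLdb

def store_reading(value):
    """Insert one collected value into a central MySQL table."""
    conn = MySQLdb.connect(host="db.example.com", user="collector",
                           passwd="secret", db="telemetry")
    try:
        cursor = conn.cursor()
        # Parameterised query -- never build SQL by string formatting.
        cursor.execute("INSERT INTO readings (value) VALUES (%s)", (value,))
        conn.commit()
    finally:
        conn.close()
```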
Adding this as an answer so that OP will be more likely to see it...
Make sure you consider security! If you just blindly accept pickled data, it can open you up to arbitrary code execution.
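One common way around that is to exchange JSON instead of pickles, since loading JSON cannot execute code; a minimal sketch:

```python
import json

# On the master side: serialize plain data structures instead of pickling objects.
payload = json.dumps({"temperature": 21.5, "timestamp": "2013-04-01T12:00:00"})

# On the remote side: parsing JSON never runs arbitrary code, unlike pickle.loads().
data = json.loads(payload)
print(data["temperature"])
```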
I suggest you use a good middleware such as ZeroC ICE, Pyro4, or Twisted.
Note that Pyro4 uses pickle to serialize data.
To start off, this desktop app is really an excuse for me to learn Python and how a GUI works.
I'm trying to help my clients visualize how much bandwidth they are going through, when it's happening, and where their visitors are. All of this would be displayed with graphs or whatever is most convenient. (Down the road, I'd like to add CPU/memory usage.)
I was thinking the easiest way would be for the app to connect via SFTP, download the specified log, and then use regular expressions to filter out the necessary information.
I was thinking of using:
Python 2.6
PySide
Paramiko
to start out with. I was looking at Twisted for the SFTP part, but I thought maybe keeping it simple for now would be a better choice.
Does this seem right? Should I be trying to use SFTP? Or should I try to interact with some subdomain of my site (e.g. app.mysite.com) to push the logs to the client?
How about regular expressions to parse the logs?
SFTP or shelling out to rsync seems like a reasonable way to retrieve the logs; a short Paramiko sketch follows the list below. As for parsing them, regular expressions are what most people tend to use. However, there are other approaches too. For instance:
Parse Apache logs to SQLite database
Using pyparsing to parse logs. This one is parsing a different kind of log file, but the approach is still interesting.
Parsing Apache access logs with Python. The author actually wrote a little parser, which is available in an apachelogs module.
You get the idea.
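Here is the Paramiko sketch mentioned above: downloading a log over SFTP and pulling request paths out with a regular expression (the host, credentials, paths, and log pattern are all assumptions based on a common Apache access-log format):

```python
import re

import paramiko

LOG_LINE = re.compile(
    r'"(?:GET|POST) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3}) (?P<bytes>\d+|-)'
)

def download_log(host, username, password, remote_path, local_path):
    """Fetch a log file from the server over SFTP."""
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a first prototype
    client.connect(host, username=username, password=password)
    sftp = client.open_sftp()
    try:
        sftp.get(remote_path, local_path)
    finally:
        sftp.close()
        client.close()

def summarize(local_path):
    """Count bytes served per request path."""
    totals = {}
    with open(local_path) as log:
        for line in log:
            match = LOG_LINE.search(line)
            if match and match.group("bytes") != "-":
                path = match.group("path")
                totals[path] = totals.get(path, 0) + int(match.group("bytes"))
    return totals
```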