Saving emails for later viewing - Python

I need to save the emails I receive so that the user can view them later on. They need to be saved in such a way that the images will remain even if their links are broken (e.g. for images that are linked rather than attached, upload them to S3 and change the links to point to the copies).
Can anyone recommend a library that will help me achieve that?
I was thinking of two approaches:
1) Save the email to PDF - but I have no idea how to make it correctly include the images.
2) Save the original email and render it on the client, but then it does not show the attached images.
Either one will do, with a preference for the first option. If it's the first, I can write it on my RoR server or as an external Python service. If it's the second, I have to write it to work on RoR.
I am aware that this question is similar to: Best way to save email, including images and HTML data, using Java Mail API?
but I need to do it on Rails not Java.
Thank you!

Why not just have an auto-forwarder to a separate account? That way they would effectively be bcc'd on everything you get. I know Gmail can easily do that with filters.
Another option is forwarding the emails to a 'read it later' service and letting their API do the heavy lifting. I'm not sure if they keep the attachments, but it is worth a look.
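Going back to the approach described in the question (uploading linked images to S3 and rewriting the links), a rough Python sketch might look like the following. It assumes the requests, beautifulsoup4, and boto3 packages; the bucket name and key scheme are made up.

    import email
    import hashlib

    import boto3
    import requests
    from bs4 import BeautifulSoup

    s3 = boto3.client("s3")
    BUCKET = "archived-email-images"  # hypothetical bucket name

    def archive_html(raw_message_bytes):
        """Return the email's HTML with remotely linked images re-hosted on S3."""
        msg = email.message_from_bytes(raw_message_bytes)
        for part in msg.walk():
            if part.get_content_type() != "text/html":
                continue
            charset = part.get_content_charset() or "utf-8"
            soup = BeautifulSoup(part.get_payload(decode=True).decode(charset),
                                 "html.parser")
            for img in soup.find_all("img", src=True):
                src = img["src"]
                if not src.startswith(("http://", "https://")):
                    continue  # leave cid: attachments and data: URIs alone
                resp = requests.get(src, timeout=10)
                resp.raise_for_status()
                key = hashlib.sha256(src.encode()).hexdigest()
                s3.put_object(Bucket=BUCKET, Key=key, Body=resp.content,
                              ContentType=resp.headers.get("Content-Type",
                                                           "application/octet-stream"))
                img["src"] = "https://%s.s3.amazonaws.com/%s" % (BUCKET, key)
            return str(soup)  # store this alongside the raw message
        return None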

Related

Rewrite Mail Body in Outlook through Python Script

Our company has recently added an automatic filter to all incoming emails (O365) that rewrites URLs to point at a redirection service that is both unreliable and buggy. Additionally, I hate it when a link is replaced by some garbage service I have not opted into. Imagine it like a man-in-the-middle attack on your inbox.
So I wrote a Python script that can replace the rewritten links with the original ones. Currently I save the email contents to a file, run the script, and then copy and paste the output back. Evidently, this is not a good solution.
What I am trying to achieve is a way to do this automatically from inside Outlook, but I am open to other suggestions. I would prefer a solution that uses Python since I'm comfortable with it, but if a different approach can handle this problem I'm open to that as well.
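One hedged sketch of doing this from inside Outlook is to drive the running Outlook client from Python via its COM interface (requires pywin32). The fix_links pattern below is a hypothetical stand-in for the poster's existing URL-cleaning logic:

    import re
    from urllib.parse import unquote

    import win32com.client

    def fix_links(html):
        # Hypothetical: recover the original target from a ?url= parameter
        # added by a "safe links"-style redirector.
        return re.sub(r'https://redirector\.example\.com/\?url=([^"&]+)',
                      lambda m: unquote(m.group(1)), html)

    outlook = win32com.client.Dispatch("Outlook.Application")
    inbox = outlook.GetNamespace("MAPI").GetDefaultFolder(6)  # 6 = olFolderInbox

    for item in inbox.Items:
        if item.Class != 43:  # 43 = olMail; skip meeting requests etc.
            continue
        cleaned = fix_links(item.HTMLBody)
        if cleaned != item.HTMLBody:
            item.HTMLBody = cleaned
            item.Save()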

Automatic email sending from a Gmail account using a script

I need to send an email with the same content to 1 million users.
Is there any way to do so by writing a script?
The email addresses are stored in an Excel file.
It is absolutely possible to build a bot that creates Gmail accounts; in fact, many already exist. The main problem is solving the CAPTCHA required for each new account, though there are services built to handle this. The remaining problem is being willing to violate Google's terms of service, which I'm sure this does in one way or another.
This tutorial will help you achieve what you want:
http://naelshiab.com/tutorial-send-email-python/
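Building on that tutorial's approach, here is a minimal sketch that reads addresses from an Excel sheet with openpyxl and sends through Gmail's SMTP server (credentials and filename are placeholders). Keep in mind that Gmail enforces daily sending limits far below a million messages, so a bulk-mail provider is the realistic option at that scale.

    import smtplib
    from email.mime.text import MIMEText

    from openpyxl import load_workbook

    SENDER = "you@gmail.com"            # placeholder
    PASSWORD = "app-specific-password"  # placeholder

    # Assumes the addresses are in the first column of the active sheet.
    wb = load_workbook("recipients.xlsx")
    addresses = [row[0].value for row in wb.active.iter_rows(max_col=1)
                 if row[0].value]

    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(SENDER, PASSWORD)
        for addr in addresses:
            msg = MIMEText("The same content for every recipient.")
            msg["Subject"] = "Hello"
            msg["From"] = SENDER
            msg["To"] = addr
            server.send_message(msg)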

Sending data through the web to a remote program using Python

I have a program that I wrote in Python that collects data. I want to be able to store the data on the internet somewhere and allow another user to access it from another computer anywhere in the world with an internet connection. My original idea was to use an e-mail client, such as Gmail, to store the data by sending pickled strings to the address. This would allow anyone to access the address and simply read the newest e-mail to get the data. It worked perfectly, but the program requires a new e-mail to be sent every 5-30 seconds. So the method fell through because of the limits Gmail places on e-mail, among other reasons, such as being unable to completely delete old e-mails.
Now I want to try a different idea, but I do not know very much about network programming with Python. I want to set up a webpage with essentially nothing on it. The "master" program, the one actually collecting the data, will send a pickled string to the webpage. Then any of the "remote" programs will be able to read the string. I will also need the master program to delete old strings as it updates the webpage. It would be preferable to store multiple strings, so there is no chance of the master updating while a remote is reading.
I do not know if this is a feasible task in Python, but any and all ideas are welcome. Also, if you have any ideas on how to do this a different way, I am all ears (well, eyes in this case).
I would suggest taking a look at setting up a simple site on Google App Engine. It's free, and you can use Python to build the site. Then it would just be a matter of creating a simple RESTful service that you could send a POST to with your pickled data and store it in a database. Then just create a simple web front end onto the database.
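A rough sketch of that idea using webapp2 and App Engine's old db API (model, handler, and route names are made up): the master POSTs each reading, and the remotes GET the newest one.

    import webapp2
    from google.appengine.ext import db

    class Reading(db.Model):
        payload = db.TextProperty()
        created = db.DateTimeProperty(auto_now_add=True)

    class DataHandler(webapp2.RequestHandler):
        def post(self):
            # The master program POSTs its serialized data here.
            Reading(payload=self.request.body.decode("utf-8")).put()

        def get(self):
            # Remote programs fetch the newest reading.
            latest = Reading.all().order("-created").get()
            self.response.write(latest.payload if latest else "")

    app = webapp2.WSGIApplication([("/data", DataHandler)])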
Another option in addition to what Casey already provided:
Set up a remote MySQL database somewhere with user access levels that allow remote connections. Your Python program could then simply access the database and INSERT the data you're trying to store centrally (e.g. through the MySQLdb or pyodbc packages). Your users could then either read the data through a client that supports MySQL, or you could write a simple front end in Python or PHP that displays the data from the database.
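A minimal sketch of the INSERT side with MySQLdb (host, credentials, and table are placeholders):

    import MySQLdb

    conn = MySQLdb.connect(host="db.example.com", user="collector",
                           passwd="secret", db="telemetry")
    cur = conn.cursor()
    # Parameterized query keeps you safe from SQL injection.
    cur.execute("INSERT INTO readings (recorded_at, value) VALUES (NOW(), %s)",
                (42.0,))
    conn.commit()
    cur.close()
    conn.close()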
Adding this as an answer so that OP will be more likely to see it...
Make sure you consider security! If you just blindly accept pickled data, it can open you up to arbitrary code execution.
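One simple way around that risk is to serialize with JSON instead: json.loads can only ever reconstruct plain data (dicts, lists, strings, numbers), never executable code.

    import json

    record = {"sensor": "temp-1", "value": 21.5}
    wire = json.dumps(record)    # safe to send over the network
    restored = json.loads(wire)  # safe to call on untrusted input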
I suggest you use good middleware such as ZeroC Ice, Pyro4, or Twisted.
Note that Pyro4 can use pickle to serialize data, so the security caveat above applies.
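For a sense of what Pyro4 looks like, here is a minimal server sketch (class and method names are made up); a client connects with Pyro4.Proxy(uri) and calls the same methods remotely.

    import Pyro4

    @Pyro4.expose
    class DataStore(object):
        def __init__(self):
            self._latest = None

        def put(self, value):
            self._latest = value

        def get(self):
            return self._latest

    daemon = Pyro4.Daemon(host="0.0.0.0")  # listen for remote clients
    uri = daemon.register(DataStore())
    print(uri)  # hand this URI to the remote programs
    daemon.requestLoop()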

Retrieving my own data via the Facebook API

I am building a website for a comedy group that uses Facebook as one of its marketing platforms; one of the requirements for the new site is to display all of their Facebook events on a calendar.
Currently, I am just trying to put together a Python script that can pull some data from my own Facebook account, like a list of all my friends. I presume that once I can accomplish this, I can move on to pulling more complicated data out of my client's account (since they have given me access to it).
I have looked at many of the posts here and also went through the Facebook API documentation, including Facebook Connect, but am really beating my head against the wall. Everything I have read seems like overkill, as it involves setting up a good deal of infrastructure to allow my app to connect to any arbitrary user's account (once that user authorizes me). Shouldn't it be much simpler, given that I only ever need to access one account?
I cannot find a way to retrieve data without having to display the Facebook login window. I have a script which will retrieve all my friends, but it includes a redirect where I have to physically log myself in to Facebook.
Would appreciate any advice or links, I just feel like I must be missing something simple.
Thank you!
Just posting up my notes on the successful advice, should others find this post.
Per Daniel's and William's advice, I obtained the right permissions using the Connect options. From William, this link explains how Facebook authentication works:
https://developers.facebook.com/docs/authentication/
This section on setting up the actual authentication was most helpful to me:
http://developers.facebook.com/docs/api
Basically, it goes as follows:
Post a link to the following URL. A user will need to physically click on it (even if that user is just you, the site admin):
https://graph.facebook.com/oauth/authorize?client_id=YOUR_CLIENT_ID&redirect_uri=http://www.example.com/HANDLER
This will redirect to a Facebook login, which will return to http://www.example.com/HANDLER after the user authenticates. If you wish to do more than basic reads and news feed updates, you will need to include this variable in the above link: scope=offline_access,user_photos. The scope variable takes a comma-separated list of values, which Facebook will explicitly tell the authenticating user about during the login process and which they will have to OK. Most helpful for me was the offline_access flag (user_photos lets you get at their photos too), since it lets me pull content without someone logging in regularly (so long as I store the access token obtained later).
Have a script located at http://www.example.com/HANDLER that will take a variable from the request (Facebook will redirect to http://www.example.com/HANDLER?code=YOUR_CODE after authentication). Your handler needs to pull out the code variable and then send the following request:
https://graph.facebook.com/oauth/access_token?
client_id=YOUR_CLIENT_ID&
redirect_uri=http://www.example.com/HANDLER&
client_secret=YOUR_SECRET_KEY&
code=YOUR_CODE
This request will return a string of the form access_token=YOUR_ACCESS_TOKEN.
Just parse off the 'access_token=' prefix, and you will have a token that you can use to access the Facebook Graph API, in requests like:
http://graph.facebook.com/me/friends?access_token=YOUR_ACCESS_TOKEN
This will return a JSON object containing all of your friends.
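Putting those steps together, a rough sketch of the token exchange and friends request with the requests library (the app id, secret, and handler URL are the placeholders from above):

    import requests

    CLIENT_ID = "YOUR_CLIENT_ID"
    CLIENT_SECRET = "YOUR_SECRET_KEY"
    REDIRECT_URI = "http://www.example.com/HANDLER"

    def exchange_code_for_token(code):
        resp = requests.get("https://graph.facebook.com/oauth/access_token",
                            params={"client_id": CLIENT_ID,
                                    "redirect_uri": REDIRECT_URI,
                                    "client_secret": CLIENT_SECRET,
                                    "code": code})
        # Body looks like "access_token=TOKEN" (possibly with "&expires=...").
        fields = dict(pair.split("=", 1) for pair in resp.text.split("&"))
        return fields["access_token"]

    def get_friends(token):
        resp = requests.get("https://graph.facebook.com/me/friends",
                            params={"access_token": token})
        return resp.json()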
Hope this saves someone else some unpleasant time straining through documentation. Thanks for the help!
It is true that Facebook's API is targeted at developers who are creating apps that will be used by many users.
Thankfully, the new Graph API is much simpler to use than its predecessor, and shouldn't be terribly difficult for you to work with without using or creating a lot of underlying infrastructure.
You will need to implement authorization, but this is not difficult, and as long as you prompt the user for the offline_access permission, it'll only need to be done once.
The documentation on Desktop Authentication would probably be most relevant to you at this point, though you might want to move to the JavaScript-based authentication once you've got a web app up and running.
Once the authentication is done, all you're doing is making GET requests to various urls and working with the resulting JSON.
Here's the documentation about Events, and you can get a list of friends from the friends connection of a User.
I'm no expert on Facebook/Facebook Connect; however, I've seen it used and have used applications built with it, and it seems there's really only the 'official' way to do it. I'm afraid your best bet is probably something along the lines of this:
http://wiki.developers.facebook.com/index.php/Connect/Authentication_and_Authorization
Regardless of how you actually 'use' it, you'll still need to authorize the application to connect to the account, and this means having a Facebook app as well.
The answer to Facebook application authentication is hard to find, but it is actually within the "Analytics" page of the Graph API documentation.
Request the following: https://graph.facebook.com/oauth/access_token?grant_type=client_credentials&client_id=yourappid&client_secret=yourappsecret. You will then be given an access_token that you may use on all other calls.
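In Python terms, that call might look like this (using requests; app id and secret are placeholders):

    import requests

    resp = requests.get("https://graph.facebook.com/oauth/access_token",
                        params={"grant_type": "client_credentials",
                                "client_id": "yourappid",
                                "client_secret": "yourappsecret"})
    # Body looks like "access_token=TOKEN".
    app_token = resp.text.split("=", 1)[1]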
The Facebook-provided APIs do NOT currently provide this level of functionality.

App Engine/Python: query the database and send multiple images to the client in response to a single GET request

I am working on a social-network type of application on App Engine, and would like to send multiple images to the client based on a single get request. In particular, when a client loads a page, they should see all images that are associated with their account.
I am using Python on the server side and would like to use JavaScript/jQuery on the client side to decode and display the received images.
The difficulty is that I would like to perform only a single query on the server side (i.e. query for all images associated with a single user) and send all of the resulting images to the client as a single unit, which will then be broken up into the individual images. Ideally, I would like to use something similar to JSON, but while JSON appears to allow multiple "objects" to be sent in a response, it does not appear to allow multiple images (or binary files) to be sent.
Is there another way that I should be looking at this problem, or perhaps a different technology that I should be considering that might allow me to send multiple images to the client in response to a single GET request?
Thank you and Kind Regards
Alexander
The App Engine part isn't much of a problem (as long as the number of images and total size doesn't exceed GAE's limits), but the user's browser is unlikely to know what to do in order to receive multiple payloads per GET request -- that's just not how the web works. I guess you could concatenate all the blobs/bytestreams (together with metadata needed for the client to reconstruct them) and send that (it will still have to be a separate payload from the HTML / CSS / Javascript that you're also sending), as long as you can cajole Javascript into separating the megablob into the needed images again (but for that part you should open a separate question and tag it Javascript, as Python has little to do with it, and GAE nothing at all).
I would instead suggest just accepting the fact that the browser (presumably via ajax, as you mention in tags) will be sending multiple requests, just as it does to every other webpage on the WWW, and focus on optimizing the serving side -- the requests will be very close in time, so you should just use memcache to keep the yet-unsent images to avoid multiple fetch-from-storage requests in your GAE app.
As an improvement to Alex's answer, there's no need to use memcache: Simply do a keys-only query to get a list of keys of images you want to send to the client, then use db.get() to fetch the image corresponding to the required key for each image request. This requires roughly the same amount of effort as a single regular query.
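A sketch of that pattern with webapp2 and the old db API (model, fields, and routes are made up): one keys-only query produces the list of image URLs, and each image request is then a cheap db.get() by key.

    import json

    import webapp2
    from google.appengine.ext import db

    class UserImage(db.Model):
        owner = db.StringProperty()
        data = db.BlobProperty()

    class ImageListHandler(webapp2.RequestHandler):
        def get(self):
            # One keys-only query for all of this user's images.
            keys = (UserImage.all(keys_only=True)
                    .filter("owner =", self.request.get("user"))
                    .fetch(1000))
            self.response.headers["Content-Type"] = "application/json"
            self.response.write(json.dumps(["/image?key=%s" % k for k in keys]))

    class ImageHandler(webapp2.RequestHandler):
        def get(self):
            # Fetch-by-key: no query planning, just a direct datastore get.
            img = db.get(self.request.get("key"))
            self.response.headers["Content-Type"] = "image/png"
            self.response.write(img.data)

    app = webapp2.WSGIApplication([("/images", ImageListHandler),
                                   ("/image", ImageHandler)])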
Trying to send all of the images in one request means that you will be fighting very hard against some of the fundamental assumptions of the web and browser technology. If you don't have a really, really compelling reason to do this, you should consider delivering one image per request. That already works now, no sweat, no effort, no wheels reinvented.
I can't think of a sensible way to do what you ask, but I can tell you that you are asking for pain in trying to implement the solution that you are describing.
Send the client URLs for all the images in one hit, and deal with it on the client. That fits with the design of the protocol, and still lets you only make one query. The client might, if you're lucky, be able to stream those back in its next request, but the neat thing is that it'll work (eventually) even if it can't reuse the connection for some reason (usually a busted proxy in the way).
