Twython - How to update status with media url - python

In my app, I let users post to Twitter. Now I would like to let them update their status with media.
In twython.py I see a method update_status_with_media that reads an image from the filesystem and uploads it to Twitter. My images are not on the filesystem but in an S3 bucket.
How can I make this work with S3 bucket URLs?
Passing the URL in the file_ variable fails with an IOError: no such file or directory.
Passing a StringIO fails with a UnicodeDecodeError.
Passing urllib.urlopen(url).read() gives: file() argument 1 must be encoded string without NULL bytes, not str.
I also tried using the post method and got 403 Forbidden from the Twitter API: Error creating status.
Just Solved it
Bah, just got it to work, finally! Maybe it will help someone else save the few hours it cost me.
from io import BytesIO

import requests
from twython import Twython

twitter = Twython(
    app_key=settings.TWITTER_CONSUMER_KEY,
    app_secret=settings.TWITTER_CONSUMER_SECRET,
    oauth_token=token.token,
    oauth_token_secret=token.secret
)

# Fetch the image bytes from S3 and hand them to Twitter as a file-like object.
img = requests.get(url=image_obj.url).content
tweet = twitter.post('statuses/update_with_media',
                     params={'status': msg},
                     files={'media': (image_obj.url, BytesIO(img))})

Glad to see you found an answer! There's a similar problem that we handled recently in a repo issue. Basically, you can do the following with StringIO, passing it directly to twitter.post like you did:
from StringIO import StringIO
from twython import Twython

t = Twython(...)
img = open('img_url', 'rb').read()
t.post('/statuses/update_with_media',
       params={'status': 'Testing New Status'},
       files={
           'media': StringIO(img)
           # 'media': ('OrThisIfYouWantToNameTheFile.lol', StringIO(img))
       })
This isn't a direct answer to your question, so I'm not expecting any votes, but it seemed useful to some people and somewhat related, so I figured I'd drop a note.

Related

Key Error: 'groups' (Foursquare API explore venues)

KeyError: 'groups'
When I call
results = requests.get(url).json()['response']['groups'][0]['items']
My Foursquare API quota is not exhausted yet, and still this error shows up every time. I even tried running it with a new client ID and client secret, but the problem persists.
I would love a solid solution to this issue so that I can progress further with my project.
It looks like you are parsing the JSON incorrectly for the latest version of the Places API; 'response', 'groups' and 'items' are no longer returned.
You can find all the available fields of the Places search response here: https://developer.foursquare.com/reference/response-fields
I was able to successfully return the first item from a request with the code below. Just replace the API key with yours and it should work.
import requests

url = "https://api.foursquare.com/v3/places/search?ll=41.8789%2C-87.6359&radius=50000"
results = requests.get(url, headers={"Accept": "application/json", "Authorization": "yourApiKey"})
print(results.json()['results'][0])
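If you need more than the first item, you can loop over the results list. A minimal sketch, assuming the request above succeeded and that 'name' is among the fields returned (see the response-fields documentation linked above):

# Iterate over every place returned by the v3 search endpoint.
for place in results.json().get('results', []):
    print(place.get('name'))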

InvalidS3ObjectException: Unable to get object metadata from S3?

So I am trying to use Amazon Textract to read multiple PDF files, each with multiple pages, using the StartDocumentTextDetection method as follows:
import random

import boto3

client = boto3.client('textract')
s3 = boto3.resource('s3')
textract_bucket = s3.Bucket('my_textract_console-us-east-2')

for s3_file in textract_bucket.objects.all():
    print(s3_file)
    response = client.start_document_text_detection(
        DocumentLocation={
            "S3Object": {
                "Bucket": "my_textract_console_us-east-2",
                "Name": s3_file.key,
            }
        },
        ClientRequestToken=str(random.randint(1, 1e10)))
    print(response)
    break
When just trying to retrieve the response object from s3, I'm able to see it printed out as:
s3.ObjectSummary(bucket_name='my_textract_console-us-east-2', key='C:\\Users\\My_User\\Documents\\Folder\\Sub_Folder\\Sub_sub_folder\\filename.PDF')
Correspondingly, I'm using that s3_file.key to access the object later. But I'm getting the following error that I can't figure out:
InvalidS3ObjectException: An error occurred (InvalidS3ObjectException) when calling the StartDocumentTextDetection operation: Unable to get object metadata from S3. Check object key, region and/or access permissions.
So far I have:
Checked the region from the boto3 session; both the bucket and AWS configuration settings are set to us-east-2.
The key cannot be wrong; I'm passing it directly from the object response.
Permissions-wise, I checked the IAM console and have AmazonS3FullAccess and AmazonTextractFullAccess attached.
What could be going wrong here?
[EDIT] I did rename the files so that they no longer contain \\, but it seems like it's still not working, which is odd.
I ran into the same issue and solved it by specifying a region in the Textract client. In my case I used us-east-2:
client = boto3.client('textract', region_name='us-east-2')
The clue to do so came from this issue: https://github.com/aws/aws-sdk-js/issues/2714
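Putting it together, a minimal sketch of the loop from the question with the region pinned explicitly on both clients (the bucket name is illustrative and the request-token scheme mirrors the question's):

import random

import boto3

# Pin both the S3 resource and the Textract client to the bucket's region.
s3 = boto3.resource('s3', region_name='us-east-2')
client = boto3.client('textract', region_name='us-east-2')

textract_bucket = s3.Bucket('my_textract_console-us-east-2')
for s3_file in textract_bucket.objects.all():
    response = client.start_document_text_detection(
        DocumentLocation={
            'S3Object': {
                # Reuse the bucket object's name to avoid typos between the two references.
                'Bucket': textract_bucket.name,
                'Name': s3_file.key,
            }
        },
        ClientRequestToken=str(random.randint(1, int(1e10))),
    )
    print(response['JobId'])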

Export spreadsheet as text/csv using Drive v3 gives 500 Internal Error

I was trying to export a Google Spreadsheet in csv format using the Google client library for Python:
import io
from apiclient import http

# OAuth and setups...
req = g['service'].files().export_media(fileId=fileid, mimeType=MIMEType)
fh = io.BytesIO()
downloader = http.MediaIoBaseDownload(fh, req)
# Other file IO handling...
This works for MIME types like application/pdf, MS Excel, etc.
According to Google's documentation, text/csv is supported, but when I try to make the request, the server gives a 500 Internal Error.
Even using Google's Drive API playground, it gives the same error.
Tried:
As in v2, I added a field
gid = 0
to the request to specify the worksheet, but then it returns a bad request.
This is a known bug in Google's code. https://code.google.com/a/google.com/p/apps-api-issues/issues/detail?id=4289
However, if you manually build your own request, you can download the whole file as bytes (the media management helpers won't work).
With file as the file ID and http as the http object that you've authorized against, you can download the file with:
from apiclient.http import HttpRequest

def postproc(*args):
    return args[1]

data = HttpRequest(http=http,
                   postproc=postproc,
                   uri='https://docs.google.com/feeds/download/spreadsheets/Export?key=%s&exportFormat=csv' % file,
                   headers={}).execute()
data here is a bytes object that contains your CSV. You can open it with something like:
import io

lines = io.TextIOWrapper(io.BytesIO(data), encoding='utf-8', errors='replace')
for line in lines:
    # Do whatever with each line
    pass
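If you then want individual fields rather than raw lines, a minimal sketch that feeds the same wrapper through the csv module (assuming data is the bytes object from the request above):

import csv
import io

reader = csv.reader(io.TextIOWrapper(io.BytesIO(data), encoding='utf-8', errors='replace'))
for row in reader:
    print(row)  # each row is a list of column values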
You just need to implement exponential backoff.
Take a look at this documentation of ExponentialBackOffPolicy.
The idea is that the servers are only temporarily unavailable and should not be overwhelmed while they are trying to get back up.
The default implementation backs off for 500 and 503 status codes; subclasses may override it if different status codes are required.
Here is a snippet of an exponential backoff implementation (in Java) from the first link:
ExponentialBackOff backoff = ExponentialBackOff.builder()
    .setInitialIntervalMillis(500)
    .setMaxElapsedTimeMillis(900000)
    .setMaxIntervalMillis(6000)
    .setMultiplier(1.5)
    .setRandomizationFactor(0.5)
    .build();
request.setUnsuccessfulResponseHandler(new HttpBackOffUnsuccessfulResponseHandler(backoff));
You may want to look at this documentation for the summary of the ExponentialBackoff implementation.
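The snippet above is from the Java client; there is no drop-in equivalent in the Python client library, but a minimal hand-rolled sketch of the same idea around the export call might look like this (the retry limit, status-code check, and jitter are assumptions, not part of the library):

import random
import time

from googleapiclient.errors import HttpError

def export_with_backoff(service, file_id, mime_type, max_retries=5):
    # Retry the export on 500/503 with exponentially growing, jittered waits.
    for attempt in range(max_retries):
        try:
            return service.files().export_media(fileId=file_id, mimeType=mime_type).execute()
        except HttpError as err:
            if err.resp.status in (500, 503) and attempt < max_retries - 1:
                time.sleep((2 ** attempt) + random.random())
            else:
                raise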

how to create a downloadable csv file in appengine

I use Python App Engine. I'm trying to create a link on a webpage that a user can click to download a CSV file. How can I do this?
I've looked at the csv module, but it seems to want to open a file on the server, and App Engine doesn't allow that.
I've looked at remote_api, but it seems that it's only for uploading or downloading using the app config, and from the account owner's terminal.
Any help is appreciated.
Pass a StringIO object as the first parameter to csv.writer; then set the content-type and content-disposition on the response appropriately (probably "text/csv" and "attachment", respectively) and send the StringIO as the content.
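A minimal sketch of that approach in a webapp2-style handler (the handler class, rows, and filename are illustrative):

import csv
from StringIO import StringIO

import webapp2

class CsvDownloadHandler(webapp2.RequestHandler):
    def get(self):
        # Build the CSV in memory, then send it as an attachment.
        buf = StringIO()
        writer = csv.writer(buf)
        writer.writerow(['name', 'score'])
        writer.writerow(['foo', 42])
        self.response.headers['Content-Type'] = 'text/csv'
        self.response.headers['Content-Disposition'] = 'attachment; filename=report.csv'
        self.response.out.write(buf.getvalue())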
I used this code:
self.response.headers['Content-Type'] = 'application/csv'
writer = csv.writer(self.response.out)
writer.writerow(['foo', 'foo,bar', 'bar'])
Put it in your handler's get method. When the user requests it, their browser will download the CSV content automatically.
Got from: generating a CSV file online on Google App Engine

Getting HTTP GET variables using Tipfy

I'm currently playing around with tipfy on Google's App Engine and just recently ran into a problem: I can't for the life of me find any documentation on how to use GET variables in my application. I've tried sifting through both tipfy's and Werkzeug's documentation with no success. I know that I can use request.form.get('variable') to get POST variables and **kwargs in my handlers for URL variables, but that's as much as the documentation will tell me. Any ideas?
request.args.get('variable') should work for what I think you mean by "GET data".
Source: http://www.tipfy.org/wiki/guide/request/
The Request object contains all the information transmitted by the client of the application. You will retrieve from it GET and POST values, uploaded files, cookies, header information and more. All these things are so common that you will be very used to them.
To access the Request object, simply import the request variable from tipfy:
from tipfy import request

# GET
request.args.get('foo')

# POST
request.form.get('bar')

# FILES
image = request.files.get('image_upload')
if image:
    # User uploaded a file. Process it.
    # This is the filename as uploaded by the user.
    filename = image.filename
    # This is the file data to process and/or save.
    filedata = image.read()
else:
    # User didn't select any file. Show an error if it is required.
    pass
This works for me (tipfy 0.6):
from tipfy import RequestHandler, Response
from tipfy.ext.session import SessionMiddleware, SessionMixin
from tipfy.ext.jinja2 import render_response
from tipfy import Tipfy

class I18nHandler(RequestHandler, SessionMixin):
    middleware = [SessionMiddleware]

    def get(self):
        language = Tipfy.request.args.get('lang')
        return render_response('hello_world.html', message=language)
