Urllib2 POST request results in 409 conflict error - python

I am calling Google's PubSubHubbub publisher at http://pubsubhubbub.appspot.com from a Django view. I want to fetch all of the YouTube uploads feeds through it. I am sending a POST request to it using urllib2.Request, and I get a 409 Conflict error. I have properly set up the callback URL, and if I POST the same request from python manage.py shell it works perfectly fine. On the production server I am using nginx as a proxy to a gunicorn instance. What could possibly be wrong? Thanks in advance.
>>> response.request
<PreparedRequest [POST]>
>>> response.request.headers
{'Content-Length': u'303', 'Content-Type': 'application/x-www-form-urlencoded', 'Accept-Encoding': 'gzip, deflate, compress', 'Accept': '*/*', 'User-Agent': 'python-requests/1.2.0 CPython/2.6.6 Linux/2.6.18-308.8.2.el5.028stab101.3'}
>>> response.request.body
'hub.verify=sync&hub.topic=http%3A%2F%2Fgdata.youtube.com%2Ffeeds%2Fapi%2Fusers%2FUCVcFOpBmJqkQ4v6Bh6l1UuQ%2Fuploads%3Fv%3D2&hub.lease_seconds=2592000&hub.callback=http%3A%2F%2Fhypedsound.cloudshuffle.com%2Fhub%2F19%2F&hub.mode=subscribe&hub.verify_token=subscribe7367add7b116969a44e0489ad9da45ca8aea4605'
The request body and headers are the same for both generated requests.
Here is the nginx config file:
http://dpaste.org/bOwHO/

It turns out I was using TransactionMiddleware, which does not commit to the database when model.save() is called. The subscription row was therefore not yet committed when the hub made its synchronous verification callback, which was creating the issue.
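For anyone hitting the same problem, here is a minimal sketch of the fix, assuming old-style Django transaction management (the TransactionMiddleware era) and hypothetical subscription/notify_hub names:

from django.db import transaction

subscription.save()        # hypothetical model instance holding the feed subscription
transaction.commit()       # TransactionMiddleware otherwise defers the commit until the response
notify_hub(subscription)   # hypothetical helper that POSTs to pubsubhubbub.appspot.com

Committing before the POST matters because with hub.verify=sync the hub calls the callback URL back before the view returns.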

Related

Is it possible to call a Dash application with an HTTP request and pass it some data?

Is it possible to call a Dash application with HTTP requests and pass some data in the request body?
I have an application, app1, at the link http://127.0.0.1:8050/app1.
I tried to call this URL with the Python requests module like below:
import requests
import json

payload = {'download': True}

def make_request():
    url = "http://127.0.0.1:8050/app1"
    r = requests.get(url, data=json.dumps(payload))
    print(r.request.__dict__)
When my server is running and I call make_request(), the response contains all the HTML for my app, and I can see that my data was properly sent in the request body:
{'_body_position': None,
'_cookies': <RequestsCookieJar[]>,
'body': '{"download": true}',
'headers': {'User-Agent': 'python-requests/2.26.0', 'Accept-Encoding': 'gzip, deflate, br', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '18'},
'hooks': {'response': []},
'method': 'GET',
'url': 'http://127.0.0.1:8050/app1'}
But on the server side, I can't see that the user called the app1 endpoint with a GET request. I tried print(flask.request) in my app, but it doesn't print anything. It seems that nothing happens on the server side.
Is it somehow possible to make GET requests from the outside and receive this information inside the server? Because if I just open http://127.0.0.1:8050/app1 in a web browser, I do see flask.request on the server.
I am not sure what you want to achieve, but in order to act on outside requests you should set up a server route and define a callback, as sketched below. Check this ticket out: Receive data via a rest call using Flask and Dash and update the graphs
Calling GET on the URL of your Dash application just returns the HTML page, the same as your browser does.
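A minimal sketch of that idea, assuming a hypothetical /receive-data endpoint registered on the Flask server that underlies the Dash app:

import dash
from flask import request, jsonify

app = dash.Dash(__name__)
server = app.server  # the underlying Flask instance

@server.route('/receive-data', methods=['POST'])
def receive_data():
    payload = request.get_json(force=True)  # e.g. {"download": true}
    # ... act on the payload here ...
    return jsonify(status='ok', received=payload)

if __name__ == '__main__':
    app.run_server(debug=True)

An outside client can then call requests.post('http://127.0.0.1:8050/receive-data', json={'download': True}) and the handler will fire, unlike a plain GET on the page URL.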

Python client for Compute Engine returns "Required field 'resource' not specified"

I'm trying to create a VM using the python client. The call I'm making is
import googleapiclient.discovery

compute = googleapiclient.discovery.build('compute', 'v1')
compute.instances().insert(
    project='my-project',
    zone='us-central1-c',
    body=config).execute()
(config is a JSON string, available here)
and the response is
<HttpError 400 when requesting https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-c/instances?alt=json
returned "Required field 'resource' not specified">
From this forum post and this Stack Exchange question, it appears the problem is with the REST API headers. However, headers aren't exposed by the Python client, as far as I know.
Is this a bug or is there something else I might be doing incorrectly?
EDIT
Following the error back to googleapiclient.http.HttpRequest, it looks like the HttpRequest object generated by build() has headers
{ 'accept': 'application/json',
'accept-encoding': 'gzip, deflate',
'content-length': '2299',
'content-type': 'application/json',
'user-agent': 'google-api-python-client/1.7.7 (gzip)' }
I tried adding 'resource': 'none' to the headers and received the same response.
After looking at this for a while, I suspect the REST API is expecting a Compute Engine resource to be specified. However, searching for the word "resource" on the official docs yields 546 results.
EDIT2
Created GitHub Issue.
Use the request body (requestBody) instead of resources.
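A hedged reading of that answer: the discovery client expects body to be a Python dict that it serializes itself, so passing an already-encoded JSON string leaves the API without a parseable instance resource. A sketch, assuming the config is stored in a file named config.json:

import json
import googleapiclient.discovery

compute = googleapiclient.discovery.build('compute', 'v1')

with open('config.json') as f:  # assumed filename for the config linked above
    config = json.load(f)       # parse into a dict instead of passing the raw string

compute.instances().insert(
    project='my-project',
    zone='us-central1-c',
    body=config).execute()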

Getting 403 Forbidden requesting Amazon S3 file

I want to get the size of a file on Amazon S3 without having to download it. My attempt has been to send an HTTP HEAD request, since the response should include the Content-Length header.
Here is my code:
import httplib
import urllib

urlPATH = urllib.unquote("/ticket/fakefile.zip?AWSAccessKeyId=AKIAIX44POYZ6RD4KV2A&Expires=1495332764&Signature=swGAc7vqIkFbtrfXjTPmY3Jffew%3D")
conn = httplib.HTTPConnection("cptl.s3.amazonaws.com")
conn.request("HEAD", urlPATH, headers={
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.11 (KHTML, like Gecko) Chrome/23.0.1271.64 Safari/537.11',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Charset': 'ISO-8859-1,utf-8;q=0.7,*;q=0.3',
    'Accept-Encoding': 'none',
    'Accept-Language': 'en-US,en;q=0.8',
    'Connection': 'keep-alive'})
res = conn.getresponse()
print res.status, res.reason
Error message is:
403 Forbidden
To unescape the "%" sequences in the URL I used urllib.unquote, and after getting 403 Forbidden I also tried adding some headers, thinking Amazon might only serve files that appear to be requested by a browser, but I continue to get the 403 error.
Is this a case of Amazon needing particular arguments to service the HTTP request properly, or is my code bad?
OK... I found a solution by using a workaround. My best guess is that curl/wget were missing HTTP headers in the request to S3, so they all failed while the browser worked. I started analyzing the request but didn't finish.
Ultimately, I got it working with the following code:
import urllib
d = urllib.urlopen("S3URL")
print d.info()['Content-Length']
403 Forbidden mildly points to an auth problem. Are you sure your access key and signature are correct?
If there's any doubt, you could always try to get the metadata via Boto3, which handles all the auth for you (pulling credentials from config files or data you've passed in). If it works, you can even turn on debug mode and see what it actually sends that works.
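A sketch of that Boto3 route, using the bucket and key from the question (hedged: it assumes valid credentials are configured for Boto3 to pick up):

import boto3

s3 = boto3.client('s3')  # credentials come from env vars or config files
resp = s3.head_object(Bucket='cptl', Key='ticket/fakefile.zip')
print(resp['ContentLength'])  # size in bytes, no download needed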

Error 500 sending a Python request as Chrome

I am dealing with a little error and cannot find the solution. I authenticated on a page with the Chrome "Inspect/Network" tool open to see which web service is called and how; the request is shown in the screenshot (I have censored sensitive data related to the site). So I have to make this same request using Python, but I always get error 500, and the log on the server side shows nothing helpful (only a Java traceback).
This is the code of the request
response = requests.post(url, data='username=XXXXX&password=XXXXXXX')
url contains the same string you see in the image under the "General/Request URL" label.
data contains the same string you see in the image under "Form Data".
It looks like a very simple request, but I cannot get it to work :(.
Best regards
If you want your request to appear as if it comes from Chrome, then besides sending the correct data you need to specify the headers as well. The reason you got a 500 error is probably that there are settings on the server side disallowing traffic from "non-browsers".
So in your case, you need to add headers:
headers = {'Accept': 'application/json, text/plain, */*',
           'Accept-Encoding': 'gzip, deflate',
           ......  # more
           'User-Agent': 'Mozilla/5.0 XXXXX...'  # this line tells the server what browser/agent is used for this request
           }
response = requests.post(url, data='username=XXXXX&password=XXXXXXX', headers=headers)
P.S. If you are curious, default headers from requests are:
>>> import requests
>>> session = requests.Session()
>>> session.headers
{'Connection': 'keep-alive', 'Accept-Encoding': 'gzip, deflate',
'Accept': '*/*', 'User-Agent': 'python-requests/2.13.0'}
As you can see, the default User-Agent is python-requests/2.13.0, and some websites do block such traffic.
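If you want to set the headers once and reuse them across calls, a small sketch with the same placeholder values (the URL here is hypothetical):

import requests

url = 'https://example.com/login'  # placeholder for the real Request URL from the screenshot
session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 XXXXX...'})  # placeholder UA, as above
response = session.post(url, data='username=XXXXX&password=XXXXXXX')

Every request made through the session then carries the overridden User-Agent alongside the remaining defaults.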

How can I invalidate a Twitter bearer token with rauth?

How can I invalidate a bearer token from Twitter with rauth? (https://dev.twitter.com/docs/api/1.1/post/oauth2/invalidate_token)
I tried to do it via an OAuth2 session, but it doesn't work.
By the way, is there a way to see the complete request that will be sent/that is created by rauth? That would be very helpful for debugging and for understanding what rauth produces.
Here is my code so far:
import rauth

session = rauth.OAuth2Session(client_id=c_key,
                              client_secret=c_secret,
                              access_token=o_bearer_token)
bearer_raw = session.post('https://api.twitter.com/oauth2/invalidate_token',
                          params={'Host': 'api.twitter.com',
                                  'User-Agent': '',
                                  'Accept': '*/*',
                                  'Content-Type': 'application/x-www-form-urlencoded',
                                  'Content-Length': str(len(o_bearer_token)),
                                  'access_token': str(o_bearer_token)})
I think the request is being formatted improperly. It looks like the Twitter docs indicate that it should be something like this:
session = rauth.OAuth2Session(client_id=KEY,
                              client_secret=SECRET,
                              access_token=TOKEN)
r = session.post('https://api.twitter.com/oauth2/invalidate_token', bearer_auth=True)
By the way, Rauth is Requests; it's just a layer on top of Requests. So yes, you can see a request as you would with Requests. Something like r.request should expose what you're looking for.
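For the debugging part, a quick sketch of inspecting what was actually sent, reusing the session from the snippet above (r.request is the PreparedRequest from Requests):

r = session.post('https://api.twitter.com/oauth2/invalidate_token', bearer_auth=True)
print(r.request.url)      # the final URL
print(r.request.headers)  # headers, including the Authorization line rauth added
print(r.request.body)     # the encoded body, if any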
