Python Requests POST not working

I am using the Python requests module for calling APIs.
Everything was working fine until I pushed my code to AWS. Even on AWS it works when I am on the dev server, i.e. ec2.####.amazon.com:8000 .
Here is my code:
r = requests.post(api_url, data={"var 1": "value", "var 2": "value"})
My API URL does not allow the GET method, so in the response I get an error that GET is not allowed, which suggests the requests.post call is somehow being treated as a GET.
Any idea what's wrong here?

Actually the issue was due to SSL: if your server is served over https, then you need to make the call like this:
r = requests.post(api_url, data={"var 1": "value", "var 2": "value"}, verify=True)
Also make sure your api_url uses https, not http.
I have written a small function for that (a Django view helper, where request is an HttpRequest):
def get_base_url(request):
    host = request.get_host()  # Django's HttpRequest.get_host()
    if request.is_secure():
        return '{0}{1}/{2}'.format('https://', host, 'url')
    else:
        return '{0}{1}/{2}'.format('http://', host, 'url')
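For illustration, the helper can be exercised without a running Django server by using a minimal stand-in for the request object. The DummyRequest class below is hypothetical, defined only so the sketch is self-contained:

```python
# Hypothetical stand-in for Django's HttpRequest, for illustration only.
class DummyRequest:
    def __init__(self, host, secure):
        self._host = host
        self._secure = secure

    def get_host(self):
        return self._host

    def is_secure(self):
        return self._secure


def get_base_url(request):
    # Same helper as above, repeated so this sketch runs on its own.
    host = request.get_host()
    if request.is_secure():
        return '{0}{1}/{2}'.format('https://', host, 'url')
    else:
        return '{0}{1}/{2}'.format('http://', host, 'url')


print(get_base_url(DummyRequest('example.com', True)))   # https://example.com/url
print(get_base_url(DummyRequest('example.com', False)))  # http://example.com/url
```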

Related

Thoughtspot: API calls to fetch metadata via Python

I'm trying to fetch metadata from ThoughtSpot. I am able to call the URL in a browser and fetch the data, but here I'm trying to achieve it via a Python program. According to the ThoughtSpot documentation, I have to enable trusted authentication and pass my secret key & username to obtain a token which I can then use in my program.
https://developers.thoughtspot.com/docs/?pageid=api-auth-session
my username: username@username.com
secret key: secret-key
Below is my code (generated by Postman):
import requests

url = "https://<ThoughtSpot-host>/callosum/v1/tspublic/v1/session/auth/token?auth_token=secret-key&access_level=FULL&username=username@username.com"
payload = {}
headers = {}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
I'm getting a Bad Request error. Is anyone here using ThoughtSpot and has run into this issue? I appreciate your support very much.
The error I'm getting:
{"type":"Bad Request","description":"The server could not understand the request due to invalid syntax."}
I can fetch data by calling the API from a web browser. The URL below returns a list of all metadata objects. I want to achieve this using a Python program (I have to authenticate first & then call the URL below; the authentication step is not working for me when I try to follow the documentation):
https://<ThoughtSpot-host>/callosum/v1/tspublic/v1/metadata/list
Did you try changing the URL so that it includes the domain name?
Also post the full error you are getting; a screenshot of a working request would be great!
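One thing worth checking: an unencoded special character such as @ in the query string (or a stray #, which starts a URL fragment and silently discards everything after it) can produce exactly this kind of Bad Request. Passing the values through requests' params argument lets the library percent-encode them. A sketch against the endpoint above, with a placeholder host and credentials:

```python
import requests

# Placeholders; substitute your real ThoughtSpot host, secret key and username.
base = "https://thoughtspot.example.com/callosum/v1/tspublic/v1/session/auth/token"
params = {
    "auth_token": "secret-key",
    "access_level": "FULL",
    "username": "username@username.com",
}

# Build the request without sending it, to inspect the encoded URL.
prepared = requests.Request("POST", base, params=params).prepare()
print(prepared.url)  # the '@' appears percent-encoded as %40 in the query string
```

Sending it is then `requests.Session().send(prepared)`, or simply `requests.post(base, params=params)`.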

How to make a request inside a simple mitmproxy script?

Good day,
I am currently trying to figure out a way to make non-blocking requests inside a simple mitmproxy script, but the documentation doesn't seem clear to me at first glance.
I think it's probably easiest if I show my current code and describe my issue below:
from copy import copy
from mitmproxy import http

def request(flow: http.HTTPFlow):
    headers = copy(flow.request.headers)
    headers.update({"Authorization": "<removed>", "Requested-URI": flow.request.pretty_url})
    req = http.HTTPRequest(
        first_line_format="origin_form",
        scheme=flow.request.scheme,
        port=443,
        path="/",
        http_version=flow.request.http_version,
        content=flow.request.content,
        host="my.api.xyz",
        headers=headers,
        method=flow.request.method
    )
    print(req.get_text())
    flow.response = http.HTTPResponse.make(
        200, req.content,
    )
Basically I would like to intercept any HTTP(S) request done and make a non blocking request to an API endpoint at https://my.api.xyz/ which should take all original headers and return a png screenshot of the originally requested URL.
However, the code above produces empty content, and the print outputs nothing either.
My issue seems to be related to "mitmproxy http get request in script" and "Resubmitting a request from a response in mitmproxy", but I still couldn't figure out a proper way of sending requests inside mitmproxy.
The following piece of code probably does what you are looking for:
from copy import copy
from mitmproxy import http
from mitmproxy import ctx
from mitmproxy.addons import clientplayback

def request(flow: http.HTTPFlow):
    ctx.log.info("Inside request")
    if hasattr(flow.request, 'is_custom'):
        return
    headers = copy(flow.request.headers)
    headers.update({"Authorization": "<removed>", "Requested-URI": flow.request.pretty_url})
    req = http.HTTPRequest(
        first_line_format="origin_form",
        scheme='http',
        port=8000,
        path="/",
        http_version=flow.request.http_version,
        content=flow.request.content,
        host="localhost",
        headers=headers,
        method=flow.request.method
    )
    req.is_custom = True
    playback = ctx.master.addons.get('clientplayback')
    f = flow.copy()
    f.request = req
    playback.start_replay([f])
It uses the clientplayback addon to send out the request. When this new request is sent, it will itself trigger another request event, which would result in an infinite loop. That is the reason for the is_custom attribute I added to the request: if the request that generated this event is one we created ourselves, we don't want to create a new request from it.

How to send a POST to a Laravel API from Python?

This is my Python request code:
url = "https://test.com/"
r = requests.get(url, verify=False)
xsrf_token = r.cookies.get("XSRF-TOKEN")
headers = {
    'X-XSRF-TOKEN': xsrf_token
}
data = {"account": "O_O@gmail.com", "password": "123123"}
r = requests.post(url + '/app/get/users', verify=False, data=data, headers=headers)
In the Laravel log I got:
[2019-12-27 16:09:14] local.ERROR: The payload is invalid. {"exception":"[object] (Illuminate\Contracts\Encryption\DecryptException(code: 0): The payload is invalid. at /var/www/html/test/vendor/laravel/framework/src/Illuminate/Encryption/Encrypter.php:195)
[stacktrace]
Is there any way to solve this? Thanks.
You can't solve the issue with a static XSRF token alone, since the token is doing its job: preventing cross-site request forgery, which is exactly what you're attempting in that piece of code.
To use a route as an API, the Laravel installation needs to be configured that way, so that a stateless means of authentication (JWT, for example) is used instead of the session with an XSRF token for POST methods.
Basically, if it's not configured to be used as an API, you will not be able to use it as an API.
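For illustration, if the Laravel side did expose a token-protected API route (for example one defined in routes/api.php behind a guard such as Sanctum or Passport), the Python side would send a stateless Bearer token instead of the session cookie and XSRF token. The endpoint and token below are hypothetical:

```python
import requests

# Hypothetical endpoint and token, for illustration only: a Laravel API route
# protected by a token guard accepts "Authorization: Bearer <token>".
api_url = "https://test.com/api/users"
token = "your-api-token"

headers = {
    "Authorization": "Bearer " + token,
    "Accept": "application/json",  # ask Laravel for a JSON response
}

# Build the request without sending it, to show the headers that would go out.
prepared = requests.Request("GET", api_url, headers=headers).prepare()
print(prepared.headers["Authorization"])
```

Actually sending it would be `requests.get(api_url, headers=headers)`.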

GET request via python script with access token

I'm facing some difficulties executing a GET request with an access token via Python.
For some reason, when I execute the request from Postman I get the expected result; however, when using Python I get a connection error:
A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
I believe that my script is being blocked by the server to prevent scraping attempts, but I'm not sure.
This is what I'm running via Postman:
https://www.operation.io/web/alerts?access_token=6ef06fee265de1fa640b6a444ba47--z&envId=58739be2c25e76a202c9dd3e&folderId=active&sortBy=status
And this is what I'm running from inside the script:
response = requests.get('https://www.operation.io/web/alerts', headers={'Authorization': 'access_token 6ef06fee265de1fa640b6a444ba47--z'})
Please note that the URL and the access token are not real, so please don't try to use them.
How can I make it work via the script and not only via Postman?
Thank you for your help.
It's hard to tell without actually being able to test the solutions, but I would suggest 2 hacks that worked for me in the past:
Change "Authorization" to "authorization"
Change "6ef06fee265de1fa640b6a444ba47--z" to "Token 6ef06fee265de1fa640b6a444ba47--z" (add a space as well)
Put it all together:
response = requests.get('https://www.operation.io/web/alerts', headers={'authorization': 'Token 6ef06fee265de1fa640b6a444ba47--z'})
Since via Postman you're sending the token as a query param (as opposed to a header), you can try this:
response = requests.get('https://www.operation.io/web/alerts', params={'access_token': '6ef06fee265de1fa640b6a444ba47--z'})
This assumes the API accepts the token as a query param rather than as a header (which is my guess from the Postman request).
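When Postman works and requests doesn't, it can help to inspect exactly what requests would send before anything hits the wire, and compare it with the working Postman URL. A sketch using the placeholder URL and token from the question:

```python
import requests

# Reproduce the Postman request: token and filters as query parameters.
req = requests.Request(
    "GET",
    "https://www.operation.io/web/alerts",
    params={
        "access_token": "6ef06fee265de1fa640b6a444ba47--z",
        "envId": "58739be2c25e76a202c9dd3e",
        "folderId": "active",
        "sortBy": "status",
    },
)
prepared = req.prepare()

# The full URL requests would actually fetch; compare it character by
# character with the URL that works in Postman.
print(prepared.url)
```

If the URLs match and the connection still times out, the problem is more likely network-level (proxy, firewall, or the server rejecting the client) than the request itself.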

Difference in working of python urllib2 for http and https urls

Can someone explain to me why things are implemented the following way in urllib2?
When I pass an already-encoded URL over http, it encodes the parameters again, whereas in the case of https it does not urlencode them again.
So let's say the (http) call is http://example.com?email=amit%40sethi.com; the request becomes
http://example.com?email=amit%2540sethi.com
whereas in the case of https it is
https://example.com?email=amit%40sethi.com
Thanks
Edit: adding more details.
The basic request I am making is:
SF_EXTEND_RESOURCE = "https://www.superfax.in/api/voice/planchange/?"
params_dict = {'username': USERNAME,
               'password': PASSWORD,
               'email': str(user.email)}
_url = SF_EXTEND_RESOURCE + urlencode(params_dict)
response = urllib2.urlopen(_url).read()
Now my problem is that when I am using http, the email string is encoded twice, whereas that was not the case for https. I am using Python 2.6.5 on Ubuntu Lucid. I am not able to understand why this is not reproducible.
I just tried it, and for me the behaviour is not what you observe: http and https URLs work the same.
import urllib2

out = urllib2.urlopen("https://www.google.com/?q=foo%40bar")
print out.geturl()
open('out1', 'w').write(out.read())

out = urllib2.urlopen("http://www.google.com/?q=foo%40bar")
print out.geturl()
open('out2', 'w').write(out.read())
Compare out1 and out2 and you'll find that both contain the correct foo@bar in the "value" attribute of the search box, so there doesn't seem to be any double encoding going on.
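For reference, the single-encoding behaviour of urlencode itself is easy to check in isolation. The equivalent in modern Python 3, where the function lives in urllib.parse, looks like this:

```python
from urllib.parse import urlencode

# urlencode percent-encodes each value exactly once: '@' becomes '%40'.
query = urlencode({'email': 'amit@sethi.com'})
print(query)  # email=amit%40sethi.com

# Double encoding only happens if an already-encoded value is fed back in:
# the '%' of '%40' is itself encoded as '%25', giving '%2540'.
double = urlencode({'email': 'amit%40sethi.com'})
print(double)  # email=amit%2540sethi.com
```

So a URL showing %2540 is a sign the value was encoded twice somewhere on the caller's side, not by the transport (http vs https).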
