I am trying to create an Asana task using OAuth. So far it was working fine, but suddenly it's not working anymore.
It is throwing back the following response:
{"errors":[{"message":"missing both `parent` and `workspace` fields; at least one required"}]}
Here is what I am doing:
import requests
import json

data = {
    'workspace': '<my workspace id>',
    'assignee': 'me',
    'name': 'My awesome task',
    'notes': 'My task notes'
}
headers = {'Authorization': 'Bearer <my token>', 'Content-Type': 'application/json'}

response = requests.post('https://app.asana.com/api/1.0/tasks',
                         headers=headers,
                         data=json.dumps(data))
print(response.text)
It looks like you're missing the top-level `data` key in the payload. It's a common gotcha, but when you POST JSON data to the API you need to send something that looks like this:
{
    "data": {
        "name": "This is my task",
        "workspace": 1234,
        ...
    }
}
In this case, you should be able to fix it by simply changing the last parameter to data=json.dumps({ 'data': data }).
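For completeness, the corrected version of your snippet would look something like this (same placeholder workspace ID and token as above):
import requests
import json

data = {
    'workspace': '<my workspace id>',
    'assignee': 'me',
    'name': 'My awesome task',
    'notes': 'My task notes'
}
headers = {'Authorization': 'Bearer <my token>', 'Content-Type': 'application/json'}

# Wrap the task fields in a top-level 'data' key before encoding
response = requests.post('https://app.asana.com/api/1.0/tasks',
                         headers=headers,
                         data=json.dumps({'data': data}))
print(response.text)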
Hope that helps!
I am trying to send a POST request to log in to this URL. I first checked the network tab and found out that the POST request is being sent to this URL:
https://api-my.te.eg/api/user/login?channelId=WEB_APP
and the request payload is:
{
    "header": {
        "timstamp": 0,
        "customerId": null,
        "msisdn": null,
        "messageCode": null,
        "responseCode": "1200",
        "responseMessage": "Your Session has been expired, please sign in to continue",
        "locale": null,
        "referenceId": null,
        "channelId": null,
        "responeAdditionalParameters": null
    },
    "body": null
}
Here is the code:
import requests
import calendar
import time

# Here I tried to send a generated timestamp, idk if this is needed or not
ts = calendar.timegm(time.gmtime())

loginUrl = 'https://api-my.te.eg/api/user/login?channelId=WEB_APP'

values = {
    "msisdn": MobileNumberID,
    "timestamp": str(ts),
    "locale": "Ar"
}
data = {
    "password": PasswordID
}

url = requests.post(loginUrl, data=data, headers=values)
print(url.text)
I looked at the site, and in the request body they are also passing a header property alongside the data. So you should use:
data = {
    "header": {
        "msisdn": "024568478",
        "timestamp": "1592337873",
        "locale": "Ar"
    },
    "body": {
        "password": "PJGkRJte5ntnKt9TQ8XM3Q=="
    }
}
and in headers you should pass the actual HTTP headers, probably something like:
headers = {'Content-Type': 'application/json'}
But I also see that when you log into this site, they pass a jwt value in the headers. It looks like some sort of built-in security, so you may not be able to use this API at all; it seems to be meant only for the backend of this site.
Did you search the site to see if they have API documentation? Maybe it describes how you can calculate the value for jwt.
EDIT:
When you get the login working, this is how to use sessions in Python requests:
s = requests.Session()
data = {"login":"my_login", "password":"my_password"}
url = "http://example.net/login"
r = s.post(url, data=data)
If you do not get around the jwt, you can use Selenium in Python. It works by automating a web browser: you can open Chrome, tell it which page to load, fill in your login form, and read the HTML of elements in the browser. This will work on 95% of websites, but some have protections against it; some sites use Cloudflare, and those are protected from Selenium automation.
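If you go the Selenium route, a minimal sketch would look roughly like this (the login page URL and the element IDs are hypothetical; inspect the real page with the browser dev tools to find the actual selectors):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()      # needs chromedriver installed and on PATH
driver.get("https://my.te.eg/")  # hypothetical login page URL

# Hypothetical element IDs -- look them up in the browser dev tools
driver.find_element(By.ID, "mobile-number").send_keys("024568478")
driver.find_element(By.ID, "password").send_keys("my_password")
driver.find_element(By.ID, "login-button").click()

# Once logged in, you can read the rendered HTML of the page or of any element
print(driver.page_source)
driver.quit()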
I need help making a time series API call using Python requests.
My header and body look like the following:
header = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json"
}
body = {
    "getSeries": {
        "timeSeriesId": idstring.split(','),
        "searchSpan": {
            "from": timefrom,
            "to": timeto,
        }
    }
}
My request command is:
data = requests.post(f"https://{fqdn}/timeseries/query?api-version=2018-11-01-preview&storeType=warmstore",
                     headers=header,
                     data=body)
If I send the header and body as they are, I get "Unexpected character encountered while parsing value: g. Path '', line 0, position 0.\r\n"
If I send them as body = json.dumps(body), there is no unexpected character problem, but I get the error message "'str' object has no attribute 'items'".
I have also tried a solution I found with a custom dictionary that uses double quotes instead of the standard single quotes, but that didn't work either. Bit stumped as to what to do.
Can anyone help?
Thanks
I solved it. Very annoying, as I spent a good few hours googling random stuff trying to work it out. I even started implementing it in Julia!
Simple correction: json=body instead of data=body in the request call:
data = requests.post(f"https://{fqdn}/timeseries/query?api-version=2018-11-01-preview&storeType=warmstore",
                     headers=header,
                     json=body)
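For reference, the roughly equivalent fix with data= is to serialize the body yourself; json= is just more convenient because it also sets the Content-Type header (which here is already set manually):
import json

data = requests.post(f"https://{fqdn}/timeseries/query?api-version=2018-11-01-preview&storeType=warmstore",
                     headers=header,
                     data=json.dumps(body))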
So first of all, thanks. I'm really new to Python and I am trying to understand APIs. I'm currently trying to log in to /inventory/json/v/1.4.1/<request_code>?jsonRequest=<json_request_content>, which is the Earth Explorer API, and the first step is to log in. According to the documentation I am supposed to use POST instead of GET. Here is what I've got so far; it runs, but this is what I get:
import requests

user = 'xxxxx'
psword = 'xxxxx'
input_data = {'username': user, 'password': psword, 'catalogId': 'EE'}

test = requests.post('https://earthexplorer.usgs.gov/inventory/json/v/1.4.0/login?jsonRequest=input_data')
print(test.text)
print(test.status_code)
{
    "errorCode": "AUTH_ERROR",
    "error": "Passing credentials via URL is not permitted - use a POST request",
    "data": null,
    "api_version": "",
    "access_level": "guest",
    "executionTime": 0
}
200
I have no idea what to do. This is the documentation for the Earth Explorer API, thank you so much: https://earthexplorer.usgs.gov/inventory/documentation/json-api?version=1.4.1#login
I encountered the same problem while working with the Earth Explorer API and managed to solve it by reading the usgs package code. Basically, the problem is that you must send the jsonRequest value as a JSON-encoded string. That is, your request body must look like this when printed:
{
    "jsonRequest": "{\"username\": \"???\", \"password\": \"???\", \"catalogId\": \"???\"}"
}
You can achieve this using:
import json
import requests

req_params = {
    'username': '???',
    'password': '???',
    'catalogId': '???',
}
req_txt = json.dumps(req_params)
req_body = {
    'jsonRequest': req_txt,
}

resp = requests.post('<LOGIN URL HERE>', req_body)
This code is actually taken from the usgs package I mentioned, so you should refer to it if you have any additional questions.
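To check whether the login actually worked, you can inspect the parsed response; judging by the error reply shown in the question, the interesting fields are errorCode, error, and data:
result = resp.json()
if result.get('errorCode'):
    print('Login failed:', result.get('error'))
else:
    # On success the API key is presumably returned in the 'data' field
    print('Logged in, data:', result.get('data'))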
I am using the following filters in Postman to make a POST request to a Web API, but I am unable to make the same POST request in Python with the requests library.
First, I am sending a POST request to this URL (http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets) with the following filters in Postman applied to the Body, with the raw and JSON (application/json) options selected.
Filters in Postman
{
    "filter": {
        "filters": [
            {
                "field": "RCA_Assigned_Date",
                "operator": "gte",
                "value": "2017-05-31 00:00:00"
            },
            {
                "field": "RCA_Assigned_Date",
                "operator": "lte",
                "value": "2017-06-04 00:00:00"
            },
            {
                "field": "T_Subcategory",
                "operator": "neq",
                "value": "Temporary Degradation"
            },
            {
                "field": "Issue_Status",
                "operator": "neq",
                "value": "Queued"
            }
        ],
        "logic": "and"
    }
}
The database where the data is stored is Cassandra, and according to the following links (Cassandra not equal operator, Cassandra OR operator, Cassandra Between order by operators), Cassandra does not support the NOT EQUAL TO, OR, or BETWEEN operators, so there is no way I can filter the URL with these operators except with AND.
Second, I am using the following code to apply a simple filter with the requests library.
import requests
payload = {'field':'T_Subcategory','operator':'neq','value':'Temporary Degradation'}
url = requests.post("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets",data=payload)
But what I get is the complete set of tickets instead of only those that are not Temporary Degradation.
Third, the system is actually working but we are experiencing a delay of 2-3 mins to see the data. The logic goes as follows: We have 8 users and we want to see all the tickets per user that are not temporary degradation, then we do:
def get_json():
    if user_name == "user 001":
        with urllib.request.urlopen(
                "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user&001", timeout=15) as url:
            complete_data = json.loads(url.read().decode())
    elif user_name == "user 002":
        with urllib.request.urlopen(
                "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets?user_name=user&002", timeout=15) as url:
            complete_data = json.loads(url.read().decode())
    return complete_data

def get_tickets_not_temp_degradation(start_date, end_date, complete_):
    return Counter([k['user_name'] for k in complete_data if start_date < dateutil.parser.parse(k.get('DateTime')) < end_date and k['T_subcategory'] != 'Temporary Degradation'])
Basically, we get the whole set of tickets from the current and last year, and then we let Python filter the complete set by user. So far there are only 10 users, which means this process is repeated 10 times, so it is no surprise that we get the delay...
My question is: how can I fix this problem with the requests library? I am using the following link, Requests library documentation, as a tutorial to make it work, but it just seems that my payload is not being read.
Your Postman request sends a JSON body. Just reproduce that same body in Python. Your Python code is not sending JSON, nor is it sending the same data as your Postman sample.
For starters, sending a dictionary via the data argument encodes that dictionary as application/x-www-form-urlencoded form data, not JSON. Secondly, you appear to be sending only a single filter.
The following code replicates your Postman post exactly:
import requests

filters = {"filter": {
    "filters": [{
        "field": "RCA_Assigned_Date",
        "operator": "gte",
        "value": "2017-05-31 00:00:00"
    }, {
        "field": "RCA_Assigned_Date",
        "operator": "lte",
        "value": "2017-06-04 00:00:00"
    }, {
        "field": "T_Subcategory",
        "operator": "neq",
        "value": "Temporary Degradation"
    }, {
        "field": "Issue_Status",
        "operator": "neq",
        "value": "Queued"
    }],
    "logic": "and"
}}

url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
response = requests.post(url, json=filters)
Note that filters is a Python data structure here, and that it is passed to the json keyword argument. Using the latter does two things:
1. Encode the Python data structure to JSON (producing the exact same JSON value as your raw Postman body value).
2. Set the Content-Type header to application/json (as you did in your Postman configuration by picking the JSON option in the dropdown menu after picking raw for the body).
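If you want to confirm both effects, you can inspect the prepared request that requests actually sent:
print(response.request.headers['Content-Type'])  # application/json
print(response.request.body)                     # the JSON-encoded filters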
requests is otherwise just an HTTP library; it can't make Cassandra do any more than any other HTTP library can. The urllib.request.urlopen code sends GET requests, and those are trivially translated to requests with:
def get_json():
    url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
    response = requests.get(url, params={'user_name': user}, timeout=15)
    return response.json()
I removed the if branching and replaced that with using the params argument, which translates a dictionary of key-value pairs to a correctly encoded URL query (passing in the user name as the user_name key).
Note the json() call on the response; this takes care of decoding the JSON data coming back from the server. This will still take long, though, because you are not filtering the Cassandra data much here.
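If the endpoint accepts the filter body and the user_name query parameter together (worth testing against your API; this is an assumption, not something your samples show), you could push more of the filtering to the server and cut down the delay, along these lines:
def get_user_tickets(user):
    # Hypothetical combination: the same filter body as above plus the user_name query parameter
    url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets"
    response = requests.post(url, params={'user_name': user}, json=filters, timeout=15)
    return response.json()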
I would recommend using the json argument instead of data. It handles the JSON dumping for you.
import requests
data = {'user_name':'user&001'}
headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets/"
r = requests.post(url, headers=headers, json=data)
Update, an answer for question 3: is there a reason you are using urllib? I'd use Python requests for this request as well.
import requests

def get_json():
    r = requests.get("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets", params={"user_name": user_name.replace(" ", "&")})
    return r.json()
# not sure what you’re doing here, more context/code example would help
def get_tickets_not_temp_degradation(start_date, end_date, complete_):
    return Counter([k['user_name'] for k in complete_data if start_date < dateutil.parser.parse(k.get('DateTime')) < end_date and k['T_subcategory'] != 'Temporary Degradation'])
Also, is the username really supposed to be user+001 and not user&001 or user 001?
I think you can use the requests library as follows:
import requests
import json
payload = {'field':'T_Subcategory','operator':'neq','value':'Temporary Degradation'}
url = requests.post("http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets",data=json.dumps(payload))
You are sending the user in the URL; send it through the POST body instead, though it depends on how the endpoints are implemented. You can try the code below:
import requests
from json import dumps
data = {'user_name':'user&001'}
headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
url = "http://10.61.202.98:8081/T/a/api/rows/cat/ect/tickets/"
r = requests.post(url, headers=headers, data=dumps(data))
First post here, so ya know, be nice?
I'm setting up a dashboard in Dashing (http://dashing.io/) using a JSON feed on a server, which looks like:
{
    "error": 0,
    "message_of_the_day": "Welcome!",
    "message_of_the_day_hash": "a1234567890123456789012345678901",
    "metrics": {
        "daily": {
            "metric1": "1m 30s",
            "metric2": 160
        },
        "monthly": {
            "metric1": "10m 30s",
            "metric2": "3803"
        }
    },
I have been experimenting with grabbing the data from the feed, and have managed to do so in Python with no issues:
import json
import urllib2

data = {
    'region': "Europe"
}

req = urllib2.Request('http://192.168.1.2/info/handlers/handler.php')
req.add_header('Content-Type', 'application/json')
response = urllib2.urlopen(req, json.dumps(data))
print response.read()
However, I haven't yet been successful, and I get numerous errors in Ruby.
Would anyone be able to point me in the right direction for parsing this in Ruby?
My attempts to write a basic script (keeping it simple and outside of Dashing) don't pull through any data.
#!/usr/bin/ruby
require 'httparty'
require 'json'
response = HTTParty.get("http://192.168.1.2/info/handlers/handler.php?region=Europe")
json = JSON.parse(response.body)
puts json
In the Python code you are sending JSON and adding a header. I bet it makes sense to do that in Ruby as well. The code below is untested, since I can't test it, but it should lead you in the right direction:
#!/usr/bin/ruby
require 'httparty'
require 'json'
response = HTTParty.post(
  "http://192.168.1.2/info/handlers/handler.php",
  headers: { 'Content-Type' => 'application/json' },
  query: { data: { 'region' => "Europe" } }
  # or maybe query: { 'region' => "Europe" }
)
puts response.inspect