I tried looking around for an answer and gave it a great many tries, but something strange is going on here. I have some functions in my view that operate on JSON data coming in via AJAX, and I'm currently trying to unit test them.
In my test case I have:
kwargs = {'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest'}
url = '/<correct_url>/upload/'
data = {
    "id": p.id
}
c = Client()
response = c.delete(url, data, **kwargs)
content_unicode = response.content.decode('utf-8')
content = json.loads(content_unicode)
p.id is just an integer that comes from a model I'm using.
I then have a function that is being tested, parts of which look as follows:
def delete_ajax(self, request, *args, **kwargs):
    print(request.body)
    body_unicode = request.body.decode('utf-8')
    print(body_unicode)
    body_json = json.loads(body_unicode)
The first print statement yields:
.....b"{'id': 1}"
The other one:
{'id': 1}
and finally I get an error on the fourth line, as follows:
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
What's going wrong here? I understand that the correct JSON format is {"id": 1}, and that's what I'm sending from my test case. But somewhere along the way single quotes are introduced into the mix, which is giving me a headache.
Any thoughts?
You need to pass a JSON string to Client.delete(), not a Python dict. Otherwise the test client simply coerces the dict to its string representation, which is where the single quotes in request.body come from:
kwargs = {'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest'}
url = '/<correct_url>/upload/'
data = json.dumps({
    "id": p.id
})
c = Client()
response = c.delete(url, data, **kwargs)
You should also set the content-type header to "application/json" and check the content-type header in your view but that's another topic.
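For example, a minimal sketch with the test client (content_type is an argument that Client.delete() accepts, and the other names are the same as above):

response = c.delete(url, data, content_type='application/json', **kwargs)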
I am creating a utility that pulls API definitions and their associated request parameters from a database, then pushes that information to a CSV (this is a requirement). I am done up to this point. The CSV I have looks like this:
(Apologies for adding a CSV image; this editor won't allow me to add the same data in the default table format.)
Now I want to pass these headers and the respective column values as API request parameters.
If an API does not have values configured, we can ignore it and pass an empty body.
Ex1:
http://localhost:8080/cm/apis/API6%2Ftoday?username=MyTestUser70
{ "paramsR": {
"M1": "70878-008",
"C1": "467345-121",
"T1":"Hi 2"
}
}
Ex2:
http://localhost:8080/cm/apis/API3%2Ftoday?username=MyTestUser70
{ }
What I have tried so far looks like this:
import csv
import urllib.parse
import requests

with open('apis.csv') as csv_file1:
    csv_apis_read = csv.DictReader(csv_file1)
    fields = csv_apis_read.fieldnames
    api_csv = list(csv_apis_read)

    for apis in api_csv:
        #print(fields)
        #print(apis.get('M1'))
        #apis.get('S1')
        #apis.get('C1')
        final_url = f"http://localhost:8080{urllib.parse.quote_plus(apis.get('APIDef'))}"
        #req_json = {"paramsR": {"S1": apis.get('S1')}}
        req_json = {"paramsR": {"M1": apis.get('M1')}}
        username = {"username": "MyTestUser70"}
        headers = {'Accept': "application/json", 'Content-Type': "application/json", 'Accept-Encoding': "gzip, deflate", 'Cache-Control': "no-cache", 'Token': "null"}
        response = requests.request("POST", final_url, json=req_json, headers=headers, params=username)
        print(response.request.url)
        print(response.request.body)
        print(response.request.headers)
        print(response.text)
Q1. How can I pass the header value as the payload and the associated column value as the request parameter for all available APIs, one by one, dynamically (without hardcoding header values in the code)?
Note: the header values are not fixed. After regenerating the CSV mentioned in the first step, the headers and associated values change. That's why I want to generate these key-value pairs dynamically instead of hard-coding or pre-defining anything about the headers and column values.
Q2. While passing request parameters, is there any way to remove the [''] wrapper from the M1 and T1 column values?
Can someone please guide me with this?
Thank you in advance.
Try using the pandas library's iterrows function:
import pandas

api_file = pandas.read_csv("apis.csv")
for index, row in api_file.iterrows():
    row = dict(row.dropna())
    api_url = row['APIDef']  # Add the prefix
    row.pop('APIDef')
    for key in row.keys():
        if key == "M1":
            row[key] = eval(row[key])[0]
        if key == "C1":
            row[key] = int(row[key])
        if key == "T1":
            row[key] = eval(row[key])[0]
        if key == "S1":
            if row[key]:
                row[key] = True
            else:
                row[key] = False
    req_json = {}
    if row:
        req_json["paramsR"] = row
    print(api_url, req_json)
    # Make your requests
This prints each endpoint followed by its params.
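If it helps, a rough sketch of what the "# Make your requests" step could look like inside that loop, reusing the headers and username parameter from the question; the base URL prefix is an assumption taken from the question's example URLs, so adjust it to your setup:

import urllib.parse
import requests

# Inside the loop above, after req_json has been built for the current row:
base_url = "http://localhost:8080/cm/apis/"  # assumed prefix from the question's examples
final_url = base_url + urllib.parse.quote_plus(api_url)
response = requests.post(
    final_url,
    json=req_json,
    headers={'Accept': 'application/json', 'Content-Type': 'application/json'},
    params={'username': 'MyTestUser70'},
)
print(response.request.url, response.status_code)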
Happy Coding!
I am using the Microsoft Graph API to pull my emails in Python and return them as a JSON object. There is a limitation that it only returns 12 emails at a time. The code is:
def get_calendar_events(token):
    graph_client = OAuth2Session(token=token)
    # Configure query parameters to
    # modify the results
    query_params = {
        #'$select': 'subject,organizer,start,end,location',
        #'$orderby': 'createdDateTime DESC'
        '$select': 'sender, subject',
        '$skip': 0,
        '$count': 'true'
    }
    # Send GET to /me/events
    events = graph_client.get('{0}/me/messages'.format(graph_url), params=query_params)
    events = events.json()
    # Return the JSON result
    return events
The response I get is twelve emails with subject and sender, plus the total count of my emails.
Now I want to iterate over the emails, changing $skip in query_params to get the next 12. Is there a way to iterate using loops or recursion?
I'm thinking something along the lines of this:
def get_calendar_events(token):
    graph_client = OAuth2Session(token=token)
    # Configure query parameters to
    # modify the results
    json_list = []
    ct = 0
    while True:
        query_params = {
            #'$select': 'subject,organizer,start,end,location',
            #'$orderby': 'createdDateTime DESC'
            '$select': 'sender, subject',
            '$skip': ct,
            '$count': 'true'
        }
        # Send GET to /me/events
        events = graph_client.get('{0}/me/messages'.format(graph_url), params=query_params)
        events = events.json()
        # Stop once a page comes back empty (or with an error payload instead of 'value')
        if not events.get('value'):
            break
        json_list.append(events)
        ct += 12
    # Return the JSON result
    return json_list
It may require some tweaking, but essentially you add 12 to the offset each time until a request comes back empty (or with an error). Each page of JSON is appended to a list, which is then returned.
If you know how many emails you have, you could also batch it that way.
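Alternatively, here is a rough sketch that follows the @odata.nextLink field Graph returns with each page instead of managing $skip by hand; graph_url and OAuth2Session are the same names used in the question, and the page size of 50 is just an example:

def get_all_messages(token):
    graph_client = OAuth2Session(token=token)
    url = '{0}/me/messages'.format(graph_url)
    params = {'$select': 'sender,subject', '$top': 50}
    messages = []
    while url:
        page = graph_client.get(url, params=params).json()
        messages.extend(page.get('value', []))
        url = page.get('@odata.nextLink')  # None once the last page is reached
        params = None  # the nextLink URL already carries the query parameters
    return messages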
I am trying to upload some data to Dydra from a Sesame triplestore I have on my computer. While the download from Sesame works fine, the triples get mixed up during upload (the s-p-o relationships change, as the object of one triple becomes the object of another). Can someone please explain why this is happening and how it can be resolved? The code is below:
#Querying the triplestore to retrieve all results
sesameSparqlEndpoint = 'http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name'
sparql = SPARQLWrapper(sesameSparqlEndpoint)
queryStringDownload = 'SELECT * WHERE {?s ?p ?o}'
dataGraph = Graph()

sparql.setQuery(queryStringDownload)
sparql.method = 'GET'
sparql.setReturnFormat(JSON)
output = sparql.query().convert()
print output

for i in range(len(output['results']['bindings'])):
    #The encoding is necessary to parse non-English characters
    output['results']['bindings'][i]['s']['value'].encode('utf-8')
    try:
        subject_extract = output['results']['bindings'][i]['s']['value']
        if 'http' in subject_extract:
            subject = "<" + subject_extract + ">"
            subject_url = URIRef(subject)
            print subject_url

        predicate_extract = output['results']['bindings'][i]['p']['value']
        if 'http' in predicate_extract:
            predicate = "<" + predicate_extract + ">"
            predicate_url = URIRef(predicate)
            print predicate_url

        objec_extract = output['results']['bindings'][i]['o']['value']
        if 'http' in objec_extract:
            objec = "<" + objec_extract + ">"
            objec_url = URIRef(objec)
            print objec_url
        else:
            objec = objec_extract
            objec_wip = '"' + objec + '"'
            objec_url = URIRef(objec_wip)

        # Loading the data on a graph
        dataGraph.add((subject_url, predicate_url, objec_url))
    except UnicodeError as error:
        print error

#Print all statements in dataGraph
for stmt in dataGraph:
    pprint.pprint(stmt)

# Upload to Dydra
URL = 'http://dydra.com/login'
key = 'my_key'

with requests.Session() as s:
    resp = s.get(URL)
    soup = BeautifulSoup(resp.text, "html5lib")
    csrfToken = soup.find('meta', {'name': 'csrf-token'}).get('content')
    # print csrf_token
    payload = {
        'account[login]': key,
        'account[password]': '',
        'csrfmiddlewaretoken': csrfToken,
        'next': '/'
    }
    # print payload
    p = s.post(URL, data=payload, headers=dict(Referer=URL))
    # print p.text
    r = s.get('http://dydra.com/username/rep_name/sparql')
    # print r.text

dydraSparqlEndpoint = 'http://dydra.com/username/rep_name/sparql'
for stmt in dataGraph:
    queryStringUpload = 'INSERT DATA {%s %s %s}' % stmt
    sparql = SPARQLWrapper(dydraSparqlEndpoint)
    sparql.setCredentials(key, key)
    sparql.setQuery(queryStringUpload)
    sparql.method = 'POST'
    sparql.query()
A far simpler way to copy your data over (apart from using a CONSTRUCT query instead of a SELECT, like I mentioned in the comment) is simply to have Dydra itself directly access your Sesame endpoint, for example via a SERVICE-clause.
Execute the following on your Dydra database, and (after some time, depending on how large your Sesame database is), everything will be copied over:
INSERT { ?s ?p ?o }
WHERE {
SERVICE <http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name>
{ ?s ?p ?o }
}
If the above doesn't work on Dydra, you can alternatively just directly access the RDF statements from your Sesame store by using the URI http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements. Assuming Dydra has an upload-feature where you can provide the URL of an RDF document, you can simply provide it the above URI and it should be able to load it.
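For example, a rough sketch of pulling that document down with requests so it can be handed to Dydra's import feature; the Turtle Accept header relies on Sesame's standard content negotiation, so adjust it if your server only serves RDF/XML, and the repository URL is the placeholder from the question:

import requests

statements_url = 'http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements'
resp = requests.get(statements_url, headers={'Accept': 'text/turtle'})
resp.raise_for_status()

# Save the dump locally so it can be uploaded to Dydra as a single RDF document
with open('rep_name_dump.ttl', 'wb') as f:
    f.write(resp.content)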
The code above can work if the following changes are made:
Use CONSTRUCT query instead of SELECT. Details here -> How to iterate over CONSTRUCT output from rdflib?
Use key as input for both account[login] and account[password]
However, this is probably not the most efficient approach. In particular, doing an individual INSERT for every triple is slow, and Dydra did not record all statements this way (only about 30% of the triples were inserted). In contrast, using the http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements method suggested by Jeen enabled me to port all the data successfully.
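For reference, a minimal sketch of the CONSTRUCT variant with SPARQLWrapper (same endpoint variable as in the question; with the XML return format, convert() hands back an rdflib Graph for CONSTRUCT queries):

from SPARQLWrapper import SPARQLWrapper, XML

sparql = SPARQLWrapper(sesameSparqlEndpoint)
sparql.setQuery('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')
sparql.setReturnFormat(XML)
dataGraph = sparql.query().convert()  # an rdflib Graph, ready to iterate over or serialize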
I'm building a website using Django, and I want users to be able to receive SMS alerts when new topics are posted.
I tested textlocal, but I had an issue when trying to send an SMS to multiple numbers (numbers = ['xxxxx','xxxxx']). (I don't want to use group_id.)
Generally I want to be able to do something like this:
numbers = (SELECT number FROM users WHERE SMS_subscribe=1)
sender = 'mywebsite'
message = 'Hey, a new topic was posted'
send_sms(numbers, message, sender)
My textlocal test code:
#!/user/bin/python
# -*- coding: utf-8 -*-

from urllib2 import Request, urlopen
from urllib import urlencode

def send_sms(uname, hash_code, numbers, message, sender):
    data = urlencode({
        'username': uname,
        'hash': hash_code,
        'numbers': numbers,
        'message': message,
        'sender': sender,
        'test': True
    })
    #data = data.encode('utf-8')
    request = Request('https://api.txtlocal.com/send/?')
    response = urlopen(request, data)
    return response.read()

def just_one_sms_message(message, annonce_link, sender):
    links_len = len(annonce_link) + len(sender) + 1
    sms_max_len = 160 - links_len
    if len(message) > sms_max_len:
        message = message[:sms_max_len-6] + '... : '
    else:
        message += ' : '
    return message + annonce_link + '\n' + sender

username = 'xxxxxxx#gmail.com'
hash_code = '3b5xxxxxxxxxxxxxxxxxxxxxxxxxxx8d83818'
numbers = ('2126xxxxx096','2126xxxxx888')
annonce_link = 'http://example.com/'
sender = 'sender'
message = 'New topics..'

message = just_one_sms_message(message, annonce_link, sender)
resp = send_sms(username, hash_code, numbers, message, sender)
print resp
Executing this code, I get this error:
{"warnings":[{"code":3,"message":"Invalid number"}],"errors":[{"code":4,"message":"No recipients specified"}],"status":"failure"}
But if I change it to numbers = ('2126xxxxx096'), it works.
What is the best way or web service to do this ?
There are a couple of issues you're running into. The first is how tuple literals are defined.
('somenumber') is equivalent to 'somenumber' in Python; it's just a string. Parentheses alone do not define a tuple literal. To define a single-element tuple literal, you need a trailing comma after the element, e.g. ('somenumber',).
The second issue is how urlencode works. For each value in the data dictionary, it asks for the string representation.
In the case of ('2126xxxxx096','2126xxxxx888'), since it's evaluated as a tuple, it's encoded as ('2126xxxxx096','2126xxxxx888'), resulting in %28%272126xxxxx096%27%2C+%272126xxxxx888%27%29.
In the case of ('2126xxxxx096'), since it's evaluated as a string, it's encoded as 2126xxxxx096. Notice the lack of junk characters like %28 and %29.
So, in short: since the value of numbers in the urlencode dictionary is a tuple when you have multiple numbers, you need to convert the tuple into a comma-separated string first. This can be done with ",".join(numbers), which for ('2126xxxxx096','2126xxxxx888') produces 2126xxxxx096,2126xxxxx888 (urlencode then encodes the comma as %2C). With that fix, your message should send to multiple numbers.
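Applied to the question's code, the call would then look roughly like this (same variables as in the question):

numbers = ('2126xxxxx096', '2126xxxxx888')
resp = send_sms(username, hash_code, ",".join(numbers), message, sender)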
I have the following function,
def facebooktest(request):
    fb_value = ast.literal_eval(request.body)
    fb_foodies = Foodie.objects.filter(facebook_id__in = fb_value.values())
    for fb_foodie in fb_foodies:
        state = request.user.relationships.following().filter(username = fb_foodie.user.username).exists()
        userData = {
            'fbid': fb_foodie.facebook_id,
            'followState': int(state),
        }
Basically I am checking which of the user's Facebook friends are on my Django app and, if they are, returning a followState: 1 if the user is already following that friend on my app, and 0 if they are not.
I would like to return back a json type dictionary to that user that looks like this:
[{fbid:222222222222, followState: 0}, {fbid:111111111111, followState: 1}, {fbid:435433434534, followState:1}]
EDIT
I have the dictionary structure; I just want to return it in the format above.
def facebooktest(request):
    fb_value = ast.literal_eval(request.body)
    fb_foodies = Foodie.objects.filter(facebook_id__in = fb_value.values())
    response = []
    for fb_foodie in fb_foodies:
        state = request.user.relationships.following().filter(username = fb_foodie.user.username).exists()
        userData = {
            'fbid': fb_foodie.facebook_id,
            'followState': int(state),
        }
        response.append(userData)
    return json.dumps(response)
There is a function in the django.forms.models package for that: model_to_dict
from django.forms.models import model_to_dict
model_to_dict(your_model, fields=[], exclude=[])
From the help:
model_to_dict(instance, fields=None, exclude=None)
Returns a dict containing the data in ``instance`` suitable for passing as
a Form's ``initial`` keyword argument.
``fields`` is an optional list of field names. If provided, only the named
fields will be included in the returned dict.
``exclude`` is an optional list of field names. If provided, the named
fields will be excluded from the returned dict, even if they are listed in
the ``fields`` argument.
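For instance, applied to the model in the question it could look roughly like this; note that the keys in the resulting dict are the model field names (facebook_id rather than fbid):

entry = model_to_dict(fb_foodie, fields=['facebook_id'])
entry['followState'] = int(state)  # the follow state computed in the question's loop
response.append(entry)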
I think you're looking for this:
return HttpResponse(simplejson.dumps(response_dict), mimetype='application/json')
where 'response_dict' would be your dictionary.
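On newer Django versions, mimetype has been renamed to content_type, and django.http.JsonResponse can handle the serialization itself; a minimal sketch (safe=False is required because the top-level object here is a list):

from django.http import JsonResponse

def facebooktest(request):
    response = []  # built exactly as in the question's loop
    # ...
    return JsonResponse(response, safe=False)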