So far, I have managed to take a bunch of HTML elements whose contentEditable attribute is true and join their ids and HTML data into an Ajax data string. I can get the serialized data back to the server, no problem. For example,
$(document).ready(function(){
    $("#save").click(function(){
        var ajax_string = '';
        $("[contenteditable=True]").each(function(){
            ajax_string = ajax_string + '&' + this.id + ':' + $(this).html();
        });
        $.ajax({
            type: "POST",
            url: "/episode_edit/{{ episode.ID_Episode }}",
            data: ajax_string,
            success: function(result){
                // alert( ajax_string );
            }
        });
    });
});
On the server:
for r in request.params: print r
I get strings:
AltTitle:some Alt Title
PrintTitle:The Print Title
Notes:A bunch o' notes.
My dilemma now is that I need to convert each request.params string into a dictionary object so I can map it back to my database model. I can think of some very ugly ways of doing this, but what is the best way?
You say you want to convert each request.param string into a dictionary object, but is that what you meant? It looks like each string is just a key/value pair.
You can pretty simply create a dictionary from those values using:
opts = {}
for r in request.params:
    parts = r.split(':', 1)
    if len(parts) == 2:
        opts[parts[0]] = parts[1]
    else:
        # handle the malformed pair however you like
        pass
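With the example strings above, opts ends up as an ordinary dict that can be mapped straight onto the model fields:
# opts == {'AltTitle': 'some Alt Title',
#          'PrintTitle': 'The Print Title',
#          'Notes': "A bunch o' notes."}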
I am creating a utility which pulls API definitions and the associated request parameters from a database and then pushes that information to a CSV (this is a requirement). Up to this part I am done. The CSV I have looks like this:
(Apologies for adding a CSV image; this editor won't let me add the same data in a table format.)
Now I want to pass these headers and the respective values from each column as API request parameters.
If an API does not have any values configured, we can ignore it and pass an empty body.
Ex1:
http://localhost:8080/cm/apis/API6%2Ftoday?username=MyTestUser70
{ "paramsR": {
"M1": "70878-008",
"C1": "467345-121",
"T1":"Hi 2"
}
}
Ex2:
http://localhost:8080/cm/apis/API3%2Ftoday?username=MyTestUser70
{ }
What I am trying looks like this:
import csv
import urllib.parse
import requests

with open('apis.csv') as csv_file1:
    csv_apis_read = csv.DictReader(csv_file1)
    fields = csv_apis_read.fieldnames
    api_csv = list(csv_apis_read)

for apis in api_csv:
    # base URL taken from the example requests above
    final_url = f"http://localhost:8080/cm/apis/{urllib.parse.quote_plus(apis.get('APIDef'))}"
    req_json = {"paramsR": {"M1": apis.get('M1')}}
    username = {"username": "MyTestUser70"}
    headers = {'Accept': "application/json", 'Content-Type': "application/json",
               'Accept-Encoding': "gzip, deflate", 'Cache-Control': "no-cache", 'Token': "null"}
    response = requests.request("POST", final_url, json=req_json, headers=headers, params=username)
    print(response.request.url)
    print(response.request.body)
    print(response.request.headers)
    print(response.text)
Q1. How can I pass the header value as the payload key and the associated column value as the request parameter for all available APIs, one by one, dynamically (without hardcoding header values in the code)?
Note: the header values are not fixed. After re-generating the CSV mentioned in the first step, the headers and associated values change. That's why I am looking to generate these key/value combinations dynamically instead of hardcoding or pre-defining anything about header and column values.
Q2. While passing request parameters, is there any way to remove [''] from the M1 and T1 column values?
Can someone please guide me with this?
Thank you in advance.
Try using the pandas library's iterrows function:
import pandas

api_file = pandas.read_csv("apis.csv")
for index, row in api_file.iterrows():
    row = dict(row.dropna())        # drop columns that are empty for this API
    api_url = row['APIDef']         # Add the prefix
    row.pop('APIDef')
    for key in row.keys():
        if key == "M1":
            row[key] = eval(row[key])[0]    # "['70878-008']" -> "70878-008"
        if key == "C1":
            row[key] = int(row[key])
        if key == "T1":
            row[key] = eval(row[key])[0]
        if key == "S1":
            if row[key]:
                row[key] = True
            else:
                row[key] = False
    req_json = {}
    if row:
        req_json["paramsR"] = row
    print(api_url, req_json)
    # Make your requests
This prints out the endpoint followed by the params for each row.
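If the column names themselves change whenever the CSV is regenerated (Q1), the per-column checks above can be replaced by a generic cleanup. Below is a minimal sketch, assuming that list-like cells are stringified Python lists such as "['70878-008']" (so ast.literal_eval can strip the brackets, which also covers Q2) and that the URL prefix is the one from the example requests:
import ast
import pandas

api_file = pandas.read_csv("apis.csv")
for _, row in api_file.iterrows():
    row = dict(row.dropna())                      # keep only columns that have values
    api_url = "http://localhost:8080/cm/apis/" + row.pop('APIDef')   # prefix is an assumption
    params = {}
    for key, value in row.items():
        if isinstance(value, str) and value.startswith('['):
            value = ast.literal_eval(value)[0]    # "['Hi 2']" -> "Hi 2"
        params[key] = value
    req_json = {"paramsR": params} if params else {}
    # requests.post(api_url, json=req_json, params={"username": "MyTestUser70"})
    print(api_url, req_json)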
Happy Coding!
The following if condition is in Ruby and I need to duplicate it in Python:
# CpqElm-Login: success
if res.headers['CpqElm-Login'].to_s =~ /success/
  cookie = res.get_cookies.scan(/(Compaq\-HMMD=[\w\-]+)/).flatten[0] || ''
end
In case you need the full original Ruby code, you can check this link: https://github.com/rapid7/metasploit-framework/blob/master/modules/exploits/multi/http/hp_sys_mgmt_exec.rb
Currently I have this:
cookie = ''
response_headers = resp.headers.get('CpqElm-Login')
response_cookie = resp.cookies.get('').value  # here is where I need the "(/(Compaq\-HMMD=[\w\-]+)/).flatten[0]" part in Python
if 'success' in response_headers:
    cookie = response_cookie
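For what it's worth, a rough Python equivalent of the Ruby scan, assuming resp is a requests.Response and the raw cookie string is taken from the Set-Cookie header (an assumption; the Ruby res.get_cookies may aggregate cookies differently), would be:
import re

cookie = ''
if 'success' in (resp.headers.get('CpqElm-Login') or ''):
    raw_cookies = resp.headers.get('Set-Cookie', '')
    match = re.search(r'(Compaq\-HMMD=[\w\-]+)', raw_cookies)
    cookie = match.group(1) if match else ''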
I tried looking around for an answer and gave it a great many tries, but there's something strange going on here. I have some functions in my view that operate on JSON data that comes in via AJAX, and currently I'm trying to do some unit testing on them.
In my test case I have:
kwargs = {'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest'}
url = '/<correct_url>/upload/'
data = {
    "id": p.id
}
c = Client()
response = c.delete(url, data, **kwargs)
content_unicode = response.content.decode('utf-8')
content = json.loads(content_unicode)
p.id is just an integer that comes from a model I'm using.
I then have a function that is being tested, parts of which look as follows:
def delete_ajax(self, request, *args, **kwargs):
    print(request.body)
    body_unicode = request.body.decode('utf-8')
    print(body_unicode)
    body_json = json.loads(body_unicode)
The first print statement yields:
.....b"{'id': 1}"
The other one:
{'id': 1}
and finally I get an error for the fourth line:
json.decoder.JSONDecodeError: Expecting property name enclosed in double quotes: line 1 column 2 (char 1)
What's going wrong here? I understand that the correct JSON format should be {"id": 1}, and that's what I'm sending from my test case. But somewhere along the way single quotes are introduced into the mix, causing me a headache.
Any thoughts?
You need to pass a JSON string to Client.delete(), not a Python dict. When the test client is given a dict it simply coerces it to a string, which is where the single quotes in b"{'id': 1}" come from:
kwargs = {'HTTP_X_REQUESTED_WITH': 'XMLHttpRequest'}
url = '/<correct_url>/upload/'
data = json.dumps({
    "id": p.id
})
c = Client()
response = c.delete(url, data, **kwargs)
You should also set the Content-Type header to "application/json" and check it in your view, but that's another topic.
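For example (a sketch; Django's test Client.delete() takes a content_type argument, and the json import is assumed):
response = c.delete(url, json.dumps({"id": p.id}),
                    content_type='application/json', **kwargs)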
I am trying to upload some data to Dydra from a Sesame triplestore I have on my computer. While the download from Sesame works fine, the triples get mixed up (the s-p-o relationships change, as the object of one triple becomes the object of another). Can someone please explain why this is happening and how it can be resolved? The code is below:
# Imports needed for the snippet below
import pprint
import requests
from bs4 import BeautifulSoup
from rdflib import Graph, URIRef
from SPARQLWrapper import SPARQLWrapper, JSON

# Querying the triplestore to retrieve all results
sesameSparqlEndpoint = 'http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name'
sparql = SPARQLWrapper(sesameSparqlEndpoint)
queryStringDownload = 'SELECT * WHERE {?s ?p ?o}'
dataGraph = Graph()

sparql.setQuery(queryStringDownload)
sparql.method = 'GET'
sparql.setReturnFormat(JSON)
output = sparql.query().convert()
print output

for i in range(len(output['results']['bindings'])):
    # The encoding is necessary to parse non-English characters
    output['results']['bindings'][i]['s']['value'].encode('utf-8')
    try:
        subject_extract = output['results']['bindings'][i]['s']['value']
        if 'http' in subject_extract:
            subject = "<" + subject_extract + ">"
            subject_url = URIRef(subject)
            print subject_url

        predicate_extract = output['results']['bindings'][i]['p']['value']
        if 'http' in predicate_extract:
            predicate = "<" + predicate_extract + ">"
            predicate_url = URIRef(predicate)
            print predicate_url

        objec_extract = output['results']['bindings'][i]['o']['value']
        if 'http' in objec_extract:
            objec = "<" + objec_extract + ">"
            objec_url = URIRef(objec)
            print objec_url
        else:
            objec = objec_extract
            objec_wip = '"' + objec + '"'
            objec_url = URIRef(objec_wip)

        # Loading the data on a graph
        dataGraph.add((subject_url, predicate_url, objec_url))
    except UnicodeError as error:
        print error

# Print all statements in dataGraph
for stmt in dataGraph:
    pprint.pprint(stmt)

# Upload to Dydra
URL = 'http://dydra.com/login'
key = 'my_key'

with requests.Session() as s:
    resp = s.get(URL)
    soup = BeautifulSoup(resp.text, "html5lib")
    csrfToken = soup.find('meta', {'name': 'csrf-token'}).get('content')
    # print csrfToken
    payload = {
        'account[login]': key,
        'account[password]': '',
        'csrfmiddlewaretoken': csrfToken,
        'next': '/'
    }
    # print payload
    p = s.post(URL, data=payload, headers=dict(Referer=URL))
    # print p.text
    r = s.get('http://dydra.com/username/rep_name/sparql')
    # print r.text

dydraSparqlEndpoint = 'http://dydra.com/username/rep_name/sparql'
for stmt in dataGraph:
    queryStringUpload = 'INSERT DATA {%s %s %s}' % stmt
    sparql = SPARQLWrapper(dydraSparqlEndpoint)
    sparql.setCredentials(key, key)
    sparql.setQuery(queryStringUpload)
    sparql.method = 'POST'
    sparql.query()
A far simpler way to copy your data over (apart from using a CONSTRUCT query instead of a SELECT, as I mentioned in the comment) is simply to have Dydra itself directly access your Sesame endpoint, for example via a SERVICE clause.
Execute the following on your Dydra database, and (after some time, depending on how large your Sesame database is), everything will be copied over:
INSERT { ?s ?p ?o }
WHERE {
  SERVICE <http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name>
    { ?s ?p ?o }
}
If the above doesn't work on Dydra, you can alternatively just directly access the RDF statements from your Sesame store by using the URI http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements. Assuming Dydra has an upload-feature where you can provide the URL of an RDF document, you can simply provide it the above URI and it should be able to load it.
The code above can work if the following changes are made:
Use a CONSTRUCT query instead of SELECT (a rough sketch is shown below). Details here -> How to iterate over CONSTRUCT output from rdflib?
Use key as the input for both account[login] and account[password].
However, this is probably not the most efficient way. In particular, doing an individual INSERT for every triple is not a good approach; Dydra did not record all statements this way (I got only about 30% of the triples inserted). By contrast, using the http://my.ip.ad.here:8080/openrdf-sesame/repositories/rep_name/statements method suggested by Jeen allowed me to port all the data successfully.
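For reference, the CONSTRUCT-based download mentioned in the first point can be sketched roughly like this (an assumption being that SPARQLWrapper hands back an rdflib Graph for CONSTRUCT queries when the XML return format is used):
from SPARQLWrapper import SPARQLWrapper, XML

sparql = SPARQLWrapper(sesameSparqlEndpoint)
sparql.setQuery('CONSTRUCT { ?s ?p ?o } WHERE { ?s ?p ?o }')
sparql.setReturnFormat(XML)           # CONSTRUCT results come back as an RDF graph
dataGraph = sparql.query().convert()  # an rdflib Graph, ready to iterate or serialize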
I have min and max variables that are the result of a query on a model:
args.aggregate(Min('price'))
args.aggregate(Max('price'))
returning the serialized data like this:
return HttpResponse(json.dumps([{"maxPrice": args.aggregate(Max('price')),
                                 "minPrice": args.aggregate(Min('price'))}]),
                    content_type='application/json')
the result looks like this:
minPrice = {
"price__min" = 110;
};
maxPrice = {
"price__max" = 36000;
};
and extracting the data looks like this
...
success:^(AFHTTPRequestOperation *operation, id responseObject){
    NSDictionary *elements = responseObject;
    int minPrice = elements[0][@"minPrice"][@"price__min"];
}
The question: how do I change the Django/Python code so that the Objective-C code can look like this: int minPrice = elements[@"minPrice"];
data = args.aggregate(minPrice=Min('price'), maxPrice=Max('price'))
return HttpResponse(json.dumps(data), content_type='application/json')
The data variable is a dictionary with "minPrice" and "maxPrice" keys.
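With the sample prices from the question (used here purely as an illustration), the response body then becomes a single JSON object that the Objective-C side can index directly:
# data == {'minPrice': 110, 'maxPrice': 36000}
# json.dumps(data) -> '{"minPrice": 110, "maxPrice": 36000}'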
Dump to JSON a dictionary instead of a list:
values = args.aggregate(Min('price'), Max('price'))
return HttpResponse(json.dumps({'maxPrice': values['price__max'],
                                'minPrice': values['price__min']}),
                    content_type='application/json')
Well you could do something like this to rearrange the json dump:
data = {'maxPrice': args.aggregate(Max('price'))['price__max'],
        'minPrice': args.aggregate(Min('price'))['price__min']}
return HttpResponse(json.dumps(data), content_type='application/json')
That should give you a JSON dict of the form '{"maxPrice": xxx, "minPrice": yyy}'.