1. I had to send a dict of dicts through request.POST,
big_dict = {k: {kk: vv, kk1: vv1}, k1: {kk: vv, kk1: vv1}}
and the ajax request somehow always flattened it into one dict like
one_dict = {'k[kk]': vv, 'k[kk1]': vv1, 'k1[kk]': vv, 'k1[kk1]': vv1}
2. Someone told me to JSON.stringify(big_dict) before the ajax request, and then
3. json.loads it in my Python/Django view:
json.loads(request.POST.get('data'))
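For context, a minimal sketch of that round trip (illustrative values only), assuming the whole nested dict arrives as one JSON string under a single POST key:

import json

# what JSON.stringify(big_dict) sends from the browser
posted = '{"k": {"kk": "vv", "kk1": "vv1"}, "k1": {"kk": "vv", "kk1": "vv1"}}'

big_dict = json.loads(posted)   # back to a dict of dicts in the view
print(big_dict['k1']['kk1'])    # vv1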
It worked all damn fine until someone uploaded a file named with Chinese characters.
At the third step, the json.loads in my view,
(wrong)
the keys and values of my big_dict still look good, displaying the Chinese characters properly.
(edited)
the keys and values of my big_dict mostly look good; the Chinese characters in one value display properly, but not in the other:
{'0': {'designer': 'EV', 'number': '0229', 'version': '', 'bracket': '(120门幅)', 'imgUrl': 'http://127.0.0.1:8000/media/cache/temp/EV-0229(120%E9%97%A8%E5%B9%85%EF%BC%89.jpg'}}
But when I try to
for k, v in big_dict.items():
    for kk, vv in v.items():
        # do something with kk and vv
the Chinese characters within vv come out all wrong, and os is telling me there is no such file.
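For reference, the imgUrl in the dict printed above is percent-encoded (%E9%97%A8%E5%B9%85%EF%BC%89), so if that encoded form ends up in the filesystem path it will not match the on-disk name; a minimal sketch of decoding it, assuming that mismatch is what os is complaining about:

from urllib.parse import unquote

url = 'http://127.0.0.1:8000/media/cache/temp/EV-0229(120%E9%97%A8%E5%B9%85%EF%BC%89.jpg'
decoded = unquote(url)   # percent-escapes back to the original UTF-8 characters
print(decoded)           # .../media/cache/temp/EV-0229(120门幅）.jpg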
It looked like a simple problem; I searched the web for over an hour and tried things, but nothing worked.
Thanks in advance to anyone helping me out.
EDIT
script from template
$('.designInfoSave').click(function() {
    let num = $('.designBox').length;
    let bcontext = {};
    for (let i = 0; i < num; i++) {
        let infoBox = $('#' + i + "-infoBox");
        let image = $('#image' + i);
        image.toArray();
        let children = infoBox.children();
        children.toArray();
        bcontext[i] = {
            'designer': children[2].value,
            'number': children[4].value,
            'version': children[6].value,
            'bracket': children[8].value,
            'imgUrl': image[0].currentSrc,
        };
    }
    $.ajax({
        type: 'POST',
        url: "{% url 'update_des_info' %}",
        data: {'csrfmiddlewaretoken': '{{ csrf_token }}', 'context': JSON.stringify(bcontext)},
        dataType: "json",
        success: function(response) {
            $("#album").hide();
            $(".ajaxResult").show();
        }
    });
});
views.py
import json
import os.path
import shutil

from django.conf import settings

from .models import Design  # assuming Design is defined in this app's models


def update_des_info(request):
    if request.method != 'POST':
        return
    queryset = json.loads(request.POST.get('context'))
    print(queryset)
    exist = 0
    created = 0
    for k, v in queryset.items():
        designer = v['designer']
        number = v['number']
        version = v['version']
        bracket = v['bracket']
        # slice the path after '/media' out of the image URL
        fp = v['imgUrl'][v['imgUrl'].rfind('/media/') + 6:]
        n, c = Design.objects.get_or_create(designer=designer, number=number, version=version)
        if c:
            created += 1
        else:
            exist += 1
        if n.bracket != bracket:
            n.bracket = bracket
        fpp = settings.MEDIA_ROOT + fp
        p, fn = os.path.split(fpp)
        nfp = os.path.join(settings.MEDIA_ROOT, 'designs', 'image', fn)
        shutil.move(fpp, nfp)
        if not n.image:
            n.image = nfp
        n.save()
Related
We have a Python script that creates a CSV file of enterprise data. One part of the enterprise data is a list of nace codes (which can be None) that looks like this once it's written to the CSV file: ['47299', '8690901', '4729903', '86909'] (it's one cell).
In a second script, this time written in Node.js, we parse the CSV file with papaparse. We want the nace codes to be an array, but they come back as a string looking like "['47299', '8690901', '4729903', '86909']".
How can we parse this string into an array? I had found a possible solution using JSON.parse, but it gives me Unexpected token ' in JSON at position 1.
Python script
class Enterprise:
    def __init__(self):
        self.enterprise_number = ''
        self.vat_number = ''
        self.nace_codes = set()
        self.tel = ''
        self.mobile = ''
        self.email = ''

    def to_json(self):
        return {
            'enterprise_number': self.enterprise_number if self.enterprise_number != '' else None,
            'vat_number': self.vat_number if self.vat_number != '' else None,
            'nace_codes': list(self.nace_codes) if len(self.nace_codes) > 0 else None,
            'tel': self.tel if self.tel != '' else None,
            'mobile': self.mobile if self.mobile != '' else None,
            'email': self.email if self.email != '' else None,
        }
def read_data():
    ...
    with open('enterprise_data.csv', 'w') as file:
        writer = csv.writer(file, delimiter=';')
        writer.writerow(['enterprise_number', 'vat_number', 'name', 'nace_codes', 'type_of_enterprise', 'juridical_form', 'start_date', 'county', 'city', 'address', 'postal_code', 'box', 'group_part', 'group_number', 'tel', 'mobile', 'email', 'is_active'])
        with open('data/enterprise_insert.csv') as file:
            for line in islice(file, 1, None):
                enterprise = Enterprise()
                line = line.rstrip()
                ...
                formatted_data = enterprise.to_json()
                writer.writerow([formatted_data['enterprise_number'], formatted_data['vat_number'], formatted_data['nace_codes'], formatted_data['tel'], formatted_data['mobile'], formatted_data['email']])
Node.js script
const fs = require('fs');
const Papa = require('papaparse');

const csvFilePath = 'data/enterprise_data.csv'

const readCSV = async (filePath) => {
    const csvFile = fs.readFileSync(filePath);
    const csvData = csvFile.toString();
    return new Promise(resolve => {
        Papa.parse(csvData, {
            header: true,
            skipEmptyLines: true,
            transformHeader: header => header.trim(),
            complete: results => {
                console.log('Read', results.data.length, 'records.');
                resolve(results.data);
            }
        });
    });
};

const start = async () => {
    try {
        let parsedData = await readCSV(csvFilePath);
        parsedData.map((row, i) => {
            console.log(`${i} | ${row.enterprise_number}`);
            const nace_codes = row.nace_codes ? JSON.parse(row.nace_codes) : '';
            console.log('Parsed value: ', nace_codes);
        });
    } catch (error) {
        console.log(`Crashed | ${error}`);
    }
}

start();
Assuming that csvData does look like ['47299', '8690901', '4729903', '86909']:
What's wrong is that single quotes are not valid string delimiters in JSON, so JSON.parse throws an error.
To fix this you simply need to replace all occurrences of single quotes with double quotes, like so:
const csvData = csvFile.toString().replaceAll("'", '"');
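An alternative, assuming you control the Python writer, is to store the cell as real JSON in the first place, so the Node side can JSON.parse it without any quote replacement; a minimal sketch of the idea:

import json

nace_codes = {'47299', '8690901', '4729903', '86909'}

# write a JSON array string into the CSV cell instead of Python's repr of a list
cell = json.dumps(sorted(nace_codes)) if nace_codes else None
print(cell)   # ["47299", "4729903", "86909", "8690901"]

With that in place, row.nace_codes on the Node side is already valid JSON.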
I wrote code that takes 9 keys from an API.
The authors, isbn_one, isbn_two, thumbinail and page_count fields may not always be retrievable, and if any of them is missing I would like it to be None. Unfortunately plain ifs, even nested ones, don't work, because that leads to a lot of loops. I also tried try/except KeyError etc., because each key fails with its own error and it is not known which one to assign None to. Here is an example of the logic when a photo is missing:
th = result['volumeInfo'].get('imageLinks')
if th is not None:
    book_exists_thumbinail = {
        'thumbinail': result['volumeInfo']['imageLinks']['thumbnail']
    }
    dnew = {**book_data, **book_exists_thumbinail}
    book_import.append(dnew)
else:
    book_exists_thumbinail_n = {
        'thumbinail': None
    }
    dnew_none = {**book_data, **book_exists_thumbinail_n}
    book_import.append(dnew_none)
When I use logic like that, once one condition is met, e.g. for thumbinail, the rest is not even checked.
When I use try/except it's similar. There is also an ISBN among the keys, but there the dictionary holds a list, so I need to use something like this:
isbn_zer = result['volumeInfo']['industryIdentifiers']
dic = collections.defaultdict(list)
for d in isbn_zer:
    for k, v in d.items():
        dic[k].append(v)
Output data: [{'type': 'ISBN_10', 'identifier': '8320717507'}, {'type': 'ISBN_13', 'identifier': '9788320717501'}]
I don't know what to use anymore to check each key separately and, if it is absent or one of the ISBN identifiers is missing, assign the value None. I have already tried many ideas.
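For illustration, one direction would be something like this sketch (hypothetical, based on the industryIdentifiers structure shown above), where a missing identifier simply becomes None:

identifiers = result['volumeInfo'].get('industryIdentifiers', [])
by_type = {d.get('type'): d.get('identifier') for d in identifiers}

isbn_one = by_type.get('ISBN_10')   # None if there is no ISBN_10 entry
isbn_two = by_type.get('ISBN_13')   # None if there is no ISBN_13 entry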
The rest of the code:
book_import = []
if request.method == 'POST':
    filter_ch = BookFilterForm(request.POST)
    if filter_ch.is_valid():
        cd = filter_ch.cleaned_data
        filter_choice = cd['choose_v']
        filter_search = cd['search']
        search_url = "https://www.googleapis.com/books/v1/volumes?"
        params = {
            'q': '{}{}'.format(filter_choice, filter_search),
            'key': settings.BOOK_DATA_API_KEY,
            'maxResults': 2,
            'printType': 'books'
        }
        r = requests.get(search_url, params=params)
        results = r.json()['items']
        for result in results:
            book_data = {
                'title': result['volumeInfo']['title'],
                'authors': result['volumeInfo']['authors'][0],
                'publish_date': result['volumeInfo']['publishedDate'],
                'isbn_one': result['volumeInfo']['industryIdentifiers'][0]['identifier'],
                'isbn_two': result['volumeInfo']['industryIdentifiers'][1]['identifier'],
                'page_count': result['volumeInfo']['pageCount'],
                'thumbnail': result['volumeInfo']['imageLinks']['thumbnail'],
                'country': result['saleInfo']['country']
            }
            book_import.append(book_data)
else:
    filter_ch = BookFilterForm()
return render(request, "BookApp/book_import.html", {'book_import': book_import,
                                                    'filter_ch': filter_ch})
JavaScript is throwing the error 'Uncaught SyntaxError: Unexpected token '&''.
When I debug in views.py, I get the data with proper apostrophes.
def newEntry(request):
    assert isinstance(request, HttpRequest)
    i = 1
    for x in lines:
        for line in x:
            cursor.execute("select distinct regionn FROM [XYZ].[dbo].[Errors] where [Linne] like '%" + line + "%'")
            region[i] = cursor.fetchall()
            i = i + 1
    return render(
        request,
        'app/newEntry.html',
        {
            'title': 'New Entry',
            'year': datetime.now().year,
            'lines': lines,
            'regions': region,
        }
    )
and here is my JS code
var Regions = {{ regions }};

function changecat(value) {
    if (value.length == 0) document.getElementById("category").innerHTML = "<option>default option here</option>";
    else {
        var catOptions = "";
        for (categoryId in Regions[value]) {
            catOptions += "<option>" + categoryId + "</option>";
        }
        document.getElementById("category").innerHTML = catOptions;
    }
}
Thanks in advance. If this is not a best practice for carrying the data over, please suggest a better approach that fits my requirement.
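For what it's worth, the &#39; tokens usually come from Django auto-escaping the quotes in the dict when {{regions}} is rendered, so one common pattern (a sketch, assuming this is a Django view and template) is to serialize the data to JSON in the view:

import json

def newEntry(request):
    ...
    return render(
        request,
        'app/newEntry.html',
        {
            'title': 'New Entry',
            'regions': json.dumps(region),   # a JSON string instead of a Python dict (convert rows to plain lists first if needed)
        }
    )

In the template it can then be read back with something like var Regions = JSON.parse('{{ regions|escapejs }}'); (or, alternatively, pass the dict itself and use the json_script template filter), so the quotes never get HTML-escaped into &#39;.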
I am running into an issue that I can't seem to get past. Any insight would be great.
The script is supposed to get memory allocation information from a database and return that information as a formatted JSON object. The script works fine when I give it a static JSON object with stack_ids (the information I would be passing), but it won't work when I try to pass the information via POST.
Although the current state of my code uses request.json to access the passed data, I have also tried request.POST.get("").
My HTML includes this post request, using D3's xhr post:
var stacks = [230323, 201100, 201108, 229390, 201106, 201114];
var stack_ids = {'stack_ids': stacks};

var my_request = d3.xhr('/pie_graph');
my_request.header("Content-Type", "application/json");
my_request.post(stack_ids, function(stuff) {
    stuff = JSON.parse(stuff);
    var data1 = stuff['allocations'];
    var data2 = stuff['allocated bytes'];
    var data3 = stuff['frees'];
    var data4 = stuff['freed bytes'];
    ...
    ...
}, "json");
while my server script has this route:
@views.webapp.route('/pie_graph', method='POST')
def server_pie_graph_json():
    db = views.db
    config = views.config
    ret = {
        'allocations': [],
        'allocated bytes': [],
        'frees': [],
        'freed bytes': [],
        'leaks': [],
        'leaked bytes': []
    }
    stack_ids = request.json['stack_ids']
    # for each unique stack trace
    for pos, stack_id in stack_ids:
        stack = db.stacks[stack_id]
        nallocs = format(stack.nallocs(db, config))
        nalloc_bytes = format(stack.nalloc_bytes(db, config))
        nfrees = format(stack.nfrees(db, config))
        nfree_bytes = format(stack.nfree_bytes(db, config))
        nleaks = format(stack.nallocs(db, config) - stack.nfrees(db, config))
        nleaked_bytes = format(stack.nalloc_bytes(db, config) - stack.nfree_bytes(db, config))
        # create a dictionary representing the stack
        ret['allocations'].append({'label': stack_id, 'value': nallocs})
        ret['allocated bytes'].append({'label': stack_id, 'value': nalloc_bytes})
        ret['frees'].append({'label': stack_id, 'value': nfrees})
        ret['freed bytes'].append({'label': stack_id, 'value': nfree_bytes})
        ret['leaks'].append({'label': stack_id, 'value': nleaks})
        ret['leaked bytes'].append({'label': stack_id, 'value': nfree_bytes})
    # return dictionary of allocation information
    return ret
Most of that can be ignored, the script works when I give it a static JSON object full of data.
The request currently returns a 500 Internal Server Error: JSONDecodeError('Expecting value: line 1 column 2 (char 1)',).
Can anyone explain to me what I am doing wrong?
Also, if you need me to explain anything further, or include any other information, I am happy to do that. My brain is slightly fried after working on this for so long, so I may have missed something.
Here is what I do with POST and it works:
from bottle import *
@post('/')
def do_something():
    comment = request.forms.get('comment')
    sourcecode = request.forms.get('sourceCode')
Source
function saveTheSourceCodeToServer(comment) {
    var path = saveLocation();
    var params = { 'sourceCode': getTheSourceCode(), 'comment': comment };
    post_to_url(path, params, 'post');
}
Source with credits to JavaScript post request like a form submit
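That snippet reads form-encoded fields; since the original request sets Content-Type: application/json, the JSON variant would look more like this sketch (assuming Bottle, as the route decorator suggests, and assuming the client sends an actual JSON string, e.g. JSON.stringify(stack_ids), as the request body):

from bottle import post, request

@post('/pie_graph')
def pie_graph_json():
    payload = request.json               # parsed into a dict when the body is valid JSON
    stack_ids = payload['stack_ids']     # e.g. [230323, 201100, ...]
    return {'received': len(stack_ids)}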
I'm using the Yahoo Placemaker API, which gives a different JSON structure depending on the input.
A simple JSON response looks like this:
{
    'document': {
        'itemDetails': {
            'id': '0',
            'prop1': '1',
            'prop2': '2'
        },
        'other': {
            'propA': 'A',
            'propB': 'B'
        }
    }
}
When I want to access itemDetails I simply write json_file['document']['itemDetails'].
But when I get a more complicated response, such as
{
    'document': {
        '1': {
            'itemDetails': {
                'id': '1',
                'prop1': '1',
                'prop2': '2'
            }
        },
        '0': {
            'itemDetails': {
                'id': '0',
                'prop1': '1',
                'prop2': '2'
            }
        },
        '2': {
            'itemDetails': {
                'id': '2',
                'prop1': '1',
                'prop2': '2'
            }
        },
        'other': {
            'propA': 'A',
            'propB': 'B'
        }
    }
}
the solution obviously does not work.
I use id, prop1 and prop2 to create objects.
What would be the best approach to automatically access itemDetails in the second case without writing json_file['document']['0']['itemDetails']?
If I understand correctly, you want to loop through all of json_file['document']['0']['itemDetails'], json_file['document']['1']['itemDetails'], ...
If that's the case, then:
item_details = {}
for key, value in json_file['document'].items():
    item_details[key] = value['itemDetails']
Or, a one-liner:
item_details = {k: v['itemDetails'] for k, v in json_file['document'].items()}
Then, you would access them as item_details['0'], item_details['1'], ...
Note: you can drop the quotes around 0 and 1 by converting the keys with int(key) (or int(k) in the comprehension).
Edit:
If you want to access both cases seamlessly (whether there is one result or many), you could check:
if 'itemDetails' in json_file['document']:
    item_details = {'0': json_file['document']['itemDetails']}
else:
    item_details = {k: v['itemDetails'] for k, v in json_file['document'].items() if k != 'other'}
Then loop through the item_details dict.
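For instance, a short sketch of that loop, building objects out of each normalized entry (Item is a hypothetical stand-in for whatever you construct from id, prop1 and prop2):

items = []
for key, details in item_details.items():
    # details is one itemDetails dict, e.g. {'id': '0', 'prop1': '1', 'prop2': '2'}
    items.append(Item(details['id'], details['prop1'], details['prop2']))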