How does Django really handle multiple requests on the development server? - python

I am making a little django app to serve translations for my react frontend. The way it works is as follows:
1. The frontend tries to find a translation using a key.
2. If the translation for that key is not found, it sends a request to the backend with the missing key.
3. On the backend, the missing key is appended to a JSON file.
Everything works just fine when the requests are sent one at a time (when one finishes, the next is sent). But when multiple requests are sent at the same time, everything breaks: the JSON file gets corrupted. It's as if all the requests are changing the file at the same time, which would cause this to happen. I am not sure that's the case, because I thought a file could not be edited by two processes at the same time (correct me if I am wrong), but I don't receive any such error, which would indicate that the requests are handled one at a time, according to this and this.
Also, I tried something which, to my surprise, worked: adding time.sleep(1) at the top of my API view. When I did this, everything worked as expected.
What is going on?
Here is the code, just in case it matters:
import json
import time

from rest_framework.decorators import api_view
from rest_framework.response import Response

@api_view(['POST'])
def save_missing_translation_keys(request, lng, ns):
    time.sleep(1)
    missing_trans_path = MISSING_TRANS_DIR / f'{lng}.json'
    # Read lng file and get current missing keys for given ns
    try:
        with open(missing_trans_path, 'r', encoding='utf-8') as missing_trans_file:
            if is_file_empty(missing_trans_path):
                missing_keys_dict = {}
            else:
                missing_keys_dict = json.load(missing_trans_file)
    except FileNotFoundError:
        missing_keys_dict = {}
    except Exception as e:
        # Even if the file is not empty, we might not be able to parse it
        # for some reason, so we log any errors in the log file
        with open(MISSING_LOG_FILE, 'a', encoding='utf-8') as logFile:
            logFile.write(
                f'could not save missing keys {str(list(request.data.keys()))}\n'
                f'namespace {lng}/{ns} file can not be parsed because\n{str(e)}\n\n\n')
        raise e
    # Add new missing keys to the dict read above.
    ns_missing_keys = missing_keys_dict.get(ns, [])
    for missing_key in request.data.keys():
        if missing_key and isinstance(missing_key, str):
            ns_missing_keys.append(missing_key)
        else:
            raise ValueError('Missing key not allowed')
    missing_keys_dict.update({ns: list(set(ns_missing_keys))})
    # Write new missing keys to the file
    with open(missing_trans_path, 'w', encoding='utf-8') as missing_trans_file:
        json.dump(missing_keys_dict, missing_trans_file, ensure_ascii=False)
    return Response()
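For what it's worth, Django's development server handles requests in multiple threads by default (you can force single-threaded handling with runserver --nothreading), so two requests can interleave between the read at the top of this view and the write at the bottom, and the last writer wins with a partially merged or corrupted file. Below is a minimal sketch of one way to serialize the whole read-modify-write cycle with an exclusive file lock, using the POSIX-only fcntl module; the lock-file path and function name are illustrative assumptions, not part of the original code:

import fcntl
import json

def update_missing_keys(path, ns, new_keys):
    # Hold an exclusive lock for the whole read-modify-write cycle so
    # concurrent requests cannot interleave (POSIX-only; assumes every
    # writer goes through this function).
    with open(path.with_suffix('.lock'), 'w') as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)
        try:
            try:
                with open(path, 'r', encoding='utf-8') as f:
                    missing_keys_dict = json.load(f)
            except (FileNotFoundError, json.JSONDecodeError):
                missing_keys_dict = {}
            ns_keys = set(missing_keys_dict.get(ns, []))
            ns_keys.update(new_keys)
            missing_keys_dict[ns] = sorted(ns_keys)
            with open(path, 'w', encoding='utf-8') as f:
                json.dump(missing_keys_dict, f, ensure_ascii=False)
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)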

Related

I am looking to create an API endpoint route that returns txt in a json format -Python

I'm new to developing, and my question involves creating an API endpoint in our route. The API will be used for a POST from a Vuetify UI. Data will come from our MongoDB. We will be getting a .txt file for our shell script, but it will have to POST as JSON. I think these are the steps for converting the text file:
1) create a list for the lines of the .txt
2) add each line to the list
3) join the list elements into a string
4) create a dictionary with the file/file content and convert it to JSON
This is my current code for the steps:
import json

### something.txt: an example of the shell script ###
f = open("something.txt")
# create a list to put the lines of the file in
file_output = []
# add each line of the file to the list
for line in f:
    file_output.append(line)
# mashes all of the list elements together into one string
fileoutput2 = ''.join(file_output)
print(fileoutput2)
# create a dict with file and file content and then convert to JSON
json_object = {"file": fileoutput2}
json_response = json.dumps(json_object)
print(json_response)
{"file": "Hello\n\nSomething\n\nGoodbye"}
I have the following baseline code below, which I execute on a button press in the UI:
@bp_customer.route('/install-setup/<string:customer_id>', methods=['POST'])
def install_setup(customer_id):
    cust = Customer()
    customer = cust.get_customer(customer_id)
    ### example of a series of lines with newline character between them.
    script_string = "Beginning\nof\nscript\n"
    json_object = {"file": script_string}
    json_response = json.dumps(json_object)
    # get the install shell script content
    # replace the values (somebody has already done this)
    # attempt to return the below example json_response
    return make_response(jsonify(json_response), 200)
My current Vuetify button-press code is here, so I just have to amend it to a POST and the new route once this is established:
onClickScript() {
  console.log("clicked");
  axios
    .get("https://sword-gc-eadsusl5rq-uc.a.run.app/install-setup/")
    .then((resp) => {
      console.log("resp: ", resp.data);
      this.scriptData = resp.data;
    });
},
I'm having a hard time combining these 2 concepts in the correct way. Any input as to whether I'm on the right path? Insight from anyone who's much more experienced than me?
You're on the right path, but needlessly complicating things a bit. For example, the first bit could be just:
import json

with open("something.txt") as f:
    json_response = json.dumps({'file': f.read()})
print(json_response)
And since you're looking to pass everything through jsonify anyway, even this would suffice:
with open("something.txt") as f:
    data = {'file': f.read()}
Where you can pass data directly through jsonify. The rest of it isn't sufficiently complete to offer any concrete comments, but the basic idea is OK.
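For illustration, here is a minimal sketch of a route returning the dict through jsonify (this assumes a plain Flask app rather than the question's bp_customer blueprint; note that calling jsonify on the output of json.dumps would double-encode, so pass the dict itself):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/install-setup/<string:customer_id>', methods=['POST'])
def install_setup(customer_id):
    # read the shell script and wrap its content in a dict
    with open("something.txt") as f:
        data = {'file': f.read()}
    # jsonify serializes the dict and sets the JSON Content-Type header
    return jsonify(data), 200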
If you have a working whole, you could go to https://codereview.stackexchange.com/ to ask for a review; questions on Stack Overflow should be limited to actual problems with getting something to work.

Parsing tweets in JSON format to find Twitter users

I am reading a Twitter feed in JSON format to count the number of users.
Some lines in the input file might not be tweets, but messages that the Twitter server sent to the developer (such as limit notices). I need to ignore these messages.
These messages would not contain the created_at field and can be filtered out accordingly.
I have written the following piece of code, to extract the valid tweets, and then extract the user.id and the text.
def safe_parse(raw_json):
    try:
        json_object = json.loads(raw_json)
        if 'created_at' in json_object:
            return json_object
        else:
            return
    except ValueError as error:
        return

def get_usr_txt(line):
    tmp = safe_parse(line)
    if tmp is not None:
        return (tmp.get('user').get('id_str'), tmp.get('text'))
    else:
        return
My challenge is that I get one extra user called "None"
Here is a sample output (it is a large file):

('49838600', 'This is the temperament you look for in a guy who would have access to our nuclear arsenal. ), None, ('2678507624', 'RT #GrlSmile: #Ricky_Vaughn99 Yep, which is why in 1992 I switched from Democrat to Republican to vote Pat Buchanan, who warned of all of t…'),
I am struggling to find out what I am doing wrong. There is no None in the Twitter file, so I am assuming that I am reading lines like
{"limit":{"track":1,"timestamp_ms":"1456249416070"}}
but the code above should not include them, unless I am missing something.
Any pointers? Thanks for your help and your time.
Some lines in the input file might not be tweets, but messages that the Twitter server sent to the developer (such as limit notices). I need to ignore these messages.
That's not exactly what happens. If one of the following occurs:
- raw_json is not a valid JSON document, or
- created_at is not in the parsed object,
then you return with the default value, which is None. If you want to ignore these records, you can add a filter step between the two operations:
rdd.map(safe_parse).filter(lambda x: x).map(get_usr_txt)
You can also use flatMap trick to avoid filter and simplify your code (borrowed from this answer by zero323):
def safe_parse(raw_json):
    try:
        json_object = json.loads(raw_json)
    except ValueError as error:
        return []
    else:
        if 'created_at' in json_object:
            yield json_object
rdd.flatMap(safe_parse).map(get_usr_txt)
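As a follow-up sketch (assuming rdd is a PySpark RDD of raw JSON lines, as the question implies), get_usr_txt no longer needs its None check, since safe_parse now yields only valid tweets:

def get_usr_txt(tweet):
    # every element reaching this point is already a parsed, valid tweet
    return (tweet.get('user', {}).get('id_str'), tweet.get('text'))

pairs = rdd.flatMap(safe_parse).map(get_usr_txt)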

PRAW Errors updating to new Python Distro

So I am trying to create a bot that cross-posts from one sub (r/pics) to another (r/polpics), using a bit of code from u/GoldenSights. I upgraded to a new Python distro and now I get a ton of errors; I don't even know where to begin. Here is the traceback (formatting off, error lines bold):
Traceback (most recent call last):
  File "C:\Users\tonyc\AppData\Local\Programs\Python\Python36-32\Lib\site-packages\praw\subdump.py", line 84, in <module>
    r = praw.Reddit(USERAGENT)
  File "C:\Users\tonyc\AppData\Local\Programs\Python\Python36-32\lib\site-packages\praw\reddit.py", line 150, in __init__
    raise ClientException(required_message.format(attribute))
praw.exceptions.ClientException: Required configuration setting 'client_id' missing.
This setting can be provided in a praw.ini file, as a keyword argument to the `Reddit` class constructor, or as an environment variable.
This seems to be related to the USERAGENT setting. I don't think I have it configured right:
USERAGENT = ""
# This is a short description of what the bot does. For example:
# "/u/GoldenSights' Newsletter bot"
SUBREDDIT = "pics"
# This is the sub or list of subs to scan for new posts.
# For a single sub, use "sub1".
# For multiple subs, use "sub1+sub2+sub3+...".
# For all, use "all".
KEYWORDS = ["It looks like this post is about US Politics."]
# Any comment containing these words will be saved.
KEYDOMAINS = []
# If non-empty, link posts must have these strings in their URL
This is the error line:
print('Logging in')
r = praw.Reddit(USERAGENT)  # <-- here, this is error line 84
r.set_oauth_app_info(APP_ID, APP_SECRET, APP_URI)
r.refresh_access_information(APP_REFRESH)
Also, in reddit.py:

raise ClientException(required_message.format(attribute))  # <-- error
praw.exceptions.ClientException: Required configuration setting 'client_id' missing.
This setting can be provided in a praw.ini file, as a keyword argument to the `Reddit` class constructor, or as an environment variable.
Firstly, you're going to want to store your API credentials externally in your praw.ini file. This makes things a lot more secure, and it looks like it might go some way towards fixing your issue. Here's what a completed praw.ini file looks like, including the user agent, so try to replicate this:
[DEFAULT]
# A boolean to indicate whether or not to check for package updates.
check_for_updates=True
# Object to kind mappings
comment_kind=t1
message_kind=t4
redditor_kind=t2
submission_kind=t3
subreddit_kind=t5
# The URL prefix for OAuth-related requests.
oauth_url=https://oauth.reddit.com
# The URL prefix for regular requests.
reddit_url=https://www.reddit.com
# The URL prefix for short URLs.
short_url=https://redd.it
[appname]
client_id=IE*******T14_w
client_secret=SW***********************CLY
password=******************
username=appname
user_agent=web:appname:1.0.0 (by /u/username)
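Once that file is in place, initialization is a one-liner; here is a minimal sketch (the section name 'appname' must match whatever header you used in your praw.ini):

import praw

# picks up client_id, client_secret, username, password and user_agent
# from the [appname] section of praw.ini
reddit = praw.Reddit('appname')
print(reddit.user.me())  # sanity check: prints the authenticated username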
Let me know how things go after you sort this out.

DoubleClick Bid Manager API not updating the line items

I am downloading line items from DBM, modifying them, and uploading them again to DBM. Once uploaded, if I download and view them again, I am not able to see the modifications. There is no error in the code, and the API is not returning any errorStatus either.
Code to upload line items:
service = build('doubleclickbidmanager', config.Version,
                http=credentials.authorize(httplib2.Http()))
request = service.lineitems().uploadlineitems(body=BODY)
response = request.execute()
if 'uploadStatus' in response and 'errors' in response['uploadStatus']:
    for error in response['uploadStatus']['errors']:
        logging.error(error)
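For reference, here is a sketch of what BODY can look like; to the best of my knowledge the v1 uploadlineitems request accepts a dryRun flag, which validates the rows without persisting them and can help diagnose silently ignored updates (the file name here is a placeholder):

# placeholder: in practice this is the downloaded CSV with your edits applied
line_items_csv = open('line_items.csv').read()

BODY = {
    'lineItems': line_items_csv,  # the CSV payload as a single string
    'format': 'CSV',
    'dryRun': True,  # validate only; set to False to actually persist changes
}
request = service.lineitems().uploadlineitems(body=BODY)
response = request.execute()
logging.info(response.get('uploadStatus'))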
Code to download line items:
service = build('doubleclickbidmanager', config.Version,
                http=credentials.authorize(httplib2.Http()))
request = service.lineitems().downloadlineitems(body=body)
print "Downloading Line Items.."
logging.info("function: Downloading Line Items..")
# Execute request and save response contents.
with open(file_path, 'wb') as handler:
    # Call the API, getting the (optionally filtered) list of line items.
    # Then write the contents of the response to a CSV file.
    lidata = request.execute()['lineItems'].encode('utf-8')
    logging.info("function: request.execute succeeded.")
    handler.write(lidata)
print 'Download Completed.'
Is this the proper way to check whether the line item is modified, or am I doing something wrong? Is there any other way to check it?
Please note that the API will update a line item only in the following cases:
- Based on a flag: if the flag is set for a row, that row will be updated.
- Not all of the rows will get updated.
Refer to this link: https://developers.google.com/bid-manager/guides/entity-write/format — only those columns whose "Writeable" value is "yes" can be updated.

Pylons: response renaming? Is there a better way?

I've got a Pylons controller with an action called serialize returning content_type=text/csv. I'd like the response of the action to be named based on the input parameter, i.e. for the following route, the produced CSV file should be named {id}.csv: /app/PROD/serialize => PROD.csv (so a user can open the file in Excel with a proper name directly via a web browser):
map.connect('/app/{id}/serialize',controller = 'csvproducer',action='serialize')
I've tried setting different HTTP headers and properties of webob's response object with no luck. However, I figured out a workaround: simply add a new action to the controller and dynamically redirect the original action to the new one, i.e.:
map.connect('/app/{id}/serialize',controller = 'csvproducer',action='serialize')
map.connect('/app/csv/{foo}',controller = 'csvproducer', action='tocsv')
The controller's snippet:
def serialize(self, id):
    try:
        # produces csv content; stored under the same key that tocsv() reads back
        session['rfa.enviornment.serialize'] = self.service.serialize(id)
        session.save()
        redirect_to(str("/app/csv/%s.csv" % id))
    except Exception, e:
        log.error(e)
        abort(503)

def tocsv(self):
    try:
        csv = session.pop("rfa.enviornment.serialize")
    except Exception, e:
        log.error(e)
        abort(503)
    if csv:
        response.content_type = 'text/csv'
        response.status_int = 200
        response.write(csv)
    else:
        abort(404)
The above setup works perfectly fine. However, is there a better/slicker/neater way of doing it? Ideally I wouldn't like to redirect the request; instead I'd like to either rename the location or set content-disposition: attachment; filename='XXX.csv' [unsuccessfully tried both :(]
Am I missing something obvious here?
Cheers
UPDATE:
Thanks to ebo I've managed to fix the Content-Disposition header. I should read the W3C specs more carefully next time ;)
You should be able to set the Content-Disposition header on the response object.
If you have already tried that, it may not have worked because the HTTP standard says the quoting should be done with double-quote marks.
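For illustration, here is a minimal sketch of the direct approach inside the serialize action (assuming Pylons' global response object, as in the question's code, so no redirect is needed):

def serialize(self, id):
    csv_data = self.service.serialize(id)  # produces csv content
    response.content_type = 'text/csv'
    # note the double quotes around the filename, as the HTTP spec requires
    response.headers['Content-Disposition'] = 'attachment; filename="%s.csv"' % id
    response.write(csv_data)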
