I want to use a variable globally in views.py - Python

views.py
def getBusRouteId(strSrch):
    end_point = "----API url----"
    parameters = "?ServiceKey=" + "----my servicekey----"
    parameters += "&strSrch=" + strSrch
    url = end_point + parameters
    retData = get_request_url(url)
    asd = xmltodict.parse(retData)
    json_type = json.dumps(asd)
    data = json.loads(json_type)
    if (data == None):
        return None
    else:
        return data

def show_list(request):
    Nm_list = []
    dictData_1 = getBusRouteId("110")
    for i in range(len(dictData_1['ServiceResult']['msgBody']['itemList'])):
        Nm_list.append(dictData_1['ServiceResult']['msgBody']['itemList'][i]['busRouteNm'])
    return render(request, 'list.html', {'Nm_list': Nm_list})
There is dict data that is returned by an API.
In getBusRouteId, some XML data is parsed into dict data.
In show_list, I call getBusRouteId so dictData_1 receives that dict data.
And I want to refer to this dictData_1 in another function.
Is there any way to use dictData_1 globally?

Either store this data in the session (if it's short-lived data) or in the database (if you want to persist it).
The point is that a WSGI app is typically deployed as a pool of long-running processes, with a "supervisor" process that dispatches incoming HTTP requests to the first available process (or to a newly spawned one, etc.), so using process-wide globals to store per-user data does NOT work: you always end up with user A getting data from user B, or no data at all, etc.
NB: this kind of issue may not appear when testing with a single user on the dev server, but it is still GUARANTEED to break in production.
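If the session is enough, a minimal sketch could look like this (the session key 'bus_route_items' and the second view are illustrative names, not part of your code):

def show_list(request):
    dictData_1 = getBusRouteId("110")
    items = dictData_1['ServiceResult']['msgBody']['itemList']
    # store only what you need; by default Django session data must be JSON-serializable
    request.session['bus_route_items'] = items
    Nm_list = [item['busRouteNm'] for item in items]
    return render(request, 'list.html', {'Nm_list': Nm_list})

def another_view(request):
    # a later request from the same user can read the data back
    items = request.session.get('bus_route_items', [])
    ...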
Also, totally unrelated but:
1/ this bit seems totally useless - you serialize a dict to json then unserialize it back to a dict, which, unless you have custom serialization / unserialization hooks (which is not the case here), is functionally a no-op.
json_type = json.dumps(asd)
data = json.loads(json_type)
2/ Here:
end_point = "----API url----"
parameters = "?ServiceKey=" + "----my servicekey----"
parameters += "&strSrch=" + strSrch
url = end_point + parameters
retData = get_request_url(url)
I don't know how get_request_url is implemented but if you are using python-requests, it already knows how to turn a dict into a (properly encoded) querystring. And if you're using the standard urllib packages, they ALSO provide a way to turn a dict into a properly built querystring. This makes for more robust AND more maintainable code.
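For instance, a rough sketch with python-requests (the helper name is illustrative, and the endpoint/key are placeholders just like in your code):

import requests

def fetch_bus_route_xml(strSrch):
    end_point = "----API url----"
    params = {
        "ServiceKey": "----my servicekey----",
        "strSrch": strSrch,
    }
    # requests URL-encodes the params dict into the querystring for you
    response = requests.get(end_point, params=params)
    return response.text

With the standard library, urllib.parse.urlencode(params) builds the same querystring.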
3/ you may want to learn how to properly use Python's for loops
Here:
Nm_list=[]
dictData_1 = getBusRouteId("110")
for i in range(len(dictData_1['ServiceResult']['msgBody']['itemList'])):
Nm_list.append(dictData_1['ServiceResult']['msgBody']['itemList'][i]['busRouteNm'])
Python for loops naturally iterate over the sequence, yielding one item from the sequence on each iteration. So the proper way to write this is:
Nm_list = []
for item in dictData_1['ServiceResult']['msgBody']['itemList']:
    Nm_list.append(item['busRouteNm'])
which is both much more readable AND much more efficient.
Also, this can be further improved using list comprehension:
# intermediate var for readability
source = dictData_1['ServiceResult']['msgBody']['itemList']
Nm_list = [item['busRouteNm'] for item in source]
which is even more efficient (it's optimized by the runtime to avoid memory reallocation when the list grows).
4/ this:
if (data == None):
    return None
else:
    return data
is a very convoluted way to write:
return data
(also note that since None is a singleton, the preferred way is to use the identity test operator is, i.e. if data is None - same result but more idiomatic).

I understand that you want to perform some operations on the dict returned by getBusRouteId() and pass it to another function.
Solution: just pass dictData_1 as an argument to the other function. There is no need for global variables.
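A minimal sketch (make_name_list is an illustrative helper name, not something from your code):

def make_name_list(dictData):
    items = dictData['ServiceResult']['msgBody']['itemList']
    return [item['busRouteNm'] for item in items]

def show_list(request):
    dictData_1 = getBusRouteId("110")
    Nm_list = make_name_list(dictData_1)   # just pass the dict along
    return render(request, 'list.html', {'Nm_list': Nm_list})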

Mis-arranged dictionary pairs when hosted on Flask

When I print the dictionary in the interpreter it works as desired, but when I use it as a Flask API return value the dictionary becomes a mess: all key-value pairs are mis-organized.
Not desired JSON data (got this on Flask API) - https://pastebin.com/jrfLMVNg
Desired JSON data (got this on interpreter) - https://pastebin.com/cDJnah07
Probably the faulty code:
def dataPacker(self, *datas):
    for data in datas:
        if type(data) == dict:
            for key, value in data.items():
                self.returnDataJson[key] = value
        else:
            raise Exception('dict object expected')

def dataCollector(self):
    with concurrent.futures.ThreadPoolExecutor() as executor:
        details_ = executor.submit(self.dataPacker, self.details)
        audiolink_ = executor.submit(self.dataPacker, self.audiolink)
        videolink_ = executor.submit(self.dataPacker, self.videolink)
        lyrics_ = executor.submit(self.dataPacker, self.lyrics)
    return self.returnDataJson
Is this because of threading? But why does it work fine in the interpreter?
So the problem is that the items are in the wrong order, but each key has the right value?
You don't say which version of Python this is; older versions didn't keep dict items in order (the order was arbitrary), and Flask may be deliberately making it random as well as arbitrary in order to protect against attacks.
If it comes to that, the order of items in a JSON dictionary (object) is defined to be unimportant, so you shouldn't rely on it if you can at all help it.
Threading will indeed make the order potentially interleaved. If you need to rely on the order, you'll need to put in some mechanism to guarantee it; currently it's just chance whether it ends up in the order you want or in some other order.
I found Cliff Kerr's answer to this question helpful. I just added app.config['JSON_SORT_KEYS'] = False to my code and now it doesn't sort the keys alphabetically and keeps the dict ordered.
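A minimal sketch of that setting (the route and payload here are illustrative; newer Flask versions expose the same switch as app.json.sort_keys):

from flask import Flask, jsonify

app = Flask(__name__)
# stop jsonify from sorting dictionary keys alphabetically
app.config['JSON_SORT_KEYS'] = False

@app.route('/song')
def song():
    # keys now come back in insertion order
    return jsonify({'details': 'd', 'audiolink': 'a', 'videolink': 'v', 'lyrics': 'l'})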

Append to request.session['list'] in Django

Something is bugging me.
I'm following along with this beginner tutorial for Django (CS50) and at some point we receive a string back from a form submission and want to add it to a list:
https://www.youtube.com/watch?v=w8q0C-C1js4&list=PLhQjrBD2T380xvFSUmToMMzERZ3qB5Ueu&t=5777s
def add(request):
    if 'tasklist' not in request.session:
        request.session['tasklist'] = []

    if request.method == 'POST':
        form_data = NewTaskForm(request.POST)
        if form_data.is_valid():
            task = form_data.cleaned_data['task']
            request.session['tasklist'] += [task]
            return HttpResponseRedirect(reverse('tasks:index'))
I've checked the type of request.session['tasklist'] and Python shows it's a list.
The task variable is a string.
So why doesn't request.session['tasklist'].append(task) work properly? I can see it being added to the list via some print statements, but then it is 'forgotten' again - it doesn't seem to be permanently added to the tasklist.
Why do we use this request.session['tasklist'] += [task] instead?
The only thing I could find is https://ogirardot.wordpress.com/2010/09/17/append-objects-in-request-session-in-django/ but that refers to a site that no longer exists.
The code works fine, but I'm trying to understand why you need to use a different operation and can't / shouldn't use the append method.
Thanks.
The reason why it does not work is that Django does not see that you have changed anything in the session when you use the append() method on a list that is stored in the session.
What you are doing here is essentially pulling out a reference to the list and making changes to it without the session backend knowing anything about it. Another way to explain it:
The append() method is on the list itself, not on the session object.
When you call append() on the list you are only talking to the list, and the list's parent (the session) has no idea what you guys are doing.
When you however do an assignment on the session itself, session['whatever'] = 'something', then it knows that something is up and the changes are saved.
So the key here is that you need to operate on the session object directly if you want your changes to be picked up automatically.
Django only thinks it needs to save a changed session item if the item got reassigned to the session. See here: the django session base code - the __setitem__ method contains a self.modified = True statement.
session['list'] += [new_element] adds a new list item (it mutates the list stored in the session, so the list reference stays the same) and then reassigns it to the session: first a __getitem__ call is triggered, then your += / __iadd__ runs on the value read, then a __setitem__ call is made (with the list reference passed to it). You can see in the Django codebase that it marks the session as modified after each __setitem__ call.
session['list'] = session['list'] + [new_item] does the same thing but creates a new list every time it runs, so it's a bit less efficient - but you should not store hundreds of items in the session anyway, so you're probably fine. This also works exactly as above.
However, if you use sub-keys in the session, like session['list']['x'] = 'whatever', the session will not see itself as modified, so you need to mark it as such yourself with request.session.modified = True.
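As a sketch, the add() view from the question could keep append() and just flag the session explicitly:

def add(request):
    if 'tasklist' not in request.session:
        request.session['tasklist'] = []

    if request.method == 'POST':
        form_data = NewTaskForm(request.POST)
        if form_data.is_valid():
            task = form_data.cleaned_data['task']
            request.session['tasklist'].append(task)
            # the in-place mutation is invisible to the session backend,
            # so mark the session as modified explicitly
            request.session.modified = True
            return HttpResponseRedirect(reverse('tasks:index'))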
Short answer: It's about how Python chooses to implement the dict data structure.
Long answer:
Let's start by saying that request.session is a dictionary.
Quoting Django's documentation, "By default, Django only saves to the session database when the session has been modified – that is if any of its dictionary values have been assigned or deleted". Link
So, the problem is that the session database is not being modified by
request.session['tasklist'].append(task)
Looking at the relevant parts of Django's session base code (as posted by #Csaba K. in an answer), the variable self.modified is set to True when the __setitem__ dunder method is called.
Now, the problem seems to be that the __setitem__ dunder method is not called by request.session['tasklist'].append(task), but it does get called by request.session['tasklist'] += [task]. It is not about whether the reference of request.session['tasklist'] changes or not, as pointed out by another answer, because the reference to the underlying list remains the same.
To confirm, let's create a custom dictionary which extends the Python dict and prints something when the __setitem__ dunder method is called.
class MyDict(dict):
    def __init__(self, globalVar):
        super().__init__()
        self.globalVar = globalVar

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        print("Called Set item when: ", end="")

myDict = MyDict(0)
print("Creating Dict")
print("-----")

myDict["y"] = []
print("Adding a new key-value pair")
print("-----")

myDict["y"] += ["x"]
print(" using +=")
print("-----")

myDict["y"].append("x")
print("append")
print("-----")

myDict["y"].extend(["x"])
print("extend")
print("-----")

myDict["y"] = myDict["y"] + ["x"]
print(" using +")
print("-----")
It prints:
Creating Dict
-----
Called Set item when: Adding a new key-value pair
-----
Called Set item when: using +=
-----
append
-----
extend
-----
Called Set item when: using +
-----
As we can see, the __setitem__ dunder method is called (and in turn self.modified is set to True) only when adding a new key-value pair, or when using += or +, but not when initializing, appending, or extending an iterable (in this case a list). Now, the operators + and += do very different things in Python, as explained in the other answer. += behaves more like the append method, but in this case I guess it's more about how Python chooses to implement the dict data structure than about how +, += and append behave on lists.
I found this while doing some more searching:
https://code.djangoproject.com/wiki/NewbieMistakes
Scroll to 'Appending to a list in session doesn't work'
Again, it is a very dated entry but still seems to hold true.
Not completely satisfied, because this does not answer the question as to 'why' this doesn't work, but at the very least it confirms that 'something's up' and you should probably still follow the recommendations there.
(if anyone out there can actually explain this in a more verbose manner then I'd be happy to hear it)

Too many if statements

I have a topic to discuss. I have a fragment of code with 24 ifs/elifs. Operation is my own class that provides functionality similar to an Enum. Here is a fragment of the code:
if operation == Operation.START:
    strategy = strategy_objects.StartObject()
elif operation == Operation.STOP:
    strategy = strategy_objects.StopObject()
elif operation == Operation.STATUS:
    strategy = strategy_objects.StatusObject()
(...)
I have concerns from a readability point of view. Is it better to change it into 24 classes and use polymorphism? I am not convinced that it will make my code more maintainable... On the one hand those ifs are pretty clear and shouldn't be hard to follow; on the other hand there are simply too many of them.
My question is rather general; however, I'm writing the code in Python, so I cannot use constructs like switch.
What do you think?
UPDATE:
One important thing is that StartObject(), StopObject() and StatusObject() are constructor calls, and I want to assign an object to the strategy reference.
You could possibly use a dictionary. Dictionaries store references, which means functions are perfectly viable to use as values, like so:
operationFuncs = {
    Operation.START: strategy_objects.StartObject,
    Operation.STOP: strategy_objects.StopObject,
    Operation.STATUS: strategy_objects.StatusObject,
    (...)
}
It's good to have a default operation just in case, so when you run it use a try/except and handle the exception (i.e. the equivalent of your else clause):
try:
    strategy = operationFuncs[operation]()
except KeyError:
    strategy = strategy_objects.DefaultObject()
Alternatively, use the dictionary's get method, which allows you to specify a default if the key you provide isn't found:
strategy = operationFuncs.get(operation, strategy_objects.DefaultObject)()
Note that you don't include the parentheses when storing the constructors in the dictionary; you only add them when calling the looked-up value. Also, this requires that Operation.START be hashable, but that should be the case since you described it as a class similar to an Enum.
Python's equivalent to a switch statement is to use a dictionary. Essentially you can store the keys like you would the cases and the values are what would be called for that particular case. Because functions are objects in Python you can store those as the dictionary values:
operation_dispatcher = {
    Operation.START: strategy_objects.StartObject,
    Operation.STOP: strategy_objects.StopObject,
}
Which can then be used as follows:
try:
    strategy = operation_dispatcher[operation]  # fetch the strategy
except KeyError:
    strategy = default  # this deals with the else-case (if you have one)
strategy()  # call if needed
Or more concisely:
strategy = operation_dispatcher.get(operation, default)
strategy() #call if needed
This can potentially scale a lot better than having a mess of if-else statements. Note that if you don't have an else case to deal with you can just use the dictionary directly with operation_dispatcher[operation].
You could try something like this.
For instance:
def chooseStrategy(op):
    return {
        Operation.START: strategy_objects.StartObject,
        Operation.STOP: strategy_objects.StopObject,
    }.get(op, strategy_objects.DefaultValue)
Call it like this
strategy = chooseStrategy(operation)()
This method has the benefit of providing a default value (like a final else statement). Of course, if you only need to use this decision logic in one place in your code, you can always use strategy = dictionary.get(op, default) without the function.
Starting from Python 3.10:
match i:
    case 1:
        print("First case")
    case 2:
        print("Second case")
    case _:
        print("Didn't match a case")
https://pakstech.com/blog/python-switch-case/
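Applied to the question's example, a match version could look roughly like this (DefaultObject is a hypothetical fallback, as in the other answers):

match operation:
    case Operation.START:
        strategy = strategy_objects.StartObject()
    case Operation.STOP:
        strategy = strategy_objects.StopObject()
    case Operation.STATUS:
        strategy = strategy_objects.StatusObject()
    case _:
        strategy = strategy_objects.DefaultObject()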
You can use some introspection with getattr:
strategy = getattr(strategy_objects, "%sObject" % operation.capitalize())()
Let's say the operation is "STATUS": it will be capitalized as "Status", then prepended to "Object", giving "StatusObject". The StatusObject attribute is then looked up on strategy_objects and called, failing catastrophically if this attribute doesn't exist, or if it's not callable. :) (I.e. add error handling.)
The dictionary solution is probably more flexible though.
If Operation.START, etc. are hashable, you can use a dictionary with the conditions as keys and the functions to call as values, for example:
d = {Operation.START: strategy_objects.StartObject,
     Operation.STOP: strategy_objects.StopObject,
     Operation.STATUS: strategy_objects.StatusObject}
And then you can do the dictionary lookup and call the function, for example:
d[operation]()
Here is a bastardized switch/case done using dictionaries:
For example:
# define the function blocks
def start():
    strategy = strategy_objects.StartObject()

def stop():
    strategy = strategy_objects.StopObject()

def status():
    strategy = strategy_objects.StatusObject()

# map the inputs to the function blocks
options = {"start": start,
           "stop": stop,
           "status": status,
           }
Then the equivalent switch block is invoked:
options["string"]()

Python: Using a multidimensional multiprocessing.manager.list()

This might not be its intended use, but I would like to know how to use a multidimensional manager.list(). I can create one just fine, something like this:
from multiprocessing import Manager

manager = Manager()
test = manager.list([manager.list()])
however, whenever I try to access the first element of the test list, it returns the element's value and not its proxy object:
test[0]  # returns [] and not the proxy, since I think Python is running __getitem__
Is there any way for me to get around this and use the manager.list() in this way?
The multiprocessing documentation has a note on this:
Note
Modifications to mutable values or items in dict and list proxies will
not be propagated through the manager, because the proxy has no way of
knowing when its values or items are modified. To modify such an item,
you can re-assign the modified object to the container proxy:
# create a list proxy and append a mutable object (a dictionary)
lproxy = manager.list()
lproxy.append({})
# now mutate the dictionary
d = lproxy[0]
d['a'] = 1
d['b'] = 2
# at this point, the changes to d are not yet synced, but by
# reassigning the dictionary, the proxy is notified of the change
lproxy[0] = d
So, the only way to use a multidimensional list is to actually reassign any changes you make to the second dimension of the list back to the top-level list, so instead of:
test[0][0] = 1
You do:
tmp = test[0]
tmp[0] = 1
test[0] = tmp
Not the most pleasant way to do things, but you can probably write some helper functions to make it a bit more tolerable.
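For example, a minimal sketch of such a helper (assuming test is a manager.list() whose elements are plain lists, as above; the function name is illustrative):

def set_nested(proxy_list, outer_index, inner_index, value):
    # read a copy of the inner list, mutate it locally,
    # then re-assign it so the manager notices the change
    inner = proxy_list[outer_index]
    inner[inner_index] = value
    proxy_list[outer_index] = inner

# set_nested(test, 0, 0, 1) has the intended effect of test[0][0] = 1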
Edit:
It seems the reason that you get a plain list back when you append a ListProxy to another ListProxy is because of how pickling proxies works. BaseProxy.__reduce__ creates a RebuildProxy object, which is what is actually used to unpickle the Proxy. RebuildProxy looks like this:
def RebuildProxy(func, token, serializer, kwds):
    '''
    Function used for unpickling proxy objects.

    If possible the shared object is returned, or otherwise a proxy for it.
    '''
    server = getattr(process.current_process(), '_manager_server', None)
    if server and server.address == token.address:
        return server.id_to_obj[token.id][0]
    else:
        incref = (
            kwds.pop('incref', True) and
            not getattr(process.current_process(), '_inheriting', False)
        )
        return func(token, serializer, incref=incref, **kwds)
As the docstring says, if the unpickling is occurring inside a manager server, the actual shared object is returned rather than a Proxy to it. This is probably a bug, and there is actually one filed against this behavior already.

What if I want to store a None value in the memcache?

This is specifically related to the Google App Engine Memcache API, but I'm sure it also applies to other Memcache tools.
The dictionary .get() method allows you to specify a default value, such as dict.get('key', 'defaultval')
This can be useful if it's possible you might want to store None as a value in a dictionary.
However, memcache.get() does not let you do this. I've modified my @memoize decorator so it looks like this:
def memoize(keyformat, time=1000000):
    """Decorator to memoize functions using memcache."""
    def decorator(fxn):
        def wrapper(*args, **kwargs):
            key = keyformat + str(args[1:]) + str(kwargs)
            from google.appengine.api import memcache
            data = memcache.get(key)
            if Debug(): return fxn(*args, **kwargs)
            if data:
                if data == 'None': data = None
                return data
            data = fxn(*args, **kwargs)
            if data is None: data = 'None'
            memcache.set(key, data, time)
            return data
        return wrapper
    return decorator
Now I'm sure there's a good argument that I shouldn't be storing None values in the first place, but let's put that aside for now. Is there a better way I can handle this besides converting None vals to strings and back?
A possible way to do this is to create a new class that stands in for None for this purpose, and assign instances of it to the cache (unfortunately you cannot extend None). Alternatively, you could use the empty string "", or avoid storing None/null values altogether (absence of the key implies None).
Then check for instances of your 'None' class when you inspect the result of mc.get(key) (is your sentinel instance, == "", etc.).
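A minimal sketch of that sentinel approach (class and helper names are illustrative):

class _CachedNone(object):
    # marker stored in memcache in place of a real None value
    pass

CACHED_NONE = _CachedNone()

def cache_set(key, value, time=0):
    memcache.set(key, CACHED_NONE if value is None else value, time)

def cache_get(key):
    value = memcache.get(key)
    if value is None:
        return None, False             # genuine cache miss
    if isinstance(value, _CachedNone):
        return None, True              # None was stored deliberately
    return value, True                 # normal cached value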
You could do something like what Haskell and Scala do and store an Option-style dictionary. The dictionary contains two keys: one key to indicate that it is valid and one key that holds the data. Something like this:
{'valid': True, 'data': whatyouwanttostore}
Then if get returns None, you know that the cache was missed; if the result is a dictionary with None as the data, then you know that the data was in the cache but that it was None.
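As a rough sketch (helper names are illustrative):

def option_set(key, value, time=0):
    memcache.set(key, {'valid': True, 'data': value}, time)

def option_get(key):
    wrapped = memcache.get(key)
    if wrapped is None:
        return None             # cache miss
    return wrapped['data']      # may legitimately be None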
Not really.
You could store a None value as an empty string, but there isn't really a way to store special data in a memcache.
What's the difference between the cache key not existing and the cache value being None? It's probably better to unify these two situations.
