Initializing global object in Python

In summary, my problem is: how do you easily make a connection resource a global variable? To be specific, I'd like to open a Redis queue connection and use it in multiple functions without the hassle of passing it as a parameter, i.e.
#===============================================================================
# Global variables
#===============================================================================
REDIS_QUEUE <- how to initialize
Then, in my main function, have
# Open redis queue connection to server
REDIS_QUEUE = redis.StrictRedis(host=SERVER_IP, port=6379, db=0)
And then use REDIS_QUEUE in multiple functions, e.g.
def sendStatusMsgToServer(statusMsg):
    print "\nSending status message to server:"
    print simplejson.dumps(statusMsg)
    REDIS_QUEUE.rpush(TLA_DATA_CHANNEL, simplejson.dumps(statusMsg))
I thought REDIS_QUEUE = None would work, but it gives me
AttributeError: 'NoneType' object has no attribute 'rpush'
I'm new to Python, what's the best way to solve this?

If you want to set the value of a global variable from inside a function, you need to use the global statement. So in your main function:
def main():
    global REDIS_QUEUE
    REDIS_QUEUE = redis.StrictRedis(host=SERVER_IP, port=6379, db=0)
    # whatever else
Note that there's no need to "initialize" the variable outside main before doing this, although you may want to anyway, just to document that the variable exists.
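Putting it together, a minimal sketch of the whole pattern (SERVER_IP and the channel name are placeholder values standing in for the question's configuration, and a reachable Redis server is assumed):
import redis
import simplejson

SERVER_IP = "127.0.0.1"        # placeholder
TLA_DATA_CHANNEL = "tla_data"  # placeholder

# Module-level placeholder; documents that the global exists.
REDIS_QUEUE = None

def sendStatusMsgToServer(statusMsg):
    # Reading a global needs no declaration; only rebinding does.
    REDIS_QUEUE.rpush(TLA_DATA_CHANNEL, simplejson.dumps(statusMsg))

def main():
    global REDIS_QUEUE  # required because main() rebinds the name
    REDIS_QUEUE = redis.StrictRedis(host=SERVER_IP, port=6379, db=0)
    sendStatusMsgToServer({"status": "ready"})

if __name__ == '__main__':
    main()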

Related

Using global to create a singleton object resulting in multiple initialization

I am facing a problem where the global variable in my module keeps getting initialized multiple times when produce() is called. The module is imported just once, but I am still unable to understand why the global variable within the module becomes None.
Am I taking the right approach?
I'm looking to make this work in Python 2.7 and upwards.
producer_instance = None

def produce(message):
    producer = get_producer()
    producer.produce(message)

def get_producer():
    global producer_instance
    if producer_instance is None:
        producer_instance = Producer()
    return producer_instance
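For reference, the lazy initialization shown above is the standard pattern, and it does construct the object only once per process; a self-contained check (with a stub Producer standing in for the real one) demonstrates that repeated calls reuse one instance. If re-initialization is still observed, the cause usually lies outside the module, e.g. multiple worker processes or the module being imported under two different names.
producer_instance = None

class Producer(object):
    def produce(self, message):
        pass  # stub for illustration

def get_producer():
    global producer_instance
    if producer_instance is None:  # constructed only on the first call
        producer_instance = Producer()
    return producer_instance

def produce(message):
    get_producer().produce(message)

produce("a")
produce("b")
assert get_producer() is get_producer()  # one instance, reused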

How to instantiate an object once

I am instantiating the object below every time I call csv() in my function. I was just wondering if there's any way I could instantiate the object just once.
I tried to split the return csv out of def csv() into another function, but failed.
Code instantiating the object
def csv():
    proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")  #, _id='poc_1'
    csv = CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")
    return csv
Test function
def test_df(csv, df):
    ..............
Is your csv function actually a pytest.fixture? If so, you can change its scope to session so it will only be called once per py.test session.
@pytest.fixture(scope="session")
def csv():
    # rest of code
Of course, the returned data should be immutable so tests can't affect each other.
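A sketch of how a test would consume the session-scoped fixture (pytest injects the cached return value by parameter name; make_datasource is a hypothetical helper wrapping the setup shown above):
import pytest

@pytest.fixture(scope="session")
def csv():
    # built once for the whole session, then reused by every test
    return make_datasource()  # hypothetical helper

def test_df(csv):
    assert csv is not None  # every test receives the same cached object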
You can use a global variable to cache the object:
_csv = None

def csv():
    global _csv
    if _csv is None:
        proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")  #, _id='poc_1'
        _csv = CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")
    return _csv
Another option is to change the caller to cache the result of csv() in a manner similar to the snippet above.
Note that your "test function" doesn't call the function; it only declares another function that apparently receives the csv function's return value. You didn't show the code that actually calls it.
You can use a decorator for this if CSVDatasource doesn't have side effects like reading the input line by line.
See Efficient way of having a function only execute once in a loop
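A hedged sketch of that decorator idea (run_once is an illustrative name, not a library function): the wrapper calls the decorated function on first use and hands back the cached result afterwards.
import functools

def run_once(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        if not hasattr(wrapper, "cached"):
            wrapper.cached = func(*args, **kwargs)  # first call only
        return wrapper.cached
    return wrapper

@run_once
def csv():
    proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")
    return CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")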
You can store the object as an attribute on the function object itself: return it if it exists, create a new one if it doesn't.
def csv():
    if not hasattr(csv, 'obj'):
        proj = Project.Project(db_name='test', json_file="/home/qingyong/workspace/Project/src/json_files/sys_setup.json")  #, _id='poc_1'
        csv.obj = CSVDatasource(proj, "/home/qingyong/workspace/Project/src/json_files/data_setup.json")
    return csv.obj

Python tornado global variables

I have a Python Tornado application and want variables that are shared across multiple files. Previously I declared and initialized them in a file named global.py and imported it into other files. This was a good idea until some of my variables needed to be queried from the database: every time I imported global.py to get just one value, all of the queries ran and slowed down my application.
The next step was to define my variables in the Tornado start.py like this:
class RepublishanApplication(tornado.web.Application):
    def __init__(self):
        ##################################################
        # conn = pymongo.Connection("localhost", 27017)
        self.Countries = GlobalDefined.Countries
        self.Countries_rev = GlobalDefined.Countries_rev
        self.Languages = GlobalDefined.Languages
        self.Categories = GlobalDefined.Categories
        self.Categories_rev = GlobalDefined.Categories_rev
        self.NewsAgencies = GlobalDefined.NewsAgencies
        self.NewsAgencies_rev = GlobalDefined.NewsAgencies_rev
        self.SharedConnections = SharedConnections
I can access these variables in handlers like this:
self.application.Countries
It's working well, but the problem is that I can only access these variables in handler classes; anywhere else I have to pass them into functions, which I don't think is a good idea. Do you have any suggestion for accessing these variables everywhere, without passing the application instance to all of my functions, or another way that would help?
Putting your global variables in a globals.py file is a fine way to accomplish this. If you use PyMongo to query values from MongoDB when globals.py is imported, that work is only done the first time globals.py is imported in a process. Other imports of globals.py get the module from the sys.modules cache.
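A minimal sketch of that layout (database and collection names are illustrative; the old pymongo.Connection API from the question's commented-out code is assumed): the module body, query included, executes only on the first import in a process.
# globals.py
import pymongo

_conn = pymongo.Connection("localhost", 27017)   # runs once per process
Countries = list(_conn.mydb.countries.find())    # query happens at first import only

# any other module
import globals  # later imports are resolved from sys.modules, no re-query
countries = globals.Countries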

How are global objects handled in threading?

I would like to create a Pyramid app with an ORM I am writing (currently in deep alpha). I want to plug the ORM into the app sanely, and thus I want to know how global objects are handled in multithreading.
In the file:
https://www.megiforge.pl/p/elephantoplasty/source/tree/0.0.1/src/eplasty/ctx.py
you can see there is a global object called ctx which contains a default session. What if I run set_context() and start_session() in middleware at ingress? Can I then expect a separate session in ctx for every thread? Or is there a risk that two threads will use the same session?
Global variables are shared between all threads, so if you run those functions the threads will conflict with each other in unpredictable ways.
To do what you want you can use thread-local data via threading.local. Remove the global definition of ctx, create a single module-level threading.local() instance, and fetch the context through a function. (Note that the threading.local() object itself must be created once at module level; if it were created inside the function, every call would start from a fresh, empty object and the caching would never take effect.)
thread_data = threading.local()

def get_ctx():
    # each thread sees its own 'ctx' attribute on thread_data
    if not hasattr(thread_data, "ctx"):
        thread_data.ctx = Ctx()
    return thread_data.ctx
Then, everywhere you reference ctx, call get_ctx() instead. This will ensure that your context is not shared between threads.
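A quick self-contained check of that behaviour (Ctx is a stub standing in for the real context class): each thread triggers its own construction, so the two workers end up with distinct objects.
import threading

class Ctx(object):
    pass  # stub for illustration

thread_data = threading.local()  # one object, per-thread attributes

def get_ctx():
    if not hasattr(thread_data, "ctx"):
        thread_data.ctx = Ctx()
    return thread_data.ctx

results = []
def worker():
    results.append(get_ctx())

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert results[0] is not results[1]  # each thread built its own Ctx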

python threading with global variables

I encountered a problem writing threaded Python code. I wrote some worker thread classes that all import a shared module, sharevar.py. I need a variable, regdevid, to keep track of the registered device ID, so that when one thread changes its value the other threads see the fresh value. But the result is: when one thread changes its value, the others still get the default value I defined in sharevar.py. Why? What am I doing wrong?
# thread a
from UserShare import RegDevID
import threading

class AddPosClass(threading.Thread):
    global commands
    # We need a public sock, list to store the request
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            data = self.queue.get()
            #print data
            RegDevID = data
            #print data
            send_queue.put(data)
            self.queue.task_done()
# thread b
import threading
from ShareVar import send_queue, RegDevID
"""
AddPos -- add pos info on the tail of the reply
"""
class GetPosClass(threading.Thread):
    global commands
    # We need a public sock, list to store the request
    def __init__(self, queue):
        threading.Thread.__init__(self)
        self.queue = queue

    def run(self):
        while True:
            data = self.queue.get()
            #print data
            data = RegDevID
            #print data
            send_queue.put(data)
            self.queue.task_done()
# ShareVar.py
RegDevID = '100'
That's it: when thread a changes RegDevID, thread b still gets its default value.
Thanks in advance.
from ShareVar import RegDevID

class Test():
    def __init__(self):
        pass

    def SetVar(self):
        RegDevID = 999

    def GetVar(self):
        print RegDevID

if __name__ == '__main__':
    test = Test()
    test.SetVar()
    test.GetVar()
The ShareVar.py:
RegDevID = 100
The result:
100
why?
My guess is you are trying to access the shared variable without a lock. If you do not acquire a lock and attempt to read a shared variable in one thread while another thread is writing to it, the value could be indeterminate.
To remedy this, make sure you acquire the lock in each thread before reading or writing the shared variable.
import threading

# shared lock: define outside the threading classes
lock = threading.RLock()

# inside the threading classes...

# write with the lock held
with lock:  # (Python 2.5+)
    shared_var += 1

# or read with the lock held
with lock:
    print shared_var
Read about Python threading.
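A self-contained version of that locking pattern (the counter and thread count are illustrative): with the lock held around each read-modify-write, the final value is deterministic.
import threading

lock = threading.RLock()
shared_var = 0

def worker():
    global shared_var
    for _ in range(100000):
        with lock:  # serializes the read-modify-write
            shared_var += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared_var)  # 400000 on every run; unlocked increments can lose updates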
Answer to your bottom problem, with scoping:
In your bottom sample you are experiencing a scoping problem. In SetVar(), you create a name RegDevID that is local to the function. In GetVar(), you attempt to read the name RegDevID; it is not defined locally, so Python looks higher in scope and finds the one bound by the import. The names need to be in the same scope if you want them to reference the same data.
Although scopes are determined statically, they are used dynamically. At any time during execution, there are at least three nested scopes whose namespaces are directly accessible:
the innermost scope, which is searched first, contains the local names;
the scopes of any enclosing functions, which are searched starting with the nearest enclosing scope, contain non-local, but also non-global names;
the next-to-last scope contains the current module's global names;
the outermost scope (searched last) is the namespace containing built-in names.
If a name is declared global, then all references and assignments go directly to the middle scope containing the module's global names. Otherwise, all variables found outside of the innermost scope are read-only (an attempt to write to such a variable will simply create a new local variable in the innermost scope, leaving the identically named outer variable unchanged).
Read about scoping.
Are you sure you posted your actual code? You imported RegDevID from two different modules:
# thread a
from UserShare import RegDevID
vs
# thread b
from ShareVar import send_queue, RegDevID
Either way, your problem has nothing to do with threading. Think of 'from somemodule import somevar' as an assignment statement, roughly equivalent to some magic to load the module if it isn't already loaded, followed by:
somevar = sys.modules['somemodule'].somevar
When you import RegDevID from the other module you are creating a fresh name in the current module. If you mutate the object, other users of the object will see the changes; but if you rebind the name in this module, that only affects the local name and changes nothing in the original module.
Instead, you need to rebind the variable in the other module:
import ShareVar
...
ShareVar.RegDevID = data
Except of course you'll find you get on much better if you create a class to manage your shared state.
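A sketch of that closing suggestion (class and method names are illustrative): keep the shared value and its lock inside one object, import the single instance everywhere, and the rebinding pitfall disappears because callers mutate the object instead of rebinding a name.
# ShareVar.py
import threading

class SharedState(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._reg_dev_id = '100'

    def get_reg_dev_id(self):
        with self._lock:
            return self._reg_dev_id

    def set_reg_dev_id(self, value):
        with self._lock:
            self._reg_dev_id = value

shared = SharedState()

# thread code:
#   from ShareVar import shared
#   shared.set_reg_dev_id(data)   # immediately visible to all threads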
Your second bit of code is just a misunderstanding of local and global variables:
def SetVar(self):
    RegDevID = 999
Inside the function you created a new local variable RegDevID which has nothing to do with the global variable of the same name. Use the global statement if you want to rebind a global variable:
def SetVar(self):
    global RegDevID
    RegDevID = 999
