Accessing a function's variable from another function - python

from threading import Thread
import time

def print_k():
    while True:
        if main.k % 2 == 1: # ditto
            print(main.k, "is even.") # <-- my problem is HERE ( Ignore all the other stuff )
        time.sleep(2)

def main():
    k = 1
    while k != 200:
        k += 1
        print k
        time.sleep(0.5)

if __name__ == '__main__':
    Thread(target=print_k).start()
    Thread(target=main).start()
In this script (example only; ignore the unrealistic functionality) I am trying to run main(), which counts up to 200 and prints each value, while print_k prints main's variable k.
An exception is raised, unsurprisingly, and I am wondering how I can access one function's variable from another function (they are both running at the same time, by the way, hence the threading module).

You can't print main's variable k. The whole point of local variables is that they're local. It doesn't matter whether they're running at the same time or not; they each have their own separate local environment. (In fact, if you call main 60 times, each of those 60 calls has its own local environment.)
But there are a number of things you can do.
The simplest, but generally worst, is to use global variables instead of local variables. Just add global k to the top of the main function, add some start value for k at the top level (before either thread starts), and you can now access the same global variable inside print_k.
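For illustration, here is a minimal sketch of that global-variable version (Python 3 here, and the test is flipped to % 2 == 0 so the "is even" message is actually true); note that only main needs the global declaration, because it is the one that rebinds k:

import time
from threading import Thread

k = 0  # shared global, given a start value before either thread starts

def print_k():
    # reading a global needs no declaration
    while True:
        if k % 2 == 0:
            print(k, "is even.")
        time.sleep(2)

def main():
    global k  # required because we rebind k here
    k = 1
    while k != 200:
        k += 1
        time.sleep(0.5)

if __name__ == '__main__':
    Thread(target=print_k, daemon=True).start()  # daemon so the reader thread doesn't keep the process alive
    Thread(target=main).start()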
Bundling the shared state and functions up together in a class, where both functions become methods that can access self.k, is a better solution. Passing in some kind of mutable "holder" to both main and print_k is also a better solution. Redesigning your app around explicit message passing (e.g., on a Queue.Queue) is even better.
I'll show how to do it with a class:
class KCounter(object):
    def __init__(self):
        self.k = 0

    def print_k(self):
        while True:
            if self.k % 2 == 1:
                print(self.k, "is even.")
            time.sleep(2)

    def main(self):
        self.k = 1
        while self.k != 200:
            self.k += 1
            print(self.k)
            time.sleep(0.5)

if __name__ == '__main__':
    kcounter = KCounter()
    Thread(target=kcounter.print_k).start()
    Thread(target=kcounter.main).start()
Now, because we're using self.k, an attribute of the KCounter instance, instead of k, a local variable, both methods see the same variable.
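The answer also mentions explicit message passing as an even better design. A rough sketch of that approach with Python 3's queue.Queue (Queue.Queue in Python 2), where main sends each value and print_k only consumes what it receives:

import time
from queue import Queue
from threading import Thread

def print_k(q):
    while True:
        k = q.get()          # blocks until main sends a value
        if k is None:        # sentinel: producer is finished
            break
        if k % 2 == 0:
            print(k, "is even.")

def main(q):
    k = 1
    while k != 200:
        k += 1
        q.put(k)             # explicit message passing, no shared state
        time.sleep(0.5)
    q.put(None)              # tell the consumer to stop

if __name__ == '__main__':
    q = Queue()
    Thread(target=print_k, args=(q,)).start()
    Thread(target=main, args=(q,)).start()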


How to access a variable outside of a function (access from a loop)?

I am working on accessing a variable outside of a function. Here is part of my code:
def main():
    trigger_gmm() = 0
    log = []

    def spatter_tracking_cb(ts, clusters):
        global trigger_gmm
        for cluster in clusters:
            log.append([ts, cluster['id'], int(cluster['x']), int(cluster['y']), int(cluster['width']),
                        int(cluster['height'])])
            if cluster['width'] >= 200:
                trigger_gmm = 1
            else:
                trigger_gmm = 0
            print(trigger_gmm)

while True:
    print(trigger_gmm)
    if trigger_gmm == 1:
        print("print something")

if __name__ == "__main__":
    main()
I get the output like this:
NameError: name 'trigger_gmm' is not defined
Any ideas would be much appreciated!
You have three issues in that code:
trigger_gmm() = 0 - you need to remove the parentheses.
You need to move the global variable declaration up to the beginning of the main function.
The if __name__ == "__main__": block is never reached because it comes after the while loop; you need to move it up.
EDIT:
I added a global declaration at module level (above the main function) and inside the spatter_tracking_cb function. This is because you need to declare trigger_gmm as global in any function that assigns to it.
This code seems to work for me:
global trigger_gmm

def main():
    global trigger_gmm
    trigger_gmm = 0
    log = []

    def spatter_tracking_cb(ts, clusters):
        global trigger_gmm
        for cluster in clusters:
            log.append([ts, cluster['id'], int(cluster['x']), int(cluster['y']), int(cluster['width']),
                        int(cluster['height'])])
            if cluster['width'] >= 200:
                trigger_gmm = 1
            else:
                trigger_gmm = 0
            print(trigger_gmm)

if __name__ == "__main__":
    main()

while True:
    print(trigger_gmm)
    if trigger_gmm == 1:
        print("print something")
        trigger_gmm = 0
Remove the parentheses.
You also don't need global trigger_gmm there, since the nested spatter_tracking_cb can already see the variable in main's enclosing scope:
https://realpython.com/python-scope-legb-rule/#nested-functions-the-enclosing-scope
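One caveat worth noting (a minimal illustration, not the poster's full code): reading trigger_gmm from the enclosing scope works without any declaration, but because spatter_tracking_cb rebinds it, Python 3's nonlocal is needed, otherwise the assignment creates a new local variable:

def main():
    trigger_gmm = 0
    log = []

    def spatter_tracking_cb(ts, clusters):
        nonlocal trigger_gmm   # rebind main's variable instead of creating a new local
        for cluster in clusters:
            log.append([ts, cluster['id']])
            trigger_gmm = 1 if cluster['width'] >= 200 else 0

    spatter_tracking_cb(0, [{'id': 1, 'width': 250}])
    print(trigger_gmm)         # prints 1

if __name__ == "__main__":
    main()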
I am not a Python user, but it looks like you are referencing variables that are not in the global scope.
A variable defined in a function or loop is not accessible from another function unless you make it so.
Try defining the variable outside of the function, or make it global, as stated before.
W3Schools

Is there an easy way to thread or multiprocess a non-intensive function?

I'm self-learning Python, so I don't know how to describe this clearly; here's the simplest proxy example I can come up with, in pseudocode:
#where r() is a random number function
objCount = 0

def mainfunc()
    while playgame = True and objCount < 100:
        create(r(time))
        time.sleep(1)
    return None

def create(tmptime)
    global objCount
    objCount = objCount + 1
    newobj = plotSomething(r(x), r(y))
    time.sleep(tmptime)
    selfDelete..
    return None

mainfunc() #run it
Instead of making a new randomly-lived object every second, the loop makes one object and then waits for its "life" to expire before making the next. I'm trying to fire each object off to a side thread so it can time out on its own while the loop keeps making new things.
All the documentation gets very involved with asyncio, multithreading, etc.
Is there an easy way to kick this thing out of the main loop and not hold up traffic?
The laziest method, for simplicity, is:

import concurrent.futures as delayobj

# where r() is a random number function
objCount = 0

def mainfunc():
    global objCount
    with delayobj.ThreadPoolExecutor() as executor:
        while objCount < 100:
            executor.submit(create, tmptime=r(time))
            time.sleep(1)
    return None

def create(tmptime):
    global objCount
    objCount = objCount + 1
    newobj = plotSomething(r(x), r(y))  # placeholder from the question
    time.sleep(tmptime)
    selfDelete..                        # placeholder from the question
    return None

mainfunc() # run it
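For reference, a fully runnable version of the same idea, with dummy stand-ins for r() and the plotting/self-delete parts (names invented for illustration): each create runs in a worker thread, so the one-second loop is never blocked by an object's lifetime.

import concurrent.futures
import random
import time

objCount = 0

def create(tmptime):
    global objCount
    objCount += 1                    # mirrors the original; not strictly thread-safe
    print('object %d will live for %.1fs' % (objCount, tmptime))
    time.sleep(tmptime)              # stands in for the object's lifetime / selfDelete

def mainfunc():
    with concurrent.futures.ThreadPoolExecutor() as executor:
        while objCount < 100:
            executor.submit(create, tmptime=random.uniform(0, 3))
            time.sleep(1)            # the loop keeps ticking regardless of worker sleeps

mainfunc()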
thanks again guys

Issue with sharing data between Python processes with multiprocessing

I've seen several posts about this, so I know it is fairly straightforward to do, but I seem to be coming up short. I'm not sure if I need to create a worker pool, or use the Queue class. Basically, I want to be able to create several processes that each act autonomously (which is why they inherit from the Agent superclass).
At random ticks of my main loop I want to update each Agent. I'm using time.sleep with different values in the main loop and the Agent's run loop to simulate different processor speeds.
Here is my Agent superclass:
# Generic class to handle mpc of each agent
class Agent(mpc.Process):

    # initialize agent parameters
    def __init__(self,):
        # init mpc
        mpc.Process.__init__(self)
        self.exit = mpc.Event()

    # an agent's main loop...generally should be overridden
    def run(self):
        while not self.exit.is_set():
            pass
        print "You exited!"

    # safely shutdown an agent
    def shutdown(self):
        print "Shutdown initiated"
        self.exit.set()

    # safely communicate values to this agent
    def communicate(self, value):
        print value
A specific agent's subclass (simulating an HVAC system):
class HVAC(Agent):
    def __init__(self, dt=70, dh=50.0):
        super(Agent, self).__init__()
        self.exit = mpc.Event()
        self.__pref_heating = True
        self.__pref_cooling = True
        self.__desired_temperature = dt
        self.__desired_humidity = dh
        self.__meas_temperature = 0
        self.__meas_humidity = 0.0
        self.__hvac_status = "" # heating, cooling, off
        self.start()

    def run(self): # handle AC or heater on
        while not self.exit.is_set():
            ctemp = self.measureTemp()
            chum = self.measureHumidity()
            if (ctemp < self.__desired_temperature):
                self.__hvac_status = 'heating'
                self.__meas_temperature += 1
            elif (ctemp > self.__desired_temperature):
                self.__hvac_status = 'cooling'
                self.__meas_temperature += 1
            else:
                self.__hvac_status = 'off'
            print self.__hvac_status, self.__meas_temperature
            time.sleep(0.5)
        print "HVAC EXITED"

    def measureTemp(self):
        return self.__meas_temperature

    def measureHumidity(self):
        return self.__meas_humidity

    def communicate(self, updates):
        self.__meas_temperature = updates['temp']
        self.__meas_humidity = updates['humidity']
        print "Measured [%d] [%f]" % (self.__meas_temperature, self.__meas_humidity)
And my main loop:
if __name__ == "__main__":
    print "Initializing subsystems"
    agents = {}
    agents['HVAC'] = HVAC()

    # Run simulation
    timestep = 0
    while timestep < args.timesteps:
        print "Timestep %d" % timestep
        if timestep % 10 == 0:
            curr_temp = random.randrange(68, 72)
            curr_humidity = random.uniform(40.0, 60.0)
            agents['HVAC'].communicate({'temp': curr_temp, 'humidity': curr_humidity})
        time.sleep(1)
        timestep += 1

    agents['HVAC'].shutdown()
    print "HVAC process state: %d" % agents['HVAC'].is_alive()
So the issue is that whenever I run agents['HVAC'].communicate(x) within the main loop, I can see the value being passed into the HVAC subclass in its run loop (so it prints the received value correctly). However, the value is never successfully stored.
So typical output looks like this:
Initializing subsystems
Timestep 0
Measured [68] [56.948675]
heating 1
heating 2
Timestep 1
heating 3
heating 4
Timestep 2
heating 5
heating 6
In reality, as soon as Measured [68] appears, the internally stored value should be updated, so the subsequent output should reflect 68 (not heating 1, heating 2, etc.). So effectively, the HVAC's self.__meas_temperature is not being properly updated.
Edit: After a bit of research, I realized that I didn't necessarily understand what is happening behind the scenes. Each subprocess operates with its own virtual chunk of memory and is completely abstracted away from any data being shared this way, so passing the value in isn't going to work. My new issue is that I'm not necessarily sure how to share a global value with multiple processes.
I was looking at the Queue and JoinableQueue classes, but I'm not sure how to pass a Queue into the kind of superclass setup that I have (especially with the mpc.Process.__init__(self) call).
A side concern: can multiple agents read a value without pulling it off the queue? For instance, if I wanted to share a temperature value with multiple agents, would a Queue work for this?
Here's a suggested solution assuming that you want the following:
a centralized manager / main process which controls lifetimes of the workers
worker processes to do something self-contained and then report results to the manager and other processes
Before I show it, though, for the record I want to say that in general, unless you are CPU bound, multiprocessing is not really the right fit, mainly because of the added complexity, and you'd probably be better off using a different high-level asynchronous framework. Also, you should use Python 3, it's so much better!
That said, multiprocessing.Manager makes this pretty easy to do using multiprocessing. I've done this in Python 3, but I don't think there's anything that shouldn't "just work" in Python 2, though I haven't checked.
from ctypes import c_bool
from multiprocessing import Manager, Process, Array, Value
from pprint import pprint
from time import sleep, time

class Agent(Process):
    def __init__(self, name, shared_dictionary, delay=0.5):
        """My take on your Agent.

        Key difference is that I've commonized the run-loop and used
        a shared value to signal when to stop, to demonstrate it.
        """
        super(Agent, self).__init__()
        self.name = name
        # This is going to be how we communicate between processes.
        self.shared_dictionary = shared_dictionary
        # Create a silo for us to use.
        shared_dictionary[name] = []
        self.should_stop = Value(c_bool, False)
        # Primarily for testing purposes, and for simulating
        # slower agents.
        self.delay = delay

    def get_next_results(self):
        # In the real world I'd use abc.ABCMeta as the metaclass to do
        # this properly.
        raise RuntimeError('Subclasses must implement this')

    def run(self):
        ii = 0
        while not self.should_stop.value:
            ii += 1
            # debugging / monitoring
            print('%s %s run loop execution %d' % (
                type(self).__name__, self.name, ii))
            next_results = self.get_next_results()
            # Add the results, along with a timestamp.
            self.shared_dictionary[self.name] += [(time(), next_results)]
            sleep(self.delay)

    def stop(self):
        self.should_stop.value = True
        print('%s %s stopped' % (type(self).__name__, self.name))

class HVACAgent(Agent):
    def get_next_results(self):
        # This is where you do your work, but for the sake of
        # the example just return a constant dictionary.
        return {'temperature': 5, 'pressure': 7, 'humidity': 9}

class DumbReadingAgent(Agent):
    """A dumb agent to demonstrate workers reading other worker values."""

    def get_next_results(self):
        # get hvac 1 results:
        hvac1_results = self.shared_dictionary.get('hvac 1')
        if hvac1_results is None:
            return None
        return hvac1_results[-1][1]['temperature']

# Script starts.
results = {}

# The "with" ensures we terminate the manager at the end.
with Manager() as manager:
    # the manager is a subprocess in its own right. We can ask
    # it to manage a dictionary (or other python types) for us
    # to be shared among the other children.
    shared_info = manager.dict()

    hvac_agent1 = HVACAgent('hvac 1', shared_info)
    hvac_agent2 = HVACAgent('hvac 2', shared_info, delay=0.1)
    dumb_agent = DumbReadingAgent('dumb hvac1 reader', shared_info)

    agents = (hvac_agent1, hvac_agent2, dumb_agent)
    list(map(lambda a: a.start(), agents))
    sleep(1)
    list(map(lambda a: a.stop(), agents))
    list(map(lambda a: a.join(), agents))

    # Not quite sure what happens to the shared dictionary after
    # the manager dies, so for safety make a local copy.
    results = dict(shared_info)

pprint(results)
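On the side question about queues: a queue hands each item to exactly one consumer, so it is awkward for broadcasting one reading to several agents. For a single shared number, a multiprocessing.Value (or the Manager dict shown above) is usually a better fit. A minimal sketch, with made-up names:

from multiprocessing import Process, Value
from time import sleep

def reader(name, temperature):
    # every reader sees the same shared value; nothing is "pulled off" a queue
    for _ in range(3):
        print(name, 'sees temperature', temperature.value)
        sleep(0.5)

if __name__ == '__main__':
    temperature = Value('d', 68.0)    # 'd' = double, shared across processes
    readers = [Process(target=reader, args=('agent %d' % i, temperature))
               for i in range(2)]
    for p in readers:
        p.start()
    temperature.value = 70.5          # the main process updates it; all readers see the change
    for p in readers:
        p.join()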

Python: Is it 'proper' to replace global variables with class variables

Please be kind with me, I'm a Python beginner :-)
Now, I see that the 'best practice' for writing Python programs would be to wrap the main code inside a 'main' function, and do the if "__main__" == __name__: test to invoke the 'main' function.
This of course results in the necessity of using a series of global statements in the 'main' function to access the global variables.
I wonder if it would be more proper (or 'Pythonic', if you will) to gather the global variables into a custom class, say _v, and refer to the variables with a _v. prefix instead?
Also, as a corollary question, would that have any negative impact to, let's say, performance or exception handling?
EDIT: The following is the general structure of the program:

paramset = {
    0: { ...dict of params... },
    1: { ...dict of params... },
    2: { ...dict of params... },
}

selector = 0
reset_requested = False
selector_change = False

def sighup_handler(signal, frame):
    global reset_requested
    logger.info('Caught SIGHUP, resetting to set #{0}'.format(new_selector))
    reset_requested = True
    selector = 0

def sigusr1_handler(signal, frame):
    global selector
    new_selector = (selector + 1) % len(paramset)
    logger.info('Caught SIGUSR1, changing parameters to set #{0}'.format(new_selector))
    selector = new_selector
    selector_change = True

signal.signal(signal.SIGHUP, sighup_handler)
signal.signal(signal.SIGUSR1, sigusr1_handler)

def main():
    global reset_requested
    global selector
    global selector_change
    keep_running = True
    while keep_running:
        logger.info('Processing selector {0}'.format(selector))
        for stage in [process_stage1, process_stage2, process_stage3]:
            err, result = stage(paramset[selector])
            if err is not None:
                logger.critical('Stage failure! Err {0} details: {1}'.format(err, result))
                raise SystemError('Err {0} details: {1}'.format(err, result))
            else:
                logger.info('Stage success: {0}'.format(result))
            if reset_requested:
                stage_cleanup()
                reset_requested = False
            else:
                inter_stage_pause()
            if selector_change:
                selector_change = False
                break
        selector = (selector + 1) % len(paramset)
There are enough pieces missing from the example code that reaching any firm conclusions is difficult.
Event-driven approach
The usual approach for this type of problem would be to make it entirely event-driven. As it stands, the code is largely polling. For example, sighup_handler sets reset_requested = True and the while loop in main processes that request. An event-driven approach would handle the reset, meaning the call to stage_cleanup, directly:
def sighup_handler(signal, frame):
    logger.info('Caught SIGHUP, resetting to set #{0}'.format(new_selector))
    stage_cleanup()
Class with shared variables
In the sample code, the purpose of all those process_stages and cycling through the stages is not clear. Can it all be put in an event-driven context? I don't know. If it can't and it does require shared variables, then your suggestion of a class would be a natural choice. The beginnings of such a class might look like:
class Main(object):
    def __init__(self):
        self.selector = 0
        self.selector_change = False
        signal.signal(signal.SIGHUP, self.sighup_handler)
        signal.signal(signal.SIGUSR1, self.sigusr1_handler)

    def sighup_handler(self, signal, frame):
        logger.info('Caught SIGHUP, resetting to set #{0}'.format(new_selector))
        stage_cleanup()
        self.selector = 0

    def sigusr1_handler(self, signal, frame):
        new_selector = (self.selector + 1) % len(paramset)
        logger.info('Caught SIGUSR1, changing parameters to set #{0}'.format(new_selector))
        self.selector = new_selector
        self.selector_change = True

    def mainloop(self):
        # Do here whatever polling is actually required.
        pass

if __name__ == '__main__':
    main = Main()
    main.mainloop()
Again, because the true purpose of the polling loop is not clear to me, I didn't try to reproduce its functionality in the class above.
Generally, it is best practice to avoid global variables and instead pass variables to the classes/methods that need them through method calls. Example: if you are making a calculator, write an addition method that takes two ints and returns an int. This is in contrast to making two input ints and one output int global variables and having the add method work on those.
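A tiny sketch of that point (illustrative only):

# Globals version: hidden coupling, harder to test
x, y, result = 2, 3, None

def add_globals():
    global result
    result = x + y

# Parameter version: same work, no shared state
def add(a, b):
    return a + b

print(add(2, 3))  # 5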

Conditional if in asynchronous python program with twisted

I'm creating a program that uses the Twisted module and callbacks.
However, I keep having problems because the asynchronous part goes wrong.
I have learned (also from previous questions) that callbacks will be executed at some point, but exactly when is unpredictable.
However, I have a certain program that goes like
j = calc(a)
i = calc2(b)
f = calc3(c)

if s:
    combine(i, j, f)
Now the boolean s is set by a callback run by calc3. Obviously, this leads to an undefined-name error because the callback has not executed by the time s is needed.
However, I'm unsure how you SHOULD do if statements with asynchronous programming using Twisted. I've been trying many different things, but can't find anything that works.
Is there some way to use conditionals that require callback values?
Also, I'm using VIFF for secure computations (which uses Twisted): VIFF
Maybe what you're looking for is twisted.internet.defer.gatherResults:
d = gatherResults([calc(a), calc2(b), calc3(c)])

def calculated((j, i, f)):
    if s:
        return combine(i, j, f)

d.addCallback(calculated)
However, this still has the problem that s is undefined. I can't quite tell how you expect s to be defined. If it is a local variable in calc3, then you need to return it so the caller can use it.
Perhaps calc3 looks something like this:
def calc3(argument):
    s = bool(argument % 2)
    return argument + 1
So, instead, consider making it look like this:
from collections import namedtuple

Calc3Result = namedtuple("Calc3Result", "condition value")

def calc3(argument):
    s = bool(argument % 2)
    return Calc3Result(s, argument + 1)
It's sort of unclear what you're asking here. It sounds like you know what callbacks are, but if so then you should be able to arrive at this answer yourself. Now you can rewrite the calling code so it actually works:
d = gatherResults([calc(a), calc2(b), calc3(c)])

def calculated((j, i, calc3result)):
    if calc3result.condition:
        return combine(i, j, calc3result.value)

d.addCallback(calculated)
Or, based on your comment below, maybe calc3 looks more like this (this is the last guess I'm going to make, if it's wrong and you'd like more input, then please actually share the definition of calc3):
def _calc3Result(result, argument):
    if result == "250":
        # SMTP Success response, yay
        return Calc3Result(True, argument)
    # Anything else is bad
    return Calc3Result(False, argument)

def calc3(argument):
    d = emailObserver("The argument was %s" % (argument,))
    d.addCallback(_calc3Result, argument)
    return d
Fortunately, this definition of calc3 will work just fine with the gatherResults / calculated code block immediately above.
You have to put the if inside the callback. You can use a Deferred to structure your callbacks.
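In other words, a minimal sketch (combine and calc3 here are dummy stand-ins, not the poster's real functions): the conditional moves into the function you attach with addCallback.

from twisted.internet import defer

def combine(i, j, f):            # stand-in for the question's combine()
    print('combined:', i + j + f)

def calc3(c):                    # stand-in: a Deferred that eventually fires with the boolean s
    return defer.succeed(c % 2 == 0)

def on_calc3(s, i, j, f):
    if s:                        # the if lives inside the callback
        combine(i, j, f)

d = calc3(4)
d.addCallback(on_calc3, 1, 2, 3)  # prints "combined: 6" because s is True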
As stated in the previous answer, the processing logic should be handled in a callback chain. Below is a simple code demonstration of how this could work. C{DelayedTask} is a dummy implementation of a task which happens in the future and fires the supplied deferred.
So we first construct a special object, C{ConditionalTask}, which takes care of storing the multiple results and servicing the callbacks.
calc, calc2 and calc3 return deferreds, which have their callbacks pointed at C{ConditionalTask}.x_callback.
Every C{ConditionalTask}.x_callback calls C{ConditionalTask}.process, which checks whether all of the results have been registered and fires on a full set.
Additionally, C{ConditionalTask}.c_callback sets a flag for whether or not the data should be processed at all.
from twisted.internet import reactor, defer

class DelayedTask(object):
    """
    Delayed async task dummy implementation
    """
    def __init__(self, delay, deferred, retVal):
        self.deferred = deferred
        self.retVal = retVal
        reactor.callLater(delay, self.on_completed)

    def on_completed(self):
        self.deferred.callback(self.retVal)

class ConditionalTask(object):
    def __init__(self):
        self.resultA = None
        self.resultB = None
        self.resultC = None
        self.should_process = False

    def a_callback(self, result):
        self.resultA = result
        self.process()

    def b_callback(self, result):
        self.resultB = result
        self.process()

    def c_callback(self, result):
        self.resultC = result
        """
        Here is an abstraction for your "s" boolean flag, obviously the logic
        normally would go further than just setting the flag, you could
        inspect the result variable and do other strange stuff
        """
        self.should_process = True
        self.process()

    def process(self):
        if None not in (self.resultA, self.resultB, self.resultC):
            if self.should_process:
                print 'We will now call the processor function and stop reactor'
                reactor.stop()

def calc(a):
    deferred = defer.Deferred()
    DelayedTask(5, deferred, a)
    return deferred

def calc2(a):
    deferred = defer.Deferred()
    DelayedTask(5, deferred, a * 2)
    return deferred

def calc3(a):
    deferred = defer.Deferred()
    DelayedTask(5, deferred, a * 3)
    return deferred

def main():
    conditional_task = ConditionalTask()

    dFA = calc(1)
    dFB = calc2(2)
    dFC = calc3(3)

    dFA.addCallback(conditional_task.a_callback)
    dFB.addCallback(conditional_task.b_callback)
    dFC.addCallback(conditional_task.c_callback)

    reactor.run()

# run the demo (call added so the example actually executes)
main()
