Accessing variables of a non-inherited class - Python

I have a module testrun.py which runs all the tests. One of the tests is SWStatus, defined like this:

class SWStatus(myTest):
    check = []

    def __init__(self):
        super(SWStatus, self).__init__()

    def setup(self):
        return

    def work(self):
        """
        some functionality to calculate the value of i
        i is either 10 or 20
        """
        if i == 10:
            status = True
        else:
            status = False
        self.check.append(status)
To run this test I do python testrun.py SWStatus and it gives me the results.
I have created an HWStatus test that runs the SWStatus test 10 times:
class HWStatus(myTest):
    def __init__(self):
        super(HWStatus, self).__init__()

    def setup(self):
        return

    def work(self):
        for i in xrange(10):
            args = ['python', 'testrun.py', 'SWStatus']
            p = subprocess.Popen(args)
            while p.poll() != 0:
                time.sleep(amount_of_time)
When I do testrun.py HWStatus, it runs SWStatus 10 times.
I'm facing 2 problems here.
I wanted check to end up with 10 values, appending either True or False on each run depending on the logic. But because I'm running SWStatus from HWStatus, check gets initialized to an empty list each time, so even though I'm doing check.append(status), I'm getting just one value. How should I tackle this problem?
My second question is: is there any way I can access the check list from the work method of my HWStatus, even though HWStatus does not inherit from SWStatus?
Can I do something like:
class HWStatus(myTest):
    def __init__(self):
        super(HWStatus, self).__init__()

    def setup(self):
        return

    def work(self):
        for i in xrange(10):
            args = ['python', 'testrun.py', 'SWStatus']
            p = subprocess.Popen(args)
            while p.poll() != 0:
                time.sleep(amount_of_time)
        print "List of 10", check

Inheritance doesn't affect member visibility in Python; all variables are visible as long as they're within lexical scope.
The way you're running your tests, though (in separate processes), creates different copies of SWStatus.check. When you start a new process, it runs in its own separate memory area. So 11 copies of the SWStatus.check variable get created by your code, and none of them can see any of the others.
I suspect what you want is to run the tests in parallel, in which case it's better to have each test report its result as an exit status:
import sys

if __name__ == '__main__':
    t = SWStatus()
    # exit status 0 (success) when work() returns True
    sys.exit(not t.work())
However, if you absolutely need all of the tests to run in the same address space, you can use threads instead of processes. In that case you'll need something like a Queue to coordinate concurrent access to shared data.
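As a minimal sketch of that pattern (the status computation is inlined as a stand-in for what SWStatus.work() would calculate, since only the Queue plumbing matters here):

import threading
from Queue import Queue  # spelled "queue" on Python 3

results = Queue()  # thread-safe, so no extra locking is needed

def run_sw_status():
    status = True  # stand-in for the real i == 10 check in SWStatus.work()
    results.put(status)

threads = [threading.Thread(target=run_sw_status) for _ in xrange(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

check = [results.get() for _ in xrange(10)]
print check  # a list of 10 True/False values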

Related

Python Multiprocessing outside of main producing unexpected result

I have a program that makes a call to an API every minute and does some operations. When some condition is met, I want to create a new process that will make calls to another API every second and do some operations. The parent process doesn't care about the result the child produces; the child will run on its own until everything is done. This way the parent process can continue making a call to the API every minute and doing operations without interruption.
I looked into multiprocessing, but I can't get it to work outside of main. I tried passing a callback function, but that created an unexpected result (the parent process started running again in parallel at some point).
Another solution I can think of is to just create another project and make a request to it, but then I would have a lot of repeated code.
What is the best approach to my problem?
example code:
class Main:
    [...]
    foo = Foo()
    child = Child()
    foo.Run(child.Process)

class Foo:
    [...]
    def Run(self, callbackfunction):
        while(True):
            x = self.dataServices.GetDataApi()
            if(x == 1020):
                callbackfunction()
            #start next loop after a minute

class Child:
    [...]
    def Compute(self):
        while(True):
            self.dataServices.GetDataApiTwo()
            #do stuff
            #start next loop after a second

    def Process(self):
        self.Compute() # i want this function to run from a new process, so it wont interfere
Edit 2: added my multiprocessing attempt:
class Main:
    def CreateNewProcess(self, callBack):
        if __name__ == '__main__':
            p = Process(target=callBack)
            p.start()
            p.join()

    foo = Foo()
    child = Child(CreateNewProcess)
    foo.Run(child.Process)

class Foo:
    def Run(self, callbackfunction):
        while(True):
            x = dataServices.GetDataApi()
            if(x == 1020):
                callbackfunction()
            #start next loop after a minute

class Child:
    _CreateNewProcess = None

    def __init__(self, CreateNewProcess):
        self._CreateNewProcess = CreateNewProcess

    def Compute(self, CreateNewProcess):
        while(True):
            dataServices.GetDataApiTwo()
            #do stuff
            #start next loop after a second

    def Process(self):
        self._CreateNewProcess(self.Compute) # i want this function to run from a new process, so it wont interfere
I had to reorganize a few things. Among others:

- The guard if __name__ == '__main__': should include creation of objects and especially calls to functions and methods. Usually it is placed at the global level at the end of the code.
- Child objects shouldn't be created in the main process. In theory you can do this to use them as containers for necessary data for the child process and then send them as a parameter, but I think a separate class should be used for that if it's seen as necessary. Here I used a simple data parameter, which can be anything pickleable.
- It is cleaner to have a function at global level as the process target (in my opinion).

Finally it looks like:
from multiprocessing import Process

class Main:
    @staticmethod
    def CreateNewProcess(data):
        p = Process(target=run_child, args=(data,))
        p.start()
        p.join()

class Foo:
    def Run(self, callbackfunction):
        while(True):
            x = dataServices.GetDataApi()
            if(x == 1020):
                callbackfunction(data)
            #start next loop after a minute

class Child:
    def __init__(self, data):
        self._data = data

    def Compute(self):
        while(True):
            dataServices.GetDataApiTwo()
            #do stuff
            #start next loop after a second

# Target for new process. It is cleaner to have a function outside of a
# class for this.
def run_child(data):
    # "data" represents one or more parameters from parent to child
    # necessary to run the specific child. It must be pickleable.
    # Can be omitted if unnecessary.
    global child
    child = Child(data)
    child.Compute()

if __name__ == '__main__':
    foo = Foo()
    foo.Run(Main.CreateNewProcess)
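A note on the design choice: with the spawn start method (the default on Windows), everything handed to Process -- the target and its arguments -- is pickled for the child, so a plain module-level function like run_child and simple pickleable data are the safest combination.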

Starting and stopping a process

I am really having a hard time wrapping my head around multithreading in Python. My expectation of the following code is that appLoop() will run for 10 seconds and then cease to exist -- which it does when tracing through it in PyCharm, but not when I just run it. That results in an infinite loop.
import time
import multiprocessing

isRunning = True
runningSince = 0

def appLoop():
    try:
        global isRunning
        while isRunning:
            time.sleep(1)
            global runningSince
            runningSince = runningSince + 1
            print(f'Looping since {runningSince} seconds.')
    except KeyboardInterrupt:
        print(f'appLoop stopped after {runningSince} seconds.')

class Process:
    class __Process:
        def __init__(self):
            self.process = multiprocessing.Process(target=appLoop)
            self.process.start()

    instance = None

    def __init__(self):
        if not Process.instance:
            Process.instance = Process.__Process()

    def __del__(self):
        print('Instance deleted.')

p = Process()
time.sleep(10)
isRunning = False
print(f'isRunning set to False.')
del p
This brings up (at least...) 2 questions for me:
why is the process still running after del p -- am I creating a zombie process here?
why does my appLoop() keep running even after I set isRunning to False when I run the app (whereas, as observed above, it works when tracing through the code)?
My use case in the end is to be able to start / stop my appLoop() from a flask web interface -- which is why I am trying to implement a singleton here. Just in case you might wonder...
And: I do know that __del__ is not recommended as you never know when exactly garbage collection will call it -- in this case I just use it for (cave man) debugging.
isRunning = False changes the value of the variable in the parent process. The child process (the one that executes the while loop) has its own copy of isRunning that is not affected by the assignment.
For the same reason, del p does not terminate the process: the child has its own copy of p. You should terminate the process explicitly in the destructor:
def __del__(self):
    print('Instance deleted.')
    Process.instance.process.terminate()
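If you would rather stop the loop cooperatively instead of terminating it, the flag has to be shared explicitly. A minimal sketch using multiprocessing.Event (not your singleton wrapper, just the flag-sharing idea):

import time
import multiprocessing

def app_loop(running):
    # "running" is an Event shared between parent and child
    while running.is_set():
        time.sleep(1)
        print('Still looping.')

if __name__ == '__main__':
    running = multiprocessing.Event()
    running.set()
    p = multiprocessing.Process(target=app_loop, args=(running,))
    p.start()
    time.sleep(10)
    running.clear()  # the child sees this, unlike a plain global
    p.join()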

Control the run time of a python thread from outside

I am trying to spawn a Python thread which performs a particular operation repeatedly based on a certain condition. If the condition isn't met, the thread should exit. I have written the following code, but it runs indefinitely.
import threading
import time

class dummy(object):
    def __init__(self):
        # if the flag is set to False, the thread should exit
        self.flag = True

    def print_hello(self):
        while self.flag:
            print "Hello!! current Flag value: %s" % self.flag
            time.sleep(0.5)

    def execute(self):
        t = threading.Thread(target=self.print_hello())
        t.daemon = True # set daemon to True, to run thread in background
        t.start()

if __name__ == "__main__":
    obj = dummy()
    obj.execute()
    #Some other function calls
    #time.sleep(2)
    print "Executed" # This line is never executed
    obj.flag = False
I am new to the Python threading module. I have gone through some articles suggesting the use of the threading.Timer() function, but that is not what I need.
The problem line is t = threading.Thread(target=self.print_hello()), more specifically target=self.print_hello(). This calls self.print_hello() and would set target to its result; since that function never returns, execution never gets past this line and the thread is never created. What you need to do is pass the function itself: t = threading.Thread(target=self.print_hello).
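Putting that together, a minimal corrected version of the question's code (imports added; otherwise the same structure):

import threading
import time

class dummy(object):
    def __init__(self):
        self.flag = True

    def print_hello(self):
        while self.flag:
            print "Hello!! current Flag value: %s" % self.flag
            time.sleep(0.5)

    def execute(self):
        # pass the method object itself -- no parentheses
        t = threading.Thread(target=self.print_hello)
        t.daemon = True
        t.start()

if __name__ == "__main__":
    obj = dummy()
    obj.execute()
    time.sleep(2)
    print "Executed"  # now reached
    obj.flag = False  # the loop condition goes False
    time.sleep(1)     # give the thread a moment to notice and exit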

QProcess.readAllStandardOutput() doesn't seem to read anything - PyQt

Here is the code sample:
class RunGui(QtGui.QMainWindow):
    def __init__(self, parent=None):
        ...
        QtCore.QObject.connect(self.ui.actionNew, QtCore.SIGNAL("triggered()"), self.new_select)
        ...

    def normal_output_written(self, qprocess):
        self.ui.text_edit.append("caught outputReady signal") #works
        self.ui.text_edit.append(str(qprocess.readAllStandardOutput())) # doesn't work

    def new_select(self):
        ...
        dialog_np = NewProjectDialog()
        dialog_np.exec_()
        if dialog_np.is_OK:
            section = dialog_np.get_section()
            project = dialog_np.get_project()
            ...
            np = NewProject()
            np.outputReady.connect(lambda: self.normal_output_written(np.qprocess))
            np.errorReady.connect(lambda: self.error_output_written(np.qprocess))
            np.inputNeeded.connect(lambda: self.input_from_line_edit(np.qprocess))
            np.params = partial(np.create_new_project, section, project, otherargs)
            np.start()
class NewProject(QtCore.QThread):
    outputReady = QtCore.pyqtSignal(object)
    errorReady = QtCore.pyqtSignal(object)
    inputNeeded = QtCore.pyqtSignal(object)
    params = None
    message = ""

    def __init__(self):
        super(NewProject, self).__init__()
        self.qprocess = QtCore.QProcess()
        self.qprocess.moveToThread(self)
        self._inputQueue = Queue()

    def run(self):
        self.params()

    def create_new_project(self, section, project, otherargs):
        ...
        # PyDev for some reason skips the breakpoints inside the thread
        self.qprocess.start(command)
        self.qprocess.waitForReadyRead()
        self.outputReady.emit(self.qprocess) # works - I'm getting the signal in RunGui.normal_output_written()
        print(str(self.qprocess.readAllStandardOutput())) # prints an empty line
        ... # other actions inside the method requiring "command" to finish properly
The idea is beaten to death - get the GUI to run scripts and communicate with the processes. The challenge in this particular example is that the script started in QProcess as command runs an app that requires user input (a confirmation) along the way. Therefore I have to be able to start the script, get all of its output and parse it, wait for the question to appear in the output, communicate back the answer, allow it to finish, and only then proceed with the other actions inside create_new_project().
I don't know if this will fix your overall issue, but there are a few design issues I see here:

- You are passing the qprocess around between threads instead of just emitting your custom signals with the results of the qprocess.
- You are using class-level attributes that should probably be instance attributes.
- Technically you don't even need the QProcess, since you are running it in your thread and actively using blocking calls. It could easily be a subprocess.Popen... but anyway, I might suggest changes like this:
class RunGui(QtGui.QMainWindow):
    ...
    def normal_output_written(self, msg):
        self.ui.text_edit.append(msg)

    def new_select(self):
        ...
        np = NewProject()
        np.outputReady.connect(self.normal_output_written)
        np.params = partial(np.create_new_project, section, project, otherargs)
        np.start()

class NewProject(QtCore.QThread):
    outputReady = QtCore.pyqtSignal(object)
    errorReady = QtCore.pyqtSignal(object)
    inputNeeded = QtCore.pyqtSignal(object)

    def __init__(self):
        super(NewProject, self).__init__()
        self._inputQueue = Queue()
        self.params = None

    def run(self):
        self.params()

    def create_new_project(self, section, project, otherargs):
        ...
        qprocess = QtCore.QProcess()
        qprocess.start(command)
        if not qprocess.waitForStarted():
            # handle a failed command here
            return
        if not qprocess.waitForReadyRead():
            # handle a timeout or error here
            return
        msg = str(qprocess.readAllStandardOutput())
        self.outputReady.emit(msg)
Don't pass around the QProcess. Just emit the data, and create the QProcess from within the thread's method so that it is automatically owned by that thread. Your outside classes should really not have any knowledge of that QProcess object. It doesn't even need to be a member attribute, since it's only needed during the operation.
Also make sure you are properly checking that your command both successfully started and is actually running and outputting data.
Update
To clarify some problems you might be having (per the comments), I wanted to suggest that QProcess might not be the best option if you need interactive control over processes that expect periodic user input. It should work fine for running scripts that just produce output from start to finish, though really using subprocess would be much easier. For scripts that need user input over time, your best bet may be pexpect. It allows you to spawn a process and then watch for various patterns that you know indicate the need for input:
foo.py

import time

i = raw_input("Please enter something: ")
print "Output:", i
time.sleep(.1)
print "Another line"
time.sleep(.1)
print "Done"

test.py

import pexpect
import time

child = pexpect.spawn("python foo.py")
child.setecho(False)

ret = -1
while ret < 0:
    time.sleep(.05)
    ret = child.expect("Please enter something: ")

child.sendline('FOO')

while True:
    line = child.readline()
    if not line:
        break
    print line.strip()

# Output: FOO
# Another line
# Done

Limit number of class instances with Python

My main class creates a simple QMainWindow like this:
class mcManageUiC(QtGui.QMainWindow):
    def __init__(self):
        super(mcManageUiC, self).__init__()
        self.initUI()

    def initUI(self):
        self.show()
And at the end of my file I launch it like this:
def main():
    app = QtGui.QApplication(sys.argv)
    renderManagerVar = mcManageUiC()
    sys.exit(app.exec_())

if __name__ == '__main__':
    main()
My problem is that each time I source it, it launches a new window.
I would like to know if there is a way to detect the existence of a previous class instance in my script (so that I can close the old one or avoid launching a new one), or any other solution.
Also, when compiling my code with py2exe, I have the same problem with my .exe file on Windows; it launches a new window every time. Could I add something to the setup.py so Windows doesn't act like this?
Is it possible, and if yes, how?
Note: I'm using Windows 7 64-bit, compiling with Eclipse.
There are a couple of ways to do this. You can use a class attribute to store all the instances -- if you do it this way, you may want to store them as weak references via the weakref module to prevent issues with garbage collection:
class MyClass(object):
    _instances = []

    def __init__(self):
        if(len(self._instances) > 2):
            self._instances.pop(0).kill() #kill the oldest instance
        self._instances.append(self)

    def kill(self):
        pass #Do something to kill the instance
This is a little ugly though. You might also want to consider using some sort of Factory which (conditionally) creates a new instance. This method is a little more general.
import weakref

class Factory(object):
    def __init__(self, cls, nallowed):
        self.product_class = cls   #What class this Factory produces
        self.nallowed = nallowed   #Number of instances allowed
        self.products = []

    def __call__(self, *args, **kwargs):
        self.products = [x for x in self.products if x() is not None] #filter out dead objects
        if(len(self.products) < self.nallowed):
            newproduct = self.product_class(*args, **kwargs)
            self.products.append(weakref.ref(newproduct))
            return newproduct
        else:
            return None

#This factory will create up to 2 instances of MyClass
#and refuse to create more until at least one of those
#instances has died.
factory = Factory(MyClass, 2)
i1 = factory("foo", "bar")     #instance of MyClass
i2 = factory("bar", "baz")     #instance of MyClass
i3 = factory("baz", "chicken") #None
You can limit the number of instances created in your code just by adding a counter:
class A(object):
    ins = 0 # This is a static counter

    def __init__(self):
        if A.ins >= 1: # Check whether an instance already exists
            del self
            print "Failed to create another instance" #if >= 1, del self and return
            return
        A.ins += 1
        print "Success", str(self)
Try running it via:

lst = []
for i in range(1, 101):
    a = A()
    lst.append(a)
You could monopolize a socket:

import socket

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
except socket.error:
    print "Network Error!"

s.settimeout(30)
try:
    # binding reserves the port; a second copy of the program fails here
    s.bind(('localhost', 123))
except socket.error:
    print "could not open... socket already in use (program already running?)"

I have no idea if this is a good method, but I have used it in the past and it solves the problem. Note that this was designed to prevent launching a program that was already running, not to stop a single script from spawning several windows.
Use a class variable:
class mcManageUiC(QtGui.QMainWindow):
    singleton = None

    def __init__(self):
        if not mcManageUiC.singleton: #if no instance yet
            super(mcManageUiC, self).__init__()
            self.initUI()
            ...
            mcManageUiC.singleton = self
        else:
            ...

    def initUI(self):
        self.show()
