Passing state attributes between two Python processes

I want to pass a state attribute (bool) to a second process. The main process initializes that process and passes the corresponding attribute to its constructor. Depending on this attribute, the second process should print different values.
The class TestClass is located in a separate file.
This is the sub-process in a second file (let's call it SubProcess.py), which doesn't print my desired results:
from multiprocessing import Process, Value
import time

# this class is executed in a second process and reacts to changes
# in the main process
class TestClass(Process):
    def __init__(self, value):
        Process.__init__(self)
        self.__current_state = value

    def run(self):
        while True:
            if bool(self.__current_state):
                print("Hello World")
            else:
                print("Not Hello World")
            time.sleep(0.5)
This is the main program, which is executed:
import time
from multiprocessing import Value
from SubProcess import TestClass

# main procedure
value_to_pass = Value('i', False).value
test_obj = TestClass(value_to_pass)
test_obj.start()

while True:
    if bool(value_to_pass):
        value_to_pass = Value('i', False).value
    else:
        value_to_pass = Value('i', True).value
    # wait some time
    time.sleep(0.4)
In the end I would like to see an alternating output of Not Hello World and Hello World, which would indicate that the state argument is being passed successfully.
At the moment it just prints according to my initialization of value_to_pass; the value obviously never changes.
Using global variables doesn't fit my requirements, because the code is split across different files. Furthermore, object attributes only work if I use threads. Later I will run this on a Raspberry Pi to handle multiple sensors, so I'm forced to use multiple processes.
Thank you!
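For reference, here is a minimal sketch (not an answer from this thread) of one way to get the alternating output described above: pass the Value object itself into the constructor instead of its .value, so both processes look at the same shared memory. The file names match those used in the question.

# SubProcess.py
from multiprocessing import Process
import time

class TestClass(Process):
    def __init__(self, shared_flag):
        Process.__init__(self)
        self.__current_state = shared_flag  # keep the shared Value, not a plain copy of it

    def run(self):
        while True:
            if bool(self.__current_state.value):
                print("Hello World")
            else:
                print("Not Hello World")
            time.sleep(0.5)

# main file
from multiprocessing import Value
import time
from SubProcess import TestClass

if __name__ == "__main__":
    value_to_pass = Value('i', False)  # the shared flag, passed as an object
    test_obj = TestClass(value_to_pass)
    test_obj.start()
    while True:
        value_to_pass.value = not value_to_pass.value  # toggle the shared flag
        time.sleep(0.4)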

Related

Making conditions on returned value from first process

Hello everyone, I am making a simple Python program where I need multiprocessing to start two functions at the same time. The main flow of the program is:
creating two functions,
making a global variable for storing a value from the first function,
starting the two functions at the same time using Python's multiprocessing library.
The job of the first function is to load data and change the value of the global variable after a certain time.
The job of the second function is to continuously print specific data while the global variable is not equal to a certain value.
"""
import time
import multiprocessing
a = "Wait"
def test1():
print("\n#######################\nFunction 1")
global a
time.sleep(1)
a = "Loaded Data"
print(a)
def test2():
print("\n#######################\nFunction 2")
global a
print(a)
if __name__ == '__main__':
t1 = multiprocessing.Process(target=test1, args=[])
t1.start()
time.sleep(2)
t2 = multiprocessing.Process(target=test2, args=[])
t2.start()
"""
The output of the above code is:
"""
#######################
Function 1
Loaded Data
#######################
Function 2
Wait
"""
The problem I am facing is that the first function executes successfully and prints the new value of a, but the second function still prints the old value of the global variable.
I think I have done something wrong and need some help.
What is needed:
How to fix or change the global variable's value.
How to use a loop (condition) on the global variable in the second function so it keeps printing while the value != a specific value.
Please correct my code and show how to use the loop for the second condition: while a != "Loaded Data" the second function should continuously print "Loading Data .... Please Wait", and stop looping otherwise.
And also sorry for my poor and broken English.
Processes have no shared memory by default in Python, although it is possible to share state between them. Quoting from the docs:
... when doing concurrent programming it is usually best to avoid using shared state as far as possible. This is particularly true when using multiple processes.
One way to do it is using Manager and Value.
from multiprocessing import Manager, Process
import time
import ctypes

def test1(a):
    print("\n#######################\nFunction 1")
    time.sleep(1)
    print(a.value)
    a.value = "Loaded Data"

def test2(a):
    print("\n#######################\nFunction 2")
    print(a.value)

if __name__ == '__main__':
    manager = Manager()
    a = manager.Value(ctypes.c_wchar_p, "Wait")

    t1 = Process(target=test1, args=(a,))
    t1.start()
    time.sleep(2)

    t2 = Process(target=test2, args=(a,))
    t2.start()

    t1.join()
    t2.join()
The reason you get the "old" value of the global a variable is that every process gets its own copy of a. The second process is a completely new instance of the Python interpreter, with no knowledge of the first process.
I suggest you read the docs first.
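Since the question also asks how to keep looping in the second function until the value changes, here is a hedged variant of test2 for the code above; it polls the same manager value, and the "Loading Data .... Please Wait" message is taken from the question.

def test2(a):
    print("\n#######################\nFunction 2")
    # Keep printing until the first process has stored "Loaded Data".
    while a.value != "Loaded Data":
        print("Loading Data .... Please Wait")
        time.sleep(0.5)
    print(a.value)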

How to wait for an input for a set amount of time in python

In the code that I am writing in Python, I check if there is an NFC tag at the reader with id, text = reader.read(). How do I make it skip and run a different function if an NFC tag is not read within a set amount of time?
Thanks.
This is the code, adapted from what you gave, that I used to test it:
from multiprocessing import Process, Manager

def NCFReader(info):
    info = input("Test: ")

def main():
    # Motion is detected, now we want to time the function NCFReader
    info = None
    nfc_proc = Process(target=NCFReader, args=(info,))
    nfc_proc.start()
    nfc_proc.join(timeout=5)
    if nfc_proc.is_alive():
        nfc_proc.terminate()
        print("It did not complete")
        # PROCESS DID NOT FINISH, DO SOMETHING
    else:
        # PROCESS DID FINISH, DO SOMETHING ELSE
        print("It did complete")

main()
From my experience, setting a time limit on a function is surprisingly difficult.
It might seem like overkill, but the way I would do it is like this:
Say the function you want to limit is called NFCReader, then:
from multiprocessing import Process

# Create a process of the function
func_proc = Process(target=NFCReader)
# Start the process
func_proc.start()
# Wait until child process terminates or T seconds pass
func_proc.join(timeout=T)
# Check if function has finished and if not kill it
if func_proc.is_alive():
    func_proc.terminate()
You can read more about Python processes here.
Additionally, since you want to receive data from your process, you need some way to read a variable in one process and have it available in another; for that you can use a Manager object.
In your case you could do the following:
from multiprocessing import Process, Manager

def NCFReader(info):
    info['id'], info['text'] = reader.read()

def main():
    ... SOME LINES OF CODE ...

    # Motion is detected, now we want to time the function NCFReader
    info = Manager().dict()
    info['id'] = None
    info['text'] = None

    nfc_proc = Process(target=NCFReader, args=(info,))
    nfc_proc.start()
    nfc_proc.join(timeout=T)
    if nfc_proc.is_alive():
        nfc_proc.terminate()
        # PROCESS DID NOT FINISH, DO SOMETHING
    else:
        # PROCESS DID FINISH, DO SOMETHING ELSE
Notice that info is a dictionary shared among all the processes, so if you want to use it again, make sure you reset its values.
Hope this helps
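As a side note (not part of the answer above), when only a single result needs to come back, a multiprocessing.Queue with a timeout on get() can replace the Manager dict: queue.get raises queue.Empty if nothing arrives in time. A rough sketch, where reader.read() is the blocking NFC call from the question and the 5-second limit is assumed:

from multiprocessing import Process, Queue
import queue  # only needed for the queue.Empty exception

def NCFReader(result_queue):
    id, text = reader.read()  # blocking NFC read from the question
    result_queue.put((id, text))

def main():
    result_queue = Queue()
    nfc_proc = Process(target=NCFReader, args=(result_queue,))
    nfc_proc.start()
    try:
        id, text = result_queue.get(timeout=5)  # wait at most 5 seconds
        print("Tag read:", id, text)
    except queue.Empty:
        print("No tag within 5 seconds")
        # DO THE OTHER FUNCTION HERE
    finally:
        if nfc_proc.is_alive():
            nfc_proc.terminate()
        nfc_proc.join()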

Working with deque object across multiple processes

I'm trying to reduce the processing time of reading a database with roughly 100,000 entries, but I need them formatted a specific way. To do this, I tried to use Python's multiprocessing Pool.map function, which works perfectly except that I can't seem to get any form of queue reference to work across the workers.
I've been using information from Filling a queue and managing multiprocessing in python to guide me in using queues across multiple processes, and Using a global variable with a thread to guide me in using global variables across threads. I've gotten the software to work, but when I check the list/queue/dict/map length after running the process, it always returns zero.
I've written a simple example to show what I mean.
You have to run the script as a file; the pool's initializer function does not work from the interactive interpreter.
from multiprocessing import Pool
from collections import deque

global_q = deque()

def my_init(q):
    global global_q
    global_q = q
    q.append("Hello world")

def map_fn(i):
    global global_q
    global_q.append(i)

if __name__ == "__main__":
    with Pool(3, my_init, (global_q,)) as pool:
        pool.map(map_fn, range(3))
    for p in range(len(global_q)):
        print(global_q.pop())
Theoretically, when I pass the queue object reference from the main process to the worker processes via the pool's initializer, and then bind each worker's global variable to it in that function, the elements I insert into the queue from the map function later should still go to the original queue object (long story short, everything should end up in the same queue, because they all point to the same location in memory).
So, I expect:
Hello World
Hello World
Hello World
1
2
3
Of course, the 1, 2, 3 could come out in an arbitrary order, but what you'll actually see in the output is nothing at all ('').
How come when I pass object references to the pool function, nothing happens?
Here's an example of how to share something between processes by extending the multiprocessing.managers.BaseManager class to support deques.
There's a Customized managers section in the documentation about creating them.
import collections
from multiprocessing import Pool
from multiprocessing.managers import BaseManager

class DequeManager(BaseManager):
    pass

class DequeProxy(object):
    def __init__(self, *args):
        self.deque = collections.deque(*args)
    def __len__(self):
        return self.deque.__len__()
    def appendleft(self, x):
        self.deque.appendleft(x)
    def append(self, x):
        self.deque.append(x)
    def pop(self):
        return self.deque.pop()
    def popleft(self):
        return self.deque.popleft()

# Currently only exposes a subset of deque's methods.
DequeManager.register('DequeProxy', DequeProxy,
                      exposed=['__len__', 'append', 'appendleft',
                               'pop', 'popleft'])

process_shared_deque = None  # Global only within each process.

def my_init(q):
    """ Initialize module-level global. """
    global process_shared_deque
    process_shared_deque = q
    q.append("Hello world")

def map_fn(i):
    process_shared_deque.append(i)  # deques don't have a "put()" method.

if __name__ == "__main__":
    manager = DequeManager()
    manager.start()
    shared_deque = manager.DequeProxy()

    with Pool(3, my_init, (shared_deque,)) as pool:
        pool.map(map_fn, range(3))

    for p in range(len(shared_deque)):  # Show left-to-right contents.
        print(shared_deque.popleft())
Output:
Hello world
0
1
2
Hello world
Hello world
You can't use a global variable for multiprocessing.
Pass a multiprocessing queue into the function instead.
from multiprocessing import Queue

queue = Queue()

def worker(q):
    q.put(something)
Also, you are probably experiencing that the code looks all right, but because the pool creates separate processes, even the errors are separated, so you don't see that the code not only isn't working but actually throws an error.
The reason your output is '' is that nothing was ever appended to your q/global_q. And if something was appended, it was only to some variable that happens to be called global_q, but is a totally different object from the global_q in your main process.
Try a print('Hello world') inside the function you want to multiprocess and you will see for yourself that nothing is actually printed at all. That process simply runs outside of your main process, and the only way to get data out of it is through multiprocessing queues: you put data onto a Queue with queue.put('something') and read it with something = queue.get().
Try to understand this code and you will do well:
import multiprocessing as mp

# This queue will be shared among all processes, but you need to pass it as an
# argument to each process; you CANNOT rely on it as a global variable. The worker
# functions run in completely separate processes, and the only thing that can
# really be shared between them is a multiprocessing.Queue.
shared_queue = mp.Queue()

def channel(que, channel_num):
    que.put(channel_num)

if __name__ == '__main__':
    processes = [mp.Process(target=channel, args=(shared_queue, channel_num))
                 for channel_num in range(8)]

    for p in processes:
        p.start()

    for p in processes:  # wait for all results to close the pool
        p.join()

    for i in range(8):  # Get data from the Queue (you can read from it at any time).
        print(shared_queue.get())

Python - How to read class instances in real time from a separate process that is printing the information

I have a piece of code that is constantly creating new instances of class Car. In doing so, class Car keeps a list of its own instances, so when I want to get info about the current instances I can easily do so, as in the code below:
from multiprocessing import Process
import time

class Car:
    car_list = list()

    def __init__(self, id, model):
        self.id = id
        self.model = model
        Car.car_list.append(self)

    @classmethod
    def get_current_instances(cls):
        return Car.car_list

class Interface:
    def print_current_system(self):
        while True:
            print(len(Car.get_current_instances()))
            time.sleep(1)

if __name__ == "__main__":
    interface = Interface()

    model = ["Toyota", "BMW"]
    [Car(i, model[i]) for i in range(len(model))]

    print_process = Process(target=interface.print_current_system)
    print_process.start()

    Car(2345, "Tesla")
    print("from main process " + str(len(Car.get_current_instances())))
This code is simplified for the purpose of the question, but the problem remains the same: I am invoking the function print_current_system from a new process, and this function constantly looks at all the current instances of Car and prints their number.
When I start this process and then, later on, add some new instances of Car, those instances are hidden from the child process while being perfectly visible to the main process. I am pretty sure I need to use something like Queue or Pipe, but I am not sure how.
This is the output of the above code:
2
from main process 3
2
2
2
2
2
Background: because the CPython interpreter is guarded by the GIL (global interpreter lock), threads cannot give you true parallelism in Python. Instead, to achieve the same effect, you have to use different processes, as you are doing in your example. Because these are different processes, with different interpreters and different namespaces, you cannot directly access normal data in one process from another. When you create the new process, the Python interpreter gives the child its own copy of the parent's Python objects (by forking or by re-importing the module, depending on the platform), so Car.car_list is now two different objects, one in each process. When one process adds to that list, it is adding to a different list than the one the other process is reading.
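To make that copying concrete, here is a small self-contained sketch (not from the original answer): the child appends to what looks like the same module-level list, but the parent never sees the change.

from multiprocessing import Process

data = []  # each process ends up with its own copy of this list

def child():
    data.append("added in child")
    print("child sees:", data)  # ['added in child']

if __name__ == "__main__":
    p = Process(target=child)
    p.start()
    p.join()
    print("parent sees:", data)  # [] - the child's append is invisible here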
Answer: your hunch was correct: you will want to use one of the data structures in the multiprocessing module. These data structures are specially written to be "thread safe" (I guess actually "process safe" in this context) and to marshal the shared data between the two processes behind the scenes.
Example: you could use a global queue in which the "producer" process adds items and the "consumer" process removes them and adds them to its own copy of the list.
from multiprocessing import Queue

class Car:
    global_queue = Queue()
    _car_list = []  # This member will be up-to-date in the producer
                    # process. In that process, access it directly.
                    # In the consumer process, call get_car_list instead.
                    # This can be wrapped in an interface which knows
                    # which process it is in, so the calling code does
                    # not have to keep track.

    def __init__(self, id, model):
        self.id = id
        self.model = model
        self.global_queue.put(self)
        self._car_list.append(self)

    @classmethod
    def get_car_list(cls):
        """ Get the car list for the consumer process.

        Note: do not call this from the producer process.
        """
        # Before returning the car list, pull all pending cars off the queue.
        # while cls.global_queue.qsize() > 0:  # qsize is not implemented on some unix systems
        while not cls.global_queue.empty():
            cls._car_list.append(cls.global_queue.get())
        return cls._car_list
Note: with the above code, you can only have one consumer of the data. If the other processes call the get_car_list method, they will remove the pending cars from the queue and the consumer process won't receive them. If you need to have multiple consumer processes, you will need to take a different approach.
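One possible "different approach" for several consumers (a hedged sketch, not part of the original answer) is to keep the list itself behind a Manager, so every process appends to and reads from the same proxied list instead of draining a queue:

from multiprocessing import Manager, Process
import time

def consumer(name, car_list):
    # Any number of consumers can watch the same proxied list.
    for _ in range(3):
        print(name, "sees", len(car_list), "cars")
        time.sleep(1)

if __name__ == "__main__":
    manager = Manager()
    car_list = manager.list()  # shared, proxy-backed list
    car_list.append({"id": 0, "model": "Toyota"})
    car_list.append({"id": 1, "model": "BMW"})

    watchers = [Process(target=consumer, args=("watcher-{}".format(i), car_list))
                for i in range(2)]
    for w in watchers:
        w.start()

    car_list.append({"id": 2345, "model": "Tesla"})  # producer side
    for w in watchers:
        w.join()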
If all you want is to keep a count of how many cars you have, you can use a shared memory object like Value.
You can achieve what you want with just a few changes to your code:
from multiprocessing import Process, Value
import time

class Car:
    car_list = list()
    car_quantity = Value('i', 0)  # Use a shared memory object here.

    def __init__(self, id, model):
        self.id = id
        self.model = model
        Car.car_list.append(self)
        Car.car_quantity.value += 1  # Update quantity

    @classmethod
    def get_current_instances(cls):
        return Car.car_list

class Interface:
    def print_current_system(self):
        while True:
            print(Car.car_quantity.value)  # Just print the value of the shared memory object (Value).
            time.sleep(1)

if __name__ == "__main__":
    interface = Interface()

    model = ["Toyota", "BMW"]
    [Car(i, model[i]) for i in range(len(model))]

    print_process = Process(target=interface.print_current_system)
    print_process.start()

    time.sleep(3)  # Added here so you can see the output changing from 2 to 3.

    Car(2345, "Tesla")
    print("from main process " + str(len(Car.get_current_instances())))
Output:
2
2
2
from main process 3
3
3
3
3
3

Updating object attributes from within a module run in a multiprocessing process

I am relatively new to Python and definitely new to multiprocessing. I'm following this question/answer for the structure of my multiprocessing code, but in def func_A I call a module function and pass a class instance as one of the arguments. Inside the module, I change an object attribute that I would like the main program to see, so it can keep the user updated with the attribute's value. The child processes run for very long times, so I need the main program to provide updates as they run.
My suspicion is that I'm not understanding namespace/object scoping or something similar, but from what I've read, passing an object (an instance of a class) as an argument passes a reference to the object, not a copy. I would have thought this meant that changing the attributes of the object in the child process/module would change the attributes in the main program object (since they're the same object). Or am I confusing things?
The code for my main program:
# MainProgram.py
import multiprocessing as mp
import time
from time import sleep
import sys
from datetime import datetime

import myModule

MYOBJECTNAMES = ['name1', 'name2']

class myClass:
    def __init__(self, name):
        self.name = name
        self.value = 0

myObjects = []
for n in MYOBJECTNAMES:
    myObjects.append(myClass(n))

def func_A(process_number, queue):
    start = datetime.now()
    print("Process {} (object: {}) started at {}".format(
        process_number, myObjects[process_number].name, start))
    myModule.Eval(myObjects[process_number])
    sys.stdout.flush()

def multiproc_master():
    queue = mp.Queue()
    proceed = mp.Event()

    processes = [mp.Process(target=func_A, args=(x, queue))
                 for x in range(len(myObjects))]
    for p in processes:
        p.start()

    for i in range(100):
        for o in myObjects:
            print("In main: Value of {} is {}".format(o.name, o.value))
        sleep(10)

    for p in processes:
        p.join()

if __name__ == '__main__':
    split_jobs = multiproc_master()
    print(split_jobs)
The code for my module program:
# myModule.py
from time import sleep

def Eval(myObject):
    for i in range(100):
        myObject.value += 1
        print("In module: Value of {} is {}".format(myObject.name, myObject.value))
        sleep(5)
That question/answer you linked to was probably a poor choice to use as a template, as it does many things that your code doesn't require (much less use).
I think your biggest misconception about how multiprocessing works is thinking that all the code runs in the same address space. The main task runs in its own, and there is a separate one for each subtask. The way your code is written, each of them ends up with its own separate myObjects list. That's why the main task doesn't see any of the changes made by the other tasks.
While there are ways to share objects using the multiprocessing module, doing so often introduces significant overhead, because keeping them in sync between all the processes requires a lot of work "under the covers" to make them seem shared (which is what really happens, since they cannot truly be shared across separate address spaces). This overhead frequently cancels out any speed gained from parallel processing.
As stated in the documentation: "when doing concurrent programming it is usually best to avoid using shared state as far as possible".
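If some sharing is unavoidable here, one relatively cheap option (a sketch under the assumption that only progress updates need to cross the process boundary, not something stated in the answer above) is to reuse the mp.Queue that multiproc_master already creates: the child pushes (name, value) updates and the main loop drains them instead of reading the objects directly.

# myModule.py - report progress instead of expecting the caller to see the mutation
from time import sleep

def Eval(myObject, queue):
    for i in range(100):
        myObject.value += 1  # only this process's copy of the object changes
        queue.put((myObject.name, myObject.value))  # send the update to the parent
        sleep(5)

# In MainProgram.py, func_A would forward the queue it already receives:
#     myModule.Eval(myObjects[process_number], queue)
# and the monitoring loop in multiproc_master would drain it:
#     while not queue.empty():
#         name, value = queue.get()
#         print("In main: Value of {} is {}".format(name, value))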
