Add function not working for set in Python

I am trying to add references to a function to a set (in the exposed_setCallback method).
The output is given at the end. Somehow, it does not add the reference on the second attempt. The links to the source files are:
http://pastebin.com/BNde5Cgr
http://pastebin.com/aCi6yMT9
Below is the code:
import rpyc

test = ['hi']
myReferences = set()

class MyService(rpyc.Service):
    def on_connect(self):
        """Think of this as a constructor of the class, but with
        a new name so as not to 'overload' the parent's init"""
        self.fn = None

    def exposed_setCallback(self, fn):
        # i = 0
        self.fn = fn  # Saves the remote function for calling later
        print self.fn
        myReferences.add(self.fn)
        # abc.append(i)
        # i += 1
        print myReferences
        for x in myReferences:
            print x
        # print abc

if __name__ == "__main__":
    # lists are pass by reference, so the same 'test'
    # will be available to all threads
    # While not required, think about locking!
    from rpyc.utils.server import ThreadedServer
    t = ThreadedServer(MyService, port=18888)
    t.start()
Output:
<function myprint at 0x01FFD370>
set([<function myprint at 0x01FFD370>])
<function myprint at 0x01FFD370>
<function myprint at 0x022DD370>
set([<function myprint at 0x022DD370>,
Please help

I think the issue is that you have a ThreadedServer, which is of course going to be multithreaded.
However, Python sets are not thread-safe (they must not be accessed by multiple threads at the same time), so you need to take a lock whenever you access the set. You use the lock with a Python context manager (the with statement), which handles acquiring and releasing the lock for you; the Lock itself can only be held by one thread at a time, which prevents simultaneous access to your set. See the modified code below:
import rpyc
import threading

test = ['hi']
myReferences = set()
myReferencesLock = threading.Lock()

class MyService(rpyc.Service):
    def on_connect(self):
        """Think of this as a constructor of the class, but with
        a new name so as not to 'overload' the parent's init"""
        self.fn = None

    def exposed_setCallback(self, fn):
        self.fn = fn  # Saves the remote function for calling later
        print self.fn
        with myReferencesLock:
            myReferences.add(self.fn)
        with myReferencesLock:
            print myReferences
            for x in myReferences:
                print x

if __name__ == "__main__":
    # lists are pass by reference, so the same 'test'
    # will be available to all threads
    # While not required, think about locking!
    from rpyc.utils.server import ThreadedServer
    t = ThreadedServer(MyService, port=18888)
    t.start()
Welcome to the world of threaded programming. Make sure you protect data shared between threads with locks!
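The effect of guarding a shared set with a lock can be exercised without RPyC. Here is a minimal, self-contained Python 3 sketch (the worker function and item counts are invented for illustration):

```python
import threading

shared = set()
shared_lock = threading.Lock()

def worker(n):
    # Each thread adds 100 distinct items; the lock serializes
    # every access to the shared set.
    for i in range(100):
        with shared_lock:
            shared.add((n, i))

threads = [threading.Thread(target=worker, args=(n,)) for n in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(shared))  # 4 threads x 100 items = 400
```

The `with shared_lock:` block acquires the lock on entry and releases it on exit, even if an exception is raised inside the block.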

If you want to modify a global variable, you should use a global statement at the top of your function. (Strictly speaking, myReferences.add(...) mutates the set in place and works without it; global is only required if you assign to the name itself, e.g. myReferences = set().)
def exposed_setCallback(self, fn):
    global myReferences
    self.fn = fn  # Saves the remote function for calling later
    print self.fn
    myReferences.add(self.fn)

Related

Detach COM events using pywin32

Is it possible to detach a specific event after attaching it to a COM object?
For example, how to deregister the ClassOfHandlers in the following snippet:
from win32com.client import WithEvents
# ...

class ClassOfHandlers():
    def OnStart(self):
        print("Start observed")

class AnotherClassOfHandlers():
    def OnStart(self):
        print("Start observed from another")

WithEvents(client, ClassOfHandlers)
# ...
WithEvents(client, AnotherClassOfHandlers)
# ...

# Deregister `ClassOfHandlers`
As a variation on the OP's answer, which avoids a static member variable, it is worth remembering that WithEvents() returns an instance of the handler class.
from win32com.client import WithEvents

def MyOnStart():
    print("Start observed")

def MySecondOnStart():
    print("Start observed from another")

class ClassOfHandlers():
    def __init__(self):
        self._fn = MyOnStart

    def setStartFunction(self, fn):
        self._fn = fn

    def OnStart(self):
        self._fn()

handler = WithEvents(client, ClassOfHandlers)
# then later
handler.setStartFunction(MySecondOnStart)
Hence you can re-use the handler class for a different client.
Alternatively you could try opening an issue here and maybe the developers can advise on whether they expose the IConnectionPoint::Unadvise() function which would be needed behind the scenes to switch event handlers (I think).
Edit
Based on DS_London's answer, we can take advantage of the WithEvents return value, so the combined solution would look like:
from win32com.client import WithEvents

def MyOnStart():
    print("Start observed")

def MySecondOnStart():
    print("Start observed from another")

class ClassOfHandlers():
    def __init__(self):
        self._onStarts = []
        # self._onStops = []
        # ... add a list of functions for each event type

    # the following 3 methods are implemented for each event type
    def attachStart(self, fn):
        self._onStarts.append(fn)

    def detachStart(self, fn):
        self._onStarts.remove(fn)

    def OnStart(self):
        for fn in self._onStarts:
            fn()

# Always at the beginning
handler = WithEvents(client, ClassOfHandlers)
handler.attachStart(MyOnStart)
# ...
handler.attachStart(MySecondOnStart)
# ...
handler.detachStart(MyOnStart)
Limitation
If support for multiple clients is needed, and threading is therefore used, this edit won't work and the original answer's approach is needed.
The cause: one needs to pass the ClassOfHandlers into the thread runnable*, but the runnable calls PumpWaitingMessages() in a loop until interrupted, so it cannot return the client handler, which prevents us from attaching or detaching further functions while it is waiting for messages.
* PumpWaitingMessages() requires that it runs on the same thread that connected the ClassOfHandlers to the client, so we can't create the client handler outside the thread and then send it into the thread runnable.
Following is a snippet that shows this scenario:
import time
from threading import Thread

import pythoncom
from win32com.client import WithEvents

def threadRunnable(client, eventsClass, controller):
    pythoncom.CoInitializeEx(pythoncom.COINIT_MULTITHREADED)

    # Connect the custom events
    # The connection needs to be done inside the same thread for PumpWaitingMessages
    handler = WithEvents(client, eventsClass)

    if controller is None:
        print("no control was provided")
        controller = {"sleep_time": 1, "running_flag": True}

    # With this while we won't be able to return the handler
    while controller["running_flag"]:
        pythoncom.PumpWaitingMessages()
        time.sleep(controller["sleep_time"])

    pythoncom.CoUninitialize()

def connectEvents(client, eventsClass, controller=None, runnable=threadRunnable):
    flusher = Thread(target=runnable, args=(client, eventsClass, controller))
    flusher.daemon = True
    flusher.start()

def main():
    controller = {"sleep_time": 1, "running_flag": True}
    connectEvents(client, ClassOfHandlers, controller)
Original
I'm now able to achieve the desired behavior, by attaching a single permanent observer class and managing the events myself.
For example:
from win32com.client import WithEvents
# ...

class ClassOfHandlers():
    OnStarts = []

    def OnStart(self):
        for handler in ClassOfHandlers.OnStarts:
            handler()

def MyOnStart():
    print("Start observed")

def MySecondOnStart():
    print("Start observed from another")

# Always at the beginning
WithEvents(client, ClassOfHandlers)
ClassOfHandlers.OnStarts.append(MyOnStart)
# ...
ClassOfHandlers.OnStarts.append(MySecondOnStart)
# ...
ClassOfHandlers.OnStarts.remove(MyOnStart)
Hint:
The class variable OnStarts should be changed to an instance variable if the class represents an instantiable COM object, so that each instantiated COM object gets its own ClassOfHandlers instance (each with its own handler list).
One also needs to ensure that WithEvents is called only once for each COM object instance.

python multiprocessing manager connect creates another object

I would like to create a shared object among processes. First I created a server process which spawned a process for the class ProcessClass. Then I created another process from which I want to connect to the shared object.
But the connection from the other process created its own instance of ProcessClass.
So what do I need to do to access this remote shared object?
Here is my test code.
from multiprocessing.managers import BaseManager
from multiprocessing import Process

class ProcessClass:
    def __init__(self):
        self._state = False

    def set(self):
        self._state = True

    def get(self):
        return self._state

class MyManager(BaseManager):
    pass

def another_process():
    MyManager.register('my_object')
    m = MyManager(address=('', 50000))
    m.connect()
    proxy = m.my_object()
    print(f'state from another process: {proxy.get()}')

def test_spawn_and_terminate_process():
    MyManager.register('my_object', ProcessClass)
    m = MyManager(address=('', 50000))
    m.start()
    proxy = m.my_object()
    proxy.set()
    print(f'state from main process: {proxy.get()}')
    p = Process(target=another_process)
    p.start()
    p.join()
    print(f'state from main process: {proxy.get()}')

if __name__ == '__main__':
    test_spawn_and_terminate_process()
Output is
python test_communication.py
state from main process: True
state from another process: False
state from main process: True
Your code is working as it is supposed to. If you look at the documentation for multiprocessing.managers.SyncManager you will see that there is, for example, a method dict() to create a shareable dictionary. Would you expect that calling this method multiple times would return the same dictionary over and over again or new instances of sharable dictionaries?
What you need to do is enforce a singleton instance to be used repeatedly for successive invocations of proxy = m.my_object() and the way to do that is to first define the following function:
singleton = None

def get_singleton_process_instance():
    global singleton
    if singleton is None:
        singleton = ProcessClass()
    return singleton
Then you need to make a one-line change in function test_spawn_and_terminate_process:
def test_spawn_and_terminate_process():
    #MyManager.register('my_object', ProcessClass)
    MyManager.register('my_object', get_singleton_process_instance)
This ensures that to satisfy requests for 'my_object', it always invokes get_singleton_process_instance() (returning the singleton) instead of ProcessClass(), which would return a new instance.
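The factory's singleton behavior can be checked in isolation, without the manager; a minimal sketch reusing the names from the answer:

```python
class ProcessClass:
    def __init__(self):
        self._state = False

    def set(self):
        self._state = True

    def get(self):
        return self._state

singleton = None

def get_singleton_process_instance():
    # Every call returns the same ProcessClass instance
    global singleton
    if singleton is None:
        singleton = ProcessClass()
    return singleton

a = get_singleton_process_instance()
a.set()
b = get_singleton_process_instance()
print(a is b, b.get())  # True True
```

The manager server simply calls this factory once per `m.my_object()` request, so every proxy ends up talking to the same underlying object.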

python threading.local() in different module

I am trying to pass data in threading.local() to functions in a different module.
Code is something like this:
other_module.py:
import threading

# 2.1
ll = threading.local()

def other_fn():
    # 2.2
    ll = threading.local()
    v = getattr(ll, "v", None)
    print(v)
main_module.py:
import threading
import other_module

# 1.1
ll = threading.local()

def main_fn(v):
    # 1.2
    ll = threading.local()
    ll.v = v
    other_module.other_fn()

for i in [1, 2]:
    t = threading.Thread(target=main_fn, args=(i,))
    t.start()
But none of the combinations 1.x / 2.x works for me.
I have found a similar question - Access thread local object in different module - Python - but the reply marked as the answer does not work for me either when the print_message function is located in a different module.
Is it possible to pass thread-local data between modules without passing it as a function argument?
In a similar situation I ended up doing the following in a separate module:
import threading
from collections import defaultdict

tls = defaultdict(dict)

def get_thread_ctx():
    """Get thread-local, global context"""
    return tls[threading.get_ident()]
This essentially creates a global variable called tls. Then each thread (based on its identity) gets a key in the global dict. I handle that also as a dict. Example:
class Test(Thread):
    def __init__(self):
        super().__init__()
        # note: we cannot initialize thread local here, since thread
        # is not running yet

    def run(self):
        # Get thread context
        tmp = get_thread_ctx()
        # Create an app-specific entry
        tmp["probe"] = {}
        self.ctx = tmp["probe"]
        while True:
            ...
Now, in a different module:
def get_thread_settings():
    ctx = get_thread_ctx()
    probe_ctx = ctx.get("probe", None)
    # Get what you need from the app-specific region of this thread
    return probe_ctx.get("settings", {})
Hope it helps the next person looking for something similar.
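The isolation of the per-thread slots can be demonstrated with a runnable sketch (the worker function and the results dict are illustrative additions):

```python
import threading
from collections import defaultdict

tls = defaultdict(dict)

def get_thread_ctx():
    """Get thread-local, global context"""
    return tls[threading.get_ident()]

results = {}

def worker(name):
    get_thread_ctx()["v"] = name           # set, as if in one module
    results[name] = get_thread_ctx()["v"]  # read back, as if in another module

threads = [threading.Thread(target=worker, args=("t%d" % i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # each thread saw only its own value
```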

Make Singleton class in Multiprocessing

I created a singleton class using a metaclass. It works well with multiple threads and creates only one instance of the MySingleton class, but with multiprocessing it always creates a new instance.
import multiprocessing

class SingletonType(type):
    # meta class for making a class singleton
    def __call__(cls, *args, **kwargs):
        try:
            return cls.__instance
        except AttributeError:
            cls.__instance = super(SingletonType, cls).__call__(*args, **kwargs)
            return cls.__instance

class MySingleton(object):
    # singleton class
    __metaclass__ = SingletonType

    def __init__(self, *args, **kwargs):
        print "init called"

def task():
    # create singleton class instance
    a = MySingleton()

# create two processes
pro_1 = multiprocessing.Process(target=task)
pro_2 = multiprocessing.Process(target=task)

# start the processes
pro_1.start()
pro_2.start()
My output:
init called
init called
I need the MySingleton __init__ method to be called only once.
Each of your child processes runs its own instance of the Python interpreter, hence the SingletonType in one process doesn't share its state with those in another process. This means that a true singleton that only exists in one of your processes will be of little use, because you won't be able to use it in the other processes: while you can manually share data between processes, that is limited to only basic data types (for example dicts and lists).
Instead of relying on singletons, simply share the underlying data between the processes:
#!/usr/bin/env python3

import multiprocessing
import os

def log(s):
    print('{}: {}'.format(os.getpid(), s))

class PseudoSingleton(object):
    def __init__(*args, **kwargs):
        if not shared_state:
            log('Initializating shared state')
            with shared_state_lock:
                shared_state['x'] = 1
                shared_state['y'] = 2
            log('Shared state initialized')
        else:
            log('Shared state was already initalized: {}'.format(shared_state))

def task():
    a = PseudoSingleton()

if __name__ == '__main__':
    # We need the __main__ guard so that this part is only executed in
    # the parent
    log('Communication setup')
    shared_state = multiprocessing.Manager().dict()
    shared_state_lock = multiprocessing.Lock()

    # create two processes
    log('Start child processes')
    pro_1 = multiprocessing.Process(target=task)
    pro_2 = multiprocessing.Process(target=task)
    pro_1.start()
    pro_2.start()

    # Wait until processes have finished
    # See https://stackoverflow.com/a/25456494/857390
    log('Wait for children')
    pro_1.join()
    pro_2.join()

    log('Done')
This prints
16194: Communication setup
16194: Start child processes
16194: Wait for children
16200: Initializating shared state
16200: Shared state initialized
16201: Shared state was already initalized: {'x': 1, 'y': 2}
16194: Done
However, depending on your problem setting there might be better solutions using other mechanisms of inter-process communication. For example, the Queue class is often very useful.

thread class instance creating a thread function

I have a thread class, and in it I want to create a thread function that does its job concurrently with the thread instance. Is that possible, and if so, how?
The run function of the thread class does a job exactly every x seconds. I want to create a thread function that does a job in parallel with the run function.
class Concurrent(threading.Thread):
    def __init__(self, consType, consTemp):
        # something
        pass

    def run(self):
        # make foo as a thread
        pass

    def foo(self):
        # something
        pass
If not, consider the case below: is it possible, and how?
class Concurrent(threading.Thread):
    def __init__(self, consType, consTemp):
        # something
        pass

    def run(self):
        # make foo as a thread
        pass

    def foo():
        # something
        pass
If it is unclear, please tell me; I will try to re-edit.
Just launch another thread. You already know how to create them and start them, so simply write another subclass of Thread and start() it alongside the ones you already have.
Change def foo() into a Thread subclass with a run() method instead of foo().
First of all, I suggest that you reconsider using threads. In most cases in Python you should use multiprocessing instead, because of Python's GIL (unless you are using Jython or IronPython).
If I understood you correctly, just open another thread inside the thread you already opened:
import threading

class FooThread(threading.Thread):
    def __init__(self, consType, consTemp):
        super(FooThread, self).__init__()
        self.consType = consType
        self.consTemp = consTemp

    def run(self):
        print 'FooThread - I just started'
        # here will be the implementation of the foo function

class Concurrent(threading.Thread):
    def __init__(self, consType, consTemp):
        super(Concurrent, self).__init__()
        self.consType = consType
        self.consTemp = consTemp

    def run(self):
        print 'Concurrent - I just started'
        threadFoo = FooThread('consType', 'consTemp')
        threadFoo.start()
        # do something every X seconds

if __name__ == '__main__':
    thread = Concurrent('consType', 'consTemp')
    thread.start()
The output of the program will be something like:
Concurrent - I just started
FooThread - I just started
(the exact interleaving may vary, since both threads print without synchronization)
