So I have written a wrapper, WebApiSession, for a Web API. When an instance is created, a login is performed and a session is created. The session needs to be kept alive, so the constructor launches a separate process to handle this. The method close() logs out of the session and stops the process. Ideally I would not want to have to call close(); instead, I would like this to happen automatically when the instance is no longer needed, i.e. I would like to be able to remove the session.close() call below. Is this possible?
import time
from multiprocessing import Process

class WebApiSession:
    def __init__(self):
        # start session, login etc.
        # ...
        # start touch loop
        self.touchLoop = Process(target=self.runTouchLoop)
        self.touchLoop.start()

    def runTouchLoop(self):
        # loop rather than recurse, so the recursion limit is never hit
        while True:
            self.touch()
            time.sleep(1)

    def touch(self):
        # touch session
        pass

    def close(self):
        # logout etc.
        # ...
        self.touchLoop.terminate()

    def doSomething(self):
        pass

if __name__ == '__main__':
    session = WebApiSession()
    session.doSomething()
    session.close()
It sounds like you could benefit from implementing WebApiSession as a context manager. You can then treat your session like any other "context" that has special methods called when it is opened and closed, like a file or other connection. It also gives you the added bonus of neatly wrapping up errors and so on.
from multiprocessing import Process

class WebApiSession(object):
    def __init__(self):
        pass  # other init stuff here, but don't connect yet.

    def __enter__(self):  # entering the context.
        # start session, login, start touch loop
        self.touchLoop = Process(target=self.runTouchLoop)
        self.touchLoop.start()
        return self

    def __exit__(self, exc_type, exc_val, traceback):  # leaving the context.
        # Bonus feature: handle exception info here as needed!
        self.close()

    # runTouchLoop, touch, close, doSomething as before
    # ...

if __name__ == '__main__':
    with WebApiSession() as session:
        session.doSomething()
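If you really do want the cleanup to happen when the instance becomes unreachable, without a with-block, a sketch using weakref.finalize is possible. Names here are illustrative (the log argument stands in for the real logout/terminate work), and note the key constraint: the callback must not reference self, or the finalizer would keep the instance alive forever.

```python
import weakref

class WebApiSession:
    """Sketch: cleanup runs automatically when the instance becomes
    unreachable, instead of requiring an explicit close() call."""

    def __init__(self, log):
        # start session, login, start touch loop as in the original ...
        # Pass whatever the cleanup needs as plain arguments (e.g. the
        # touchLoop Process in the real class) -- never self.
        self._finalizer = weakref.finalize(self, WebApiSession._cleanup, log)

    @staticmethod
    def _cleanup(log):
        # in the real class: logout and self.touchLoop.terminate() equivalents
        log.append('closed')

    def close(self):
        # explicit close still works; the finalizer runs at most once
        self._finalizer()

events = []
session = WebApiSession(events)
del session       # instance unreachable -> cleanup runs (CPython: immediately)
print(events)     # -> ['closed']
```

Calling close() twice is safe, since weakref.finalize guarantees the callback runs at most once.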
Is it possible to detach a specific event after attaching it to a COM object?
For example, how to deregister the ClassOfHandlers in the following snippet:
from win32com.client import WithEvents
# ...

class ClassOfHandlers():
    def OnStart(self):
        print("Start observed")

class AnotherClassOfHandlers():
    def OnStart(self):
        print("Start observed from another")

WithEvents(client, ClassOfHandlers)
# ...
WithEvents(client, AnotherClassOfHandlers)
# ...
# Deregister `ClassOfHandlers`
As a variation on the OP's answer, which avoids a static member variable, it is worth remembering that WithEvents() returns an instance of the handler class.
from win32com.client import WithEvents

def MyOnStart():
    print("Start observed")

def MySecondOnStart():
    print("Start observed from another")

class ClassOfHandlers():
    def __init__(self):
        self._fn = MyOnStart

    def setStartFunction(self, fn):
        self._fn = fn

    def OnStart(self):
        self._fn()

handler = WithEvents(client, ClassOfHandlers)
# then later
handler.setStartFunction(MySecondOnStart)
Hence you can re-use the handler class for a different client.
Alternatively you could try opening an issue here and maybe the developers can advise on whether they expose the IConnectionPoint::Unadvise() function which would be needed behind the scenes to switch event handlers (I think).
Edit
Based on DS_London's answer, we can take advantage of the WithEvents return value, so the combined solution would look like:
from win32com.client import WithEvents

def MyOnStart():
    print("Start observed")

def MySecondOnStart():
    print("Start observed from another")

class ClassOfHandlers():
    def __init__(self):
        self._onStarts = []
        # self._onStops = []
        # ... add a list of functions for each event type

    # the following 3 methods are implemented for each event type
    def attachStart(self, fn):
        self._onStarts.append(fn)

    def detachStart(self, fn):
        self._onStarts.remove(fn)

    def OnStart(self):
        for fn in self._onStarts:
            fn()

# Always at the beginning
handler = WithEvents(client, ClassOfHandlers)

handler.attachStart(MyOnStart)
# ...
handler.attachStart(MySecondOnStart)
# ...
handler.detachStart(MyOnStart)
Limitation
If support for multiple clients is needed and threading is therefore used, this edit won't work, and you would need to fall back on the original answer's approach.
The cause: one needs to pass the ClassOfHandlers to the thread runnable*, but the thread runnable calls PumpWaitingMessages() until interrupted, so it cannot return the client handler, preventing us from attaching/detaching further functions while it is waiting for messages.
* PumpWaitingMessages() requires that it run on the same thread that connected the ClassOfHandlers to the client, so we cannot create the client handler outside the thread and then send it into the thread runnable.
Following is a snippet that shows this scenario:
def threadRunnable(client, eventsClass, controller):
    pythoncom.CoInitializeEx(pythoncom.COINIT_MULTITHREADED)
    # Connect the custom events
    # The connection needs to be done inside the same thread for PumpWaitingMessages
    handler = WithEvents(client, eventsClass)
    if controller == None:
        print("no control was provided")
        controller = {"sleep_time": 1, "running_flag": True}
    # With this while we won't be able to return the handler
    while controller["running_flag"]:
        pythoncom.PumpWaitingMessages()
        time.sleep(controller["sleep_time"])
    pythoncom.CoUninitialize()

def connectEvents(client, eventsClass, controller=None, runnable=threadRunnable):
    flusher = Thread(target=runnable, args=(client, eventsClass, controller))
    flusher.daemon = True
    flusher.start()

def main():
    controller = {"sleep_time": 1, "running_flag": True}
    connectEvents(client, ClassOfHandlers, controller)
Original
I'm now able to achieve the desired behavior, by attaching a single permanent observer class and managing the events myself.
For example:
from win32com.client import WithEvents
# ...

class ClassOfHandlers():
    OnStarts = []

    def OnStart(self):
        for handler in ClassOfHandlers.OnStarts:
            handler()

def MyOnStart():
    print("Start observed")

def MySecondOnStart():
    print("Start observed from another")

# Always at the beginning
WithEvents(client, ClassOfHandlers)

ClassOfHandlers.OnStarts.append(MyOnStart)
# ...
ClassOfHandlers.OnStarts.append(MySecondOnStart)
# ...
ClassOfHandlers.OnStarts.remove(MyOnStart)
Hint:
The class variable OnStarts should be changed to an instance variable if the class represents an instantiable COM object. That allows one ClassOfHandlers instance (each with its own handler list) per instantiated COM object.
One also needs to ensure that WithEvents is called only once for each COM object instance.
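Following that hint, a minimal sketch of the per-instance variant. The handler objects below are created directly as stand-ins; with real COM objects, each would come from its own (single) WithEvents call.

```python
class ClassOfHandlers:
    def __init__(self):
        # per-instance list, so each COM object gets its own handlers
        self.OnStarts = []

    def OnStart(self):
        for handler in self.OnStarts:
            handler()

# stand-ins for: handler_a = WithEvents(client_a, ClassOfHandlers), etc.
handler_a = ClassOfHandlers()
handler_b = ClassOfHandlers()

calls = []
handler_a.OnStarts.append(lambda: calls.append('a'))
handler_b.OnStarts.append(lambda: calls.append('b'))

handler_a.OnStart()  # only handler_a's listeners fire
print(calls)  # -> ['a']
```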
I have a class Indexer which is instantiated from the main thread, and the instance is stored in a variable, say, indexer. watchdog.observers.Observer() watches directories for changes, and these happen in another thread. I tried passing this indexer variable from the main thread through my handler Vigilante, which was passed to ob.schedule(Vigilante(indexer)) alongside some other variables from the main thread. I can't access the indexer variable in the Vigilante class because it lives in a different thread. I know I could use a Queue, but I don't know how I'd pass the Queue to watchdog's thread.
Here is the code from main thread:
if watch:
    import watchdog.observers
    from .utils import vigilante

    class callbacks:
        def __init__(self):
            pass

        @staticmethod
        def build(filename, response):
            return _build(filename, response)

        @staticmethod
        def renderer(src, mode):
            return render(src, mode)

    handler = vigilante.Vigilante(_filter, ignore, Indexer, callbacks, Mode)
    path_to_watch = os.path.normpath(os.path.join(workspace, '..'))
    ob = watchdog.observers.Observer()
    ob.schedule(handler, path=path_to_watch, recursive=True)
    ob.start()

    import time
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        ob.stop()
        Indexer.close()
    ob.join()
The Indexer class is meant to write to a database from another part of the code where the Indexer was instantiated.
Here is the code from Vigilante class running in watchdog's thread:
class Vigilante(PatternMatchingEventHandler):
    """Helps to watch files, directories for changes"""

    def __init__(self, pattern, ignore, indexer, callback, mode):
        pattern.append("*.yml")
        self.Callback = callback
        self.Mode = mode
        self.Indexer = indexer
        super(Vigilante, self).__init__(patterns=pattern, ignore_directories=ignore)

    def vigil(self, event):
        print(event.src_path, 'modified')
        IndexReader = self.Indexer.get_index_on(event.src_path)
        dep = IndexReader.read_index()
        print(dep.next(), 'dependency')
        feedout = self.Callback.build(
            os.path.basename(event.src_path),
            self.Callback.renderer(event.src_path, self.Mode.FILE_MODE)
        )

    def on_modified(self, event):
        self.vigil(event)

    def on_created(self, event):
        self.vigil(event)
All I need is a way to pass those variables from the main thread to watchdog's thread, through the Vigilante class
I finally found a way to do it without crossing threads as much as before, with an idea derived from @EvertW's answer. I passed a Queue from the main thread to the Vigilante class in the other thread, so every modified file is put in the Queue. Then, from the main thread, I get the modified file from the queue and read from the Indexer database. Every other task the Vigilante.vigil method needed to perform was moved to the main thread, since those tasks depend on the modified file and on what is read from the Indexer database.
This error disappeared:
SQLite objects created in a thread can only be used in that same thread. The object was created in thread id 9788 and this is thread id 4288.
Here is a snippet from what I did:
....
q = Queue.LifoQueue(10)
handler = vigilante.Vigilante(q, _filter, ignore)
path_to_watch = os.path.normpath(os.path.join(workspace, '..'))
ob = watchdog.observers.Observer()
ob.schedule(handler, path=path_to_watch, recursive=True)
ob.start()

import time
try:
    while True:
        if not q.empty():
            modified = q.get()
            IndexReader = Indexer.get_index_on(modified)
            deps = IndexReader.read_index()
            print(deps.next(), 'dependency')
            # TODO
        else:
            print('q is empty')
        time.sleep(1)
except KeyboardInterrupt:
    ob.stop()
    Indexer.close()
ob.join()
Vigilante class:
class Vigilante(PatternMatchingEventHandler):
    """Helps to watch files, directories for changes"""

    def __init__(self, q, pattern, ignore):
        self.q = q
        super(Vigilante, self).__init__(
            patterns=pattern,
            ignore_patterns=ignore,
            ignore_directories=True
        )

    def vigil(self, event):
        print(event.src_path, 'modified')
        self.q.put(event.src_path)

    def on_modified(self, event):
        self.vigil(event)

    def on_created(self, event):
        self.vigil(event)
....
PS: A word of advice to whoever comes across this kind of threading problem with watchdog: don't trust watchdog's thread to do tasks on the modified files; just get the modified files out and do whatever you like with them, unless the task is a simple one.
You could try the Observer pattern (no pun intended), i.e. let the Observer class have a list of listeners that it will inform of any changes it sees. Then let the indexer announce its interest to the Observer.
In my example, the Observer expects subscribers to be callables that receive the changes. Then you can do:
from queue import Queue
from threading import Thread

class Observable:
    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def onNotify(self, change):
        for listener in self.listeners:
            listener(change)

class Indexer(Thread):
    def __init__(self, observer):
        Thread.__init__(self)
        self.q = Queue()
        observer.subscribe(self.q.put)

    def run(self):
        while True:
            change = self.q.get()
            # handle the change, e.g. update the database
Because the standard Queue is completely thread-safe, this will work fine.
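A self-contained sketch of the same pattern, with the watchdog machinery replaced by direct onNotify calls and a None sentinel to stop the worker thread (names are illustrative):

```python
import queue
import threading

class Observable:
    def __init__(self):
        self.listeners = []

    def subscribe(self, listener):
        self.listeners.append(listener)

    def onNotify(self, change):
        for listener in self.listeners:
            listener(change)

class Indexer(threading.Thread):
    def __init__(self, observer):
        threading.Thread.__init__(self)
        self.q = queue.Queue()
        self.seen = []
        observer.subscribe(self.q.put)  # the queue bridges the threads

    def run(self):
        while True:
            change = self.q.get()
            if change is None:       # sentinel: stop the thread
                break
            self.seen.append(change)  # e.g. write to the database here

observer = Observable()
indexer = Indexer(observer)
indexer.start()
observer.onNotify('foo.yml')  # in real code, called from the watching thread
observer.onNotify('bar.yml')
observer.onNotify(None)       # shut the worker down
indexer.join()
print(indexer.seen)  # -> ['foo.yml', 'bar.yml']
```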
I am looking for a way to run part of a Python function as a different user on Linux, instead of making another script.
For example:
def function1(args):
    # stuffs

def function2():
    # stuffs
I would like to call function1 from function2, passing a couple of arguments; function1 should accept those arguments, execute its work as the different user, and return the result. Since I have to call some things in between during the execution, I don't want to create multiple scripts for a small piece of code. Basically I am trying to connect to a database in function1 (the database connection can be made only as that particular user), run the query, and get the result.
This is a bit more difficult than you might expect. First of all, Python provides os.setuid() and os.setgid() to change the current user/group of the running script, and you can create a context manager to do your bidding for you and automatically revert back to the original user:
import os

class UnixUser(object):
    def __init__(self, uid, gid=None):
        self.uid = uid
        self.gid = gid

    def __enter__(self):
        self.cache = os.getuid(), os.getgid()  # cache the current UID and GID
        if self.gid is not None:  # GID change requested as well
            os.setgid(self.gid)
        os.setuid(self.uid)  # set the UID for the code within the `with` block

    def __exit__(self, exc_type, exc_val, exc_tb):
        # optionally, deal with the exception
        os.setuid(self.cache[0])  # revert back to the original UID
        os.setgid(self.cache[1])  # revert back to the original GID
And to test it:
def test():
    print("Current UID: {}".format(os.getuid()))  # prints the UID of the executing user

test()  # executes as the current user
with UnixUser(105):
    test()  # executes as the user with UID: 105
You can even create a neat decorator to select that some functions should always execute as another user:
def as_unix_user(uid, gid=None):  # optional group
    def wrapper(func):
        def wrapped(*args, **kwargs):
            with UnixUser(uid, gid):
                return func(*args, **kwargs)  # execute the function
        return wrapped
    return wrapper

def test1():
    print("Current UID: {}".format(os.getuid()))  # prints the UID of the executing user

@as_unix_user(105)
def test2():
    print("Current UID: {}".format(os.getuid()))  # prints the UID of the executing user

test1()  # executes as the current user
test2()  # executes as the user with UID: 105
The kicker? Besides not being thread-safe, it will only work if both the current user and the user you want to execute the function as have the CAP_SETUID and, optionally, CAP_SETGID capabilities.
You can get away with having only one user with these capabilities run the main script and then fork when necessary, changing the UID/GID only on the forked process:
import os

def as_unix_user(uid, gid=None):  # optional group
    def wrapper(func):
        def wrapped(*args, **kwargs):
            pid = os.fork()
            if pid == 0:  # we're in the forked process
                if gid is not None:  # GID change requested as well
                    os.setgid(gid)
                os.setuid(uid)  # set the UID for the forked process
                func(*args, **kwargs)  # execute the function
                os._exit(0)  # exit the child process
        return wrapped
    return wrapper

def test1():
    print("Current UID: {}".format(os.getuid()))  # prints the UID of the executing user

@as_unix_user(105)
def test2():
    print("Current UID: {}".format(os.getuid()))  # prints the UID of the executing user

test1()  # executes as the current user
test2()  # executes as the user with UID: 105
The kicker here? You don't get the return value from the forked function. If you need it, you'll have to pipe it back to the parent process and then wait in the parent for the child to finish. You'll also need to choose a format to pass the data between processes (if it's simple enough, I'd recommend JSON, or resort to native pickle)...
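A minimal sketch of that pipe-and-wait approach, assuming the return value is JSON-serializable. The setuid/setgid calls are left out (marked by a comment) so the sketch runs unprivileged; run_forked is an illustrative helper, not part of any library:

```python
import json
import os

def run_forked(func, *args, **kwargs):
    """Run func in a forked child and pipe its JSON-encoded result back."""
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:  # child
        os.close(r)
        # in the privileged variant: os.setgid(gid); os.setuid(uid) here
        result = func(*args, **kwargs)
        with os.fdopen(w, 'w') as pipe:
            json.dump(result, pipe)
        os._exit(0)  # exit the child without running cleanup handlers twice
    # parent
    os.close(w)
    with os.fdopen(r) as pipe:
        data = pipe.read()  # reads until the child closes its end
    os.waitpid(pid, 0)  # reap the child
    return json.loads(data)

print(run_forked(lambda a, b: a + b, 2, 3))  # -> 5
```

Error handling is deliberately omitted: if func raises in the child, the parent would see an empty pipe, which is exactly the kind of plumbing the subprocess module already does for you.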
At that point, you're already doing half of what the subprocess module does, so you might as well just launch your function as a subprocess and be done with it. If you have to go through such hoops to achieve your desired result, chances are your original design is at fault. In your case: why not just grant the current user permission to access the DB? The user already needs the capability to switch to another user who can access it, so you're not gaining anything in terms of security; you're only complicating your life.
In the pymodbus library, in server.sync, socketserver.BaseRequestHandler is used and is defined as follows:
class ModbusBaseRequestHandler(socketserver.BaseRequestHandler):
    """ Implements the modbus server protocol

    This uses the socketserver.BaseRequestHandler to implement
    the client handler.
    """
    running = False
    framer = None

    def setup(self):
        """ Callback for when a client connects
        """
        _logger.debug("Client Connected [%s:%s]" % self.client_address)
        self.running = True
        self.framer = self.server.framer(self.server.decoder, client=None)
        self.server.threads.append(self)

    def finish(self):
        """ Callback for when a client disconnects
        """
        _logger.debug("Client Disconnected [%s:%s]" % self.client_address)
        self.server.threads.remove(self)

    def execute(self, request):
        """ The callback to call with the resulting message

        :param request: The decoded request message
        """
        try:
            context = self.server.context[request.unit_id]
            response = request.execute(context)
        except NoSuchSlaveException as ex:
            _logger.debug("requested slave does not exist: %s" % request.unit_id)
            if self.server.ignore_missing_slaves:
                return  # the client will simply timeout waiting for a response
            response = request.doException(merror.GatewayNoResponse)
        except Exception as ex:
            _logger.debug("Datastore unable to fulfill request: %s; %s", ex, traceback.format_exc())
            response = request.doException(merror.SlaveFailure)
        response.transaction_id = request.transaction_id
        response.unit_id = request.unit_id
        self.send(response)

    # ----------------------------------------------------------------------- #
    # Base class implementations
    # ----------------------------------------------------------------------- #

    def handle(self):
        """ Callback when we receive any data
        """
        raise NotImplementedException("Method not implemented by derived class")

    def send(self, message):
        """ Send a request (string) to the network

        :param message: The unencoded modbus response
        """
        raise NotImplementedException("Method not implemented by derived class")
setup() is called when a client connects to the server, and finish() is called when a client disconnects. I want to manipulate these methods (setup() and finish()) in another class, in another file which uses the library (pymodbus), and add some code to the setup and finish functions. I do not intend to modify the library, since that may cause strange behavior in specific situations.
--- Edit ---
To clarify: I want the setup function in the ModbusBaseRequestHandler class to work as before and remain untouched, but to add something else to it, and this modification should be done in my code, not in the library.
The simplest, and usually best, thing to do is to not manipulate the methods of ModbusBaseRequestHandler, but instead inherit from it and override those methods in your subclass, then just use the subclass wherever you would have used the base class:
class SoupedUpModbusBaseRequestHandler(ModbusBaseRequestHandler):
    def setup(self):
        # do different stuff
        # call super().setup() if you want
        # or call socketserver.BaseRequestHandler.setup() to skip over it
        # or call neither
        ...
Notice that a class statement is just a normal statement, and can go anywhere any other statement can, even in the middle of a method. So, even if you need to dynamically create the subclass because you won't know what you want setup to do until runtime, that's not a problem.
If you actually need to monkeypatch the class, that isn't very hard, although it is easy to screw things up if you aren't careful.
def setup(self):
    # do different stuff
    ...

ModbusBaseRequestHandler.setup = setup
If you want to be able to call the normal implementation, you have to stash it somewhere:
_setup = ModbusBaseRequestHandler.setup

def setup(self):
    # do different stuff
    # call _setup whenever you want
    ...

ModbusBaseRequestHandler.setup = setup
If you want to make sure you copy over the name, docstring, etc., you can use functools.wraps:
@functools.wraps(ModbusBaseRequestHandler.setup)
def setup(self):
    # do different stuff
    ...

ModbusBaseRequestHandler.setup = setup
Again, you can do this anywhere in your code, even in the middle of a method.
If you need to monkeypatch one instance of ModbusBaseRequestHandler while leaving any other instances untouched, you can even do that. You just have to manually bind the method:
def setup(self):
    # do different stuff
    ...

myModbusBaseRequestHandler.setup = setup.__get__(myModbusBaseRequestHandler)
If you want to call the original method, or wrap it, or do this in the middle of some other method, etc., it's otherwise basically the same as the last version.
It can also be done with an interceptor (a wrapping decorator):
from functools import wraps

def interceptor(func):
    print('this is executed at function definition time (def my_func)')

    @wraps(func)
    def wrapper(*args, **kwargs):
        print('this is executed before function call')
        result = func(*args, **kwargs)
        print('this is executed after function call')
        return result
    return wrapper

@interceptor
def my_func(n):
    print('this is my_func')
    print('n =', n)

my_func(4)
more explanation can be found here
I have the following code in python:
class gateWay:
    def __init__(self):
        self.var1 = []
        self.var2 = {}
        self.currentThread = None

    def stateProcess(self, file):
        # some irrelevant code
        self.currentThread = saltGatWayThread(self, file).start()
        return self.var1

    def stopRunning(self):
        self.currentThread.proc.stop()
In addition, here is the source code of saltGatWayThread:
class saltGatWayThread(threading.Thread):
    def __init__(self):
        threading.Thread.__init__(self)
        # some irrelevant code
        self.proc = src.proc.Process1()
In addition, I have the following code in src/proc/__init__.py:
class Process1:
    def stop(self):
        # code to stop operation
In the console, I notice that self.currentThread is None.
My purpose is to save the thread in a local variable when starting it. If I get an abort request, I call the stopRunning function, which takes the saved thread and does a "clean" exit (finishes the thread's process and exits).
Why can't I save the thread and use its structure later on?
Invoke currentThread = saltGatWayThread() and then call .start() on it. currentThread does not contain a thread instance because the start() method always returns None, according to the threading.py source code. See the source in C:\Python27\Lib\threading.py:
def start(self):
    """Start the thread's activity.

    It must be called at most once per thread object. It arranges for the
    object's run() method to be invoked in a separate thread of control.

    This method will raise a RuntimeError if called more than once on the
    same thread object.
    """
    if not self.__initialized:
        raise RuntimeError("thread.__init__() not called")
    if self.__started.is_set():
        raise RuntimeError("threads can only be started once")
    if __debug__:
        self._note("%s.start(): starting thread", self)
    with _active_limbo_lock:
        _limbo[self] = self
    try:
        _start_new_thread(self.__bootstrap, ())
    except Exception:
        with _active_limbo_lock:
            del _limbo[self]
        raise
    self.__started.wait()
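In other words: construct the thread, keep the reference, and only then call start(). A minimal self-contained illustration, with Worker standing in for saltGatWayThread:

```python
import threading

class Worker(threading.Thread):  # stand-in for saltGatWayThread
    def run(self):
        pass  # irrelevant work

# wrong: start() returns None, so the thread object is lost
lost = Worker().start()
print(lost)  # -> None

# right: keep the reference first, then start it
current = Worker()
current.start()
current.join()
print(type(current).__name__)  # -> Worker
```

In stateProcess the fix is the same: assign saltGatWayThread(self, file) to self.currentThread first, then call self.currentThread.start() on the next line.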