In my Python script I get a core dump (segmentation fault), and I think it's because the same function is called twice at the same time. The function reads a Vte terminal in a GTK window:
def term(self, dPluzz, donnees=None):
    text = str(self.v.get_text(lambda *a: True).rstrip())
    [...]
    print "Here is the time " + str(time.time())

def terminal(self):
    self.v = vte.Terminal()
    self.v.set_emulation('xterm')
    self.v.connect("child-exited", lambda term: self.verif(self, dPluzz))
    self.v.connect("contents-changed", self.term)
Result:
Here is the time 1474816913.68
Here is the time 1474816913.68
Erreur de segmentation (core dumped)
How can I avoid the function being executed twice?
The double execution must be a consequence of the contents-changed event firing twice.
You could simply check in your term function whether it has already run, and return early if so.
Add these two lines at the start of the term function:
if hasattr(self, 'term_has_executed'): return
self.term_has_executed = True
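In context, a minimal sketch (the rest of term stays as in the question):

def term(self, dPluzz, donnees=None):
    # Run the body only once, however often "contents-changed" fires
    if hasattr(self, 'term_has_executed'):
        return
    self.term_has_executed = True
    text = str(self.v.get_text(lambda *a: True).rstrip())
    # ... rest of the original body ...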
I created a Python decorator (multi-platform compatible) which provides a mechanism to avoid concurrent execution.
The usage is:
@lock('file.lock')
def run():
    # Function action
    pass
Personally, I am used to building the lock path relative to the current file:
CURRENT_FILE_DIR = os.path.dirname(os.path.abspath(__file__))

@lock(os.path.join(CURRENT_FILE_DIR, os.path.basename(__file__) + ".lock"))
The decorator:
import os

def lock(lock_file):
    """
    Usage:

    @lock('file.lock')
    def run():
        # Function action
    """
    def decorator(target):
        def wrapper(*args, **kwargs):
            if os.path.exists(lock_file):
                raise Exception('Unable to get exclusive lock.')
            with open(lock_file, "w") as f:
                f.write("LOCK")
            try:
                # Execute the target
                result = target(*args, **kwargs)
            finally:
                # Remove the lock file, retrying a few times if needed
                remove_attempts = 10
                while os.path.exists(lock_file) and remove_attempts >= 1:
                    os.remove(lock_file)
                    remove_attempts -= 1
            return result
        return wrapper
    return decorator
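For example (with a hypothetical run() body, just to show the behaviour):

@lock('/tmp/run.lock')  # hypothetical lock path
def run():
    print("working")

run()  # creates the lock file, runs, then removes it

# A second call made while the lock file still exists (e.g. from another
# process) raises Exception('Unable to get exclusive lock.')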
For multithreaded calls
There is a Unix solution for managing multithreaded calls: https://gist.github.com/mvliet/5715690
Don't forget to thank the author of this gist (it's not me).
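I have not reproduced the gist here; as a rough in-process sketch of the same idea with threading.Lock:

import threading

def thread_lock(func):
    _lock = threading.Lock()
    def wrapper(*args, **kwargs):
        # Non-blocking acquire: refuse concurrent execution instead of waiting
        if not _lock.acquire(False):
            raise Exception('Unable to get exclusive lock.')
        try:
            return func(*args, **kwargs)
        finally:
            _lock.release()
    return wrapper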
I would like to write a decorator that redirects stdout for the main function to a specific log file. The main function takes one argument, item, and I want the decorator to pick a different log path for each item. How do I achieve this?
Currently I have:
def redirect_stdout(func):
    def wrapper():
        with open(f"{log_path}{item}.log", "w") as log, contextlib.redirect_stdout(log), contextlib.redirect_stderr(log):
            func(item)
    return wrapper()
@redirect_stdout
def main(item):
But I am not sure how the item argument goes into the decorator. Thanks!
What you are looking for is something like this:
def redirect_stdout(func):
    def wrapper(item):
        with open(f"{log_path}{item}.log", "w") as log, contextlib.redirect_stdout(log), contextlib.redirect_stderr(log):
            func(item)
    return wrapper
To understand how this works you need to properly understand how a decorator works. Below I have tried to show what the decorator expands to; I use ==> to mean "this is equivalent to".
@redirect_stdout
def main(item):
    pass

==>

def main(item):
    pass

main = redirect_stdout(main)  # main is now the wrapper function

---------

main(item) ==> wrapper(item)
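Putting it together, a minimal self-contained sketch; the log_path value and the body of main here are placeholders for illustration:

import contextlib

log_path = "/tmp/"  # placeholder log directory

def redirect_stdout(func):
    def wrapper(item):
        # One log file per item; stdout and stderr both go there
        with open(f"{log_path}{item}.log", "w") as log, \
             contextlib.redirect_stdout(log), \
             contextlib.redirect_stderr(log):
            return func(item)
    return wrapper

@redirect_stdout
def main(item):
    print(f"processing {item}")  # redirected into the item's log file

main("alpha")  # writes /tmp/alpha.log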
I'm new at NAO programming and I'm having some trouble with the ALAudioDevice API.
My problem is the following: I wrote a Python module that should record raw data from the front microphone.
The documentation of the ALAudioDevice API says that the method subscribe(...) calls the function "process" automatically and regularly, with raw data from the microphones as input. I wrote code to do this (see below), and it runs without raising the error flag. However, subscribe bypasses the function "process" and the module doesn't get any audio at all.
Has someone had the same problem?
import qi

class AudioModule(object):
    def __init__(self):
        super(AudioModule, self).__init__()
        self.moduleName = "AudioModule"
        try:
            self.ALAudioDevice = ALProxy("ALAudioDevice")
        except Exception, e:
            self.logger.error("Error when creating proxy on ALAudioDevice:")
            self.logger.error(e)

    def begin_stream(self):
        self.ALAudioDevice.setClientPreferences(self.moduleName, 16000, 3, 0)
        self.ALAudioDevice.subscribe(self.moduleName)

    def end_stream(self):
        self.ALAudioDevice.unsubscribe(self.moduleName)

    def processRemote(self, nbOfChannels, samplesByChannel, altimestamp, buffer):
        nbOfChannels = nbOfChannels
        mylogger = qi.Logger("data")
        mylogger.info("It works !" + str(nbOfChannels))

class MyClass(GeneratedClass):
    def __init__(self):
        GeneratedClass.__init__(self, False)
        self.audio = AudioModule()

    def onLoad(self):
        self.serviceId = self.session().registerService("AudioModule", self.audio)
        pass

    def onUnload(self):
        if self.serviceId != -1:
            self.session().unregisterService(self.serviceId)
            self.serviceId = -1
        pass

    def onInput_onStart(self):
        self.audio.begin_stream()
        self.onInput_onStop()
        pass

    def onInput_onStop(self):
        self.audio.end_stream()
        self.onUnload
        self.onStopped()
        pass
It appears you are subscribing to the audio from a Choregraphe box. I'm not sure that is supposed to work.
But in this configuration the Python code is executed within the same process as the ALAudioDevice service, so you should probably name your callback "process" instead of "processRemote".
Otherwise, you can still do this from a separate Python script.
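If it helps, a minimal sketch of that rename (same four-argument signature, everything else in the module unchanged):

class AudioModule(object):
    # ... __init__, begin_stream and end_stream as in the question ...

    # "process" is the callback used when the module runs in the same
    # process as ALAudioDevice; "processRemote" is only for remote modules.
    def process(self, nbOfChannels, samplesByChannel, altimestamp, buffer):
        mylogger = qi.Logger("data")
        mylogger.info("It works ! " + str(nbOfChannels))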
Is there a way to implement my code better, given that I have a few methods that all wrap one specific method and add a bit of pre/post processing to it?
I'm looking to leverage Python coding techniques (I'm thinking decorators might help clean this up a bit) to implement the class below.
I have a class with one method that interfaces with the outside world; the other methods in the class use it to execute actions and do pre/post processing of some data.
import subprocess as sp

class MyClass():
    def _system_interface(self, command):
        hello = ["echo", "'hello'"]
        start = ["echo", "'start'"]
        stop = ["echo", "'stop'"]
        reset = ["echo", "'reset'"]
        cmd = sp.Popen(locals()[command], stdout=sp.PIPE)
        output = cmd.stdout.readlines()
        print(output)
        return output
    def call_one(self):
        # Do some processing
        self._system_interface("hello")

    def call_two(self):
        # Do some processing
        self._system_interface("start")

    def call_three(self):
        # Do some processing
        self._system_interface("stop")

if __name__ == "__main__":
    c = MyClass()
    c.call_one()
    c.call_two()
    c.call_three()
You can use a class that takes a command when instantiated and, when applied as a decorator, returns a wrapper function that first calls the decorated function and then runs Popen with the command stored at instantiation:
import subprocess as sp

class system_interface:
    def __init__(self, command):
        self.command = command

    def __call__(self, func):
        def wrapper(*args, **kwargs):
            func(*args, **kwargs)
            cmd = sp.Popen(['echo', self.command], stdout=sp.PIPE)
            output = cmd.stdout.readlines()
            print(output)
            return output
        return wrapper
class MyClass():
    @system_interface('hello')
    def call_one(self):
        print('one: do some processing')

    @system_interface('start')
    def call_two(self):
        print('two: do some processing')

    @system_interface('stop')
    def call_three(self):
        print('three: do some processing')

if __name__ == "__main__":
    c = MyClass()
    c.call_one()
    c.call_two()
    c.call_three()
This outputs:
one: do some processing
[b'hello\n']
two: do some processing
[b'start\n']
three: do some processing
[b'stop\n']
I have a simple Storm bolt which only needs to call a function from another Python module. Everything works until I call a method which has print statements inside it.
So my bolt:
import storm
from pipeline import module as m

class ExampleBolt(storm.BasicBolt):
    def initialize(self, conf, context):
        self._conf = conf
        self._context = context
        storm.logInfo("ExampleBolt instance starting ...")

    def process(self, tuple):
        id, text = tuple.values
        result = m.dummy_funct(text)
        storm.emit([result])

ExampleBolt().run()
The method:
def dummy_funct(text):
    print "log info"
    return text
The bolt calls the method once and then hangs on the output. I am using Apache Storm 0.9.3.
Check whether, from the second time on, only the process block is getting executed, and not the whole program.
I have three files in a folder:
MultiProcFunctions.py
The idea is to take any function and parallelize it
import multiprocessing
from multiprocessing import Manager

def MultiProcDecorator(f, *args):
    """
    Takes a function f, and formats it so that results are saved to a shared dict
    """
    def g(procnum, return_dict, *args):
        result = f(*args)
        return_dict[procnum] = result
    g.__module__ = "__main__"
    return g

def MultiProcFunction(f, n_procs, *args):
    """
    Takes a function f, and runs it in n_procs with given args
    """
    manager = Manager()
    return_dict = manager.dict()
    jobs = []
    for i in range(n_procs):
        p = multiprocessing.Process(target=f, args=(i, return_dict) + args)
        jobs.append(p)
        p.start()
    for proc in jobs:
        proc.join()
    return dict(return_dict)
MultiProcClass.py
A file that defines a class which makes use of the above functions to parallelize the sq function:
from MultiProcFunctions import MultiProcDecorator, MultiProcFunction

def sq(x):
    return x**2

g = MultiProcDecorator(sq)

class Square:
    def __init__(self):
        pass

    def f(self, x):
        return MultiProcFunction(g, 2, x)
MultiProcTest.py
Finally, I have a third file that imports the class above and tries to call the f method:
from MultiProcClass import Square
s = Square()
print s.f(2)
However, this yields an error:
File "C:\Python27\lib\multiprocessing\managers.py", line 528, in start
self._address = reader.recv()
EOFError
I am on Windows 7, and also tried:
from MultiProcClass import Square

if __name__ == "__main__":
    s = Square()
    print s.f(2)
In this case, I got a different error:
PicklingError: Can't pickle <function g at 0x01F62530>: it's not found as __main__.g
Not sure how to make heads or tails of this. I get neither error on Ubuntu 12.04 LTS, where all of this works flawlessly; so the error definitely has to do with how Windows does things, but I can't put my finger on it. Any insight is highly appreciated!
I think you get it on Windows because Windows starts a new Python process, whereas Linux forks the existing one. That means on Windows the function needs to be serialized and deserialized, whereas on Linux a pointer to it can be used. Finding a function requires the module and the name of the function to point to it.
g.__module__ should equal f.__module__.
These answers might also help further with how to decorate functions for picklability and usability.
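A minimal sketch of the suggested fix, assuming everything else stays as in the question. Note that it only works because the inner function is named g and MultiProcClass.py happens to bind the result to a module-level name g as well, so pickle can find it by (module, name):

# MultiProcFunctions.py -- only this assignment changes
def MultiProcDecorator(f, *args):
    def g(procnum, return_dict, *args):
        return_dict[procnum] = f(*args)
    # Pickle locates a function by its __module__ and __name__. The wrapper
    # is bound to the module-level name "g" in MultiProcClass.py, the same
    # module where f (sq) lives, so point g there instead of "__main__".
    g.__module__ = f.__module__
    return g

# MultiProcTest.py -- keep the __main__ guard: on Windows, child processes
# re-import this module rather than forking, so top-level code must not
# spawn new processes unconditionally.
from MultiProcClass import Square

if __name__ == "__main__":
    s = Square()
    print s.f(2)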