The following code, an example of using Queue from multiprocessing in Python, encounters an error. How can I fix it? Thanks in advance for your help.
# coding=utf-8
from multiprocessing import Queue

if __name__ == '__main__':
    q = Queue(3)
    q.put('message1')
    q.put('message2')
    print(q.full())
    q.put('message3')
    print(q.full())

    try:
        q.put("message4", True, 1)
    except:
        print("the queue is full. current amount is %s" % q.qsize())

    try:
        q.put('message4')
    except:
        print("the queue is full. current amount is %s" % q.qsize())

    if not q.empty():
        print('get message from the queue.')
        for i in range(q.qsize()):
            print(q.get_nowait())

    if not q.full():
        q.put_nowait("message4")
The output, including the error, is:
False
True
Traceback (most recent call last):
File "process.py", line 13, in <module>
q.put("message4",True,1)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 84, in put
raise Full
queue.Full
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "process.py", line 15, in <module>
print("the queue is full. current amount is %s"%q.qsize())
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/queues.py", line 120, in qsize
return self._maxsize - self._sem._semlock._get_value()
NotImplementedError
As Víctor Terrón has suggested in a GitHub discussion, you can use his implementation:
https://github.com/vterron/lemon/blob/d60576bec2ad5d1d5043bcb3111dff1fcb58a8d6/methods.py#L536-L573
According to its docstring:
A portable implementation of multiprocessing.Queue. Because of
multithreading / multiprocessing semantics, Queue.qsize() may raise
the NotImplementedError exception on Unix platforms like Mac OS X
where sem_getvalue() is not implemented. This subclass addresses this
problem by using a synchronized shared counter (initialized to zero)
and increasing / decreasing its value every time the put() and get()
methods are called, respectively. This not only prevents
NotImplementedError from being raised, but also allows us to implement
a reliable version of both qsize() and empty().
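If you just need the idea rather than the full linked file, the following is a minimal sketch of that shared-counter approach. It is my own illustration, not Víctor Terrón's exact code; the class name PortableQueue and the _size attribute are invented for the example. The counter is only updated after the underlying put() or get() succeeds, and it is carried along when the queue is pickled to a child process.

import multiprocessing
import multiprocessing.queues

class PortableQueue(multiprocessing.queues.Queue):
    """Sketch: a Queue whose qsize()/empty() avoid sem_getvalue()."""

    def __init__(self, maxsize=0):
        super().__init__(maxsize, ctx=multiprocessing.get_context())
        # Shared counter: incremented by put(), decremented by get().
        self._size = multiprocessing.Value('i', 0)

    def __getstate__(self):
        # Carry the counter along when the queue is sent to a child process.
        return super().__getstate__() + (self._size,)

    def __setstate__(self, state):
        super().__setstate__(state[:-1])
        self._size = state[-1]

    def put(self, *args, **kwargs):
        super().put(*args, **kwargs)
        with self._size.get_lock():
            self._size.value += 1

    def get(self, *args, **kwargs):
        item = super().get(*args, **kwargs)
        with self._size.get_lock():
            self._size.value -= 1
        return item

    def qsize(self):
        return self._size.value

    def empty(self):
        return self.qsize() == 0

In the snippet at the top of this question, PortableQueue(3) could then stand in for Queue(3), and q.qsize() should no longer raise NotImplementedError on macOS.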
Related
import asyncio

async def task_successful():
    pass

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    task = loop.create_task(task_successful())
    task.set_exception(ValueError)
    loop.run_until_complete(task)
What I expect is an exception raised from loop.run_until_complete(task), or at least task.exception() being a ValueError.
Instead I'm getting the following error:
Traceback (most recent call last):
File "set_exception.py", line 11, in <module>
task.set_exception(ValueError)
RuntimeError: Task does not support set_exception operation
This is very weird since this task is not done when the call is made:
<Task pending coro=<task_successful() running at set_exception.py:4>>
Also, it is not the InvalidStateError mentioned in the docs.
I tried Python 3.7.5 and 3.9.1.
What is it: a bug, or am I doing something wrong?
This isn't a valid operation on a Task instance. See the Task docs:
"A Future-like object that runs a Python coroutine"
"asyncio.Task inherits from Future all of its APIs except Future.set_result() and Future.set_exception()."
For further proof, look at the source:
class Task(futures._PyFuture):  # Inherit Python Task implementation
                                # from a Python Future implementation.

    ...

    def set_exception(self, exception):
        raise RuntimeError('Task does not support set_exception operation')
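If the goal is simply for loop.run_until_complete() to raise, one alternative (my own example, not part of the answer above) is to raise the exception inside the coroutine, or to use a plain Future, which does support set_exception():

import asyncio

async def task_failing():
    # A Task obtains its exception from the coroutine body itself.
    raise ValueError("boom")

if __name__ == '__main__':
    loop = asyncio.get_event_loop()

    # Option 1: let the coroutine raise; run_until_complete() re-raises it.
    task = loop.create_task(task_failing())
    try:
        loop.run_until_complete(task)
    except ValueError as exc:
        print("task raised:", exc)

    # Option 2: a bare Future still supports set_exception().
    fut = loop.create_future()
    fut.set_exception(ValueError("boom"))
    try:
        loop.run_until_complete(fut)
    except ValueError as exc:
        print("future raised:", exc)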
I am polling data with a Python 2.7.10 function. I want it to time out if a device takes too long to respond, and to catch a RuntimeError if that device is not available.
I am using this Timeout function:
import signal

class Timeout():
    class Timeout(Exception):
        pass

    def __init__(self, sec):
        self.sec = sec

    def __enter__(self):
        signal.signal(signal.SIGALRM, self.raise_timeout)
        signal.alarm(self.sec)

    def __exit__(self, *args):
        signal.alarm(0)

    def raise_timeout(self, *args):
        raise Timeout.Timeout()
This is my loop to make the data polls (Modbus) and catch the exceptions. This loop is called every 60 seconds:
def getDeviceTags(name, tag_data):
    global val_returns
    for tag in tag_data[name]:
        local_vals = []
        local_vals.append(name + "." + tag)
        try:
            with Timeout(3):
                value = modbus.read(str(name), str(tag))
                local_vals.append(str(value.value()))
        except RuntimeError:
            print("RuntimeError on " + str(name))
            local_vals.append(None)
        except Timeout.Timeout:
            print("Timeout on " + str(name))
            local_vals.append(None)
        val_returns.append(local_vals)
This will work for DAYS at a time with no issues, with both RuntimeErrors and Timeouts being printed to the console and all data logged - GREAT.
However, recently it has been getting stuck, and this is the only error I'm getting:
Traceback (most recent call last):
File "working_one_min_back.py", line 161, in <module>
job()
File "working_one_min_back.py", line 79, in job
getDeviceTags(str(key), data)
File "working_one_min_back.py", line 57, in getDeviceTags
print("RuntimeError on " + str(name))
File "working_one_min_back.py", line 30, in raise_timeout
raise Timeout.Timeout()
__main__.Timeout
There’s no guarantee that a “Python signal” isn’t delivered after a call to alarm(0). The actual (C) signal might already have been delivered, causing the Python handler to be invoked a few bytecode instructions later.
If you call signal.signal from __exit__, any such pending signal is discarded, which usefully prevents mistaking it for the next one requested. Using that to restore the handler to the value it had before the Timeout was created (as returned by the first signal.signal call) is a good idea anyway. (Reset it after calling alarm(0) to prevent SIG_DFL from killing the process.)
In Python 3, such a call delivers any pending signals instead of discarding them, which is an improvement in that it prevents losing a signal just because the handler changed. (This is no more documented than the Python 2 behavior, unfortunately.) You can try to suppress such a late signal by setting an attribute in __exit__ and ignoring any (Python) signal raised when it is set.
Of course, the signal could be delivered after __exit__ begins execution and before the signal is discarded (or marked to be ignored). You therefore have to handle an operation both completing and timing out, perhaps by having several assignments to a single variable that is then appended in just one place.
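Putting those points together, here is one possible sketch of such a Timeout (my adaptation, not code from this answer): it restores the previous handler, and uses an attribute set in __exit__ so that a late alarm is recorded rather than raised.

import signal

class Timeout(object):
    class Timeout(Exception):
        pass

    def __init__(self, sec):
        self.sec = sec
        self.expired = False   # set if an alarm fires after __exit__ has begun

    def __enter__(self):
        self.expired = False
        self._exiting = False
        # Remember the previous handler so it can be restored on exit.
        self._old_handler = signal.signal(signal.SIGALRM, self.raise_timeout)
        signal.alarm(self.sec)
        return self

    def __exit__(self, *args):
        self._exiting = True
        signal.alarm(0)
        # Restore the old handler only after cancelling the alarm, so a
        # pending SIGALRM cannot reach SIG_DFL and kill the process.
        signal.signal(signal.SIGALRM, self._old_handler)

    def raise_timeout(self, *args):
        if self._exiting:
            # Late delivery while leaving the block: note it instead of raising.
            self.expired = True
            return
        raise Timeout.Timeout()

Using `with Timeout(3) as t:`, the caller can then check t.expired afterwards to handle the case where the operation both completed and timed out.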
I'm running a Django REST Framework server hosted on Google Cloud. Every hour or so I get a couple of these errors that I can't figure out:
Exception ignored in: <function _get_module_lock.<locals>.cb at 0x7f50f275ebf8>
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 191, in cb
KeyError: ('django.contrib.sessions.serializers',)
There's no traceback since the error is caught and ignored. I've followed the code down to the cpython library in this method:
def _get_module_lock(name):
    """Get or create the module lock for a given module name.

    Should only be called with the import lock taken."""
    lock = None
    try:
        lock = _module_locks[name]()
    except KeyError:
        pass

    if lock is None:
        if _thread is None:
            lock = _DummyModuleLock(name)
        else:
            lock = _ModuleLock(name)

        def cb(_):
            del _module_locks[name]

        _module_locks[name] = _weakref.ref(lock, cb)

    return lock
Has anyone seen this error before? I can't find any pattern of when this error comes in and can't manually reproduce it with any certainty.
I enforce a timeout for a block of code using the multiprocessing module. It appears that with inputs of a certain size, the following error is raised:
WindowsError: [Error 5] Access is denied
I can replicate this error with the following code. Note that the code completes with '467,912,040' but not with '517,912,040'.
import multiprocessing, Queue

def wrapper(queue, lst):
    lst.append(1)
    queue.put(lst)
    queue.close()

def timeout(timeout, lst):
    q = multiprocessing.Queue(1)
    proc = multiprocessing.Process(target=wrapper, args=(q, lst))
    proc.start()
    try:
        result = q.get(True, timeout)
    except Queue.Empty:
        return None
    finally:
        proc.terminate()
    return result

if __name__ == "__main__":
    # lst = [0]*417912040  # this works fine
    # lst = [0]*467912040  # this works fine
    lst = [0]*517912040    # this does not
    print "List length:", len(lst)
    timeout(60*30, lst)
The output (including error):
List length: 517912040
Traceback (most recent call last):
File ".\multiprocessing_error.py", line 29, in <module>
print "List length:",len(lst)
File ".\multiprocessing_error.py", line 21, in timeout
proc.terminate()
File "C:\Python27\lib\multiprocessing\process.py", line 137, in terminate
self._popen.terminate()
File "C:\Python27\lib\multiprocessing\forking.py", line 306, in terminate
_subprocess.TerminateProcess(int(self._handle), TERMINATE)
WindowsError: [Error 5] Access is denied
Am I not permitted to terminate a Process of a certain size?
I am using Python 2.7 on Windows 7 (64bit).
While I am still uncertain regarding the precise cause of the problem, I have some additional observations as well as a workaround.
Workaround.
Add a try/except block around proc.terminate() in the finally clause:
finally:
    try:
        proc.terminate()
    except WindowsError:
        pass
This also seems to be the solution arrived at in a related (?) issue posted here on GitHub (you may have to scroll down a bit).
Observations.
This error is dependent on the size of the object passed to the Process/Queue, but it is not related to the execution of the Process itself. In the OP, the Process completes before the timeout expires.
proc.is_alive() returns True both before and after the execution of proc.terminate() (which then throws the WindowsError). A second or two later, proc.is_alive() returns False and a second call to proc.terminate() succeeds.
Forcing the main thread to sleep (time.sleep(1)) in the finally block also prevents the WindowsError from being thrown. Thanks to @tdelaney's comment on the OP.
My best guess is that proc is in the process of freeing memory (?, or something comparable) while being killed by the OS (having completed execution) when the call to proc.terminate() attempts to kill it again.
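Putting the workaround and those observations together, a more defensive version of the OP's timeout() might look like the following sketch (my variant, assuming the wrapper() from the question; the one-second sleep and the swallowed WindowsError are the workaround, not a root-cause fix):

import multiprocessing, Queue, time

def timeout(seconds, lst):
    q = multiprocessing.Queue(1)
    proc = multiprocessing.Process(target=wrapper, args=(q, lst))
    proc.start()
    result = None
    try:
        result = q.get(True, seconds)
    except Queue.Empty:
        pass
    finally:
        # Give the child a moment to finish tearing down before killing it,
        # per the observation that terminate() succeeds a second or two later.
        time.sleep(1)
        try:
            proc.terminate()
        except WindowsError:
            pass
        proc.join()
    return result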
I'm trying to make a file-like object which is meant to be assigned to sys.stdout/sys.stderr during testing to provide deterministic output. It's not meant to be fast, just reliable. What I have so far almost works, but I need some help getting rid of the last few edge-case errors.
Here is my current implementation.
try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO
from os import getpid

class MultiProcessFile(object):
    """
    helper for testing multiprocessing

    multiprocessing poses a problem for doctests, since the strategy
    of replacing sys.stdout/stderr with file-like objects then
    inspecting the results won't work: the child processes will
    write to the objects, but the data will not be reflected
    in the parent doctest-ing process.

    The solution is to create file-like objects which will interact with
    multiprocessing in a more desirable way.

    All processes can write to this object, but only the creator can read.
    This allows the testing system to see a unified picture of I/O.
    """
    def __init__(self):
        # per advice at:
        # http://docs.python.org/library/multiprocessing.html#all-platforms
        from multiprocessing import Queue
        self.__master = getpid()
        self.__queue = Queue()
        self.__buffer = StringIO()
        self.softspace = 0

    def buffer(self):
        if getpid() != self.__master:
            return

        from Queue import Empty
        from collections import defaultdict
        cache = defaultdict(str)
        while True:
            try:
                pid, data = self.__queue.get_nowait()
            except Empty:
                break
            cache[pid] += data

        for pid in sorted(cache):
            self.__buffer.write('%s wrote: %r\n' % (pid, cache[pid]))

    def write(self, data):
        self.__queue.put((getpid(), data))

    def __iter__(self):
        "getattr doesn't work for iter()"
        self.buffer()
        return self.__buffer

    def getvalue(self):
        self.buffer()
        return self.__buffer.getvalue()

    def flush(self):
        "meaningless"
        pass
... and a quick test script:
#!/usr/bin/python2.6
from multiprocessing import Process
from mpfile import MultiProcessFile

def printer(msg):
    print msg

processes = []
for i in range(20):
    processes.append(Process(target=printer, args=(i,), name='printer'))

print 'START'
import sys
buffer = MultiProcessFile()
sys.stdout = buffer

for p in processes:
    p.start()
for p in processes:
    p.join()

for i in range(20):
    print i,
print

sys.stdout = sys.__stdout__
sys.stderr = sys.__stderr__
print
print 'DONE'
print
buffer.buffer()
print buffer.getvalue()
This works perfectly 95% of the time, but it has three edge-case problems. I have to run the test script in a fast while-loop to reproduce these.
3% of the time, the parent process output isn't completely reflected. I assume this is because the data is being consumed before the Queue-flushing thread can catch up. I haven't thought of a way to wait for the thread without deadlocking.
.5% of the time, there's a traceback from the multiprocessing.Queue implementation.
.01% of the time, the PIDs wrap around, and so sorting by PID gives the wrong ordering.
In the very worst case (odds: one in 70 million), the output would look like this:
START
DONE
302 wrote: '19\n'
32731 wrote: '0 1 2 3 4 5 6 7 8 '
32732 wrote: '0\n'
32734 wrote: '1\n'
32735 wrote: '2\n'
32736 wrote: '3\n'
32737 wrote: '4\n'
32738 wrote: '5\n'
32743 wrote: '6\n'
32744 wrote: '7\n'
32745 wrote: '8\n'
32749 wrote: '9\n'
32751 wrote: '10\n'
32752 wrote: '11\n'
32753 wrote: '12\n'
32754 wrote: '13\n'
32756 wrote: '14\n'
32757 wrote: '15\n'
32759 wrote: '16\n'
32760 wrote: '17\n'
32761 wrote: '18\n'
Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.6/threading.py", line 532, in __bootstrap_inner
File "/usr/lib/python2.6/threading.py", line 484, in run
File "/usr/lib/python2.6/multiprocessing/queues.py", line 233, in _feed
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
In python2.7 the exception is slightly different:
Exception in thread QueueFeederThread (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
File "/usr/lib/python2.7/threading.py", line 505, in run
File "/usr/lib/python2.7/multiprocessing/queues.py", line 268, in _feed
<type 'exceptions.IOError'>: [Errno 32] Broken pipe
How do I get rid of these edge cases?
The solution came in two parts. I've successfully run the test program 200 thousand times without any change in output.
The easy part was to use multiprocessing.current_process()._identity to sort the messages. This is not a part of the published API, but it is a unique, deterministic identifier of each process. This fixed the problem with PIDs wrapping around and giving a bad ordering of output.
The other part of the solution was to use multiprocessing.Manager().Queue() rather than the multiprocessing.Queue. This fixes problem #2 above because the manager lives in a separate Process, and so avoids some of the bad special cases when using a Queue from the owning process. #3 is fixed because the Queue is fully exhausted and the feeder thread dies naturally before python starts shutting down and closes stdin.
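For illustration, here is a sketch of how those two changes might look when folded back into the MultiProcessFile class from the question (my adaptation, not the poster's exact final code):

try:
    from cStringIO import StringIO
except ImportError:
    from StringIO import StringIO
from collections import defaultdict
from multiprocessing import Manager, current_process
from os import getpid
from Queue import Empty

class MultiProcessFile(object):
    """Variant of the question's class with the two fixes applied."""

    def __init__(self):
        self.__master = getpid()
        # Manager-backed queue: the data lives in the manager's own process,
        # so there is no per-process feeder thread to die at interpreter shutdown.
        self.__queue = Manager().Queue()
        self.__buffer = StringIO()
        self.softspace = 0

    def buffer(self):
        if getpid() != self.__master:
            return
        cache = defaultdict(str)
        while True:
            try:
                identity, data = self.__queue.get_nowait()
            except Empty:
                break
            cache[identity] += data
        for identity in sorted(cache):
            self.__buffer.write('%s wrote: %r\n' % (identity, cache[identity]))

    def write(self, data):
        # _identity is undocumented but unique and deterministic per process,
        # unlike PIDs, which can wrap around and break the sort order.
        self.__queue.put((current_process()._identity, data))

    def __iter__(self):
        self.buffer()
        return self.__buffer

    def getvalue(self):
        self.buffer()
        return self.__buffer.getvalue()

    def flush(self):
        pass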
I have encountered far fewer multiprocessing bugs with Python 2.7 than with Python 2.6. Having said this, the solution I used to avoid the "Exception in thread QueueFeederThread" problem is to sleep momentarily, possibly for 0.01s, in each process in which the Queue is used. It is true that using sleep is not desirable or even reliable, but the specified duration was observed to work sufficiently well in practice for me. You can also try 0.1s.
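As an illustration of that workaround (my example, not the answerer's code), the printer() function in the test script above could become:

import time

def printer(msg):
    print msg
    # Momentary sleep so the Queue's feeder thread can finish sending the data
    # before this process exits; 0.01s worked in practice, 0.1s is a fallback.
    time.sleep(0.01)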