I'm having problems attaching to gevent-based processes with PyDev's debugger under PyDev 3.9.2.
It works fine in PyDev 3.9.1 and LiClipse 3.9.1, just not with 3.9.2, which is the latest PyDev right now (6 Feb 2015).
It also works properly when the code is run directly under PyDev's debugger rather than attached to externally.
Note that it doesn't depend on whether any breakpoints are set (enabled or not); the mere fact of attaching to the process suffices for the exception to be raised.
Here is a sample module to reproduce the behaviour, along with two exceptions: one from PyDev's side and the other from the gevent code's point of view.
Can anyone please shed any light on it? Thanks a lot.
from gevent.monkey import patch_all
patch_all()

import threading

import gevent

def myfunc():
    t = threading.current_thread()
    print(t.name)

while True:
    gevent.spawn(myfunc)
    gevent.sleep(1)
Debug Server at port: 5678
Traceback (most recent call last):
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd_attach_to_process/attach_script.py", line 16, in attach
patch_multiprocessing=False,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1828, in settrace
patch_multiprocessing,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1920, in _locked_settrace
CheckOutputThread(debugger).start()
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 261, in start
thread.start()
File "/usr/lib/python2.7/threading.py", line 750, in start
self.__started.wait()
File "/usr/lib/python2.7/threading.py", line 620, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 339, in wait
waiter.acquire()
File "_semaphore.pyx",...
Traceback (most recent call last):
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd_attach_to_process/attach_script.py", line 16, in attach
patch_multiprocessing=False,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1828, in settrace
patch_multiprocessing,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1920, in _locked_settrace
CheckOutputThread(debugger).start()
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 261, in start
thread.start()
File "/usr/lib/python2.7/threading.py", line 750, in start
self.__started.wait()
File "/usr/lib/python2.7/threading.py", line 620, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 339, in wait
waiter.acquire()
File "_semaphore.pyx", line 112, in gevent._semaphore.Semaphore.acquire (gevent/gevent._semaphore.c:3004)
File "/home/dsuch/projects/pydev-plugin/pydev-plugin/local/lib/python2.7/site-packages/gevent/hub.py", line 330, in switch
switch_out()
File "/home/dsuch/projects/pydev-plugin/pydev-plugin/local/lib/python2.7/site-packages/gevent/hub.py", line 334, in switch_out
raise AssertionError('Impossible to call blocking function in the event loop callback')
AssertionError: Impossible to call blocking function in the event loop callback
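For what it's worth, the AssertionError itself comes from gevent's hub refusing to block while it is executing an event loop callback. A minimal sketch that reproduces the same error directly (an assumption about the mechanism based on gevent 1.0's hub API, not the PyDev attach code):

import gevent
from gevent.lock import Semaphore

def blocking_callback():
    # Blocking requires switching to the hub, but this callback already runs
    # in the hub greenlet, so gevent raises the AssertionError shown above.
    Semaphore(0).acquire()

gevent.get_hub().loop.run_callback(blocking_callback)
gevent.sleep(0.1)  # give the loop a chance to run the callback and report the error

If the attach script's CheckOutputThread.start() lands on this path after patch_all() (because Thread.start() now waits on a gevent semaphore), that would explain the traceback above.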
I am working with the sh package in Python 2. The script works well when the version of sh is 1.11, but after upgrading to version 1.12 or above an error occurs while running the program. The error is:
AttachedBlocks INFO: crypto_drive_handler.get_key: Trying to get key from metadata partition...
Exception in thread STDIN thread for pid 54714:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 1637, in wrap
fn(*rgs, **kwargs)
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 2511, in input_thread
done = stdin.write()
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 2818, in write
chunk = self.get_chunk()
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 2749, in fn
poller.register_read(stdin)
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 187, in register_read
self._register(f, select.POLLIN | select.POLLPRI)
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 182, in _register
self._set_fileobject(f)
File "/home/kevin/block_dev_hook/venv/local/lib/python2.7/site-packages/sh.py", line 158, in _set_fileobject
fd = f.fileno()
UnsupportedOperation: fileno
Does anyone know what the error could be? Has something changed from version 1.11 to versions 1.12 and above?
I am using
sh.dd("if=" + device_path,
"bs=64k",
"skip=" + str(i * ENC_KEY_SIZE),
"count=" + str(ENC_KEY_SIZE),
"iflag=skip_bytes,count_bytes",
_out=encrypted_key)
after "Trying to get key from metadata partition..." info. I can't share more code due to different reasons.
Thanks in advance!
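For what it's worth, the traceback boils down to sh 1.12's poll-based stdin handling calling fileno() on the object it uses as the command's standard input; anything without a real OS file descriptor (an io.StringIO, a replaced sys.stdin, and so on) raises exactly this error. A minimal illustration under that assumption (it is not the sh internals):

import io
import select

fake_stdin = io.StringIO(u"")
poller = select.poll()
try:
    # poll() needs a real file descriptor, which StringIO-like objects lack.
    poller.register(fake_stdin.fileno(), select.POLLIN | select.POLLPRI)
except io.UnsupportedOperation as exc:
    print(exc)  # prints: fileno

So a first thing to check is what sh ends up using as stdin for dd in the environment where the script runs (e.g. whether sys.stdin has been replaced).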
I have a list of 80,000 strings that I am running through a discourse parser, and in order to increase the speed of this process I have been trying to use the Python multiprocessing package.
The parser code requires Python 2.7, and I am currently running it on a 2-core Ubuntu machine using a subset of the strings. For short lists, e.g. 20 strings, the process runs without an issue on both cores; however, if I run a list of about 100 strings, both workers freeze at different points (so in some cases worker 1 won't stop until a few minutes after worker 2). This happens before all the strings are finished and anything is returned. Each time the cores stop at the same point as long as the same mapping function is used, but the points differ if I try a different mapping function, i.e. map vs map_async vs imap.
I have tried removing the strings at those indices, which did not have any effect, and those strings run fine in a shorter list. Based on the print statements I included, when the process appears to freeze the current iteration finishes for the current string, and it just does not move on to the next string. It takes about an hour of run time to reach the spot where both workers have frozen, and I have not been able to reproduce the issue in less time. The code involving the multiprocessing commands is:
import csv
import sys
import multiprocessing

import pandas as pd

# discourse_process(string) is defined elsewhere in the script.

def main(initial_file, chunksize = 2):
    entered_file = pd.read_csv(initial_file)
    entered_file = entered_file.ix[:, 0].tolist()
    pool = multiprocessing.Pool()
    result = pool.map_async(discourse_process, entered_file, chunksize = chunksize)
    pool.close()
    pool.join()
    with open("final_results.csv", 'w') as file:
        writer = csv.writer(file)
        for listitem in result.get():
            writer.writerow([listitem[0], listitem[1]])

if __name__ == '__main__':
    main(sys.argv[1])
When I stop the process with Ctrl-C (which does not always work), the error message I receive is:
^CTraceback (most recent call last):
File "Combined_Script.py", line 94, in <module>
main(sys.argv[1])
File "Combined_Script.py", line 85, in main
pool.join()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 474, in join
p.join()
File "/usr/lib/python2.7/multiprocessing/process.py", line 145, in join
res = self._popen.wait(timeout)
File "/usr/lib/python2.7/multiprocessing/forking.py", line 154, in wait
return self.poll(0)
File "/usr/lib/python2.7/multiprocessing/forking.py", line 135, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Process PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 117, in worker
put((job, i, result))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
wacquire()
KeyboardInterrupt
^CProcess PoolWorker-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 117, in worker
put((job, i, result))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 392, in put
return send(obj)
KeyboardInterrupt
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.7/multiprocessing/util.py", line 305, in _exit_function
_run_finalizers(0)
File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
finalizer()
File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 500, in _terminate_pool
outqueue.put(None) # sentinel
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
wacquire()
KeyboardInterrupt
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.7/multiprocessing/util.py", line 305, in _exit_function
_run_finalizers(0)
File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
finalizer()
File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 500, in _terminate_pool
outqueue.put(None) # sentinel
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
wacquire()
KeyboardInterrupt
When I look at the memory in another command window using htop, memory usage is at <3% once the workers freeze. This is my first attempt at parallel processing, and I am not sure what else I might be missing.
I was not able to solve the issue with multiprocessing.Pool; however, I came across the loky package and was able to use it to run my code with the following lines:
executor = loky.get_reusable_executor(timeout = 200, kill_workers = True)
results = executor.map(discourse_process, entered_file)
You could define a time limit for your process to return a result; otherwise it raises an error:
try:
    result.get(timeout = 1)
except multiprocessing.TimeoutError:
    print("Error while retrieving the result")
Also, you could verify whether your result is successful with:
import time

while True:
    try:
        result.successful()  # raises if the result is not ready yet
        break
    except Exception:
        print("Result is not yet successful")
        time.sleep(1)
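Putting the two ideas together, a minimal sketch (assuming discourse_process and entered_file as in the question) that avoids blocking indefinitely in pool.join():

import multiprocessing

pool = multiprocessing.Pool()
result = pool.map_async(discourse_process, entered_file, chunksize = 2)
pool.close()

try:
    # Wait with a generous timeout instead of pool.join(); a frozen worker then
    # surfaces as a TimeoutError in the parent rather than a silent hang.
    rows = result.get(timeout = 3600)
except multiprocessing.TimeoutError:
    pool.terminate()
    raise
pool.join()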
Finally, checking out https://docs.python.org/2/library/multiprocessing.html is helpful.
I am using Plone 4.3.3 to create my Plone site, but when I shut down the server it shows the following error.
Traceback (most recent call last):
File "/Plone/zinstance/parts/instance/bin/interpreter", line 298, in <module>
exec(compile(__file__f.read(), __file__, "exec"))
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 76, in <module>
run()
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/run.py", line 26, in run
starter.run()
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 108, in run
self.shutdown()
File "/Plone/buildout-cache/eggs/Zope2-2.13.22-py2.7.egg/Zope2/Startup/__init__.py", line 113, in shutdown
db.close()
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 624, in close
@self._connectionMap
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 506, in _connectionMap
self.pool.map(f)
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 206, in map
self.all.map(f)
File "/Plone/buildout-cache/eggs/transaction-1.1.1-py2.7.egg/transaction/weakset.py", line 58, in map
f(elt)
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/DB.py", line 628, in _
c._release_resources()
File "/Plone/buildout-cache/eggs/ZODB3-3.10.5-py2.7-linux-i686.egg/ZODB/Connection.py", line 1075, in _release_resources
c._storage.release()
AttributeError: 'NoneType' object has no attribute 'release'
There is an issue with Zope2 shutdown that tries to close a database connection (and, in turn, a storage) very late in the shutdown sequence. This late-running sequence has a cosmetic side effect for users of RelStorage: it is annoying, but not fundamentally a problem that should cause any data integrity issues.
Users of FileStorage or ZEO should not see this.
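A hedged sketch of the symptom (illustrative only, not the actual ZODB/Zope2 code): by the time the shutdown path walks the connection pool, a connection's storage may already have been released and cleared, so an unconditional release() call fails with exactly this AttributeError. A defensive guard would look roughly like this:

class Connection(object):
    """Hypothetical, stripped-down stand-in for ZODB's Connection."""

    def __init__(self, storage):
        self._storage = storage

    def _release_resources(self):
        # Guard against the storage having already been released earlier in
        # the shutdown sequence.
        if self._storage is not None:
            self._storage.release()
            self._storage = None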
References:
https://github.com/zopefoundation/Zope/commit/5032027470091957a6c0028da04c0fc0a1ed646b
https://mail.zope.org/pipermail/zodb-dev/2013-August/015119.html
I've never used the multiprocessing library before, so all advice is welcome.
I've got a Python program that uses the multiprocessing library to do some memory-intensive tasks in multiple processes, which occasionally runs out of memory (I'm working on optimizations, but that's not what this question is about). Sometimes an out-of-memory error gets thrown in a way that I can't seem to catch (output below), and then the program hangs on pool.join() (I'm using multiprocessing.Pool). How can I make the program do something other than wait indefinitely when this problem occurs?
Ideally, the memory error is propagated back to the main process, which then dies.
Here's the memory error:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
self.run()
File "/usr/lib64/python2.7/threading.py", line 764, in run
self.__target(*self.__args, **self.__kwargs)
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
pool._maintain_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 229, in _maintain_pool
self._repopulate_pool()
File "/usr/lib64/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
w.start()
File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
self._popen = Popen(self)
File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
And here's where I manage multiprocessing:
mp_pool = mp.Pool(processes=num_processes)
mp_results = list()
for datum in input_data:
    data_args = {
        'value': 0  # actually some other simple dict key/values
    }
    mp_results.append(mp_pool.apply_async(_process_data, args=(common_args, data_args)))
mp_pool.close()
mp_pool.join()  # hangs here when that thread dies..
for result_async in mp_results:
    result = result_async.get()
    # do stuff to collect results
# rest of the code
When I interrupt the hanging program, I get:
Process process_003:
Traceback (most recent call last):
File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/pool.py", line 102, in worker
task = get()
File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/queues.py", line 374, in get
return recv()
racquire()
KeyboardInterrupt
This is actually a known bug in Python's multiprocessing module, fixed in Python 3 (here's a summarizing blog post I found). There's a patch attached to Python issue 22393, but it hasn't been officially applied.
Basically, if one of a multiprocessing pool's sub-processes dies unexpectedly (out of memory, killed externally, etc.), the pool will wait indefinitely.
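Until that is fixed, one pragmatic mitigation is to stop relying on pool.join() and instead poll the async results with a timeout while checking that the workers are still alive. A sketch only (it uses the pool's internal _pool attribute and an invented collect_with_watchdog helper, neither of which is part of the public API):

import multiprocessing as mp

def collect_with_watchdog(pool, async_results, poll_interval=5.0):
    results = []
    for res in async_results:
        while True:
            try:
                results.append(res.get(timeout=poll_interval))
                break
            except mp.TimeoutError:
                # If a worker vanished (e.g. killed by the OOM killer), give up
                # instead of waiting forever on a result that will never arrive.
                if not all(p.is_alive() for p in pool._pool):
                    pool.terminate()
                    raise RuntimeError("a pool worker died unexpectedly")
    return results

Used with the code above, this would replace the mp_pool.join() / result_async.get() section, so the parent process raises instead of hanging when a worker dies.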
I'm having trouble running tasks. I run ./manage celeryd -B -l info, and it correctly loads all the tasks into the registry.
The error happens when any of the tasks run - the task starts, does its thing, and then I get:
[ERROR/MainProcess] Thread 'ResultHandler' crashed: ValueError('Octet out of range 0..2**64-1',)
Traceback (most recent call last):
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 221, in run
return self.body()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 458, in body
on_state_change(task)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 436, in on_state_change
state_handlers[state](*args)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 413, in on_ack
cache[job]._ack(i, time_accepted, pid)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 1016, in _ack
self._accept_callback(pid, time_accepted)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 424, in on_accepted
self.acknowledge()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 516, in acknowledge
self.on_ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/consumer.py", line 405, in ack
message.ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/kombu-2.1.0-py2.7.egg/kombu/transport/base.py", line 98, in ack
self.channel.basic_ack(self.delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/channel.py", line 1740, in basic_ack
args.write_longlong(delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/serialization.py", line 325, in write_longlong
raise ValueError('Octet out of range 0..2**64-1')
ValueError: Octet out of range 0..2**64-1
I should also note that this worked on my previous Lion install, and even if I create a blank virtualenv with some test code, the error occurs when a task runs.
This happens with Python 2.7.2 and 2.6.4.
Django==1.3.1
amqplib==1.0.2
celery==2.4.6
django-celery==2.4.2
It appears there is some bug with the Homebrew-installed Python. I've now switched to the native Lion one (2.7.1) and it works.
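For context, the ValueError at the bottom of the traceback is amqplib's range check on the AMQP delivery tag being acknowledged; a sketch of that validation (inferred from the traceback, not copied from a specific amqplib release):

def write_longlong(n):
    # AMQP encodes the delivery tag as an unsigned 64-bit integer, so anything
    # outside that range (e.g. a negative or otherwise corrupted tag) is rejected.
    if n < 0 or n >= 2 ** 64:
        raise ValueError('Octet out of range 0..2**64-1')
    # ... the real implementation then packs n as a big-endian 64-bit value ...

write_longlong(1)    # fine
write_longlong(-1)   # ValueError: Octet out of range 0..2**64-1

So the symptom points at the delivery tag being corrupted somewhere before it reaches amqplib, which is consistent with the interpreter itself being the problem.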