I wonder if anyone can spot the error here. I am running an experiment with approximately 220 trials. There are 3 auditory stimuli that I want to turn on for a variable amount of time and terminate when 'space' is pressed (on some trials). At about trial 148 I get an error indicating that the sound is the issue; the last line is "MemoryError".
Specifically:
if self.lineRGB!=None and self.lineWidth!=0.0:
Traceback (most recent call last):
File "C:\Users\freemali\Desktop\UGH_lastrun.py", line 4352, in <module>
short_stim3 = sound.Sound('C', secs=short_dur)
File "C:\Program Files\PsychoPy2\lib\site-packages\psychopy-1.81.00-py2.7.egg\psychopy\sound.py", line 217, in __init__
self.setSound(value=value, secs=secs, octave=octave)
File "C:\Program Files\PsychoPy2\lib\site-packages\psychopy-1.81.00-py2.7.egg\psychopy\sound.py", line 135, in setSound
self._setSndFromNote(value.capitalize(), secs, octave, hamming=hamming)
File "C:\Program Files\PsychoPy2\lib\site-packages\psychopy-1.81.00-py2.7.egg\psychopy\sound.py", line 167, in _setSndFromNote
self._setSndFromFreq(thisFreq, secs, hamming=hamming)
File "C:\Program Files\PsychoPy2\lib\site-packages\psychopy-1.81.00-py2.7.egg\psychopy\sound.py", line 177, in _setSndFromFreq
self._setSndFromArray(outArr)
File "C:\Program Files\PsychoPy2\lib\site-packages\psychopy-1.81.00-py2.7.egg\psychopy\sound.py", line 291, in _setSndFromArray
thisArray = (thisArray*2**15).astype(numpy.int16)
MemoryError
Is this a result of PsychoPy not releasing the auditory stimuli? Because I want the stimuli on for a variable period of time, I am using stimulus.stop() at the end of the routine. Is this an issue?
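If the per-trial sound.Sound() calls are the culprit (the traceback shows a fresh sample buffer being allocated on each construction), I assume the alternative would look something like the sketch below: create each stimulus once during setup and reuse it across trials. This is a hedged sketch, not my actual Builder code; MAX_DUR is a placeholder for the longest duration any trial can need.

from psychopy import sound

MAX_DUR = 5.0  # hypothetical upper bound on short_dur
short_stim3 = sound.Sound('C', secs=MAX_DUR)  # allocated once, up front

for trial in range(220):
    short_stim3.play()
    # ... run the routine; on 'space' (or at short_dur) ...
    short_stim3.stop()  # stops playback; the same buffer is reused next trial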
Related
When working with luigi on Windows 10, the following error is sometimes thrown:
Traceback (most recent call last):
File "D:\Users\myuser\PycharmProjects\project\venv\lib\site-packages\luigi\worker.py", line 192, in run
new_deps = self._run_get_new_deps()
File "D:\Users\myuser\PycharmProjects\project\venv\lib\site-packages\luigi\worker.py", line 130, in _run_get_new_deps
task_gen = self.task.run()
File "project.py", line 15000, in run
data_frame = pd.read_excel(self.input()["tables"].open(),segment)
File "D:\Users\myuser\PycharmProjects\project\venv\lib\site-packages\pandas\io\excel.py", line 191, in read_excel
io = ExcelFile(io, engine=engine)
File "D:\Users\myuser\PycharmProjects\project\venv\lib\site-packages\pandas\io\excel.py", line 247, in __init__
self.book = xlrd.open_workbook(file_contents=data)
File "D:\Users\myuser\PycharmProjects\project\venv\lib\site-packages\xlrd\__init__.py", line 115, in open_workbook
zf = zipfile.ZipFile(timemachine.BYTES_IO(file_contents))
File "C:\Python27\lib\zipfile.py", line 793, in __init__
self._RealGetContents()
File "C:\Python27\lib\zipfile.py", line 862, in _RealGetContents
raise BadZipfile("Bad magic number for central directory")
BadZipfile: Bad magic number for central directory
This error seems to happen only on Windows, and more frequently when a task needs to access the same file more than once within a single task, or when the same file is used as a Target by different tasks, even when no parallel workers are running. The code runs without errors on Linux.
In short, I'm trying to create pandas DataFrames from Excel file sheets, but I sometimes get a BadZipFile error instead. Has anyone else experienced this behavior?
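For reference, a sketch of one workaround that is sometimes suggested, under the assumption that the Target is a plain LocalTarget: on Windows, a target opened in text mode translates line endings, which corrupts binary .xlsx (zip) content and can surface as BadZipfile. Declaring the target with a binary format, or bypassing the wrapper via the target's path, avoids the translation. The task wiring and file name below are placeholders, not the real code.

import luigi
import pandas as pd

class MakeTables(luigi.Task):
    def output(self):
        # format=luigi.format.Nop keeps reads and writes byte-for-byte
        return luigi.LocalTarget("tables.xlsx", format=luigi.format.Nop)

class UseTables(luigi.Task):
    def requires(self):
        return {"tables": MakeTables()}

    def run(self):
        with self.input()["tables"].open("r") as handle:
            data_frame = pd.read_excel(handle)
        # Alternatively, skip the wrapper and let pandas open the file itself:
        # data_frame = pd.read_excel(self.input()["tables"].path)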
I have a list of 80,000 strings that I am running through a discourse parser, and in order to speed this process up I have been trying to use the Python multiprocessing package.
The parser code requires Python 2.7, and I am currently running it on a 2-core Ubuntu machine using a subset of the strings. For short lists, e.g. 20 strings, the process runs without an issue on both cores; however, with a list of about 100 strings, both workers freeze at different points (so in some cases worker 1 won't stop until a few minutes after worker 2). This happens before all the strings are finished and anything is returned. The cores stop at the same points each time when the same mapping function is used, but those points differ if I try a different mapping function, i.e. map vs map_async vs imap.
I have tried removing the strings at those indices, which did not have any effect, and those strings run fine in a shorter list. Based on print statements I included, when the process appears to freeze, the current iteration finishes for the current string and it simply does not move on to the next string. It takes about an hour of run time to reach the spot where both workers freeze, and I have not been able to reproduce the issue in less time. The code involving the multiprocessing commands is:
import csv
import sys
import multiprocessing

import pandas as pd

def main(initial_file, chunksize=2):
    # Read the input strings from the first column of the CSV
    entered_file = pd.read_csv(initial_file)
    entered_file = entered_file.iloc[:, 0].tolist()
    # discourse_process is defined elsewhere in the script
    pool = multiprocessing.Pool()
    result = pool.map_async(discourse_process, entered_file, chunksize=chunksize)
    pool.close()
    pool.join()
    with open("final_results.csv", 'w') as outfile:
        writer = csv.writer(outfile)
        for listitem in result.get():
            writer.writerow([listitem[0], listitem[1]])

if __name__ == '__main__':
    main(sys.argv[1])
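In case it helps to localize the hang, here is a hedged diagnostic sketch under the same assumptions as the code above: imap yields results in input order, so a per-item timeout at least reports the index of the first string that never comes back (the timeout value is arbitrary).

import multiprocessing

def locate_hang(items, timeout=600):
    pool = multiprocessing.Pool()
    iterator = pool.imap(discourse_process, items)
    for index in range(len(items)):
        try:
            iterator.next(timeout=timeout)  # blocks at most `timeout` seconds
        except multiprocessing.TimeoutError:
            print("No result for item %d within %d seconds" % (index, timeout))
            break
    pool.terminate()
    pool.join()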
When I stop the process with Ctrl-C (which does not always work), the error message I receive is:
^CTraceback (most recent call last):
File "Combined_Script.py", line 94, in <module>
main(sys.argv[1])
File "Combined_Script.py", line 85, in main
pool.join()
File "/usr/lib/python2.7/multiprocessing/pool.py", line 474, in join
p.join()
File "/usr/lib/python2.7/multiprocessing/process.py", line 145, in join
res = self._popen.wait(timeout)
File "/usr/lib/python2.7/multiprocessing/forking.py", line 154, in wait
return self.poll(0)
File "/usr/lib/python2.7/multiprocessing/forking.py", line 135, in poll
pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Process PoolWorker-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 117, in worker
put((job, i, result))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
wacquire()
KeyboardInterrupt
^CProcess PoolWorker-2:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 117, in worker
put((job, i, result))
File "/usr/lib/python2.7/multiprocessing/queues.py", line 392, in put
return send(obj)
KeyboardInterrupt
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.7/multiprocessing/util.py", line 305, in _exit_function
_run_finalizers(0)
File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
finalizer()
File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 500, in _terminate_pool
outqueue.put(None) # sentinel
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
wacquire()
KeyboardInterrupt
Error in sys.exitfunc:
Traceback (most recent call last):
File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
func(*targs, **kargs)
File "/usr/lib/python2.7/multiprocessing/util.py", line 305, in _exit_function
_run_finalizers(0)
File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
finalizer()
File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
res = self._callback(*self._args, **self._kwargs)
File "/usr/lib/python2.7/multiprocessing/pool.py", line 500, in _terminate_pool
outqueue.put(None) # sentinel
File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
wacquire()
KeyboardInterrupt
When I look at memory in another command window using htop, memory usage is below 3% once the workers freeze. This is my first attempt at parallel processing, and I am not sure what else I might be missing.
I was not able to solve the issue with the multiprocessing pool; however, I came across the loky package and was able to run my code with the following lines:
executor = loky.get_reusable_executor(timeout=200, kill_workers=True)
results = executor.map(discourse_process, entered_file)
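Put together, a minimal sketch of the full pipeline on loky, assuming as before that discourse_process returns a pair per input string:

import csv
import sys

import loky
import pandas as pd

def main(initial_file):
    entered_file = pd.read_csv(initial_file).iloc[:, 0].tolist()
    # Workers are recycled, and shut down after `timeout` idle seconds;
    # kill_workers=True forces stale workers to be replaced on reuse.
    executor = loky.get_reusable_executor(timeout=200, kill_workers=True)
    results = executor.map(discourse_process, entered_file)
    with open("final_results.csv", 'w') as outfile:
        writer = csv.writer(outfile)
        for item in results:
            writer.writerow([item[0], item[1]])

if __name__ == '__main__':
    main(sys.argv[1])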
You could set a timeout for retrieving a result from your process; if nothing comes back in time, an error is raised:
try:
    result.get(timeout=1)
except multiprocessing.TimeoutError:
    print("Error while retrieving the result")
You could also verify whether your process was successful with:
import time

while True:
    try:
        result.successful()  # raises AssertionError until the result is ready
        break
    except Exception:
        print("Result is not yet successful")
        time.sleep(1)
Finally, checking out https://docs.python.org/2/library/multiprocessing.html is helpful.
I'm having problems attaching to gevent-based processes with PyDev's debugger under PyDev 3.9.2.
It works fine in PyDev 3.9.1 and LiClipse 3.9.1, just not with 3.9.2 which is the latest PyDev right now (6 Feb, 2015).
It also works properly when a piece of code is run directly from PyDev's debugger rather than externally.
Note that it doesn't depend on whether there are breakpoints set (enabled or not) - the mere fact of attaching to a process suffices for the exception to be raised.
Here is a sample module to reproduce the behaviour along with two exceptions - one from PyDev and the other one from the gevent code's point of view.
Can anyone please shed any light on it? Thanks a lot.
from gevent.monkey import patch_all
patch_all()

import threading

import gevent

def myfunc():
    t = threading.current_thread()
    print(t.name)

while True:
    gevent.spawn(myfunc)
    gevent.sleep(1)
Debug Server at port: 5678
Traceback (most recent call last):
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd_attach_to_process/attach_script.py", line 16, in attach
patch_multiprocessing=False,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1828, in settrace
patch_multiprocessing,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1920, in _locked_settrace
CheckOutputThread(debugger).start()
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 261, in start
thread.start()
File "/usr/lib/python2.7/threading.py", line 750, in start
self.__started.wait()
File "/usr/lib/python2.7/threading.py", line 620, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 339, in wait
waiter.acquire()
File "_semaphore.pyx",...
Traceback (most recent call last):
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd_attach_to_process/attach_script.py", line 16, in attach
patch_multiprocessing=False,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1828, in settrace
patch_multiprocessing,
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 1920, in _locked_settrace
CheckOutputThread(debugger).start()
File "/opt/slow/01/data/install/pydev/eclipse/plugins/org.python.pydev_3.9.2.201502050007/pysrc/pydevd.py", line 261, in start
thread.start()
File "/usr/lib/python2.7/threading.py", line 750, in start
self.__started.wait()
File "/usr/lib/python2.7/threading.py", line 620, in wait
self.__cond.wait(timeout)
File "/usr/lib/python2.7/threading.py", line 339, in wait
waiter.acquire()
File "_semaphore.pyx", line 112, in gevent._semaphore.Semaphore.acquire (gevent/gevent._semaphore.c:3004)
File "/home/dsuch/projects/pydev-plugin/pydev-plugin/local/lib/python2.7/site-packages/gevent/hub.py", line 330, in switch
switch_out()
File "/home/dsuch/projects/pydev-plugin/pydev-plugin/local/lib/python2.7/site-packages/gevent/hub.py", line 334, in switch_out
raise AssertionError('Impossible to call blocking function in the event loop callback')
AssertionError: Impossible to call blocking function in the event loop callback
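Not an explanation of the regression, but a hedged workaround sketch given what the traceback shows (pydevd's CheckOutputThread blocking inside the gevent hub): leave the thread module unpatched while attaching the debugger, so the debugger's helper threads stay on real OS threads. Whether this is acceptable depends on how much the application relies on patched threading.

from gevent.monkey import patch_all
# Patch everything except thread/threading while debugging;
# drop the argument again for production runs.
patch_all(thread=False)

Newer PyDev builds also expose a gevent-compatible debugging setting (backed by the GEVENT_SUPPORT environment variable), which, where available, may be the cleaner fix.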
I'm trying to migrate a CVS repository to SVN using cvs2svn. After dealing with some errors during the first pass (CollectRevsPass), I'm now in the fourth pass and this error appears:
...
c:\Users\Andres\Desktop\copa\copa\proyectosAMEG\JGA\KnapsackJG2A\src\operators\KnapsackSelection.java,v
Done
Time for pass1 (CollectRevsPass): 68.47 seconds.
----- pass 2 (CleanMetadataPass) -----
Converting metadata to UTF8...
Done
Time for pass2 (CleanMetadataPass): 0.437 seconds.
----- pass 3 (CollateSymbolsPass) -----
Checking for forced tags with commits...
Done
Time for pass3 (CollateSymbolsPass): 0.015 seconds.
----- pass 4 (FilterSymbolsPass) -----
Filtering out excluded symbols and summarizing items...
Traceback (most recent call last):
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn", line 70, in <module>
svn_main(os.path.basename(sys.argv[0]), sys.argv[1:])
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\main.py", line 113, in svn_main
main(progname, run_options, pass_manager)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\main.py", line 96, in main
pass_manager.run(run_options)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\pass_manager.py", line 181, in run
the_pass.run(run_options, stats_keeper)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\passes.py", line 505, in run
revision_collector.process_file(cvs_file_items)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\checkout_internal.py", line 615, in process_file
_Sink(self, cvs_file_items),
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\rcsparser.py", line 68, in parse
return selected_parser().parse(file, sink)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_rcsparse\common.py", line 477, in parse
self.parse_rcs_deltatext()
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_rcsparse\common.py", line 450, in parse_rcs_deltatext
self.sink.set_revision_info(revision, log, text)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\checkout_internal.py", line 539, in set_revision_info
text_record, self._rcs_stream.get_text()
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\checkout_internal.py", line 601, in _writeout
self._delta_db[text_record.id] = text
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\indexed_database.py", line 94, in __setitem__
s = self.serializer.dumps(item)
File "C:\Users\Andres\Downloads\cvs2svn-2.4.0.tar\dist\cvs2svn-2.4.0\cvs2svn-2.4.0\cvs2svn_lib\serializer.py", line 138, in dumps
return marshal.dumps(zlib.compress(self.wrapee.dumps(object), 9))
MemoryError
I checked, and there is enough free memory when this error appears. Just before the error appears, the 'python' process's memory usage increases a lot. Does anyone know what I can do?
cvs2svn 2.4
Python 2.7
This error can also indicate that the process has run out of virtual address space. If you are running cvs2svn in 32-bit mode, then the process only has something like 2 GiB or 4 GiB of addresses that it can use, regardless of whether the computer has free RAM. If this is the case, try running the program in 64-bit mode.
If you are already running in 64-bit mode, then try increasing RAM or swap space or running the program on a beefier computer.
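A quick sketch to check which case applies: the running interpreter's pointer size reveals whether it is a 32- or 64-bit build.

import struct
import sys

print(struct.calcsize("P") * 8)  # prints 32 or 64, the interpreter's bitness
print(sys.maxsize > 2**32)       # True only on a 64-bit build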
I'm having trouble running tasks. I run ./manage.py celeryd -B -l info, and it correctly loads all tasks into the registry.
The error happens when any of the tasks runs: the task starts, does its thing, and then I get:
[ERROR/MainProcess] Thread 'ResultHandler' crashed: ValueError('Octet out of range 0..2**64-1',)
Traceback (most recent call last):
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 221, in run
return self.body()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 458, in body
on_state_change(task)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 436, in on_state_change
state_handlers[state](*args)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 413, in on_ack
cache[job]._ack(i, time_accepted, pid)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 1016, in _ack
self._accept_callback(pid, time_accepted)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 424, in on_accepted
self.acknowledge()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 516, in acknowledge
self.on_ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/consumer.py", line 405, in ack
message.ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/kombu-2.1.0-py2.7.egg/kombu/transport/base.py", line 98, in ack
self.channel.basic_ack(self.delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/channel.py", line 1740, in basic_ack
args.write_longlong(delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/serialization.py", line 325, in write_longlong
raise ValueError('Octet out of range 0..2**64-1')
ValueError: Octet out of range 0..2**64-1
I must also note that this worked on my previous Lion install, and even if I create a blank virtualenv with some test code, the same error occurs when a task runs.
This happens with Python 2.7.2 and 2.6.4.
Django==1.3.1
amqplib==1.0.2
celery==2.4.6
django-celery==2.4.2
It appears there is some bug with the Homebrew-installed Python. I've now switched to the native Lion one (2.7.1) and it works.
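A quick sketch to confirm which interpreter a virtualenv actually wraps:

import sys

print(sys.executable)  # e.g. /usr/bin/python (system) vs a Homebrew path
print(sys.version)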