I'm using Parallel Python to run multiple dynamic simulations using a module called OrcFxAPI. The program works perfectly when run as a Python script on my machine; however, if I convert it to an exe file using py2exe and then run that, I get the following error:
Traceback (most recent call last):
File "Analysis.pyc", line 500, in multiprocessor
File "pp.pyc", line 342, in __init__
File "pp.pyc", line 506, in set_ncpus
File "pp.pyc", line 140, in __init__
File "pp.pyc", line 152, in start
File "pptransport.pyc", line 140, in receive
RuntimeError: Communication pipe read error
It is failing at this line in my program:
job_server = pp.Server(ppservers=ppservers)
but I think it might have something to do with the path used to import the OrcFxAPI module when submitting the job:
job = job_server.submit(max_seastate, (gui_vars, case_list, case, line_info, output_vars), (), ("OrcFxAPI",), callback=finished, callbackargs=(case_no, no_of_cases,))
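One way to test that theory is to submit a tiny diagnostic job that reports where a worker process finds OrcFxAPI. This is only a sketch (check_import is a hypothetical helper, written for the Python 2-era pp that py2exe targets); run it from the plain script first, since in the frozen exe pp.Server() itself is what fails:
def check_import():
    # Runs inside a pp worker process: return the path where OrcFxAPI
    # is found, or the ImportError text if the worker cannot see it.
    import imp
    try:
        return imp.find_module("OrcFxAPI")[1]
    except ImportError as e:
        return str(e)

import pp
job_server = pp.Server(ppservers=())
job = job_server.submit(check_import, (), (), ())
print job()  # path as seen by the worker, or the import error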
I'm trying to run a Python script using SEM. The script runs only against the ns3-mmwave-2.0 module; however, while SEM is building ns3-mmwave, it fails with an error in the ns3-lte module.
What could be the reason? Note that I can't use other versions of ns-3, because the code I'm trying to run only works with version 2.0 of this mmwave module.
Building ns-3: 62%|████████████▎ | 1563/2535 [10:29<6:47:36, 25.16s/file]
Traceback (most recent call last):
File "ns3-mmwave-2.0/thz-sem.py", line 103, in <module>
campaign = sem.CampaignManager.new(
File "/home/dang/.local/lib/python3.8/site-packages/sem/manager.py", line 116, in new
runner = CampaignManager.create_runner(ns_path, script,
File "/home/dang/.local/lib/python3.8/site-packages/sem/manager.py", line 209, in create_runner
return locals().get(runner_type,
File "/home/dang/.local/lib/python3.8/site-packages/sem/runner.py", line 56, in __init__
self.configure_and_build(path, optimized=optimized)
File "/home/dang/.local/lib/python3.8/site-packages/sem/runner.py", line 148, in configure_and_build
for current, total in pbar:
File "/home/dang/.local/lib/python3.8/site-packages/tqdm/std.py", line 1176, in __iter__
for obj in iterable:
File "/home/dang/.local/lib/python3.8/site-packages/sem/runner.py", line 169, in get_build_output
raise Exception("Compilation ended with an error"
Exception: Compilation ended with an error.
STDERR
b"In file included from ../../src/lte/model/epc-mme-application.cc:30:\n../../src/lte/model/epc-s11-sap.h: In member function \xe2\x80\x98void ns3::EpcMmeApplication::DoInitialUeMessage(uint64_t, uint16_t, uint64_t, uint16_t)\xe2\x80\x99:\n../../src/lte/model/epc-s11-sap.h:183:10: error: \xe2\x80\x98msg.ns3::EpcS11SapSgw::CreateSessionRequestMessage::<anonymous>\xe2\x80\x99 may be used uninitialized in this function [-Werror=maybe-uninitialized]\n 183 | struct CreateSessionRequestMessage : public GtpcMessage\n | ^~~~~~~~~~~~~~~~~~~~~~~~~~~\ncc1plus: all warnings being treated as errors\n\nBuild failed\n -> task in 'ns3-lte' failed with exit status 1 (run with -v to display more information)\n"
STDOUT
(empty)
Fixing this properly requires initializing the structures and variables that the compiler flags.
Update: apparently there is a waf option to stop these kinds of warnings from being treated as errors: --disable-werror.
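For a manual build, that looks like the following (hedged: SEM drives waf itself, so you may need to run the configure step once in the ns3-mmwave-2.0 directory before launching the SEM campaign):
./waf configure --disable-werror
./waf build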
I am trying to run a preprocessing pipeline using nipype and I get the following error message:
Traceback (most recent call last):
File "preprocscript.py", line 211, in <module>
preproc.run('MultiProc', plugin_args={'n_procs': 8})
File "/sw/anaconda/3/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 579, in run
runner = plugin_mod(plugin_args=plugin_args)
File "/sw/anaconda/3/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 162, in __init__
initargs=(self._cwd,)
File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 175, in __init__
self._repopulate_pool()
File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 236, in _repopulate_pool
self._wrap_exception)
File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 250, in _repopulate_pool_static
wrap_exception)
File "/sw/anaconda/3/lib/python3.6/multiprocessing/process.py", line 73, in __init__
assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now
and I am not sure what exactly in my code leads to this, or whether it is an issue with my software setup.
I am on a Linux system and use Python 3.6.
The module you are using relies on a ProcessPoolExecutor. In Python 3.7 some additional arguments were added to that class, namely initargs, which is what the nipype multiprocessing module you are using passes in. Unfortunately this is not backwards compatible with 3.6, and nipype did not write in an alternative way to use that module.
Your options are to upgrade or not use the multiprocessing portion of nipype.
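For illustration, the 3.6/3.7 gap looks like this; a minimal sketch, not nipype's actual code, with _init and the cwd value as stand-ins for what nipype passes:
import sys
from concurrent.futures import ProcessPoolExecutor

def _init(cwd):
    # Per-worker setup, e.g. switch into the working directory.
    import os
    os.chdir(cwd)

if __name__ == "__main__":
    if sys.version_info >= (3, 7):
        # initializer/initargs were added to ProcessPoolExecutor in 3.7.
        pool = ProcessPoolExecutor(max_workers=8, initializer=_init, initargs=("/tmp",))
    else:
        # On 3.6 the same call raises TypeError: there is no fallback.
        pool = ProcessPoolExecutor(max_workers=8)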
I have an IPython notebook that I was running multi-threaded and have changed over to try and run on multiple processors. There is a substantial amount of code that I won't quote here unless someone thinks a specific portion of it would be helpful to look at. The gist is that I'm trying to wrap a load balancer around PyOpenCL, so I need to feed multiple independent queues: partly to solve a specific problem (a particle simulation) and partly as a learning exercise.
At no point am I intentionally opening any files, yet it's crashing with this error:
Traceback (most recent call last):
File "//anaconda/lib/python3.4/site-packages/IPython/core/interactiveshell.py", line 3035, in run_code
File "<ipython-input-23-d29c87aba720>", line 1, in <module>
co.convergeStep(mags[3])
File "<ipython-input-17-0eac43f0bbbf>", line 62, in convergeStep
evt=self.nlb.qTask(evtList,(b,k),b.forces,k,out=b.forceAccum,out2=k.forceAccum,sum=True)
File "<ipython-input-13-0f26b8ae660e>", line 36, in qTask
t=StagingThread(waitlist,c,self.pool)
File "<ipython-input-10-549b980aea87>", line 7, in __init__
self.killme=mp.Event()
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 91, in Event
File "//anaconda/lib/python3.4/multiprocessing/synchronize.py", line 336, in __init__
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 76, in Condition
File "//anaconda/lib/python3.4/multiprocessing/synchronize.py", line 215, in __init__
File "//anaconda/lib/python3.4/multiprocessing/context.py", line 81, in Semaphore
File "//anaconda/lib/python3.4/multiprocessing/synchronize.py", line 127, in __init__
File "//anaconda/lib/python3.4/multiprocessing/synchronize.py", line 60, in
__init__ OSError: [Errno 24] Too many open files
Is there something opening files in the background that I'm not considering?
It looks like I have 11 processes spawned at the time of failure. I'm running under OS X 10.10.3, Python 3.4.3, IPython 3.1. I am not using multiprocessing.Pool (at least not intentionally). The failure seems to be happening in Event. Other exceptions occur while this one is being handled, but they're mostly more 'too many open files' errors, plus some traceback failures and other things that look like fallout from the above...
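For what it's worth, every multiprocessing synchronization primitive is backed by OS-level resources, so Events consume descriptors even though no file is ever open()ed. A minimal sketch that reproduces the symptom on OS X, where named semaphores count against the open-file limit (the default ulimit there is 256; on Linux the loop may simply succeed):
import multiprocessing as mp
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("fd limit:", soft)

events = []
try:
    # Each Event builds a Condition, which builds Semaphores, each of
    # which holds a kernel handle until it is garbage-collected.
    for _ in range(100000):
        events.append(mp.Event())
except OSError as e:
    print("created", len(events), "events before:", e)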
I am calling an R script file from Python using pyRserve. I have Rserve running. At arbitrary points in the R script, pyRserve gives an error and quits:
Traceback (most recent call last):
File "scriptV2.py", line 272, in <module>
rConn.eval("source(file.PropensityFlow)")
File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rconn.py", line 47, in decoCheckIfClosed
return func(self, *args, **kw)
File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rconn.py", line 119, in eval
return rparse(src, atomicArray=atomicArray)
File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 539, in rparse
return rparser.parse()
File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 349, in parse
self.lexer.readHeader()
File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 94, in readHeader
self.responseCode = struct.unpack(b'<i', self.read(3) + b'\x00')[0]
File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 149, in read
raise EndOfDataError()
pyRserve.rparser.EndOfDataError
I have set Rserv.conf to the following:
maxinbuf 20000000
maxsendbuf 0
Does anybody know why this happens? It looks like some buffer problem, because the R script runs fine on its own.
This is a late answer, but in such situations it is useful to run Rserve in debug mode so its output can be monitored in a separate shell:
R CMD Rserve.dbg
In some rare cases I've seen Rserve print warnings to the console, and when that happened the command sent through pyRserve didn't return any value from Rserve, which led to the EndOfDataError above.
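A small sketch of how I pair the two (with Rserve started in debug mode in another shell, the statement that killed the connection shows up in the debug output just before the exception; host and port are the defaults, adjust as needed):
import pyRserve

conn = pyRserve.connect(host='localhost', port=6311)
try:
    conn.eval('source(file.PropensityFlow)')  # the call that dies above
except pyRserve.rparser.EndOfDataError:
    # The connection dropped mid-response: check the Rserve debug console
    # for the warning R printed right before this point.
    raise
finally:
    conn.close()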
I'm having trouble running tasks. When I run ./manage celeryd -B -l info, it correctly loads all tasks into the registry.
The error happens when any of the tasks run: the task starts, does its thing, and then I get:
[ERROR/MainProcess] Thread 'ResultHandler' crashed: ValueError('Octet out of range 0..2**64-1',)
Traceback (most recent call last):
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 221, in run
return self.body()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 458, in body
on_state_change(task)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 436, in on_state_change
state_handlers[state](*args)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 413, in on_ack
cache[job]._ack(i, time_accepted, pid)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/concurrency/processes/pool.py", line 1016, in _ack
self._accept_callback(pid, time_accepted)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 424, in on_accepted
self.acknowledge()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/job.py", line 516, in acknowledge
self.on_ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/celery/worker/consumer.py", line 405, in ack
message.ack()
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/kombu-2.1.0-py2.7.egg/kombu/transport/base.py", line 98, in ack
self.channel.basic_ack(self.delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/channel.py", line 1740, in basic_ack
args.write_longlong(delivery_tag)
File "/Users/jzelez/Sites/my_virtual_env/lib/python2.7/site-packages/amqplib-1.0.2-py2.7.egg/amqplib/client_0_8/serialization.py", line 325, in write_longlong
raise ValueError('Octet out of range 0..2**64-1')
ValueError: Octet out of range 0..2**64-1
I should also note that this worked on my previous Lion install. Even if I create a blank virtualenv with some test code, the error appears as soon as a task runs.
This happens with Python 2.7.2 and 2.6.4.
Django==1.3.1
amqplib==1.0.2
celery==2.4.6
django-celery==2.4.2
It appears there is some bug with the Homebrew-installed Python. I've now switched to the native Lion one (2.7.1) and it works.
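If you hit the same thing, a quick check of which interpreter the virtualenv actually picked up (the paths shown are typical locations, not taken from the question):
import sys
# The native Lion Python lives under /usr/bin; Homebrew's under /usr/local.
print sys.executable  # e.g. /usr/bin/python vs /usr/local/bin/python
print sys.version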