I am solving a graph coloring problem using MiniZinc. The process run looks like:
mz_output = subprocess.run(["minizinc", "--time-limit", "10000", "--no-intermediate",
                            "-p", "4", "graph_coloring.mzn", "data.dzn"],
                           shell=False, stdout=subprocess.PIPE).stdout.decode('utf-8')
It works fine for some graphs, but for a graph with 1,000 nodes and ~250,000 edges it throws a KeyboardInterrupt:
Traceback (most recent call last):
  File "submit.py", line 458, in <module>
    main(parser.parse_args())
  File "submit.py", line 390, in main
    results = compute(metadata, args.override)
  File "submit.py", line 181, in compute
    submission = output(problem.input_file, solver_file)
  File "submit.py", line 222, in output
    solution = pkg.solve_it(load_input_data(input_file))
  File "D:\Coursera\Discrete\Coloring\coloring\solver.py", line 73, in solve_it
    shell=False, stdout=subprocess.PIPE).stdout.decode('utf-8')
  File "D:\programs\Anaconda\lib\subprocess.py", line 490, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "D:\programs\Anaconda\lib\subprocess.py", line 951, in communicate
    stdout = self.stdout.read()
KeyboardInterrupt
I supposed it was connected with the size of the graph, but that doesn't appear to be the case. First, it works fine in the MiniZinc IDE and gives intermediate solutions within the first seconds. Second, I would expect timeout or memory errors for a large instance, whereas a KeyboardInterrupt is confusing. How can I resolve this issue?
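For reference, a minimal sketch of the same call with stdout streamed line by line instead of read in one blocking call (a sketch only, using the same flags as above):

import subprocess

proc = subprocess.Popen(
    ["minizinc", "--time-limit", "10000", "--no-intermediate", "-p", "4",
     "graph_coloring.mzn", "data.dzn"],
    stdout=subprocess.PIPE)
# collect the solver's output as it arrives rather than in one stdout.read()
mz_output = b"".join(proc.stdout).decode('utf-8')
proc.wait()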
Related
I have Python code as follows:
try:
    print("Running code " + str(sub.id))
    r = subprocess.call("node codes.js > outputs.txt", shell=True)
except:
    print("Error running submission code id " + str(sub.id))
The code runs a node command using subprocess.call; the node command runs the codes.js file. Sometimes the code throws an error, for example when it contains a document. call: node cannot understand that line, so it errors out.
With try and except, the error thrown when the node command fails is not caught.
The error thrown is as follows:
/home/kofhearts/homework/codes.js:5
  document.getElementById("outputalert").innerHTML = "Hacked";
  ^

ReferenceError: document is not defined
    at solve (/home/kofhearts/homework/codes.js:5:3)
    at Object.<anonymous> (/home/kofhearts/homework/codes.js:13:28)
    at Module._compile (internal/modules/cjs/loader.js:1068:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
    at Module.load (internal/modules/cjs/loader.js:933:32)
    at Function.Module._load (internal/modules/cjs/loader.js:774:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
    at internal/main/run_main_module.js:17:47
Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    main()
  File "manage.py", line 18, in main
    execute_from_command_line(sys.argv)
  File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
    utility.execute()
  File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/__init__.py", line 395, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/base.py", line 330, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/home/kofhearts/.virtualenvs/myenv/lib/python3.7/site-packages/django/core/management/base.py", line 371, in execute
    output = self.handle(*args, **options)
  File "/home/kofhearts/homework/assignments/management/commands/police.py", line 73, in handle
    if isCorrect(data.strip()[:-1], sub.question.outputs, sub.question, sub.code):
  File "/home/kofhearts/homework/assignments/views.py", line 566, in isCorrect
    givenans = [json.loads(e.strip()) for e in received.split('|')]
  File "/home/kofhearts/homework/assignments/views.py",
How is it possible to catch the error when subprocess.call fails? Thanks for the help!
The 'standard' way to do this is to use subprocess.run:
from subprocess import run, CalledProcessError

cmd = ["node", "codes.js"]
try:
    r = run(cmd, check=True, capture_output=True, encoding="utf8")
    with open("outputs.txt", "w") as f:
        f.write(r.stdout)
except CalledProcessError as e:
    print("oh no!")
    print(e.stderr)
Note that I have dropped the redirect and done it in Python. You might be able to redirect with shell=True, but that's a security hole you don't need just for sending stdout to a file.
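If you do want the output to go straight to a file, you can still avoid the shell by passing an open file handle as stdout (a minimal sketch; with stdout going to the file, only stderr is available on the exception):

import subprocess

cmd = ["node", "codes.js"]
try:
    with open("outputs.txt", "w") as f:
        # stdout goes straight to the file; stderr is still captured for the exception
        subprocess.run(cmd, check=True, stdout=f, stderr=subprocess.PIPE, encoding="utf8")
except subprocess.CalledProcessError as e:
    print("oh no!")
    print(e.stderr)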
check=True ensures a CalledProcessError is raised on a non-zero exit status.
capture_output=True is handy because stderr and stdout are attached to the exception, allowing you to retrieve them there. Thanks to @OlvinRoght for pointing that out.
Lastly, it is possible to check manually:
r = run(cmd, capture_output=True, encoding="utf8")
if r.returncode:
    print("Failed", r.stderr, r.stdout)
else:
    print("Success", r.stdout)
I would generally avoid this pattern because:
- try is free for success (and we expect this to succeed)
- catching exceptions is how we normally handle problems, so it's the Right Way (TM)
but YMMV.
I am completely new to the subprocess module, and I am trying to automate deauthentication attack commands. When I run airodump-ng wlan0mon, as you know, it looks for nearby APs and the clients connected to them.
Now when I try to run this command with, say, p = subprocess.run(["airodump-ng", "wlan0mon"], capture_output=True) in Python: as you know, this command runs until the user hits Ctrl+C, so it should save the last output in the variable when the user hits Ctrl+C, but instead I get this error:
Traceback (most recent call last):
  File "Deauth.py", line 9, in <module>
    p3 = subprocess.run(["airodump-ng","wlan0"], capture_output=True)
  File "/usr/lib/python3.8/subprocess.py", line 491, in run
    stdout, stderr = process.communicate(input, timeout=timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1024, in communicate
    stdout, stderr = self._communicate(input, endtime, timeout)
  File "/usr/lib/python3.8/subprocess.py", line 1866, in _communicate
    ready = selector.select(timeout)
  File "/usr/lib/python3.8/selectors.py", line 415, in select
    fd_event_list = self._selector.poll(timeout)
KeyboardInterrupt
What can I try to resolve this?
Just use Python's error handling. Catch any KeyboardInterrupt (within your subprocess function) using try and except statements, like so:
def stuff(things):
    last_value = None
    try:
        ...  # do stuff, updating last_value as output arrives
    except KeyboardInterrupt:
        return last_value
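A slightly fuller sketch of that idea (the helper name is just an example): read the child's output line by line, so whatever arrived before the Ctrl+C is kept, then stop the child cleanly:

import subprocess

def run_until_interrupted(cmd):
    lines = []
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    try:
        for line in proc.stdout:  # accumulate output as it is produced
            lines.append(line)
    except KeyboardInterrupt:
        proc.terminate()  # Ctrl+C: stop the child, keep what we have
    finally:
        proc.wait()
    return "".join(lines)

output = run_until_interrupted(["airodump-ng", "wlan0mon"])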
I have a list of 80,000 strings that I am running through a discourse parser, and to speed this process up I have been trying to use the Python multiprocessing package.
The parser code requires Python 2.7, and I am currently running it on a 2-core Ubuntu machine using a subset of the strings. For short lists, e.g. 20 strings, the process runs without issue on both cores, but if I run a list of about 100 strings, both workers freeze at different points (so in some cases worker 1 won't stop until a few minutes after worker 2). This happens before all the strings are finished and anything is returned. Each time, the cores stop at the same point given the same mapping function, but these points differ if I try a different mapping function, i.e. map vs map_async vs imap.
I have tried removing the strings at those indices, which did not have any effect, and those strings run fine in a shorter list. Based on print statements I included, when the process appears to freeze, the current iteration seems to finish for the current string but it does not move on to the next string. It takes about an hour of run time to reach the spot where both workers freeze, and I have not been able to reproduce the issue in less time. The code involving the multiprocessing commands is:
def main(initial_file, chunksize = 2):
    entered_file = pd.read_csv(initial_file)
    entered_file = entered_file.ix[:, 0].tolist()
    pool = multiprocessing.Pool()
    result = pool.map_async(discourse_process, entered_file, chunksize = chunksize)
    pool.close()
    pool.join()
    with open("final_results.csv", 'w') as file:
        writer = csv.writer(file)
        for listitem in result.get():
            writer.writerow([listitem[0], listitem[1]])

if __name__ == '__main__':
    main(sys.argv[1])
When I stop the process with Ctrl-C (which does not always work), the error message I receive is:
^CTraceback (most recent call last):
  File "Combined_Script.py", line 94, in <module>
    main(sys.argv[1])
  File "Combined_Script.py", line 85, in main
    pool.join()
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 474, in join
    p.join()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 145, in join
    res = self._popen.wait(timeout)
  File "/usr/lib/python2.7/multiprocessing/forking.py", line 154, in wait
    return self.poll(0)
  File "/usr/lib/python2.7/multiprocessing/forking.py", line 135, in poll
    pid, sts = os.waitpid(self.pid, flag)
KeyboardInterrupt
Process PoolWorker-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 117, in worker
    put((job, i, result))
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
    wacquire()
KeyboardInterrupt
^CProcess PoolWorker-2:
Traceback (most recent call last):
  File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 117, in worker
    put((job, i, result))
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 392, in put
    return send(obj)
KeyboardInterrupt
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/lib/python2.7/multiprocessing/util.py", line 305, in _exit_function
    _run_finalizers(0)
  File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
    finalizer()
  File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 500, in _terminate_pool
    outqueue.put(None)  # sentinel
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
    wacquire()
KeyboardInterrupt
Error in sys.exitfunc:
Traceback (most recent call last):
  File "/usr/lib/python2.7/atexit.py", line 24, in _run_exitfuncs
    func(*targs, **kargs)
  File "/usr/lib/python2.7/multiprocessing/util.py", line 305, in _exit_function
    _run_finalizers(0)
  File "/usr/lib/python2.7/multiprocessing/util.py", line 274, in _run_finalizers
    finalizer()
  File "/usr/lib/python2.7/multiprocessing/util.py", line 207, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/lib/python2.7/multiprocessing/pool.py", line 500, in _terminate_pool
    outqueue.put(None)  # sentinel
  File "/usr/lib/python2.7/multiprocessing/queues.py", line 390, in put
    wacquire()
KeyboardInterrupt
When I look at the memory in another command window using htop, memory usage is below 3% once the workers freeze. This is my first attempt at parallel processing, and I am not sure what else I might be missing.
I was not able to solve the issue with the multiprocessing pool; however, I came across the loky package and was able to use it to run my code with the following lines:
executor = loky.get_reusable_executor(timeout = 200, kill_workers = True)
results = executor.map(discourse_process, entered_file)
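For completeness, a sketch of how this could slot into the main() from the question (assuming the same discourse_process and CSV output as above; loky's executor follows the concurrent.futures API):

import csv
import loky
import pandas as pd

def main(initial_file):
    entered_file = pd.read_csv(initial_file).ix[:, 0].tolist()
    executor = loky.get_reusable_executor(timeout = 200, kill_workers = True)
    # map returns results lazily; worker exceptions re-raise during iteration
    results = executor.map(discourse_process, entered_file)
    with open("final_results.csv", 'w') as f:
        writer = csv.writer(f)
        for listitem in results:
            writer.writerow([listitem[0], listitem[1]])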
You could give your process a time limit to return a result; if it doesn't, an error is raised:
try:
    result.get(timeout = 1)
except multiprocessing.TimeoutError:
    print("Error while retrieving the result")
You could also verify whether your process was successful with:
import time

while True:
    try:
        result.successful()
    except Exception:
        print("Result is not yet successful")
        time.sleep(1)
    else:
        break
Finally, checking out https://docs.python.org/2/library/multiprocessing.html is helpful.
I've never used the multiprocessing library before, so all advice is welcome.
I've got a Python program that uses the multiprocessing library to do some memory-intensive tasks in multiple processes, which occasionally runs out of memory (I'm working on optimizations, but that's not what this question is about). Sometimes an out-of-memory error gets thrown in a way that I can't seem to catch (output below), and then the program hangs on pool.join() (I'm using multiprocessing.Pool). How can I make the program do something other than wait indefinitely when this problem occurs?
Ideally, the memory error would be propagated back to the main process, which would then die.
Here's the memory error:
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/threading.py", line 811, in __bootstrap_inner
    self.run()
  File "/usr/lib64/python2.7/threading.py", line 764, in run
    self.__target(*self.__args, **self.__kwargs)
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 325, in _handle_workers
    pool._maintain_pool()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 229, in _maintain_pool
    self._repopulate_pool()
  File "/usr/lib64/python2.7/multiprocessing/pool.py", line 222, in _repopulate_pool
    w.start()
  File "/usr/lib64/python2.7/multiprocessing/process.py", line 130, in start
    self._popen = Popen(self)
  File "/usr/lib64/python2.7/multiprocessing/forking.py", line 121, in __init__
    self.pid = os.fork()
OSError: [Errno 12] Cannot allocate memory
And here's where I manage multiprocessing:
mp_pool = mp.Pool(processes=num_processes)
mp_results = list()
for datum in input_data:
    data_args = {
        'value': 0  # actually some other simple dict key/values
    }
    mp_results.append(mp_pool.apply_async(_process_data, args=(common_args, data_args)))
mp_pool.close()
mp_pool.join()  # hangs here when that thread dies..
for result_async in mp_results:
    result = result_async.get()
    # do stuff to collect results
# rest of the code
When I interrupt the hanging program, I get:
Process process_003:
Traceback (most recent call last):
  File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/pool.py", line 102, in worker
    task = get()
  File "/opt/rh/python27/root/usr/lib64/python2.7/multiprocessing/queues.py", line 374, in get
    return recv()
    racquire()
KeyboardInterrupt
This is actually a known bug in Python's multiprocessing module, fixed in Python 3 (here's a summarizing blog post I found). There's a patch attached to Python issue 22393, but it hasn't been officially applied.
Basically, if one of a multiprocessing pool's sub-processes dies unexpectedly (out of memory, killed externally, etc.), the pool will wait indefinitely.
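Until you can move to a fixed Python, one workaround is to avoid the unbounded pool.join() and instead poll each AsyncResult with a timeout, terminating the pool if a worker has died (a sketch only; the helper name and timeout value are arbitrary):

import multiprocessing as mp

def collect_results(mp_pool, mp_results, per_task_timeout=300):
    results = []
    try:
        for result_async in mp_results:
            # get() re-raises worker exceptions and raises
            # multiprocessing.TimeoutError if no result arrives in time
            results.append(result_async.get(timeout=per_task_timeout))
    except mp.TimeoutError:
        mp_pool.terminate()  # kill remaining workers instead of hanging
        mp_pool.join()
        raise
    mp_pool.close()
    mp_pool.join()
    return results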
I am calling an R script file from Python using pyRserve, with Rserve running. At arbitrary points in the R script, pyRserve raises an error and quits:
Traceback (most recent call last):
  File "scriptV2.py", line 272, in <module>
    rConn.eval("source(file.PropensityFlow)")
  File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rconn.py", line 47, in decoCheckIfClosed
    return func(self, *args, **kw)
  File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rconn.py", line 119, in eval
    return rparse(src, atomicArray=atomicArray)
  File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 539, in rparse
    return rparser.parse()
  File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 349, in parse
    self.lexer.readHeader()
  File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 94, in readHeader
    self.responseCode = struct.unpack(b'<i', self.read(3) + b'\x00')[0]
  File "/Users/dipayanmaiti/Py3.3venv/lib/python3.3/site-packages/pyRserve/rparser.py", line 149, in read
    raise EndOfDataError()
pyRserve.rparser.EndOfDataError
I have set Rserv.conf to the following:
maxinbuf 20000000
maxsendbuf 0
Does anybody know why this happens? It looks like some buffer problem, because the R script runs fine by itself.
It is a late answer, but in such situations it is useful to run Rserve in debug mode so its output can be monitored in a separate shell:
R CMD Rserve.dbg
In some rare cases I've seen Rserve print warnings to the console, and when this happened the command sent through pyRserve didn't return any value from Rserve, which led to the EndOfDataError above.
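With the debug console running, you can also catch the error on the Python side and reconnect so the script can retry or fail cleanly (a minimal sketch; the import path for EndOfDataError is taken from the traceback above, and the default connection parameters are assumed):

import pyRserve
from pyRserve.rparser import EndOfDataError

rConn = pyRserve.connect()  # defaults to localhost:6311
try:
    rConn.eval("source(file.PropensityFlow)")
except EndOfDataError:
    # Rserve stopped answering mid-response; check its debug output for the cause
    rConn.close()
    rConn = pyRserve.connect()  # reconnect before any further eval calls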