My current approach is to first remove the old model and then save the new one. This works fine from the shell, but it just doesn't run automatically under crontab. Any idea why, or how to solve this? Thanks for the help.
I suspect the problem is that the main program does not wait for subprocess.call() to return, but I'm not sure.
This is my current command:
subprocess.call('dse hadoop fs -rmr /root/recommend_model', shell=True)
A possible way to check that the command executed correctly is to wait for its return code.
Here the link to subprocess module:
https://docs.python.org/2/library/subprocess.html
You can wait for the return code in your script:
if subprocess.call(command, shell=True) == 0:
    print("We are proceeding")
else:
    print("Something went wrong executing %s" % command)
Additionally, as suggested, try redirecting your script's output to a log file with > mickey.log 2>&1.
Last but not least some subprocess/os.system suggestions available here:
Controlling a python script from another script
python: run external program and direct output to file and wait for finish
Please let me know if this solves your issue.
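As a concrete sketch of that return-code check (the original `dse hadoop` command is environment-specific, so a generic shell command stands in for it here):

```python
import subprocess

def run_checked(cmd):
    """Run cmd through the shell and report whether it exited cleanly."""
    rc = subprocess.call(cmd, shell=True)
    if rc == 0:
        print("We are proceeding")
    else:
        print("Something went wrong executing %s (exit code %d)" % (cmd, rc))
    return rc

run_checked('echo hello')
```

In a crontab context it is also worth remembering that cron runs with a minimal PATH, so using absolute paths inside the command avoids one common source of "works in the shell, fails in cron" behaviour.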
I'm trying to write a simple program to scan for some Bluetooth devices (beacons with advertising), and I have a problem with subprocess.Popen:
p1 = subprocess.Popen(['timeout','10s','hcitool','lescan'],stdout=subprocess.PIPE,stderr=subprocess.PIPE)
p1.wait()
output, error = p1.communicate()
print("stdout: {}".format(output))
print("stderr: {}".format(error))
the output and error variables are empty!
If I remove stdout=subprocess.PIPE, stderr=subprocess.PIPE from the Popen call, I can see the right result in the console. If I change the command to ['ls', '-l'], it works fine and I see the result in the variables.
I've tried subprocess.run (with the timeout) and the result is the same.
If I don't use the timeout obviously the command never ends.
I can't use pybluez, and my Python version is 3.7.
Can someone help me?
Solved using ['timeout','-s','INT','10s','hcitool','lescan'] as command instead of ['timeout','10s','hcitool','lescan'].
Maybe in the second case the process was not terminated cleanly, so I never received the output.
Thanks anyway.
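The likely explanation: by default, timeout sends SIGTERM, which kills the child before its stdio buffers (block-buffered when writing to a pipe) are flushed, while SIGINT lets the child shut down normally and flush. A small sketch of the effect, assuming GNU coreutils' timeout and python3 are available, with a throwaway Python child standing in for hcitool:

```python
import subprocess

# Child writes to stdout (block-buffered on a pipe) and then sleeps past the timeout.
child = 'import sys, time\nsys.stdout.write("data\\n")\ntime.sleep(5)'

# SIGTERM (timeout's default): the child dies with its buffer unflushed.
killed = subprocess.run(['timeout', '1', 'python3', '-c', child],
                        capture_output=True, text=True)

# SIGINT: Python raises KeyboardInterrupt, exits normally, and flushes stdout.
interrupted = subprocess.run(['timeout', '-s', 'INT', '1', 'python3', '-c', child],
                             capture_output=True, text=True)

print(repr(killed.stdout))       # the buffered "data\n" is lost
print(repr(interrupted.stdout))  # "data\n" arrives
```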
I'm running an application from within my code, and it rewrites files which I need to read later on in the code. No output goes directly into my program. I can't get my code to wait until the subprocess has finished; it just goes ahead and reads the unchanged files.
I've tried subprocess.Popen.wait(), subprocess.call(), and subprocess.check_call(), but none of them work for my problem. Does anyone have any idea how to make this work? Thanks.
Edit: Here is the relevant part of my code:
os.chdir(r'C:\Users\Jeremy\Documents\FORCAST\dusty')
t = subprocess.Popen('start dusty.exe', shell=True)
t.wait()
os.chdir(r'C:\Users\Jeremy\Documents\FORCAST')
Do you use the return object of subprocess.Popen()?
p = subprocess.Popen(command)
p.wait()
should work.
Are you sure that the command does not end instantly?
If you execute a program with
t = subprocess.Popen(prog, shell=True)
Python won't throw an error, regardless of whether the program exists or not. If you try to start a non-existing program with Popen and shell=False, you will get an error. My guess would be that your program either doesn't exist in the folder or doesn't execute. Try executing it from the Python IDLE environment with shell=False and see if you get a new window.
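There is also a second failure mode worth checking: Windows' start hands the real work to a detached child and returns at once, so wait() has nothing to wait for. A portable sketch of this effect, using a backgrounded POSIX shell command to stand in for start:

```python
import subprocess
import time

t0 = time.time()
subprocess.call('sleep 2 &', shell=True)   # the shell backgrounds the child and exits at once
detached = time.time() - t0

t0 = time.time()
subprocess.call('sleep 2', shell=True)     # the shell waits for the child
attached = time.time() - t0

print("detached: %.2fs, attached: %.2fs" % (detached, attached))
```

If this is the problem, launching dusty.exe directly (without start) makes wait() block as intended.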
I'm working on Windows:
import os
import http.client as h
co = h.HTTPSConnection("www.google.com")
co.request("GET","/")
res = co.getresponse()
print(res.status,res.reason)
os.system("pause")
When I run it from the command line, everything works perfectly: "200 OK".
But when I copy this into a file and save it, I get an error and the program stops.
I found a "solution": when I run my app, the folder "__pycache__" containing "http.cpython-34" is created.
And I have to open the "http.cpython-34" file to see "200 OK".
Is there another way to run my program correctly without opening the "http.cpython-34" file?
EDIT: I found the solution. My file was called http.py; once I renamed it, everything worked perfectly :)
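The failure here is module shadowing: a script named http.py makes import http.client load the script itself instead of the standard library package. A quick way to diagnose this kind of problem (a sketch; the printed path is whatever your interpreter actually resolves):

```python
import http

# If this prints a path inside your own project rather than the standard
# library, your file is shadowing the stdlib "http" package.
print(http.__file__)
```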
I don't think os.system("pause") is what you want to use here. That will try to launch a subprocess with the command pause, which will not actually pause the running Python program. Generally if you want to pause and wait for a keypress or something you can use raw_input() to pause and wait for user input in Python 2.7 or input() in Python 3.
I'm building a site in django that interfaces with a large program written in R, and I would like to have a button on the site that runs the R program. I have that working, using subprocess.call(), but, as expected, the server does not continue rendering the view until subprocess.call() returns. As this program could take several hours to run, that's not really an option.
Is there any way to run the R program and keep executing the Python code?
I've searched around, and looked into subprocess.Popen(), but I couldn't get that to work.
Here's the generic code I'm using in the view:
if 'button' in request.POST:
    subprocess.call('R CMD BATCH /path/to/script.R', shell=True)
return HttpResponseRedirect('')
Hopefully I've just overlooked something simple.
Thank you.
subprocess.Popen(['R', 'CMD', 'BATCH', '/path/to/script.R'])
The process will be started asynchronously.
Example:
$ cat 1.py
import time
import subprocess
print time.time()
subprocess.Popen(['sleep', '1000'])
print time.time()
$ python 1.py
1340698384.08
1340698384.08
Note that the child process will keep running even after the main process stops.
Alternatively, you could use a wrapper around subprocess.call(): the wrapper starts its own thread, and that thread makes the subprocess.call() invocation.
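A minimal sketch of such a wrapper (Python 3 here; note that in a Django deployment a task queue such as Celery is the usual production answer, since the thread's lifetime is tied to the server process):

```python
import subprocess
import threading

def call_in_background(cmd):
    """Run subprocess.call(cmd) on a worker thread so the caller returns immediately."""
    t = threading.Thread(target=subprocess.call, args=(cmd,))
    t.start()
    return t  # keep the handle if you ever want to join()

# The view would then do something like:
# call_in_background(['R', 'CMD', 'BATCH', '/path/to/script.R'])
```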
I have inherited some code which is periodically (randomly) failing due to an Input/Output error being raised during a call to print. I am trying to determine the cause of the exception being raised (or at least, better understand it) and how to handle it correctly.
When executing the following line of Python (in a 2.6.6 interpreter, running on CentOS 5.5):
print >> sys.stderr, 'Unable to do something: %s' % command
The exception is raised (traceback omitted):
IOError: [Errno 5] Input/output error
For context, this is generally what the larger function is trying to do at the time:
from subprocess import Popen, PIPE
import sys

def run_commands(commands):
    for command in commands:
        try:
            out, err = Popen(command, shell=True, stdout=PIPE, stderr=PIPE).communicate()
            print >> sys.stdout, out
            if err:
                raise Exception('ERROR -- an error occurred when executing this command: %s --- err: %s' % (command, err))
        except:
            print >> sys.stderr, 'Unable to do something: %s' % command

run_commands(["ls", "echo foo"])
The >> syntax is not particularly familiar to me; it's not something I use often, and I understand it is perhaps the least preferred way of writing to stderr. However, I don't believe the alternatives would fix the underlying problem.
From the documentation I have read, IOError 5 is often misused, and somewhat loosely defined, with different operating systems using it to cover different problems. The best I can see in my case is that the python process is no longer attached to the terminal/pty.
As best I can tell nothing is disconnecting the process from the stdout/stderr streams - the terminal is still open for example, and everything 'appears' to be fine. Could it be caused by the child process terminating in an unclean fashion? What else might be a cause of this problem - or what other steps could I introduce to debug it further?
In terms of handling the exception, I can obviously catch it, but I'm assuming this means I won't be able to print to stdout/stderr for the remainder of execution? Can I reattach to these streams somehow, perhaps by resetting sys.stdout to sys.__stdout__? In this case, not being able to write to stdout/stderr is not considered fatal, but if it is an indication of something starting to go wrong, I'd rather bail early.
I guess ultimately I'm at a bit of a loss as to where to start debugging this one...
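For the "catch it" option mentioned above, one hedged sketch is a wrapper that swallows only EIO (errno 5), on the assumption that losing a diagnostic message is acceptable once the terminal has gone away:

```python
import errno
import sys

def safe_print(msg, stream=None):
    """Write msg to stream (stderr by default), ignoring only the
    "Input/output error" raised when the controlling terminal is gone."""
    stream = stream or sys.stderr
    try:
        stream.write(msg + '\n')
        stream.flush()
    except IOError as e:
        if e.errno != errno.EIO:
            raise  # any other I/O failure is still worth surfacing
```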
I think it has to do with the terminal the process is attached to. I got this error when I ran a Python process in the background and closed the terminal from which I started it:
$ myprogram.py
Ctrl-Z
$ bg
$ exit
The problem was that I had started a non-daemonized process on a remote server and logged out (closing the terminal session). A solution was to start a screen/tmux session on the remote server and launch the process within that session. Detaching the session and logging out then keeps a terminal associated with the process. This works at least in the *nix world.
I had a very similar problem. I had a program that was launching several other programs using the subprocess module. Those subprocesses would then print output to the terminal. What I found was that when I closed the main program, it did not terminate the subprocesses automatically (as I had assumed), rather they kept running. So if I terminated both the main program and then the terminal it had been launched from*, the subprocesses no longer had a terminal attached to their stdout, and would throw an IOError. Hope this helps you.
*NB: it must be done in this order. If you just kill the terminal, (for some reason) that would kill both the main program and the subprocesses.
I just got this error because the disk I was writing files to ran out of space. Not sure if this is at all applicable to your situation.
I'm new here, so please forgive if I slip up a bit when it comes to the code detail.
Recently I was able to figure out what causes the I/O error in the print statement when the terminal associated with the run of the Python script is closed.
It happens because the string being printed to stdout/stderr is too long. In this case, the "out" string is the culprit.
To fix this problem (without having to keep the terminal open while running the Python script), print the "out" string line by line instead of all at once. Since "out" here is an ordinary string returned by communicate(), something like:

for ln in out.splitlines():
    print ln
The same problem occurs if you print an entire list of strings to the screen. Simply print the list one item at a time.
Hope that helps!
The problem is that you've closed the stdout pipe which Python is attempting to write to when print() is called.
This can be caused by running a script in the background using & and then closing the terminal session (i.e. closing stdout):
$ python myscript.py &
$ exit
One solution is to redirect stdout to a file when running in the background.
Example
$ python myscript.py > /var/log/myscript.log 2>&1 &
$ exit
No errors on print()
It can happen when your shell crashes while print is trying to write data to it.
In my case, I just restarted the service and the issue disappeared. I don't know why.
My issue was the same OSError: Input/output error, with Odoo.
After I restarted the service, it disappeared.