Run cmd file using python - python

I have a cmd file "file.cmd" containing hundreds of lines of commands.
Example:
pandoc --extract-media -f docx -t gfm "sample1.docx" -o "sample1.md"
pandoc --extract-media -f docx -t gfm "sample2.docx" -o "sample2.md"
pandoc --extract-media -f docx -t gfm "sample3.docx" -o "sample3.md"
I am trying to run these commands using a script so that I don't have to go to a file and click on it.
This is my code, and it results in no output:
import os

file1 = open('example.cmd', 'r')
Lines = file1.readlines()
# print(Lines)
for i in Lines:
    print(i)
    os.system(i)

You don't need to read the cmd file line by line. You can simply try the following:
import os
os.system('myfile.cmd')
or using the subprocess module:
import subprocess
p = subprocess.Popen(['myfile.cmd'], shell=True, close_fds=True)
stdout, stderr = p.communicate()
Example:
myfile.cmd:
@ECHO OFF
ECHO Greetings From Python!
PAUSE
script.py:
import os
os.system('myfile.cmd')
The cmd will open with:
Greetings From Python!
Press any key to continue ...
You can debug the issue by checking the return code:
import os
return_code = os.system('myfile.cmd')
assert return_code == 0  # assert that the return code is 0, indicating success
Note: os.system works by calling the C function system(), which can only take up to 65533 arguments after a command (a 16-bit limitation). Passing one more argument results in the return code 32512 (which corresponds to exit code 127).
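For reference, on POSIX systems the value os.system returns is a raw wait status, which is why 32512 corresponds to exit code 127. A small sketch of decoding it (POSIX only; the script name is hypothetical):
import os

status = os.system('./myscript.sh')  # hypothetical script name
if os.WIFEXITED(status):
    # e.g. 32512 >> 8 == 127
    print('exit code:', os.WEXITSTATUS(status))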

The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function (os.system('command')).
Since it is a command file (.cmd) that only the shell can run, the shell argument must be set to True. And since you are setting the shell argument to True, the command needs to be a string, not a list.
Use the Popen method to spawn a new process and communicate to wait on that process (you can also give it a timeout). If you wish to communicate with the child process, provide the PIPEs (as in my example below, but you don't have to).
The code below is for Python 3.3 and beyond:
import subprocess
try:
    proc = subprocess.Popen('myfile.cmd', shell=True, stderr=subprocess.PIPE, stdout=subprocess.PIPE)
    outs, errs = proc.communicate(timeout=15)  # timing out the execution, only if you want to
except subprocess.TimeoutExpired:
    proc.kill()
    outs, errs = proc.communicate()
For older Python versions:
import time

proc = subprocess.Popen('myfile.cmd', shell=True)
t = 10
while proc.poll() is None and t >= 0:
    print('Still waiting')
    time.sleep(1)
    t -= 1
if proc.poll() is None:  # still running after the wait, so kill it
    proc.kill()
In both cases (either Python version), if you don't need the timeout feature and you don't need to interact with the child process, just use:
proc = subprocess.Popen('myfile.cmd', shell=True)
proc.communicate()
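On Python 3.7 and later, subprocess.run wraps this Popen/communicate pattern in a single call; a minimal sketch using the same myfile.cmd from the example above:
import subprocess

# run() wraps Popen + communicate; check=True raises CalledProcessError on a non-zero exit
result = subprocess.run('myfile.cmd', shell=True, capture_output=True, text=True, timeout=15, check=True)
print(result.stdout)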

Related

Running a C executable inside a python program

I have written C code that converts one file format to another file format. To run my C code, I take one command line argument: filestem.
I executed that code using : ./executable_file filestem > outputfile
Where I have got my desired output inside outputfile
Now I want to take that executable and run within a python code.
I am trying like :
import subprocess
import sys
filestem = sys.argv[1];
subprocess.run(['/home/dev/executable_file', filestem , 'outputfile'])
But it is unable to create the outputfile. I think something should be added to handle the > redirection, but I am unable to figure out what. Please help.
subprocess.run has an optional stdout argument; you can give it a file handle, so in your case something like:
import subprocess
import sys
filestem = sys.argv[1]
with open('outputfile', 'wb') as f:
    subprocess.run(['/home/dev/executable_file', filestem], stdout=f)
should work. I do not have the ability to test it, so please run it and report whether it works as intended.
You have several options:
NOTE - Tested in CentOS 7, using Python 2.7
1. Try pexpect:
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import pexpect
filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
command_output, exitstatus = pexpect.run("/usr/bin/bash -c '{0}'".format(cmd), withexitstatus=True)
if exitstatus == 0:
print(command_output)
else:
print("Houston, we've had a problem.")
2. Run subprocess with shell=True (not recommended):
"""Usage: executable_file argument ("ex. stack.py -lh")"""
import sys
import subprocess

filestem = sys.argv[1]
# Using ls -lh >> outputfile as an example
cmd = "ls {0} >> outputfile".format(filestem)
result = subprocess.check_output(cmd, shell=True)  # or subprocess.call(cmd, shell=True)
print(result)
It works, but python.org frowns upon this, due to the chance of a shell injection: see "Security Considerations" in the subprocess documentation.
3. If you must use subprocess, run each command separately and take the STDOUT of the previous command and pipe it into the STDIN of the next command:
p = subprocess.Popen(cmd, stdin=PIPE, stdout=PIPE)
stdout_data, stderr_data = p.communicate()
p = subprocess.Popen(cmd, stdin=PIPE, stdout=PIPE)
stdout_data, stderr_data = p.communicate(input=stdout_data)
etc...
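A concrete, runnable version of that chaining, using ls and grep as hypothetical stand-ins for your commands:
from subprocess import Popen, PIPE

# hypothetical stand-in commands; substitute your own
p1 = Popen(['ls', '-lh'], stdout=PIPE)
p2 = Popen(['grep', 'txt'], stdin=p1.stdout, stdout=PIPE)
p1.stdout.close()  # let p1 receive SIGPIPE if p2 exits first
out, _ = p2.communicate()
print(out.decode())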
Good luck with your code!

How to keep ssh session open after logging in using subprocess.popen?

I am new to Python.
I am trying to SSH to a server to perform some operations. However, before performing the operations, I need to load a profile, which takes 60-90 seconds. After loading the profile, is there a way to keep the SSH session open so that I can perform the operations later?
p = subprocess.Popen("ssh abc#xyz'./profile'", stdout=subprocess.PIPE, shell=True)
result = p.communicate()[0]
print result
return result
This loads the profile and exits. Is there a way to keep the above ssh session open and run some commands?
Example:
p = subprocess.Popen("ssh abc#xyz'./profile'", stdout=subprocess.PIPE, shell=True)
<More Python Code>
<More Python Code>
<More Python Code>
<Run some scripts/commands on xyz server non-interactively>
After loading the profile, I want to run some scripts/commands on the remote server, which I am able to do by simply doing below:
p = subprocess.Popen("ssh abc#xyz './profile;**<./a.py;etc>**'", stdout=subprocess.PIPE, shell=True)
However, once done, it exits, and the next time I want to execute some script on the above server, I need to load the profile again (which takes 60-90 seconds). I am trying to figure out a way to create some sort of tunnel (or any other mechanism) where the ssh connection remains open after loading the profile, so that users don't have to wait 60-90 seconds whenever anything is to be executed.
I don't have access to strip down the profile.
Try an ssh library like asyncssh or spur. Keeping the connection object should keep the session open.
You could send a dummy command like date to prevent the timeout as well.
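As a minimal sketch of that idea with asyncssh (assuming it is installed and that abc@xyz from the question accepts key-based login): the connection object is kept open, so repeated conn.run calls reuse the same SSH session instead of logging in again, though each run still starts a fresh remote shell, so anything the profile sets up must be sourced in the same run that needs it.
import asyncio
import asyncssh

async def main():
    # one connection, reused for several commands; no new login per command
    async with asyncssh.connect('xyz', username='abc') as conn:
        result = await conn.run('. ./profile && ./a.py')
        print(result.stdout)
        result = await conn.run('date')  # runs over the same open connection
        print(result.stdout)

asyncio.get_event_loop().run_until_complete(main())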
You have to construct an ssh command like this: ['ssh', '-T', 'host_user_name@host_address'], then follow the code below.
Code:
from subprocess import Popen, PIPE
ssh_conn = ['ssh', '-T', 'host_user_name@host_address']
# if you have to add a port then ssh_conn should be as follows:
# ssh_conn = ['ssh', '-T', 'host_user_name@host_address', '-p', 'port']
commands = """
cd Documents/
ls -l
cat test.txt
"""
with Popen(ssh_conn, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True) as p:
    output, error = p.communicate(commands)
    print(output)
    print(error)
    print(p.returncode)

# Alternatively, instead of communicate(), you can write the commands one at a time
# (each needs a trailing newline), e.g.:
#     p.stdin.write('command_1\n')
#     # add as many commands as you want
#     p.stdin.write('command_n\n')
Please let me know if you need further explanations.
N.B.: You can add as many commands to the commands string as you want.
What you want to do is write/read to the process's stdin/stdout.
from subprocess import Popen, PIPE
import shlex
shell_command = "ssh user#address"
proc = Popen(shlex.split(shell_command), stdin=PIPE, universal_newlines=True)
# Do python stuff here
proc.stdin.write("cd Desktop\n")
proc.stdin.write("mkdir Example\n")
# And so on
proc.stdin.write("exit\n")
You must include the trailing newline for each command. If you prefer, print() (as of Python 3.x, where it is a function) takes a keyword argument file, which allows you to forget about that newline (and also gain all the benefits of print()).
print("rm Example", file=proc.stdin)
Additionally, if you need to see the output of your command, you can pass stdout=PIPE and then read via proc.stdout.read() (same for stderr).
You may also want to put the exit command in a try/finally block, to ensure you exit the ssh session gracefully.
Note that a) read is blocking, so if there's no output, it'll block forever, and b) it will only return what was available to read from stdout at that time, so you may need to read repeatedly, sleep for a short time, or poll for additional data. See the fcntl and select stdlib modules for changing a blocking read to a non-blocking one and for polling for events, respectively.
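For example, a small sketch of the polling approach with select (POSIX only, and it assumes the Popen above was also given stdout=PIPE):
import select

# wait up to 1 second for output to become available before reading
ready, _, _ = select.select([proc.stdout], [], [], 1.0)
if ready:
    print(proc.stdout.readline())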
Hello Koshur!
I think that what you are trying to achieve looks like what I've tried in the past when trying to make my terminal accessible from a private website:
I would open a bash instance, keep it open and would listen for commands through a WebSocket connection.
What I did to achieve this was using the O_NONBLOCK flag on STDOUT.
Example
import fcntl
import os
import shlex
import subprocess
current_process = subprocess.Popen(shlex.split("/bin/sh"), stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT) # Open a shell prompt
fcntl.fcntl(current_process.stdout.fileno(), fcntl.F_SETFL,
os.O_NONBLOCK) # Non blocking stdout and stderr reading
What I would have after this is a loop checking for new output in another thread:
from time import sleep
from threading import Thread
def check_output(process):
    """
    Checks the output of stdout and stderr to send it to the WebSocket client
    """
    while process.poll() is None:  # while the process isn't exited
        try:
            output = process.stdout.read()  # Read the stdout PIPE (which contains stdout and stderr)
        except Exception:
            output = None
        if output:
            print(output)
        sleep(.1)
    # from here, we are outside the loop: the process exited
    print("Process exited with return code: {code}".format(code=process.returncode))

Thread(target=check_output, args=(current_process,), daemon=True).start()  # Start checking for new text in stdout and stderr
So you would need to implement your logic to SSH when starting the process:
current_process = subprocess.Popen(shlex.split("ssh abc#xyz'./profile'"), stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
And send commands like so:
def send_command(process, cmd):
    process.stdin.write(str(cmd + "\n").encode("utf-8"))  # Write the input to STDIN
    process.stdin.flush()  # Run the command
send_command(current_process, "echo Hello")
EDIT
I tried to see the minimum Python requirements for the given examples and found out that Thread(daemon) might not work on Python 2.7, which you asked in the tags.
If you are sure to exit the Thread before exiting, you can ignore daemon and use Thread() which works on 2.7. (You could for example use atexit and terminate the process)
References
fcntl(2) man page
https://man7.org/linux/man-pages/man2/fcntl.2.html
fcntl Python 3 Documentation
https://docs.python.org/3/library/fcntl.html
fcntl Python 2.7 Documentation
https://docs.python.org/2.7/library/fcntl.html
O_NONBLOCK Python 3 Documentation
https://docs.python.org/3/library/os.html#os.O_NONBLOCK
O_NONBLOCK Python 2.7 Documentation
https://docs.python.org/2.7/library/os.html#os.O_NONBLOCK

Reading stdout from a subprocess in real time

Given this code snippet:
from subprocess import Popen, PIPE, CalledProcessError
def execute(cmd):
    with Popen(cmd, shell=True, stdout=PIPE, bufsize=1, universal_newlines=True) as p:
        for line in p.stdout:
            print(line, end='')
    if p.returncode != 0:
        raise CalledProcessError(p.returncode, p.args)

base_cmd = [
    "cmd", "/c", "d:\\virtual_envs\\py362_32\\Scripts\\activate",
    "&&"
]
cmd1 = " ".join(base_cmd + ['python -c "import sys; print(sys.version)"'])
cmd2 = " ".join(base_cmd + ["python -m http.server"])
If I run execute(cmd1) the output will be printed without any problems.
However, if I run execute(cmd2) instead, nothing is printed. Why is that, and how can I fix it so I can see http.server's output in real time?
Also, how is for line in p.stdout evaluated internally? Is it some sort of endless loop until it reaches stdout EOF or something?
This topic has already been addressed a few times here on SO, but I haven't found a Windows solution. The above snippet is code from this answer, and I'm running http.server from a virtualenv (Python 3.6.2 32-bit on Windows 7).
If you want to read continuously from a running subprocess, you have to make that process' output unbuffered. Your subprocess being a Python program, this can be done by passing -u to the interpreter:
python -u -m http.server
This is how it looks on a Windows box.
With this code, you can't see the real-time output because of buffering:
for line in p.stdout:
print(line, end='')
But if you use p.stdout.readline() it should work:
while True:
    line = p.stdout.readline()
    if not line:
        break
    print(line, end='')
See the corresponding Python bug discussion for details.
UPD: here you can find almost the same problem with various solutions on Stack Overflow.
I think the main problem is that http.server somehow logs its output to stderr; here I have an example with asyncio, reading the data from either stdout or stderr.
My first attempt was to use asyncio, a nice API which has existed since Python 3.4. Later I found a simpler solution, so you can choose; both of them should work.
asyncio as solution
In the background asyncio uses IOCP, a Windows API for asynchronous I/O.
# inspired by https://pymotw.com/3/asyncio/subprocesses.html
import asyncio
import sys
import time

if sys.platform == 'win32':
    loop = asyncio.ProactorEventLoop()
    asyncio.set_event_loop(loop)

async def run_webserver():
    buffer = bytearray()
    # start the webserver without buffering (-u) and stderr and stdin as the arguments
    print('launching process')
    proc = await asyncio.create_subprocess_exec(
        sys.executable, '-u', '-mhttp.server',
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE
    )
    print('process started {}'.format(proc.pid))
    while 1:
        # wait either for stderr or stdout and loop over the results
        for line in asyncio.as_completed([proc.stderr.readline(), proc.stdout.readline()]):
            print('read {!r}'.format(await line))

event_loop = asyncio.get_event_loop()
try:
    event_loop.run_until_complete(run_webserver())
finally:
    event_loop.close()
redirecting stderr to stdout
Based on your example, this is a really simple solution. It just redirects stderr to stdout, and only stdout is read.
import os
from subprocess import Popen, PIPE, CalledProcessError, run, STDOUT
def execute(cmd):
    with Popen(cmd, stdout=PIPE, stderr=STDOUT, bufsize=1) as p:
        while 1:
            print('waiting for a line')
            print(p.stdout.readline())
cmd2 = ["python", "-u", "-m", "http.server"]
execute(cmd2)
How is for line in p.stdout evaluated internally? Is it some sort of endless loop until it reaches stdout EOF or something?
p.stdout is a buffer (blocking). When you are reading from an empty buffer, you are blocked until something is written to that buffer. Once something is in it, you get the data and execute the inner part.
Think of how tail -f works on Linux: it waits until something is written to the file, and when that happens it echoes the new data to the screen. What happens when there is no data? It waits. So when your program gets to this line, it waits for data and processes it.
Since your code works for one command but not when running http.server as a module, it has to be related to this somehow. The http.server module probably buffers the output. Try adding the -u parameter to Python to run the process unbuffered:
-u : unbuffered binary stdout and stderr; also PYTHONUNBUFFERED=x
see man page for details on internal buffering relating to '-u'
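Equivalently, you can set PYTHONUNBUFFERED in the child's environment instead of passing -u; a small sketch (simplified to run python directly, rather than through the venv activate step from the question):
import os
import subprocess

env = dict(os.environ, PYTHONUNBUFFERED='1')  # same effect as python -u
p = subprocess.Popen(['python', '-m', 'http.server'],
                     stdout=subprocess.PIPE, stderr=subprocess.STDOUT, env=env)
print(p.stdout.readline())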
Also, you might want to try changing your loop to for line in iter(lambda: p.stdout.read(1), ''), as this reads 1 byte at a time before processing.
Update: The full loop code is
for line in iter(lambda: p.stdout.read(1), ''):
    sys.stdout.write(line)
    sys.stdout.flush()
Also, you pass your command as a string. Try passing it as a list, with each element in its own slot:
cmd = ['python', '-m', 'http.server', ..]
You could implement the no-buffer behavior at the OS level.
In Linux, you could wrap your existing command line with stdbuf :
stdbuf -i0 -o0 -e0 YOURCOMMAND
Or in Windows, you could wrap your existing command line with winpty:
winpty.exe -Xallow-non-tty -Xplain YOURCOMMAND
I'm not aware of OS-neutral tools for this.
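For instance, wrapping the http.server command from Python with stdbuf might look like this (a sketch, Linux only):
import subprocess

# stdbuf disables the child's stdio buffering before it executes the real command
cmd = ['stdbuf', '-i0', '-o0', '-e0', 'python', '-m', 'http.server']
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
print(p.stdout.readline())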

How use python subprocess.call, sending copy of stdout to logfile, while detecting result of first command

My python script needs to invoke a program, detect if it failed (eg, result != 0) and send the output of the program to both stdout like normal plus a log file.
My default shell is bash. I'm using Python 2.7.9
To send output to both stdout and a file I'd normally use tee:
result = subprocess.call('some_program --an-option | tee -a ' + logfile , shell=True)
However, the pipe in bash will return true even if the first command fails, so this approach fails to detect if the command fails.
If I try to use set -o pipefail in the command (so that the result will indicate if the first command fails) like this:
result = subprocess.call('set -o pipefail && some_program --an_option | tee -a ' + logfile , shell=True)
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Is there a way in python to invoke a command, send the output to both the normal stdout console and a logfile, and still detect if the command failed?
Note: we have to continue sending some_program's output to stdout since stdout is being sent to a websocket.
I get the error /bin/sh: 1: set: Illegal option -o pipefail
Pass executable='/bin/bash' otherwise /bin/sh is used.
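A sketch of the original call with that argument added (the command string and logfile variable are from the question):
import subprocess

result = subprocess.call('set -o pipefail && some_program --an-option | tee -a ' + logfile,
                         shell=True, executable='/bin/bash')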
You could implement tee in pure Python:
#!/usr/bin/env python2
import sys
from subprocess import Popen, PIPE, CalledProcessError

chunk_size = 1 << 13
p = Popen(["some_program", "--an-option"], stdout=PIPE, bufsize=1)
with p.stdout, open('logfile', 'ab') as logfile:
    for chunk in iter(lambda: p.stdout.read(chunk_size), b''):
        sys.stdout.write(chunk)
        logfile.write(chunk)
if p.wait() != 0:
    raise CalledProcessError(p.returncode, ["some_program", "--an-option"])
My preference would be to send stdout to a pipe, and then read the pipe in the Python code. The Python code can write to stdout, a file, etc. as required. It would also enable you to set shell=False, as setting it to True is a potential security issue, as mentioned in the documentation.
However, the pipe in bash will return true even if the first command
fails, so this approach fails to detect if the command fails.
That is not true.
But I think you mean: the exit status of 'some_program --an-option | tee -a ' + logfile is always 0, even if some part of the command fails.
Well, chaining multiple commands (with && or ||) or connecting them via pipes makes the returned exit status unreliable.
Regardless, in the command some_program --an-option | tee -a logfile, the logfile is not written if some_program fails, so you don't need to worry about the exit code.
Anyway, the best way to build a pipe with subprocess is to create Popen objects and handle stdout and stdin yourself:
import subprocess as sp
STATUS_OK = 0
logfile = '/tmp/test.log'
commands = {
    'main': 'ls /home',
    'pipe_to': 'tee -a ' + logfile
}
process = sp.Popen(commands['main'], shell=True, stdout=sp.PIPE)

# explicitly wait until the command terminates, setting and returning the exit status code
process.wait()

if process.returncode == STATUS_OK:
    stdoutdata = process.communicate()[0]
    # pipe the last command's output to the "tee" command
    sp.Popen(commands['pipe_to'], stdin=sp.PIPE, shell=1).communicate(stdoutdata)
else:
    # do something when the command ('ls /home' in this case) fails
    pass
That is it!
In the last Popen we invoke Popen.communicate() to send the output of the ls command to the tee command's STDIN.
In the Python docs there's a tiny tutorial called Replacing shell pipeline; maybe you want to take a look.
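Following that tutorial's pattern, here is a sketch that streams some_program through tee while keeping the first command's exit status visible (the command names and logfile variable are taken from the question):
from subprocess import Popen, PIPE

p1 = Popen(['some_program', '--an-option'], stdout=PIPE)
p2 = Popen(['tee', '-a', logfile], stdin=p1.stdout)  # tee still prints to the console
p1.stdout.close()  # let p1 receive SIGPIPE if tee exits early
p2.communicate()
if p1.wait() != 0:
    print('some_program failed with exit code', p1.returncode)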

Python subprocess.Popen() followed by time.sleep

I want to make a python script that will convert a TEX file to PDF and then open the output file with my document viewer.
I first tried the following:
import subprocess
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
This way, the conversion from TEX to PDF works all right, but, as it takes some time, the second command (open file with Document Viewer) is executed before the output file is created.
So, I tried do make the program wait some seconds before executing the second command. Here's what I've done:
import subprocess
import time
subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
time.sleep(10)
subprocess.Popen(['gnome-open', 'Alunos/'+aluno+'_pratica.pdf'], shell=False)
But, when I do so, the output PDF file is not created. I can't understand why. The only change was the time.sleep command. Why does it affect the Popen process?
Could anyone give me some help?
EDIT:
I've followed the advice from Faust and Paulo Bu and in both cases the result is the same.
When I run this command...
subprocess.call('xelatex --output-directory=Alunos/ Alunos/{}_pratica.tex'.format(aluno), shell=True)
... or this...
p = subprocess.Popen(['xelatex', '--output-directory=Alunos/', 'Alunos/' + aluno + '_pratica.tex'], shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
...the Xelatex program is run but doesn't make the conversion.
Strangely, when I run the command directly in the shell...
$ xelatex --output-directory=Alunos/ Alunos/name_pratica.tex
... the conversion works perfectly.
Here's what I get when I run the subprocess.call() command:
$ python my_file.py
Enter name:
name
This is XeTeX, Version 3.1415926-2.4-0.9998 (TeX Live 2012/Debian)
restricted \write18 enabled.
entering extended mode
(./Alunos/name_pratica.tex
LaTeX2e <2011/06/27>
Babel <v3.8m> and hyphenation patterns for english, dumylang, nohyphenation, loaded.
)
*
When I write the command directly in the shell, the output is the same, but it followed automatically by the conversion.
Does anyone know why it happens this way?
PS: sorry for the bad formatting. I don't know how to post the shell output properly.
If you need to wait for the termination of the program and you are not interested in its output, you should use subprocess.call:
import subprocess
subprocess.call(['xelatex', '--output-directory=Alunos/', 'Alunos/{}_pratica.tex'.format(aluno)])
subprocess.call(['gnome-open', 'Alunos/{}_pratica.pdf'.format(aluno)])
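If you also want to stop when xelatex fails, subprocess.call's return value can be checked first; a small sketch reusing the same commands:
import subprocess

rc = subprocess.call(['xelatex', '--output-directory=Alunos/', 'Alunos/{}_pratica.tex'.format(aluno)])
if rc != 0:
    raise SystemExit('xelatex failed with exit code {}'.format(rc))
subprocess.call(['gnome-open', 'Alunos/{}_pratica.pdf'.format(aluno)])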
EDIT:
Also it is generally a good thing to use English when you have to name variables or functions.
If the xelatex command works in a shell but fails when you call it from Python, then xelatex might be blocked on output in your Python code: you do not read the pipes despite setting stdout/stderr to PIPE. On my machine the pipe buffer is 64KB, so if the xelatex output is smaller than that it should not block.
You could redirect the output to os.devnull instead:
import os
import webbrowser
from subprocess import STDOUT, check_call
try:
    from subprocess import DEVNULL  # py3k
except ImportError:
    DEVNULL = open(os.devnull, 'w+b')
basename = aluno + '_pratica'
output_dir = 'Alunos'
root = os.path.join(output_dir, basename)
check_call(['xelatex', '--output-directory', output_dir, root + '.tex'],
           stdin=DEVNULL, stdout=DEVNULL, stderr=STDOUT)
webbrowser.open(root+'.pdf')
check_call is used to wait for xelatex and raise an exception on error.
