I have to make graphs from several data files. I already found a way to run a simple command
xmgrace -batch batch.bfile -nosafe -hardcopy
in which batch.bfile is a text file with Grace commands to print the graph I want. I already tried it manually and it works perfectly. To do this with several files I just have to edit one parameter inside batch.bfile and run the same command every time I make a change.
I have already written Python code which edits batch.bfile and goes through all the data files with a for loop. In each iteration I want to run the mentioned command directly in the command line.
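Roughly, the loop looks like this (the file names and the way the parameter is edited are just placeholders):
import subprocess

data_files = ["run1.dat", "run2.dat", "run3.dat"]  # placeholder names

for data_file in data_files:
    # Rewrite batch.bfile so that it points at the current data file
    with open("batch.template") as template:
        contents = template.read().replace("DATAFILE", data_file)
    with open("batch.bfile", "w") as batch:
        batch.write(contents)
    # This is the step that does nothing in practice (see below)
    subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy", shell=True)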
After searching a bit I found two solutions, one with os.system() and another with subprocess.Popen(). The only call I could get to run without errors was:
subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy", shell=True)
Problem is, this doesn't do anything in practice, i.e., it just isn't the same as running the command directly in the command line. I already tried writing the full path to batch.bfile but nothing changed.
I am using Python 2.7 and Mac OS 10.7.
Have you checked running xmgrace from the command line using sh? (I.e. invoke /bin/sh, then run xmgrace..., which should be the same shell that Popen is using when you set shell=True.)
Another solution would be to create a shell script (create a file like myscript.sh, and run chmod +x on it from the terminal). In the script, call xmgrace:
#!/bin/bash
xmgrace -batch batch.bfile -nosafe -hardcopy
You could then test that myscript.sh works, which ought to pick up any environment variables from your profile that might differ from Python's. If this works, you could call the script from Python with subprocess.Popen('./myscript.sh'). You can check which environment variables subprocess will see from Python by running:
import os
os.environ
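If the environments do differ, you can pass a corrected copy to Popen; a minimal sketch, assuming the problem is an incomplete PATH (the extra directory here is only an example):
import os
import subprocess

env = os.environ.copy()
# Hypothetical: prepend the directory containing xmgrace if it is missing
# from the PATH that Python inherited.
env["PATH"] = "/usr/local/grace/bin:" + env["PATH"]

p = subprocess.Popen("xmgrace -batch batch.bfile -nosafe -hardcopy",
                     shell=True, env=env)
p.wait()  # block until xmgrace finishes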
You may want to check out http://sourceforge.net/projects/graceplot/
When you use Popen, you can capture the application's output to stdout and stderr and print it within your application - this way you can see what is happening:
from subprocess import Popen, PIPE

# reportParameters is your command, e.g.
# ["xmgrace", "-batch", "batch.bfile", "-nosafe", "-hardcopy"]
ps = Popen(reportParameters, bufsize=512, stdout=PIPE, stderr=PIPE)
while 1:
    stdout = ps.stdout.readline()
    stderr = ps.stderr.readline()
    exitcode = ps.poll()
    if (not stdout and not stderr) and (exitcode is not None):
        break
    if stdout:
        stdout = stdout[:-1]  # strip the trailing newline
        print stdout
    if stderr:
        stderr = stderr[:-1]
        print stderr
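If you don't need to see the output line by line while the process runs, a simpler and deadlock-safe sketch is to let communicate() collect everything at once:
from subprocess import Popen, PIPE

ps = Popen("xmgrace -batch batch.bfile -nosafe -hardcopy",
           shell=True, stdout=PIPE, stderr=PIPE)
stdout, stderr = ps.communicate()  # waits for the process to exit
print stdout
print stderr
print "exit code:", ps.returncode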
I am trying to use w3mimgdisplay to display images in the terminal, and was looking at the source code of the ranger file manager. The file I was looking at can be found here.
Using this, I made a simple program.
import curses
from subprocess import Popen, PIPE
process = Popen("/usr/libexec/w3m/w3mimgdisplay",
                stdin=PIPE, stdout=PIPE, universal_newlines=True)
process.stdin.write("echo -e '0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;'")
process.stdin.flush()
process.stdout.readline()
process.kill()
Whenever I enter
echo -e '0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;' | /usr/libexec/w3m/w3mimgdisplay
into the terminal, it prints the image properly, however, running the python script does nothing. How can I write the output of the program to the terminal?
The shell echo command adds a newline to the end of its output (unless you use the -n switch, which you didn't), so you need to mimic that by adding a newline at the end of your command too.
Also, you should write the string contents, not the echo command itself, because this is being sent directly to the w3mimgdisplay process, not to the shell.
I'm also unsure why you use readline. I suggest using .communicate() instead, because it makes sure you don't run into a rare but possible read/write buffer race condition. Or, best of all, use the simpler subprocess.run() directly:
import subprocess
subprocess.run(["/usr/libexec/w3m/w3mimgdisplay"],
input=b'0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;\n')
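If you need to stay on an older Python without subprocess.run(), the equivalent with Popen and communicate() would look roughly like this:
from subprocess import Popen, PIPE

process = Popen(["/usr/libexec/w3m/w3mimgdisplay"], stdin=PIPE, stdout=PIPE)
# communicate() writes the input, closes stdin, and collects all output,
# avoiding the read/write deadlock mentioned above.
out, _ = process.communicate(b'0;1;100;100;400;320;;;;;picture.jpg\n4;\n3;\n')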
I want to run an external program from python, redirect output (lots of text) to a log file and wait for that program to finish. I know I can do it via bash:
#! /bin/bash
my_external_program > log_file 2>&1
echo "done"
But how can I do the same with Python? Note that with the bash script, I can check the log_file while the program is running. I want this property in Python as well.
See the subprocess module.
For example:
with open("log_file", "w") as log_file:
subprocess.run(["my_external_program"], stdout=log_file, stderr=log_file)
print("done")
Controlling a python script from another script
You can check the link above; it is indeed a similar issue. Using Popen from subprocess, or os.popen, it is possible to check the output in real time.
A simple os.system("your_script > /tmp/mickey.log") will also run the script, but it will wait for the command to finish before continuing.
Please let me know if this solves your issue.
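For example, a rough sketch of reading the program's output line by line as it runs (the program name is a placeholder):
import subprocess

proc = subprocess.Popen(["my_external_program"],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
                        universal_newlines=True)
with open("/tmp/mickey.log", "w") as log:
    for line in proc.stdout:  # yields lines as the program produces them
        log.write(line)
        log.flush()           # so the log can be inspected while running
proc.wait()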
Why am I getting a list of files when executing this command?
subprocess.check_call("time ls &>/dev/null", shell=True)
If I paste
time ls &>/dev/null
into the console, I will just get the timings.
OS is Linux Ubuntu.
On Debian-like systems, /bin/sh is dash, not bash. Dash does not support the &> shortcut. To get only the subprocess return code, try:
subprocess.check_call("time ls >/dev/null 2>&1", shell=True)
To get subprocess return code and the timing information but not the directory listing, use:
subprocess.check_call("time ls >/dev/null", shell=True)
Minus, of course, the subprocess return code, this is the same behavior that you would see on the dash command prompt.
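Alternatively, you can keep the &> shortcut and tell subprocess to run the command under bash instead of the default /bin/sh (assuming bash lives at /bin/bash):
import subprocess

# The executable argument replaces the default /bin/sh used by shell=True.
subprocess.check_call("time ls &>/dev/null", shell=True,
                      executable="/bin/bash")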
The Python version is running under sh, but the console version is running in whatever your default shell is, which is probably either bash or dash. (Your sh may actually be a different shell running in POSIX-compliant mode, but that doesn't make any difference.)
Both bash and dash have builtin time functions, but sh doesn't, so you get /usr/bin/time, which is a normal program. The most important difference this makes is that the time builtin is not running as a subprocess with its own independent stdout and stderr.
Also, sh, bash, and dash all have different redirection syntax.
But what you're trying to do seems wrong in the first place, and you're just getting lucky on the console because two mistakes are canceling out.
You want to get rid of the stdout of ls but keep the stderr of time, but that's not what you asked for. You're trying to redirect both stdout and stderr: that's what &> means on any shell that actually supports it.
So why are you still getting the time stderr? Either (a) your default shell doesn't support >&, or (b) you're using the builtin instead of the program, and you're not redirecting the stderr of the shell itself, or maybe (c) both of the above.
If you really want to do exactly the same thing in Python, with the exact same bugs canceling out in the exact same way, you can run your default shell manually instead of using shell=True. Depending on which reason it was working, that would be either this:
subprocess.check_call([os.environ['SHELL'], '-c', 'time ls &> /dev/null'])
or this:
subprocess.check_call('{} -c "time ls &> /dev/null"'.format(os.environ['SHELL']), shell=True)
But really, why are you doing this at all? If you want to redirect stdout and not stderr, write that:
subprocess.check_call('time ls > /dev/null', shell=True)
Or, better yet, why are you even using the shell in the first place?
subprocess.check_call(['time', 'ls'], stdout=subprocess.DEVNULL)
I need to execute and send commands to an external app from Python:
.\Ext\PrintfPC /p "C:\Leica\DBX" /l ".\joblist.log"
It is a cmd app. Is it possible to hide its console and terminate it afterwards, also using only Python?
You are probably looking for the subprocess module. Example for executing the ls -l bash command on a unix system:
subprocess.call(['ls', '-l'])
So, in your case it should probably look something like this (raw strings keep the backslashes intact):
subprocess.call([r'.\Ext\PrintfPC', '/p', r'C:\Leica\DBX', '/l', r'.\joblist.log'])
Have a look at the linked documentation though, because you can also get the output back from the command line execution by using pipes / Popen objects.
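To hide the console window, a sketch using the STARTUPINFO flags (this assumes Python 3 on Windows; the paths are taken from your command):
import subprocess

si = subprocess.STARTUPINFO()
si.dwFlags |= subprocess.STARTF_USESHOWWINDOW  # hide the child's window

p = subprocess.Popen([r'.\Ext\PrintfPC', '/p', r'C:\Leica\DBX',
                      '/l', r'.\joblist.log'],
                     startupinfo=si)
p.wait()  # block until it finishes...
# ...or end it early with p.terminate()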
I want to run and control PSFTP from a Python script in order to get log files from a UNIX box onto my Windows machine.
I can start up PSFTP and log in, but when I try to run a command remotely, such as 'cd', it isn't recognised by PSFTP and is just run in the terminal when I close PSFTP.
The code which I am trying to run is as follows:
import os
os.system("<directory> -l <username> -pw <password>" )
os.system("cd <anotherDirectory>")
I was just wondering if this is actually possible, or if there is a better way to do this in Python.
Thanks.
You'll need to run PSFTP as a subprocess and speak directly with the process. os.system spawns a separate subshell each time it's invoked so it doesn't work like typing commands sequentially into a command prompt window. Take a look at the documentation for the standard Python subprocess module. You should be able to accomplish your goal from there. Alternatively, there are a few Python SSH packages available such as paramiko and Twisted. If you're already happy with PSFTP, I'd definitely stick with trying to make it work first though.
Subprocess module hint:
import subprocess

# The following lines spawn the psftp process and bind its standard input
# to p.stdin and its standard output to p.stdout
p = subprocess.Popen('psftp -l testuser -pw testpass'.split(),
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)

# Send the 'cd some_directory' command to the process as if a user were
# typing it at the command line
p.stdin.write('cd some_directory\n')
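After sending all the commands you need, close stdin and collect whatever psftp printed; communicate() does both (this sketch assumes Python 2, where the pipes carry plain str, and the file names are hypothetical):
# Send the remaining commands, then close stdin and wait for psftp to exit;
# communicate() returns everything the process wrote to stdout.
p.stdin.write('get remote.log\n')  # hypothetical remote file name
p.stdin.write('quit\n')
output, _ = p.communicate()
print output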
This has sort of been answered in: SFTP in Python? (platform independent)
http://www.lag.net/paramiko/
The advantage to the pure python approach is that you don't always need psftp installed.
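A minimal paramiko sketch of the same task (host name, credentials, and file names are placeholders):
import paramiko

transport = paramiko.Transport(("unixbox.example.com", 22))  # placeholder host
transport.connect(username="testuser", password="testpass")
sftp = paramiko.SFTPClient.from_transport(transport)
sftp.chdir("some_directory")                   # the 'cd' from the question
sftp.get("remote.log", r"C:\logs\remote.log")  # copy the log file down
sftp.close()
transport.close()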