I would like to know a good way to make a GUI progress bar. I'm using Python with Glade/GTK. I'd like to get more than just a pulse bar: something that gives the user an accurate idea of the progress. The program whose output I'm trying to capture is cdparanoia, but I'd like to know how to do this in general for other programs like wget. It's important to note that both of these programs write their progress info to stderr.
I should mention that when doing
output = p.stderr.read(1)
print output
the text progress bar doesn't show up. It is almost as if the program treats a non-interactive shell differently; there are no carriage returns (\r) like I thought there would be.
The code is as follows:
import subprocess, gtk
# the command is a shell pipeline, so let the shell handle the | for us
command = 'cdparanoia -w -Z 1- - | sox -t wav - "my disk.flac"'
p = subprocess.Popen(command, shell=True, stderr=subprocess.PIPE)
gui = gtk.Builder()
gui.add_from_file("pulsebar.glade")
#do magic here to make a good pulsebar
Gratefully,
Narnie
gtk.ProgressBar? (Note the .set_fraction() method.)
Here's some info about doing non-blocking reads from a subprocess. The suggestion of polling with timeout seems appropriate. Also, this answer.
My dim memory of cdparanoia's progress indicator is that it is very idiosyncratic. But I'm guessing it's just some silly stuff + '\r'; shouldn't be too hard to extract a fraction from it.
Edit: Ok, actually, perhaps the above is incorrect under normal usage situations; but have you tried -e?
-e --stderr-progress
    Force output of progress information to stderr (for wrapper scripts).
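Putting those pieces together, here is a minimal sketch (PyGTK 2.x). The rip arguments and the regex that pulls a done/total pair out of each progress line are assumptions; I don't remember cdparanoia's exact -e output, so adapt them to what your version actually prints:

import re
import subprocess
import gtk, gobject

# hypothetical rip command; -e forces the progress output onto stderr
p = subprocess.Popen(['cdparanoia', '-e', '1-', 'rip.wav'],
                     stderr=subprocess.PIPE)

win = gtk.Window()
bar = gtk.ProgressBar()
win.add(bar)
win.show_all()

def on_stderr(source, condition):
    line = source.readline()
    if not line:
        return False                           # EOF: child exited, remove the watch
    m = re.search(r'(\d+)\D+(\d+)', line)      # placeholder "done ... total" pattern
    if m:
        done, total = map(int, m.groups())
        if total:
            bar.set_fraction(min(float(done) / total, 1.0))
    return True                                # keep the watch alive

# let the GTK main loop wake us up whenever the child writes to stderr
gobject.io_add_watch(p.stderr, gobject.IO_IN, on_stderr)
gtk.main()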
I'm trying to make a Python script that clones one USB stick to another (as an ISO image) using dd if=/dev/sda of=/dev/sdb
Here's my problem:
I want to create progress bar showing what is done.
I tried:
Watching the used storage space on the second USB stick, but this doesn't work because the ISO image also copies unused space
By adding status=progress to the dd command I can get progress in the terminal, but I can't figure out how to access that output from Python. I tried subprocess.Popen and subprocess.run with stdout=PIPE, with and without shell=True,
and reading process.stdout with .read(), .read(1), .readline() or communicate(). Nothing worked for me. (https://www.endpointdev.com/blog/2015/01/getting-realtime-output-using-python/)
I can see the progress going by in the Python shell, but the .read() call always gets stuck.
The part of the code I am concerned about:
import shlex, subprocess
from subprocess import PIPE

comm = 'sudo dd if=/dev/sda of=/dev/sdb'
cloning = subprocess.Popen(shlex.split(comm), stdout=PIPE, text=True)
while True:
    print(cloning.stdout.read())   # this is where it gets stuck
I want something that would work like:
while True:
    progress = cloning.stdout.read()
    update_bar(progress)
I'm using Python 3.7 on a Raspberry Pi.
Thanks for the help.
You were on the right track with status=progress, but it outputs to stderr, not stdout. If you do stderr = PIPE and then read from cloning.stderr instead of cloning.stdout, it will work.
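A minimal sketch of that idea, assuming dd's status=progress updates arrive on stderr separated by carriage returns (update_bar is the placeholder from the question; extracting the byte count from each update is left to it):

import shlex
import subprocess

comm = 'sudo dd if=/dev/sda of=/dev/sdb status=progress'
cloning = subprocess.Popen(shlex.split(comm),
                           stderr=subprocess.PIPE, text=True)

buf = ''
while True:
    ch = cloning.stderr.read(1)    # read char by char so \r-separated updates come through
    if not ch:
        break                      # dd has exited
    if ch in '\r\n':
        if buf.strip():
            update_bar(buf)        # e.g. parse the byte count out of buf
        buf = ''
    else:
        buf += ch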
I have a script that simply outputs log events formatted in json to the screen line by line. One line equals one json log event. I usually simply append the results to a file to hold onto them when I need to (./script.py >> json.logs).
This script can take a while depending on the input, and I'd like to add a simple progress bar or number to the bottom of the console as it's working. However, I think this will also be written to the log file if I append like normal and I do not want that.
What is the normal way to print something to the console so that it won't end up in the stdout redirection (>>)? Also, if I'm simply printing the results to the screen instead of logging them to a file, I need the status bar not to make a mess of the screen either (or rather, to only ever show at the bottom of the console).
Using >> will by default only redirect STDOUT to the file, so if you print to STDERR, it won't go to the log. For example:
import sys
print("something") # this will go to json.logs
print("something else", file=sys.stderr) # this won't go to json.logs unless you specifically tell it to
As for making the bar itself, either look at something like tqdm or if you specifically want the bar to appear at the bottom of the window, you may have to roll your own solution with curses. Or just do something simple, like print one asterisk at a time.
The idea is to use stderr (import sys; sys.stderr.write('the bar'), or print('barstuff', file=sys.stderr) if you are using Python 3). This works fine if you want to save stdout to a file while showing the bar on the screen. Keeping the bar always at the bottom of the screen looks quite complicated: you would need to know the height of the terminal, and I think that is close to impossible from plain Python.
Probably, with some magic, you could print the bar at the top of the screen and a given number of lines below it, using \r to overwrite the old text.
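For example, a minimal sketch of the stderr approach (the events list is just a stand-in for whatever the script actually produces):

import json
import sys

events = [{"id": i} for i in range(1000)]      # stand-in for the real log events

for i, event in enumerate(events, 1):
    print(json.dumps(event))                   # goes to stdout, i.e. into json.logs
    sys.stderr.write("\rprocessed %d/%d" % (i, len(events)))   # stays on the terminal
    sys.stderr.flush()

sys.stderr.write("\n")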
Background: I have a Python subprocess that connects to a shell-like application, which uses the readline library to handle input, and that app has a TAB-complete routine for command input, just like bash. The child process is spawned, like so:
import subprocess

def get_cli_subprocess_handle():
    return subprocess.Popen(
        '/bin/myshell',
        shell=False,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
Everything works great, except tab-complete. Whenever my Python program passes the tab character, '\t' to the subprocess, I get 5 spaces in the STDIN, instead of triggering the readline library's tab-complete routine. :(
Question: What can I send to the subprocess's STDIN to trigger the child's tab-complete function? Maybe asked another way: How do I send the TAB key as opposed to the TAB character, if that is even possible?
Related but Unanswered and Derailed:
trigger tab completion for python batch process built around readline
The shell-like application is probably differentiating between a terminal being connected to stdin and a pipe being connected to it. Many Unix utilities do just that to optimise their buffering (line vs. block), and shell-like utilities are likely to disable command-completion facilities on batch input (i.e. a PIPE) to avoid unexpected results. Command completion is really an interactive feature which requires terminal input.
Check out the pty module and try using a master/slave pair as the pipe for your subprocess.
There really is no such thing as sending a tab key to a pipe. A pipe can only carry a stream of bytes, and if the tab character isn't doing it, there may not be a solution.
There is a project that does something similar called pexpect. Just looking at its interact() code, I'm not seeing anything obvious that would make it work where yours doesn't. Given that, the most likely explanation is that pexpect actually does some work to make itself look like a pseudo-terminal. Perhaps you could incorporate its code for that?
Based on isedev's answer, I modified my code as follows:
import os, pty, subprocess

def get_cli_subprocess_handle():
    masterPTY, slaveTTY = pty.openpty()
    return masterPTY, slaveTTY, subprocess.Popen(
        '/bin/myshell',
        shell=False,
        stdin=slaveTTY,
        stdout=slaveTTY,
        stderr=slaveTTY,
    )
Using this returned tuple, I was able to perform select.select([masterPTY],[],[]) and os.read(masterPTY, 1024) as needed, and I wrote to the master-pty with a function that is very similar to a private method in the pty module source:
def write_all(masterPTY, data):
    """Successively write all of data into a file-descriptor."""
    while data:
        chars_written = os.write(masterPTY, data)
        data = data[chars_written:]
    return data
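A rough sketch of the select/read side mentioned above (illustrative only, not the exact code I used):

import os, select

def read_available(masterPTY, timeout=0.1):
    """Collect whatever output the child has produced so far,
    waiting at most `timeout` seconds per select call."""
    output = b''
    while True:
        ready, _, _ = select.select([masterPTY], [], [], timeout)
        if not ready:
            break
        chunk = os.read(masterPTY, 1024)
        if not chunk:
            break
        output += chunk
    return output

# a literal tab now reaches the child's readline through the pty, so its
# completion routine fires as it would on a real terminal, e.g.:
# write_all(masterPTY, b'som\t')
# print(read_available(masterPTY))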
Thanks to all for the good solutions. Hope this example helps someone else. :)
I just want to build a little Python music client on my Raspberry Pi. I installed "mpg321" and it works great, but now to my problem. After sending the command
os.system("mpg321 -R testPlayer")
Python waits for user input like play, pause or quit. If I type these in my terminal, the player pauses the music or quits. Perfect, but I want Python to do that, so I send the command
os.system("LOAD test.mp3")
where LOAD is the command for loading this mp3. But nothing happens. When I quit the player via terminal I get the error:
sh: 1: LOAD: not found
I think this means that
os.system("mpg321 -R testPlayer")
takes over the whole process, and only after I quit it does Python try to execute the command LOAD (as a shell command, hence the error). So how do I get these things to work together?
My code:
import os

class PyMusic:

    def __init__(self):
        print "initial stuff later"

    def playFile(self, fileName, directory = ""):
        os.system("mpg321 -R testPlayer")
        os.system("LOAD test.mp3")

if __name__ == "__main__":
    pymusic = PyMusic()
    pymusic.playFile("test.mp3")
Thanks for your help!
First, you should almost never be using os.system. See the subprocess module.
One major advantage of using subprocess is that you can choose whatever behavior you want—run it in the background, start it and wait for it to finish (and throw an exception if it returns non-zero), interact with its stdin and stdout explicitly, whatever makes sense.
Here, you're not trying to run another command "LOAD test.mp3", you're trying to pass that as input to the existing process. So:
import subprocess
p = subprocess.Popen(['mpg321', '-R', 'testPlayer'], stdin=subprocess.PIPE)
Then you can do this:
p.stdin.write('LOAD test.mp3\n')
This is roughly equivalent to doing this from the shell:
echo -e 'LOAD test.mp3\n' | mpg321 -R testPlayer
However, you should probably read about communicate, because whenever it's possible to figure out how to make your code work with communicate, it's a lot simpler than trying to deal with generic I/O (especially if you've never coded with pipes, sockets, etc. before).
Or, if you're trying to interact with a command-line UI (e.g., you can't send the command until you get the right prompt), you may want to look at an "expect" library. There are a few of these to choose from, so you should search at PyPI to find the right one for you (although I can say that I've used pexpect successfully in the past, and the documentation is full of samples that get the ideas across a lot more quickly than most expect documentation does).
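For instance, a minimal pexpect sketch (the PAUSE and QUIT commands are taken from the question's description of the player's interactive commands; verify them against your mpg321 version):

import pexpect   # third-party: pip install pexpect

# start mpg321 in remote-control mode and drive it the way a user would
child = pexpect.spawn('mpg321 -R testPlayer')
child.sendline('LOAD test.mp3')

# later on, for example:
child.sendline('PAUSE')
child.sendline('QUIT')
child.close()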
You are looking for a way to send data to stdin. Here is an example of this using Popen:
from subprocess import Popen, PIPE, STDOUT
p = Popen(['mpg321', '-R', 'testPlayer'], stdout=PIPE, stdin=PIPE, stderr=STDOUT)
mpg123_stdout = p.communicate(input='LOAD test.mp3\n')[0]
print(mpg123_stdout)
You set up pipes for stdin and stdout, then after you start your process you write to its stdin and read from its stdout. Be sure to terminate each command with a newline (\n).
I have a Python script which produces an output file. I need to feed this output file to a command-line program. Is there any way I could call the command-line program from Python and control it so that it processes the file?
I tried to run this code
import os
import subprocess
import sys
proc = subprocess.Popen(["program.exe"], stdin=subprocess.PIPE)
proc.communicate(input=sys.argv[1]) #here the filename should be entered
proc.communicate(input=sys.argv[2]) #choice 1
proc.communicate(input=sys.argv[3]) #choice 2
Is there any way I could feed it the input coming from the command line? Also, although the cmd program opens, its interface flickers after I run the code.
Thanks.
Note: the platform is Windows.
Have a look at http://docs.python.org/library/subprocess.html. It's the current way to go when starting external programs. There are many examples, and you'll have to check which one fits your needs best.
You could do os.system(somestr), which lets you execute somestr as a command on the command line. However, this has been scrutinized over time for being insecure, etc. (will post a link as soon as I find it).
As a result, it has conventionally been replaced with subprocess.Popen.
Hope this helps
Depending on how much control you need, you might find it easier to use pexpect, which makes parsing the program's output rather easy and can also easily be used to talk to the program's stdin. Check out the website; they have some nice examples.
If your target program is expecting the input on STDIN, you can redirect using a pipe:
python myfile.py | someprogram
As I just answered another question regarding subprocess, there is a better alternative!
Please have a look at the great library python sh; it is a full-fledged subprocess interface for Python that allows you to call any program as if it were a function and, more importantly, it's pleasingly Pythonic.
Besides redirecting the data stream with pipes, you can also process a command line such as:
mycode.py -o outputfile inputfilename.txt
You must import sys
import sys
and in your main function:
ii = 1
infile = None
outfile = None
# let's process the command line
while ii < len(sys.argv):
    arg = sys.argv[ii]
    if arg == '-o':
        ii = ii + 1
        outfile = sys.argv[ii]
    else:
        infile = arg
    ii = ii + 1
Of course, you can add some file checking, etc...
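After the loop, infile and outfile are ready to use; a trivial sketch of the kind of file checking mentioned above (the copy loop is just a placeholder for the real processing):

if infile is None:
    raise SystemExit("usage: mycode.py -o outputfile inputfilename.txt")

with open(infile) as src, open(outfile or "output.txt", "w") as dst:
    for line in src:
        dst.write(line)        # replace with the real processing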