Python3 curses code not working in pipeline

I am writing a Python script that I want to use in a Unix pipeline. My goal is to write to the screen using curses (which should only be seen by the person running the command, not by the pipe), and then write the "return value" to stdout at the end so it can continue down the pipeline, something along the lines of ./myscript.py | consumer_script.
This was failing in mysterious ways until I found this answer. The suggested solution was to use newterm instead of initscr.
My problem is that I am using Python, and from what I could find in the documentation, newterm doesn't exist. All I was able to find was a single reference to newterm, and it didn't come with a link.
Could someone please either point me towards the Python newterm, or suggest another way of working with pipes and curses?

I think you're making this more complicated than it needs to be: the simple answer is to write the curses stream to a handle other than stdout. If it works for you, stderr is the obvious choice. In short, anything that gets written to stdout goes into the pipeline, and if you don't want it there, you need a different handle.
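If it helps, here's a minimal sketch of that idea using file-descriptor juggling; it assumes stderr is attached to the terminal, and the final result string is just a placeholder:

    import curses
    import os

    # Keep a handle on the real stdout (the pipe) before touching anything.
    pipe_fd = os.dup(1)
    # Point fd 1 at the terminal via stderr so curses draws on the screen.
    os.dup2(2, 1)

    stdscr = curses.initscr()
    try:
        stdscr.addstr(0, 0, "interactive part goes here")
        stdscr.refresh()
        stdscr.getch()
    finally:
        curses.endwin()

    # Restore the pipe as stdout and emit the result for the next command.
    os.dup2(pipe_fd, 1)
    print("result")  # placeholder "return value" for consumer_script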
Check out this thread for ways to write to stderr in python:
How to print to stderr in Python?
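For reference, the one-liner in Python 3 looks like this:

    import sys

    print("progress for the human, not the pipe", file=sys.stderr)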

Related

Displaying all results of execution

When I execute a Python program, the results start to appear quickly and I can't read them all; the output just scrolls past my screen.
When the execution ends, I can no longer see the first lines, because the terminal's display space is limited.
How can I save the output so I can read all of it?
You have a few options here.
Add a breakpoint and learn how to use the debugger. Once you add import pdb; pdb.set_trace() (this will take some learning, so look up what pdb is online; I actually prefer ipdb), execution will stop at that specific point when you run the script.
Save the output to a file (python file.py > filename.txt) and read it afterwards. Bonus, before you ask where the rest of your output went: https://askubuntu.com/questions/625224/how-to-redirect-stderr-to-a-file
(More advanced) Your code may simply be printing too much noise. You can remove some of the output code or use Python logging filters (sketched below).
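A rough sketch of the logging-filter idea; the DropNoise class and the "heartbeat" keyword are made up for illustration:

    import logging

    class DropNoise(logging.Filter):
        """Suppress records mentioning 'heartbeat' (hypothetical noise)."""
        def filter(self, record):
            return "heartbeat" not in record.getMessage()

    handler = logging.StreamHandler()  # writes to stderr by default
    handler.addFilter(DropNoise())

    logger = logging.getLogger("myapp")
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    logger.info("useful message")     # printed
    logger.info("heartbeat tick 42")  # filtered out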
This may be platform dependent.
On Linux you can also pipe your program output into your favorite pager (less for example) if you don't want to write it to a file.
python file.py | less

Use named pipes to send input to program based on output

Here's a general example of what I need to do:
I would initiate a backtrace by sending the command "bt" to GDB from the program. Then I would search for a word such as "pardrivr" and get the line number associated with it using regular expressions. Then I would input "f [line_number_of_pardriver]" into GDB. This process would be repeated until the desired information is extracted.
I want to use named pipes in bash or python to accomplish this.
Could someone please provide a simple example of how to do this?
My recommendation is not to do this. Instead there are two more supportable ways to go:
Write your code in Python directly inside gdb; gdb has been extensible in Python for several years now (see the sketch after this list).
Use the gdb MI ("Machine Interface") approach. Libraries that parse MI output already exist (I'm not sure whether there is one for Python, but I assume so). This is better than parsing gdb's command-line output because some pains are taken to avoid gratuitous breakage; MI is the preferred way for programs to interact with gdb.
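To make the first option concrete, here's a minimal sketch of a custom command written against gdb's Python API; the name find-frame and the matching logic are illustrative, not gdb built-ins:

    # Load inside gdb with: (gdb) source find_frame.py
    import gdb

    class FindFrame(gdb.Command):
        """find-frame NAME: select the newest frame whose function name contains NAME."""

        def __init__(self):
            super().__init__("find-frame", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            frame = gdb.newest_frame()
            while frame is not None:
                name = frame.name() or ""
                if arg in name:
                    frame.select()  # like typing "f N", but matched by name
                    print("selected frame: %s" % name)
                    return
                frame = frame.older()
            print("no frame matching %r" % arg)

    FindFrame()  # instantiating registers the command with gdb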

How can I monitor a screen session with Python?

I need to monitor a screen session in real time using a Python script. It needs to know when the display changes. I believe this can be described as: whenever stdout is flushed, or a character is entered on stdin. Is there some way to do this, perhaps with pipes?
I have some code (found here) that gets a character from stdin, and I assume it works on a pipe (if I modify the code, or change sys.stdin)?
Does a stream's flush function (stdout's, for example) get called in a pipe, or is it only ever called explicitly? My understanding is that the display is only updated when stdout is flushed.
You probably want to take a look at script(1), which already does pretty much everything you want.
Have you tried Python's curses module? It is similar to the C curses library and provides a good way to handle terminal-related I/O.
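For instance, a minimal curses sketch using the standard wrapper, which sets up and restores the terminal for you:

    import curses

    def main(stdscr):
        stdscr.addstr(0, 0, "press any key to exit")
        stdscr.refresh()
        stdscr.getch()  # blocks until a key is read

    curses.wrapper(main)  # handles initscr()/endwin() safely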

Using stdout to customize Logfile

For my current project I need to find a way to write a stdout-dependent version, or at least a formatted copy of stdout, to a logfile.
Currently I'm just using subprocess.Popen(exec, stdout=log), which writes it to a file, unformatted.
The best idea I've got is to format the output on the fly by intercepting stdout, rerouting it to a custom module, and letting that module write its own strings to the log using a combination of ifs and elifs. But I couldn't find anything in the Python 3 docs on how to do this. The most fitting answer I could find was in this question; the accepted answer is already close to what I need.
But I just fail to understand how exactly I should build the class to be passed as stdout=. Which of its methods receives the input (and in what format)? Which methods are necessary? Or might there be an even easier way to accomplish what I want?
You can use the sarge project to let a parent process capture the stdout (and/or stderr) of a subprocess, read it line by line, and do whatever you like with the lines read (including writing them, appropriately formatted, to a log file).
Disclosure: I'm the maintainer of the sarge project.
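If you'd rather stick to the standard library, the same line-by-line pattern works with plain subprocess; in this sketch the command and the ERROR-prefix formatting rule are placeholders:

    import subprocess

    # Placeholder command; substitute the program you are wrapping.
    proc = subprocess.Popen(
        ["some_command", "--flag"],
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,  # fold stderr into the same stream
        text=True,
    )

    with open("run.log", "w") as log:
        for line in proc.stdout:          # lines arrive as they are produced
            if line.startswith("ERROR"):  # example if/elif formatting rule
                log.write("!! " + line)
            else:
                log.write("   " + line)
    proc.wait()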

Running multiple processes and capturing the output in python with pygtk

I'd like to write a simple application that runs multiple programs and displays their output in multiple terminal (style) windows. In addition, I want to be able to read the stdout/stderr of these processes and search for keywords in the output.
I've tried implementing this two ways in python, the first using subprocess.Popen and the second using vte (python-vte).
I've only gotten Popen to work with polling: I have to constantly check whether the processes have data to be read, read the data, and then send it to my TextArea. It's been recommended to use gobject.io_add_watch() instead, but whenever I try that my program hangs on the second call to io_add_watch, as if it can only handle one file descriptor at a time.
vte works great, but I haven't found a reliable way to capture the output. You can get a callback when the cursor moves and then screen-scrape with get_text(), but I've already run into cases where the programs I'm viewing generate an obscene amount of tty output in one go and it ends up off the screen. There doesn't appear to be a callback that delivers the new text to be added to the window.
Any ideas?
I did something similar to this using subprocess.Popen. For each process I actually ended up redirecting stdout and stderr to a temporary file, then periodically checking the file for updates and dumping the output into a TextView.
The reason for not using a pipe to the process was that the processes themselves were volatile and prone to segfaults. When that happened I sometimes lost data between the last read and the segfault (which was the most needed data to determine the cause of the segfault).
As it turned out, sometimes I'd want to save the output from a specific process, so this method worked well for me.
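For what it's worth, a rough sketch of that temp-file approach; the command, the 100 ms interval, and the TextView update are placeholders:

    import subprocess
    import tempfile

    import gobject  # PyGTK-era API, matching the question

    log = tempfile.NamedTemporaryFile(prefix="proc-", delete=False)
    proc = subprocess.Popen(["some_command"], stdout=log, stderr=log)

    reader = open(log.name)  # separate handle for reading

    def poll_output():
        new = reader.read()  # everything written since the last poll
        if new:
            pass             # append `new` to the TextView buffer here
        return True          # returning True keeps the timeout alive

    gobject.timeout_add(100, poll_output)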
If you go with igkuk's suggestion, I got some good advice on watching files for changes in a related question. That worked pretty well for me (I was watching a log file for changes).
You want to use select to monitor the pipes from your subprocesses. It's better than polling.
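A minimal sketch of the select approach, with two placeholder commands:

    import os
    import select
    import subprocess

    # Placeholder commands; replace with the programs you want to monitor.
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE)
             for cmd in (["cmd_a"], ["cmd_b"])]

    fd_to_proc = {p.stdout.fileno(): p for p in procs}
    while fd_to_proc:
        ready, _, _ = select.select(list(fd_to_proc), [], [])
        for fd in ready:
            data = os.read(fd, 4096)
            if not data:  # EOF: that process closed its stdout
                del fd_to_proc[fd]
                continue
            # e.g. append to the TextView that belongs to fd_to_proc[fd]
            print(data.decode(errors="replace"), end="")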
