Creating a new terminal/shell window to simply display text - python

I want to pipe [edit: real-time text] the output of several subprocesses (sometimes chained, sometimes parallel) to a single terminal/tty window that is not the active python shell (be it an IDE, command-line, or a running script using tkinter). IPython is not an option. I need something that comes with the standard install. I'd prefer an OS-agnostic solution, but it needs to work on XP/Vista.
I'll post what I've tried already if you want it, but it’s embarrassing.

A good solution in Unix would be named pipes. I know you asked about Windows, but there might be a similar approach in Windows, or this might be helpful for someone else.
on terminal 1:
mkfifo /tmp/display_data
myapp >> /tmp/display_data
on terminal 2 (bash):
tail -f /tmp/display_data
Edit: changed terminal 2 command to use "tail -f" instead of infinite loop.
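If the writer side is itself Python, here is a minimal sketch of it (my addition; it assumes a Unix system where /tmp/display_data was already created with mkfifo as above):

# a minimal sketch of the writer, assuming /tmp/display_data exists (mkfifo);
# note that opening a FIFO for writing blocks until a reader (tail -f) attaches
fifo = open('/tmp/display_data', 'w')
fifo.write('status: running\n')
fifo.flush()  # flush so the line appears in the reader window immediately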

You say "pipe" so I assume you're dealing with text output from the subprocesses. A simple solution may be to just write output to files?
e.g. in the subprocess:
Redirect output to %TEMP%\output.txt
On exit, copy output.txt to a directory your main process is watching.
In the main process:
Every second, examine directory for new files.
When files found, process and remove them.
You could encode the subprocess name in the output filename so you know how to process it.
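A minimal sketch of the watcher loop in the main process might look like this (my addition; the directory name and the one-second period are illustrative, not from the answer):

import os
import time

watch_dir = r'C:\temp\watch'  # hypothetical directory the subprocesses copy into

while True:
    for name in os.listdir(watch_dir):
        path = os.path.join(watch_dir, name)
        with open(path) as f:
            print(f.read())   # process the contents; the filename says which subprocess
        os.remove(path)       # done with this file
    time.sleep(1)             # examine the directory every second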

You could make a producer-consumer system, where lines are sent over a socket (nothing fancy here).
The consumer would be a multithreaded socket server listening for connections and putting all lines into a Queue. A separate thread would get items from the queue and print them to the console. The program can be run from the cmd console or from the Eclipse console as an external tool without much trouble.
From your point of view, it should be realtime. As a bonus, you can place producers and consumers on separate boxes. Producers can even form a network.
Some examples of socket programming with Python can be found here. Look here for a TCP echo server example and here for a TCP "hello world" socket client.
There is also an extension for Windows that enables the use of named pipes.
On Linux (possibly Cygwin?) you could just tail -f a named FIFO.
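Here is a minimal sketch of the consumer side (my addition; the port number and the line-oriented protocol are assumptions):

import socket
import threading
import queue

lines = queue.Queue()

def handle(conn):
    # one thread per producer connection: push every received line onto the queue
    with conn, conn.makefile() as f:
        for line in f:
            lines.put(line.rstrip('\n'))

def printer():
    # the display thread: drain the queue and print to the console
    while True:
        print(lines.get())

threading.Thread(target=printer, daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 9999))  # arbitrary port
server.listen(5)
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()

Producers then just connect to that port and write newline-terminated text.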
Good luck!

Related

Swift app and Python code fail to kill a certain process on Mac

I would imagine there are many different possible solutions to this multi-layered problem. I am trying to make a Swift-based Mac app that can manage all my Discord bots from one window. I have gotten it to turn on the Discord bots successfully (using a global thread, NOT Process objects). However, when I quit the app, I noticed the Python process launched by the app keeps running, and so does the Discord bot. Instead of the app killing all child processes, it switches the parent process of the Python process to null when quit.

I don't know Swift very well, so I had some trouble getting it to kill all child processes when it closes (and yes, I know there is something with Info.plist, but that is only for newer Xcode versions than I can install). To fix this, I made the applicationWillTerminate code in AppDelegate.swift execute some Python code that kills any process which mentions the files of the one bot the app currently works with. The bot is stored in a folder called roleManager. Here's the Python code:
import os
import subprocess
import re

subprocess = subprocess.Popen(['ps', '-A'], stdout=subprocess.PIPE)
output, error = subprocess.communicate()
print(output)
roleProcesses = re.findall("roleManager.{20}", str(output))
# this regex probably could have been better but it works
PIDs = [i.split('\\n')[1] for i in roleProcesses]
for pid in PIDs:
    with open('killProcesses.sh', 'w') as file:
        file.write(f'kill {int(pid)}')
    os.system('sudo /Users/nathanwolf/Documents/coding/PycharmProjects/botManagerPythonSide/killProcesses.sh')
subprocess.communicate() returns a bytes object with a list of processes, formatted like so (I'm pretty sure about this):
CPU time, command associated with process (like usr/local/bin/python3.9 some/python/file), \n (not an enter character, actually the literal characters \n), PID ??
The sudo is only there because one time it said it didn't have permission to kill one of the running bots. This approach has had two problems: one time it killed its own Python process (despite not being in the folder roleManager) and crashed PyCharm, and most of the time it fails to kill the bot. For debugging, I looked for the bot's PID in the subprocess.communicate() output and it was associated with the following command:
/Library/Developer/PrivateFrameworks/CoreSimulator.framework/Versions/A/XPCServices/SimulatorTrampoline.xpc/Contents/MacOS/SimulatorTrampoline.
The way I see it, there are two pathways to a possible solution: get Swift to kill child processes (not sure why that isn't a default), or get Python to successfully kill the bot (is the above process related to this?). I would prefer the first one, but either one is fine.
Let me know if you need more info.
Thanks so much in advance!
Found the solution in another Stack Overflow question. I just had to execute the following terminal command:
pgrep -f Python | xargs kill -9. This kills all running Python apps, which in my case are all controlled by the app, so that works as a patch for me.
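For completeness, the same pipeline can be run from the Python side; wrapping it in subprocess.run is my assumption, the shell command itself is the actual fix:

# a minimal sketch: run the pgrep/kill pipeline from Python
# WARNING: this kills EVERY running Python process on the machine,
# exactly like typing the command in a terminal
import subprocess

subprocess.run('pgrep -f Python | xargs kill -9', shell=True)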

Daemonising python project, that uses Twisted

If we use PEP 3143 and its reference implementation http://pypi.python.org/pypi/python-daemon
then it looks impossible to get Twisted working, since during daemonising ALL possible file descriptors are explicitly closed, which includes pipes.
When Twisted tries to call os.pipe() and then write to it, it gets a bad file descriptor.
As I see it, daemonising as described by this PEP is not suited for networking?
And probably that's the reason why twisted exists.
Edit:
I should point out that the question is more "Why does the PEP effectively make it impossible to create a network application?" rather than "How to do it".
Twisted breaks these rules in order to work.
It doesn't close all the open file descriptors: just the ones not in the files_preserve attribute. You could probably coerce this to work by figuring out the FD of the waker and all open sockets in the reactor and then passing that to files_preserve... but why bother? Just use twistd and have twisted daemonize itself.
Better yet, use twistd -n and let your process get monitored by some other system tool, and don't bother with daemonization at all.
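For illustration, a minimal sketch of files_preserve (my addition; the log file is a stand-in for whatever descriptors must survive daemonization):

# a minimal sketch, assuming the python-daemon package; files_preserve accepts
# file objects or raw file descriptors that should NOT be closed on detach
import daemon

log_file = open('/tmp/myapp.log', 'a')  # hypothetical file to keep open
with daemon.DaemonContext(files_preserve=[log_file]):
    # everything here runs in the detached daemon; log_file was not closed
    log_file.write('daemon started\n')
    log_file.flush()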
Feel free to use this daemon: http://www.jejik.com/articles/2007/02/a_simple_unix_linux_daemon_in_python/
For how to mix it with Twisted, see here:
http://michael-xiii.blogspot.com/2011/10/twisted.html (warning: Russian text ahead, but the Python code is demonstrative enough)
supervisord + upstart
The practice of closing all open file descriptors is a response to the possibility that the daemonizing process inherits some open files from the parent process. For example, you can open dozens of files in one process (with, say, os.open()) and then invoke a subprocess that inherits them. As a subprocess, you probably don't have an easy way to know which file descriptors inherited from the parent are useful (unless you pass that along with command-line arguments), and you certainly don't want stdin, stdout or stderr, so it's perfectly reasonable to close all open files before doing anything else.
A daemonizing process will then take some additional steps to become a daemon (as laid out in the PEP).
Once the process is fully detached from any kind of terminal, it can start opening files and connections as it needs. It'll open its log files, its configuration files, and its network connections.
Others have mentioned that Twisted, via the twistd tool, already does a pretty good job of all of this, and you don't need to use an extra module. If you don't want to use twistd (for some reason) but you do want to use Twisted, you could use something external, but you should daemonize first, then import Twisted and the rest of your application code, and open network connections last.
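A minimal sketch of that ordering (my addition, assuming the python-daemon package): detach first, bring in Twisted afterwards.

import daemon

with daemon.DaemonContext():
    # import and configure Twisted only after the process has detached,
    # so the reactor's waker pipe and sockets are created post-daemonization
    from twisted.internet import reactor
    # ... set up listeners/clients here, then:
    reactor.run()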

Python: Why does my SMTP script freeze my computer?

So I wrote a little multithreaded SMTP program. The problem is every time I run it, it freezes the computer shortly after. The script appears to still work, as my network card is still lighting up and the emails are received, but in some cases it will lock up completely and stop sending the emails.
Here's a link to my two script files. The first is the one used to launch the program:
readFile.py
newEmail.py
First, you're using popen, which creates subprocesses, i.e. processes, not threads. I'll assume this is what you meant.
My guess would be that the program gets stuck in a loop where it generates processes continuously, which the OS will probably dislike. (That kind of thing is known as a fork bomb, and it is a good way to freeze Linux unless a process limit has been set with ulimit.) I couldn't find the bug, though; if I were you, I'd log a message each time I spawn or kill a subprocess, and if everything looks normal, watch the system closely (ps or top on Unix systems) to see if the processes are really being killed.
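A minimal sketch of that logging idea (my addition; the spawn/reap helpers are hypothetical names, not from the original scripts):

# log every spawn and exit so a runaway process loop shows up immediately
import logging
import subprocess

logging.basicConfig(level=logging.INFO, format='%(asctime)s %(message)s')

def spawn(cmd):
    proc = subprocess.Popen(cmd)
    logging.info('spawned pid %d: %r', proc.pid, cmd)
    return proc

def reap(proc):
    proc.wait()
    logging.info('pid %d exited with status %d', proc.pid, proc.returncode)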

Need a workaround: Python's select.select() doesn't work with subprocess' stdout?

From within my master python program, I am spawning a child program with this code:
child = subprocess.Popen(..., stdout=subprocess.PIPE, stdin=subprocess.PIPE)
FWIW, the child is a PHP script which needs to communicate back and forth with the python program.
The master python program actually needs to listen for communication from several other channels - other PHP scripts spawned using the same code, or socket objects coming from socket.accept(), and I would like to use select.select() as that is the most efficient way to wait for input from a variety of sources.
The problem I have is that select.select() under Windows does not work with the subprocess's stdout file descriptor (this is documented), and it looks like I will be forced to either:
A) Poll the PHP scripts to see if they have written anything to stdout. (This system needs to be very responsive; I would need to poll at least 1,000 times per second!)
B) Have the PHP scripts connect to the master process and communicate via sockets instead of stdout/stdin.
I will probably go with solution (B), because I can't bring myself to make the system poll at such a high frequency, but it seems a sad waste of resources to reconnect with sockets when stdout/stdin would have done just fine.
Is there some alternative solution which would allow me to use stdout and select.select()?
Unfortunately, many uses of pipes on Windows don't work as nicely as they do on Unix, and this is one of them. On Windows, the better solution is probably to have your master program spawn threads to listen to each of its subprocesses. If you know the granularity of data that you expect back from your subprocess, you can do blocking reads in each of your threads, and then the thread will come alive when the IO unblocks.
Alternatively, (I have no idea if this is viable for your project), you could look into using a Unix-like system, or a Unix-like layer on top of Windows (e.g. Cygwin), where select.select() will work on subprocess pipes.
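A minimal sketch of the thread-per-subprocess approach (my addition; it assumes line-oriented output from the child, and the PHP command is a hypothetical stand-in):

import queue
import subprocess
import threading

def reader(proc, tag, out_q):
    for line in proc.stdout:      # blocking read: the thread wakes as data arrives
        out_q.put((tag, line))
    out_q.put((tag, None))        # EOF marker for this child

q = queue.Queue()
child = subprocess.Popen(['php', 'worker.php'],  # hypothetical child command
                         stdout=subprocess.PIPE, stdin=subprocess.PIPE)
threading.Thread(target=reader, args=(child, 'php-1', q), daemon=True).start()

while True:
    tag, line = q.get()           # the master blocks here instead of in select()
    if line is None:
        break
    print(tag, line)

Sockets from socket.accept() can feed the same queue from their own reader threads, so the master still waits in a single place.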

Python Printing StdOut As It Received

I'm trying to wrap a simple (Windows) command-line tool in a PyQt GUI app that I am writing. The problem I have is that the command-line tool writes its progress to stdout (it's a server reset command, so you get "Attempting to stop" and "Restarting" type output).
What I am trying to do is capture the output so I can display it as part of my app. I assumed it would be quite simple to do something like the following:
import os
import subprocess as sub

cmd = "COMMAND LINE APP NAME -ARGS"
proc = sub.Popen(cmd, shell=True, stdout=sub.PIPE).stdout
while 1:
    line = proc.readline()
    if not line:
        break
    print line
This partially works, in that I do get the contents of stdout, but instead of receiving the progress messages as they are sent, I get them all at once when the command-line application exits, as if stdout were flushed in one go.
Is there a simple answer?
Interactive communication through stdin/stdout is a common problem.
You're in luck though, with PyQt you can use QProcess, as described here:
http://diotavelli.net/PyQtWiki/Capturing_Output_from_a_Process
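A minimal sketch of the QProcess approach from that page (my addition; written against PyQt4 to match the wiki's era, with the command name kept as a placeholder from the question):

import sys
from PyQt4.QtCore import QCoreApplication, QProcess

app = QCoreApplication(sys.argv)
proc = QProcess()

def on_output():
    # fires every time the child writes, so progress shows up as it happens
    print(str(proc.readAllStandardOutput()))

proc.readyReadStandardOutput.connect(on_output)
proc.finished.connect(app.quit)
proc.start("COMMAND LINE APP NAME", ["-ARGS"])  # placeholder from the question
sys.exit(app.exec_())

In a real GUI you would append the text to a widget instead of printing, but the signal wiring is the same.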
Do I understand the question?
I believe you're running something like "echo first; sleep 60; echo second" and you want to see the "first" well ahead of the "second", but they both spit out at the same time.
The reason you're having issues is that the operating system buffers the output of processes in its memory. It will only take the trouble of sending the output to your program if the buffer has filled, or the other program has ended. So we need to dig into the OS and figure out how to tell it "Hey, gimme that!" This is generally known as asynchronous or non-blocking mode.
Luckily someone has done the hard work for us.
This guy has added send() and recv() methods to the Python built-in Popen class.
It also looks like he fixed the bugs that people found in the comments.
Try it out:
http://code.activestate.com/recipes/440554/
