I am writing a Python program which runs a virtual terminal. Currently I am launching it like so:
import pexpect
import thread  # Python 2's low-level threading module

def create_input(child, scrollers=None, textlength=80, height=12):
    while True:
        newtext = child.readline()
        print newtext

child = pexpect.spawn("bash", timeout=30000)
thread.start_new_thread(create_input, (child,))  # note the trailing comma: args must be a tuple
This works, and I can send commands to it via child.send(command). However, I only get entire lines as output. This means that if I launch something like Nano or Links, I don't receive any output until the process has completed. I also can't see what I'm typing until I press enter. Is there any way to read the individual characters as bash outputs them?
You would need to change the output of whatever program bash is running to be unbuffered instead of line-buffered. A good number of programs have a command-line option for unbuffered output.
The expect project has a tool called unbuffer that looks like it can give you all bash output unbuffered. I have personally never used it, but there are other answers here on SO that also recommend it: bash: force exec'd process to have unbuffered stdout
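For instance, a minimal sketch of that approach with pexpect, assuming the expect package's unbuffer tool is installed and on the PATH; the key point is that read_nonblocking returns as soon as any output is available, rather than waiting for a newline the way readline() does:

import sys
import pexpect

# Sketch: wrap bash in `unbuffer` so programs it runs flush per character,
# then read one byte at a time instead of whole lines.
child = pexpect.spawn("unbuffer bash", timeout=30000)
while True:
    sys.stdout.write(child.read_nonblocking(size=1, timeout=None))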
The problem lies in something else. If you open an interactive shell, normally a terminal window is opened that runs bash, sh, csh or whatever. Note the word terminal!
In the old days we connected a terminal to a serial port (telnet does the same, but over IP); again the word terminal.
Even a dumb terminal responds to ESC codes, to report its type and to set the cursor position, colors, clear the screen, etc.
So you are starting a subprocess with interactive output, but in this setup there is no way of telling the shell and its subprocesses that they are talking to a terminal, other than with bash startup parameters, if there are any.
I suggest you enable telnetd, but only on localhost (127.0.0.1).
Within your program, make a socket and connect to localhost:telnet, and look up how to emulate a proper terminal. If a program is in line mode you are fine, but if you go to full-screen editing, somewhere you will need an array of 80x24 or 132x24 or whatever size you want, to store the characters and colors. You also need to be able to shift lines up in that array.
I have not looked, but I cannot imagine there is no telnet client example in Python, and a terminal emulator must exist too!
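For instance, a rough sketch with the standard library's telnetlib; the account, password, and prompt strings here are assumptions about your telnetd setup:

import telnetlib

# Sketch: drive a shell through a local telnetd; the telnet server does
# the terminal handling that a bare pipe cannot.
tn = telnetlib.Telnet("127.0.0.1")   # assumes telnetd on the default port
tn.read_until("login: ")             # assumed login prompt
tn.write("user\n")                   # hypothetical account
tn.read_until("Password: ")
tn.write("secret\n")                 # hypothetical password
tn.write("ls\n")
tn.write("exit\n")
print tn.read_all()                  # everything the session printed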
Another great thing is that telnet sessions clean up when the IP connection is lost, eliminating ghost processes.
Martijn
I have a Python script running on a server through SSH with the following command:
nohup python3 python_script.py >> output.txt
It was running for a very long time and (probably) created useful output, so I want to force it to stop now. How do I make it write the output it has so far, into the output.txt file?
The file was automatically created when I started running the script, but the size is zero (nothing has been written in it so far).
As Robert said in his comment, check that the output you are expecting to go to the file is actually making it there and not to stderr. If the process is already running and has been for a long time without any response or writes into your output file, I think there are 3 options:
1. It is generating output, but it's not going where you are expecting (Robert's response)
2. It is generating output, but it's buffered and for some reason has yet to be flushed
3. It hasn't generated any output
Option 3 is easy: wait longer. Options 1 & 2 are a little bit tricky. If you are expecting a significant amount of output from this process, you could check the memory consumption of the python instance running your script and see if it's growing or very large. Also you could use lsof to see if it has the file open and to get some idea what it's doing with it.
If you find that your output appears to be going somewhere like /dev/null, take a look at this answer on redirecting output for an existing process.
In the unlikely event that you have a huge buffer that hasn't been flushed, you could try using ps to get the PID and then use kill -STOP [PID] to pause the process, and see how far you can get with GDB.
Unless it would be extremely painful, I would probably just restart the whole thing, but add periodic flushing to the script, and maybe some extra status reporting so you can tell what is going on. It might help too if you could post your code, since there may be other options available in your situation depending on how the program is written.
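If you do restart it, a minimal sketch of the periodic-flushing idea (process_log_entries is a hypothetical stand-in for whatever your script actually does):

import sys

def process_log_entries():
    # hypothetical generator standing in for the real log-parsing work
    for i in range(3):
        yield "entry %d" % i

for entry in process_log_entries():
    print(entry)
    sys.stdout.flush()   # push each line through `>> output.txt` immediately

Alternatively, running the script as nohup python3 -u python_script.py >> output.txt disables stdout buffering without touching the code.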
I am trying to execute commands using communicate in the terminal that I spawned.
sitecreate_proc = subprocess.Popen(['gnome-terminal'], stdout=subprocess.PIPE, stdin=subprocess.PIPE)
out = sitecreate_proc.communicate("pwd")
print out
the "out" variable is always empty.
Displaying the terminal is necessary.
gnome-terminal is a graphical application and, as one, likely doesn't use the standard streams that it got from the parent process.
You need to run console applications instead to communicate with them -
either the commands themselves:
>>> subprocess.check_output("pwd")
'/c/Users/Ivan\n'
or an interactive shell command, then send input to it and receive responses as per Interacting with bash from python
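A minimal sketch of the interactive-shell route, with bash standing in for whatever shell you need:

import subprocess

# Sketch: talk to a console shell directly; unlike gnome-terminal,
# bash actually uses the pipes we hand it.
shell = subprocess.Popen(['bash'],
                         stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE,
                         stderr=subprocess.STDOUT)
out, _ = shell.communicate('pwd\n')   # one shot: send input, read output, wait
print out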
If you just need to output the streams' data to the same console that Python is using, you can simply write out their data as you're getting it; either automatically with tee, or by hand at appropriate moments.
If, instead, you need to launch an independent terminal emulator window on a desktop and interact with it via IPC, that's another matter entirely - namely, UI automation, and has nothing to do with standard console streams.
The most common way for that in Linux is D-Bus (there are other options outlined at the previous link). People report, however (as of 2012), that gnome-terminal doesn't support D-Bus, and you have to jump through hoops to interact with it. There is an article on controlling konsole via D-Bus, though.
As I remember, communicate() returns a tuple:
communicate() returns a tuple (stdoutdata, stderrdata)
so you can't use communicate("pwd") like that. After gnome-terminal returns, try to get the result with sitecreate_proc.communicate()[0] for stdoutdata, or sitecreate_proc.communicate()[1] for stderrdata.
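For illustration, a quick sketch of that tuple with a plain console command:

import subprocess

proc = subprocess.Popen(['pwd'],
                        stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
stdoutdata, stderrdata = proc.communicate()
print stdoutdata   # e.g. '/home/user\n'
print stderrdata   # '' when nothing was written to stderr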
I am running a script remotely on a server via SSH and Python. This script looks through log files and returns some information based on each log entry it encounters. The problem I am running into is that although my script spits out the information as soon as it hits each log entry, my local machine needs to wait for the entire process to finish before it can read the lines from the ssh connection's stdout.
ssh = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE)
with open(remote_ip+'.hits', 'w') as f:
    for line in ssh.stdout:
        print line
Essentially this code prints all of the results all at once, at the end. I was wondering if there was a way for me to print out the contents of stdout as it was being produced on the server. Sorry if this is unclear, if something is ambiguous I will do my best to clarify it. Thanks!
EDIT: I should also add that I would preferably like to do this without any external packages, just built-in modules from 2.6 if possible.
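For what it's worth, a sketch that stays within the 2.6 standard library: in Python 2 the file-object iterator reads ahead into an internal buffer, while iter(readline, '') hands over each line as soon as it arrives (cmd and remote_ip as in the question):

import subprocess

ssh = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE)
with open(remote_ip + '.hits', 'w') as f:
    for line in iter(ssh.stdout.readline, ''):
        print line,    # trailing comma: the line already ends in '\n'
        f.write(line)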
I'm making a webshell in Python, so, in effect, the user will use his favourite shell through a web server. My idea is to create a subprocess.Popen with bash -i and to make two functions, read and write, in the webapp that, respectively, read the stdout or write to the stdin of the subprocess.
I start the shell with:
p = subprocess.Popen(["script","-f","-c","bash -i -l"],stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE)
The writing is OK, but when I read the standard output I don't see what the user is typing:
while select.select([p.stdout], [], [], 0)[0] != [] or select.select([p.stderr], [], [], 0)[0] != []:
    if select.select([p.stdout], [], [], 0)[0] != []:
        data += p.stdout.read(1)
    if select.select([p.stderr], [], [], 0)[0] != []:
        data += p.stderr.read(1)
I could force the echoing by adding the user input to the output, but that's not very elegant because, if the user runs some program that prevents echoing (like a password prompt), the input would still be shown in the web page.
So, is there a way, like an option in the bash parameters, to force it to add the input to the output?
PS: If you wonder why I'm using script to run bash, it's because running bash alone causes Python to stop itself with
[1]+ Stopped python RPCServer.py
Although I've not figured out WHY it happens, I've found how to prevent it in this question: Main Python Process is stopped using subprocess calls in SocketServer
So, is there a way, like an option in the bash parameters, to force it to add the input to the output?
Yes: you can use the -v command-line option ("Print shell input lines as they are read.").
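A minimal sketch of that change, i.e. the Popen call from the question with only the -v flag added:

import subprocess

# Sketch: bash -v prints each input line back as it reads it, so the
# user's typing shows up in the captured output.
p = subprocess.Popen(["script", "-f", "-c", "bash -i -l -v"],
                     stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE,
                     stderr=subprocess.PIPE)

The echoed lines go to stderr, which the select() loop above already reads.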
I know there are similar questions posted already, but none of the methods I have seen seems to work. I want to launch the application xfoil, on Mac, with Python's subprocess, and send xfoil a bunch of commands with a script (xfoil is an application that runs in a terminal window and you interact with it through text commands). I am able to launch xfoil with the script, but I can't seem to find out how to send commands to it. This is the code I am currently trying:
import subprocess as sp
xfoil = sp.Popen(['open', '-a', '/Applications/Xfoil.app/Contents/MacOS/Xfoil'], stdin=sp.PIPE, stdout=sp.PIPE)
stdout_data = xfoil.communicate(input='NACA 0012')
I have also tried
xfoil.stdin.write('NACA 0012\n')
in order to send commands to xfoil.
As the man page says,
The open command opens a file (or a directory or URL), just as if you had double-clicked the file's icon.
Ultimately, the application gets started by LaunchServices, but that's not important; what's important is that it's not a child of your shell or your Python script.
Also, the whole point of open is to open the app itself, so you don't have to dig into it and find the Unix executable file. If you already have that, and want to run it as a Unix executable… just run it:
xfoil = sp.Popen(['/Applications/Xfoil.app/Contents/MacOS/Xfoil'], stdin=sp.PIPE, stdout=sp.PIPE)
As it turns out, in this case, MacOS/Xfoil isn't even the right program; it's apparently some kind of wrapper around Resources/xfoil, which is the actual equivalent of what you get as /usr/local/bin/xfoil on Linux. So you want to do this:
xfoil = sp.Popen(['/Applications/Xfoil.app/Contents/Resources/xfoil'], stdin=sp.PIPE, stdout=sp.PIPE)
(Also, technically, your command line shouldn't even work at all; the -a specifies an application, not a Unix executable, and you're supposed to pass at least one file to open. But because LaunchServices can launch Unix executables as if they were applications, and open doesn't check that the arguments are valid, open -a /Applications/Xfoil.app/Contents/MacOS/Xfoil ends up doing effectively the same thing as open /Applications/Xfoil.app/Contents/MacOS/Xfoil.)
For the benefit of future readers, I'll include this information from the comments:
If you just write a line to stdin and then return from the function/fall off the end of the main script/etc., the Popen object will get garbage collected, closing both of its pipes. If xfoil hasn't finished running yet, it will get an error the next time it tries to write any output, and apparently it handles this by printing Fortran runtime error: end of file (to stderr?) and bailing. You need to call xfoil.wait() (or something else that implicitly waits) to prevent this from happening.
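A minimal sketch of that fix; the 'QUIT' line is an assumption about how an xfoil session is normally ended:

import subprocess as sp

xfoil = sp.Popen(['/Applications/Xfoil.app/Contents/Resources/xfoil'],
                 stdin=sp.PIPE, stdout=sp.PIPE)
# communicate() writes the input, closes stdin, collects the output and
# waits for the process, so the pipes are not torn down mid-run
out, _ = xfoil.communicate('NACA 0012\nQUIT\n')   # 'QUIT' assumed to quit xfoil
print out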