Is there a way to open a second Python console, let the new console run while the original console keeps going, and have the new console send its data back to the original console (in the form of a variable) when it finishes?
Perhaps you should look into the multiprocessing or threading modules. These help you spawn off child processes (or threads) from your original (parent) program.
The subprocess module might be what you are looking for. Note, however, that you get the output of the process only after the entire program finishes. This means that if the program you are trying to run runs forever, you will not be able to see the output until it quits (either by forcing it or by using termination methods).
An example of how you would assign the output to a variable would be:
output, error = your_process.communicate()
The output part of this is what you would be using (based on your question). However, error holds whatever the process writes to its standard error stream. If you are not looking to capture errors, you can simply assign that part to _.
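For instance, a minimal self-contained sketch (the child script name your_script.py is just a placeholder):
import subprocess

# start the child and capture both of its output streams as text
your_process = subprocess.Popen(["python", "your_script.py"],
                                stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE,
                                universal_newlines=True)
output, _ = your_process.communicate()  # discard stderr by assigning it to _
print(output)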
Also note that if you are passing command-line arguments, I would suggest using the shlex library for splitting your string into arguments. You can take a regular string such as var = "mypythonprogram.py argument1 argument2", call arguments = shlex.split(var), and then supply that list as the arguments for the subprocess.
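A quick sketch of that (mypythonprogram.py is the placeholder name from above):
import shlex
import subprocess

var = "mypythonprogram.py argument1 argument2"
arguments = shlex.split(var)  # ['mypythonprogram.py', 'argument1', 'argument2']
subprocess.call(["python"] + arguments)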
Another option, if you don't need to interact with the program, would be using threads. There are many questions on Stack Overflow about them, as well as plenty of documentation, both official and on other websites.
Read up on Python multiprocessing. It has examples of exchanging objects between processes.
It sounds like you want to do something like what the multiprocessing library offers. Without more info, all I can do is point you to the docs.
For instance:
http://www.ibm.com/developerworks/aix/library/au-multiprocessing/
or
http://docs.python.org/library/multiprocessing.html
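For instance, a minimal sketch of a child process sending a result back to the parent through a multiprocessing.Queue (the arithmetic is just a stand-in for real work):
import multiprocessing

def worker(queue):
    result = 21 * 2        # stand-in for the real computation
    queue.put(result)      # send the value back to the parent

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=worker, args=(queue,))
    p.start()
    # ...the parent is free to keep going here...
    value = queue.get()    # blocks until the child sends its result
    p.join()
    print(value)           # 42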
If you are on Windows, you can use the win32console module to open a second console for your thread's or subprocess's output.
Here is some sample code:
import win32console
import multiprocessing

def subprocess(queue):
    win32console.FreeConsole()   # frees the subprocess from using the main console
    win32console.AllocConsole()  # creates a new console; all input and output of
                                 # the subprocess goes to this new console
    while True:
        print(queue.get())
        # prints any output produced by the main script and passed to the
        # subprocess through the queue

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    multiprocessing.Process(target=subprocess, args=[queue]).start()
    while True:
        print("Hello World")
        queue.put("Hello to subprocess console")
        # sends the above string to the subprocess, which prints it into its
        # own console, and whatever else you want to do in your main process
You can also do this with threading, but you have to use the queue module if you want the queue functionality, as the threading module doesn't provide a queue of its own.
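A minimal sketch of the same pattern with a thread and queue.Queue:
import queue      # threading has no queue of its own; use the queue module
import threading
import time

q = queue.Queue()

def worker(q):
    while True:
        print(q.get())  # consume messages sent from the main thread

threading.Thread(target=worker, args=(q,), daemon=True).start()
q.put("Hello to worker thread")
time.sleep(1)  # give the daemon thread a moment to print before exiting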
Here is the win32console module documentation
Related
I want to store the output of the terminal command top into a file, using Python.
In the terminal, when I type top and hit enter, I get output that is real-time, so it keeps updating. I want to store this in a file for a fixed duration and then stop writing.
file = open("data.txt", "w")
file.flush()
import os, time
os.system("top>>data.txt -n 1")
time.sleep(5)
exit()
file.close()
I have tried to use time.sleep() and then exit(), but it doesn't work; the only way top can be stopped is in the terminal, with Control + C.
The process keeps running and data is continuously written to the file, which is not ideal, as one would guess.
For clarity: I know how to write the output to the file, I just want to stop writing after a fixed period.
os.system will wait for the end of the child process. If you do not want that, the Pythonic way is to use the subprocess module directly:
import subprocess

timeout = 60  # let top run for one minute
file = open("data.txt", "w")
top = subprocess.Popen(["top", "-n", "1"], stdout=file)  # every argument must be a string
try:
    top.wait(timeout=timeout)       # wait at most timeout seconds
except subprocess.TimeoutExpired:
    top.terminate()                 # and terminate the child
The paranoid way (highly recommended for robust code) would be to use the full path of top. I have not done so here, because it may depend on the actual system...
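For example, shutil.which (available since Python 3.3) can look the full path up for you:
import shutil

top_path = shutil.which("top")  # e.g. "/usr/bin/top" on many Linux systems
if top_path is None:
    raise FileNotFoundError("top not found on PATH")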
The issue you could be facing is that os.system blocks the current process, so the rest of your script will not run until the command you started has completed execution.
I think what you want is to execute your console command on another thread, so that the thread running your Python script can continue while the command runs in the background. See run a python program on a new thread for more info.
I'd suggest something like (this is untested):
import os
import time
import multiprocessing
myThread = multiprocessing.Process(target=os.system, args=("top>>data.txt -n 1",))
myThread.start()
time.sleep(5)
myThread.terminate()
That being said, you may need to consider the thread safety of os.system(); if it is not thread safe, you'll need to find an alternative that is.
Something else worth noting (and that I know little about) is that it may not be ideal to terminate threads in this way; see some of the answers here: Is there any way to kill a Thread?
I'm trying to run a program that requires successive interactions (I have to answer with strings: '0' or '1') from within my Python script.
My code:
from subprocess import Popen, PIPE
command = ['program', '-arg1', 'path/file_to_arg1']
p = Popen(command, stdin=PIPE, stdout=PIPE)
p.communicate('0'.encode())
The last two lines work for the first interaction, but after that the program prints all the following questions on the screen without waiting for their respective inputs. I basically need to answer the first question, wait until the program deals with it and prints the second question, then answer the second question, and so on.
Any ideas?
Thanks!
PS: I'm using Python 3.3.4
The subprocess module is designed for single-shot interactions: write to a process, read the result, and stop. It is challenging to carry on an ongoing back-and-forth interaction with a Unix process, where you keep taking turns reading and writing. I recommend using a library built for the task instead of rewriting all the necessary logic from scratch.
There is a classic library named Expect, which works well for interacting with a child process. There is a Python implementation named Pexpect (read the docs here). I recommend using Pexpect, or a similar library.
Pexpect works like this:
import pexpect

# spawn a subprocess.
# then wait for expected output from the child process,
# and send additional commands to the child.
child = pexpect.spawnu('ftp ftp.openbsd.org')
child.expect('(?i)name .*: ')
child.sendline('anonymous')
child.expect('(?i)password')
child.sendline('pexpect#sourceforge.net')
child.expect('ftp> ')
child.sendline('cd /pub/OpenBSD/3.7/packages/i386')
child.expect('ftp> ')
For catching stdout in real time from a subprocess, have a look at this thread: catching stdout in realtime from subprocess
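The basic idea from that thread, as a minimal sketch (ping is only a stand-in for the real program):
import subprocess

proc = subprocess.Popen(["ping", "-c", "3", "localhost"],
                        stdout=subprocess.PIPE,
                        universal_newlines=True)
for line in proc.stdout:  # yields lines as the child produces them
    print("got:", line, end="")
proc.wait()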
I am writing a test framework in Python for a command line application. The application will create directories, call other shell scripts in the current directory, and will output on stdout.
I am trying to treat {Python-SubProcess, CommandLine} combo as equivalent to {Selenium, Browser}. The first component plays something on the second and checks if the output is expected. I am facing the following problems
The Popen construct takes a command and returns after that command is completed. What I want is a live handle to the process, so I can run further commands + verifications and finally close the shell once done.
I am okay with writing some infrastructure code for achieving this, since we have a lot of command line applications that need testing like this.
Here is a sample code that I am running
p = subprocess.Popen("/bin/bash", cwd=test_dir)
p.communicate(input="hostname")  # I expect the hostname to be printed out
p.communicate(input="time")      # I expect the current time to be printed out
but the process hangs, or maybe I am doing something wrong. Also, how do I "grab" the output of that subprocess so I can assert that something exists?
subprocess.Popen allows you to continue execution after starting a process. Popen objects expose wait(), poll(), and many other methods for communicating with a child process while it is running. Isn't that what you need?
See Popen constructor and Popen objects description for details.
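For instance, a small sketch of polling a child while the parent keeps working (sleep is just a stand-in command):
import subprocess
import time

p = subprocess.Popen(["sleep", "3"])
while p.poll() is None:  # None means the child is still running
    time.sleep(0.5)      # the parent is free to do other work here
print("exit code:", p.returncode)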
Here is a small example that runs Bash on Unix systems and executes a command:
from subprocess import Popen, PIPE

p = Popen(['/bin/sh'], stdout=PIPE, stderr=PIPE, stdin=PIPE,
          universal_newlines=True)  # text mode, so we can pass a plain string
sout, serr = p.communicate('ls\n')
print('OUT:')
print(sout)
print('ERR:')
print(serr)
UPD: communicate() waits for process termination. If you do not need that, you may use the appropriate pipes directly, though that usually gives you rather ugly code.
UPD2: You updated the question. Yes, you cannot call communicate twice for a single process. You may either give all commands you need to execute in a single call to communicate and check the whole output, or work with pipes (Popen.stdin, Popen.stdout, Popen.stderr). If possible, I strongly recommend the first solution (using communicate).
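For instance, a sketch in the spirit of the snippet above, sending several commands in a single communicate() call:
from subprocess import Popen, PIPE

p = Popen(['/bin/sh'], stdout=PIPE, stderr=PIPE, stdin=PIPE,
          universal_newlines=True)
sout, serr = p.communicate('hostname\ndate\n')  # both commands, one call
print(sout)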
Otherwise you will have to write a command to its input and wait some time for the desired output. What you need is a non-blocking read to avoid hanging when there is nothing to read. Here is a recipe for emulating non-blocking mode on pipes using threads. The code is ugly and strangely complicated for such a trivial purpose, but that's how it's done.
Another option could be passing p.stdout.fileno() to a select.select() call, but that won't work on Windows (on Windows, select operates only on objects originating from WinSock). You may consider it if you are not on Windows.
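A sketch of that approach (POSIX only; it assumes p is a Popen object created with stdout=PIPE, as above):
import os
import select

# wait up to one second for the child to produce output
ready, _, _ = select.select([p.stdout], [], [], 1.0)
if ready:
    chunk = os.read(p.stdout.fileno(), 1024)  # read whatever is available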
Instead of using plain subprocess you might find the Python sh library very useful:
http://amoffat.github.com/sh/
Here is an example how to build in an asynchronous interaction loop with sh:
http://amoffat.github.com/sh/tutorials/2-interacting_with_processes.html
Another (old) library for solving this problem is pexpect:
http://www.noah.org/wiki/pexpect
Currently I'm trying to convert my little Python script to support multiple threads/cores. I've been reading about the multiprocessing module for several days now, and I've also been trying to get it to suit my needs for some time, but I still don't have a clue why it won't work.
This is the working code, and this is my approach to implementing the pool workers. As there are no locks in place, and I didn't want to make it too complicated at first, I have already disabled the logging to file.
Still, it doesn't work. It doesn't even output any kind of error message. After running it, it just displays the welcome message and then keeps running, but without producing any of the desired output, which would be two lines per converted file (before + after converting).
All your workers do is wait for the started subprocesses to finish. They don't have any real work to do, as that is performed by the external subprocesses, so they will be idle all the time.
Using multiprocessing for what you do really is overkill; it's much more appropriate to use threads for that.
If you want to learn how to do multiprocessing, try something that involves inter-process communication, synchronisation, pipes, ...
But to also address your question:
Have a look at what arguments subprocess.call takes. You call it with a single space-separated command string; if you want that to work, you have to pass shell=True, otherwise the whole string is interpreted as the executable's name.
The preferred way to call a program using subprocess is to specify the program and its arguments as a list:
subprocess.Popen(['/path/to/program', 'arg1', 'arg2'], *otherarguments)
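To make the difference concrete (the program path is hypothetical):
import subprocess

# this fails: the whole string is taken as one executable name
# subprocess.call('/path/to/program arg1 arg2')

# this works, because the shell splits the string:
subprocess.call('/path/to/program arg1 arg2', shell=True)

# preferred: no shell involved, arguments passed exactly as given
subprocess.call(['/path/to/program', 'arg1', 'arg2'])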
I'd like to execute multiple commands in a standalone application launched from a Python script, using pipes. The only way I could reliably pass the commands to the program's stdin was using Popen.communicate, but it closes the program after the command gets executed. If I use Popen.stdin.write, the command executes only one time out of five or so; it does not work reliably. What am I doing wrong?
To elaborate a bit:
I have an application that listens to stdin for commands and executes them line by line.
I'd like to be able to run the application and pass various commands to it, based on the user's interaction with a GUI.
This is a simple test example:
import os, string
from subprocess import Popen, PIPE
command = "anApplication"
process = Popen(command, shell=False, stderr=None, stdin=PIPE)
process.stdin.write("doSomething1\n")
process.stdin.flush()
process.stdin.write("doSomething2\n")
process.stdin.flush()
I'd expect to see the result of both commands but I don't get any response. (If I execute one of the Popen.write lines multiple times it occasionally works.)
And if I execute:
process.communicate("doSomething1")
it works perfectly but the application terminates.
If I understand your problem correctly, you want to interact (i.e. send commands and read the responses) with a console application.
If so, you may want to check an Expect-like library, like pexpect for Python: http://pexpect.sourceforge.net
It will make your life easier, because it will take care of synchronization, the problem that ddaa also describes. See also:
http://www.noah.org/wiki/Pexpect#Q:_Why_not_just_use_a_pipe_.28popen.28.29.29.3F
The real issue here is whether the application is buffering its output, and, if it is, whether there's anything you can do to stop it. Presumably when the user generates a command and clicks a button on your GUI, you want to see the output from that command before you require the user to enter the next.
Unfortunately there's nothing you can do on the client side of subprocess.Popen to ensure that, once you have passed the application a command, the application flushes all of its output to the final destination. You can call flush() on your end all you like, but if the application doesn't do the same, and you can't make it, then you are doomed to looking for workarounds.
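One common workaround on POSIX systems, as a rough sketch, is to run the child on a pseudo-terminal, which makes most programs line-buffer their output (anApplication is the placeholder name from the question):
import os
import pty
import subprocess

master, slave = pty.openpty()
proc = subprocess.Popen(["anApplication"], stdin=slave,
                        stdout=slave, stderr=slave, close_fds=True)
os.close(slave)                 # the parent keeps only the master end
os.write(master, b"doSomething1\n")
output = os.read(master, 1024)  # arrives as soon as the child emits it
                                # (note: the pty may also echo the input back)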
Your code in the question should work as is. If it doesn't, then either your actual code is different (e.g., you might use stdout=PIPE, which may change the child's buffering behavior), or it might indicate a bug in the child application itself, such as the read-ahead bug in Python 2, i.e., your input is sent correctly by the parent process but gets stuck in the child's internal input buffer.
The following works on my Ubuntu machine:
#!/usr/bin/env python
import time
from subprocess import Popen, PIPE

LINE_BUFFERED = 1

# NOTE: the first argument is a list
p = Popen(['cat'], bufsize=LINE_BUFFERED, stdin=PIPE,
          universal_newlines=True)
with p.stdin:
    for cmd in ["doSomething1\n", "doSomethingElse\n"]:
        time.sleep(1)        # a delay to see that the commands appear one by one
        p.stdin.write(cmd)
        p.stdin.flush()      # use explicit flush() to work around
                             # buffering bugs on some Python versions
rc = p.wait()
It sounds like your application is treating input from a pipe in a strange way; it won't see all of the commands you send until you close the pipe.
So the approach I would suggest is just to do this:
process.stdin.write("command1\n")
process.stdin.write("command2\n")
process.stdin.write("command3\n")
process.stdin.close()
It doesn't sound like your Python program is reading output from the application, so it shouldn't matter if you send the commands all at once like that.