Running a secondary script in a new terminal - python

When running a secondary Python script:
Is it possible to run a subprocess.Popen, subprocess.call, or even execfile in a new terminal? (That is, in a different terminal than the one the main script is running in.)
Alternatively, if I open two terminals before running my program (main), can I then point the secondary script to the second terminal? (In other words, somehow get the IDs of the open terminals and then use a specific one of them to perform the subprocess.)
An example with two subprocesses to run: first.py should be called first, and only then second.py. The two scripts are interdependent: first.py goes into a wait mode until second.py is run, then first.py resumes, and I don't know how to make this communication work between them in terms of subprocesses.
import subprocess

command = ["python", "first.py"]
command2 = ["python", "second.py"]
n = 5
for i in range(n):
    p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    p2 = subprocess.Popen(command2, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        output = p.stdout.readline().strip()
        print output
        if output == 'stop':
            print 'success'
            p.terminate()
            p2.terminate()
            break
Environment: Ubuntu, Python 2.7

I guess you want something like
subprocess.call(['xterm','-e','python',script])
Good old xterm has almost no frills; on a Freedesktop system, maybe run xdg-terminal instead. On Debian, try x-terminal-emulator.
However, making your program require X11 is in most cases a mistake. A better solution is to run the subprocesses with output to a log file (or a socket, or whatever) and then separately run tail -f on those files (in a different terminal, or from a different server over ssh, or with output to a logger which supports rsyslog, or or or ...) which keeps your program simple and modular, free from "convenience" dependencies.
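A minimal sketch of that approach (the log file names are just for illustration): each script writes to its own log file, which you can then follow from any other terminal with tail -f.
import subprocess

# Send each script's output to its own log file instead of a terminal.
with open('first.log', 'w') as log1, open('second.log', 'w') as log2:
    p1 = subprocess.Popen(['python', 'first.py'], stdout=log1, stderr=subprocess.STDOUT)
    p2 = subprocess.Popen(['python', 'second.py'], stdout=log2, stderr=subprocess.STDOUT)
    p1.wait()
    p2.wait()
# Then, in any other terminal (or over ssh):  tail -f first.log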

If you're using tmux, you can specify which target you want the command to run in:
tmux send -t foo.0 ls ENTER
So, if you've created a tmux session named foo, you should be able to target its first window (foo.0) like this:
my_command = 'ls'
tmux_cmd = ['tmux', 'send', '-t', 'foo.0', my_command, 'ENTER']
p = subprocess.Popen(tmux_cmd)
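If the session doesn't exist yet, you can create it detached first; a minimal sketch (the session name foo and the command sent are just examples):
import subprocess

# Create a detached tmux session named 'foo' (fails harmlessly if it exists).
subprocess.call(['tmux', 'new-session', '-d', '-s', 'foo'])
# Send a command to window 0 of that session; 'ENTER' presses Return to run it.
subprocess.call(['tmux', 'send-keys', '-t', 'foo.0', 'python second.py', 'ENTER'])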

You can specify the tty of the terminal window you wish the command to be carried out in:
ls > /dev/ttys004
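The same redirection from Python might look like this (a sketch; the tty path is just an example, run the tty command in the target terminal to find yours):
import subprocess

# Redirect the child's output to another terminal's tty device.
with open('/dev/pts/4', 'w') as tty:
    subprocess.call(['ls'], stdout=tty)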
However, I would recommend going for the tmux approach for greater control (see my other answer).

Related

How to run cmd.exe in python

I'm trying to port the following C# code to Python. It first defines a new process and then starts a Windows command prompt (cmd.exe). After that, it executes a command in the prompt, and when an external event occurs, it closes the prompt.
//Start the prompt - when an event occurred
Process winShell = new Process();
winShell.StartInfo.FileName = "cmd.exe";
winShell.StartInfo.RedirectStandardInput = true;
winShell.Start();
//Execute a command in the prompt
winShell.StandardInput.WriteLine("cd " + projectDirectory);
//Close it - when an event occurred
winShell.StandardInput.Flush();
winShell.StandardInput.Close();
winShell.WaitForExit();
I read that for Python 3 (my version is 3.7) it is recommended to use subprocess. Unfortunately, I'm a bit confused about which of its functions to use. I found call, run, and Popen, but I didn't understand how to use them.
I wrote the following lines, but they don't produce any visible result.
import subprocess
subprocess.run(['cmd.exe'])
First of all, I would like the shell to appear and then to write some commands in it. Finally, I want to close it.
Use subprocess.Popen() like this. Each call maps to the corresponding C# API almost 1:1.
import subprocess

p = subprocess.Popen(['cmd.exe'],
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                     text=True)
p.stdin.write('dir\n')
p.stdin.close()
print(p.stdout.read())
p.wait()
p.stdout.close()
Other APIs such as run() and call() are wrappers around Popen(). For example, the above code is equivalent to this one-liner:
print(subprocess.run(['cmd.exe'], capture_output=True, text=True, input='dir\n').stdout)
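If you want to keep the explicit Popen, communicate() is the standard deadlock-safe way to send input and collect all output; a minimal sketch:
import subprocess

p = subprocess.Popen(['cmd.exe'], stdin=subprocess.PIPE,
                     stdout=subprocess.PIPE, text=True)
# communicate() writes the input, closes stdin, reads all output, and waits.
out, _ = p.communicate(input='dir\n')
print(out)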

Can python run a "persistent shell"

I'm trying to run a sequence of shell commands in the same environment:
same exported variables, persistent history, etc.
And I want to work with each command's output before running the next command.
After looking over python's subprocess.run and Pexpect.spawn, neither seems to provide both features.
subprocess.run allows me to run one command and then examine the output, but not to keep the environment open for another command.
Pexpect.spawn("bash") allows me to run multiple commands in the same environment, but I can't get the output until EOF, when bash itself exits.
Ideally I would like an interface that can do both:
shell = bash.new()
shell.run("export VAR=2")
shell.run("whoami")
print(shell.exit_code, shell.stdout())
# 0, User
shell.run("echo $VAR")
print(shell.stdout())
# 2
shell.run("!!")
print(shell.stdout())
# 2
shell.run("cat file -")
shell.stdin("Foo Bar")
print(shell.stdout())
# Foo Bar
print(shell.stderr())
# cat: file: No such file or directory
shell.close()
Sounds like a case for Popen. You can specify bufsize to disable buffering, if it gets in the way.
Example from the linked page:
with Popen(["ifconfig"], stdout=PIPE) as proc:
    log.write(proc.stdout.read())
There's also proc.stdin for sending more commands, and proc.stderr.
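A minimal sketch of such a persistent shell (the sentinel string is just an illustration): keep one bash process open, write commands to its stdin, and read stdout up to a marker so you know where each command's output ends.
import subprocess

# One long-lived bash process; exported variables persist between commands.
shell = subprocess.Popen(['bash'], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, universal_newlines=True)

def run(cmd, marker='__DONE__'):
    # Echo a sentinel after the command so we know where its output ends.
    shell.stdin.write(cmd + '\n' + 'echo ' + marker + '\n')
    shell.stdin.flush()
    lines = []
    for line in iter(shell.stdout.readline, ''):
        if line.strip() == marker:
            break
        lines.append(line)
    return ''.join(lines)

print(run('export VAR=2'))
print(run('echo $VAR'))  # prints 2: state persists across run() calls
shell.stdin.close()
shell.wait()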

How to run a python script using subprocess from another directory

From the script I'm currently working on, I need to run another Python script that generates data. I use subprocess to run it:
cmd = 'python /home/usr/script.py arg1 arg2 arg3'
subprocess.Popen(cmd, shell=True)
But I have a problem. The other script generates a few directories in the 'current directory', meaning the directory it was run from. And I can't modify that script, because it's not mine. How do I set the current directory to the one where I want the data to go?
Another small problem is that when I run subprocess.Popen() my script doesn't end. Should I run it in another way?
The best way is to use subprocess.call instead (it waits for the child to terminate; Popen without a matching wait() may create a zombie process) and use the cwd= parameter to specify the current directory for the subprocess:
cmd = ['python','/home/usr/script.py','arg1','arg2','arg3']
return_code = subprocess.call(cmd, cwd="/some/dir")
(Also pass the command as a list and drop shell=True; you don't need it here.)
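If you do need the child to run in the background while your own script continues, Popen plus an explicit wait() later achieves the same thing; a minimal sketch:
import subprocess

# Start the child in the target directory without blocking...
p = subprocess.Popen(['python', '/home/usr/script.py', 'arg1', 'arg2', 'arg3'],
                     cwd='/some/dir')
# ...do other work here, then reap the child so no zombie is left behind.
p.wait()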

Launch a single python script as different processes differing by command line arguments

I have a Python script that takes command-line arguments. I get those arguments by reading a Mongo database. I need to iterate over the Mongo query and launch a separate process of the single script for each set of command-line arguments from the query.
The key is that I need the launched processes to be:
separate processes that share nothing
easy to kill all at once
I think the command killall -9 script.py would work and satisfy the second constraint.
Edit 1
From the answer below, the launcher.py program looks like this:
def main():
    symbolPreDict = initializeGetMongoAllSymbols()
    keys = sorted(symbolPreDict.keys())
    for symbol in keys:
        # Display key.
        print(symbol)
        command = ['python', 'mc.py', '-s', str(symbol)]
        print command
        subprocess.call(command)

if __name__ == '__main__':
    main()
The problem is that mc.py has a call that blocks:
receiver = multicast.MulticastUDPReceiver("192.168.0.2", symbolMCIPAddrStr, symbolMCPort)
while True:
    try:
        b = MD()
        data = receiver.read()  # This blocks
        ...
    except Exception, e:
        print str(e)
When I run the launcher, it just executes one of the mc.py scripts (there are at least 39). How do I modify the launcher program to "run the launched script in the background" so that control returns to the launcher to launch more scripts?
Edit 2
The problem is solved by replacing subprocess.call(command) with subprocess.Popen(command)
One thing I noticed, though: if I run ps ax | grep mc.py, the PIDs are all different. I don't think I care, since I can kill them all pretty easily with killall.
[Correction] kill them with pkill -f xxx.py
There are several options for launching scripts from a script. The easiest are probably to use the subprocess or os modules.
I have done this several times to launch things to separate nodes on a cluster. Using os it might look something like this:
import os

for i in range(len(operations)):
    os.system("python myScript.py {:} {:} > out.log".format(arg1, arg2))
Using killall, you should have no problem terminating processes spawned this way.
Another option is to use subprocess, which has a wide range of features and is much more flexible than os.system. An example might look like:
import subprocess

for i in range(len(operations)):
    command = ['python', 'myScript.py', 'arg1', 'arg2']
    subprocess.call(command)
In both of these methods, the processes are independent and share nothing other than a parent PID.
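Following the edit above, a non-blocking variant that also keeps handles around for a clean shutdown might look like this (a sketch; mc.py, the -s flag, and keys come from the question):
import subprocess

procs = []
for symbol in keys:
    # Popen returns immediately, so all the scripts run concurrently.
    procs.append(subprocess.Popen(['python', 'mc.py', '-s', str(symbol)]))

# Later, kill them all without resorting to killall/pkill:
for p in procs:
    p.terminate()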

python subprocess.Popen stdin.write

I'm new to Python and would like to open a Windows cmd prompt, start a process, leave the process running, and then issue commands to that same running process.
The commands will change, so I can't just include them in the cmdline variable below. Also, the process takes 10-15 seconds to start, so I don't want to waste time starting the process and running commands each time; I just want to start the process once and run quick commands as needed in the same process.
I was hoping to use subprocess.Popen to make this work, though I am open to better methods. Note that the process I want to run is not cmd; I'm just using it as an example.
import subprocess
cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi' readback
cmd.stdin.write("echo hi again") #would like this to be written to the cmd prompt
print cmd.stdout.readline() #would like to see 'hi again' readback
The results aren't what I expect. It seems as though the stdin.write commands aren't actually getting in, and the readline freezes up with nothing to read.
I have tried popen.communicate() instead of write/readline, but it kills the process. I have tried setting bufsize in the Popen line, but that didn't make much difference.
Your comments suggest that you are confusing command-line arguments with input via stdin. Namely, the fact that the system-console.exe program accepts a script=filename parameter does not imply that you can send it the same string as a command via stdin; e.g., the python executable accepts -c "print(1)" as command-line arguments, but it is a SyntaxError if you pass the same string as a command to the Python shell.
Therefore, the first step is to use the correct syntax. Suppose the system-console.exe accepts a filename by itself:
#!/usr/bin/env python3
import time
from subprocess import Popen, PIPE
with Popen(r'C:\full\path\to\system-console.exe -cli -',
           stdin=PIPE, bufsize=1, universal_newlines=True) as shell:
    for _ in range(10):
        print('capture.tcl', file=shell.stdin, flush=True)
        time.sleep(5)
Note: if you've redirected more than one stream e.g., stdin, stdout then you should read/write both streams concurrently (e.g., using multiple threads) otherwise it is very easy to deadlock your program.
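A minimal sketch of that pattern (the child command is hypothetical): drain stdout on a background thread while the main thread writes to stdin.
import threading
from subprocess import Popen, PIPE

def drain(pipe):
    # Reading on a separate thread means a full stdout pipe buffer
    # can never deadlock against our writes to stdin.
    for line in iter(pipe.readline, ''):
        print(line, end='')

p = Popen(['some-child-program'], stdin=PIPE, stdout=PIPE,
          universal_newlines=True)
t = threading.Thread(target=drain, args=(p.stdout,))
t.daemon = True
t.start()

p.stdin.write('first command\n')
p.stdin.flush()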
Related:
Q: Why not just use a pipe (popen())? -- mandatory reading for Unix environment but it might also be applicable for some programs on Windows
subprocess readline hangs waiting for EOF -- code example on how to pass multiple inputs, read multiple outputs using subprocess, pexpect modules.
The second and following steps might have to deal with buffering issues on the side of the child process (out of your hands on Windows), whether system-console allows its stdin/stdout to be redirected or whether it works with a console directly, and character-encoding issues (how the various commands in the pipeline encode text).
Here is some code that I tested and is working on Windows 10, Quartus Prime 15.1 and Python 3.5
import subprocess

class altera_system_console:
    def __init__(self):
        sc_path = r'C:\altera_lite\15.1\quartus\sopc_builder\bin\system-console.exe --cli --disable_readline'
        self.console = subprocess.Popen(sc_path, stdin=subprocess.PIPE, stdout=subprocess.PIPE)

    def read_output(self):
        # Read one byte at a time until the console's '% ' prompt appears.
        rtn = ""
        loop = True
        i = 0
        match = '% '
        while loop:
            out = self.console.stdout.read1(1)
            if bytes(match[i], 'utf-8') == out:
                i = i + 1
                if i == len(match):
                    loop = False
            else:
                rtn = rtn + out.decode('utf-8')
        return rtn

    def cmd(self, cmd_string):
        # Send one newline-terminated command and flush so it isn't buffered.
        self.console.stdin.write(bytes(cmd_string + '\n', 'utf-8'))
        self.console.stdin.flush()

c = altera_system_console()
print(c.read_output())
c.cmd('set jtag_master [lindex [get_service_paths master] 0]')
print(c.read_output())
c.cmd('open_service master $jtag_master')
print(c.read_output())
c.cmd('master_write_8 $jtag_master 0x00 0xFF')
print(c.read_output())
You need to use iter if you want to see the output in real time:
import subprocess

cmdline = ['cmd', '/k']
cmd = subprocess.Popen(cmdline, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
cmd.stdin.write("echo hi\n")  # would like this to be written to the cmd prompt
for line in iter(cmd.stdout.readline, ""):
    print line
cmd.stdin.write("echo hi again\n")  # would like this to be written to the cmd prompt
Not sure exactly what you are trying to do, but if you want to send certain input when you see certain output, then I would recommend using pexpect.
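A minimal pexpect sketch of that idea (the prompt pattern and commands are illustrative; pexpect is Unix-only):
import pexpect

# Spawn an interactive child and keep it open between commands.
child = pexpect.spawn('bash')
child.expect(r'\$')        # wait for the shell prompt
child.sendline('echo hi')
child.expect(r'\$')        # wait until the command has finished
print(child.before)        # everything the child printed before the prompt
child.sendline('exit')
child.close()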
