I have two scripts in Python.
sub.py code:
import time
import subprocess as sub

while 1:
    value = input("Input some text or number")  # just an example; I don't care whether it is number-input or raw_input text, just input something
    proces = sub.Popen(['sudo', 'python', '/home/pi/second.py'], stdin=sub.PIPE)
    proces.stdin.write(value)
second.py code:
import sys

list_args = []
while 1:
    from_sub = sys.stdin.readline()  # or sys.stdout... I don't remember
    list_args.append(from_sub)  # I don't know if the syntax is ok, but it doesn't matter
    for i in list_args:
        print(i)
First I execute sub.py and input something; then second.py runs and prints everything I have input so far, again and again...
The thing is, I don't want to open a new process every time. There should be only one child process. Is that possible?
Give me your hand :)
This problem can be solved by using Pexpect. Check my answer here, which solves a similar problem:
https://stackoverflow.com/a/35864170/5134525.
Another way to do this is to use Popen from the subprocess module and set stdin and stdout as pipes. Modifying your code a tad can give you the desired result:
from subprocess import Popen, PIPE

# part which should be outside the loop
args = ['sudo', 'python', '/home/pi/second.py']
process = Popen(args, stdin=PIPE, stdout=PIPE)

while True:
    value = input("Input some text or number")
    process.stdin.write((value + '\n').encode())  # newline-terminate and encode so the child can read it line by line
    process.stdin.flush()
You need to open the process outside the loop for this to work. A similar issue is addressed here, in case you want to check it: Keep a subprocess alive and keep giving it commands? Python
This approach will lead to an error if the child process quits after the first iteration and closes all the pipes. You somehow need to keep the child process blocked, waiting for more input. You can do this either with threads or with the first option, i.e. Pexpect.
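For illustration, here is an untested sketch of what second.py could look like so that a single child process keeps accepting input over the one pipe (it assumes the parent sends newline-terminated, encoded values as in the snippet above):
# second.py (sketch): stays alive and keeps reading lines until the pipe is closed
import sys

lines = []
while 1:
    line = sys.stdin.readline()   # blocks until the parent writes a newline-terminated value
    if not line:                  # empty string means the parent closed the pipe (EOF)
        break
    lines.append(line.rstrip('\n'))
    for item in lines:
        print(item)
    sys.stdout.flush()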
Related
I'm currently working on a POC with the following desired results:
a Python script working as a parent, meaning it starts a child process while running
the child process is oblivious to the fact that another script is running it; the very same child script can also be executed as the main script by the user
a comfortable way to read the subprocess's output (sent to sys.stdout via print) and to send the parent's input to the child's sys.stdin (read via input)
I've already done some research on the topic and I am aware that I could pass subprocess.PIPE to Popen/run and call it a day.
However, I saw that multiprocessing.Pipe() produces a linked socket pair which allows sending objects through it whole, so I don't need to work out when to stop reading a stream and when to continue.
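For example, this toy sketch (separate from the POC itself) is what I mean by objects arriving whole, with no manual framing:
from multiprocessing import Pipe, Process

def worker(conn):
    conn.send({'status': 'ready', 'note': 'the whole dict arrives in one recv()'})
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()
    print(parent_conn.recv())   # prints the complete dict, no stream parsing needed
    p.join()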
# parent.py
import multiprocessing
import subprocess
import os

pipe1, pipe2 = multiprocessing.Pipe()

if os.fork():
    while True:
        print(pipe1.recv())
    exit()  # avoid fork collision

if os.fork():
    # subprocess.run is a busy wait
    subprocess.run(['python3', 'child.py'], stdin=pipe2.fileno(), stdout=pipe2.fileno())
    exit()  # avoid fork collision

while True:
    user_input = input('> ')
    pipe1.send(user_input)
# child.py
import os
import time

if os.fork():
    while True:
        print('child sends howdy')
        time.sleep(1)

with open('child.txt', 'w') as file:
    while True:
        user_input = input('> ')
        # We supposedly can't write to sys.stdout because parent.py took control of it
        file.write(f'{user_input}\n')
So, to finally reach the essence of the problem: child.py is installed as a package,
meaning parent.py doesn't call the actual file to run the script.
The subprocess is run by calling the package.
And for some bizarre reason, when child.py runs as a package rather than as a script, the code written above doesn't seem to work.
child.py's sys.stdin and sys.stdout fail to work entirely: parent.py is unable to receive ANY of child.py's prints (even sys.stdout.write(<some_data>) followed by sys.stdout.flush()),
and the same applies to sys.stdin.
If anyone can shed any light on how to solve this, I would be delighted!
Side Note
When calling a package, you don't call its main file (i.e. its __main__.py) directly;
you call a Python entry-point file which actually starts up the package.
I assume something fishy might be happening there when that happens, and that this is what causes the interference, but that's just a theory.
I want to figure out a way to programmatically avoid the builtin input() method stopping and waiting for user input.
Here is a snippet showing what I'm trying to do:
import sys
from threading import Thread

def kill_input():
    sys.stdout.write('\n')
    sys.stdout.flush()  # just to make sure the output is really written to stdout and not buffered

t = Thread(target=kill_input)
t.start()
foo = input('Press some key')
print('input() method has been bypassed')
Expected behavior: the script executes and terminates without waiting for the enter key to be pressed.
On the contrary, what actually happens is that the program stops and waits for the user to enter some input.
In my mind, input() should read the newline character ('\n') printed to stdout by the other thread and terminate, executing the final print statement; that thread should simulate a user pressing the enter key. I do not understand what's going on behind the scenes.
Maybe one other possible way is to close the stdin file descriptor from the non-main thread and catch the exception in the main one.
def kill_input():
    sys.stdin.close()
If possible I would rather avoid this option, understand what's going on behind this logic, and find a way to force the main thread to read some mock characters from stdin.
Edit - using subprocess module
Based on these related posts I've had a look at the subprocess module. I thought this was a case where the Popen class would come in handy, so I modified my script to exploit pipes:
import sys
from subprocess import Popen, PIPE
from threading import Thread

def kill_input():
    proc = Popen(['python3', '-c', 'pass'], stdin=PIPE)
    proc.stdin.write('some text just to force parent proc to read'.encode())
    proc.stdin.flush()
    proc.stdin.close()

t = Thread(target=kill_input)
t.start()
sys.stdin.read()
print('input() method has been bypassed')
From my understanding, that should create a process with Popen (the command python3 -c 'pass' acts as a placeholder) whose stdin is a unix pipe opened with the parent process.
What I was expecting is that anything written to the child process's stdin goes straight to the stdin of the parent, to be read by sys.stdin.read(). Then the program wouldn't stop to wait for any user input and would terminate instantly. Unfortunately, that doesn't happen and the script still waits for me to press enter. I cannot really find a workaround for this.
[Python version: 3.8.5]
In your first piece of code, you were writing to sys.stdout, which by default won't affect the contents of sys.stdin. Also, by default, you can't directly write to sys.stdin, but you can change it to a different file. To do this, you can use os.pipe(), which returns a tuple of a file descriptor for reading from the new pipe and a file descriptor for writing to it.
We can then use os.fdopen on these file descriptors, and assign sys.stdin to the read end of the pipe, while in another thread we write to the other end of the pipe.
import sys
import os
from threading import Thread

fake_stdin_read_fd, fake_stdin_write_fd = os.pipe()
fake_stdin_read = os.fdopen(fake_stdin_read_fd, 'r')
fake_stdin_write = os.fdopen(fake_stdin_write_fd, 'w')
sys.stdin = fake_stdin_read

def kill_input():
    fake_stdin_write.write('hello\n')
    fake_stdin_write.flush()

thread = Thread(target=kill_input)
thread.start()
input()
print('input() method has been bypassed!')
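If the rest of the program later needs the real keyboard input again, it can be restored with sys.stdin = sys.__stdin__ (a detail not shown in the snippet above).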
I am trying to use the output of an external program run using the run function.
This program regularly emits a row of data which I need to use in my script.
I have found the subprocess library and used its run()/check_output().
Example:
import subprocess

def usual_process():
    # some code here
    for i in subprocess.check_output(['foo', '$$']):
        some_function(i)
Now assume that foo is already on the PATH and that it outputs a string at semi-random intervals.
I want the program to do its own thing, and to run some_function(i) every time foo sends a new row to its output.
This boils down to two problems: piping the output into a for loop, and running it as a background subprocess.
Thank you
Update: I have managed to get the foo output into some_function using this:
import os

with os.popen('foo') as foos_output:
    for line in foos_output:
        some_function(line)
According to this, os.popen is to be deprecated, but I have yet to fully work out how to pipe internal processes in Python.
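Presumably the subprocess equivalent of that os.popen loop would be something like this untested sketch (assuming foo is on the PATH and prints one row per line; text=True needs Python 3.7+):
from subprocess import Popen, PIPE

proc = Popen(['foo'], stdout=PIPE, text=True)   # text=True yields str lines instead of bytes
for line in proc.stdout:
    some_function(line)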
Now I just need to figure out how to run this function in the background.
So, I have solved it.
The first step was to start the external script:
from subprocess import Popen, PIPE
from threading import Thread

proc = Popen('./cisla.sh', stdout=PIPE, bufsize=1)
Next I started a function that would read it, and passed it the Popen object:
def foo(proc, **args):
    for i in proc.stdout:
        '''Do all I want to do with each line'''

Thread(target=foo, args=(proc,)).start()  # runs the reader in the background
Limitations are:
If you wish to catch the script's errors, you will have to pipe stderr in as well.
Second, it leaves a zombie if you kill the parent, so don't forget to kill the child in your signal handling.
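A minimal sketch of the kind of signal handling meant here (untested, assuming proc is the Popen object from above):
import signal
import sys

def shutdown(signum, frame):
    proc.terminate()   # stop the child so it does not linger when the parent dies
    proc.wait()        # reap it to avoid a zombie
    sys.exit(0)

signal.signal(signal.SIGINT, shutdown)
signal.signal(signal.SIGTERM, shutdown)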
I have a function in a Python script which should launch another Python script multiple times; I am assuming this can be done like this (the script below is just my idea of how it would work):
iterations = int(input("Enter the number of processes to run"))
for x in range(0, iterations):
    subprocess.call("python3 /path/to/the/script.py", shell=True)
but I also need to pass some defined variables to the other script; for example, if
x = 1
in the first script, then I need x to have the same value in the second script without defining it there. I have NO idea how to do that.
And then there is also killing them; I have read about methods using PIDs, but don't those change every time?
Most of the methods I found on Google looked overly complex, and what I want to do is really simple. Can anyone guide me in the right direction as to what to use and how to go about accomplishing it?
Here is the subprocess manual page, which contains everything I will be talking about:
https://docs.python.org/2/library/subprocess.html
One way to call one script from another is to use subprocess.Popen, something along these lines:
import subprocess

for i in range(0, 100):
    ret = subprocess.Popen("python3 /path/to/the/script.py", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
You can use the Popen object returned to make the call synchronous, using the communicate method:
out,err = ret.communicate()
This would block the calling script until the subprocess finishes.
I also need to pass over some defined variables into the other script??
There are multiple ways to do this.
1. Pass parameters to the called script and parse them using OptionParser or sys.argv.
In the called script, have something like:
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-x", "--variable", action="store_true", dest="xvalue", default=False)
(options, args) = parser.parse_args()
if options.xvalue == True:
    pass  # do something
In the calling script, use subprocess as:
ret = subprocess.Popen("python3 /path/to/the/script.py -x",stdout=subprocess.PIPE,stderr=subprocess.PIPE,shell=True)
Note the addition of the -x parameter.
You can also use argparse:
https://docs.python.org/2/library/argparse.html#module-argparse
2. Pass the subprocess an environment variable, which can be used to configure it. This is fast, but it only works one way, i.e. from the parent process to the child process.
In the called script:
import os

x = int(os.environ['xvalue'])
In the calling script, set the environment variable:
import os

x = 1
os.environ['xvalue'] = str(x)
3. Use sockets or pipes or some other IPC method.
And then also killing them, I have read about some method using PIDs, but don't those change every time?
Again, you can use the subprocess object to get the process id and terminate the process.
This will give you the process id:
ret.pid
You can then use terminate() to terminate the process if it is running:
ret.terminate()
To check whether the process is still running, you can use the poll method of Popen. I would suggest you check before you terminate the process:
ret.poll()
poll returns None while the process is still running.
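Tying those pieces together, the calling side could look roughly like this (a sketch only, assuming the called script accepts the -x flag from option 1):
import subprocess

children = []
for i in range(0, 5):
    children.append(subprocess.Popen(["python3", "/path/to/the/script.py", "-x"],
                                     stdout=subprocess.PIPE, stderr=subprocess.PIPE))

# ... later, shut them all down ...
for child in children:
    if child.poll() is None:      # None means the child is still running
        child.terminate()
    child.wait()                  # reap it so it does not linger as a zombie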
If you just need to pass some values to the second script, and you need to run it by means of the subprocess module, then you may simply pass the variables as command line arguments:
for x in range(0, iterations):
    subprocess.call('python3 /path/to/second_script.py -x=%s' % x, shell=True)
And receive the -x=1 via the sys.argv list inside second_script.py (using the argparse module).
On the other hand, if you need to exchange something between the two scripts dynamically (while both are running), you can use the pipe mechanism or, even better, use multiprocessing (which requires some changes in your current code); it would make communicating with the child and controlling (terminating) it much cleaner.
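For illustration, a minimal multiprocessing sketch (the worker function here is just a stand-in for whatever second_script.py would compute):
from multiprocessing import Process, Queue

def worker(x, results):
    results.put(x * x)            # stand-in for the second script's job

if __name__ == '__main__':
    results = Queue()
    procs = [Process(target=worker, args=(x, results)) for x in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()                  # or p.terminate() to kill one early
    while not results.empty():
        print(results.get())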
You can pass variables to subprocesses via the command line, via environment variables, or by passing data in on stdin. The command line is easy for simple strings that aren't too long and don't themselves have shell meta characters in them. The target script pulls them from sys.argv.
script.py:
import sys
import os
import time
x = sys.argv[1]
print(os.getpid(), "processing", x)
time.sleep(240)
subprocess.Popen starts child processes but doesn't wait for them to complete. You could start all of the children, put their popen objects in a list and finish with them later.
import sys
import time
import subprocess

iterations = int(input("Enter the number of processes to run"))
processes = []
for x in range(0, iterations):
    processes.append(subprocess.Popen([sys.executable, "/path/to/the/script.py", str(x)]))

time.sleep(10)
for proc in processes:
    if proc.poll() is None:   # still running after the grace period
        proc.terminate()
for proc in processes:
    returncode = proc.wait()
I want to spawn (fork?) multiple Python scripts from my program (written in Python as well).
My problem is that I want to dedicate one terminal to each script, because I'll gather their output using pexpect.
I've tried using pexpect, os.execlp, and os.forkpty, but none of them does what I expect.
I want to spawn the child processes and forget about them (they will process some data, write the output to the terminal which I could read with pexpect and then exit).
Is there any library/best practice/etc. to accomplish this job?
p.s. Before you ask why I would write to STDOUT and read from it, I shall say that I don't write to STDOUT, I read the output of tshark.
See the subprocess module
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes. This module intends to replace several other, older modules and functions, such as:
os.system
os.spawn*
os.popen*
popen2.*
commands.*
From Python 3.5 onwards you can do:
import subprocess

result = subprocess.run(['python', 'my_script.py', '--arg1', val1])
if result.returncode != 0:
    print('script returned error')
By default the script's stdout and stderr are not captured; they flow straight to the parent's. Pass capture_output=True (Python 3.7+) or stdout/stderr=subprocess.PIPE if you want to capture them instead.
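For example, a sketch assuming Python 3.7+ and a literal argument in place of val1:
result = subprocess.run(['python', 'my_script.py', '--arg1', 'value1'],
                        capture_output=True, text=True)
print(result.stdout)        # whatever the script printed
print(result.returncode)    # 0 on success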
I don't understand why you need expect for this. tshark should send its output to stdout, and only for some strange reason would it send it to stderr.
Therefore, what you want should be:
import subprocess

fp = subprocess.Popen(("/usr/bin/tshark", "option1", "option2"), stdout=subprocess.PIPE).stdout
# now, whenever you are ready, read stuff from fp
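For instance, to consume it line by line (handle_packet is just a placeholder for your own processing):
for line in fp:
    handle_packet(line)   # each line arrives as bytes; decode() if you need str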
Do you want to dedicate one terminal, or one Python shell?
You already have some useful answers covering Popen and subprocess; you could also use pexpect if you're planning on using it anyway.
# for multiple python shells
import pexpect

# make your commands however you want them; this is just one method
mycommand1 = "print('hello first python shell')"
mycommand2 = "print('this is my second shell')"

# add a "for" statement if you want
child1 = pexpect.spawn('python')
child1.sendline(mycommand1)
child2 = pexpect.spawn('python')
child2.sendline(mycommand2)
Make as many children/shells as you want, and then use child.before or child.after (after an expect call) to get your responses.
Of course you would want to send definitions or classes instead of "mycommand1", but this is just a simple example.
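For example, to collect a response (untested sketch, matching the default interactive Python prompt):
child1.expect('>>> ')   # wait for the interpreter prompt to come back
print(child1.before)    # everything the child printed before the prompt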
If you want to make a bunch of terminals in Linux, you just need to replace the 'python' in the pexpect.spawn line.
Note: I haven't tested the above code. I'm just replying from past experience with pexpect.