Infinite while not working with os.execvp - python

I am programming in Python, implementing a shell in Linux. I am trying to run standard Unix commands by using os.execvp(). I need to keep asking the user for commands, so I have used an infinite while loop. However, the infinite while loop doesn't work. I have tried searching online but there isn't much available for Python. Any help would be appreciated. Thanks
This is the code I have written so far:
import os
import shlex

def word_list(line):
    """Break the line into shell words."""
    lexer = shlex.shlex(line, posix=True)
    lexer.whitespace_split = False
    lexer.wordchars += '#$+-,./?#^='
    args = list(lexer)
    return args

def main():
    while True:
        line = input('psh>')
        split_line = word_list(line)
        if len(split_line) == 1:
            print(os.execvp(split_line[0], [" "]))
        else:
            print(os.execvp(split_line[0], split_line))

if __name__ == "__main__":
    main()
So when I run this and enter "ls", I get the output "HelloWorld.py" (which is correct) followed by "Process finished with exit code 0". However, I don't get the "psh>" prompt waiting for the next command. No exceptions are thrown when I run this code.

Your code does not work because it uses os.execvp. os.execvp completely replaces the current process image with that of the executed program; your running process becomes the ls.
To execute a subprocess use the aptly named subprocess module.
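For instance, a minimal sketch of the same loop rewritten with subprocess.run (Python 3.5+, reusing the word_list helper from the question):

# subprocess.run starts a child process, waits for it to finish, and then
# returns control here, so the loop keeps prompting.
import subprocess

def main():
    while True:
        line = input('psh>')
        split_line = word_list(line)
        if split_line:
            subprocess.run(split_line)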
If this is a programming exercise where you must use os.execvp (ill-advised outside of exercises), then you need to:
# warning, never do this at home!
pid = os.fork()
if not pid:
    os.execvp(args[0], args)  # in child: this process becomes the command
else:
    os.waitpid(pid, 0)        # in parent: wait for the child to exit
os.fork returns twice: it gives the pid of the child in the parent process, and zero in the child process.
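Putting it together, a minimal sketch of the asker's main loop fixed with fork/exec/wait (assuming the word_list helper from the question):

import os

def main():
    while True:
        line = input('psh>')
        split_line = word_list(line)
        if not split_line:
            continue
        pid = os.fork()
        if pid == 0:
            os.execvp(split_line[0], split_line)  # child becomes the command
        else:
            os.waitpid(pid, 0)  # parent waits, then loops back to the prompt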

If you want it to run like a shell, you are looking for os.fork(). Call this before you call os.execvp() and it will create a child process. os.fork() returns the process id: if it is 0, you are in the child process and can call os.execvp(); otherwise, continue with the code. This keeps the while loop running. You can have the original process either wait for the child to complete with os.wait(), or continue without waiting back to the start of the while loop. The pseudocode on page 2 of this link should help: https://www.cs.auckland.ac.nz/courses/compsci340s2c/assignments/A1/A1.pdf

Related

Utilizing multiprocessing.Pipe() with subprocess.Popen/run as stdin/stdout

I'm currently working on a POC with the following desired results:
a python script working as a parent, meaning it will start a child process while running it
the child process is oblivious to the fact another script is running it; the very same child script can also be executed as the main script by the user
a comfortable way to read the subprocess's output (written to sys.stdout via print) and to send the parent's input to the child's sys.stdin (read via input)
I've already done some research on the topic and I am aware that I can pass subprocess.PIPE to Popen/run and call it a day.
However, I saw that multiprocessing.Pipe() produces a linked socket pair which allows sending objects through them whole, so I don't need to work out when to stop reading a stream and when to continue afterward.
# parent.py
import multiprocessing
import subprocess
import os

pipe1, pipe2 = multiprocessing.Pipe()

if os.fork():
    while True:
        print(pipe1.recv())
    exit()  # avoid fork collision

if os.fork():
    # subprocess.run is a busy wait
    subprocess.run(['python3', 'child.py'], stdin=pipe2.fileno(), stdout=pipe2.fileno())
    exit()  # avoid fork collision

while True:
    user_input = input('> ')
    pipe1.send(user_input)
# child.py
import os
import time

if os.fork():
    while True:
        print('child sends howdy')
        time.sleep(1)

with open('child.txt', 'w') as file:
    while True:
        user_input = input('> ')
        # We supposedly can't write to sys.stdout because parent.py took control of it
        file.write(f'{user_input}\n')
So, to finally reach the essence of the problem: child.py is installed as a package, meaning parent.py doesn't call the actual file to run the script; the subprocess is run by calling upon the package.
And for some bizarre reason, when child.py is a package rather than a script, the code written above doesn't seem to work.
child.py's sys.stdin and sys.stdout fail to work entirely: parent.py is unable to receive ANY of child.py's prints (even sys.stdout.write(<some_data>) followed by sys.stdout.flush()), and the same applies to sys.stdin.
If anyone can shed any light on how to solve it, I would be delighted!
Side Note
When calling upon a package, you don't call its __main__.py directly; you call a Python file which actually starts up the package. I assume something fishy might be happening there when that happens, and that this is what causes the interference, but that's just a theory.
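As an aside on the linked-pair claim above (the package question itself remains open), here is a minimal sketch of multiprocessing.Pipe() carrying whole objects across a fork:

# multiprocessing.Pipe() returns two Connection objects; send()/recv()
# move whole (picklable) Python objects, with no manual stream framing.
import multiprocessing
import os

parent_conn, child_conn = multiprocessing.Pipe()

if os.fork() == 0:
    child_conn.send({'msg': 'howdy', 'n': 1})  # child sends a complete object
    os._exit(0)
else:
    print(parent_conn.recv())  # parent receives {'msg': 'howdy', 'n': 1}
    os.wait()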

How to Terminate a Python program before its child is finished running?

I have a script that is supposed to run 24/7 unless interrupted. This script is script A.
I want script A to call Script B, and have script A exit while B is running. Is this possible?
This is what I thought would work
# script_A.py
while True:
    # do some stuff
    # do even more stuff
    if true:
        os.system("python script_B.py")
        sys.exit(0)

# script_B.py
time.sleep(some_time)
# do something
os.system("python script_A.py")
sys.exit(0)
But it seems as if A doesn't actually exit until B has finished executing (which is not what I want to happen).
Is there another way to do this?
What you are describing sounds a lot like a function call:
def doScriptB():
    # do some stuff
    # do some more stuff
    pass

def doScriptA():
    while True:
        # do some stuff
        if your_condition:
            doScriptB()
            return

while True:
    doScriptA()
If this is insufficient for you, then you have to detach the process from your Python process. This normally involves spawning the process in the background, which is done by appending an ampersand to the command in bash:
yes 'This is a background process' &
And detaching said process from the current shell, which, in a simple C program, is done by forking the process twice. I don't know offhand how to do this in Python, but would bet that there is a module for it.
This way, when the calling python process exits, it won't terminate the spawned child, since it is now independent.
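For what it's worth, a minimal hand-rolled sketch of that double fork in Python (assuming a POSIX system; the grandchild is re-parented to init and survives the caller's exit):

import os

def spawn_detached(argv):
    # classic double fork: parent -> intermediate child -> detached grandchild
    if os.fork():             # first fork: the original process continues here
        os.wait()             # reap the short-lived intermediate child
        return
    os.setsid()               # intermediate child: start a new session
    if os.fork():             # second fork: intermediate child exits...
        os._exit(0)
    os.execvp(argv[0], argv)  # ...the grandchild runs the command, detached

spawn_detached(['sleep', '60'])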
It seems you want to detach a system call into another process.
script_A.py
import subprocess
import sys

while True:
    # do some stuff
    # do even more stuff
    if true:
        pid = subprocess.Popen([sys.executable, "script_B.py"])  # call subprocess
        sys.exit(0)
Anyway, this does not seem like good practice at all. Why not have script A watch the process list, and stop if it finds script B running? This is another example of how you could do it:
import subprocess
import sys
import psutil

while True:
    # This section queries the currently running processes
    for proc in psutil.process_iter():
        pinfo = proc.as_dict(attrs=['pid', 'name'])
        if pinfo['name'] == "script_B.py":
            sys.exit(0)
    # do some stuff
    # do even more stuff
    if true:
        pid = subprocess.Popen([sys.executable, "script_B.py"])  # call subprocess
        sys.exit(0)

Passing arguments/strings into already running process - Python 2.7

I have two scripts in Python.
sub.py code:
import time
import subprocess as sub
while 1:
value=input("Input some text or number") # it is example, and I don't care about if it is number-input or text-raw_input, just input something
proces=sub.Popen(['sudo', 'python', '/home/pi/second.py'],stdin=sub.PIPE)
proces.stdin.write(value)
second.py code:
import sys

while 1:
    from_sub = sys.stdin()  # or sys.stdout(), I don't remember...
    list_args.append(from_sub)  # I don't know if the syntax is ok, but it doesn't matter
    for i in list_arg:
        print i
First I execute sub.py and input something; then the second.py file will execute, printing everything I inputted, again and again...
The thing is, I don't want to open a new process every time. There should be only one process. Is it possible?
Give me your hand :)
This problem can be solved by using Pexpect. Check my answer over here. It solves a similar problem
https://stackoverflow.com/a/35864170/5134525.
Another way to do that is to use Popen from the subprocess module and set stdin and stdout as pipes. Modifying your code a bit can give you the desired results:
from subprocess import Popen, PIPE

# part which should be outside the loop
args = ['sudo', 'python', '/home/pi/second.py']
process = Popen(args, stdin=PIPE, stdout=PIPE)

while True:
    value = input("Input some text or number")
    process.stdin.write(value)
You need to open the process outside the loop for this to work. A similar issue is addressed here, in case you want to check it: Keep a subprocess alive and keep giving it commands? Python
This approach will lead to an error if the child process quits after the first iteration and closes all its pipes. You somehow need to keep the child process blocked, waiting to accept more input. You can do this either by using threads, or by using the first option, i.e. Pexpect; a sketch follows.
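For reference, a minimal Pexpect sketch of the same loop (assuming, per the question, that second.py reads lines from its stdin; Python 2.7):

import pexpect

# pexpect keeps a single child alive; sendline feeds it one line at a time
child = pexpect.spawn('sudo python /home/pi/second.py')
while True:
    value = raw_input("Input some text or number")
    child.sendline(value)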

How to launch a couple of python scripts from a first python script and then terminate them all at once?

I have a function in a Python script which should launch another Python script multiple times. I am assuming this can be done like this (the script is just my imagination of how this would work):
iterations = input("Enter the number of processes to run")
for x in range(0, iterations):
    subprocess.call("python3 /path/to/the/script.py", shell=True)
but I also need to pass some defined variables into the other script. For example, if
x = 1
in the first script, then I need x to have the same value in the second script without defining it there; I have NO idea how to do that.
And then also killing them, I have read about some method using PIDs, but don't those change every time?
Most of the methods I found on Google looked overly complex and what I want to do is really simple. Can anyone guide me in the right direction as to what to use and how I should go at accomplishing it?
I have a function in a python script which should launch another python script multiple times, I am assuming this can be done like this(Script is just my imagination of how this would work.)
Here is the subprocess manual page, which contains everything I will be talking about:
https://docs.python.org/2/library/subprocess.html
One way to call one script from another is using subprocess.Popen, something along these lines:
import subprocess

for i in range(0, 100):
    ret = subprocess.Popen("python3 /path/to/the/script.py", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
You can use the return value from Popen to make the call synchronous, using the communicate method:
out, err = ret.communicate()
This would block the calling script until the subprocess finishes.
I also need to pass over some defined variables into the other script?
There are multiple ways to do this.
1. Pass parameters to the called script and parse them using OptionParser or sys.argv
In the called script, have something like:
from optparse import OptionParser

parser = OptionParser()
parser.add_option("-x", "--variable", action="store_true", dest="xvalue", default=False)
(options, args) = parser.parse_args()
if options.xvalue == True:
    pass  # do something
In the calling script, use subprocess as:
ret = subprocess.Popen("python3 /path/to/the/script.py -x", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True)
Note the addition of the -x parameter.
You can also use argparse:
https://docs.python.org/2/library/argparse.html#module-argparse
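For instance, the argparse equivalent of the optparse snippet above would look like this:

import argparse

# the same -x flag, parsed with argparse instead of optparse
parser = argparse.ArgumentParser()
parser.add_argument("-x", "--variable", action="store_true", dest="xvalue", default=False)
options = parser.parse_args()
if options.xvalue:
    pass  # do something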
2. Pass the subprocess an environment variable, which can be used to configure it. This is fast, but it only works one way, i.e. from parent process to child process.
In the called script:
import os

x = int(os.environ['xvalue'])
In the calling script, set the environment variable:
import os

x = 1
os.environ['xvalue'] = str(x)
3. Use sockets or pipes or some other IPC method.
And then also killing them, I have read about some method using PIDs, but don't those change every time?
Again, you can use subprocess to hold the process id and terminate it.
This will give you the process id:
ret.pid
You can then use terminate to stop the process if it is running:
ret.terminate()
To check whether the process is running, you can use the poll method of subprocess.Popen. I would suggest you check before you terminate the process:
ret.poll()
poll returns None if the process is still running.
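Putting those pieces together, a small sketch that polls before terminating (the script path is the placeholder from above):

import subprocess

ret = subprocess.Popen(["python3", "/path/to/the/script.py"])
print("child pid:", ret.pid)
if ret.poll() is None:  # None means the child is still running
    ret.terminate()
ret.wait()              # reap the child so it doesn't linger as a zombie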
If you just need to pass some values to the second script, and you need to run it by means of the subprocess module, then you may simply pass the variables as command line arguments:
for x in range(0, iterations):
    subprocess.call('python3 /path/to/second_script.py -x=%s' % x, shell=True)
And receive the -x=1 via the sys.argv list inside second_script.py (using the argparse module).
On the other hand, if you need to exchange something between the two scripts dynamically (while both are running), you can use the pipe mechanism or, even better, use multiprocessing (which requires some changes in your current code); it would make communicating with the child and controlling it (terminating it) much cleaner, as sketched below.
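A minimal multiprocessing sketch (with a hypothetical worker function standing in for the second script); each child receives x directly as an argument, and terminating them all at once is built in:

import multiprocessing
import time

def worker(x):
    print('processing', x)
    time.sleep(240)

if __name__ == '__main__':
    procs = [multiprocessing.Process(target=worker, args=(x,)) for x in range(4)]
    for p in procs:
        p.start()
    time.sleep(10)
    for p in procs:
        p.terminate()  # kill them all at once
        p.join()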
You can pass variables to subprocesses via the command line, environment variables or passing data in on stdin. Command line is easy for simple strings that aren't too long and don't themselves have shell meta characters in them. The target script would pull them from sys.argv.
script.py:
import sys
import os
import time
x = sys.argv[1]
print(os.getpid(), "processing", x)
time.sleep(240)
subprocess.Popen starts child processes but doesn't wait for them to complete. You could start all of the children, put their Popen objects in a list, and finish with them later.
import subprocess
import sys
import time

iterations = int(input("Enter the number of processes to run"))
processes = []
for x in range(0, iterations):
    processes.append(subprocess.Popen([sys.executable, "/path/to/the/script.py", str(x)]))

time.sleep(10)
for proc in processes:
    if proc.poll() is None:  # still running after 10 seconds
        proc.terminate()
for proc in processes:
    returncode = proc.wait()

How to check if a shell command is over in Python

Let's say that I have this simple line in python:
os.system("sudo apt-get update")
of course, apt-get will take some time until it's finished; how can I check in Python whether the command has finished or not?
Edit: this is the code with Popen:
os.environ['packagename'] = entry.get_text()
process = Popen(['dpkg-repack', '$packagename'])
if process.poll() is None:
print "It still working.."
else:
print "It finished"
Now the problem is, it never prints "It finished", even when it has really finished.
As the documentation states:
This is implemented by calling the Standard C function system(), and has the same limitations
The C call to system simply runs the program until it exits. Calling os.system blocks your Python code until the shell command has finished, thus you'll know that it is finished when os.system returns. If you'd like to do other stuff while waiting for the call to finish, there are several possibilities. The preferred way is to use the subprocess module.
from subprocess import Popen
...
# Runs the command in another process. Doesn't block
process = Popen(['ls', '-l'])

# Later
# Returns the return code of the command. None if it hasn't finished
if process.poll() is None:
    pass  # Still running
else:
    pass  # Has finished
For a more general approach to running code concurrently, you can run it in another thread or process. Here's example code:
from threading import Thread
...
thread = Thread(group=None, target=lambda: os.system("ls -l"))
thread.start()  # run() would execute in the current thread; start() spawns a new one

# Later
if thread.is_alive():
    pass  # Still running
else:
    pass  # Has finished
Another option would be to use the concurrent.futures module.
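For example, a minimal concurrent.futures sketch: submit the blocking call to a thread pool and poll the returned Future:

from concurrent.futures import ThreadPoolExecutor
import subprocess

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(subprocess.run, ['ls', '-l'])
    # Later
    if future.done():
        print('Has finished with code', future.result().returncode)
    else:
        print('Still running')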
os.system will actually wait for the command to finish and return its exit status (in a platform-dependent format).
os.system is blocking; it runs the command, waits for its completion, and returns its return code. So it'll be finished once os.system returns.
If your code isn't working, I think it could be caused by one of sudo's quirks: it refuses to grant rights in certain environments (I don't know the details, though).
