Pretty much what the headline says. I want my Python script to exit and somehow schedule from within that script a new script for immediately after the first has finished.
Sounds simple enough but I can't seem to find a way -- let alone a clean way -- of doing that.
You can register functions to run right before the interpreter exits with the 'atexit' module. You can even pass args and kwargs to the registered function.
https://docs.python.org/3/library/atexit.html
import atexit
import os
def spawn_new():
    os.system("python3 -c 'print(\"Hello!\")'")

print("Main interpreter.")
atexit.register(spawn_new)
Related:
import subprocess
subprocess.Popen(['python', 'second_script.py'])
Does this open the second script and make them run concurrently? Also, will it close the second script if I stop the main one? If not, how can I do that?
You should do something like this
#! /usr/bin/env python3
import subprocess
pid = subprocess.Popen(['second-script']).pid
print(f'pid={pid}')
...
# do whatever you have to do here
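On the second part of the question (closing the second script when the main one stops): the snippet above does not handle that by itself. A minimal sketch, assuming the same second-script name, is to register the child's terminate method with atexit:
import atexit
import subprocess

proc = subprocess.Popen(['second-script'])
# Ask the child to stop when this script exits normally
# (SIGTERM on Unix, TerminateProcess on Windows).
atexit.register(proc.terminate)
# do whatever you have to do here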
I don't clearly understand what you need.
If you want to use some functions that are coded in 'pythonScriptA.py' in another script 'pythonScriptB.py', you can import the first script:
from pythonScriptA import *
# Use the functions in the ScriptB
If you want two scripts to run concurrently/in parallel, you should look into the multiprocessing library, or into any other library that allows Python code to be run in multiple processes.
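For instance, a minimal sketch (the task functions are just placeholders) that runs two functions from the same script in parallel processes:
from multiprocessing import Process

def task_a():
    print("running task A")

def task_b():
    print("running task B")

if __name__ == '__main__':
    # Start both functions in separate processes and wait for them to finish.
    procs = [Process(target=task_a), Process(target=task_b)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()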
--
Edit: correction of the script
I am playing around with a library for my beginner students, and I'm using the multiprocessing module in Python. I ran into this problem: how do I import and use a module that uses multiprocessing without causing an infinite loop of processes on Windows?
As an example, suppose I have a module mylibrary.py:
# mylibrary.py
from multiprocessing import Process
class MyProcess(Process):
    def run(self):
        print "Hello from the new process"

def foo():
    p = MyProcess()
    p.start()
And a main program that calls this library:
# main.py
import mylibrary
mylibrary.foo()
If I run main.py on Windows, it tries to import main.py into the new process, meaning the code is executed again which results in an infinite loop of process generation. I can fix it like so:
import mylibrary
if __name__ == "__main__":
    mylibrary.foo()
But, this is pretty confusing for beginners, and moreover it seems like it shouldn't be necessary. The new process is being created in mylibrary, so why doesn't the new process just import mylibrary? Is there a way to work around this issue without having to change main.py?
I am using Python 2.7, by the way.
Windows doesn't have fork, so there's no way to make a new process that is a copy of the existing one. The child process therefore has to run your code again, and you need a way to distinguish the parent process from the child process; the __main__ check is how you do that.
This is covered in the docs here: http://docs.python.org/2/library/multiprocessing.html#windows
I don't know of another way to structure the code to avoid the fork bomb effect.
I have a Python program from which I spawn a sub-program to process some files without holding up the main program. I'm currently using bash for the sub-program, started with a command and two parameters like this:
result = os.system('sub-program.sh file.txt file.txt &')
That works fine, but I (eventually!) realised that I could use Python for the sub-program, which would be far preferable, so I have converted it. The simplest way of spawning it might be:
result = os.system('python3 sub-program.py file.txt file.txt &')
Some research has shown several more sophisticated alternatives, but I have the impression that the latest and most approved method is this one:
subprocess.Popen(["python3", "-u", "sub-program.py"])
Am I correct in thinking that that is the most appropriate way of doing it? Would anyone recommend a different method and why? Simple would be good as I'm a bit of a Python novice.
If this is the recommended method, I can probably work out what the "-u" does and how to add the parameters for myself.
Optional extras:
Send a message back from the sub-program to the main program.
Make the sub-program quit when the main program does.
Yes, using subprocess is the recommended way to go according to the documentation:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function.
However, subprocess.Popen may not be what you're looking for. As opposed to os.system, you get back a Popen object that corresponds to the subprocess, and you have to call wait on it explicitly if you want to wait for its completion, e.g.:
proc = subprocess.Popen(["python3", "-u", "sub-program.py"])
do_something()
res = proc.wait()
If you want to just run a program and wait for completion you should probably use subprocess.run (or maybe subprocess.call, subprocess.check_call or subprocess.check_output) instead.
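For example, a minimal sketch of the blocking variant (file names taken from the question):
import subprocess

# Runs the sub-program, waits for it to finish, and exposes the exit status.
result = subprocess.run(["python3", "-u", "sub-program.py", "file.txt", "file.txt"])
print(result.returncode)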
Thanks skyking!
With
import subprocess
at the beginning of the main program, this does what I want:
with open('output.txt', 'w') as f:
    subprocess.Popen(['spawned.py', parameter1, parameter2], stdout=f)
The first line opens a file for the output from the sub-program started in the second line. In the second line, the square brackets contain the stuff for the sub-program: its name followed by the two parameters, which are available in the sub-program as sys.argv[1] and sys.argv[2]. After that come the subprocess keyword arguments; stdout=f sends the sub-program's output to the text file opened above.
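For completeness, the sub-program side might look roughly like this (a sketch, not the actual spawned.py):
# spawned.py (sketch)
import sys

param1 = sys.argv[1]   # first parameter from the caller
param2 = sys.argv[2]   # second parameter from the caller
print(param1, param2)  # anything printed here ends up in output.txt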
Is there any particular reason it has to be another program entirely? Why not just spawn another process which runs one of the functions defined within your script?
I suggest that you read up on multiprocessing. Python has a module just for that: https://docs.python.org/dev/library/multiprocessing.html
Here you can find info on spawning new processes, communicating between them and synchronizing them.
Be warned though that if you want to really speed up your file processing you'll want to use processes instead of threads (because of the GIL, threads in CPython won't run your Python code in parallel, which is confusing).
Also check out this page: https://pymotw.com/2/multiprocessing/basics.html
It has some code samples that will help you out a lot.
Don't forget this guard in your script:
if __name__ == '__main__':
It is very important ;)
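For example, a rough sketch of fanning the file processing out to worker processes (process_file and the file list are placeholders):
from multiprocessing import Pool

def process_file(path):
    # placeholder for the real per-file work
    with open(path) as f:
        return len(f.read())

if __name__ == '__main__':
    files = ['a.txt', 'b.txt']  # placeholder file list
    with Pool() as pool:
        results = pool.map(process_file, files)
    print(results)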
I am trying to create a Python script that, on the click of a button, opens another Python script and closes itself, plus some return function in the second script to return to the original script. I hope you can help.
Thanks.
Since your question is very vague, here's a somewhat vague answer:
First, think about whether you really need to do this at all. Why can't the first script just import the second script as a module and call some function on it?
But let's assume you've got a good answer for that, and you really do need to "close" and run the other script, where by "close" you mean "make your GUI invisible".
import subprocess
import sys

def handle_button_click(button):
    button.parent_window().hide()
    subprocess.call([sys.executable, '/path/to/other/script.py'])
    button.parent_window().show()
This will hide the window, run the other script, then show the window again when the other script is finished. It's generally a very bad idea to do something slow and blocking in the middle of an event handler, but in this case, because we're hiding our whole UI anyway, you can get away with it.
A smarter solution would involve some kind of signal that either the second script sends, or that a watcher thread sends. For example:
import threading

def run_other_script_with_gui_hidden(window):
    gui_library.do_on_main_thread(window.hide)
    subprocess.call([sys.executable, '/path/to/other/script.py'])
    gui_library.do_on_main_thread(window.show)

def handle_button_click(button):
    # Run the slow part on a worker thread so the event handler returns quickly.
    t = threading.Thread(target=run_other_script_with_gui_hidden,
                         args=(button.parent_window(),))
    t.daemon = True
    t.start()
Obviously you have to replace things like button.window(), window.hide(), gui_library.do_on_main_thread, etc. with the appropriate code for your chosen window library.
If you'd prefer to have the first script actually exit, and the second script re-launch it, you can do that, but it's tricky. You don't want to launch the second script as a child process, but as a sibling. Ideally, you want it to just take over your own process. Except that you need to shut down your GUI before doing that, unless your OS will do that automatically (basically, Windows will, Unix will not). Look at the os.exec family, but you'll really need to understand how these things work in Unix to do it right. Unless you want the two scripts to be tightly coupled together, you probably want to pass the second script, on the command line, the exact right arguments to re-launch the first one (basically, pass it your whole sys.argv after any other parameters).
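A rough, Unix-oriented sketch of that exec approach (the script path is illustrative):
import os
import sys

# Shut down the GUI first, then replace the current process with the second
# script, handing it our own command line so it can re-launch this script.
os.execv(sys.executable,
         [sys.executable, '/path/to/other/script.py'] + sys.argv)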
As an alternative, you can use execfile to run the second script within your existing interpreter instance, and then have the second script execfile you back. This has similar, but not identical, issues to the exec solution.
How to send string/data to STDIN of a running process in python?
I'd like to create a front end for a CLI program. For example, I want to pass multiple strings to this Pascal application:
program spam;
var a, b, c: string;
begin
  while a <> 'no' do
  begin
    writeln('what is your name?');
    readln(a);
    writeln('what is your quest?');
    readln(b);
    writeln('what is your favorite color?');
    readln(c);
    writeln(a, b, c);
  end;
end.
How do I pass strings to this program from Python (using the subprocess module)? Thank you. Sorry for my English.
If you want to control another interactive program, it could be worth trying the Pexpect module to do so. It is designed to look for prompt messages and so on, and interact with the program. Note that it doesn't currently work directly on Windows - it does work under Cygwin.
A possible non-Cygwin Windows variant is WinPexpect, which I found via this question. One of the answers on that question suggests the latest version of WinPexpect is at http://sage.math.washington.edu/home/goreckc/sage/wexpect/, but looking at the modification dates I think the BitBucket (the first link) is actually the latest.
As Windows terminals are somewhat different to Unix ones, I don't think there is a direct cross-platform solution. However, the WinPexpect docs say the only difference in the API between it and pexpect is the name of the spawn function. You could probably do something like the following (untested) code to get it to work in both:
try:
    import pexpect
    spawn = pexpect.spawn
except ImportError:
    import winpexpect
    spawn = winpexpect.winspawn

# NB. Errors may occur when you run spawn rather than (or as
# well as) when you import it, so you may have to wrap this
# up in a try...except block and handle them appropriately.
child = spawn('command and args')
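Once you have spawned the child, driving the Pascal program could look roughly like this (a sketch; the executable name is an assumption, the prompt texts are taken from the question's Pascal source):
child = spawn('./spam')   # assuming the Pascal program is compiled to ./spam
child.expect('what is your name')
child.sendline('Arthur, King of the Britons')
child.expect('what is your quest')
child.sendline('To seek the Holy Grail')
child.expect('what is your favorite color')
child.sendline('blue')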