I am trying to create a Python script that, on the click of a button, opens another Python script and closes itself, with some return function in the second script to return to the original script. Hope you can help.
Thanks.
Since your question is very vague, here's a somewhat vague answer:
First, think about whether you really need to do this at all. Why can't the first script just import the second script as a module and call some function on it?
But let's assume you've got a good answer for that, and you really do need to "close" and run the other script, where by "close" you mean "make your GUI invisible".
import subprocess, sys

def handle_button_click(button):
    button.parent_window().hide()
    subprocess.call([sys.executable, '/path/to/other/script.py'])
    button.parent_window().show()
This will hide the window, run the other script, then show the window again when the other script is finished. It's generally a very bad idea to do something slow and blocking in the middle of an event handler, but in this case, because we're hiding our whole UI anyway, you can get away with it.
A smarter solution would involve some kind of signal that either the second script sends, or that a watcher thread sends. For example:
import subprocess, sys, threading

def run_other_script_with_gui_hidden(window):
    gui_library.do_on_main_thread(window.hide)
    subprocess.call([sys.executable, '/path/to/other/script.py'])
    gui_library.do_on_main_thread(window.show)

def handle_button_click(button):
    t = threading.Thread(target=run_other_script_with_gui_hidden,
                         args=(button.parent_window(),))
    t.daemon = True
    t.start()
Obviously you have to replace things like button.parent_window(), window.hide(), gui_library.do_on_main_thread, etc., with the appropriate code for your chosen window library.
If you'd prefer to have the first script actually exit, and the second script re-launch it, you can do that, but it's tricky. You don't want to launch the second script as a child process, but as a sibling. Ideally, you want it to just take over your own process. Except that you need to shut down your GUI before doing that, unless your OS will do that automatically (basically, Windows will, Unix will not). Look at the os.exec family, but you'll really need to understand how these things work in Unix to do it right. Unless you want the two scripts to be tightly coupled together, you probably want to pass the second script, on the command line, the exact right arguments to re-launch the first one (basically, pass it your whole sys.argv after any other parameters).
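Here's a rough, untested sketch of that handoff; shutdown_gui is a hypothetical stand-in for whatever tears down your toolkit's main loop, and the script path is made up:

import os, sys

def hand_off_to_other_script():
    shutdown_gui()  # hypothetical: shut the GUI down yourself first (Unix won't)
    # Replace this process with the second script, appending our own
    # command line so it knows exactly how to re-launch us when it's done.
    os.execv(sys.executable,
             [sys.executable, '/path/to/other/script.py']
             + [sys.executable] + sys.argv)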
As an alternative, you can use execfile to run the second script within your existing interpreter instance, and then have the second script execfile you back. This has similar, but not identical, issues to the exec solution.
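A bare sketch of that round trip (Python 2 only, since execfile is gone in Python 3; the path is made up, and the second script would have to execfile this one back):

def run_other_script_inline():
    # Runs the other script inside this same interpreter process.
    execfile('/path/to/other/script.py')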
Related
I'm fairly new to python and need help with the following.
Say I have some code which is continually outputting data to the Python console, e.g.:
for x in range(10000): print x
I then want to be able to enter various commands which may affect the output immediately. For example, I enter a number which causes the loop to start again from this number, or I enter step=2, which causes the step level to change, etc.
Basically, I want the code to run and print in the background, while the prompt is still available.
Is such a thing possible? I'm guessing the output would have to be sent to a new window, but I am unsure how this would work out in practice. I would prefer no GUI at the moment, as I just want to keep things as simple as possible.
As #BlueRhine S says, start a background thread. Here is a link to the standard library threading module docs. I would probably prefer to start a sub-process instead, though, and use a pipe to communicate between your foreground process and the worker process. Here is a link to those docs.
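As a minimal (hypothetical) sketch of the worker-process idea: the loop lives in a child script, and the parent feeds it commands over a pipe. 'worker.py' is made up; it would read lines from stdin between prints and adjust its loop accordingly.

import sys
from subprocess import Popen, PIPE

proc = Popen([sys.executable, 'worker.py'], stdin=PIPE)
while True:
    cmd = raw_input('> ')         # e.g. '42' or 'step=2'
    if cmd == 'quit':
        break
    proc.stdin.write(cmd + '\n')  # the worker applies it mid-loop
    proc.stdin.flush()
proc.stdin.close()
proc.wait()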
Tried searching for a solution to this, but because of the shell=True argument (don't think that's related to what I'm doing, but I could well be wrong) the search gets lots of hits that aren't useful.
OK, so the problem is basically this:
I'm running a Python script on a cluster. On the cluster the normal thing to do is to launch all code etc. via a shell script, which is used to request the appropriate resources (maximum run time, nodes, processors per node, etc.) needed to run the job. This shell script then calls my script and away it goes.
That in itself isn't an issue; the problem is that my 'parent' code needs to wait for its 'children' to run fully (and generate the data the parent will use) before continuing. This isn't a problem when there's no shell script between it and the children, but as it stands, .communicate() and .wait() are 'satisfied' as soon as the shell script is done. I need them to wait until the script(s) called by the shell are done.
I could botch it by putting in a while loop that waits for certain files to exist before breaking, but this seems messy to me.
So my question is: is there a way I can get .communicate() (ideally), or .wait(), or some other (clean/nice) method, to pause the parent code until the shell, and everything called by the shell, finishes running? Ideally (nearly essential, tbh) this should be done in the parent code alone.
I might not be explaining this very well, so I'm happy to provide more details if needed. And if this is answered somewhere else, I'm sorry; just point me that way!
Per the Python documentation, subprocess.call should be blocking and wait for the subprocess to complete. In this code I am trying to convert a few xls files to a new format by calling LibreOffice on the command line. I assumed that subprocess.call was blocking, but it seems I need to add an artificial delay after each call, otherwise I miss a few files in the out directory.
What am I doing wrong? And why do I need the delay?
from subprocess import call

for i in range(0, len(sorted_files)):
    args = ['libreoffice', '-headless', '-convert-to', 'xls',
            "%s/%s.xls" % (sorted_files[i]['filename'], sorted_files[i]['filename']),
            '-outdir', 'out']
    call(args)
    var = raw_input("Enter something: ")  # if I comment this line out, I don't get all the files in the out directory
EDIT: It might be hard to find the answer among the comments below, so: I ended up using unoconv for the document conversion, which is blocking and easy to work with from a script.
It's likely that libreoffice is implemented as some sort of daemon/intermediary process. The "daemon" will (effectively[1]) parse the command line and then farm the work off to some other process, possibly detaching it so that it can exit immediately. (Based on the -invisible option in the documentation, I strongly suspect this is indeed what's happening in your case.)
If this is the case, then your subprocess.call does do what it is advertised to do -- it waits for the daemon to complete before moving on. However, it doesn't do what you want, which is to wait for all of the work to be completed. The only option you have in that scenario is to look to see if the daemon has a -wait option or similar.
[1] It is likely that we don't have an actual daemon here, only something which behaves similarly. See the comments by abarnert.
The problem is that the soffice command-line tool (which libreoffice is either just a link to, or a further wrapper around) is just a "controller" for the real program, soffice.bin. It finds a running copy of soffice.bin and/or creates one, tells it to do some work, and then quits.
So, call is doing exactly the right thing: it waits for libreoffice to quit.
But you don't want to wait for libreoffice to quit, you want to wait for soffice.bin to finish doing the work that libreoffice asked it to do.
It looks like what you're trying to do isn't possible to do directly. But it's possible to do indirectly.
The docs say that headless mode:
… allows using the application without user interface.
This special mode can be used when the application is controlled by external clients via the API.
In other words, the app doesn't quit after running some UNO strings/doing some conversions/whatever else you specify on the command line; it sits around waiting for more UNO commands from outside, while the launcher exits as soon as it has sent the appropriate commands to the app.
You probably have to use that above-mentioned external control API (UNO) directly.
See Scripting LibreOffice for the basics (although there's more info there about internal scripting than external), and the API documentation for details and examples.
But there may be an even simpler answer: unoconv is a simple command-line tool written using the UNO API that does exactly what you want. It starts up LibreOffice if necessary, sends it some commands, waits for the results, and then quits. So if you just use unoconv instead of libreoffice, call is all you need.
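With unoconv, the loop from the question becomes something like this (hedged: flags vary a bit between unoconv versions, but -f picks the output format and -o the output directory):

from subprocess import call

for f in sorted_files:
    # Blocks until the conversion has actually finished.
    call(['unoconv', '-f', 'xls', '-o', 'out',
          "%s/%s.xls" % (f['filename'], f['filename'])])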
Also notice that unoconv is written in Python, and is designed to be used as a module. If you just import it, you can write your own (simpler, and use-case-specific) code to replace the "Main entrance" code, and not use subprocess at all. (Or, of course, you can tear apart the module and use the relevant code yourself, or just use it as a very nice piece of sample code for using UNO from Python.)
Also, the unoconv page linked above lists a variety of other similar tools, some that work via UNO and some that don't, so if it doesn't work for you, try the others.
If nothing else works, you could consider, e.g., creating a sentinel file and using a filesystem watch, so at least you'll be able to detect exactly when it's finished its work, instead of having to guess at a timeout. But that's a real last-ditch workaround that you shouldn't even consider until eliminating all of the other options.
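If you do end up there, the sketch is simple enough (the sentinel path is hypothetical; the child would have to create it as its very last step):

import os, time

def wait_for_sentinel(path='out/.done', timeout=300, interval=0.5):
    # Poll for a file the child creates only when it's truly finished.
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.exists(path):
            return True
        time.sleep(interval)
    return False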
If libreoffice is using an intermediary (daemon) as mentioned by @mgilson, then one solution is to find out what program it's invoking, and then invoke that program directly yourself.
I'm trying to run an external, separate program from Python. It wouldn't be a problem normally, but the program is a game, and has a Python interpreter built into it. When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance, so that they share the first Python console. I can end the first program fine, but I would rather have separate consoles (mainly because I have the console start off hidden, but it gets shown when I start the program from Python with subprocess.Popen).
I would like it if I could start the second program wholly on its own, as though I just 'double-clicked on it'. Also, os.system won't work because I'm aiming for cross-platform compatibility, and that's only available on Windows.
I would like it if I could start the second program wholly on its own, as though I just 'double-clicked on it'.
As of 2.7 and 3.3, Python doesn't have a cross-platform way to do this. A new shutil.open method may be added in the future (possibly not under that name); see http://bugs.python.org/issue3177 for details. But until then, you'll have to write your own code for each platform you care about.
Fortunately, what you're trying to do is simpler and less general than what shutil.open is ultimately hoped to provide, which means it's not that hard to code:
On OS X, there's a command called open that does exactly what you want: "The open command opens a file (or a directory or URL), just as if you had double-clicked the file's icon." So, you can just popen open /Applications/MyGame.app.
On Windows, the equivalent command is start, but unfortunately, that's part of the cmd.exe shell rather than a standalone program. Fortunately, Python comes with a function os.startfile that does the same thing, so just os.startfile(r'C:\Program Files\MyGame\MyGame.exe').
On FreeDesktop-compatible *nix systems (which includes most modern linux distros, etc.), there's a very similar command called xdg-open: "xdg-open opens a file or URL in the user's preferred application." Again, just popen xdg-open /usr/local/bin/mygame.
If you expect to run on other platforms, you'll need to do a bit of research to find the best equivalent. Otherwise, for anything besides Mac and Windows, I'd just try to popen xdg-open, and throw an error if that fails.
See http://pastebin.com/XVp46f7X for an (untested) example.
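In the same spirit, here's a minimal (equally untested) sketch of the per-platform dispatch described above:

import os, subprocess, sys

def launch_file(path):
    # "Double-click" a file using each platform's own mechanism.
    if sys.platform == 'darwin':      # OS X
        subprocess.check_call(['open', path])
    elif sys.platform == 'win32':     # Windows
        os.startfile(path)
    else:                             # assume a FreeDesktop-compatible *nix
        subprocess.check_call(['xdg-open', path])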
Note that this will only work to run something that actually can be double-clicked to launch in Finder/Explorer/Nautilus/etc. For example, if you try to launch './script.py', depending on your settings, it may just fire up a text editor with your script in it.
Also, on OS X, you want to run the .app bundle, not the UNIX executable inside it. (In some cases, launching a UNIX executable—whether inside an .app bundle or standalone—may work, but don't count on it.)
Also, keep in mind that launching a program this way is not the same as running it from the command line—in particular, it will inherit its environment, current directory/drive, etc. from the Windows/Launch Services/GNOME/KDE/etc. session, not from your terminal session. If you need more control over the child process, you will need to look at the documentation for open, xdg-open, and os.startfile and/or come up with a different solution.
Finally, just because open/xdg-open/os.startfile succeeds doesn't actually mean that the game started up properly. For example, if it launches and then crashes before it can even create a window, it'll still look like success to you.
You may want to look around PyPI for libraries that do what you want. http://pypi.python.org/pypi/desktop looks like a possibility.
Or you could look through the patches in issue 3177, and pick the one you like best. As far as I know, they're all pure Python, and you can easily just drop the added function in your own module instead of in os or shutil.
As a quick hack, you may be able to (ab)use webbrowser.open. "Note that on some platforms, trying to open a filename using this function, may work and start the operating system’s associated program. However, this is neither supported nor portable." In particular, IIRC, it will not work on OS X 10.5+. However, I believe that making a file: URL out of the filename actually does work on OS X and Windows, and also works on linux for most, but not all, configurations. If so, it may be good enough for a quick&dirty script. Just keep in mind that it's not documented to work, it may break for some of your users, it may break in the future, and it's explicitly considered abuse by the Python developers, so I wouldn't count on it for anything more serious. And it will have the same problems launching 'script.py' or 'Foo.app/Contents/MacOS/foo', passing env variables, etc. as the more correct method above.
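For what it's worth, the entire hack is just this (path hypothetical, and again: not documented to work):

import webbrowser
webbrowser.open('file:///usr/local/bin/mygame')  # may or may not launch it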
Almost everything else in your question is both irrelevant and wrong:
It wouldn't be a problem normally, but the program is a game, and has a Python interpreter built into it.
That doesn't matter. If the game were writing to stdout from C code, it would do the exact same thing.
When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance
No it doesn't. It starts an entirely new process, whose embedded Python interpreter is an entirely new instance of Python. You can verify that by, e.g., running a different version of Python than the game embeds.
so that they share the first Python console.
No they don't. They may share the same tty/cmd window, but that's not the same thing.
I can end the first program fine, but I would rather have separate consoles (mainly because I have the console start off hidden, but it gets shown when I start the program from Python with subprocess.Popen).
You could always pipe the child's stdout and stderr to, e.g., a logfile, which you could then view separately from the parent process's output, if you wanted to. But I think this is going off on a tangent that has nothing to do with what you actually care about.
Also, os.system won't work because I'm aiming for cross-platform compatibility, and that's only available on Windows.
Wrong; os.system is available on "Unix, Windows"--which is probably everywhere you care about. However, it won't work because it runs the child program in a subshell of your script, using the same tty. (And it's got lots of other problems—e.g., blocking until the child finishes.)
When I use subprocess.Popen, it starts the separate program, but does so under the original program's Python instance...
Incorrect.
... so that they share the first Python console.
This is the crux of your problem. If you want it to run in another console then you must run another console and tell it to run your program instead.
... I'm aiming for cross-platform compatibility ...
Sorry, there's no cross-platform way to do it. You'll need to run the console/terminal appropriate for the platform.
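As a rough sketch of that (the Windows branch uses a documented subprocess flag; treat the other branches as guesses to adapt, since terminal emulators vary wildly):

import subprocess, sys

def run_in_new_console(prog):
    # Launch prog (a single executable path) in a fresh console/terminal.
    if sys.platform == 'win32':
        return subprocess.Popen([prog],
                                creationflags=subprocess.CREATE_NEW_CONSOLE)
    elif sys.platform == 'darwin':
        return subprocess.Popen(['open', '-a', 'Terminal', prog])
    else:
        return subprocess.Popen(['xterm', '-e', prog])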
I've been working on a gui app that needs to manage external processes. Working with external processes leads to a lot of issues that can make a programmer's life difficult, and I feel like maintenance on this app is taking an unacceptably long time. I've been trying to list the things that make working with external processes difficult so that I can come up with ways of mitigating the pain. This kind of turned into a rant, which I thought I'd post here in order to get some feedback and to provide some guidance to anybody thinking about sailing into these very murky waters. Here's what I've got so far:
Output from the child can get mixed up with output from the parent. This can make both outputs misleading and hard to read. It can be hard to tell what came from where. It becomes harder to figure out what's going on when things are asynchronous. Here's a contrived example:
import textwrap, os, time
from subprocess import Popen

test_path = 'test_file.py'
with open(test_path, 'w') as file:
    file.write(textwrap.dedent('''
        import time
        for i in range(3):
            print 'Hello %i' % i
            time.sleep(1)'''))
proc = Popen('python -B "%s"' % test_path)
for i in range(3):
    print 'Hello %i' % i
    time.sleep(1)
os.remove(test_path)
Output:
Hello 0
Hello 0
Hello 1
Hello 1
Hello 2
Hello 2
I guess I could have the child process write its output to a file. But it can be annoying to have to open up a file every time I want to see the result of a print statement.
If I have code for the child process I could add a label, something like print 'child: Hello %i', but it can be annoying to do that for every print. And it adds some noise to the output. And of course I can't do it if I don't have access to the code.
I could manually manage the process output. But then you open up a huge can of worms with threads and polling and stuff like that.
A simple solution is to treat processes like synchronous functions, that is, no further code executes until the process completes. In other words, make the process block. But that doesn't work if you're building a gui app. Which brings me to the next problem...
Blocking processes cause the gui to become unresponsive.
import textwrap, sys, os
from subprocess import Popen
from PyQt4.QtGui import *
from PyQt4.QtCore import *

test_path = 'test_file.py'
with open(test_path, 'w') as file:
    file.write(textwrap.dedent('''
        import time
        for i in range(3):
            print 'Hello %i' % i
            time.sleep(1)'''))

app = QApplication(sys.argv)
button = QPushButton('Launch process')

def launch_proc():
    # Can't move the window until the process completes
    proc = Popen('python -B "%s"' % test_path)
    proc.communicate()

button.connect(button, SIGNAL('clicked()'), launch_proc)
button.show()
app.exec_()
os.remove(test_path)
Qt provides a process wrapper of its own called QProcess which can help with this. You can connect functions to signals to capture output relatively easily. This is what I'm currently using. But I'm finding that all these signals behave suspiciously like goto statements and can lead to spaghetti code. I think I want to get sort-of blocking behavior by having the 'finished' signal from QProcess call a function containing all the code that comes after the process call. I think that should work but I'm still a bit fuzzy on the details...
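Something like this rough sketch of that idea (PyQt4 new-style signals; launch_proc_async and on_done are made-up names, and this isn't battle-tested):

from PyQt4.QtCore import QProcess

def launch_proc_async(script_path, on_done):
    proc = QProcess()

    def finished(*args):
        # args: exit code (and possibly exit status, depending on overload).
        # Everything that used to come after the blocking call goes here.
        on_done(str(proc.readAllStandardOutput()))

    proc.finished.connect(finished)
    proc.start('python', ['-B', script_path])  # returns immediately
    return proc  # keep a reference, or the QProcess gets garbage-collected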
Stack traces get interrupted when you go from the child process back to the parent process. If a normal function screws up, you get a nice complete stack trace with filenames and line numbers. If a subprocess screws up, you'll be lucky if you get any output at all. You end up having to do a lot more detective work every time something goes wrong.
Speaking of which, output has a way of disappearing when dealing with external processes. For example, if you run something via the Windows 'cmd' command, the console will pop up, execute the code, and then disappear before you have a chance to see the output. You have to pass the /k flag to make it stick around. Similar issues seem to crop up all the time.
I suppose both problems 3 and 4 have the same root cause: no exception handling. Exception handling is meant to be used with functions, it doesn't work with processes. Maybe there's some way to get something like exception handling for processes? I guess that's what stderr is for? But dealing with two different streams can be annoying in itself. Maybe I should look into this more...
Processes can hang and stick around in the background without you realizing it. So you end up yelling at your computer cuz it's going so slow until you finally bring up your task manager and see 30 instances of the same process hanging out in the background.
Also, hanging background processes can interfere with other instances of the process in various fun ways, such as causing permission errors by holding a handle to a file or something like that.
It seems like an easy solution to this would be to have the parent process kill the child process on exit if the child process didn't close itself. But if the parent process crashes, cleanup code might not get called and the child can be left hanging.
Also, if the parent waits for the child to complete, and the child is in an infinite loop or something, you can end up with two hanging processes.
This problem can tie in to problem 2 for extra fun, causing your gui to stop responding entirely and force you to kill everything with the task manager.
F***ing quotes
Parameters often need to be passed to processes. This is a headache in itself. Especially if you're dealing with file paths. Say... 'C:/My Documents/whatever/'. If you don't have quotes, the string will often be split at the space and interpreted as two arguments. If you need nested quotes you can use ' and ". But if you need to use more than two layers of quotes, you have to do some nasty escaping, for example: "cmd /k 'python \'path 1\' \'path 2\''".
A good solution to this problem is passing parameters as a list rather than as a single string, which the subprocess module lets you do.
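Each list element arrives in the child as exactly one argument, spaces and all, with no shell quoting involved (paths hypothetical):

from subprocess import call

call(['python', 'path 1', 'path 2'])  # two arguments, spaces intact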
Can't easily return data from a subprocess.
You can use stdout of course. But what if you want to throw a print in there for debugging purposes? That's gonna screw up the parent if it's expecting output formatted a certain way. In functions you can print one string and return another and everything works just fine.
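One convention that helps: reserve stdout for data and stderr for debugging chatter, then capture them separately in the parent ('child.py' is hypothetical):

import sys
from subprocess import Popen, PIPE

proc = Popen([sys.executable, 'child.py'], stdout=PIPE, stderr=PIPE)
data, chatter = proc.communicate()
print data     # the child's "return value"
print chatter  # its debug prints, kept out of the way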
Obscure command-line flags and a crappy terminal based help system.
These are problems I often run into when using OS-level apps. Like the /k flag I mentioned for holding a cmd window open: whose idea was that? Unix apps don't tend to be much friendlier in this regard. Hopefully you can use Google or StackOverflow to find the answer you need. But if not, you've got a lot of boring reading and frustrating trial and error to do.
External factors.
This one's kind of fuzzy. But when you leave the relatively sheltered harbor of your own scripts to deal with external processes, you find yourself having to deal with the "outside world" to a much greater extent. And that's a scary place. All sorts of things can go wrong. Just to give a random example: the cwd in which a process is run can modify its behavior.
There are probably other issues, but those are the ones I've written down so far. Any other snags you'd like to add? Any suggestions for dealing with these problems?
Check out the subprocess module. It should help with output separation. I don't see any way around either separate output streams or some kind of output tagging in a single stream.
The hanging-process problem is difficult as well. The only solution I have come up with is to put a timer on the external process and kill it if it does not return in the allotted time. Crude, nasty, and if anyone else has a good solution, I would love to hear it so I can use it too.
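For reference, the crude version looks something like this (Python 2.6+, where Popen.kill exists; Python 3.3's wait(timeout=...) tidies it up considerably):

import time
from subprocess import Popen

def call_with_timeout(args, timeout=60.0, poll_interval=0.1):
    # Run args, but kill the process if it overstays its welcome.
    proc = Popen(args)
    deadline = time.time() + timeout
    while proc.poll() is None:
        if time.time() > deadline:
            proc.kill()  # crude and nasty, but better than hanging forever
            return None
        time.sleep(poll_interval)
    return proc.returncode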
One thing you could do to help deal with the problem of completely un-managed shutdown is to keep a directory of pid files. Whenever you kick off an external process, write a file into your pid file directory with a name that is the pid for the process. Erase the pid file when you know the process has exited cleanly. You can use the stuff in the pid directory to help cleanup on crashes or re-starts.
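A sketch of that bookkeeping (the directory name is made up, and you'd want it created at startup):

import os
from subprocess import Popen

PID_DIR = 'pids'

def launch_tracked(args):
    proc = Popen(args)
    # Record the pid; anything still here after a crash names a process to check on.
    open(os.path.join(PID_DIR, str(proc.pid)), 'w').close()
    return proc

def mark_finished(proc):
    os.remove(os.path.join(PID_DIR, str(proc.pid)))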
This may not provide any satisfying or useful answers, but maybe it's a start.