I want to write a Python script that can launch an application. The launched application can also read Python commands, which I am currently passing through another script.
The problem I am facing is that I need two Python scripts: one to launch the application and a second one to run commands inside the launched application.
Can I achieve this with a single script? How do I tell Python to run the next few lines of the script inside the launched application?
In general, you use subprocess.Popen to launch a command from Python. Popen does not block, so it lets you keep running Python statements while the application runs. You also have access to the running subprocess's stdin and stdout, so you can interact with the running application.
If I understand what you're asking, it'd look something like this:
import subprocess
app = subprocess.Popen(["/path/to/app", "-and", "args"], stdin=subprocess.PIPE, text=True)
app.stdin.write("python command\n")
app.stdin.flush()  # make sure the command actually reaches the child
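As a runnable sketch of the same idea (with a short `sys.executable` one-liner standing in for the real application, which is purely an assumption for illustration):

```python
import subprocess
import sys

# A stand-in "application" that reads one command from stdin and echoes it.
app = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; print('got:', sys.stdin.readline().strip())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,  # work with str instead of bytes on the pipes
)

# communicate() sends the input, closes stdin, and collects the output.
out, _ = app.communicate("python command\n")
print(out)
```

If you need an ongoing back-and-forth rather than a single exchange, write to `app.stdin` and read from `app.stdout` incrementally instead of using `communicate()`.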
Within my Python script I need to launch several processes:
1) run another Python script (a Flask app, started with the command python app.py)
2) then launch the command ngrok http 5000 and, from that command's output, get the URL that ngrok is forwarding.
I have tried the subprocess module, but when it executes:
subprocess.Popen( "python app/app.py",shell=True)
it launches the interactive shell and blocks the execution of my script.
What is the correct way to achieve this?
Instead of the Popen function, you should use the call function.
subprocess.call('python app.py', shell=True)
Also see the subprocess documentation.
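If you also need the forwarding URL from the second command's output, a minimal sketch could pipe stdout and scan it. Here a `sys.executable` one-liner (and a made-up URL) stands in for `ngrok http 5000` — that stand-in is an assumption for illustration only:

```python
import subprocess
import sys

# Stand-in for "ngrok http 5000": a child that prints a forwarding line.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "print('Forwarding https://example.ngrok.io -> localhost:5000')"],
    stdout=subprocess.PIPE,
    text=True,  # decode the pipe to str
)

# Popen returns immediately; read lines until the forwarding URL appears.
url = None
for line in proc.stdout:
    if "Forwarding" in line:
        url = line.split()[1]
        break
proc.wait()
print(url)
```

Note that the real ngrok agent draws an interactive console rather than plain line output; in practice its tunnel URL is usually easier to fetch from the local inspection API it exposes at http://127.0.0.1:4040.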
I have written a simple HTTP server by extending BaseHTTPRequestHandler. I am able to start it and process requests successfully. However, it runs in the foreground, and the only way I can think of to make it run in the background is to wrap it with a shell script and use nohup, &, etc.
Is there any way I can do this within the python script itself?
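One in-process option (a sketch, not the only approach) is to run serve_forever() on a daemon thread, so the rest of the script keeps running while the server handles requests:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging for the example

# Port 0 asks the OS for any free port.
server = HTTPServer(("127.0.0.1", 0), Handler)

# serve_forever() blocks, so push it onto a daemon thread; a daemon
# thread will not keep the interpreter alive when the main script exits.
threading.Thread(target=server.serve_forever, daemon=True).start()

# The main script is free to do other work; prove the server answers:
body = urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/").read()
server.shutdown()
```

This keeps the server inside the same Python process; fully detaching from the terminal (what nohup gives you) is a separate daemonization step.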
I am writing a script to launch remote desktop sessions using rdesktop. The relevant portion of the code looks like this:
subprocess.call(["rdesktop", "-a", "16", "-u", "user", "-g", "1280x1024", server])
When this happens, the terminal is locked up until I exit the rdesktop session. Would it be possible to launch multiple desktop sessions with this script?
subprocess.Popen (py2 docs, py3 docs) is the correct answer here.
subprocess.call waits for the command to complete, while subprocess.Popen starts it in the background and immediately goes on to execute the next line.
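A small sketch of the difference, using a `sys.executable` child as a stand-in for rdesktop (the stand-in and the 0.5 s sleep are assumptions for the example):

```python
import subprocess
import sys

# call() blocks: this line does not finish until the child has exited.
rc = subprocess.call([sys.executable, "-c", "pass"])

# Popen() returns immediately, so several sessions can run concurrently.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.5)"])
still_running = proc.poll() is None  # None means: has not exited yet
proc.wait()  # reap the child when we are done with it
```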
You can fork the Python process, use threads, or run the process in the background.
Let me start with what I'm really trying to do. We want a platform-independent startup script for invoking a JVM with some system properties and a dynamically generated classpath. We picked Jython in particular because we only need to depend on the standalone jython.jar in our startup script. We decided we could write a Jython script that uses subprocess.Popen to launch our application's JVM and then terminates.
One more thing: our application uses a lot of legacy debug code that prints to standard out, so the startup script has typically redirected stdout/stderr to a log file. I attempted to reproduce that in our Jython script like this:
subprocess.Popen(args, stdout=logFile, stderr=logFile)
After this line the launcher script and the JVM hosting Jython terminate. The problem is that nothing shows up in logFile. If I instead do this:
subprocess.Popen(args, stdout=logFile, stderr=logFile).wait()
then we get logs. So the parent process needs to keep running in parallel with the application process launched via subprocess? I want to avoid having two JVMs running.
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates? Is there a better way to launch the application jvm from jython? Is Jython a bad solution anyway?
We want a platform independent startup script for invoking a JVM with some system properties and a dynamically generated classpath.
You could use a platform-independent script to generate a platform-specific startup script, either at installation time or before each invocation. In the latter case you additionally need a simple, static, platform-specific script that invokes your startup-script-generating script and then the generated script itself. In both cases you start your application by calling a static, platform-specific script.
Can you invoke subprocess in such a way that the stdout file will be written even if the parent process terminates?
You could open the file/redirect in a child process, e.g., using the shell:
Popen(' '.join(args + ['>', 'logFile', '2>&1']),  # shell-specific command line
      shell=True)  # on Windows, see _cmdline2list to understand what is going on
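As a rough, testable sketch of the same trick, with a `sys.executable` one-liner standing in for the JVM command (that stand-in, the temp-file log path, and the quoting style are all assumptions for illustration — real shell quoting is platform-specific):

```python
import os
import subprocess
import sys
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "logFile")

# Stand-in for the real JVM invocation.
cmd = f'"{sys.executable}" -c "print(42)" > "{log_path}" 2>&1'

# With shell=True the *shell* opens the log file, so the child owns the
# redirection and keeps writing even if this launcher exits right away.
subprocess.Popen(cmd, shell=True)
```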
I have a Python script and am wondering: is there any way I can ensure that the script runs continuously on a remote computer? For example, if the script crashes for whatever reason, is there a way to start it up automatically instead of having to remote desktop in? Are there any other factors I have to be aware of? The script will be running on a Windows machine.
Many ways. On Windows, even a simple looping batch file would probably do: have it start the script in a loop, so that whenever the script crashes it returns to the shell and is restarted.
Maybe you can use XML-RPC to call functions and pass data. Some time ago I did something like what you are asking by using SimpleXMLRPCServer and xmlrpc.client. There are examples of simple configurations in the docs.
Depends on what you mean by "crash". If it's just exceptions and the like, you can catch everything and restart your process from within itself. If it's more than that, one possibility is to run it as a daemon spawned from a separate Python process that acts as a supervisor. I'd recommend supervisord, but that's UNIX-only; you could clone a subset of its functionality, though.
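A sketch of that clone-a-subset idea: a tiny supervisor loop that re-runs the child whenever it exits abnormally. The max_restarts cap and the always-crashing stand-in child are illustrative assumptions; a real supervisor would retry indefinitely, usually with a backoff delay:

```python
import subprocess
import sys

def supervise(cmd, max_restarts=3):
    """Re-run cmd until it exits cleanly or the restart budget runs out."""
    restarts = 0
    while True:
        rc = subprocess.call(cmd)   # blocks until the child exits
        if rc == 0:                 # clean exit: stop supervising
            return restarts
        restarts += 1               # abnormal exit: count it and restart
        if restarts >= max_restarts:
            return restarts

# A child that always "crashes" (exits with status 1), for demonstration.
crashing = [sys.executable, "-c", "import sys; sys.exit(1)"]
restarts = supervise(crashing)
print(restarts)
```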