I was trying to make a program in Python that would use os.system to open a file in Safari. In this case, I was trying to have it open a text copy of itself. The file's name is foo.py.
import os, socket
os.system("python -m SimpleHTTPServer 4000")
IP = socket.gethostbyname(socket.gethostname())
osCommand = "open -a safari http://"+IP+":4000/foo.py"
os.system(osCommand)
os.system runs a program and waits for it to finish before returning.
So it won't get to the next line of your code until the server has finished serving, which will never happen. (Well, you can hit ^C to stop the server, but then when you get to the next line that opens Safari, it'll have no server to connect to anymore.)
This is one of the many reasons the docs for system basically tell you not to use it:
The subprocess module provides more powerful facilities for spawning new processes and retrieving their results; using that module is preferable to using this function. See the Replacing Older Functions with the subprocess Module section in the subprocess documentation for some helpful recipes.
For example:
import subprocess, socket
server = subprocess.Popen(['python', '-m', 'SimpleHTTPServer', '4000'])
IP = socket.gethostbyname(socket.gethostname())
safari = subprocess.Popen(['open', '-a', 'safari', 'http://'+IP+':4000/foo.py'])
server.wait()
safari.wait()
That will start both programs in the background, and then wait for both to finish, instead of starting one, waiting for it to finish, starting the other, and waiting for it to finish.
All that being said, this is kind of a silly way to do what you want. What's wrong with just opening a file URL (like 'file:///{}'.format(os.path.abspath(sys.argv[0]))) in Safari? Or in the default web browser (which would presumably be Safari for you, but would also work on other platforms, and for Mac users who use Chrome or Firefox, and so on) by using webbrowser.open on that URL?
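A minimal sketch of that webbrowser approach, opening the running script itself (this simply prepends file:// to the absolute path; webbrowser.open hands the URL to the platform's default browser and returns False if it can't find one):

```python
import os
import sys
import webbrowser

# Build a file:// URL pointing at the running script itself.
url = 'file://' + os.path.abspath(sys.argv[0])

# Open it in the default browser (Safari on a stock Mac).
# Returns False if no usable browser was found.
webbrowser.open(url)
```

No HTTP server is needed at all for this, which sidesteps the blocking problem entirely.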
Related
I needed to launch Chrome programmatically, then open some more tabs, then close them all when I was done, even if an existing Chrome browser was already open. I could find partial answers, but nothing simple that worked with already running browsers.
I needed something following the KISS principle (Keep It Simple & Smart), simple code with a terminator!
Here is a simple answer that will launch, track, and terminate a new Chrome browser instance, but with child tabs too.
It launches a new process for a Chrome instance, opens additional tabs in that new Chrome instance, and finally calls terminate() when finished to close the browser launched by subprocess.Popen() along with its child tabs. This works even when an existing Chrome browser process is already running.
The standard path (used below) for Chrome.exe on Windows 10 is (usually): "C:\Program Files\Google\Chrome\Application\chrome.exe"
The code should always open a new Chrome window, even if Chrome is already running. The subprocess module is required here instead of os.system; otherwise a new Chrome window will not be launched.
Advantages of this programmatic approach:
(1) subprocess.Popen() gives you a process ID, useful for tracking and closing the browser started in the subprocess.
(2) All child tabs started within the subprocess.Popen() will be closed when the parent subprocess is terminated.
N.B. If there is a pre-existing browser instance running, my_chrome_process.terminate() will NOT terminate it; it terminates only the instance started by the subprocess.Popen() code below. This is the expected behavior.
import subprocess
url1 = r'https://www.python.org'
url2 = r'https://github.com/'
url3 = r'https://stackoverflow.com/questions/22445217/python-webbrowser-open-to-open-chrome-browser'
url4 = r'https://docs.python.org/3.3/library/webbrowser.html'
chrome_path = r'C:\Program Files\Google\Chrome\Application\chrome.exe'
my_chrome_process = subprocess.Popen(chrome_path, shell=False)
print(f'Process ID: {my_chrome_process.pid}')  # Prints the PID of the new Chrome process, useful for tracking or killing it later.
import webbrowser
webbrowser.register('chrome', None, webbrowser.BackgroundBrowser(chrome_path))
webbrowser.get('chrome').open_new_tab(url1)
webbrowser.get('chrome').open_new_tab(url2)
webbrowser.get('chrome').open_new_tab(url3)
webbrowser.get('chrome').open_new_tab(url4)
my_chrome_process.terminate()
If for any reason my_chrome_process.terminate() does not work, use the following os.system() code to kill the browser started with subprocess.Popen().
See popen.kill not closing browser window for more information.
import os
os.system("Taskkill /PID %d /F" % my_chrome_process.pid)
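For what it's worth, subprocess itself also offers a portable terminate-then-kill fallback that avoids shelling out to Taskkill. A sketch of that pattern, using a sleeping Python child as a stand-in for Chrome so it runs anywhere:

```python
import subprocess
import sys

# A sleeping Python child stands in for the Chrome process here.
child = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(60)'])

child.terminate()              # ask the process to exit
try:
    child.wait(timeout=5)      # give it a few seconds to comply
except subprocess.TimeoutExpired:
    child.kill()               # force-kill, the equivalent of Taskkill /F
    child.wait()

print('exit code:', child.returncode)
```

terminate() and kill() are the same signal on Windows, but the wait-with-timeout pattern is useful on any platform.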
My app becomes unresponsive after creating a batch file and calling mstsc to execute a remote desktop connection. I would have thought that this is an independent process and does not rely in any way on my Python script.
import os
def rdp_session(server, user, temporary_pass):
    """Create a .bat file that initiates RDP with the given credentials."""
    rdp = open("rdp_test.bat", "w")
    rdp.write("cmdkey /generic:TERMSRV/"+server+" /user:"+user+" /pass:"+temporary_pass+"\n")
    rdp.write("mstsc /v:"+server+" /admin")
    rdp.close()
    os.system("rdp_test.bat")
    # os.remove("rdp_test.bat")  # optional, to delete the file with creds after executing
I also tried using:
subprocess.call("rdp_test.bat")
subprocess.Popen(["rdp_test.bat"])  # doesn't initiate my RDP
I get the same result.
Why does this happen, and what can I do so my app stays responsive while my RDP runs?
To add a bit of context, I have this function within a Flask app, which I use to remote-connect to different machines. When one RDP session is active, the web app does not respond to any commands, and when I terminate my RDP session, everything I clicked on suddenly executes.
In order for your session to continue you need to spawn another process, independent of the one that will terminate immediately after executing your script.
After reading a bit on subprocesses, I found that none of these options were immediately effective, since I needed not only to run a subprocess with Popen but also to use pathname expansion,
from which I ended up doing:
subprocess.Popen([os.path.expanduser("My_File.bat")])
expanduser will expand a pathname that uses ~ to represent the current
user's home directory. This works on any platform where users have a
home directory, like Windows, UNIX, and Mac OS X; it has no effect on
Mac OS.
Otherwise my app would run all subsequent commands only after closing my RDP session. This allows me to run multiple subprocesses independently from my web app and keeps it responsive at the same time.
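The responsiveness difference is easy to demonstrate with a stand-in child process (a short-lived Python interpreter here, since the .bat file is Windows-specific): Popen returns immediately, while subprocess.call and os.system block until the child exits.

```python
import subprocess
import sys
import time

# A child that runs for two seconds stands in for rdp_test.bat.
start = time.time()
child = subprocess.Popen([sys.executable, '-c', 'import time; time.sleep(2)'])
elapsed = time.time() - start

# Popen returned right away instead of blocking for the full two
# seconds, which is what keeps a Flask view responsive.
print('Popen returned after %.3f seconds' % elapsed)

child.terminate()   # clean up the demo child
child.wait()
```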
I know there are similar questions posted already, but none of the methods I have seen seems to work. I want to launch the application xfoil, on Mac, with python subprocess, and send xfoil a bunch of commands with a script (xfoil is an application that runs in a terminal window and you interact with it through text commands). I am able to launch xfoil with the script, but I can't seem to find out how to send commands to it. This is the code I am currently trying:
import subprocess as sp
xfoil = sp.Popen(['open', '-a', '/Applications/Xfoil.app/Contents/MacOS/Xfoil'], stdin=sp.PIPE, stdout=sp.PIPE)
stdout_data = xfoil.communicate(input='NACA 0012')
I have also tried
xfoil.stdin.write('NACA 0012\n')
in order to send commands to xfoil.
As the man page says,
The open command opens a file (or a directory or URL), just as if you had double-clicked the file's icon.
Ultimately, the application gets started by LaunchServices, but that's not important; what's important is that it's not a child of your shell or of your Python script.
Also, the whole point of open is to open the app itself, so you don't have to dig into it and find the Unix executable file. If you already have that, and want to run it as a Unix executable… just run it:
xfoil = sp.Popen(['/Applications/Xfoil.app/Contents/MacOS/Xfoil'], stdin=sp.PIPE, stdout=sp.PIPE)
As it turns out, in this case, MacOS/Xfoil isn't even the right program; it's apparently some kind of wrapper around Resources/xfoil, which is the actual equivalent to what you get as /usr/local/bin/xfoil on Linux. So you want to do this:
xfoil = sp.Popen(['/Applications/Xfoil.app/Contents/Resources/xfoil'], stdin=sp.PIPE, stdout=sp.PIPE)
(Also, technically, your command line shouldn't even work at all; the -a specifies an application, not a Unix executable, and you're supposed to pass at least one file to open. But because LaunchServices can launch Unix executables as if they were applications, and open doesn't check that the arguments are valid, open -a /Applications/Xfoil.app/Contents/MacOS/Xfoil ends up doing effectively the same thing as open /Applications/Xfoil.app/Contents/MacOS/Xfoil.)
For the benefit of future readers, I'll include this information from the comments:
If you just write a line to stdin and then return from the function/fall off the end of the main script/etc., the Popen object will get garbage collected, closing both of its pipes. If xfoil hasn't finished running yet, it will get an error the next time it tries to write any output, and apparently it handles this by printing Fortran runtime error: end of file (to stderr?) and bailing. You need to call xfoil.wait() (or something else that implicitly waits) to prevent this from happening.
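To illustrate the whole pattern in a runnable form (a stand-in child that echoes its stdin is used here, since xfoil itself may not be installed): communicate() writes the commands, closes stdin, reads all the output, and implicitly waits for the child, so the pipes stay open until it is done.

```python
import subprocess as sp
import sys

# Stand-in for xfoil: a child that echoes back everything on its stdin.
child = sp.Popen(
    [sys.executable, '-c', 'import sys; sys.stdout.write(sys.stdin.read())'],
    stdin=sp.PIPE, stdout=sp.PIPE)

# communicate() sends the commands, closes stdin, collects all output,
# and waits for the child to exit before returning.
out, _ = child.communicate(input=b'NACA 0012\n')
print(out)
```

Note that communicate() takes bytes on Python 3 unless you pass text=True to Popen.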
I need to be able to take a screenshot (of a vnc session, if putting this in the title and tags wasn't clear enough) within a python script under OSX. The remote system is already running a vnc server which I am using for other purposes, and will eventually cover the full range of common desktop operating systems, so I would prefer to keep using vnc as opposed to some other solution.
I do not have a vnc window open on my test server, as it runs headless. I have tried using vncdotool, but I'd prefer not to have to shell out, and trying to mimic the control flow causes problems because Twisted does not allow you to restart the reactor, but if you leave it running it blocks the main thread, and there seem to be problems trying to run the reactor in a separate Thread or Process...
Does anyone have any ideas?
Building upon what tangentStorm suggested: use Selenium to take the screenshot. Try doing this. Open up src/Selenium2Library/keywords/_screenshot.py and look at lines 24-30.
        background leaking when the page layout is somehow broken.
        """
        path, link = self._get_screenshot_paths(filename)
        self._current_browser().save_screenshot(path)
        # Image is shown on its own row and thus prev row is closed on purpose
        self._html('</td></tr><tr><td colspan="3"><a href="%s">'
Delete the line self._current_browser().save_screenshot(path) and add directly in its place
        if hasattr(self._current_browser(), 'get_screenshot_as_file'):
            self._current_browser().get_screenshot_as_file(path)
        else:
            self._current_browser().save_screenshot(path)
So in all it should look like:
        background leaking when the page layout is somehow broken.
        """
        path, link = self._get_screenshot_paths(filename)
        if hasattr(self._current_browser(), 'get_screenshot_as_file'):
            self._current_browser().get_screenshot_as_file(path)
        else:
            self._current_browser().save_screenshot(path)
        # Image is shown on its own row and thus prev row is closed on purpose
        self._html('</td></tr><tr><td colspan="3"><a href="%s">'
Then try using selenium to take the screenshot.
Reference: Fix
After reading your comments, it seems what you actually want to do is take screenshots of remote web browsers running your flash game.
... And you're using selenium to test those remote web browsers.
... Why don't you just have selenium take the screenshots for you?
http://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/TakesScreenshot.html
I don't know of any library that does this in python for OSX.
However, there are at least three other ways to get the screenshot:
Use the java.awt.Robot class from jython. (Except twisted probably won't run on jython.)
Port Apple's ScreenSnapshot example to Cython and compile it into a python module. (Of course you can do the same thing in C, but Cython makes it much more fun.)
If you can move your server to win32, or just run win32 on your mac via parallels, then you can use the python imaging library's ImageGrab module.
However, I think shelling out to the OS is still the easiest answer. Instead of trying to get it all to run in a single process, just have two processes running: your main twisted process, and some other server that uses threads or whatever.
Then just pass messages back and forth when you want to take a screenshot. You can do this with a simple socket connection (just write another handler in your twisted server, and have the screenshot server connect as a client)...
If it were me, I'd probably use an AMQP server like RabbitMQ to handle the message-passing, but that may be overkill for what you're doing.
Depending on your code, you might be able to use deferToThread to run the call to screencapture and return the filepath or a pil.Image instance (or whatever you need).
Using the example at http://twistedmatrix.com/documents/current/core/howto/gendefer.html#auto5 it might look something like...
from subprocess import call
import tempfile
from twisted.internet import reactor, threads
import Image  ## pip install pil

## Blocking code that takes the screenshot and saves it to a file
def take_screenshot():
    tmp_file_path = tempfile.mktemp(suffix='.png')
    # os.system('screencapture %s' % tmp_file_path)
    retcode = call(['screencapture', tmp_file_path])
    if retcode == 0:
        img = Image.open(tmp_file_path)
        return img
    else:
        return None

## Callback fired by the deferToThread
def do_something_with_screenshot(img):
    print img.filename, img.format, img.size, img.mode
    reactor.stop()  ## just here for this example

def run():
    # get our Deferred which will be called with the screenshot result
    d = threads.deferToThread(take_screenshot)
    # add our callback to print it out
    d.addCallback(do_something_with_screenshot)

if __name__ == '__main__':
    run()
    reactor.run()
Perhaps you can convince robotframework or Selenium to send a CaptureScreen SenseTalk command to Eggplant Drive.
The Taking a Screenshot post in the TestPlant forums mentions this command.
I'd like to prevent multiple instances of the same long-running python command-line script from running at the same time, and I'd like the new instance to be able to send data to the original instance before the new instance commits suicide. How can I do this in a cross-platform way?
Specifically, I'd like to enable the following behavior:
"foo.py" is launched from the command line, and it will stay running for a long time-- days or weeks until the machine is rebooted or the parent process kills it.
every few minutes the same script is launched again, but with different command-line parameters
when launched, the script should see if any other instances are running.
if other instances are running, then instance #2 should send its command-line parameters to instance #1, and then instance #2 should exit.
instance #1, if it receives command-line parameters from another script, should spin up a new thread and (using the command-line parameters sent in the step above) start performing the work that instance #2 was going to perform.
So I'm looking for two things: how can a python program know another instance of itself is running, and then how can one python command-line program communicate with another?
Making this more complicated, the same script needs to run on both Windows and Linux, so ideally the solution would use only the Python standard library and not any OS-specific calls. Although if I need to have a Windows codepath and an *nix codepath (and a big if statement in my code to choose one or the other), that's OK if a "same code" solution isn't possible.
I realize I could probably work out a file-based approach (e.g. instance #1 watches a directory for changes and each instance drops a file into that directory when it wants to do work) but I'm a little concerned about cleaning up those files after a non-graceful machine shutdown. I'd ideally be able to use an in-memory solution. But again I'm flexible, if a persistent-file-based approach is the only way to do it, I'm open to that option.
More details: I'm trying to do this because our servers are using a monitoring tool which supports running python scripts to collect monitoring data (e.g. results of a database query or web service call) which the monitoring tool then indexes for later use. Some of these scripts are very expensive to start up but cheap to run after startup (e.g. making a DB connection vs. running a query). So we've chosen to keep them running in an infinite loop until the parent process kills them.
This works great, but on larger servers 100 instances of the same script may be running, even if they're only gathering data every 20 minutes each. This wreaks havoc with RAM, DB connection limits, etc. We want to switch from 100 processes with 1 thread to one process with 100 threads, each executing the work that, previously, one script was doing.
But changing how the scripts are invoked by the monitoring tool is not possible. We need to keep invocation the same (launch a process with different command-line parameters) but change the scripts to recognize that another one is active, and have the "new" script send its work instructions (from the command-line params) over to the "old" script.
BTW, this is not something I want to do on a one-script basis. Instead, I want to package this behavior into a library which many script authors can leverage-- my goal is to enable script authors to write simple, single-threaded scripts which are unaware of multi-instance issues, and to handle the multi-threading and single-instancing under the covers.
The Alex Martelli approach of setting up a communications channel is the appropriate one. I would use a multiprocessing.connection.Listener to create a listener, with the transport of your choice. Documentation at:
http://docs.python.org/library/multiprocessing.html#multiprocessing-listeners-clients
Rather than using AF_INET (sockets) you may elect to use AF_UNIX for Linux and AF_PIPE for Windows. Hopefully a small "if" wouldn't hurt.
Edit: I guess an example wouldn't hurt. It is a basic one, though.
#!/usr/bin/env python
from multiprocessing.connection import Listener, Client
import socket
from array import array
from sys import argv

def myloop(address):
    try:
        listener = Listener(*address)
        conn = listener.accept()
        serve(conn)
    except socket.error, e:
        conn = Client(*address)
        conn.send('this is a client')
        conn.send('close')

def serve(conn):
    while True:
        msg = conn.recv()
        if msg.upper() == 'CLOSE':
            break
        print msg
    conn.close()

if __name__ == '__main__':
    address = ('/tmp/testipc', 'AF_UNIX')
    myloop(address)
This works on OS X, so it needs testing with both Linux and (after substituting the right address) Windows. A lot of caveats exist from a security point of view, the main one being that conn.recv unpickles its data, so you are almost always better off with recv_bytes.
The general approach is to have the script, on startup, set up a communication channel in a way that's guaranteed to be exclusive (other attempts to set up the same channel fail in a predictable way) so that further instances of the script can detect the first one's running and talk to it.
Your requirements for cross-platform functionality strongly point towards using a socket as the communication channel in question: you can designate a "well known port" that's reserved for your script, say 12345, and open a socket on that port listening to localhost only (127.0.0.1). If the attempt to open that socket fails, because the port in question is "taken", then you can connect to that port number instead, and that will let you communicate with the existing script.
If you're not familiar with socket programming, there's a good HOWTO doc here. You can also look at the relevant chapter in Python in a Nutshell (I'm biased about that one, of course;-).
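A minimal sketch of that bind-or-connect idea (the port number is an arbitrary placeholder; real code would also want error handling and a message protocol on top):

```python
import socket

PORT = 47293  # hypothetical "well known" port reserved for this script

def claim_or_connect(port=PORT):
    """Return ('primary', listening socket) if we won the port,
    else ('secondary', client socket connected to the primary)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # Only the first instance can bind; everyone else gets EADDRINUSE.
        sock.bind(('127.0.0.1', port))
        sock.listen(5)
        return 'primary', sock
    except OSError:
        sock.close()
        # Port taken: another instance is running, so talk to it instead.
        return 'secondary', socket.create_connection(('127.0.0.1', port))
```

The first caller becomes the long-running instance; any later caller gets a connected socket it can use to hand over its command-line parameters before exiting.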
Perhaps try using sockets for communication?
Sounds like your best bet is sticking with a pid file, but have it contain not only the process id but also the port number that the prior instance is listening on. So when starting up, check for the pid file and, if present, see if a process with that id is running; if so, send your data to it and quit, otherwise overwrite the pid file with the current process's info.
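A sketch of that pid-file layout, assuming a small JSON file holding both fields (the filename and port here are placeholders; real code would also verify that the recorded pid is still alive and treat an unparseable or stale file as absent):

```python
import json
import os

PIDFILE = 'myscript.pid'  # placeholder path; pick a fixed, writable location

def read_prior_instance():
    """Return (pid, port) recorded by a prior instance, or None."""
    try:
        with open(PIDFILE) as f:
            info = json.load(f)
        return info['pid'], info['port']
    except (OSError, ValueError, KeyError):
        return None  # no pid file, or it is unreadable/corrupt

def write_instance(port):
    """Record this process's pid and listening port for later instances."""
    with open(PIDFILE, 'w') as f:
        json.dump({'pid': os.getpid(), 'port': port}, f)

write_instance(12345)
print(read_prior_instance())
```

A new instance would call read_prior_instance() first and, if it gets a live pid back, connect to the recorded port instead of claiming the file.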