Is there a way to run a Python script that stays resident and processes inputs on demand, without user interaction? That way its methods could be called without re-importing and re-initializing the script each time.
What I have:
import very_heavy_package

very_heavy_package.initialize()

if very_heavy_package.check(input_file):
    do_something()
else:
    do_something_else()
I want something like:
import very_heavy_package

very_heavy_package.initialize()

#entry_point()
def check_something(input_file):
    if very_heavy_package.check(input_file):
        do_something()
    else:
        do_something_else()
The import and initialize() lines take a very long time, but check_something() is pretty much instantaneous. I want to be able to run check_something() multiple times on demand, without executing the full script all over again.
I know this could be achieved with a server built in flask, but it seems a little overkill. Is there a more "local" way of doing this?
This example in particular is about running some Google Vision processing on an image from a surveillance camera on a Raspberry Pi Zero. Initializing the script takes a while (~10 s), but making the API request is very fast (<100 ms). I'm looking to achieve fast response times.
I don't think the webserver is overkill. By using an HTTP server with a REST api, you are using standards that most people will find easy to understand, use and extend. As an additional advantage, should you ever want to automate the usage of your tool, most automation tools already know how to speak REST and JSON.
Therefore, I would suggest you to follow your initial idea and use http.server or a library such as flask to create a small, no-frills web server with a REST api.
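For instance, a no-frills version using only the stdlib's http.server could look like the sketch below. The check_something() body and the /check route are placeholders (the real version would call very_heavy_package.check() after the one-time, slow initialize() has run at startup); the point is that each request only pays for the fast check:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical stand-in for the expensive check; in the real script this
# would call very_heavy_package.check() after initialize() has already run.
def check_something(input_file):
    return input_file.endswith(".jpg")

class CheckHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /check?file=cam.jpg  ->  {"ok": true}
        query = parse_qs(urlparse(self.path).query)
        result = check_something(query.get("file", [""])[0])
        body = json.dumps({"ok": result}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the console quiet
        pass

server = HTTPServer(("127.0.0.1", 0), CheckHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
with urllib.request.urlopen("http://127.0.0.1:%d/check?file=cam.jpg" % port) as r:
    response = json.loads(r.read())

server.shutdown()
```

In a real deployment the server would listen on a fixed port and run until killed, so the slow startup cost is paid exactly once.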
Try python -im your_module
The -i flag is for interactive mode and the -m flag runs a module. Leave off the '.py'.
I have managed to solve my problem using signals. As I didn't need to pass any information, only trigger a function, this covers my needs. Using Python's signal library and SIGUSR1:
import signal
import time

import very_heavy_package

very_heavy_package.initialize()

# Signal handlers are called with (signum, frame); both can be ignored here
def check_something(signum, frame):
    input_file = get_file()
    if very_heavy_package.check(input_file):
        do_something()
    else:
        do_something_else()

signal.signal(signal.SIGUSR1, check_something)

while True:
    # Waits for SIGUSR1
    time.sleep(600)
now I can start the daemon from bash with
nohup python myscript.py &
and make the wake up call with kill
pkill -SIGUSR1 -f myscript.py
Disclaimer:
The pkill command is somewhat dangerous and can have undesirable effects (e.g. it could kill a text editor that happens to have myscript.py open). I should look into fancier ways of targeting the process.
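One such fancier way (a POSIX-only sketch, with an assumed PID-file path) is to have the daemon record its own PID at startup and then signal exactly that PID, instead of pattern-matching command lines with pkill:

```python
import os
import signal

# Assumed location for the PID file; any writable path works.
PID_FILE = "/tmp/myscript.pid"

def write_pid_file(path=PID_FILE):
    # Called once at daemon startup, right after initialize().
    with open(path, "w") as f:
        f.write(str(os.getpid()))

def signal_daemon(path=PID_FILE, sig=signal.SIGUSR1):
    # Read the recorded PID and signal exactly that process, so
    # unrelated processes mentioning myscript.py are never touched.
    with open(path) as f:
        pid = int(f.read().strip())
    os.kill(pid, sig)
```

The wake-up call then becomes `kill -SIGUSR1 "$(cat /tmp/myscript.pid)"` from the shell, with no risk of hitting the wrong process.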
Related
We can run any Python script by doing:
python main.py
Is it possible to do the same if the script is a FastApi application?
Something like:
python main.py GET /login.html
To call a GET method that returns a login.html page.
If not, how I could start a FastApi application without using Uvicorn or another webserver?
I would like to run the script only when necessary.
Thanks
FastApi is designed to let you BUILD APIs that can be queried with an HTTP client, not to query those APIs directly yourself; however, technically I believe you could.
When you start the script you could start the FastApi app in another process running in the background, then send a request to it.
import subprocess
import threading
import requests
url = "http://localhost:8000/some_path"

# launch uvicorn in a background thread (check_output also swallows its stdout)
thread = threading.Thread(target=lambda: subprocess.check_output(["uvicorn", "main:app"]))
thread.start()
response = requests.get(url)
# do something with the response...
thread.join()
Obviously this snippet has MUCH room for improvement; for example, the thread will never actually end unless something bad happens - this is just a minimal example.
This method has the clear drawback of starting the API each time you want to run the command. A better approach would be to emulate applications such as Docker: start a local server daemon once, then ping it from the command line app.
This would mean that you would have the API running for much longer in the background, but typically these APIs are fairly light and you shouldn't notice any hit to your computer's performance. This also lets multiple users run the command at the same time.
With the first method you may run into situations where user A sends a GET request, starting up the server and taking hold of the configured host/port combo. When user B tries to run the same command just after, they will find themselves unable to start the server and perform the request.
This will also allow you to eventually move the API to an external server with minimal effort down the line. All you would need to do is change the base url of the requests.
TLDR; Run the FastApi application as a daemon, and query the local server from the command line program instead.
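A minimal sketch of that daemon pattern (the module path main:app, port 8000, and the timings are all assumptions): probe the local server first, launch it in the background only if the probe fails, then perform the actual request:

```python
import subprocess
import time
import urllib.error
import urllib.request

BASE_URL = "http://127.0.0.1:8000"  # assumed host/port combo

def server_alive(url=BASE_URL):
    # A failed connection means the daemon is not (yet) running.
    try:
        urllib.request.urlopen(url, timeout=0.5)
        return True
    except (urllib.error.URLError, OSError):
        return False

def wait_until(probe, timeout=10.0, interval=0.2):
    """Poll `probe` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False

def ensure_daemon():
    # Start the server only if no instance is already listening.
    if not server_alive():
        subprocess.Popen(["uvicorn", "main:app"],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
        if not wait_until(server_alive):
            raise RuntimeError("server did not come up in time")
```

After ensure_daemon() returns, the CLI can issue its request against BASE_URL; subsequent invocations find the server already alive and skip the startup cost entirely.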
I have a Python (3) script running on Linux, referred to as the main script, which has to call a routine from a proprietary DLL. So far, I have solved this with Wine using the following construct:
# Main script running on Linux
import subprocess
# [...]
subprocess.Popen('echo "python dll_call.py %s" | wine cmd &' % options, shell = True)
# [...]
The script dll_call.py is executed by a Windows Python (3) interpreter installed under Wine. It dumps the return values into a file which is then picked up by the waiting main script. It's not exactly reliable and agonizingly slow if I have to do this a few times in a row.
I'd like to start the script dll_call.py once, offering some type of a simple server, which should expose the required routine in some sort of way. At the end of the day, I'd like to have a main script looking somewhat like this:
# Main script running on Linux
import subprocess
# [...]
subprocess.Popen('echo "python dll_call_server.py" | wine cmd &', shell = True)
# [...]
return_values = call_into_dll(options)
How can this be implemented best (if speed is required and security not a concern)?
Thank you #jsbueno and #AustinHastings for your answers and suggestions.
For those having similar problems: Inspired by the mentioned answers, I wrote a small Python module for calling into Windows DLLs from Python on Linux. It is based on IPC between a regular Linux/Unix Python process and a Wine-based Python process. Because I have needed it in too many different use-cases / scenarios, I designed it as a "generic" ctypes module drop-in replacement, which does most of the required plumbing automatically in the background.
Example: Assume you're in Python on Linux, you have Wine installed, and you want to call into msvcrt.dll (the Microsoft C runtime library). You can do the following:
from zugbruecke import ctypes
dll_pow = ctypes.cdll.msvcrt.pow
dll_pow.argtypes = (ctypes.c_double, ctypes.c_double)
dll_pow.restype = ctypes.c_double
print('You should expect "1024.0" to show up here: "%.1f".' % dll_pow(2.0, 10.0))
Source code (LGPL), PyPI package & documentation. It's still a bit rough around the edges (i.e. alpha and insecure), but it does handle most types of parameters (including pointers).
You can use the XMLRPC client and servers built-in Python's stdlib to do what you want. Just make your Wine-Python expose the desired functions as XMLRPC methods, and make an inter-process call from any other Python program to that.
It also works for calling functions running in Jython or IronPython from CPython, and also across Python 2 and Python 3 - the examples included in the module documentation should be enough. Just check the docs: https://docs.python.org/2/library/xmlrpclib.html
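A self-contained sketch of the idea, using the Python 3 module names (here both halves run in one process for brevity; in the real setup the server half lives in the Wine-Python process and `add` would wrap a DLL routine instead):

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

# Server half: would run inside Wine-Python, exposing DLL wrappers.
# Port 0 asks the OS for any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client half: runs in the regular Linux Python process.
port = server.server_address[1]
proxy = xmlrpc.client.ServerProxy("http://127.0.0.1:%d" % port)
result = proxy.add(2, 3)

server.shutdown()
```

Every registered function becomes callable on the proxy by name, with arguments and return values marshalled automatically.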
If you need the calls to be asynchronous on the client side, or the server site to respond to more than one process, you can find other frameworks over which to build the calls - Celery should also work across several different Pythons while preserving call compatibility, and it is certainly enough performance-wise.
You want to communicate between two processes, where one of them is obscured by being under the control of the WINE engine.
My first thought here is to use a very decoupled form of IPC. There are just too many things that can go wrong with tight coupling and something like WINE involved.
And finally, how can this be made easy for someone new to this kind of stuff?
The obvious answer is to set up a web server. There are plenty of tutorials using plenty of packages in Python to respond to HTTP requests, and to generate HTTP requests.
So, set up a little HTTP responder in your WINE process, listening on some non-standard port (not 8080 or 80), and translate requests into calls to your DLL. If you're clever, you'll interpret web requests (http://localhost:10800/functionname?arg1=foo&arg2=bar) into possibly different DLL calls.
On the other side, create an HTTP client in your non-WINE code and make requests to your server.
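The request-to-DLL translation step could be sketched like this (the dll_functions table is hypothetical; in the WINE process its values would be ctypes function pointers rather than plain Python callables):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical dispatch table: URL path -> callable. In the WINE process
# these entries would be ctypes-wrapped DLL routines.
dll_functions = {
    "pow": lambda base, exp: float(base) ** float(exp),
}

def dispatch(url):
    # Map /functionname?arg1=foo&arg2=bar to dll_functions["functionname"](...)
    parsed = urlparse(url)
    name = parsed.path.lstrip("/")
    args = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    return dll_functions[name](**args)

result = dispatch("http://localhost:10800/pow?base=2&exp=10")  # 1024.0
```

The HTTP handler then only has to call dispatch() on each incoming request path and serialize the return value into the response body.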
I need to be able to take a screenshot (of a vnc session, if putting this in the title and tags wasn't clear enough) within a python script under OSX. The remote system is already running a vnc server which I am using for other purposes, and will eventually cover the full range of common desktop operating systems, so I would prefer to keep using vnc as opposed to some other solution.
I do not have a vnc window open on my test server, as it runs headless. I have tried using vncdotool, but I'd prefer not to have to shell out, and trying to mimic the control flow causes problems because Twisted does not allow you to restart the reactor, but if you leave it running it blocks the main thread, and there seem to be problems trying to run the reactor in a separate Thread or Process...
Does anyone have any ideas?
Building upon what tangentStorm suggested, use selenium to take the screenshot. Try doing this: open up src/Selenium2Library/keywords/_screenshot.py and look at lines 24-30.
background leaking when the page layout is somehow broken.
"""
path, link = self._get_screenshot_paths(filename)
self._current_browser().save_screenshot(path)
# Image is shown on its own row and thus prev row is closed on purpose
self._html('</td></tr><tr><td colspan="3"><a href="%s">'
Delete the line self._current_browser().save_screenshot(path) and add directly in its place
if hasattr(self._current_browser(), 'get_screenshot_as_file'):
self._current_browser().get_screenshot_as_file(path)
else:
self._current_browser().save_screenshot(path)
So in all it should look like:
background leaking when the page layout is somehow broken.
"""
path, link = self._get_screenshot_paths(filename)
if hasattr(self._current_browser(), 'get_screenshot_as_file'):
self._current_browser().get_screenshot_as_file(path)
else:
self._current_browser().save_screenshot(path)
# Image is shown on its own row and thus prev row is closed on purpose
self._html('</td></tr><tr><td colspan="3"><a href="%s">'
Then try using selenium to take the screenshot.
Reference: Fix
After reading your comments, it seems what you actually want to do is take screenshots of remote web browsers running your flash game.
... And you're using selenium to test those remote web browsers.
... Why don't you just have selenium take the screenshots for you?
http://selenium.googlecode.com/svn/trunk/docs/api/java/org/openqa/selenium/TakesScreenshot.html
I don't know of any library that does this in python for OSX.
However, there are at least three other ways to get the screenshot:
Use the java.awt.Robot class from jython. (Except twisted probably won't run on jython.)
Port Apple's ScreenSnapshot example to Cython and compile it into a python module. (Of course you can do the same thing in C, but Cython makes it much more fun.)
If you can move your server to win32, or just run win32 on your mac via parallels, then you can use the python imaging library's ImageGrab module.
However, I think shelling out to the OS is still the easiest answer. Instead of trying to get it all to run in a single process, just have two processes running: your main twisted process, and some other server that uses threads or whatever.
Then just pass messages back and forth when you want to take a screenshot. You can do this with a simple socket connection (just write another handler in your twisted server, and have the screenshot server connect as a client)...
If it were me, I'd probably use an AMQP server like RabbitMQ to handle the message-passing, but that may be overkill for what you're doing.
Depending on your code, you might be able to use deferToThread to run the call to screencapture and return the filepath or a pil.Image instance (or whatever you need).
Using the example at http://twistedmatrix.com/documents/current/core/howto/gendefer.html#auto5 it might look something like...
from subprocess import call
import tempfile

from twisted.internet import reactor, threads
import Image ## pip install pil

## Blocking code that takes the screenshot and saves it to a file
def take_screenshot():
    tmp_file_path = tempfile.mktemp(suffix='.png')
    # os.system('screencapture %s' % tmp_file_path)
    retcode = call(['screencapture', tmp_file_path])
    if retcode == 0:  # screencapture exits with 0 on success
        img = Image.open(tmp_file_path)
        return img
    else:
        return None

## Callback fired by the deferToThread
def do_something_with_screenshot(img):
    print img.filename, img.format, img.size, img.mode
    reactor.stop() ## just here for this example

def run():
    # get our Deferred which will be fired with the screenshot result
    d = threads.deferToThread(take_screenshot)
    # add our callback to handle the image
    d.addCallback(do_something_with_screenshot)

if __name__ == '__main__':
    run()
    reactor.run()
Perhaps you can convince the robotframework or Selenium to send a CaptureScreen Sensetalk command to Eggplant Drive.
The Taking a Screenshot post in the TestPlant forums mentions this command.
I am using CherryPy to receive requests through REST API. Apart from handling requests the application should also do some resource management every few seconds. What is the easiest way to do this?
1) run a separate thread
2) cherrypy.process.plugins.PerpetualTimer (not sure how to use it, and it looks like it is heavy on resources?)
3) some other way?
The solution with a separate thread is fine by me, but I was wondering if there is a nicer way to do it?
Note that CherryPy is not a requirement - I have decided to use it primarily because the project looks alive and because it supports multiple simultaneous connections (in other words: I am open to alternatives).
PerpetualTimer is just a repeating version of threading._Timer.
What you really want to use is cherrypy.process.plugins.Monitor, which is little more than a way to run a separate thread for you. You should use it because it plugs into cherrypy.engine, which governs start and stop behavior for CherryPy servers. If you run your own thread, you're going to want it to stop when CP shuts down anyway; the Monitor class already knows how to do that. It used PerpetualTimer under the hood until recent versions, where it was replaced by the BackgroundTask class.
from cherrypy.process.plugins import Monitor

my_task_runner = Monitor(cherrypy.engine, my_task, frequency=3)
my_task_runner.subscribe()
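For reference, what Monitor/BackgroundTask boil down to is roughly the following plain-threading sketch: a daemon thread that re-runs a callback at a fixed interval until cancelled (the class name and interval here are illustrative, not CherryPy's actual implementation):

```python
import threading
import time

class RepeatingTask(threading.Thread):
    """Run `func` every `interval` seconds until cancel() is called."""

    def __init__(self, interval, func):
        super().__init__(daemon=True)
        self.interval = interval
        self.func = func
        self._stop_event = threading.Event()

    def run(self):
        # Event.wait() doubles as an interruptible sleep: it returns True
        # (and we exit) as soon as cancel() sets the event.
        while not self._stop_event.wait(self.interval):
            self.func()

    def cancel(self):
        self._stop_event.set()

counter = []
task = RepeatingTask(0.05, lambda: counter.append(1))
task.start()

time.sleep(0.3)  # let the task fire a few times
task.cancel()
task.join()
```

Monitor adds the part that matters in CherryPy: subscribing to cherrypy.engine so the thread is started and cancelled together with the server.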
I'm starting to work with Ubuntu's "quickly" framework, which is python/gtk based. I want to write a gui wrapper for a textmode C state-machine that uses stdin/stdout.
I'm new to gtk. I can see that the python print command will write to the terminal window, so I assume I could redirect that to my C program's stdin. But how can I get my quickly program to monitor stdin (i.e. watch for the C program's stdout responses)? I suppose I need some sort of polling loop, but I don't know if/where that is supported within the "quickly" framework.
Or is redirection not the way to go - should I be looking at something like gobject.spawn_async?
The gtk version of select is glib.io_add_watch. You may want to redirect the stdin/stdout of the process to/from the GUI; you can check an article I wrote a while ago:
http://pygabriel.wordpress.com/2009/07/27/redirecting-the-stdout-on-a-gtk-textview/
I'm not sure about the quickly framework, but in Python you can use the subprocess module which spawns a new child process but allows communication via stdin/stdout.
http://docs.python.org/library/subprocess.html
Take a look at the documentation, but that's pretty useful.
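A minimal sketch of that two-way stdin/stdout communication (the child here is a trivial Python one-liner standing in for the textmode C state machine):

```python
import subprocess
import sys

# Child process: echoes back each line it receives on stdin.
child = subprocess.Popen(
    [sys.executable, "-c",
     "import sys\n"
     "for line in sys.stdin:\n"
     "    sys.stdout.write('echo: ' + line)\n"
     "    sys.stdout.flush()\n"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

# Send one command and read the response line.
child.stdin.write("hello\n")
child.stdin.flush()
reply = child.stdout.readline()

# Closing stdin ends the child's input loop; wait() reaps it.
child.stdin.close()
child.wait()
```

In the GUI wrapper, child.stdin/child.stdout are what you would hook up to the input widget and the io_add_watch callback respectively.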
If you want to do polling you can use a gobject.timeout_add.
You'd create a function something like this:
def mypoller(self):
    # Note: communicate() waits for the process to exit; for a process that
    # is still running, read from myproc.stdout instead.
    data = myproc.communicate()
    if data[0]: # There's data to read
        # do something with data
        return True # keep the timeout_add callback alive
    else:
        # Do something else - delete data, and return False
        # to end calls to this function
        return False