Ubuntu quickly (python/gtk) - how to monitor stdin? - python

I'm starting to work with Ubuntu's "quickly" framework, which is python/gtk based. I want to write a gui wrapper for a textmode C state-machine that uses stdin/stdout.
I'm new to gtk. I can see that the python print command will write to the terminal window, so I assume I could redirect that to my C program's stdin. But how can I get my quickly program to monitor stdin (i.e. watch for the C program's stdout responses)? I suppose I need some sort of polling loop, but I don't know if/where that is supported within the "quickly" framework.
Or is redirection not the way to go - should I be looking at something like gobject.spawn_async?

The GTK equivalent of select is glib.io_add_watch. You may want to redirect the stdin/stdout of the process to/from the GUI; you can check an article I wrote a while ago:
http://pygabriel.wordpress.com/2009/07/27/redirecting-the-stdout-on-a-gtk-textview/
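A minimal sketch of that approach (the ./statemachine path and the line-oriented protocol are assumptions, not part of your program):

import subprocess
import glib   # with newer PyGObject this would be: from gi.repository import GLib

# Spawn the C state machine with pipes on both ends (./statemachine is a placeholder).
proc = subprocess.Popen(["./statemachine"],
                        stdin=subprocess.PIPE,
                        stdout=subprocess.PIPE)

def on_output(fd, condition):
    line = proc.stdout.readline()   # one response line from the C program
    if not line:
        return False                # EOF: remove the watch
    # update the GUI with `line` here
    return True                     # keep watching

# Call on_output whenever the child's stdout becomes readable.
glib.io_add_watch(proc.stdout, glib.IO_IN | glib.IO_HUP, on_output)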

I'm not sure about the quickly framework, but in Python you can use the subprocess module which spawns a new child process but allows communication via stdin/stdout.
http://docs.python.org/library/subprocess.html
Take a look at the documentation; it's pretty useful.
If you want to do polling, you can use gobject.timeout_add.
You'd create a function something like this:
def mypoller(self):
    data = myproc.communicate()
    if data[0]:   # there's data to read
        pass      # do something with data
    else:
        # do something else - delete data, return False
        # to end calls to this function
        pass
and that would let you read data from your process.
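Wiring that up is just a matter of registering the poller with the main loop, for example (the 500 ms interval is an arbitrary choice):

import gobject

# Call self.mypoller every 500 ms; polling stops once it returns False.
gobject.timeout_add(500, self.mypoller)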

Related

Interactive python daemon

Is there a way to run a Python script that can process inputs interactively, without user input? That way methods can be called without needing to import and initialize the script each time.
What I have:
import very_heavy_package

very_heavy_package.initialize()

if very_heavy_package.check(input_file):
    do_something()
else:
    do_something_else()
I want something like:
import very_heavy_package

very_heavy_package.initialize()

#entry_point()
def check_something(input_file):
    if very_heavy_package.check(input_file):
        do_something()
    else:
        do_something_else()
The import and initialize() lines take a very long time, but check_something() is pretty much instantaneous. I want to be able to call check_something() multiple times on demand, without executing the full script all over again.
I know this could be achieved with a server built in Flask, but that seems a little overkill. Is there a more "local" way of doing this?
This example in particular is about running some Google Vision processing on an image from a surveillance camera on a Raspberry Pi Zero. Initializing the script takes a while (~10 s), but making the API request is very fast (<100 ms). I'm looking to achieve a fast response time.
I don't think the webserver is overkill. By using an HTTP server with a REST api, you are using standards that most people will find easy to understand, use and extend. As an additional advantage, should you ever want to automate the usage of your tool, most automation tools already know how to speak REST and JSON.
Therefore, I would suggest you follow your initial idea and use http.server or a library such as flask to create a small, no-frills web server with a REST api.
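A minimal sketch of that idea with Flask (the /check route and the input_file field are assumptions; adapt them to whatever very_heavy_package.check() actually takes):

from flask import Flask, request, jsonify
import very_heavy_package

very_heavy_package.initialize()   # the slow part runs once, at server start
app = Flask(__name__)

@app.route("/check", methods=["POST"])
def check():
    input_file = request.json["input_file"]
    # the question's check_something(), exposed over HTTP
    return jsonify(result=bool(very_heavy_package.check(input_file)))

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)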
Try python -im your_module
The 'i' flag is for interactive and the 'm' flag is for module. And leave off the '.py'.
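For example, assuming your_module defines check_something() at module level and runs the heavy initialization at import time, you get a live prompt you can call into repeatedly:

$ python -im your_module
>>> check_something('image1.jpg')
>>> check_something('image2.jpg')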
I have managed to do what I wanted using signals. As I don't need to pass any information, only trigger a function, this covers my needs. Using Python's signal library and SIGUSR1:
import signal
import time

import very_heavy_package

very_heavy_package.initialize()

def check_something(signum=None, frame=None):   # signal handlers receive (signum, frame)
    input_file = get_file()
    if very_heavy_package.check(input_file):
        do_something()
    else:
        do_something_else()

signal.signal(signal.SIGUSR1, check_something)

while True:
    # Waits for SIGUSR1
    time.sleep(600)
now I can start the daemon from bash with
nohup python myscript.py &
and make the wake-up call with pkill:
pkill -SIGUSR1 -f myscript.py
Disclaimer:
The pkill command is somewhat dangerous and can have undesirable effects (e.g. killing a text editor that has myscript.py open). I should look into fancier ways of targeting processes.

Python - Executing Functions During a raw_input() Prompt [duplicate]

I'm trying to write a simple Python IRC client. So far I can read data, and I can send data back if it is automated. I'm reading the data in a while True loop, which means that I cannot enter text while reading data at the same time. How can I enter text in the console, that only gets sent when I press Enter, while at the same time running an infinite loop?
Basic code structure:
while True:
    read data
    # here is where I want to write data only if it contains '/r' in it
Another way to do it involves threads.
import threading

# define a thread which takes input
class InputThread(threading.Thread):
    def __init__(self):
        super(InputThread, self).__init__()
        self.daemon = True
        self.last_user_input = None

    def run(self):
        while True:
            self.last_user_input = input('input something: ')
            # do something based on the user input here
            # alternatively, let main do something with
            # self.last_user_input

# main
it = InputThread()
it.start()
while True:
    # do something
    # do something with it.last_user_input if you feel like it
    pass
What you need is an event loop of some kind.
In Python you have a few options to do that, pick one you like:
Twisted https://twistedmatrix.com/trac/
Asyncio https://docs.python.org/3/library/asyncio.html
gevent http://www.gevent.org/
and so on; there are many frameworks for this. You could also use any of the GUI frameworks like tkinter or PyQt to get a main event loop.
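For example, a rough asyncio sketch of the same idea (Python 3.7+; the server address and the line handling are placeholders):

import asyncio

async def read_server(reader):
    while True:
        line = await reader.readline()   # data from the IRC server
        if not line:
            break
        print(line.decode().rstrip())

async def read_keyboard(writer):
    loop = asyncio.get_running_loop()
    while True:
        # run_in_executor keeps the blocking input() off the event loop
        line = await loop.run_in_executor(None, input)
        writer.write((line + '\r\n').encode())
        await writer.drain()

async def main():
    reader, writer = await asyncio.open_connection('irc.example.net', 6667)
    await asyncio.gather(read_server(reader), read_keyboard(writer))

asyncio.run(main())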
As the comments above have said, you can use threads and a few queues to handle this, or an event-based loop, or coroutines, or a bunch of other architectures. Depending on your target platforms, one or the other might be best; for example, on Windows the console API is totally different from Unix ptys. Especially if you later need stuff like colour output, you might want to ask more specific questions.
You can use an async library (see schlenk's answer) or use the select module: https://docs.python.org/2/library/select.html
This module provides access to the select() and poll() functions
available in most operating systems, epoll() available on Linux 2.5+
and kqueue() available on most BSD. Note that on Windows, it only
works for sockets; on other operating systems, it also works for other
file types (in particular, on Unix, it works on pipes). It cannot be
used on regular files to determine whether a file has grown since it
was last read.
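A minimal sketch of the select approach for the IRC case (Unix only, since select() on sys.stdin does not work on Windows; the server address is a placeholder):

import select
import socket
import sys

irc_sock = socket.create_connection(("irc.example.net", 6667))

while True:
    readable, _, _ = select.select([irc_sock, sys.stdin], [], [])
    for source in readable:
        if source is irc_sock:
            data = irc_sock.recv(4096)       # the server sent something
            # handle incoming IRC data here
        else:
            line = sys.stdin.readline()      # the user pressed Enter
            irc_sock.sendall(line.encode())  # send the typed text to the server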

Python daemon shows no output for an error

I read How do you create a daemon in Python? and also this topic, and tried to write a very simple daemon:
import daemon
import time

with daemon.DaemonContext():
    while True:
        with open('a.txt', 'a') as f:
            f.write('Hi')
        time.sleep(2)
Running python script.py works and returns immediately to the terminal (that's the expected behaviour). But a.txt is never written and I don't get any error message. What's wrong with this simple daemon?
daemon.DaemonContext() has a working_directory option whose default value is /, i.e. your program probably doesn't have permission to create a new file there.
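A minimal sketch of the fix, assuming the directory the script is started from is writable:

import os
import daemon
import time

# Keep the daemon's working directory where the script was started,
# so the relative path 'a.txt' points somewhere writable.
with daemon.DaemonContext(working_directory=os.getcwd()):
    while True:
        with open('a.txt', 'a') as f:
            f.write('Hi')
        time.sleep(2)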
The problem described here is solved by J.J. Hakala's answer.
Two additional (important) things :
Sander's code (mentioned here) is more reliable than python-daemon. Just one example: try starting the same daemon twice with python-daemon: a big ugly error. With Sander's code: a nice notice, "Daemon already running."
For those who want to use python-daemon anyway: DaemonContext() only makes a daemon. DaemonRunner() makes a daemon plus a control tool, allowing you to do python script.py start, stop, etc.
One thing that's wrong with it is that it has no way to tell you what's wrong with it :-)
A daemon process is, by definition, detached from the parent process and from any controlling terminal. So if it's got something to say – such as error messages – it will need to arrange that before becoming a daemon.
From the python-daemon FAQ document:
Why does the output stop after opening the daemon context?
The specified behaviour in PEP 3143 includes the requirement to
detach the process from the controlling terminal (to allow the process
to continue to run as a daemon), and to close all file descriptors not
known to be safe once detached (to ensure any files that continue to
be used are under the control of the daemon process).
If you want the process to generate output via the system streams
‘sys.stdout’ and ‘sys.stderr’, set the ‘DaemonContext’'s ‘stdout’
and/or ‘stderr’ options to a file-like object (e.g. the ‘stream’
attribute of a ‘logging.Handler’ instance). If these objects have file
descriptors, they will be preserved when the daemon context opens.
Set up a working channel of communication, such as a log file. Ensure the files you open aren't closed along with everything else, using the files_preserve option. Then log any errors to that channel.
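A minimal sketch of that advice (the log path is an assumption): keep the log handler's stream open across daemonisation with files_preserve, then report errors through it.

import logging
import daemon

handler = logging.FileHandler('/tmp/mydaemon.log')
logger = logging.getLogger('mydaemon')
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# files_preserve stops DaemonContext from closing the log file's descriptor.
with daemon.DaemonContext(files_preserve=[handler.stream]):
    try:
        logger.info('daemon started')
        # ... daemon work ...
    except Exception:
        logger.exception('daemon crashed')
        raise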

Query Python3 script to get stats about the script

I have a script that continually runs and accepts data (For those that are familiar and if it helps, it is connected to EMDR - https://eve-market-data-relay.readthedocs.org).
Inside the script I have debugging built in so that I can see how much data is currently in the queue for the threads to process; however, this just prints to the console. What I would like is to be able to either run the same script with an additional option, or a totally different script, that would return the current queue count without having to enable debug.
Is there a way to do this? Could someone please point me in the direction of the documentation/libraries that I need to research?
There are many ways to solve this; two that come to mind:
You can write the queue count to a k/v store (like memcache or redis) and then have another script read it and do whatever other actions are required.
You can create a specific logger for your informational output (like the queue length) and set it to log somewhere else other than the console. For example, you could use it to send you an email or log to an external service, etc. See the logging cookbook for examples.
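A minimal sketch of the second option (the logger name and file path are arbitrary):

import logging

# A dedicated "stats" logger, separate from the console debug output.
stats_logger = logging.getLogger('emdr.stats')
stats_logger.setLevel(logging.INFO)
stats_logger.addHandler(logging.FileHandler('/tmp/emdr_stats.log'))

def report_queue_length(queue):
    # Call this wherever the debug print currently lives.
    stats_logger.info('queue length: %d', queue.qsize())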

Python logging from multiple processes

I have a possibly long-running program that currently has 4 processes, but could be configured to have more. I have researched logging from multiple processes using Python's logging module and am using the SocketHandler approach discussed here. I never had any problems with a single logger (no sockets), but from what I read I was told it would fail eventually and unexpectedly. As far as I understand, it's unknown what will happen when you try to write to the same file at the same time. My code essentially does the following:
import logging

log = logging.getLogger(__name__)

def monitor(...):
    # Spawn child processes with os.fork()
    # os.wait() and act accordingly

def main():
    log_server_pid = os.fork()
    if log_server_pid == 0:
        # Create a LogRecordSocketServer (daemon)
        ...
        sys.exit(0)
    # Add SocketHandler to root logger
    ...
    monitor(<configuration stuff>)

if __name__ == "__main__":
    main()
So my questions are: Do I need to create a new log object after each os.fork()? What happens to the existing global log object?
With doing things the way I am, am I even getting around the problem that I'm trying to avoid (multiple open files/sockets)? Will this fail and why will it fail (I'd like to be able to tell if future similar implementations will fail)?
Also, in what way does the "normal" (one log= expression) method of logging to one file from multiple processes fail? Does it raise an IOError/OSError? Or does it just not completely write data to the file?
If someone could provide an answer or links to help me out, that would be great. Thanks.
FYI:
I am testing on Mac OS X Lion and the code will probably end up running on a CentOS 6 VM on a Windows machine (if that matters). Whatever solution I use does not need to work on Windows, but should work on a Unix based system.
UPDATE: This question has started to move away from logging-specific behavior and is more in the realm of what Linux does with file descriptors during forks. I pulled out one of my college textbooks and it seems that if you open a file in append mode from two processes (not before a fork), they will both be able to write to the file properly, as long as your write doesn't exceed the actual kernel buffer (although line buffering might need to be used; still not sure on that one). This creates two file table entries and one v-node table entry. Opening a file and then forking isn't supposed to work, but it seems to, as long as you don't exceed the kernel buffer as before (I've done it in a previous program).
So I guess, if you want platform-independent multi-processing logging, you use sockets and create a new SocketHandler after each fork to be safe, as Vinay suggested below (that should work everywhere). For me, since I have strong control over what OS my software runs on, I think I'm going to go with one global log object with a FileHandler (opens in append mode by default and is line buffered on most OSes). The documentation for open says "A negative buffering means to use the system default, which is usually line buffered for tty devices and fully buffered for other files. If omitted, the system default is used." Or I could just create my own logging stream to be sure of line buffering. And just to be clear, I'm OK with:
# Process A
a_file.write("A\n")
a_file.write("A\n")

# Process B
a_file.write("B\n")
producing...
A\n
B\n
A\n
as long as it doesn't produce...
AB\n
\n
A\n
Vinay (or anyone else), how wrong am I? Let me know. Thanks for any more clarity/sureness you can provide.
Do I need to create a new log object after each os.fork()? What happens to the existing global log object?
AFAIK the global log object remains pointing to the same logger in parent and child processes. So you shouldn't need to create a new one. However, I think you should create and add the SocketHandler after the fork() in monitor(), so that the socket server has four distinct connections, one to each child process. If you don't do this, then the child processes forked off in monitor() will inherit the SocketHandler and its socket handle from their parent, but I don't know for sure that it will misbehave. The behaviour is likely to be OS-dependent and you may be lucky on OSX.
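A minimal sketch of that suggestion (the host/port and the child's work are placeholders):

import logging
import logging.handlers
import os

HOST, PORT = 'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT

def monitor(num_children):
    for _ in range(num_children):
        pid = os.fork()
        if pid == 0:
            # Child: create its own SocketHandler after the fork, so each
            # process gets a distinct connection to the log server.
            handler = logging.handlers.SocketHandler(HOST, PORT)
            logging.getLogger().addHandler(handler)
            logging.getLogger(__name__).info('child %d started', os.getpid())
            # ... do the child's work, then exit ...
            os._exit(0)
    # Parent: wait for the children.
    while True:
        try:
            os.wait()
        except OSError:
            break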
With doing things the way I am, am I even getting around the problem that I'm trying to avoid (multiple open files/sockets)? Will this fail and why will it fail (I'd like to be able to tell if future similar implementations will fail)?
I wouldn't expect failure if you create the socket connection to the socket server after the last fork() as I suggested above, but I am not sure the behaviour is well-defined in any other case. You refer to multiple open files but I see no reference to opening files in your pseudocode snippet, just opening sockets.
Also, in what way does the "normal" (one log= expression) method of logging to one file from multiple processes fail? Does it raise an IOError/OSError? Or does it just not completely write data to the file?
I think the behaviour is not well-defined, but one would expect failure modes to present as interspersed log messages from different processes in the file, e.g.
Process A writes first part of its message
Process B writes its message
Process A writes second part of its message
Update: If you use a FileHandler in the way you described in your comment, things will not be so good, due to the scenario I've described above: processes A and B both begin pointing at the end of the file (because of the append mode), but thereafter things can get out of sync because (on a multiprocessor, but potentially even on a uniprocessor) one process can preempt another and write to the shared file handle before the other has finished doing so.
