Send a Variable from one Python Script to another

I have a Python script that is constantly listening for a JSON message from another server. When it receives a message, one of the values is placed into a variable. Once this message is received, I need another Python script to be launched and the variable passed over to it to be processed. Can anyone help me with how to do this?
I'm very new to python and would really appreciate any help you can give me, as I'm struggling to understand some of the other answers to similar questions.
Thank you!
Below is the code constantly running to receive messages:
class MyListener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print 'received an error %s' % message

    def on_message(self, headers, message):
        print 'received a message %s' % message
        obj = json.loads(message)
        detectedimsi = obj["imsi"]
        print detectedimsi

To run any other program, you just use the functions in the subprocess module. But there are three complexities.
First, you want to run another Python script, not a binary. Presumably you want to run it with the same version of Python that you're running in, even if there are multiple Pythons installed. So, you want to use sys.executable as the program. For example:
subprocess.check_call([sys.executable, 'otherscript.py'])
Second, you want to pass some data. For small amounts of printable string data that will fit on the command line, just pass them as an argument:
subprocess.check_call([sys.executable, 'otherscript.py', detectedimsi])
Then, in the other script, you just look at sys.argv[1] to receive the data.
If you're trying to pass binary data, or a large amount of text, you will want to pass it over a pipe. The simplest way to do that is to pass it via stdin. But I don't think that's relevant here.
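If it ever becomes relevant, here is a minimal sketch of the stdin approach (the script name and payload value are assumptions, not from the question):

import sys
import json
import subprocess

detectedimsi = '123456789012345'  # placeholder for the value parsed from the message

# parent: hand the data to the child over its stdin instead of the command line
proc = subprocess.Popen([sys.executable, 'otherscript.py'], stdin=subprocess.PIPE)
proc.communicate(json.dumps({'imsi': detectedimsi}))

# the child (otherscript.py) would read it back with:
#   data = json.loads(sys.stdin.read())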
Third, you're trying to do this inside an asynchronous network client or server. So, you can't just run another script and block until it's finished, unless you're absolutely sure that it's going to finish very fast. If your server framework has a way to integrate subprocesses into it (Twisted is the only one I've ever used that does, but there may be others), great. You can fake it if you can toss the process's stdout pipe into your framework's event loop (basically, if you don't care about Windows, and your framework has a way to add a file to the reactor). But otherwise, you will have to use a thread.
I'm assuming you don't need to get any results from the other script, or even know when it's done. If that's not true, it gets a bit more complicated.
Putting it all together:
import sys
import json
import threading
import subprocess

def on_message(self, headers, message):
    print 'received a message %s' % message
    obj = json.loads(message)
    detectedimsi = obj["imsi"]
    print detectedimsi
    thread = threading.Thread(target=subprocess.call,
                              args=[[sys.executable, 'otherscript.py', detectedimsi]])
    thread.daemon = True
    thread.start()

You'll need the subprocess module:
import subprocess
Then in your on_message() method, you can start the other process:
def on_message(self, headers, message):
    …
    outputOfOtherProcess = subprocess.check_output([
        '/path/to/other/process',
        json.dumps(detectedimsi)])
Your other process will receive the value you want to pass as sys.argv[1] in JSON format (so it will need to use json.loads(sys.argv[1])).
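On the receiving side, a minimal sketch of what the other script would do (the variable name is just an assumption):

# other process: decode the JSON value passed on the command line
import sys
import json

detectedimsi = json.loads(sys.argv[1])
print detectedimsi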
Other ways of passing the value are possible, but passing it as command line argument might be sufficient for your case (depends on the data's size and other things).
As abarnert pointed out, some things should be mentioned:
This way of calling a subprocess will block until the subprocess is finished. This might be a problem if your script is supposed to be responsive again before the subprocess has finished.
Calling the subprocess script by giving its path as the executable might fail on systems that do not support executing scripts directly. In these cases you will have to invoke the interpreter (which is available as sys.executable) and pass the path to the script as the first parameter (with the remaining parameters after that), as sketched below.
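A minimal sketch of that portable variant, combining sys.executable with the check_output call from above (the script path is the same placeholder, and detectedimsi stands in for the parsed value):

import sys
import json
import subprocess

detectedimsi = '123456789012345'  # placeholder for the value parsed from the message

# run the script through the same interpreter that is running this code,
# instead of relying on the OS to execute the script file directly
outputOfOtherProcess = subprocess.check_output(
    [sys.executable, '/path/to/other/process', json.dumps(detectedimsi)])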

Related

Protocol Handshaking Python

For my year 13 A-level computing project I'm writing an email client. I need to model how Python's SMTP support works and show protocol handshaking. What I want to know is: when I log into Gmail's mail server to send an email using SMTP, is there a way to print out what the line of code does?
So I would want to show exactly what is going on when the line is executed.
import smtplib
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()  # needed before login on port 587
server.login("youremailusername", "password")
msg = "\nHello!"  # The \n separates the message from the headers
server.sendmail("you@gmail.com", "target@example.com", msg)
Cheers guys
Assuming that by "what the line of code does" you mean "what protocol messages get sent to and received from the server", use smtplib.SMTP.set_debuglevel(level):
Set the debug output level. A true value for level results in debug messages for connection and for all messages sent to and received from the server.
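For example, a minimal sketch applied to the code from the question (credentials and addresses are placeholders):

import smtplib

server = smtplib.SMTP('smtp.gmail.com', 587)
server.set_debuglevel(1)  # print the SMTP conversation to stderr from here on
server.starttls()
server.login("youremailusername", "password")
server.sendmail("you@gmail.com", "target@example.com", "\nHello!")
server.quit()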
If by "what the line of code does" you want to know the Python code that's being executed, you can step into the function call in the debugger. Or just read the source. Like many modules in the stdlib, smtplib is designed to be useful as sample code as well as a practical module, so at the top of the docs, there's a link to smtplib.py.
Is there a way I can write that output to a tkinter window or file?
If you look at the source linked above, you can see that it just uses print calls for its debug logging. So, this gives you a few options:
Fork smtplib and replace those print calls with something better.
Monkeypatch smtplib to give it a print function that shadows the global one; see the sketch after this list. (This only works in Python 3.x; in 2.x, print isn't a function.)
Open a text file, and just assign sys.stderr = my_text_file. (This only works for files, not tkinter. And it also catches all stderr, not just the logging from smtplib.)
Create a file-like object that does whatever you want in its write method, and assign sys.stderr to that. (This works for anything you want to do, including adding to a tkinter edit window, but of course it still catches all stderr.)
Wrap the script from outside—e.g., with a wrapper script that uses subprocess.Popen to call the real script and capture its stderr in a pipe.
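A minimal sketch of the monkeypatching option (Python 3.x only; the log file name is an arbitrary placeholder):

import smtplib

def my_print(*args, **kwargs):
    # stand-in sink: replace with a write to a tkinter text widget if preferred
    with open('smtp_debug.log', 'a') as f:
        f.write(' '.join(str(arg) for arg in args) + '\n')

# shadow the built-in print inside the smtplib module; smtplib's debug
# calls now resolve to my_print instead of the builtin
smtplib.print = my_print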
Which one is appropriate depends on your needs. For your assignment, assuming nothing is writing to stderr but smtplib's debug output during the time you're doing smtplib stuff, I think the file-like object idea might make sense. So:
import sys

class MyDumbFakeStderr(object):
    def write(self, output):
        add_to_my_file_or_tkinter_thing(output)

sys.stderr = MyDumbFakeStderr()
try:
    pass  # your smtplib code here
finally:
    sys.stderr = sys.__stderr__
Obviously restoring stderr is unnecessary if you're just going to quit as soon as you're done, while if you want to do it repeatedly you'll probably want to wrap it in a contextlib.contextmanager, etc. Also, this MyDumbFakeStderr is pretty dumb (hence the name); it works fine for wrapping code that does nothing but print whole lines to stderr, but for anything more complicated you might need to add your own line buffering, or make it an io.TextIOBase, etc. This is just an idea to get you started.
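For the repeated-use case, a minimal sketch of the contextlib.contextmanager idea (a sketch under the same assumptions as MyDumbFakeStderr above):

import sys
import contextlib

@contextlib.contextmanager
def redirected_stderr(fake_stream):
    # temporarily swap sys.stderr, restoring it even if an exception escapes
    old_stderr = sys.stderr
    sys.stderr = fake_stream
    try:
        yield
    finally:
        sys.stderr = old_stderr

# usage:
# with redirected_stderr(MyDumbFakeStderr()):
#     ...smtplib code...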
You can read the function's source code.
http://www.opensource.apple.com/source/python/python-3/python/Lib/smtplib.py (search for sendmail)
You can also read a bit about SMTP: http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol#SMTP_transport_example
And try to relate the two.

Reconnect a running daemon to stdout

I have an object running as a daemon in py3k.
For that, I use the Pyro4 module inside a thread (based on the code from Sander Marechal, daemon.py).
class MyDaemon(Daemon):
    def run(self):
        mo = MyObject()
        daemon = Pyro4.Daemon(host=HOST, port=PORT)
        uri = daemon.register(mo, USER)
        logging.debug("MyObject ready. Object uri = {0}".format(uri))
        daemon.requestLoop()
and when needed, I get the object with
mo = Pyro4.Proxy("PYRO:%s@%s:%i" % (USER, HOST, PORT))
mo.myAction(my_args)
Now I want the MyObject module to output text to stdout. The problem is that, since it runs in a thread, it is not connected to sys.__stdout__.
class MyObject():
    def greeting(self):
        print("Hello world")  # lost
I tried to create a mo.reconnect(sys.__stdout__) function to bind the current stdout to the one in the thread, but Pyro4 does not accept a buffer as an argument.
A solution could be to simply return text at the end of my function, which would be received by the Pyro4 proxy, but I also want to be able to display info from inside a function.
The question is also valid for stdin.
Is there a way to achieve what I am looking for? Is there something I don't get and I'm overcomplicating? Maybe Pyro4 is not the best way to do that.
Thank you
Why would you want your daemon to interact with stdin and stdout? The very fact that it is a daemon means that it shouldn't interact with the "user" (for whom stdin and stdout are intended).
Everything depends on what you want to achieve by connecting its input and output to stdin or out:
If you want user interaction, you should make your main code act as a proxy to that daemon, handling input and output, with the daemon just doing the processing. That is, your daemon's interface should take the input strings (or objects, if easier) as parameters and return similar objects that your proxy will take and output to the user; see the sketch after this list.
If you want debugging output, a quick patch would be to read directly from the /tmp/sdaemon.log file, which is where all the daemon's output goes (according to line 44 of daemon.py). A more decent fix would be to implement proper logging throughout your code.
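A minimal sketch of the first option, reusing the USER, HOST and PORT names from the question (the values here are placeholders, not a tested fix):

import Pyro4

USER, HOST, PORT = "myobject", "localhost", 9090  # placeholders

class MyObject(object):
    def greeting(self):
        # return the text instead of printing it inside the daemon
        return "Hello world"

# client side: the proxy receives the string and prints it on its own stdout
mo = Pyro4.Proxy("PYRO:%s@%s:%i" % (USER, HOST, PORT))
print(mo.greeting())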

Unable to open a Python subprocess in Web2py (SIGABRT)

I've got an Apache2/web2py server running using the wsgi handler functionality. Within one of the controllers, I am trying to run an external executable to perform some processing on 2 files.
My approach to this is to use the subprocess module to kick off the executable. I have simplified the code to a bare-bones implementation with little success.
from subprocess import *
p = Popen(("echo", "Hello"), shell=False)
ret = p.wait()
print "Process ended with status %s" % ret
When running the above code on its own (create new file and running via python command line), it works exactly as expected.
However, as soon as I place the exact same code into my web2py controller, the external process stops working. Instead of the process returning with code 0 as is expected in the above example, it always returns -6 and "Hello" is not printed to stdout.
After doing some digging, I found that negative results from p.wait() implies that a signal caused the process to end abnormally. And, according to some docs I found, -6 corresponds to the SIGABRT signal.
I would have expected this signal to be a result of some poorly executed code in my child process. However, since this is only running echo (and since it works outside of web2py) I have my doubts that the child process is signalling itself.
Is there some web2py limitation/configuration that causes Popen() requests to always fail? If so, how can I modify my logic so that the controller (or whatever) is actually able to spawn this external process?
EDIT: Looks like web2py applications may not like the subprocess module. According to a reply in the web2py email group:
"You should not use subprocess in a web2py application (if you really need too, look into the admin/controllers/shell.py) but you can use it in a web2py program running from shell (web2py.py -R myprogram.py)."
I will be checking out some options based on the note here and see if any solution presents itself.
In the end, the best I was able to come up with involved setting up a simple XML RPC server and call the functions from that:
my_server.py
#my_server.py
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
from subprocess import *

def echo_fn():
    p = Popen(("echo", "hello"), shell=False)
    ret = p.wait()
    print "Process ended with status %s" % ret
    return True  # RPC Server doesn't like to return None

def main():
    server = SimpleXMLRPCServer(("localhost", 12345), SimpleXMLRPCRequestHandler)
    server.register_function(echo_fn, "echo_fn")
    while True:
        server.handle_request()

if __name__ == "__main__":
    main()
web2py_controller.py
#web2py_controller.py
import xmlrpclib

def run_echo():
    proc_srvr = xmlrpclib.ServerProxy("http://localhost:12345")
    proc_srvr.echo_fn()
I'll be honest, I'm not a Python or SimpleXMLRPCServer guru, so the overall code may not be up to best-practice standards. However, going this route did allow me to, in effect, call a subprocess from a controller in web2py.
(Note, this was a quick and dirty simplification of the code that I have in my project. I have not validated it is in a working state, so it may require some tweaks.)

SGE script: print to file during execution (not just at the end)?

I have an SGE script that executes some Python code, submitted to the queue using qsub. In the Python script, I have a few print statements (updating me on the progress of the program). When I run the Python script from the command line, the print statements are sent to stdout. For the SGE script, I use the -o option to redirect the output to a file. However, it seems that the script will only send these to the file after the Python script has completed running. This is annoying because (a) I can no longer see real-time updates on the program and (b) if my job does not terminate correctly (for example, if it gets kicked off the queue) none of the updates are printed. How can I make sure that the script writes to the file each time I want to print something, as opposed to lumping it all together at the end?
I think you are running into an issue with buffered output. Python uses a library to handle its output, and the library knows that it's more efficient to write a block at a time when it's not talking to a tty.
There are a couple of ways to work around this. You can run python with the "-u" option (see the python man page for details), for example, with something like this as the first line of your script:
#! /usr/bin/python -u
but this doesn't work if you are using the "/usr/bin/env" trick because you don't know where python is installed.
Another way is to reopen the stdout with something like this:
import sys
import os
# reopen stdout file descriptor with write mode
# and 0 as the buffer size (unbuffered)
sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0)
Note the bufsize parameter of os.fdopen being set to 0 to force it to be unbuffered. You can do something similar with sys.stderr.
As others mentioned, stdout is not always written immediately when it is not connected to a tty, for performance reasons.
If you have a specific point at which you want the stdout to be written, you can force that by using
import sys
sys.stdout.flush()
at that point.
I just encountered a similar issue with SGE, and no suggested method to "unbuffer" the file IO seemed to work for me. I had to wait until the end of program execution to see any output.
The workaround I found was to wrap sys.stdout into a custom object that re-implements the "write" method. Instead of actually writing to stdout, this new method instead opens the file where IO is redirected, appends with the desired data, and then closes the file. It's a bit ugly, but I found it solved the problem, since the actual opening/closing of the file forces IO to be interactive.
Here's a minimal example:
import os, sys, time

class RedirIOStream:
    def __init__(self, stream, REDIRPATH):
        self.stream = stream
        self.path = REDIRPATH

    def write(self, data):
        # instead of actually writing, just append to the file directly!
        myfile = open(self.path, 'a')
        myfile.write(data)
        myfile.close()

    def __getattr__(self, attr):
        return getattr(self.stream, attr)

if not sys.stdout.isatty():
    # Detect redirected stdout and stderr file locations!
    # Warning: this will only work on LINUX machines
    STDOUTPATH = os.readlink('/proc/%d/fd/1' % os.getpid())
    STDERRPATH = os.readlink('/proc/%d/fd/2' % os.getpid())
    sys.stdout = RedirIOStream(sys.stdout, STDOUTPATH)
    sys.stderr = RedirIOStream(sys.stderr, STDERRPATH)

# Simple program to print a message every 3 seconds
nMsg = 10  # number of messages to print
def main():
    tstart = time.time()
    for x in xrange(nMsg):
        time.sleep(3)
        MSG = ' %d/%d after %.0f sec' % (x, nMsg, time.time() - tstart)
        print MSG

if __name__ == '__main__':
    main()
This is SGE buffering the output of your process; it happens whether it's a Python process or any other.
In general you can decrease or disable the buffering in SGE by changing it and recompiling, but that's not a great thing to do: all that data is going to be slowly written to disk, affecting your overall performance.
Why not print to a file instead of stdout?
outFileID = open('output.log', 'w')
print('INFO: still working!', file=outFileID)
print('WARNING: blah blah!', file=outFileID)
outFileID.flush()  # flush so 'tail -f' sees the lines promptly
and use
tail -f output.log
This works for me:
import os
import sys

class ForceIOStream:
    def __init__(self, stream):
        self.stream = stream

    def write(self, data):
        self.stream.write(data)
        self.stream.flush()
        if not self.stream.isatty():
            os.fsync(self.stream.fileno())

    def __getattr__(self, attr):
        return getattr(self.stream, attr)

sys.stdout = ForceIOStream(sys.stdout)
sys.stderr = ForceIOStream(sys.stderr)
and the issue has to do with NFS not syncing data back to the master until a file is closed or fsync is called.
I hit this same problem today and solved it by just writing to disk instead of printing:
with open('log-file.txt', 'w') as out:
    out.write(status_report)
print() supports the argument flush since Python 3.3 (documentation). So, to force flush the stream:
print('Hello World!', flush=True)

Streaming stdin/stdout in Python

I'm trying to stream a bash shell to/from a simple WebSockets UI, but I'm having trouble redirecting the IO. I want to start an instance of bash and connect stdout and stdin to write_message() and on_message() functions that interact with my web UI. Here's a simplified version of what I'm trying to do:
import thread
import subprocess

from tornado.websocket import WebSocketHandler

class Handler(WebSocketHandler):
    def open(self):
        print "New connection opened."
        self.app = subprocess.Popen(["/bin/bash", "--norc", "-i"],
                                    stdout=subprocess.PIPE,
                                    stdin=subprocess.PIPE,
                                    shell=False)
        thread.start_new_thread(self.io_loop, ())

    def on_message(self, message):
        self.app.stdin.write(message)

    def on_close(self):
        self.app.terminate()

    def io_loop(self):
        while self.app.poll() is None:
            line = self.app.stdout.readline()
            if line:
                self.write_message(line)
While bash appears to start and on_message does get called, I don't get any output. readline() remains blocking. I've tried stdout.read(), stdout.read(1), and various buffer modifications, but still no output. I've also tried hardcoding commands with a trailing '\n' in on_message to isolate the issue, but I still don't get any output from readline().
Ideally I want to stream each byte written to stdout in realtime, without waiting for EOL or any other characters, but I'm having a hard time finding the right API. Any pointers would be appreciated.
It looks to me like the line:
line = self.app.stdout.readline()
will block the ioloop from running, because the application will spend most of its time hung up in readline() waiting for the subprocess to write some output. To get this to work, you are going to have to get the stdin and stdout of the process (and stderr, which you also need to capture), switch them into non-blocking mode, and add them to the set of file descriptors that the ioloop spends its time looping on, as sketched below.
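A minimal sketch of that approach, assuming the tornado IOLoop implied by the question (untested; the on_stdout_ready method name is an invention for illustration):

import os
import fcntl
from tornado.ioloop import IOLoop

def make_nonblocking(fd):
    # switch a file descriptor into non-blocking mode
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

# inside Handler.open(), after starting self.app, instead of the reader thread:
#     fd = self.app.stdout.fileno()
#     make_nonblocking(fd)
#     IOLoop.current().add_handler(fd, self.on_stdout_ready, IOLoop.READ)

# and as a method on Handler:
#     def on_stdout_ready(self, fd, events):
#         # read whatever is available right now and push it to the client
#         data = os.read(fd, 4096)
#         if data:
#             self.write_message(data)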
