I have an object running as a daemon in py3k.
For that, I use the Pyro4 module inside a thread (based on the code from Sander Marechal, daemon.py).
class MyDaemon(Daemon):
    def run(self):
        mo = MyObject()
        daemon = Pyro4.Daemon(host=HOST, port=PORT)
        uri = daemon.register(mo, USER)
        logging.debug("MyObject ready. Object uri = {0}".format(uri))
        daemon.requestLoop()
and when needed, I get the object with
mo = Pyro4.Proxy("PYRO:%s@%s:%i" % (USER, HOST, PORT))
mo.myAction(my_args)
Now I want the MyObject module to output text to stdout. The problem is that, since it runs in a thread, it is not connected to sys.__stdout__.
class MyObject():
    def greeting(self):
        print("Hello world")  # lost
I tried to create a mo.reconnect(sys.__stdout__) function to bind the current stdout to the one in the thread, but Pyro4 does not accept a buffer as an argument.
A solution could be to simply return the text at the end of my function, which would then be received by the Pyro4 proxy, but I also want to be able to display info from inside a function.
The question is also valid for stdin.
Is there a way to achieve what I am looking for? Is there something I don't get, and am I overcomplicating this? Maybe Pyro4 is not the best way to do that.
Thank you
Why would you want your daemon to interact with stdin and stdout? The very fact that it is a daemon means that it shouldn't interact with the "user" (for whom stdin and stdout are intended).
Everything depends on what you want to achieve by connecting its input and output to stdin or stdout:
If you want user interaction, make your main code act as a proxy to the daemon, handling input and output while the daemon just does the processing. That is, your daemon's interface should take the input strings (or objects, if easier) as parameters and return similar objects that your proxy then presents to the user (see the sketch after this list).
If you want debugging output, a quick patch would be to read directly from the /tmp/sdaemon.log file, which is where all the daemon's output goes (according to line 44). A more decent fix would be to implement proper logging throughout your code.
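For the first option, a rough sketch of the idea, reusing the names from the question (and assuming USER, HOST and PORT are defined as there): the remote method returns its text instead of printing it, and any diagnostic output goes through logging on the daemon side.
import logging
import Pyro4

class MyObject(object):
    def greeting(self):
        logging.info("greeting() called")   # ends up in the daemon's log
        return "Hello world"                # travels back to the caller over Pyro4

# client side: print whatever the daemon returned
mo = Pyro4.Proxy("PYRO:%s@%s:%i" % (USER, HOST, PORT))
print(mo.greeting())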
Related
I have written a python daemon process that can be started and stopped using the following commands
/usr/local/bin/daemon.py start
/usr/local/bin/daemon.py stop
I can achieve the same results by calling these commands from a python script
os.system('/usr/local/bin/daemon.py start')
os.system('/usr/local/bin/daemon.py stop')
this works perfectly fine, but now I wish to add functionality to the daemon process such that when I run the command
os.system('/usr/local/bin/daemon.py foo')
the daemon returns a Python object. So, something like:
foobar = os.system('/usr/local/bin/daemon.py foo')
Just to be clear, I have all the logic ready in the daemon to return a Python object; I just can't figure out how to pass this object to the calling Python script. Is there some way?
Don't you mean you want to implement simple serialization and deserialization?
In that case I'd propose looking at pickle (https://docs.python.org/2/library/pickle.html) to transform your data into a generic serialized format on the daemon side and transform it back into Python objects on the client side.
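A rough sketch of that idea, assuming the daemon writes its result to a file whose path both sides agree on (the path and result_object are just placeholders):
# daemon side: serialize the object before the 'foo' command finishes
import pickle

with open('/tmp/daemon_result.pkl', 'wb') as f:
    pickle.dump(result_object, f)

# calling script: run the command, then deserialize the result
import os
import pickle

os.system('/usr/local/bin/daemon.py foo')
with open('/tmp/daemon_result.pkl', 'rb') as f:
    foobar = pickle.load(f)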
I think marshaling is what you need: https://docs.python.org/2.7/library/marshal.html & https://docs.python.org/2/library/pickle.html
For my A level computing project in year 13 I'm writing an email client, and I need to model how Python's SMTP support works and show the protocol handshaking. What I want to know is: when I log into Gmail's mail server to send an email using SMTP, is there a way to print out what each line of code does?
So I would want to show exactly what is going on when the line is executed.
import smtplib
server = smtplib.SMTP('smtp.gmail.com', 587)
server.login("youremailusername", "password")
msg = "\nHello!" # The \n separates the message from the headers
server.sendmail("you@gmail.com", "target@example.com", msg)
Cheers guys
Assuming that by "what the line of code does" you mean "what protocol messages get sent to and received from the server", the answer is smtplib.SMTP.set_debuglevel(level):
Set the debug output level. A true value for level results in debug messages for connection and for all messages sent to and received from the server.
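For example, a minimal sketch based on the question's code (note that Gmail normally requires starttls() before login on port 587; the debug output goes to stderr):
import smtplib

server = smtplib.SMTP('smtp.gmail.com', 587)
server.set_debuglevel(1)          # dump every command sent and every reply received
server.starttls()
server.login("youremailusername", "password")
server.sendmail("you@gmail.com", "target@example.com", "\nHello!")
server.quit()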
If by "what the line of code does" you want to know the Python code that's being executed, you can step into the function call in the debugger. Or just read the source. Like many modules in the stdlib, smtplib is designed to be useful as sample code as well as a practical module, so at the top of the docs, there's a link to smtplib.py.
Is there a way I can write that output to a tkinter window or file?
If you look at the source linked above, you can see that it just uses print calls for its debug logging. So, this gives you a few options:
Fork smtplib and replace those print calls with something better.
Monkeypatch smtplib to give it a print function that shadows the global one. (This only works in Python 3.x; in 2.x, print isn't a function.)
Open a text file, and just assign sys.stderr = my_text_file. (This only works for files, not tkinter. And it also catches all stderr, not just the logging from smtplib.)
Create a file-like object that does whatever you want in its write method, and assign sys.stderr to that. (This works for anything you want to do, including adding to a tkinter edit window, but of course it still catches all stderr.)
Wrap the script from outside—e.g., with a wrapper script that uses subprocess.Popen to call the real script and capture its stderr in a pipe.
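For the wrapper-script option, a rough sketch (the script name is a placeholder, and this simply logs the captured stderr to a file):
import subprocess
import sys

# run the real script and capture everything it writes to stderr
proc = subprocess.Popen([sys.executable, 'real_smtp_script.py'],
                        stderr=subprocess.PIPE, universal_newlines=True)
with open('smtp_debug.log', 'w') as log:
    for line in proc.stderr:      # smtplib's debug output arrives line by line
        log.write(line)           # or push it into a tkinter Text widget instead
proc.wait()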
Which one is appropriate depends on your needs. For your assignment, assuming nothing is writing to stderr but smtplib's debug output during the time you're doing smtplib stuff, I think the file-like object idea might make sense. So:
class MyDumbFakeStderr(object):
    def write(self, output):
        add_to_my_file_or_tkinter_thing(output)

sys.stderr = MyDumbFakeStderr()
try:
    pass  # your smtplib code here
finally:
    sys.stderr = sys.__stderr__
Obviously restoring stderr is unnecessary if you're just going to quit as soon as you're done, while if you want to do it repeatedly you'll probably want to wrap it in a contextlib.contextmanager, etc. Also, this MyDumbFakeStderr is pretty dumb (hence the name); it works fine for wrapping code that does nothing but print whole lines to stderr, but for anything more complicated you might need to add your own line buffering, or make it an io.TextIOBase, etc. This is just an idea to get you started.
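For instance, a rough context-manager version of the same idea (names are illustrative):
import contextlib
import sys

@contextlib.contextmanager
def fake_stderr(replacement):
    old, sys.stderr = sys.stderr, replacement
    try:
        yield replacement
    finally:
        sys.stderr = old

with fake_stderr(MyDumbFakeStderr()):
    pass  # your smtplib code here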
You can read the function's source code.
http://www.opensource.apple.com/source/python/python-3/python/Lib/smtplib.py (search for sendmail)
You can also read a bit about SMTP: http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol#SMTP_transport_example
And try to relate the two
I have a python script that is constantly listening for a JSON message from another server. When it receives a message, one of the values is placed into a variable. Once this message is received I need another python script to be launched and the variable passed over to it to be processed. Can anyone help me with how to do this?
I'm very new to python and would really appreciate any help you can give me, as I'm struggling to understand some of the other answers to similar questions.
Thank you!
Below is the code constantly running to receive messages:
class MyListener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print 'received an error %s' % message

    def on_message(self, headers, message):
        print 'received a message %s' % message
        obj = json.loads(message)
        detectedimsi = obj["imsi"]
        print detectedimsi
To run any other program, you just use the functions in the subprocess module. But there are three complexities.
First, you want to run another Python script, not a binary. Presumably you want to run it with the same version of Python that you're running in, even if there are multiple Pythons installed. So, you want to use sys.executable as the program. For example:
subprocess.check_call([sys.executable, 'otherscript.py'])
Second, you want to pass some data. For small amounts of printable string data that will fit on the command line, just pass them as an argument:
subprocess.check_call([sys.executable, 'otherscript.py', detectedimsi])
Then, in the other script, you just look at sys.argv[1] to receive the data.
If you're trying to pass binary data, or a large amount of text, you will want to pass it over a pipe. The simplest way to do that is to pass it via stdin. But I don't think that's relevant here.
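If you did need that, a minimal sketch (assuming the same otherscript.py and that this runs inside on_message, where obj is the decoded message):
import json
import subprocess
import sys

payload = json.dumps(obj)                        # 'obj' is the decoded message
proc = subprocess.Popen([sys.executable, 'otherscript.py'],
                        stdin=subprocess.PIPE, universal_newlines=True)
proc.communicate(payload)                        # the child reads it via json.load(sys.stdin)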
Third, you're trying to do this inside an asynchronous network client or server. So, you can't just run another script and block until it's finished, unless you're absolutely sure that it's going to finish very fast. If your server framework has a way to integrate subprocesses into it (Twisted is the only one I've ever used that does, but there may be others), great. You can fake it if you can toss the process's stdout pipe into your framework's event loop (basically, if you don't care about Windows, and your framework has a way to add a file to the reactor). But otherwise, you will have to use a thread.
I'm assuming you don't need to get any results from the other script, or even know when it's done. If that's not true, it gets a bit more complicated.
Putting it all together:
def on_message(self, headers, message):
    print 'received a message %s' % message
    obj = json.loads(message)
    detectedimsi = obj["imsi"]
    print detectedimsi
    thread = threading.Thread(target=subprocess.call,
                              args=[[sys.executable, 'otherscript.py', detectedimsi]])
    thread.daemon = True
    thread.start()
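If you do need the other script's result back (the "more complicated" case mentioned above), one rough approach is to capture its stdout in the worker thread and hand it to the rest of your program through a queue; names here are illustrative, and the thread-creation lines would live inside on_message as above:
import subprocess
import sys
import threading

try:
    import queue             # Python 3
except ImportError:
    import Queue as queue    # Python 2

results = queue.Queue()

def run_and_collect(detectedimsi):
    # run the other script and capture whatever it prints to stdout
    out = subprocess.check_output([sys.executable, 'otherscript.py', detectedimsi])
    results.put(out)         # the listener/main loop can poll results.get_nowait()

thread = threading.Thread(target=run_and_collect, args=(detectedimsi,))
thread.daemon = True
thread.start()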
You'll need the subprocess module:
import subprocess
Then in your on_message() method, you can start the other process:
def on_message(self, headers, message):
    …
    outputOfOtherProcess = subprocess.check_output([
        '/path/to/other/process',
        json.dumps(detectedimsi)])
Your other process will receive the value you want to pass as sys.argv[1] in JSON format (so it will need to use json.loads(sys.argv[1])).
Other ways of passing the value are possible, but passing it as command line argument might be sufficient for your case (depends on the data's size and other things).
As abarnert pointed out, some things should be mentioned:
This way of calling a subprocess will block until the subprocess is finished. This might be a problem in case your script is supposed to be reactive again before the subprocess is finished.
Calling the subprocess script by giving its path as the executable might fail on systems which do not support running scripts directly. In those cases you will have to call the interpreter (which can be accessed as sys.executable) and pass the path to the script as the first parameter instead (and the remaining parameters after that), as sketched below.
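A minimal sketch of that variant (the script path is still a placeholder, and detectedimsi comes from on_message as above):
import json
import subprocess
import sys

# same call as above, but run the script through the interpreter explicitly
output = subprocess.check_output(
    [sys.executable, '/path/to/other/process.py', json.dumps(detectedimsi)])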
I need to write an app to contact a server. After sending a few initial messages it should allow the user to interact with the server by sending commands and receiving results.
How should I pipe my current socket so that the user can interact with the server without my code having to read input from stdin and write output to stdout itself?
You mean like using netcat?
cat initial_command_file - | nc host port
The answer is that something needs to read and write. In the shell pipeline above, cat reads from two sources in sequence and writes to a single pipe; nc reads from that pipe and writes to a socket, but also reads from the socket and writes to its stdout.
So there will always be some reading and writing going on ... however, you can structure your code so that doesn't intrude into the comms logic.
For example, you can use itertools.chain to create an input iterator that behaves similarly to cat, so your TCP-facing code can take a single input iterable:
import itertools
import socket
import sys

def netcat(input, output, remote):
    """trivial example for a 1:1 request-response protocol"""
    for request in input:
        remote.sendall(request.encode())
        response = remote.recv(4096)
        output.write(response.decode())

handshake = ['connect', 'initial', 'handshake', 'stuff']
cat = itertools.chain(handshake, sys.stdin)
server = ('localhost', 9000)
netcat(cat, sys.stdout, socket.create_connection(server))
You probably want something like pexpect. Basically you'd create a spawn object that initiates the connection (e.g. via ssh), then use that object's expect() and sendline() methods to issue the commands you want to send at the prompts. Then you can use the interact() method to turn control over to the user.
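A rough pexpect sketch of that flow (host, credentials, prompt pattern and commands are all placeholders):
import pexpect

child = pexpect.spawn('ssh user@example.com')
child.expect('password:')
child.sendline('secret')
child.expect(r'\$ ')               # wait for a shell prompt
child.sendline('initial handshake command')
child.expect(r'\$ ')
child.interact()                   # hand the live session over to the user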
I've got an Apache2/web2py server running using the wsgi handler functionality. Within one of the controllers, I am trying to run an external executable to perform some processing on 2 files.
My approach to this is to use the subprocess module to kick off the executable. I have simplified the code to a bare-bones implementation with little success.
from subprocess import *
p = Popen(("echo", "Hello"), shell=False)
ret = p.wait()
print "Process ended with status %s" % ret
When running the above code on its own (create new file and running via python command line), it works exactly as expected.
However, as soon as I place the exact same code into my web2py controller, the external process stops working. Instead of the process returning with code 0 as is expected in the above example, it always returns -6 and "Hello" is not printed to stdout.
After doing some digging, I found that negative results from p.wait() imply that a signal caused the process to end abnormally. And, according to some docs I found, -6 corresponds to the SIGABRT signal.
I would have expected this signal to be a result of some poorly executed code in my child process. However, since this is only running echo (and since it works outside of web2py) I have my doubts that the child process is signalling itself.
Is there some web2py limitation/configuration that causes Popen() requests to always fail? If so, how can I modify my logic so that the controller (or whatever) is actually able to spawn this external process?
** EDIT: Looks like web2py applications may not like the subprocess module. According to a reply in the web2py email group:
"You should not use subprocess in a web2py application (if you really need too, look into the admin/controllers/shell.py) but you can use it in a web2py program running from shell (web2py.py -R myprogram.py)."
I will be checking out some options based on the note here and see if any solution presents itself.
In the end, the best I was able to come up with involved setting up a simple XML RPC server and call the functions from that:
my_server.py
#my_server.py
from SimpleXMLRPCServer import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
from subprocess import *

def echo_fn():
    p = Popen(("echo", "hello"), shell=False)
    ret = p.wait()
    print "Process ended with status %s" % ret
    return True  # RPC Server doesn't like to return None

def main():
    server = SimpleXMLRPCServer(("localhost", 12345), SimpleXMLRPCRequestHandler)
    server.register_function(echo_fn, "echo_fn")
    while True:
        server.handle_request()

if __name__ == "__main__":
    main()
web2py_controller.py
#web2py_controller.py
import xmlrpclib

def run_echo():
    proc_srvr = xmlrpclib.ServerProxy("http://localhost:12345")
    proc_srvr.echo_fn()
I'll be honest, I'm not a Python or SimpleXMLRPCServer guru, so the overall code may not be up to best-practice standards. However, going this route did allow me to, in effect, call a subprocess from a controller in web2py.
(Note, this was a quick and dirty simplification of the code that I have in my project. I have not validated it is in a working state, so it may require some tweaks.)