Protocol Handshaking in Python

For my year 13 A-level computing project I'm writing an email client. I need to model how Python's SMTP support works and demonstrate protocol handshaking. What I want to know is: when I log into Gmail's mail server to send an email using SMTP, is there a way to print out what each line of code does?
So I would want to show exactly what is going on when the line is executed.
import smtplib
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()  # Gmail requires TLS on port 587 before login
server.login("youremailusername", "password")
msg = "\nHello!"  # The \n separates the message from the headers
server.sendmail("you@gmail.com", "target@example.com", msg)
Cheers guys

Assuming that by "what the line of code does" you mean "what protocol messages get sent to and received from the server", use smtplib.SMTP.set_debuglevel(level):
Set the debug output level. A true value for level results in debug messages for connection and for all messages sent to and received from the server.
If by "what the line of code does" you want to know the Python code that's being executed, you can step into the function call in the debugger. Or just read the source. Like many modules in the stdlib, smtplib is designed to be useful as sample code as well as a practical module, so at the top of the docs, there's a link to smtplib.py.
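As a minimal sketch of the first approach (constructing SMTP with no host makes no connection, so this runs offline; the commented-out hostname and credentials are placeholders):

```python
import smtplib

server = smtplib.SMTP()      # no host given, so nothing is connected yet
server.set_debuglevel(1)     # any true value echoes the SMTP dialogue to stderr

# Once you connect, every command and reply is printed, e.g.:
# server.connect('smtp.gmail.com', 587)
# server.starttls()
# server.login('youremailusername', 'password')
```

With the debug level set, you'll see the raw EHLO/STARTTLS/AUTH exchange on stderr as each line of your script executes.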
Is there a way I can write that output to a tkinter window or file?
If you look at the source linked above, you can see that it just uses print calls for its debug logging. So, this gives you a few options:
Fork smtplib and replace those print calls with something better.
Monkeypatch smtplib to give it a print function that shadows the global one. (This only works in Python 3.x; in 2.x, print isn't a function.)
Open a text file, and just assign sys.stderr = my_text_file. (This only works for files, not tkinter. And it also catches all stderr, not just the logging from smtplib.)
Create a file-like object that does whatever you want in its write method, and assign sys.stderr to that. (This works for anything you want to do, including adding to a tkinter edit window, but of course it still catches all stderr.)
Wrap the script from outside—e.g., with a wrapper script that uses subprocess.Popen to call the real script and capture its stderr in a pipe.
Which one is appropriate depends on your needs. For your assignment, assuming nothing is writing to stderr but smtplib's debug output during the time you're doing smtplib stuff, I think the file-like object idea might make sense. So:
class MyDumbFakeStderr(object):
    def write(self, output):
        add_to_my_file_or_tkinter_thing(output)

sys.stderr = MyDumbFakeStderr()
try:
    # your smtplib code here
finally:
    sys.stderr = sys.__stderr__
Obviously restoring stderr is unnecessary if you're just going to quit as soon as you're done, while if you want to do it repeatedly you'll probably want to wrap it in a contextlib.contextmanager, etc. Also, this MyDumbFakeStderr is pretty dumb (hence the name); it works fine for wrapping code that does nothing but print whole lines to stderr, but for anything more complicated you might need to add your own line buffering, or make it an io.TextIOBase, etc. This is just an idea to get you started.
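One way to make the swap repeatable is a small context manager that temporarily replaces sys.stderr and restores it on exit. This is a generic sketch (nothing here is smtplib-specific; the function name stderr_to is made up):

```python
import contextlib
import io
import sys

@contextlib.contextmanager
def stderr_to(target):
    """Temporarily point sys.stderr at `target`, restoring it afterwards."""
    old, sys.stderr = sys.stderr, target
    try:
        yield target
    finally:
        sys.stderr = old

buf = io.StringIO()
with stderr_to(buf):
    print("debug line", file=sys.stderr)  # stands in for smtplib's debug output
# sys.stderr is restored here; buf.getvalue() holds the captured text
```

Anything written to stderr inside the with block ends up in buf, which you can then write to a file or a tkinter widget.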

You can read the function's source code (search for sendmail):
http://www.opensource.apple.com/source/python/python-3/python/Lib/smtplib.py
You can also read a bit about SMTP (http://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol#SMTP_transport_example) and try to relate the two.

Related

Is there a way to read stdout in python?

What I want is to get all the text that I wrote to stdout as a string.
from sys import stdout
stdout.read() # throws io.UnsupportedOperation: not readable
Example of what i want to get:
print("abc")
stdout.read() == "abc" # True
No. As the documentation says, stdout is not readable. Think of it as sending information to a physical printer. For instance, when you send a page of text to your FAX-printer-scanner device, how would your program be able to read that output? The characters are sent to an output buffer, down to the physical device, and flushed out to the paper.
The canonical way to handle this is with logging, which has mature support packages in most languages, including Python. You create a logger whose logging method echoes its input both to the console and to another store of your creation, then add a read method (or read the store directly) to get access to that output.
This gives you a little research to do and some coding work, but I trust you can start from here. Look for logging tutorials online. Of course, if you get stuck with the coding, you can post your example on Stack Overflow. :-)
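A sketch of that idea using the stdlib logging package (the logger name "tee-demo" is arbitrary): attach two StreamHandlers, one echoing to the console and one writing into a StringIO you can read back:

```python
import io
import logging

store = io.StringIO()
logger = logging.getLogger("tee-demo")
logger.setLevel(logging.INFO)
logger.addHandler(logging.StreamHandler())       # echo to the console (stderr)
logger.addHandler(logging.StreamHandler(store))  # also keep a readable copy

logger.info("abc")
print(store.getvalue())  # the stored copy can now be read back
```

The default handler format is just the message plus a newline, so store.getvalue() here is "abc\n".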
If you are using 3.4 or higher, there is this recipe found in the documentation for contextlib:
import contextlib
import io

f = io.StringIO()
with contextlib.redirect_stdout(f):
    ... stuff ...
result = f.getvalue()
Note that the effect on stdout is global, so don't use it in libraries or in a threaded application.
If you don't want to use logger, you could create a custom print function:
from io import StringIO

printstore = StringIO()

def myprint(*args, **kwargs):
    print(*args, **kwargs)       # unmodified print
    kwargs["file"] = printstore
    print(*args, **kwargs)       # print to StringIO
This has the advantage that you get all the flexibility of the builtin print.
A drawback is that it catches only output printed with myprint.
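For instance (repeating the definition so the snippet stands alone):

```python
from io import StringIO

printstore = StringIO()

def myprint(*args, **kwargs):
    print(*args, **kwargs)        # unmodified print
    kwargs["file"] = printstore
    print(*args, **kwargs)        # print to the StringIO as well

myprint("abc")
assert printstore.getvalue() == "abc\n"
```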

Send a Variable from one Python Script to another

I have a Python script that is constantly listening for JSON messages from another server. When it receives a message, one of the values is placed into a variable. Once this message is received, I need another Python script to be launched and the variable passed over to it to be processed. Can anyone help me with how to do this?
I'm very new to python and would really appreciate any help you can give me, as I'm struggling to understand some of the other answers to similar questions.
Thank you!
Below is the code constantly running to receive messages:
class MyListener(stomp.ConnectionListener):
    def on_error(self, headers, message):
        print 'received an error %s' % message

    def on_message(self, headers, message):
        print 'received a message %s' % message
        obj = json.loads(message)
        detectedimsi = obj["imsi"]
        print detectedimsi
To run any other program, you just use the functions in the subprocess module. But there are three complexities.
First, you want to run another Python script, not a binary. Presumably you want to run it with the same version of Python that you're running in, even if there are multiple Pythons installed. So, you want to use sys.executable as the program. For example:
subprocess.check_call([sys.executable, 'otherscript.py'])
Second, you want to pass some data. For small amounts of printable string data that will fit on the command line, just pass them as an argument:
subprocess.check_call([sys.executable, 'otherscript.py', detectedimsi])
Then, in the other script, you just look at sys.argv[1] to receive the data.
If you're trying to pass binary data, or a large amount of text, you will want to pass it over a pipe. The simplest way to do that is to pass it via stdin. But I don't think that's relevant here.
Third, you're trying to do this inside an asynchronous network client or server. So, you can't just run another script and block until it's finished, unless you're absolutely sure that it's going to finish very fast. If your server framework has a way to integrate subprocesses into it (Twisted is the only one I've ever used that does, but there may be others), great. You can fake it if you can toss the process's stdout pipe into your framework's event loop (basically, if you don't care about Windows, and your framework has a way to add a file to the reactor). But otherwise, you will have to use a thread.
I'm assuming you don't need to get any results from the other script, or even know when it's done. If that's not true, it gets a bit more complicated.
Putting it all together:
def on_message(self, headers, message):
    print 'received a message %s' % message
    obj = json.loads(message)
    detectedimsi = obj["imsi"]
    print detectedimsi
    thread = threading.Thread(target=subprocess.call,
                              args=[[sys.executable, 'otherscript.py', detectedimsi]])
    thread.daemon = True
    thread.start()
You'll need the subprocess module:
import subprocess
Then in your on_message() method, you can start the other process:
def on_message(self, headers, message):
    …
    outputOfOtherProcess = subprocess.check_output([
        '/path/to/other/process',
        json.dumps(detectedimsi)])
Your other process will receive the value you want to pass as sys.argv[1] in JSON format (so it will need to use json.loads(sys.argv[1])).
Other ways of passing the value are possible, but passing it as command line argument might be sufficient for your case (depends on the data's size and other things).
As abarnert pointed out, some things should be mentioned:
This way of calling a subprocess will block until the subprocess is finished. This might be a problem in case your script is supposed to be reactive again before the subprocess is finished.
Calling the subprocess script by giving its path as executable might fail on systems which do not provide the direct calling of scripts. In these cases you will have to call the interpreter (which can be accessed as sys.executable) and pass the path to the script as first parameter instead (and the remaining parameters after that).

How to pipe python socket to a stdin/stdout

I need to write an app that connects to a server. After sending a few initial messages, it should let the user interact with the server, sending commands and receiving results.
How should I pipe my current socket so that the user can interact with the server without me having to shuttle every line between the socket and stdin/stdout myself?
You mean like using netcat?
cat initial_command_file - | nc host port
The answer is that something always needs to read and write. In the sample shell pipeline above, cat reads from two sources in sequence and writes to a single pipe; nc reads from that pipe and writes to a socket, but also reads from the socket and writes to its stdout.
So there will always be some reading and writing going on; however, you can structure your code so that it doesn't intrude into the comms logic.
For example, you can use itertools.chain to create an input iterator that behaves similarly to cat, so your TCP-facing code can take a single input iterable:
import itertools
import socket
import sys

def netcat(input, output, remote):
    """Trivial example for a 1:1 request-response protocol."""
    for request in input:
        remote.sendall(request.encode())  # sockets use sendall/recv, not write/read
        response = remote.recv(4096)
        output.write(response.decode())

handshake = ['connect', 'initial', 'handshake', 'stuff']
cat = itertools.chain(handshake, sys.stdin)
server = ('localhost', 9000)
netcat(cat, sys.stdout, socket.create_connection(server))
You probably want something like pexpect. Basically you'd create a spawn object that initiates the connection (e.g. via ssh), then use that object's expect() and sendline() methods to issue the commands you want to send at the prompts. Then you can use the interact() method to turn control over to the user.

Reconnect a running daemon to stdout

I have an object running as a daemon in py3k.
For that, I use the Pyro4 module inside a thread (based on the code from Sander Marechal, daemon.py).
class MyDaemon(Daemon):
    def run(self):
        mo = MyObject()
        daemon = Pyro4.Daemon(host=HOST, port=PORT)
        uri = daemon.register(mo, USER)
        logging.debug("MyObject ready. Object uri = {0}".format(uri))
        daemon.requestLoop()
and when needed, I get the object with
mo = Pyro4.Proxy("PYRO:%s@%s:%i" % (USER, HOST, PORT))
mo.myAction(my_args)
Now I want the MyObject module to output text to stdout. The problem is that, running in a thread, it is not connected to sys.__stdout__.
class MyObject():
    def greeting(self):
        print("Hello world")  # lost
I tried to create a mo.reconnect(sys.__stdout__) function to bind the current stdout to the one in the thread, but Pyro4 does not accept a buffer as an argument.
A solution could be to simply return text at the end of my function, which would be received by the Pyro4 proxy, but I also want to be able to display info from inside a function.
The question is also valid for stdin.
Is there a way to achieve what I am looking for? Is there something I don't get, and am I overcomplicating this? Maybe Pyro4 is not the best way to do it.
Thank you
Why would you want your daemon to interact with stdin and stdout? The very fact that it is a daemon means that it shouldn't interact with the "user" (for whom stdin and stdout are intended).
Everything depends on what you want to achieve by connecting its input and output to stdin or out:
If you want user interaction, you should make your main code act as a proxy to the daemon, handling input and output while the daemon just does the processing. That is, your daemon's interface should take the input strings (or objects, if easier) as parameters and return similar objects that your proxy will present to the user.
If you want debugging output, a quick patch would be to read directly from the /tmp/sdaemon.log file, which is where all the daemon's output goes (according to line 44). A cleaner fix would be to implement proper logging throughout your code.
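A sketch of the first option, reusing the question's class and method names: the remote method returns its text instead of printing it inside the daemon, and the proxy side decides where to display it.

```python
class MyObject(object):
    def greeting(self):
        # Return the text instead of printing it inside the daemon;
        # Pyro4 serializes the return value back to the caller.
        return "Hello world"

# Client side (hypothetical, mirroring the question's proxy setup):
# mo = Pyro4.Proxy("PYRO:%s@%s:%i" % (USER, HOST, PORT))
# print(mo.greeting())   # the proxy process owns stdout, so this just works
```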

Microcontroller output to Python CGI script

I bought this temperature sensor logger kit: http://quozl.netrek.org/ts/. It works great with the supplied C code. I like to use Python because of its simplicity, so I wrote a script in Python that displays the output from the microcontroller. I only have one temperature sensor hooked up to the kit. I want the temperature to be displayed on a web page, but can't seem to figure it out. I'm pretty sure it has something to do with the output from the micro having \r\n DOS line endings, which the Linux web server does not interpret properly. The book I have says "Depending on the web server you are using, you might need to make configuration changes to understand how to serve CGI files." I am using Debian and Apache 2, and basic CGI scripts work fine.
Here is my code for just displaying the sensor to the console (this works fine):
import serial

ser = serial.Serial('/dev/ttyS0', 2400)
while 1:
    result = ser.readline()
    if result:
        print result
Here is my test.cgi script that works:
#!/usr/bin/python
print "Content-type: text/html\n"
print "<title>CGI Text</title>\n"
print "<h1>cgi works!</h1>"
Here is the cgi script I have started to display temp (doesn't work - 500 internal server error):
#!/usr/bin/python
import sys, serial
sys.stderr = sys.stdout
ser = serial.Serial('/dev/ttyS0', 2400)
print "Content-type: text/html\n"
print """
<title>Real Time Temperature</title>
<h1>Real Time Temperature:</h1>
"""
#result = ser.readline()
#if result:
print ser.readline()
If I run python rtt.cgi in the console, it outputs the correct HTML and temperature. I know this will not be real time and that the page will have to be reloaded every time the user wants to see the temperature, but that stuff is coming in the future. My apache2 error log says:
malformed header from script. Bad header= File "/usr/lib/cgi-bin/rtt.c: rtt.cgi
I'm guessing that the execution context under which your CGI is running is unable to complete the read() from the serial port.
Incidentally, the Python standard libraries have MUCH better ways for writing CGI scripts than what you're doing here, and even basic string handling offers a better way to interpolate your results (assuming your code has the necessary permissions to read() them) into the HTML.
At least I'd recommend something like:
#!/usr/bin/python
import sys, serial
sys.stderr = sys.stdout
ser = serial.Serial('/dev/ttyS0', 2400)
html = """Content-type: text/html

<html><head><title>Real Time Temperature</title></head><body>
<h1>Real Time Temperature:</h1>
<p>%s</p>
</body></html>
""" % ser.readline() # should be cgi.escape(ser.readline())!
print html
ser.close()
sys.exit(0)
Notice we just interpolate the result of ser.readline() into our string using the % string operator, and the blank line after the Content-type header separates it from the body. (Incidentally, your HTML was missing the <html>, <head>, <body>, and <p> (paragraph) tags.)
There are still problems with this. For example, we really should import cgi and wrap the foreign data in cgi.escape() to ensure that HTML entities are properly substituted for any reserved characters, etc.
I'd suggest further reading: [Python Docs]: http://docs.python.org/library/cgi.html
One more time:
# Added to allow cgi-bin to execute cgi, python and perl scripts
ScriptAlias /cgi-bin/ /var/www/cgi-bin/
AddHandler cgi-script .cgi .py .pl
<Directory /var/www>
    Options +ExecCGI
    AddHandler cgi-script .cgi .py .pl
</Directory>
Michael,
It looks like the issue is definitely permissions; however, you shouldn't try to make your script have the permissions of /dev/ttyS0. What you will probably need to do is spawn another process which, as its first step, changes its group to the group of the /dev/ttyS0 device. On my box that's 'dialout'; yours may be different.
You'll need to import the os package; look in the docs under Process Parameters, where you will find functions that allow you to change your ownership. You will also need one of the functions under Process Management, also in the os package. These functions spawn processes, but you will need to choose one that returns the data from the spawned process. The subprocess package may be better for this.
The reason you need to spawn another process is that the CGI script needs to run under the Apache process, while the spawned process needs access to the serial port.
If I get a chance in the next few days I'll try to put something together for you, but give it a try; don't wait for me.
Also, one other thing: an HTTP header block needs to end in two CRLF sequences, so your header needs to be:
print "Content-type: text/html\r\n\r\n"
If you don't do this, your browser may not know where the header ends and the entity data begins. Read RFC 2616.
~Carl
