How to launch a mock HTTP server in a separate thread in Python?

I am trying to mock an HTTP server in my python script, but it fails.
Here is what I am doing:
import bottle
from restclient import GET
from threading import Thread

@bottle.route("/go")
def index():
    return "ok"

server = Thread(target=bottle.run)
server.setDaemon(True)
server.start()
print "Server started..."
response = GET("http://127.0.0.1:8080/go")
assert response.body == "ok"
print "Done..."
Basically I am trying to launch the bottle.py HTTP server with one test route in a separate thread and then mock responses from it.
But it just won't work. The server is not getting started in a separate thread, so I always get "errno 111 connection refused" when trying to request it.
So the question is: how can it be solved? Is there any other ways to mock http servers?

You're not leaving enough time for the webserver to start up.
When you do:
server.start()
print "Server started..."
response = GET("http://127.0.0.1:8080/go")
You try to access the server immediately after starting it. Depending on which thread (the main one or the server one) gets to run first, and for how long, you might end up in a situation where the server hasn't started yet when you try to access it, hence the Connection Refused error.
You could try doing the following :
server.start()
import time
time.sleep(...) # Something long enough
# Continue your stuff.
As you can see in time.sleep -- sleeps thread or process?, time.sleep only puts the currently running thread to sleep, so you can use it to give your server thread enough time to start.
Now, all that is a bit hackish, so you might want to look into your server's startup process to see if there is a way to check whether it's up and running, and wait on that condition before sending your requests.
Looking at the bottle source now, I can't figure out a way to do that cleanly; you could always try to hit the server repeatedly until it finally responds, indicating the server is alive.
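For example, here is a rough sketch of that retry idea (assuming Python 2.6+, where socket.create_connection is available; the helper name wait_for_server is made up):

import socket, time

def wait_for_server(host, port, timeout=5.0):
    # Poll until the server accepts TCP connections, or give up
    # after `timeout` seconds.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            socket.create_connection((host, port), 0.2).close()
            return True
        except socket.error:
            time.sleep(0.1)
    return False

server.start()
assert wait_for_server("127.0.0.1", 8080), "server did not start in time"
response = GET("http://127.0.0.1:8080/go")

Once wait_for_server returns True, the first real request should no longer see Connection Refused.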

Related

Pyngrok: getting the tunnel to stay connected continuously

I have just started using ngrok. Following the standard procedure, I can start the tunnel using ./ngrok tcp 22 and see that tunnel open in my dashboard.
But I would like to use pyngrok, and there, when I use:
from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok
ngrok.set_auth_token("<NGROK_AUTH_TOKEN>")
pyngrok_config = PyngrokConfig(config_path="/opt/ngrok/ngrok.yml")
ngrok.get_tunnels(pyngrok_config=pyngrok_config)
ssh_url = ngrok.connect()
It connects and generates a tunnel, but I can't see anything open in the dashboard. Why?
Maybe it's because the Python script executes, generates the URL, and then stops and exits. But then how do I keep it running, and how do I even start a tunnel using Python or the API? Please suggest the correct script, using Python or the API.
The thread with the ngrok tunnel will terminate as soon as the Python process terminates. So you are correct, the reason this is happening is because your script is not long lived. The easiest way to accomplish this is by following the example in the documentation.
Another issue is how you're setting the authtoken. Since you're not using the default config_path, you need to set this before setting the authtoken so it gets updated in the correct file (you'd also need to pass it to connect()). There are a couple of ways to do this, but the easiest way from the docs is to just update the default config (since that's what will be used if you don't pass a pyngrok_config to any future method calls).
I also see that your response variable is ssh_url, so you probably want to start a TCP tunnel to a port other than 80 (the default). Perhaps you've configured this in your ngrok.yml, but if not, I've updated the call to connect() to ensure this is the type of tunnel started for you, in case others try to use this same code snippet.
Full disclosure, I am the developer of pyngrok. Here is your code snippet updated with my changes.
import os

from pyngrok import ngrok, conf

conf.get_default().config_path = "/opt/ngrok/ngrok.yml"
ngrok.set_auth_token(os.environ.get("NGROK_AUTH_TOKEN"))

ssh_tunnel = ngrok.connect(22, "tcp")

ngrok_process = ngrok.get_ngrok_process()
try:
    # Block until CTRL-C or some other terminating event
    ngrok_process.proc.wait()
except KeyboardInterrupt:
    print(" Shutting down server.")
    ngrok.kill()

Python Sockets Select is hanging - Doing other tasks while waiting for socket data?

I am rather a noob here, but I'm trying to set up a script where I can poll a socket, and when no socket data has been sent, a loop continues to run and do other things. I have been playing with several examples I found using select(), but no matter how I organize the code, it seems to stop on or near the server.recv() line and wait for a response. I want to skip out of this if no data has been sent by a client, or if no client connection exists.
Note that this application does not require the server script to send any reply data, if it makes any difference.
The actual application is to run a loop and animate some LEDs (which needs root access to the I/O on a Raspberry Pi). I am going to send this script data from another separate script via sockets that will pass in control parameters for the animations. This way the external script does not require root access.
So far the sending and receiving of data works great; I just can't get the loop to keep spinning in the absence of incoming data. It is my understanding that this is what select() was intended to allow, but the examples I've found don't seem to work that way.
I have attempted adding server.setblocking(0) a few different places to no avail. (If I understand correctly a non-blocking instance should allow the code to skip over the recv() if no data has been sent, but I may be off on this).
I have based my code on an example here:
http://ilab.cs.byu.edu/python/select/echoserver.html
Here is the server side script followed by the client side script.
Server Code: sockselectserver.py
#!/usr/bin/env python
import select
import socket
import sys

server = socket.socket()
host = socket.gethostname()
port = 20568
size = 1024
server.bind((host, port))
server.listen(5)

input = [server, sys.stdin]
running = 1
while running:
    inputready, outputready, exceptready = select.select(input, [], [])
    for s in inputready:
        if s == server:
            # handle the server socket
            client, address = server.accept()
            input.append(client)
        elif s == sys.stdin:
            # handle standard input
            junk = sys.stdin.readline()
            running = 0
        else:
            # handle all other sockets
            data = s.recv(size)
            if data:
                s.send(data)
            else:
                s.close()
                input.remove(s)
    print "looping"
server.close()
Client Code: skclient.py
#!/usr/bin/python
# This is client.py file
import socket               # Import socket module

s = socket.socket()         # Create a socket object
host = socket.gethostname() # Get local machine name
port = 20568                # Reserve a port for your service.
s.connect((host, port))

data = "123:120:230:51:210:120:55:12:35:24"
s.send(data)
print s.recv(1024)
s.close()                   # Close the socket when done
What I would like to achieve with this example is to see "looping" repeated forever; then, when the client script sends data, see that data printed, then see "looping" resume printing over and over. That would tell me it's doing what is intended, and I can take it from there.
Interestingly enough, when I test this as is, whenever I run the client, I see "looping" printed 3 times on the screen, then no more. I don't fully understand what is happening inside the select, but I'd assume it would only print once.
I tried moving the inputready, ... = select.select() call around to different places but found it appears to need to be called on every pass through the loop; otherwise the server stops responding (for example, if it is called once prior to the endless while loop).
I'm hoping this can be made simple enough that it can be taught to other hacker types in a maker class, so I'm hopeful I don't need to get too crazy with multi-threading or more elaborate solutions. As a last resort I'm considering logging all my parameters to MySQL from the external script and then using this script to query them back out of tables. I've got experience there and that would probably work, but it seems the socket angle would be a more direct solution.
Any help very much appreciated.
Great news: this was an easy fix. I wanted to post it in case anyone else needs it. The suggestion from acw1668 above got me going.
Simply added a timeout of "0" to the select.select() like this:
inputready,outputready,exceptready = select.select(input,[],[],0)
This is in the python docs but somehow I missed it. Link here: https://docs.python.org/2/library/select.html
Per the docs:
The optional timeout argument specifies a time-out as a floating point number in seconds. When the timeout argument is omitted the function blocks until at least one file descriptor is ready. A time-out value of zero specifies a poll and never blocks.
I tested the same code as above, adding a delay of 5 seconds using time.sleep(5) right after the print "looping" line. With the delay, if no data or client is present, the code just loops every 5 seconds and prints "looping" to the screen. If I kick off the client script during the 5 second delay, it pauses, and the message is processed the next time the 5 second delay ends. Occasionally it doesn't respond on the very next loop, but rather the one following. I assume this is because the first time through, server.accept() runs, and the next time through, s.recv() runs, which actually exchanges the data.
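To make that concrete, here is a trimmed sketch of the resulting pattern; animate_leds is a hypothetical stand-in for the real animation step:

import select
import socket

def animate_leds():
    # Hypothetical stand-in for one frame of LED animation work
    pass

server = socket.socket()
server.bind((socket.gethostname(), 20568))
server.listen(5)
inputs = [server]

while True:
    # A timeout of 0 turns select() into a pure poll: it returns
    # immediately whether or not anything is ready.
    readable, _, _ = select.select(inputs, [], [], 0)
    for s in readable:
        if s is server:
            client, address = server.accept()
            inputs.append(client)
        else:
            data = s.recv(1024)
            if data:
                print data  # new control parameters arrived
            else:
                inputs.remove(s)
                s.close()
    animate_leds()  # runs every pass, with or without socket traffic

Each pass through the loop does one frame of animation work; the sockets are only touched when select() reports them ready.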

Why is my logging messed up (socket, thread, signal)?

The log output of my Python program (using the builtin logging module, but it occurs even when using simple prints) is partially messed up, as you can see in the following image. Note that the first word of the first line is still correct, and then it gets mixed up:
I tried to visualize the situation where this happens:
Basically in my main thread/program I start a simple socketserver.TCPServer to listen for incoming messages. That server runs on its own thread (QtCore.QThread) so my program is not blocked. If some other application sends a message the request handler of the TCPServer will simply forward the message to the main thread using a QtCore.SIGNAL like:
self.emit(QtCore.SIGNAL('received(const QString)'), receivedMessage)
The program then does some parsing and computation with that message and logs those, thereby producing the gibberish seen above. At some point the logging returns back to working normally.
I am not sure if this is related to sockets or threading or both, but I guess it may be a common issue, and therefore I am thankful for any hints as to why this occurs.
I think I have located the problem:
When the external application wants to send a message it will always create a new client socket, connect to the server, send the message and then close the client socket.
The sock.close() does not seem to close immediately. The docs say I should call sock.shutdown(how) first, but unfortunately this did not help either. I can use a small time.sleep(0.5) after the close to fix the logging issue, but instead I did something like this:
def ensure_closed(self):
    # Keep calling recv() until it raises, which tells us the
    # socket really is gone.
    while True:
        try:
            self.sock.recv(1024)
        except:
            break

def close_connection(self):
    self.sock.close()
    self.ensure_closed()
    # Continue with other stuff.
    # Now the logging behaves normally.
There might be better ways to do it.
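For reference, the conventional half-close pattern the docs hint at (shutdown the write side, drain until EOF, then close) would look roughly like this; a sketch under the assumption that the sender owns the socket directly:

import socket

def close_connection(sock):
    # Half-close: tell the peer we are done sending, then read
    # until EOF so anything still in flight gets flushed, then
    # close for real.
    try:
        sock.shutdown(socket.SHUT_WR)
        while sock.recv(1024):
            pass
    except socket.error:
        pass
    sock.close()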

Cancel xmlrpc client request?

Is it possible to somehow cancel an xmlrpc client request?
Let's say that in one thread I have code like:
svr = xmlrpclib.ServerProxy('http://localhost:9092')
svr.DoSomethingWhichNeedTime()
I don't mean some kind of timeout... Sometimes, from another thread, I get an event telling me to cancel my work. And then I need to cancel this request.
I know that I can do it with twisted but, is it possible to do it with standard xmlrpclib?
First of all, it must be implemented on the server side, not in the client (xmlrpclib). If you simply interrupt your HTTP request to the XML-RPC server, it's not guaranteed that the long process running on the server will be interrupted at all. So xmlrpclib just can't have this functionality.
If you want to implement this behaviour, you need to create two types of requests. A request of the first type will tell your server to start some long process. It must be executed in the background (in another thread or process), and your XML-RPC server must send the response ("Process started!") to the client immediately. When you want to stop the process, the client sends another request that tells your server to stop executing the process.
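A minimal sketch of that design using the standard library's SimpleXMLRPCServer; the names start_work/stop_work and the loop body are made up for illustration:

import threading
import time
from SimpleXMLRPCServer import SimpleXMLRPCServer

cancel_event = threading.Event()

def long_process():
    for step in range(100):
        if cancel_event.isSet():
            return  # client asked us to stop
        time.sleep(0.1)  # stand-in for one interruptible unit of work

def start_work():
    # Respond immediately; the real work runs in the background so
    # the (single-threaded) server stays free to take stop requests.
    cancel_event.clear()
    threading.Thread(target=long_process).start()
    return "Process started!"

def stop_work():
    cancel_event.set()
    return "Stop requested."

server = SimpleXMLRPCServer(("localhost", 9092))
server.register_function(start_work)
server.register_function(stop_work)
server.serve_forever()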
Yes, if you want to do really dirty hacks....
Basically the ServerProxy object keeps a handle to the underlying socket/HTTP connection. If you reach into those internals and simply close() the socket, your client code will blow up with an exception. If you handle that exception properly, that's your cancel.
You can do it a little more sanely if you register your own transport class for the ServerProxy via the transport parameter and give it some cancel method that does what you want.
That won't stop the server from processing things, unless it reacts to closing the channel directly.
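Here is a rough sketch of that transport idea; xmlrpclib's internals differ across Python versions, so treat this as a starting point rather than a drop-in:

import xmlrpclib

class CancellableTransport(xmlrpclib.Transport):
    # Remembers the connection handed out by make_connection() so
    # another thread can close it mid-call.
    _live_connection = None

    def make_connection(self, host):
        self._live_connection = xmlrpclib.Transport.make_connection(self, host)
        return self._live_connection

    def cancel(self):
        # Closing the connection makes the in-flight call blow up
        # with an exception in the calling thread; handling that
        # exception is your "cancel".
        if self._live_connection is not None:
            self._live_connection.close()

transport = CancellableTransport()
svr = xmlrpclib.ServerProxy('http://localhost:9092', transport=transport)
# ... from another thread: transport.cancel()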

python xinetd client disconnection handling

This may or may not be a coding issue. It may also be an xinetd daemon issue, I do not know.
I have a python script which is triggered from a Linux server running xinetd. Xinetd has been set up to only allow one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd, the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (e.g. due to a reboot), the process is still alive on the server, and this blocks the client from connecting again once it's finished rebooting.
Q: How can I detect in Python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process killed when the client disconnects?
(I'm using python 2.4.3 on RHEL5 linux - solutions for 2.4 are needed, but 3.1 solutions would be useful to know also.)
Add a signal handler for SIGHUP. (x)inetd sends this upon the socket disconnecting.
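A minimal handler would look something like this sketch:

import signal
import sys

def on_hup(signum, frame):
    # The client went away; clean up and exit
    sys.exit(0)

signal.signal(signal.SIGHUP, on_hup)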
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signal and let it die.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least so long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could just start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application but then I didn't use any pipes other than the ones xinetd gave me.
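A sketch of one variant of that reader-thread idea: rather than waiting for a signal, read stdin in a daemon thread and exit on EOF (with xinetd, stdin is the client connection, so EOF means the client disconnected):

import os
import sys
import threading

def watch_stdin():
    # Blocks until the client disconnects; read() returns "" at EOF,
    # at which point we tear the whole process down.
    while sys.stdin.read(1024):
        pass
    os._exit(0)

watcher = threading.Thread(target=watch_stdin)
watcher.setDaemon(True)
watcher.start()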
I've seen several places on the net where people talk about the SIGHUP getting sent on client disconnection, so I've written an inetd python script to test out a couple of servers (one inetd and another xinetd), so you could use that to check on the signals getting sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]
name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and not i in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()
