Pyngrok: how to keep the tunnel connected continuously - Python

I have just started using ngrok. Following the standard procedure, I can start a tunnel with ./ngrok tcp 22 and see that tunnel open in my dashboard.
But I would like to use pyngrok, and when I run:
from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok
ngrok.set_auth_token("<NGROK_AUTH_TOKEN>")
pyngrok_config = PyngrokConfig(config_path="/opt/ngrok/ngrok.yml")
ngrok.get_tunnels(pyngrok_config=pyngrok_config)
ssh_url = ngrok.connect()
It connects and generates a tunnel, but I can't see anything open in the dashboard. Why?
Maybe it's because the Python script executes, generates the URL, and then exits, but then how do I keep it running? And how do I start a tunnel using Python, or even the API? Please suggest a correct script, using Python or the API.

The thread with the ngrok tunnel will terminate as soon as the Python process terminates. So you are correct: the reason this is happening is that your script is not long-lived. The easiest way to accomplish this is by following the example in the documentation.
Another issue is how you're setting the authtoken. Since you're not using the default config_path, you need to set this before setting the authtoken so it gets updated in the correct file (you'd also need to pass it to connect()). There are a couple of ways to do this, but the easiest way from the docs is to just update the default config (since that's what will be used if you don't pass a pyngrok_config to any future method calls).
I also see that your response variable is ssh_url, so you probably want to start a TCP tunnel to a port other than 80 (the default). Perhaps you've configured this in your ngrok.yml, but if not, I've updated the call to connect() to ensure this is the type of tunnel started for you, in case others try to use this same code snippet.
Full disclosure, I am the developer of pyngrok. Here is your code snippet updated with my changes.
import os, time
from pyngrok.conf import PyngrokConfig
from pyngrok import ngrok, conf

conf.get_default().config_path = "/opt/ngrok/ngrok.yml"
ngrok.set_auth_token(os.environ.get("NGROK_AUTH_TOKEN"))

ssh_tunnel = ngrok.connect(22, "tcp")

ngrok_process = ngrok.get_ngrok_process()
try:
    # Block until CTRL-C or some other terminating event
    ngrok_process.proc.wait()
except KeyboardInterrupt:
    print(" Shutting down server.")
    ngrok.kill()

Related

Keeping ports open in a python script that is run continuously

I'm trying to develop a server script using Python 3.4 that runs perpetually and responds to client requests on up to 5 separate ports. My preferred platform is Debian 8.0. The platform currently runs on a virtual machine in the cloud. My script works fine when I run it off the command line; I need to now (1) keep it running once I log off the server and (2) keep several ports open through the script so that a Windows client can connect to them.
For (1),
After trying several options that didn't seem to work [I tried using upstart, added the script to rc.local, used nohup with & to run it off the terminal, etc.], I eventually found something that does seem to keep the script running, even if it's not very elegant: I wrote an hourly cron job that checks whether the script is in the process list, and if not, executes it. A watchdog entry of that sort is sketched below.
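(A sketch of such a crontab entry; the pgrep check and the hourly schedule are my assumptions, with the interpreter and script path taken from the ps output below. The [c] bracket trick keeps pgrep -f from matching cron's own shell command line.)
# Hypothetical hourly watchdog: start the script if it isn't already running
0 * * * * pgrep -f "[c]ronserver.py" > /dev/null || /usr/bin/python3.4 /home/userxyz/cronserver.py &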
Whenever I login to the VM now, I see the following output when I type 'ps -ef':
root 22007 21992 98 Nov10 14-12:52:59 /usr/bin/python3.4 /home/userxyz/cronserver.py
I assume that the script is running based on the fact that there is an active process in the system. I mention this part because I suspect that there could be a correlation with part (2) of my issue.
For (2),
The script is supposed to open ports 49100 - 49105 and listen for connection requests, etc. When I run the script from the terminal, zenmap from my client machine verifies that these ports are open. However, when the cron job initiates the script, these ports don't seem to stay open. My Windows client program can't connect to the script either.
The Python code I use for listening to a port:
import socket

f = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
f.bind((serviceIP, 49101))
f.listen(5)
while True:
    scName, address = f.accept()
    # [code to handle request]
    scName.shutdown(socket.SHUT_WR)
    scName.close()
Any insight or assistance would be greatly appreciated!
What you ask is not easy because it depends on a variety of factors:
What is the frequency of the data received?
How many clients are expected to connect to this server?
Is there a chance two clients try to connect at the same time?
How long does it take to handle the received data?
What do you need to do with your data?
Write to a database?
Write to a file?
Calculate something?
Etc.
Depending on your answer you'll have some design decisions to make for your solution.
But since you need an answer, here's a hack that represents one way to do things:
import socketserver
import threading
import datetime


class SleepyGaryReceptionHandler(socketserver.BaseRequestHandler):
    log_file_name = "/tmp/sleepygaryserver.log"

    def handle(self):
        # self.request is defined in BaseRequestHandler
        data_received = self.request.recv(1024)

        # self.client_address is also defined in BaseRequestHandler
        sender_address = self.client_address[0]

        # This is where you are supposed to do something with your data
        # This is an example
        self.write_to_log('Someone from {} sent us "{}"'.format(sender_address,
                                                                data_received))

        # A way to stop the server from going on forever
        # You could do this other ways, but it depends what condition
        # should cause the shutdown
        if data_received.startswith(b"QUIT"):
            finishing_thread = threading.Thread(target=self.finish_in_another_thread)
            finishing_thread.start()

    # This will be called in another thread to terminate the server
    # self.server is also defined in BaseRequestHandler
    def finish_in_another_thread(self):
        self.write_to_log("Shutting down the server")
        self.server.shutdown()

    # Write something (with a timestamp) to a text file so that we
    # know something is happening
    def write_to_log(self, message):
        timestamp = datetime.datetime.now()
        timestamp_text = timestamp.isoformat(sep=' ', timespec='seconds')
        with open(self.log_file_name, mode='a') as log_file:
            log_file.write("{}: {}\n".format(timestamp_text, message))


service_address = "localhost"
port_number = 49101

server = socketserver.TCPServer((service_address, port_number),
                                SleepyGaryReceptionHandler)
server.serve_forever()
I'm using the socketserver module here instead of listening directly on a socket. This standard library module has been written to simplify writing a server, so use it!
All I do here is write what has been received to a text file. You would have to adapt that part to your use; a test client for it is sketched below.
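To try it out, a minimal test client could look like this (my sketch, not part of the original answer; the host and port match the values at the bottom of the server script):
import socket

# Send the server one message; it will be logged with our address.
with socket.create_connection(("localhost", 49101)) as conn:
    conn.sendall(b"hello server")

# A message starting with QUIT triggers the server's shutdown path.
with socket.create_connection(("localhost", 49101)) as conn:
    conn.sendall(b"QUIT")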
To have it run continuously, use a cron job, but one that starts the script at boot. Since this script blocks until the server is stopped, we have to run it in the background. It would look something like this:
@reboot /usr/bin/python3 /home/sleepygary/sleppys_server.py &
I have tested it and after 5 hours it still does its thing.
Now, like I said, it is a hack. If you want to go all the way and behave like any other service on your computer, you have to program it in a certain way. You can find more information on this page: https://www.freedesktop.org/software/systemd/man/daemon.html
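For illustration, a systemd unit for this script might look roughly like the following (a sketch under my own assumptions; the unit name and file path are hypothetical, not taken from the linked page):
# /etc/systemd/system/sleepygary.service (hypothetical name and path)
[Unit]
Description=SleepyGary socket server
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/sleepygary/sleppys_server.py
Restart=on-failure

[Install]
WantedBy=multi-user.target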
I'm really tired so there may be some errors here and there.

Broken pipe error and connection reset by peer 104

I'm using the Bottle server to implement my own server, with an implementation not far from the simple "hello world" here; my own implementation is (without the routing section, of course):
import bottle

bottleApp = bottle.app()
bottleApp.run(host='0.0.0.0', port=80, debug=True)
My server keeps becoming unresponsive, and then I get in the browser: Connection reset by peer, broken pipe errno 32.
The logs give me almost exactly the same stack traces as in that question.
Here are my own logs:
What I tried so far, without success:
Wrapping the server run line with try/except, something like the answer by "mhawke" shown here.
This stopped the error messages in the logs, apparently because I caught them in the except clause. But the problem is that catching the exception like that means we have been thrown out of the run method's context, and I want to catch it in a way that will not cause my server to fall.
I don't know if that's possible without touching the inner implementation files of bottle.
Adding this before the server run line:
from signal import signal, SIGPIPE, SIG_DFL
signal(SIGPIPE,SIG_DFL)
As suggested here, but it seems that it didn't have any impact on the Broken pipe / connection reset errors or on server responsiveness.
I thought of also trying the second answer here, but I have no idea where to place that code in the context of the Bottle server.
This sounds like a permissions issue or a firewall.
If you really need to listen on port 80, then you need to run with a privileged account. You will also probably need to open port 80 for TCP traffic.
I can see you're using something that appears to be POSIX (Linux/Unix/OS X). If you post which OS you are using, I can edit this answer to be more specific about how to open the firewall and execute privileged commands (probably sudo, but who knows).
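If port 80 isn't strictly required, a common unprivileged alternative (my sketch, not part of the original answer) is to bind to a high port and let the firewall or a front-end proxy handle the privileged forwarding:
import bottle

bottleApp = bottle.app()

# Ports above 1024 (8080 here is an arbitrary choice) need no root
# account; forward 80 -> 8080 at the firewall or proxy if needed.
bottleApp.run(host='0.0.0.0', port=8080, debug=True)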

Strange behaviour in Python SocketServer

I have created a Python socket server, using a class inherited from SocketServer.BaseRequestHandler and overriding the setup and handle methods. Of course, SocketServer.BaseRequestHandler.setup is called at the end of my own setup.
This is my server class:
import SocketServer

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30

A typical forking socket server.
Here is how I run my server:
while True:
    try:
        server = MyServer((host, port), MyRequestHandler)
        print('Server listening on', (host, port))
        server.timeout = 300  # seconds
        server.serve_forever()
    except:
        print('Error with server, retrying in 5 seconds...')
        print(sys.exc_info())
        sleep(5)
host and port are predefined, no problem with them.
The server works fine, except when the client count reaches 40. After that number, no new connections are accepted; all are refused. I checked this with a test client Python script from my own system. Only 40!
Why 40? I have checked the source code for SocketServer and found nothing related to this. I currently have no clue regarding this issue. Any, and I really mean it, any help is appreciated :))
Thanks in advance
OS: CentOS 6.5
This is probably unrelated to Python. Tune your Linux kernel; in the testing phase, do things like:
turn syncookies off
increase the file handles available to the user (every socket opened is also a file handle used; maybe you're running out of them?)
look at stuff like this: http://people.redhat.com/alikins/system_tuning.html#tcp
and: http://people.redhat.com/alikins/system_tuning.html#fds
check if stuff like fail2ban is installed (http://www.fail2ban.org/wiki/index.php/Main_Page)
check if rate limits are applied by iptables (in testing phase you could do iptables -F after making sure that default chain policy is ACCEPT)
and last but not in the very least, check dmesg, /var/log/messages, /var/log/syslog, etc
One thing that theoretically might be related to Python is SO_REUSEADDR:
http://www.unixguide.net/network/socketfaq/4.5.shtml
Check if you have it set for your socket.
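For a SocketServer-based server like this one, that setting might be applied as follows (a sketch; TCPServer reads the allow_reuse_address class attribute in server_bind()):
import SocketServer

class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    # When True, TCPServer sets SO_REUSEADDR on the listening socket
    # before bind(), so restarts don't fail with "Address already in use".
    allow_reuse_address = True
    timeout = 30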
UPDATE:
I just realized that, since the 40 connections your socket server maxes out at is actually pretty low, the simplest option could be running your socket server through systrace; just use the -f flag to track forked processes as well. You could, for example, start the socket server, open 35 simultaneous connections, then attach systrace to the running process, set up 5 more connections, and see what systrace reports. Very often in such situations syscalls fail with errors that are visible in systrace and allow pinpointing the root cause relatively easily.
I really have no idea how I missed this in the source!
class ForkingMixIn:
    """Mix-in class to handle each request in a new process."""

    timeout = 300
    active_children = None
    max_children = 40
Yeah, now I see the max_children property.
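So the fix is simply to raise that cap on the subclass (a sketch; the value 200 is an arbitrary example):
class MyServer(SocketServer.ForkingMixIn, SocketServer.TCPServer):
    timeout = 30
    # Override ForkingMixIn's default cap of 40 concurrent child processes
    max_children = 200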
Thanks guys

Python not getting IP if cable connected after script has started

I hope this doesn't cross into superuser territory.
So I have an embedded Linux system, where the system processes are naturally quite stripped down. I'm not quite sure which system process monitors the physical layer and starts a DHCP client when the network cable is plugged in, so I made one myself.
The problem is that if I have a Python script using HTTP connections running before I have an IP address, it will never get a connection. Even after I have a valid IP, Python still reports
"Temporary error in name resolution"
So how can I get Python to notice the newly available connection without restarting the script?
Alternatively, am I missing some procedure that Linux normally runs when a network cable is connected?
The DHCP client I am using is udhcpc, and the Python version is 2.6, using httplib for connections.
Sounds like httplib caches /etc/resolv.conf, which is updated after your DHCP client obtains an IP.
You might consider writing a wrapper script that waits until an IP address is obtained, then calls your Python script.
Alternatively, you could try, within your Python script, not opening any socket connections until you have acquired an IP address and /etc/resolv.conf has been updated.
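A sketch of such a wrapper (my illustration; the probe address, timeout, and script path are assumptions). It deliberately avoids DNS, and it starts the real script only once the network is reachable, in a fresh process whose libc will read the updated /etc/resolv.conf:
import socket
import subprocess
import time

def network_up():
    # Probe a well-known IP (8.8.8.8:53 is an example) so no DNS is needed.
    try:
        s = socket.create_connection(("8.8.8.8", 53), timeout=2)
        s.close()
        return True
    except socket.error:
        return False

while not network_up():
    time.sleep(5)

# Path to the real script is hypothetical.
subprocess.call(["python", "/path/to/real_script.py"])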
Edit
httplib uses socket.create_connection(), which, after some searching, I found does cache /etc/resolv.conf. (Well, it seems that the embedded version of libc is actually doing the caching.)
You could try SugarLabs' solution, although I can't speak to its effectiveness.
It might be that one of the Python modules is caching the state of the network, and so it isn't using the latest settings. Try reloading all the network-related modules. A simple example of this is:
import sys, socket, urllib

for i in [sys, socket, urllib]:
    reload(i)
If that doesn't work, look around in sys.modules to see what else got imported, and try more modules there. A more extreme version of the reloading is the code below, which you could try as a last-ditch effort.
code
import sys

print 'reloading %s modules ' % len(sys.modules)
for name, module in sys.modules.items():
    try:
        reload(module)
    except:
        print 'failed to import %s' % name
output
reloading 42 modules
failed to import __main__
failed to import encodings.encodings
failed to import encodings.codecs
failed to import lazr
failed to import encodings.__builtin__
After a lot more research, the glibc problem jedwards suggested seemed to be the cause. I did not find a solution, but made a workaround for my use case.
Since I only use one URL, I added my own "resolv file".
A small daemon gets the IP address of the URL when the PHY reports the cable is connected. This IP is saved to "my own resolv.conf". From this file the Python script retrieves the IP to use for posts.
Not really a good solution, but a solution.
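A sketch of the posting side of that workaround (the file path, hostname, and endpoint are my own placeholders):
import httplib

# Read the pre-resolved IP that the daemon wrote (path is hypothetical).
with open("/tmp/my_resolv.conf") as f:
    ip = f.read().strip()

# Connect by IP so no DNS lookup happens; send the real hostname in the
# Host header so virtual hosting still routes correctly.
conn = httplib.HTTPConnection(ip, timeout=10)
conn.request("POST", "/endpoint", "payload", {"Host": "example.com"})
print conn.getresponse().status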

Restart python script if internet goes down

I have a Python script (running on Mac OS X) that needs to be restarted when the internet goes down. If the internet is down, I would like to kill the current script, wait for the internet to come back up, and then restart it. Or, if possible, restart the function from within.
The problematic section of the Python code is as follows:
import tweetstream

# ...

with tweetstream.FilterStream(username, password, track=words) as stream:
    for tweet in stream:
        db.tweets.save(tweet)
Currently, if the internet goes down, the stream stops and doesn't reconnect.
It depends on the OS; there are a few OS-specific methods.
A cross-platform method would be your own ping, which sends some packets to an Internet server. If you cannot get a reply, that means the Internet connection is down.
Try using this Python implementation of ping as a subprocess. Thus, if too many timeouts occur, you'll know the network is down and you can re-initiate the tweet process (however, to do this, you should probably put the entire tweeting process in a function of its own).
Perhaps you could try something like this:
import urllib2

def internet_on():
    try:
        response = urllib2.urlopen('http://74.125.131.94/', timeout=1)
        return True
    except urllib2.URLError as err:
        pass
    return False
74.125.131.94 is the IP address of google.co.in. You can use whatever site you think will respond more quickly. Using a numerical IP address avoids a DNS lookup, which may block the urllib2.urlopen call. The timeout=1 parameter makes sure that the call to urlopen will not take longer than 1 second even if the internet is not available.
Now you just need to call the internet_on() function. It will return True if the connection is on, and False otherwise. Then you might want to wrap all the tweeting code inside a function and call it (as @inspectorG4dget suggested).
EDIT:
For continuous checking, you could do something like:
def check():
    while not internet_on():
        pass
    print "internet connection is on"
    # call the tweet stuff function here
Then, when your stream stops, just call the check() function; when the internet connection is back, it will call your tweet function to restart it.
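Putting it together with the question's snippet, the restart loop might look like this (a sketch; using tweetstream.ConnectionError as the trigger is my assumption, and username, password, words, and db come from the question's elided code):
import tweetstream

def stream_tweets():
    with tweetstream.FilterStream(username, password, track=words) as stream:
        for tweet in stream:
            db.tweets.save(tweet)

while True:
    try:
        stream_tweets()
    except tweetstream.ConnectionError:
        check()  # block until internet_on() succeeds, then restart the stream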

Categories

Resources