Best way to check if Python program is connected to the Internet?

I am using python-requests and noticed that when the machine is not connected to the Internet, a failed page fetch surfaces later as seemingly unrelated errors.
The documentation mentions exceptions, but not how to use them. How should the program verify that it is indeed connected, and fail gracefully if not?
I currently have no error-handling system in place, and this is what I get:
File "mem.py", line 78, in <module>
login()
File "mem.py", line 38, in login
csrf = s.cookies['csrftoken']
File "/usr/local/lib/python2.7/site-packages/requests/cookies.py", line 276, in __getitem__
return self._find_no_duplicates(name)
File "/usr/local/lib/python2.7/site-packages/requests/cookies.py", line 331, in _find_no_duplicates
raise KeyError('name=%r, domain=%r, path=%r' % (name, domain, path))
KeyError: "name='csrftoken', domain=None, path=None"

It appears that HTTP errors are not raised by default in python-requests. This answer sums it up nicely: https://stackoverflow.com/a/24460981/908703
import requests

def connected_to_internet(url='http://www.google.com/', timeout=5):
    try:
        _ = requests.get(url, timeout=timeout)
        return True
    except requests.ConnectionError:
        print("No internet connection available.")
        return False
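As a related safeguard, HTTP error statuses (4xx/5xx) also go unnoticed unless you ask for them, which is how a failed login can surface later as the cookie KeyError in the traceback above. A minimal sketch of a guarded login flow; the URL and cookie name here are hypothetical, for illustration only:

import requests

def login():
    s = requests.Session()
    try:
        # Hypothetical login URL; substitute the real one.
        resp = s.get('https://example.com/login', timeout=5)
        resp.raise_for_status()  # raise on 4xx/5xx instead of failing later
    except requests.RequestException as e:
        print("Could not reach the login page: %s" % e)
        return None
    # Read the CSRF cookie only after the request is known to have succeeded.
    return s.cookies.get('csrftoken')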

Related

Python GRPC - Failed to pick subchannel

I'm trying to set up a gRPC client in Python to hit a particular server. The server is set up to require authentication via access token. Therefore, my implementation looks like this:
def create_connection(target, access_token):
    credentials = composite_channel_credentials(
        ssl_channel_credentials(),
        access_token_call_credentials(access_token))
    target = target if target else DEFAULT_ENDPOINT
    return secure_channel(target=target, credentials=credentials)

conn = create_connection(svc="myservice", session=Session(client_id=id, client_secret=secret))
stub = FakeStub(conn)
stub.CreateObject(CreateObjectRequest())
The issue I'm having is that, when I attempt to use this connection I get the following error:
File "<stdin>", line 1, in <module>
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 216, in __call__
response, ignored_call = self._with_call(request,
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call
return call.result(), call
File "anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result
raise self
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 266, in with_call
return self._with_call(request,
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 257, in _with_call
return call.result(), call
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 343, in result
raise self
File "\anaconda3\envs\test\lib\site-packages\grpc\_interceptor.py", line 241, in continuation
response, call = self._thunk(new_method).with_call(
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 957, in with_call
return _end_unary_response_blocking(state, call, True, None)
File "\anaconda3\envs\test\lib\site-packages\grpc\_channel.py", line 849, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    status = StatusCode.UNAVAILABLE
    details = "failed to connect to all addresses"
    debug_error_string = "{
        "created":"@1633399048.828000000",
        "description":"Failed to pick subchannel",
        "file":"src/core/ext/filters/client_channel/client_channel.cc",
        "file_line":3159,
        "referenced_errors":[
            {
                "created":"@1633399048.828000000",
                "description":"failed to connect to all addresses",
                "file":"src/core/lib/transport/error_utils.cc",
                "file_line":147,
                "grpc_status":14
            }
        ]
    }"
I looked up the status code associated with this response and it seems that the server is unavailable. So, I tried waiting for the connection to be ready:
channel_ready_future(conn).result()
but this hangs. What am I doing wrong here?
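(For reference, grpc.channel_ready_future returns a grpc.Future, so the wait can be bounded instead of hanging indefinitely; a minimal sketch using the public grpc API:)

import grpc

try:
    # Wait at most 10 seconds for the channel to become ready.
    grpc.channel_ready_future(conn).result(timeout=10)
except grpc.FutureTimeoutError:
    print("Channel never became ready; the server is unreachable.")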
UPDATE 1
I converted the code to use the async connection instead of the synchronous connection but the issue still persists. Also, I saw that this question had also been posted on SO but none of the solutions presented there fixed the problem I'm having.
UPDATE 2
I assumed that this issue was occurring because the client couldn't find the TLS certificate issued by the server so I added the following code:
def _get_cert(target: str) -> bytes:
    split_around_port = target.split(":")
    # Pass the port as an int to ssl.get_server_certificate.
    data = ssl.get_server_certificate((split_around_port[0], int(split_around_port[1])))
    return str.encode(data)
and then changed ssl_channel_credentials() to ssl_channel_credentials(_get_cert(target)). However, this also hasn't fixed the problem.
The issue here was actually fairly deep. First, I turned on tracing, set the gRPC log level to debug, and then found this line:
D1006 12:01:33.694000000 9032 src/core/lib/security/transport/security_handshaker.cc:182] Security handshake failed: {"created":"@1633489293.693000000","description":"Cannot check peer: missing selected ALPN property.","file":"src/core/lib/security/security_connector/ssl_utils.cc","file_line":160}
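(For reference, this kind of tracing is controlled by gRPC's documented environment variables; a minimal sketch, noting that they must be set before the gRPC runtime starts, so exporting them in the shell is safest:)

import os

# gRPC core reads these at startup; exporting them in the shell
# before launching Python is the most reliable approach.
os.environ['GRPC_VERBOSITY'] = 'DEBUG'
os.environ['GRPC_TRACE'] = 'all'

import grpc  # imported only after tracing is configured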
This led me to this GitHub issue, which stated that the problem was grpcio not inserting the h2 protocol into requests, causing ALPN-enabled servers to return that specific error. Some further digging led me to this issue, and since the server I was connecting to also uses Envoy, it was just a matter of modifying the Envoy deployment file so that:
clusters:
- name: my-server
  connect_timeout: 10s
  type: strict_dns
  lb_policy: round_robin
  http2_protocol_options: {}
  hosts:
  - socket_address:
      address: python-server
      port_value: 1337
  tls_context:
    common_tls_context:
      tls_certificates:
      alpn_protocols: ["h2"]   # <====== Add this.
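(As a quick way to verify a fix like this, the standard library can report which ALPN protocol the server negotiates; a sketch, with the host and port taken from the config above purely for illustration, and with certificate verification possibly needing adjustment for self-signed certs:)

import socket
import ssl

# Offer h2 during the TLS handshake and print what the server picks.
ctx = ssl.create_default_context()
ctx.set_alpn_protocols(['h2'])

with socket.create_connection(('python-server', 1337)) as sock:
    with ctx.wrap_socket(sock, server_hostname='python-server') as tls:
        print(tls.selected_alpn_protocol())  # expect 'h2' after the fix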

How should I handle exceptions raised in the @jwt_required decorator? (in flask-jwt-extended)

I have a function with the @jwt_required decorator.
class Test(Resource):
    @jwt_required
    def get(self):
        return {"test": "ok"}
This works fine when the correct HTTP header is set, i.e.
Authorization: Bearer [TOKEN]
but when the token is invalid/wrong or messed with, a jwt.exceptions.DecodeError is raised:
File "env/lib/python3.6/site-packages/flask_restplus/resource.py", line 44, in dispatch_request
resp = meth(*args, **kwargs)
File "env/lib/python3.6/site-packages/flask_jwt_extended/view_decorators.py", line 103, in wrapper
verify_jwt_in_request()
File "env/lib/python3.6/site-packages/flask_jwt_extended/view_decorators.py", line 32, in verify_jwt_in_request
jwt_data = _decode_jwt_from_request(request_type='access')
File "env/lib/python3.6/site-packages/flask_jwt_extended/view_decorators.py", line 267, in _decode_jwt_from_request
decoded_token = decode_token(encoded_token, csrf_token)
File "env/lib/python3.6/site-packages/flask_jwt_extended/utils.py", line 80, in decode_token
encoded_token, verify=False, algorithms=config.algorithm
File "env/lib/python3.6/site-packages/jwt/api_jwt.py", line 84, in decode
payload, _, _, _ = self._load(jwt)
File "env/lib/python3.6/site-packages/jwt/api_jws.py", line 183, in _load
raise DecodeError('Not enough segments')
jwt.exceptions.DecodeError: Not enough segments
I cannot rely on clients always using correct tokens, and I cannot catch the exception myself because it is raised in the decorator rather than in my own function, so the result is an HTTP 500 error. How should I handle the exception more gracefully?
Flask-jwt-extended should be handling those for you gracefully. If it isn't, you are probably using another extension (flask-restful, for example) that breaks native Flask error handling. You can try setting app.config['PROPAGATE_EXCEPTIONS'] = True to fix it, or take a look at this thread for advice if a different Flask extension is causing problems: https://github.com/vimalloc/flask-jwt-extended/issues/86
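(A minimal sketch of that option, plus a fallback error handler so a malformed token yields a 4xx instead of a 500; the app setup and response message here are made up for illustration:)

from flask import Flask, jsonify
from jwt.exceptions import DecodeError

app = Flask(__name__)
app.config['PROPAGATE_EXCEPTIONS'] = True  # let flask-jwt-extended's handlers run

# Fallback in case the DecodeError still escapes the extension.
@app.errorhandler(DecodeError)
def handle_malformed_token(e):
    return jsonify(msg='Malformed token: %s' % e), 422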

Tweepy SSLError

I have a Django management command, launched via supervisord, that uses tweepy to consume the twitter streaming API.
The agent works quite well; however, I notice in the logs that there's an SSLError every 10-15 minutes, after which supervisord relaunches the agent.
The tweepy package is the latest, version 1.11. The server is Ubuntu 12.04 LTS. I've tried installing the cacert into the keychain as mentioned in the link below, but no luck.
Twitter API SSL Root CA Certificate
Any suggestions?
[2012-08-26 19:28:15,656: ERROR] Error establishing the connection
Traceback (most recent call last):
  File ".../.../datasinks.py", line 102, in start
    stream.filter(locations=self.locations)
  File "/site/pythonenv/local/lib/python2.7/site-packages/tweepy/streaming.py", line 228, in filter
    self._start(async)
  File "/site/pythonenv/local/lib/python2.7/site-packages/tweepy/streaming.py", line 172, in _start
    self._run()
  File "/site/pythonenv/local/lib/python2.7/site-packages/tweepy/streaming.py", line 117, in _run
    self._read_loop(resp)
  File "/site/pythonenv/local/lib/python2.7/site-packages/tweepy/streaming.py", line 150, in _read_loop
    c = resp.read(1)
  File "/usr/lib/python2.7/httplib.py", line 541, in read
    return self._read_chunked(amt)
  File "/usr/lib/python2.7/httplib.py", line 574, in _read_chunked
    line = self.fp.readline(_MAXLINE + 1)
  File "/usr/lib/python2.7/socket.py", line 476, in readline
    data = self._sock.recv(self._rbufsize)
  File "/usr/lib/python2.7/ssl.py", line 241, in recv
    return self.read(buflen)
  File "/usr/lib/python2.7/ssl.py", line 160, in read
    return self._sslobj.read(len)
SSLError: The read operation timed out
Following is an outline of the code.
from tweepy import API, OAuthHandler
from tweepy.streaming import StreamListener, Stream
# snip other imports

class TwitterSink(StreamListener, TweetSink):

    def __init__(self):
        self.auth = OAuthHandler(settings.TWITTER_OAUTH_CONSUMER_KEY, settings.TWITTER_OAUTH_CONSUMER_SECRET)
        self.auth.set_access_token(settings.TWITTER_OAUTH_ACCESS_TOKEN_KEY, settings.TWITTER_OAUTH_ACCESS_TOKEN_SECRET)
        self.locations = ''  # snip for brevity

    def start(self):
        try:
            stream = Stream(self.auth, self, timeout=60, secure=True)
            stream.filter(locations=self.locations)
        except SSLError as e:
            logger.exception("Error establishing the connection")
        except IncompleteRead as r:
            logger.exception("Error with HTTP connection")

    # snip on_data()
    # snip on_timeout()
    # snip on_error()
The certificate doesn't seem to be the problem. The error is just a timeout. Seems like an issue with tweepy's SSL handling to me. The code is equipped to handle socket.timeout and reopen the connection, but not a timeout arriving through SSLError.
Looking at the ssl module code (or docs), though, I don't see a pretty way to catch that. The SSLError object is raised without any arguments, just a string description. For lack of a better solution, I'd suggest adding the following right before line 118 of tweepy/streaming.py:
except SSLError, e:
    if 'timeout' not in e.message.lower():  # support all timeouts
        exception = e
        break
    if self.listener.on_timeout() == False:
        break
    if self.running is False:
        break
    conn.close()
    sleep(self.snooze_time)
Why it's timing out in the first place is a good question. I have nothing better than repeating Travis Mehlinger's suggestion of setting a higher timeout.
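(For example, raising the constructor timeout on the question's stream; the value here is arbitrary:)

# Same Stream as in the question, with a more generous read timeout.
stream = Stream(self.auth, self, timeout=300.0, secure=True)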
Here is how I have it (modified solution from here https://groups.google.com/forum/?fromgroups=#!topic/tweepy/80Ayu1joGJ4):
l = MyListener()
auth = OAuthHandler(settings.CONSUMER_KEY, settings.CONSUMER_SECRET)
auth.set_access_token(settings.ACCESS_TOKEN, settings.ACCESS_TOKEN_SECRET)

# connect to stream
stream = Stream(auth, l, timeout=30.0)

while True:
    # Call tweepy's userstream method with async=False to prevent
    # creation of another thread.
    try:
        stream.filter(follow=reporters, async=False)
        # Normal exit: end the thread
        break
    except Exception, e:
        # Abnormal exit: Reconnect
        logger.error(e)
        nsecs = random.randint(60, 63)
        logger.error('{0}: reconnect in {1} seconds.'.format(
            datetime.datetime.utcnow(), nsecs))
        time.sleep(nsecs)
There is another alternative solution provided on Github:
https://github.com/tweepy/tweepy/pull/132

Repeated POST request is causing error "socket.error: (99, 'Cannot assign requested address')"

I have a web service deployed on my box. I want to check the result of this service with various inputs. Here is the code I am using:
import sys
import httplib
import urllib

apUrl = "someUrl:somePort"
fileName = sys.argv[1]
conn = httplib.HTTPConnection(apUrl)
titlesFile = open(fileName, 'r')
try:
    for title in titlesFile:
        title = title.strip()
        params = urllib.urlencode({'search': 'abcd', 'text': title})
        conn.request("POST", "/somePath/", params)
        response = conn.getresponse()
        data = response.read().strip()
        print data + "\t" + title
        conn.close()
finally:
    titlesFile.close()
This code gives an error after the same number of lines is printed every time (28233). Error message:
Traceback (most recent call last):
  File "testService.py", line 19, in ?
    conn.request("POST", "/somePath/", params)
  File "/usr/lib/python2.4/httplib.py", line 810, in request
    self._send_request(method, url, body, headers)
  File "/usr/lib/python2.4/httplib.py", line 833, in _send_request
    self.endheaders()
  File "/usr/lib/python2.4/httplib.py", line 804, in endheaders
    self._send_output()
  File "/usr/lib/python2.4/httplib.py", line 685, in _send_output
    self.send(msg)
  File "/usr/lib/python2.4/httplib.py", line 652, in send
    self.connect()
  File "/usr/lib/python2.4/httplib.py", line 636, in connect
    raise socket.error, msg
socket.error: (99, 'Cannot assign requested address')
I am using Python 2.4.3, and I am calling conn.close() as well. So why is this error being raised?
This is not a Python problem.
In Linux kernel 2.4 the ephemeral port range runs from 32768 through 61000, so the number of available ports is 61000 - 32768 + 1 = 28233. Because the web service in question is quite fast (<5 ms, actually), all of those ports get used up, and the program then has to wait a minute or two for them to close.
What I did was count the number of conn.close() calls; when the count reached 28000, I waited 90 seconds and reset the counter.
BIGYaN identified the problem correctly and you can verify that by calling "netstat -tn" right after the exception occurs. You will see very many connections with state "TIME_WAIT".
The alternative to waiting for port numbers to become available again is to simply use one connection for all requests. You are not required to call conn.close() after each call of conn.request(). You can simply leave the connection open until you are done with your requests.
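(A minimal sketch of that approach, reusing the question's hypothetical URL and path; httplib reconnects on its own if the server drops the connection, since HTTPConnection.auto_open is enabled by default:)

import sys
import httplib
import urllib

conn = httplib.HTTPConnection("someUrl:somePort")
titlesFile = open(sys.argv[1], 'r')
try:
    for title in titlesFile:
        title = title.strip()
        params = urllib.urlencode({'search': 'abcd', 'text': title})
        conn.request("POST", "/somePath/", params)
        response = conn.getresponse()
        # Read the body fully so the connection can be reused.
        data = response.read().strip()
        print data + "\t" + title
finally:
    titlesFile.close()
    conn.close()  # close once, after all requests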
I too faced a similar issue while executing multiple POST requests using Python's requests library in Spark. To make it worse, I used multiprocessing on each executor to post to the server, so thousands of connections were created within seconds, each taking a few seconds to leave the TIME_WAIT state and release its port for the next set of connections.
Out of all the solutions available over the internet that speak of disabling keep-alive or using requests.Session(), I found this answer to work; it passes 'Connection': 'close' as a header parameter. You may need to put the header content in a separate line outside the post command, though.
headers = {
    'Connection': 'close'
}

with requests.Session() as session:
    response = session.post('https://xx.xxx.xxx.x/xxxxxx/x', headers=headers, files=files, verify=False)
    results = response.json()
    print results
This was my answer to a similar issue, using the solution above.

App Engine urlfetch is raising exceptions when i think it should not be

I've written my first Python application with the App Engine APIs. It is intended to monitor a list of servers and notify me when one of them goes down, by sending a message to my iPhone using Prowl, sending me an email, or both.
The problem is that a few times a week it notifies me a server is down even when it clearly isn't. I've tested it with servers I know should be up virtually all the time, like google.com or amazon.com, but I get notifications for them too.
I've got a copy of the code running at http://aeservmon.appspot.com, you can see that google.com was added Jan 3rd but is only listed as being up for 6 days.
Below is the relevant section of the code from checkservers.py that does the checking with urlfetch. I assumed that the DownloadError exception would only be raised when the server couldn't be contacted, but perhaps I'm wrong.
What am I missing?
Full source is on GitHub under mrsteveman1/aeservmon (I can only post one link as a new user, sorry!)
def testserver(self, server):
    if server.ssl:
        prefix = "https://"
    else:
        prefix = "http://"
    try:
        url = prefix + "%s" % server.serverdomain
        result = urlfetch.fetch(url, headers={'Cache-Control': 'max-age=30'})
    except DownloadError:
        logging.info('%s could not be reached' % server.serverdomain)
        self.serverisdown(server, 000)
        return
    if result.status_code == 500:
        logging.info('%s returned 500' % server.serverdomain)
        self.serverisdown(server, result.status_code)
    else:
        logging.info('%s is up, status code %s' % (server.serverdomain, result.status_code))
        self.serverisup(server, result.status_code)
UPDATE Jan 21:
Today I found one of the exceptions in the logs:
ApplicationError: 5
Traceback (most recent call last):
  File "/base/python_lib/versions/1/google/appengine/ext/webapp/__init__.py", line 507, in __call__
    handler.get(*groups)
  File "/base/data/home/apps/aeservmon/1.339312180538855414/checkservers.py", line 149, in get
    self.testserver(server)
  File "/base/data/home/apps/aeservmon/1.339312180538855414/checkservers.py", line 106, in testserver
    result = urlfetch.fetch(url, headers = {'Cache-Control' : 'max-age=30'} )
  File "/base/python_lib/versions/1/google/appengine/api/urlfetch.py", line 241, in fetch
    return rpc.get_result()
  File "/base/python_lib/versions/1/google/appengine/api/apiproxy_stub_map.py", line 501, in get_result
    return self.__get_result_hook(self)
  File "/base/python_lib/versions/1/google/appengine/api/urlfetch.py", line 331, in _get_fetch_result
    raise DownloadError(str(err))
DownloadError: ApplicationError: 5
Other folks have been reporting issues with the fetch service (e.g. http://code.google.com/p/googleappengine/issues/detail?id=1902&q=urlfetch&colspec=ID%20Type%20Status%20Priority%20Stars%20Owner%20Summary%20Log%20Component).
Can you print the exception? It may have more detail, e.g.:
"DownloadError: ApplicationError: 2 something bad"
