Requests connection error after blackouts - python

We have several Raspberry Pi Zeros on which we installed Docker and run a Python 3.9 container. It runs a Bluetooth scanner, sends data online with requests, and does a few other things. All works well until there is a blackout at the site where a Pi is installed, after which requests goes haywire. The Pi restarts, Docker comes up and starts the container, but the moment it has to call my server with any requests.get/post, it always raises these errors:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 699, in urlopen
httplib_response = self._make_request(
File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.9/site-packages/urllib3/connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 353, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.9/site-packages/urllib3/connection.py", line 181, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0xb538aeb0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution
and
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='mywebsite.com', port=443): Max retries exceeded with url: /devices (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0xb538aeb0>: Failed to establish a new connection: [Errno -3] Temporary failure in name resolution'))
If I restart the container, it goes back to working fine. Thinking it had something to do with the provider we are using, I set the Pis' DNS to the Cloudflare servers, but if that were the cause I would have had this problem all the time and not only after blackouts. Does anyone know how I could fix it?
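There is no accepted fix in the thread, but since the failure is a name-resolution error that only appears right after a reboot, one common mitigation is to have the container wait (and retry) until DNS actually works before issuing its first request. A minimal sketch, with placeholder timeouts and retry counts (the hostname and path come from the error message above):

import socket
import time

import requests

API_HOST = "mywebsite.com"                # host from the error above
API_URL = "https://mywebsite.com/devices"

def wait_for_dns(hostname, timeout=300, delay=5):
    """Block until the hostname resolves, or give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            socket.getaddrinfo(hostname, 443)
            return True
        except socket.gaierror:
            time.sleep(delay)
    return False

def get_with_retries(url, attempts=5, delay=10, **kwargs):
    """Retry requests.get() on connection errors instead of failing immediately."""
    for attempt in range(attempts):
        try:
            return requests.get(url, timeout=10, **kwargs)
        except requests.exceptions.ConnectionError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

if wait_for_dns(API_HOST):
    response = get_with_retries(API_URL)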

Related

Why do parameters (in particular proxies) in Requests Session not persist across Python Requests?

The requests documentation (link) mentions that a session is what allows some parameters to persist across requests. My use case is simple: because I sit behind a corporate proxy and firewall, I need to set the proxies parameter (as mentioned in the title) on a session, and I don't want to have to set it for every request.
Supposedly, you can do the following (directly copied from the proxies section):
import requests
proxies = {
    'http': 'http://10.10.1.10:3128',
    'https': 'http://10.10.1.10:1080',
}
session = requests.Session()
session.proxies.update(proxies)
session.get('http://example.org')
This should allow you to set proxies without stating them in the request itself. Thus my session function looks like the one below:
def requests_setup():
    # set up proxy
    proxies = {'http': 'http://someproxy:8080',
               'https': 'http://someproxy:8080'}
    # initialize session
    session = requests.Session()
    # Part 1: set up proxy
    session.proxies.update(proxies)
    # Part 2: add certificate
    session.verify = r'SOME_CERT_BUNDLE.pem'
    return session
A GET request example that results in an error:
# making an example get request
setup = requests_setup()
url = "https://example.com"
r = setup.get(f"{url}", timeout=5)
The full traceback is posted below, but the following errors seem to be the problem. My understanding is that the SSL certificate verification did not go through for some reason (as suggested by the trace, I believe because the proxy settings were not applied; for a session without the verify parameter set, the request that did work below would instead fail with an SSLCertVerification error).
Error 1 ...
socket.timeout: _ssl.c:1074: The handshake operation timed out
... leading to Error 2
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1074: The handshake operation timed out')))
... and finally Error 3
requests.exceptions.ProxyError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1074: The handshake operation timed out')))
The silver lining is that this was solved, eventually, by specifying the parameter in the request call itself.
setup = utils.requests_setup()
# making an example get request
url = "https://example.com"
proxies = {'http': 'http://someproxy:8080',
           'https': 'http://someproxy:8080'}
r = setup.get(f"{url}", timeout=5, proxies=proxies)
But why is that the case? I can see clearly that my session's proxies attribute is initialized, but for some reason it was not used in the GET request made through that session.
PS: There might be questions about why my proxy is prefixed with http in both cases. It is purely because we don't have a standalone https proxy server. The request also fails when I use an https prefix there instead.
PPS: example.com is not the actual site used. I have tried google.com and others (such as the API I am trying to call), but that did not change the results.
Actual Error Traceback
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\connectionpool.py", line 696, in urlopen
self._prepare_proxy(conn)
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\connectionpool.py", line 964, in _prepare_proxy
conn.connect()
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\connection.py", line 359, in connect
conn = self._connect_tls_proxy(hostname, conn)
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\connection.py", line 506, in _connect_tls_proxy
ssl_context=ssl_context,
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\util\ssl_.py", line 450, in ssl_wrap_socket
sock, context, tls_in_tls, server_hostname=server_hostname
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\util\ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\ssl.py", line 423, in wrap_socket
session=session
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\ssl.py", line 870, in _create
self.do_handshake()
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\ssl.py", line 1139, in do_handshake
self._sslobj.do_handshake()
socket.timeout: _ssl.c:1074: The handshake operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\requests\adapters.py", line 449, in send
timeout=timeout
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\connectionpool.py", line 756, in urlopen
method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\urllib3\util\retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1074: The handshake operation timed out')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\requests\sessions.py", line 555, in get
return self.request('GET', url, **kwargs)
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\requests\sessions.py", line 542, in request
resp = self.send(prep, **send_kwargs)
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\requests\sessions.py", line 655, in send
r = adapter.send(request, **kwargs)
File "C:\ProgramData\Anaconda3\envs\VA_API\lib\site-packages\requests\adapters.py", line 510, in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='example.com', port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot connect to proxy.', timeout('_ssl.c:1074: The handshake operation timed out')))
Information for reproducing the issue:
OS version: 'Windows-10-10.0.18362-SP0'
Python version: '3.7.11 (default, Jul 27 2021, 09:42:29) [MSC v.1916 64 bit (AMD64)]'
Requests version: '2.26.0'
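Not an answer from the thread, but one way to debug the discrepancy is to ask the session what it will actually merge for a given URL. Session.merge_environment_settings is the hook requests uses to combine per-request, environment (trust_env), and session-level values, and requests.utils.get_environ_proxies shows what is picked up from HTTP_PROXY/HTTPS_PROXY/NO_PROXY. A small sketch, reusing the placeholder proxy address from above:

import requests

session = requests.Session()
session.proxies.update({'http': 'http://someproxy:8080',
                        'https': 'http://someproxy:8080'})

url = 'https://example.com'

# What the session will actually use for this URL once per-request,
# environment (trust_env) and session-level settings have been merged:
settings = session.merge_environment_settings(url, proxies={}, stream=None,
                                              verify=None, cert=None)
print(settings['proxies'])

# Proxies picked up from HTTP_PROXY / HTTPS_PROXY / NO_PROXY:
print(requests.utils.get_environ_proxies(url))

If the first print is missing the expected entries while the explicit proxies= call works, the merge step is where the session-level values are being lost or overridden.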

Python urllib3.exceptions.NewConnectionError connecting to self-built API

I've built an API (with flask-restful) that stores data in its cache and exposes it to other applications. When I try to send a GET request to this API from another app (also Flask), it returns the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 170, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw
File "/usr/local/lib/python3.6/dist-packages/urllib3/util/connection.py", line 73, in create_connection
for res in socket.getaddrinfo(host, port, family, socket.SOCK_STREAM):
File "/usr/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 706, in urlopen
chunked=chunked,
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 382, in _make_request
self._validate_conn(conn)
File "/usr/local/lib/python3.6/dist-packages/urllib3/connectionpool.py", line 1010, in _validate_conn
conn.connect()
File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 353, in connect
conn = self._new_conn()
File "/usr/local/lib/python3.6/dist-packages/urllib3/connection.py", line 182, in _new_conn
self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x7f6f965d9358>: Failed to establish a new connection: [Errno -2] Name or service not known
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/urllib3/util/retry.py", line 574, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='data-collector.cloud', port=443): Max retries exceeded with url: /sample_url (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f6f965d9358>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I thought this error occurred because I was sending too many requests with the same URL to the API, so I limited the number of API calls by adding
decorators = [index.limiter.limit("60/minute")]
to the API. The error persisted, however. Then I thought the error might be caused by the number of calls the server accepts, and that I was not closing the connection properly after making an API call. So I added
from requests.packages.urllib3 import Retry, PoolManager
retries = Retry(connect=5, read=2, redirect=5)
with PoolManager(retries=retries) as http:
    response = http.request('GET', url)
But this also did not solve my issue. What am I missing here? I am using Python 3.8.
EDIT: I found out that it's not the query per se that is causing it, because if I try other queries the same message pops up. I'm still lost on how to debug this :/
The error says "Name or service not known". In other words, it's a name resolution issue (DNS).
Double-check the target host ("data-collector.cloud" does not appear in public DNS records).
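To confirm from the same environment that the name really does not resolve (rather than, say, a connectivity or firewall problem), a quick sketch using only the standard library and the hostname from the traceback:

import socket

host = "data-collector.cloud"   # host from the traceback above

try:
    # the same lookup urllib3 performs before opening the connection
    for family, _, _, _, sockaddr in socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP):
        print(family, sockaddr)
except socket.gaierror as exc:
    # "[Errno -2] Name or service not known" means the name simply does not resolve
    print("DNS lookup failed:", exc)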

Python Simplemonitor does not send out email on alert; possible configuration error

I'm setting up simplemonitor, found here, to check the URLs of my web service. If any check fails, it should send out an email alert.
So far, I've confirmed that the monitor works properly. However, when I shut down the service to check the email alert, it errors on sending an email:
2020-04-10 20:03:00 WARNING (simplemonitor) monitor failed but within tolerance: test-check
2020-04-10 20:03:10 ERROR (simplemonitor) monitor failed: test-check (Requests exception while opening URL: HTTPConnectionPool(host='www.test.com', port=8080): Max retries exceeded with url: /hello (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fef7c8762e8>: Failed to establish a new connection: [Errno 111] Connection refused',)))
2020-04-10 20:03:10 ERROR (simplemonitor.alerter-email) couldn't send mail
Traceback (most recent call last):
File "/anaconda/envs/test/envs/alertenv/lib/python3.5/site-packages/simplemonitor/Alerters/mail.py", line 127, in send_alert
server = smtplib.SMTP(self.mail_host, self.mail_port)
File "/anaconda/envs/test/envs/alertenv/lib/python3.5/smtplib.py", line 251, in __init__
(code, msg) = self.connect(host, port)
File "/anaconda/envs/test/envs/alertenv/lib/python3.5/smtplib.py", line 336, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/anaconda/envs/test/envs/alertenv/lib/python3.5/smtplib.py", line 307, in _get_socket
self.source_address)
File "/anaconda/envs/test/envs/alertenv/lib/python3.5/socket.py", line 712, in create_connection
raise err
File "/home/user/anaconda3/envs/test/lib/python3.5/socket.py", line 703, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused
Here is the monitor.ini file to set up the rules:
[monitor]
monitors=/home/user/monitor/monitors.ini
interval=10
[reporting]
loggers=logfile
alerters=email
[logfile]
type=logfile
filename=monitor.log
only_failures=1
[email]
type=email
host=host.domain.com
from=simplemonitor@company.com
to=user@company.com
And the monitors.ini file that defines what I'm monitoring:
[test-check]
type=http
url=http://www.test.com:8080/hello
tolerance=1
I'm running it with simplemonitor --config monitor.ini &>> monitor.log &.
Given that I only started using this, I'm not sure if this is due to an error in the code, or one on my part due to a mistake in the setup.
EDIT: I feel silly. The bug was due to a typo in the monitor.ini file: I'd misspelled the name of the SMTP server in the host variable. It now sends the email. I apologize for any bother.
As I said in an edit to the original question, this was due to a typo in the ini file: the SMTP server name under host was misspelled. I apologize for any bother this caused.
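For anyone hitting the same symptom, a quick way to sanity-check the SMTP settings from the monitoring host before digging into simplemonitor itself is to open a connection with smtplib directly. host below is taken from the [email] section above; port 25 is only an assumption (the standard SMTP port), since the config does not set one:

import smtplib

host = "host.domain.com"   # from the [email] section of monitor.ini
port = 25                  # assumption: standard SMTP port, the config sets none

try:
    with smtplib.SMTP(host, port, timeout=10) as server:
        print(server.noop())   # e.g. (250, b'...') if the server is reachable
except OSError as exc:
    print("Cannot reach SMTP server:", exc)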

local host ip (127.0.0.1) is not working on google-compute-engine

I've exposed a URL (http://127.0.0.1:5000/daily), but on Google Compute Engine (GCE) I am not getting the values. If I access this URL through requests in a simple Python program locally, it works fine.
import requests
import json
req=requests.get('http://127.0.0.1:5000/daily')
a = json.loads(req.text)
discount_rate = a['data']['policy_rate']
six_months_kibor = a['data']['today_kibor_rate']
dollar_to_pkr= a['data']['today_usd_rate']
print(discount_rate, six_months_kibor, dollar_to_pkr)
The error I am receiving on GCE is:
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f93526c16a0>: Failed to establish a new connection: [Errno 111] Connection refused
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dev_baseh/.local/lib/python3.5/site-packages/requests/adapters.py", line 449, in send
timeout=timeout
File "/home/dev_baseh/.local/lib/python3.5/site-packages/urllib3/connectionpool.py", line 641, in urlopen
_stacktrace=sys.exc_info()[2])
File "/home/dev_baseh/.local/lib/python3.5/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /daily (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f93526c16a0>: Failed to establish a new connection: [Errno 111] Connection refused',))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 6, in <module>
req=requests.get('http://127.0.0.1:5000/daily')
File "/home/dev_baseh/.local/lib/python3.5/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
File "/home/dev_baseh/.local/lib/python3.5/site-packages/requests/api.py", line 60, in request
return session.request(method=method, url=url, **kwargs)
File "/home/dev_baseh/.local/lib/python3.5/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/home/dev_baseh/.local/lib/python3.5/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/home/dev_baseh/.local/lib/python3.5/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=5000): Max retries exceeded with url: /daily (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f93526c16a0>: Failed to establish a new connection: [Errno 111] Connection refused', ))
I don't know the reason why it is not working on GCE.
Thanks in Advance :)
The IP address 127.0.0.1 refers to the local (loopback) address of the machine itself. So if you run a Python program on the same machine where that server is running, it can access that address, since both share the same IP address.
When you try to access 127.0.0.1 from the GCE instance, what is happening is that the instance is trying to reach its own port 5000, not your machine's port 5000.
You need to figure out the public-facing IP address of the machine where you're running the server. If it's your own computer, you could just Google "what is my IP" to get it.
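As a sketch of that fix: the client on the GCE instance has to target the public IP of the machine that actually runs the service, and that service must listen on a non-loopback interface. The IP below is a placeholder:

import json

import requests

# 127.0.0.1 on the GCE instance is the instance itself, so the request must
# target the public IP of the machine running the service; that service must
# also listen on a non-loopback interface (for a Flask app,
# app.run(host='0.0.0.0', port=5000)).
req = requests.get('http://203.0.113.7:5000/daily', timeout=10)  # placeholder IP
a = json.loads(req.text)
print(a['data']['policy_rate'])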

How to fix 'ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',)' in ubuntu terminal?

I am new to Python and Ubuntu. I was trying to download data from Synapse. About 30% had downloaded when I left the PC on for the night; the next morning, this is what I saw.
This is on an Ubuntu server.
My code to download:
import synapseclient
syn = synapseclient.Synapse()
syn.login('gaurangk','tennisbat')
# Obtain a pointer and download the data
syn18507661 = syn.get(entity='syn18507661')
# Get the path to the local copy of the data file
filepath = syn18507661.path
This is the complete error:
Traceback (most recent call last):
File "ok.py", line 7, in
syn18507661 = syn.get(entity='syn18507661')
File "/home/abhinav/anaconda2/lib/python2.7/site-packages/synapseclient/client.py", line 633, in get
return self._getWithEntityBundle(entityBundle=bundle, entity=entity, **kwargs)
File "/home/abhinav/anaconda2/lib/python2.7/site-packages/synapseclient/client.py", line 749, in _getWithEntityBundle
self._download_file_entity(downloadLocation, entity, ifcollision, submission)
File "/home/abhinav/anaconda2/lib/python2.7/site-packages/synapseclient/client.py", line 807, in _download_file_entity
downloadPath = self._downloadFileHandle(entity.dataFileHandleId, objectId, objectType, downloadPath)
File "/home/abhinav/anaconda2/lib/python2.7/site-packages/synapseclient/client.py", line 1755, in _downloadFileHandle
raise exc_info0
requests.exceptions.ProxyError: HTTPSConnectionPool(host='file-prod.prod.sagebase.org', port=443): Max retries exceeded with url: /file/v1/fileHandle/batch (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 407 Proxy Authentication Required',)))
.....
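The thread ends without a resolution, but the 407 means the proxy expects credentials. Since the traceback shows requests underneath synapseclient, one common approach is to supply an authenticated proxy URL through the environment variables that requests honors. Proxy host, port, and credentials below are placeholders, as are the Synapse login details:

import os

import synapseclient

# Authenticated proxy URL; requests picks these up via HTTP_PROXY / HTTPS_PROXY.
# All values here are placeholders.
os.environ['HTTP_PROXY'] = 'http://proxy_user:proxy_pass@proxy.example.com:8080'
os.environ['HTTPS_PROXY'] = 'http://proxy_user:proxy_pass@proxy.example.com:8080'

syn = synapseclient.Synapse()
syn.login('username', 'password')
syn18507661 = syn.get(entity='syn18507661')
filepath = syn18507661.path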
