It is weird to get an exception at about 7:30 am (UTC+8) every day when calling the SoftLayer API.
TransportError: TransportError(0): HTTPSConnectionPool(host='api.softlayer.com', port=443): Max retries exceeded with url: /xmlrpc/v3.1/SoftLayer_Product_Package (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 503 Service Unavailable',)))
I use a proxy to forward HTTPS requests to SoftLayer's server. At first I thought the proxy was the cause, but when I looked into the log, it showed that every request had been forwarded successfully. So maybe it is caused by the server. Does the server do something so busy at that moment every day that it fails to serve?
We don't have any report about this kind of issue, nor any indication that the server is busy on SoftLayer's side. Regarding your issue, it looks network-related; it seems that something is happening with your proxy connection.
First we need to rule out the proxy as the cause of this issue. It would be very useful if you could verify that the issue is reproducible without using a proxy on your side; let me know if you can test that.
If the issue still occurs without the proxy, I recommend submitting a ticket for further investigation.
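For illustration, here is a minimal sketch of such a no-proxy test (the endpoint comes from the traceback; trust_env=False tells requests to ignore any HTTP_PROXY/HTTPS_PROXY environment variables, and the plain POST is only meant to prove the connection works, not to make a valid XML-RPC call):

import requests

# A minimal sketch: bypass proxy discovery so the request goes
# directly to the endpoint from the traceback
session = requests.Session()
session.trust_env = False  # ignore HTTP_PROXY/HTTPS_PROXY
resp = session.post('https://api.softlayer.com/xmlrpc/v3.1/SoftLayer_Product_Package',
                    timeout=10)
print(resp.status_code)  # any HTTP status at all means the connection succeeded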
Related
I have found the original answer to this quite helpful; however, an error is encountered whenever two HTTP requests are sent in rapid succession:
HTTPConnectionPool(host='192.168.1.140', port=8082): Max retries exceeded with url: /dev2/presence/notpresent (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xb5dbff30>: Failed to establish a new connection: [Errno 98] Address already in use'))
Sometimes I can recover via sleeping and retrying, but often it just keeps repeating.
It appears that the resource has not been released in time for the next request. Is there a way I can force a reset somehow to avoid this problem?
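One hedged workaround, if the cause is connections piling up between rapid-fire requests, is to reuse a single Session and close each response so its socket returns to the pool (the URL below is reconstructed from the error message; the function name is made up):

import requests

# A minimal sketch: one shared Session; the context manager closes each
# response so its connection is returned to the pool rather than a new
# socket (and local port) being opened per request
session = requests.Session()

def report(state):
    url = f'http://192.168.1.140:8082/dev2/presence/{state}'
    with session.get(url, timeout=3) as resp:
        resp.raise_for_status()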
I am debugging a Python flask application. The application runs atop uWSGI configured with 6 threads and 1 process. I am using Flask-Executor to offload some slower tasks. These tasks create a connection with the Flask application, i.e., the same process, and perform some HTTP GET requests. The executor is configured to use 2 threads max. This application runs on Ubuntu 16.04.3 LTS.
Every once in a while the threads in the executor completely stop working. The code uses the Python requests library to do the requests. The underlying error message is:
Action failed. HTTPSConnectionPool(host='somehost.com', port=443): Max retries exceeded with url: /api/get/value (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f8d75bb5860>: Failed to establish a new connection: [Errno 11] Resource temporarily unavailable',))
The code that is running within the executor looks like this:
import requests

adapter = requests.adapters.HTTPAdapter(max_retries=3)
session = requests.Session()
# Note: the adapter is mounted for the http:// prefix, while the failing
# call in the traceback above is HTTPS on port 443
session.mount('http://somehost.com:80', adapter)
session.headers.update({'Content-Type': 'application/json'})
...
session.get(uri, params=params, headers=headers, timeout=3)
I've spent a good amount of time trying to peel back the Python requests stack down to the C sockets that it uses. I've also tried reproducing this error using small C and Python programs. At first I thought it could be that sockets were not getting closed and so we were running out of allowable sockets as a resource, but that gives a message more along the lines of "Too many open files".
Setting aside the Python stack, what could cause a [Errno 11] Resource temporarily unavailable on a socket connect() command? Also, if you've run into this using requests, are there arguments that I could pass in to prevent this?
I've seen the What can cause a “Resource temporarily unavailable” on sock send() command StackOverflow post, but that's on a send() command, not on the initial connect(), which is where I suspect the code is getting hung up.
The error message Resource temporarily unavailable corresponds to the error code EAGAIN.
The connect() manpage states that the error EAGAIN occurs in the following situation:
No more free local ports or insufficient entries in the routing cache. For AF_INET see the description of /proc/sys/net/ipv4/ip_local_port_range in ip(7) for information on how to increase the number of local ports.
This can happen when very many connections to the same IP/port combination are in use and no local port for automatic binding can be found. You can check with

netstat -tulpen

exactly which connections are causing this.
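To see how many ephemeral ports the kernel has available for automatic binding, you can read the same /proc file the manpage mentions (a Linux-only sketch):

# A minimal sketch: read the ephemeral port range used for automatic
# binding; exhausting it is one way connect() can fail with EAGAIN
with open('/proc/sys/net/ipv4/ip_local_port_range') as f:
    low, high = map(int, f.read().split())
print(f'{high - low + 1} local ports available ({low}-{high})')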
So I have an Azure Functions app written in Python, and quite often the code throws an error like this.
HTTPSConnectionPool(host='www.***.com', port=443): Max retries exceeded with url: /x/y/z (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7faba31d0438>: Failed to establish a new connection: [Errno 110] Connection timed out',))
This happens in a few different functions that make HTTPS connections.
I contacted support and they told me that this was caused by SNAT port exhaustion and advised me to: "Modify the application to reuse connections instead of creating a connection per request, use connection pooling, use service endpoints if you are connecting to resources in Azure." They sent me this link https://4lowtherabbit.github.io/blogs/2019/10/SNAT/ and also this https://learn.microsoft.com/en-us/azure/azure-functions/manage-connections
The problem is that I am unsure how to practically reuse and/or pool connections in Python, and I am unsure what the primary cause of the exhaustion is, as this data is not publicly available.
So I am looking for help with applying their advice to all our http(s) and database connections.
I made the assumption that pymongo and pyodbc (the database clients we use) would handle pooling and reuse despite me creating a new client each time a function runs. Is this incorrect, and if so, how do I reuse these database clients in Python to prevent this?
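For what it's worth, a minimal sketch of the usual pymongo pattern: MongoClient maintains its own connection pool, so the client is normally built once at module scope and shared, rather than created inside each function invocation (the URI and names below are placeholders):

import pymongo

# A minimal sketch: build the client once at module scope so every
# invocation in the same worker process reuses its connection pool
# (URI and database/collection names are placeholders)
client = pymongo.MongoClient('mongodb://localhost:27017/')

def run(trigger):  # hypothetical function entry point
    return client.mydb.items.find_one()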
The problem has so far only occurred when using requests (or the zeep SOAP library, which internally defaults to requests) to hit an HTTPS endpoint. Is there any way I could improve how I use requests, like reusing sessions or closing connections explicitly? I am aware that requests creates a session in the background when calling requests.get, but my knowledge of the library is insufficient to figure out whether this is the problem and how I could solve it. I am thinking I might be able to create and reuse a single session instance for each specific HTTP(S) call in each function, but I am unsure if this is correct, and I have no idea how to actually do it.
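As a hedged illustration of that idea, the usual pattern is a module-level Session shared by all invocations in the same worker process (the host, path and entry point below are placeholders):

import requests

# A minimal sketch: requests pools connections per Session, so a
# module-level Session lets repeated calls to the same host reuse an
# already-open socket instead of opening a new one per invocation
session = requests.Session()

def main(req):  # hypothetical Azure Functions entry point
    resp = session.get('https://www.example.com/x/y/z', timeout=10)
    resp.raise_for_status()
    return resp.text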
In a few places I also use aiohttp, and if possible I would like to achieve the same thing there.
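The aiohttp analogue would be one long-lived ClientSession (a sketch; it is created lazily on first use so it is born inside a running event loop):

import aiohttp

# A minimal sketch: create the ClientSession on first use and keep it
# for the life of the worker so its connector pool is shared across calls
_session = None

async def fetch(url):
    global _session
    if _session is None:
        _session = aiohttp.ClientSession()
    async with _session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
        return await resp.text()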
I haven't looked into service endpoints yet but I am about to.
So in short: what can I do in practice to ensure reuse/pooling with requests, pyodbc, pymongo and aiohttp?
I am running an API developed in Go which accepts POST requests over LAN. My client is using Python to send some data (size 350 KB) to the server. The Python code is multithreaded and may be performing simultaneous POST requests, one per thread. The expected average rate between client and server is around 3 requests per second. The requests are not timing out, as the error gets raised much earlier.
I cannot seem to find the source of the error. The network should be robust, as both server and client are on a single 1 Gbps switch. Please help.
HTTPConnectionPool(host='192.168.1.105', port=8080): Max retries exceeded with url: /match (Caused by <class 'socket.error'>: [Errno 32] Broken pipe)
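A broken pipe generally means the server closed the socket while the client was still writing. Pending a root-cause fix on the server side, a hedged client-side mitigation is to retry the POST with backoff (the endpoint is taken from the error above; Retry's allowed_methods parameter requires urllib3 >= 1.26, where older releases call it method_whitelist):

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# A minimal sketch: retry POSTs that die mid-flight, with a short backoff
retry = Retry(total=3, backoff_factor=0.5,
              allowed_methods=frozenset({'POST'}))
session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retry))

payload = b'x' * 350_000  # stand-in for the ~350 KB body
resp = session.post('http://192.168.1.105:8080/match',
                    data=payload, timeout=10)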
On a Linux cluster, I get this error with Requests:
ConnectionError: HTTPConnectionPool(host='andes-1-47', port=8181): Max retries exceeded with url: /jammy/api/v1 (Caused by : '')
What does this error mean? Is it a Requests problem or is it on the host, and what is the solution?
By the way, the program works successfully on both Windows and Linux standalone machines with localhost.
So the Max retries exceeded with url: ... bit can be vastly confusing. In all likelihood (since you mention that this works using localhost), this is an application that you're deploying somewhere. This would also explain why the host name is andes-1-47 and not something most would expect (e.g., example.com). My best guess is that you need to either use the IP address for andes-1-47 (e.g., 192.168.0.255), or your Linux cluster doesn't know how to resolve andes-1-47 and you should add it to your /etc/hosts file (i.e., adding the line: 192.168.0.255 andes-1-47).
If you want to see whether your Linux cluster can resolve the name, you can always use this script:
import socket

# Raises socket.gaierror immediately if the name cannot be resolved,
# or socket.timeout if the host does not answer within 2 seconds
socket.create_connection(('andes-1-47', 8181), timeout=2)
This will fail within 2 seconds if the host cannot be resolved or reached. (You can remove the timeout, but it may take a lot longer to determine whether the hostname is reachable that way.)
In the urlopen call, try setting retries=False or retries=1 to see the difference. The default is 3, which sounds quite reasonable.
http://urllib3.readthedocs.org/en/latest/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopen
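For example (a sketch against the host from the traceback; with retries disabled, the first failure is raised directly instead of being wrapped in "Max retries exceeded"):

import urllib3

# A minimal sketch: retries=False makes urllib3 surface the underlying
# error immediately rather than retrying and reporting MaxRetryError
pool = urllib3.HTTPConnectionPool('andes-1-47', port=8181)
resp = pool.urlopen('GET', '/jammy/api/v1', retries=False)
print(resp.status)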