Specifying an outgoing port with Python requests (revisited) - python

I have found the original answer to this quite helpful; however, an error is encountered whenever two HTTP requests are sent in rapid succession:
HTTPConnectionPool(host='192.168.1.140', port=8082): Max retries exceeded with url: /dev2/presence/notpresent (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xb5dbff30>: Failed to establish a new connection: [Errno 98] Address already in use'))
Sometimes I can recover via sleeping and retrying, but often it just keeps repeating.
It appears that the resource has not been released in time for the next request. Is there a way I can force a reset somehow to avoid this problem?
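One mitigation that is sometimes suggested for this (not from the original thread, so treat it as a sketch): if you bind the outgoing port through a custom transport adapter, you can also pass SO_REUSEADDR through urllib3's socket_options so the fixed local port can be re-bound while the previous connection is still in TIME_WAIT. The adapter name and the port 54321 below are purely illustrative.
import socket
import requests
from requests.adapters import HTTPAdapter

class ReusableSourcePortAdapter(HTTPAdapter):
    """Bind outgoing connections to a fixed local port with SO_REUSEADDR set,
    to reduce the chance of [Errno 98] Address already in use when the
    previous connection is still in TIME_WAIT."""

    def __init__(self, port, *args, **kwargs):
        self._source_port = port
        super().__init__(*args, **kwargs)

    def init_poolmanager(self, connections, maxsize, block=False, **pool_kwargs):
        # Passed through urllib3's PoolManager to each HTTPConnection.
        pool_kwargs["source_address"] = ("", self._source_port)
        pool_kwargs["socket_options"] = [(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)]
        super().init_poolmanager(connections, maxsize, block, **pool_kwargs)

session = requests.Session()
session.mount("http://", ReusableSourcePortAdapter(54321))  # example port
session.get("http://192.168.1.140:8082/dev2/presence/notpresent")
Keeping a single Session alive between the two rapid requests also helps, since a kept-alive connection does not have to re-bind the local port at all.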

Related

What can cause a “Resource temporarily unavailable” on sock connect() command

I am debugging a Python flask application. The application runs atop uWSGI configured with 6 threads and 1 process. I am using Flask-Executor to offload some slower tasks. These tasks create a connection with the Flask application, i.e., the same process, and perform some HTTP GET requests. The executor is configured to use 2 threads max. This application runs on Ubuntu 16.04.3 LTS.
Every once in a while the threads in the executor completely stop working. The code uses the Python requests library to do the requests. The underlying error message is:
Action failed. HTTPSConnectionPool(host='somehost.com', port=443): Max retries exceeded with url: /api/get/value (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7f8d75bb5860>: Failed to establish a new connection: [Errno 11] Resource temporarily unavailable',))
The code that is running within the executor looks like this:
import requests

adapter = requests.adapters.HTTPAdapter(max_retries=3)
session = requests.Session()
session.mount('http://somehost.com:80', adapter)
session.headers.update({'Content-Type': 'application/json'})
...
session.get(uri, params=params, headers=headers, timeout=3)
I've spent a good amount of time trying to peel back the Python requests stack down to the C sockets that it uses. I've also tried reproducing this error using small C and Python programs. At first I thought it could be that sockets were not getting closed and so we were running out of allowable sockets as a resource, but that gives me a message more along the lines of "too many files are open".
Setting aside the Python stack, what could cause a [Errno 11] Resource temporarily unavailable on a socket connect() command? Also, if you've run into this using requests, are there arguments that I could pass in to prevent this?
I've seen the What can cause a “Resource temporarily unavailable” on sock send() command StackOverflow post, but that's about a send() call and not the initial connect(), which is where I suspect the code is getting hung up.
The error message Resource temporarily unavailable corresponds to the error code EAGAIN.
The connect() manpage states that the error EAGAIN occurs in the following situation:
No more free local ports or insufficient entries in the routing cache. For AF_INET see the description of /proc/sys/net/ipv4/ip_local_port_range in ip(7) for information on how to increase the number of local ports.
This can happen when very many connections to the same IP/port combination are in use and no local port for automatic binding can be found. You can check exactly which connections cause this with
netstat -tulpen
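If local port exhaustion is the suspect, a quick sketch (Linux only, path taken from the manpage excerpt above) shows how much headroom the kernel has for automatic binding:
# Read the ephemeral port range used for automatic binding; widening it is
# the mitigation the connect(2) manpage points to.
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())
print(f"{high - low + 1} local ports available for automatic binding ({low}-{high})")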

Python Post Requests using Requests library giving broken pipe error

I am running an API developed in Go which accepts POST requests over the LAN. My client is using Python to send some data (about 350 KB) to the server. The Python code is multithreaded and may perform simultaneous POST requests, one per thread. The expected average rate between client and server is around 3 requests per second. The requests are not timing out, as the error is raised much earlier.
I cannot seem to find the source of the error. The network should be robust, as both server and client are on a single 1 Gbps switch. Please help.
HTTPConnectionPool(host='192.168.1.105', port=8080): Max retries exceeded with url: /match (Caused by <class 'socket.error'>: [Errno 32] Broken pipe)

TransportError happened when calling softlayer-api

It is weird to get an exception at about 7:30 AM (UTC+8) every day when calling the SoftLayer API.
TransportError: TransportError(0): HTTPSConnectionPool(host='api.softlayer.com', port=443): Max retries exceeded with url: /xmlrpc/v3.1/SoftLayer_Product_Package (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 503 Service Unavailable',)))
I use a proxy to forward HTTPS requests to SoftLayer's server. At first I thought it was caused by the proxy, but when I looked into the log, it showed that every request had been forwarded successfully. So maybe it is caused by the server. Is the server doing something so demanding at that moment every day that it fails to serve?
We don't have any reports about this kind of issue, nor any indication that the server is busy on SoftLayer's side. Regarding your issue, it looks like something network-related; it seems that something is happening with your proxy connection.
First we need to rule out the proxy as the cause. It would be very useful if you could verify that the issue is reproducible without using a proxy on your side; let me know if you can test it.
If you can confirm the problem without the proxy, I recommend submitting a ticket for further investigation.
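A sketch of such a test, assuming the proxy is picked up from environment variables (HTTPS_PROXY and friends); it only checks that api.softlayer.com is reachable directly, it is not a full SoftLayer client call:
import requests

session = requests.Session()
session.trust_env = False  # ignore HTTP(S)_PROXY environment variables and go direct

# Plain GET as a reachability check against the endpoint from the error above.
resp = session.get("https://api.softlayer.com/xmlrpc/v3.1/SoftLayer_Product_Package", timeout=10)
print(resp.status_code)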

Python Requests HTTPConnectionPool and Max retries exceeded with url

On a Linux cluster, I get this error with Requests:
ConnectionError: HTTPConnectionPool(host='andes-1-47', port=8181): Max retries exceeded with url: /jammy/api/v1 (Caused by : '')
What does this error mean? Is it a Requests problem or is it on the host, and what is the solution?
By the way, the program works successfully on both Windows and Linux standalone machines with localhost.
So the Max retries exceeded with url: ... bit can be vastly confusing. In all likelihood (since you mention that this works using localhost), this is an application that you're deploying somewhere. This would also explain why the host name is andes-1-47 and not something most would expect (e.g., example.com). My best guess is that you need to either use the IP address for andes-1-47 (e.g., 192.168.0.255), or your Linux cluster doesn't know how to resolve andes-1-47 and you should add it to your /etc/hosts file (i.e., adding the line: 192.168.0.255 andes-1-47).
If you want to see if your linux cluster can resolve the name you can always use this script:
import socket
socket.create_connection(('andes-1-47', 8181), timeout=2)
This will time out after 2 seconds if it cannot resolve or reach the host. (You can remove the timeout, but it may take a lot longer to determine whether the hostname is reachable that way.)
In the urlopen call, try setting retries=False or retries=1 to see the difference. The default is 3, which sounds quite reasonable.
http://urllib3.readthedocs.org/en/latest/pools.html#urllib3.connectionpool.HTTPConnectionPool.urlopen
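If you are calling through requests rather than urlopen directly, the equivalent knob is the adapter's max_retries, which wraps urllib3's Retry. A minimal sketch (host and path taken from the error above) that disables retries so the underlying error surfaces immediately:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# total=0 turns retries off; pass a larger value (or a plain int) to retry.
session.mount("http://", HTTPAdapter(max_retries=Retry(total=0)))

resp = session.get("http://andes-1-47:8181/jammy/api/v1", timeout=2)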

I wish to avoid overrunning the http connections pool

I am creating a tool that will run many simultaneous calls to a RESTful API. I am using the Python "requests" module and the "threading" module. Once I stack too many simultaneous GETs on the system, I get exceptions like this:
ConnectionError: HTTPConnectionPool(host='xxx.net', port=80): Max retries exceeded with url: /thing/subthing/ (Caused by : [Errno 10055] An operation on a socket could not be performed because the system lacked sufficient buffer space or because a queue was full)
What can I do to either increase the buffer and queue space, or ask the Requests module to wait for an available slot?
(I know I could stuff it in a "try" loop, but that seems clumsy)
Use a session. If you use the requests.request family of methods (get, post, ...), each request will use its own session with its own connection pool, therefore not making any use of connection pooling.
If you need to fine-tune the number of connections used within a session, you can do this by changing its HTTPAdapter, as in the sketch below.
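A minimal sketch of that idea (host name and pool sizes are illustrative): one shared Session whose adapter caps the pool and, with pool_block=True, makes a thread wait for a free pooled connection instead of opening yet another socket:
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
# At most 10 connections per host; block until one is free rather than
# creating new sockets when the pool is exhausted.
adapter = HTTPAdapter(pool_connections=10, pool_maxsize=10, pool_block=True)
session.mount("http://", adapter)
session.mount("https://", adapter)

# All worker threads reuse this one session and its pool.
resp = session.get("http://xxx.net/thing/subthing/", timeout=5)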
