Python urllib3 error Unexpected EOF - python

I keep getting the following error intermittently. I suspect a proxy on the network is causing this, since I can run my Python script when using a different connection.
I'm using Python 2.7 and also using Fiddler to help with the proxy authentication.
SSLError: HTTPSConnectionPool(host='api.bogus.com', port=443): Max
retries exceeded with url: /api/v1/oauth2/token (Caused by
SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected
EOF')",),))
At the moment I am having some limited success with the following setup; it still fails quite often, but it manages to work a few times. I am using a session with the following parameters:
import requests
from requests.adapters import HTTPAdapter

def get_session():
    session = requests.Session()
    # limit connection pool and connection numbers
    session.mount('https://', HTTPAdapter(max_retries=5, pool_connections=100, pool_maxsize=100))
    # make sure all connections are closed
    session.headers.update({"Connection": "close"})
    return session
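Since Fiddler re-signs TLS traffic with its own root certificate, part of the handshake trouble may be trust-related. A hedged sketch (assumptions: Fiddler listening on its default port 8888, and its root certificate exported to a local PEM file; both paths are hypothetical) of routing the session through Fiddler explicitly:
proxies = {
    'http': 'http://127.0.0.1:8888',
    'https': 'http://127.0.0.1:8888',
}

session = get_session()
# verify points at Fiddler's exported root CA so that the certificates
# it re-signs with are trusted (hypothetical path)
resp = session.get('https://api.bogus.com/',
                   proxies=proxies,
                   verify='fiddler_root.pem')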

Related

Confluent kafka python SSL verification

I am using confluent kafka python 'https://github.com/confluentinc/confluent-kafka-python' for writing an application. Both Kafka and the schema registry are secured and use HTTPS endpoints.
While running the application, I am getting the following error:
Result: Failure Exception: SSLError: HTTPSConnectionPool(host='hostname', port=443):
Max retries exceeded with url: //subjects/schema-value/versions (Caused by SSLError(SSLError("bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])")))
Question 1:
For connecting to the schema registry, where do I specify the certificate values?
Question 2:
For testing, I want to disable SSL verification in Python. What is the option to do that?
Thanks in advance.
This is the config that I used for my avro producer:
avro_producer_conf = {
    "bootstrap.servers": "SSL://127.0.0.1:9094",
    "security.protocol": "ssl",
    # Certificates used by simple Producer
    "ssl.ca.location": "/ssl/root/intermediate/ca-chain.cert.pem",
    "ssl.certificate.location": "/ssl/root/intermediate/producer/producer.cert.pem",
    "ssl.key.location": "/ssl/root/intermediate/producer/producer.key.pem",
    "schema.registry.url": "https://schemaregistry:8081",
    # Certificates used by Schema Registry
    "schema.registry.ssl.ca.location": "/ssl/root/intermediate/ca-chain.cert.pem",
    "schema.registry.ssl.certificate.location": "/ssl/root/intermediate/producer/producer.cert.pem",
    "schema.registry.ssl.key.location": "/ssl/root/intermediate/producer/producer.key.pem"
}
The AvroProducer __init__() method does the separation of parameters: everything you want to pass to the Schema Registry needs to start with schema.registry.<parameter>. To use SSL with the Schema Registry, make sure you use a non-encrypted key (a private key without a password). Also make sure you don't have the REQUESTS_CA_BUNDLE environment variable set; it will confuse the library.
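A minimal sketch of wiring this config into a producer, assuming the avro_producer_conf above; the schema file path and topic name below are hypothetical:
from confluent_kafka import avro
from confluent_kafka.avro import AvroProducer

# hypothetical schema file, for illustration only
value_schema = avro.load('/schemas/schema-value.avsc')

producer = AvroProducer(avro_producer_conf, default_value_schema=value_schema)
producer.produce(topic='some-topic', value={'name': 'example'})
producer.flush()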

Is it possible to check the number of HTTPConnection requests made

I have a uWSGI server running on a Linux VM node, and multiple requests are made to it.
At some points there are errors like ReadTimeout and HTTPConnectionPool errors, which recover automatically.
ConnectionError: HTTPConnectionPool(host='10.1.1.1', port=8000): Max retries exceeded with url: /app_servers (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7f16e8a89190>: Failed to establish a new connection: [Errno 101] Network is unreachable',))
Is it due to the number of requests being exceeded, or some network lookup issue?
I tried using the netstat and sar commands to identify the root cause, but CPU and IO stats are fine.
The numbers of connections in the ESTABLISHED and CLOSE_WAIT states are also low. I'm not sure how to check these for a past point in time.
How can I check the number of HTTP connections made at a given point in time, or find out why the HTTPConnectionPool (max retries exceeded) error occurs?
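There is no built-in way to see this retroactively, but a snapshot of the current connections can be taken from Python. A sketch assuming the psutil package (not mentioned in the question) is installed and the backend is on port 8000:
import psutil

# count TCP sockets currently connected to the backend port
conns = psutil.net_connections(kind='tcp')
to_backend = [c for c in conns if c.raddr and c.raddr.port == 8000]
print(len(to_backend), 'connections to port 8000')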

https get request with python proxy doesn't work

I've looked at and tried several Stack Overflow solutions regarding HTTPS requests using a proxy with Python, and I've seen the discussions on GitHub.
My impression was that the requests library in Python 3 now has support for HTTPS requests through a proxy, so I don't understand why mine doesn't work:
import requests

proxydict = {
    'http': 'http://xx.xx.xx.xxx:5555/',
    'https': 'https://xx.xx.xxx.xx:5555/'
}
requests.get('https://www.google.co.uk', proxies=proxydict)
When I run this code I get:
ConnectionError: HTTPSConnectionPool(host='www.python.org', port=443):
Max retries exceeded with url: / (Caused by SSLError(SSLError("bad handshake:
Error([('SSL routines', 'ssl3_get_server_certificate', 'certificate verify failed')],)",),))
I used a Postman proxy to do this.
My HTTP requests with a proxy work fine. Can someone help?
Edit:
I wanted to do this with a corporate proxy, so I can't simply use a different proxy. Also, the HTTPS addresses work fine in the browser; it's only when I do a Python request with the proxy, as described in the documentation, that this error occurs.
Thanks.
Hi, either your proxy is not working with SSL or your proxy is overloaded; try another one from here:
https://www.sslproxies.org/
Scroll down and you will see HTTPS proxies.
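If the proxy is a corporate one that re-signs TLS traffic, the "certificate verify failed" error usually means requests does not trust the corporate root CA. A hedged sketch (the CA bundle path is hypothetical, and the placeholder IPs are kept from the question):
import requests

proxydict = {
    'http': 'http://xx.xx.xx.xxx:5555/',
    'https': 'https://xx.xx.xxx.xx:5555/'
}
# verify points at the corporate root CA bundle (hypothetical path)
resp = requests.get('https://www.google.co.uk',
                    proxies=proxydict,
                    verify='/etc/ssl/certs/corporate-root-ca.pem')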

How to solve the Python requests error: "Max retries exceeded with url"

I have the following code:
res = requests.get(url)
I use a multi-threaded approach, which produces the following error:
ConnectionError: HTTPConnectionPool(host='bjtest.com', port=80): Max retries exceeded with url: /rest/data?method=check&test=123 (Caused by <class 'socket.error'>: [Errno 104] Connection reset by peer)
I have used the following methods, but the error still occurs:
s = requests.session()
s.keep_alive = False
OR
res = requests.get(url, headers={'Connection': 'close'})
So, what should I do?
BTW, the URL is OK, but it can only be visited internally, so the URL itself is not the problem. Thanks!
Do you run your script on a Mac? I also met a similar problem. You can execute ulimit -n to check how many files you can have open at a time.
You can use the code below to enlarge the limit:
import resource
# raise the soft open-file limit (4096 is an example value; pick your own)
resource.setrlimit(resource.RLIMIT_NOFILE, (4096, resource.RLIM_INFINITY))
Hoping this can help you.
See also my blog post, which is associated with your problem.
I had a similar case; hopefully it can save some time for you:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8001): Max retries exceeded with url: /enroll/ (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x10f96ecc0>: Failed to establish a new connection: [Errno 61] Connection refused'))
The problem was actually silly... the local server at port 8001 was down! Restarting the server solved it.
The error message (which is admittedly a little confusing) actually means that requests failed to connect to your requested URL at all.
In this case that's because your URL is http://bjtest.com/rest/data?method=check&test=123, which isn't a real website.
It has nothing to do with the format of your request. Fix your URL and it should (presumably) work for you.
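If the resets are transient (for example, under multi-threaded load), one common mitigation, offered here only as a sketch and not as something from the answers above, is to mount an HTTPAdapter with an explicit urllib3 Retry policy so that failed requests back off and retry automatically:
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# 5 attempts with exponential backoff; the status codes listed are examples
retry = Retry(total=5, backoff_factor=0.5,
              status_forcelist=(500, 502, 503, 504))

session = requests.Session()
session.mount('http://', HTTPAdapter(max_retries=retry))
session.mount('https://', HTTPAdapter(max_retries=retry))

res = session.get('http://bjtest.com/rest/data?method=check&test=123')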

Python requests.get(url, timeout=75) does not wait for specified timeout

requests.get("http://172.19.235.178", timeout=75)
is my piece of code.
It is trying a GET request on the URL, which belongs to a phone, and is supposed to wait up to 75 seconds for it to return a 200 OK.
This request works perfectly on one Ubuntu machine but does not wait for 75 seconds on another machine.
According to the documentation at https://2.python-requests.org/en/master/user/advanced/#timeouts you can set a timeout on the requests connection, but the timeout you are encountering is an OS-related socket timeout.
Notice that if you do:
requests.get("http://172.19.235.178", timeout=1)
you get:
ConnectTimeout: HTTPConnectionPool(host='172.19.235.178', port=80): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x...>, 'Connection to 172.19.235.178 timed out. (connect timeout=1)'))
while when you do
requests.get("http://172.19.235.178", timeout=75)
you get:
ConnectionError: HTTPConnectionPool(host='172.19.235.178', port=80): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x...>: Failed to establish a new connection: [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond',))
You could change your OS behavior as described here: http://willbryant.net/overriding_the_default_linux_kernel_20_second_tcp_socket_connect_timeout
In your case, I would use a timeout of 10 and iterate over it a few times with a try/except statement, as sketched below.
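A sketch of that suggestion (the attempt count is arbitrary):
import requests
from requests.exceptions import ConnectionError, Timeout

resp = None
for attempt in range(8):  # arbitrary number of attempts
    try:
        resp = requests.get('http://172.19.235.178', timeout=10)
        break  # got an answer; stop retrying
    except (ConnectionError, Timeout):
        continue  # the phone is not responding yet; try again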
