On Linux I started a server at localhost:8090. From a Docker container, through a Jupyter notebook, I am trying to send a PUT request to that localhost server. GET requests work fine from the Docker container, but PUT requests fail with the following error.
ConnectionError: HTTPConnectionPool(host='localhost', port=8090): Max retries exceeded with url: /xxxxx/xxxxx/xxxxx (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6b9ee1d978>: Failed to establish a new connection: [Errno 111] Connection refused'))
Using curl from the local terminal, I am able to send a PUT request without any problem.
The PUT requests seemed to work for the first 1-2 hours (even from the Docker container), but then the error started appearing. Is it possible that some connections are still alive and the server cannot accept any more? Restarting the server and my machine did not fix the problem.
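For reference, the calls from the notebook look roughly like the sketch below; the endpoint path is the placeholder from the traceback, not a real route, and the PUT payload is a made-up example.
import requests

BASE = "http://localhost:8090"  # the server started on the Linux host
ENDPOINT = BASE + "/xxxxx/xxxxx/xxxxx"  # placeholder path from the error above

# GET succeeds from inside the container...
print(requests.get(ENDPOINT, timeout=5).status_code)

# ...while the equivalent PUT raises the ConnectionError (Errno 111).
try:
    print(requests.put(ENDPOINT, json={"key": "value"}, timeout=5).status_code)
except requests.exceptions.ConnectionError as exc:
    print("PUT failed:", exc)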
I am calling an Informatica REST API from POSTMAN and from Python (the requests library), and the behavior is quite puzzling.
When I am on the VPN I can only make a successful call from POSTMAN; however, if I switch the VPN off, both the Python and POSTMAN calls work perfectly.
The Python script was generated automatically by POSTMAN.
Error:
ConnectionError: HTTPSConnectionPool(host='use4-mdm.dm-us.informaticacloud.com', port=443): Max retries exceeded with url: /rdm-service/external/v2/export (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000002393AB297C0>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Any ideas what the reason might be?
UPD
To make my question clearer:
This is a corporate VPN on a work laptop
My system does not have any *_PROXY variables set
There is no default proxy in the requests library:
import requests
session = requests.Session()
session.proxies
>>> {}
The http.client library gives the same result
The POSTMAN settings are shown in the screenshot below
You may want to have a look at the "proxies" parameter of Python requests :)
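A minimal sketch of what that looks like, assuming a hypothetical corporate proxy address and guessing at the HTTP method and payload for the export endpoint from the error message:
import requests

# Hypothetical corporate proxy address - replace with your actual proxy.
proxies = {
    "http": "http://corporate-proxy.example.com:8080",
    "https": "http://corporate-proxy.example.com:8080",
}

# Either pass the proxies per request...
resp = requests.post(
    "https://use4-mdm.dm-us.informaticacloud.com/rdm-service/external/v2/export",
    json={},  # payload omitted; the real body comes from the POSTMAN export
    proxies=proxies,
    timeout=30,
)
print(resp.status_code)

# ...or set them once on a Session so every call reuses them.
session = requests.Session()
session.proxies.update(proxies)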
I have been using pychrome and the Chrome DevTools Protocol to check for network requests in Chrome DevTools. It was working successfully yesterday. I have made no changes, and today I have started getting this error
ConnectionError: HTTPConnectionPool(host='localhost', port=8000): Max retries exceeded with url: /json/new (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x102b5e0a0>: Failed to establish a new connection: [Errno 61] Connection refused'))
I have tried killing anything running on any port and checked that port 8000 was free. Using the basic sample script given by pychrome here https://github.com/fate0/pychrome, I still get the same error, so it must be something on my machine causing the issue, but I can't figure out why it worked fine yesterday and not today. Using the script given on pychrome's GitHub page, it fails for me at the step tab = browser.new_tab()
All suggestions greatly appreciated.
I realised that I never started the headless browser. For pychrome to read the network tab, you first start a headless Chrome instance, and pychrome then connects to it and drives it to the site you want; one is basically reading the other. Starting the Chrome browser first, so it acts as a sort of server, fixed it.
sudo /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --headless --disable-gpu --remote-debugging-port=8000
That's how it's started on macOS, at least. I think it's easier on Linux.
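With Chrome listening on the remote-debugging port, the pychrome side looks roughly like the sketch below (method names follow the pychrome README examples; the target URL is just an example):
import pychrome

# Connect to the headless Chrome started with --remote-debugging-port=8000.
browser = pychrome.Browser(url="http://127.0.0.1:8000")

# This is the call that fails with "Connection refused" when Chrome isn't running.
tab = browser.new_tab()

def request_will_be_sent(**kwargs):
    # Print every outgoing request URL seen by the DevTools Network domain.
    print("loading:", kwargs.get("request", {}).get("url"))

tab.set_listener("Network.requestWillBeSent", request_will_be_sent)
tab.start()
tab.call_method("Network.enable")
tab.call_method("Page.navigate", url="https://example.com", _timeout=10)
tab.wait(5)
tab.stop()
browser.close_tab(tab)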
We recently configured AWS PrivateLink for our Snowflake account and updated the Python connector (v1.8.0) properties to use the PrivateLink URL.
The connection keeps failing with the error below.
Failed to execute request: HTTPSConnectionPool(host='testaccount.us-west-2.privatelink.snowflakecomputing.com', port=443): Max retries exceeded with url: /session/v1/login-request?warehouse=TEST_WH&request_id=12345&request_guid=f5467 (Caused by ProtocolError('Connection aborted.', BadStatusLine("''",)))
Has anyone encountered this issue when using AWS PrivateLink?
Any input would be greatly appreciated.
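For context, the connector is configured roughly as below; the credentials are placeholders, the account value reuses the PrivateLink hostname prefix from the error message, and the parameter names are the standard snowflake-connector-python ones:
import snowflake.connector

# Placeholder credentials; the account identifier keeps the ".privatelink" suffix
# so the connector resolves the PrivateLink endpoint instead of the public one.
conn = snowflake.connector.connect(
    user="TEST_USER",
    password="********",
    account="testaccount.us-west-2.privatelink",
    warehouse="TEST_WH",
)
cur = conn.cursor()
cur.execute("SELECT CURRENT_REGION()")
print(cur.fetchone())
conn.close()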
On the host where you are running Python (Linux or macOS), can you run:
curl -v -L https://testaccount.us-west-2.privatelink.snowflakecomputing.com:443
Do you know if you have a proxy in place that is allowing the web URL to work in your browser?
https://www.digitalcitizen.life/how-set-proxy-server-all-major-internet-browsers-windows
That error means the Python code is unable to reach the PrivateLink URL: either it is running on a host that is blocked, a firewall is blocking it, or it requires a proxy.
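If curl is not available on that host, a rough Python equivalent of the same reachability check (using requests and the account hostname from the error message) would be:
import requests

url = "https://testaccount.us-west-2.privatelink.snowflakecomputing.com:443"
try:
    # allow_redirects mirrors curl's -L; we only care whether the host is reachable.
    resp = requests.get(url, allow_redirects=True, timeout=10)
    print("Reachable, HTTP status:", resp.status_code)
except requests.exceptions.RequestException as exc:
    print("Not reachable from this host:", exc)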
My application runs with uWSGI on a server that uses a proxy; the server and the proxy are on the same network. But whenever the application tries to make an HTTPS request (port 443), it raises an error as if it cannot connect to the internet.
HTTPSConnectionPool(host='api.blabla.com', port=443): Max retries exceeded with url: /endpoint/v2/checkout_url (Caused by NewConnectionError(': Failed to establish a new connection: [Errno -3] Temporary failure in name resolution',))
For the proxy, I'm using Squid for both HTTP and HTTPS, and on the client side the proxy has been set up system-wide.
I'd kindly appreciate any advice on this issue. Thanks in advance.
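For what it's worth, a system-wide proxy is only picked up by requests if the uWSGI process actually sees the HTTP(S)_PROXY environment variables; a minimal sketch of setting the proxy explicitly, assuming a hypothetical Squid address, looks like this:
import os
import requests

# Hypothetical Squid address - replace with the real proxy host and port.
SQUID = "http://squid.internal:3128"

# Option 1: make sure the uWSGI process itself has the proxy variables...
os.environ.setdefault("HTTP_PROXY", SQUID)
os.environ.setdefault("HTTPS_PROXY", SQUID)

# Option 2: ...or pass the proxy explicitly, so the hostname is resolved by
# Squid (via CONNECT) rather than by the application host.
resp = requests.post(
    "https://api.blabla.com/endpoint/v2/checkout_url",
    json={},  # payload omitted
    proxies={"http": SQUID, "https": SQUID},
    timeout=30,
)
print(resp.status_code)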
I searched for similar questions on Stack Overflow and suspect this is a proxy-setting issue, but I am wondering why, for example, requests.get('http://google.com') returns the correct response without any errors when executed in cmd on a Windows 7 machine, yet when I make requests from my Django project using the manage.py runserver test site, I get the following error:
CONNECTIONERROR: Max retries exceeded with url: (Caused by <class 'socket.gaierror'>: [Errno 11004] getaddrinfo failed)
I assume the problem is the test server that you run with manage.py, since the code works in cmd. Thanks in advance for any explanation of why this is occurring.
The DNS lookup is failing on your server. You should investigate why your server cannot resolve the domain name you're using.
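As a quick check, the lookup that the error refers to can be reproduced directly from Python on the affected machine (the hostname here is the example from the question):
import socket

try:
    # This is the call that raises [Errno 11004] in the traceback above.
    results = socket.getaddrinfo("google.com", 80)
    print("Resolved:", results[0][4])
except socket.gaierror as exc:
    print("DNS lookup failed:", exc)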