File "C:\Python27\lib\socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
socket.gaierror: [Errno 11004] getaddrinfo failed
Getting this error when launching the hello world sample from here:
http://bottlepy.org/docs/dev/
It most likely means the hostname can't be resolved. Try the lookup yourself in the interactive interpreter:
import socket
socket.getaddrinfo('localhost', 8080)
If it doesn't work there, it's not going to work in the Bottle example. You can try '127.0.0.1' instead of 'localhost' in case that's the problem.
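As a sketch, the lookup and the '127.0.0.1' fallback can be combined into one helper (the host and port here are just the ones from the Bottle example):

```python
import socket

def resolve(host, port):
    """Resolve host:port, falling back to 127.0.0.1 if the name fails."""
    try:
        return socket.getaddrinfo(host, port)
    except socket.gaierror:
        return socket.getaddrinfo("127.0.0.1", port)

info = resolve("localhost", 8080)
print(info[0][4])  # the first resolved sockaddr tuple
```

If the fallback is what works, point the Bottle app at '127.0.0.1' explicitly.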
The problem, in my case, was that some install at some point defined an environment variable http_proxy on my machine when I had no proxy.
Removing the http_proxy environment variable fixed the problem.
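A quick way to check for (and clear) a stray proxy variable from Python itself, before retrying the request, might look like this:

```python
import os

def clear_proxy_vars():
    """Remove any proxy variables from this process's environment.

    Returns the (name, value) pairs that were removed, so you can see
    what a stray installer left behind.
    """
    removed = []
    for name in ("http_proxy", "HTTP_PROXY", "https_proxy", "HTTPS_PROXY"):
        if name in os.environ:
            removed.append((name, os.environ.pop(name)))
    return removed

print(clear_proxy_vars())
```

This only affects the current process; to fix it permanently, delete the variable from your system settings as described above.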
The problem in my case was that I needed to add environment variables for http_proxy and https_proxy.
E.g.,
http_proxy=http://your_proxy:your_port
https_proxy=https://your_proxy:your_port
To set these environment variables in Windows, see the answers to this question.
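If you'd rather not touch system settings, the same variables can be set for the current process from Python; proxy.example.com:3128 below is a placeholder for your actual proxy host and port:

```python
import os

# proxy.example.com:3128 is a placeholder -- substitute your corporate
# proxy's real host and port. Note that many proxies use an http:// URL
# even for the https_proxy variable.
os.environ["http_proxy"] = "http://proxy.example.com:3128"
os.environ["https_proxy"] = "http://proxy.example.com:3128"

# urllib2 / urllib.request, requests, pip, etc. all honor these for any
# connection opened later in the same process.
print(os.environ["http_proxy"])
```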
Make sure you pass a proxy option in your command, for example:
pip install --proxy=http://proxyhost:proxyport pixiedust
Use a proxy that has a direct connection (with or without a password); your corporate IT administrator can tell you which one. A quick shortcut is to reuse the network settings that already work in Eclipse, since those will have a direct connection.
You will encounter this issue often if you work behind a corporate firewall. Check your Internet Explorer settings: Internet Options > LAN Connection > Settings.
Uncheck "Use automatic configuration script".
Check "Use a proxy server for your LAN", and make sure you have entered the right address and port.
Click OK.
Go back to the Anaconda terminal and retry the install commands.
Maybe this will help someone. I had my proxy set up in a Python script but kept getting the error mentioned in the question.
Below is the block in question; it relies on my username and password being defined as constants at the beginning of the script.
import urllib.request as req  # the missing import: `req` is urllib.request

if use_proxy:
    proxy = req.ProxyHandler({'https': proxy_url})
    auth = req.HTTPBasicAuthHandler()
    opener = req.build_opener(proxy, auth, req.HTTPHandler)
    req.install_opener(opener)
If you are using a corporate laptop and you are not connected to DirectAccess or the office VPN, the block above will throw this error. All you need to do is connect to your organization's VPN and then run the Python script.
I spent a few good hours fixing this, but the solution turned out to be really simple: my FTP server address started with ftp://. I removed the scheme and the code started working.
FTP address before:
ftp_css_address = "ftp://science-xyz.xyz.xyz.int"
Changed it to:
ftp_css_address = "science-xyz.xyz.xyz.int"
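The scheme prefix is the whole problem: ftplib expects a bare hostname, so the "ftp://" ends up in the DNS lookup and fails with gaierror. A small strip helper (a sketch) avoids it:

```python
def bare_host(address):
    """Return the hostname with any leading ftp:// scheme removed."""
    prefix = "ftp://"
    return address[len(prefix):] if address.startswith(prefix) else address

# ftplib.FTP(bare_host(ftp_css_address)) would now resolve normally.
print(bare_host("ftp://science-xyz.xyz.xyz.int"))
```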
I have a brand-new Django setup on my computer. I ran the runserver command, and I get an ERR_CONNECTION_REFUSED in Chrome.
localhost is added to ALLOWED_HOSTS and I get no error from Django; when I check the port, it is not active.
I am running Django in WSL and accessing Chrome from Windows on the same machine.
I have tried binding to my IP, changing browsers, and adding entries to ALLOWED_HOSTS. I initially had this issue in another project and set up this new project to see whether the problem would resolve. It didn't, and the new project is completely clean, so there is no way something could be messed up there.
I tried running the server in Windows and finally got an error:
Error: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions
I ran it with a whole lot of different port numbers, which I am sure are not in use, but no luck.
Any help would be greatly appreciated
edit 3: lol, a simple restart of my computer did the trick. I guess it was a port block, which is really weird because I tried dozens of ports and none of them showed up as in use when I used netstat.
My Python application runs on port 6666 on a Linux machine, which I can connect to using PuTTY, and I have sudo permission to execute commands (I don't know the root password).
1. If I change the port number in that application to 443 and run it, I get a permission-denied error at socket binding time.
2. If I run the same thing with sudo, I get a module-not-found error.
If I open https://that_server_name:6666/path_to_my_appln from my localhost, I get a "could not find response" error (since I can successfully run the application on port 6666, I started it and then tried to open that URL).
If I open https://that_server_name:443/path_to_my_appln instead, I get a 503 Service Temporarily Unavailable error (because of the errors above for port 443, I did not start the application in the backend).
My question is: how do I map port 443 to an application running on port 6666?
In order to listen on a port below 1024 on Linux you need root permissions. You can:
Run the program as root and secure it, for example by dropping privileges after binding the socket.
Use a web server (Apache, nginx, ...) to proxy the request.
Of course there are more solutions.
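A minimal sketch of the first option: bind the privileged port while still root, then drop to an unprivileged account (the uid/gid arguments are placeholders for your service account's IDs):

```python
import os
import socket

def bind_then_drop(port, uid, gid):
    """Bind a (possibly privileged) port, then drop root privileges.

    Must be started as root for ports < 1024; uid/gid identify the
    unprivileged account to continue running as.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))   # this is the step that needs root for 443
    srv.listen(5)
    os.setgid(gid)         # drop the group first...
    os.setuid(uid)         # ...then the user; the order matters
    return srv
```

After the setuid call the process can no longer regain root, so a later compromise of the application does not hand out root access.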
You should try to solve the module-not-found error; that would be a good solution to your problem. If you post the module-not-found error, that would be helpful. How are you running the Python application? Are you running it from a virtualenv?
The code below:
import urllib2
file = urllib2.urlopen("http://foo.bar.com:82")
works just fine on my Mac (OS X 10.8.4 running Python 2.7.1). It opens the URL and I can parse the file with no problems.
When I try the EXACT same code (these two lines) in GoDaddy Python 2.7.3 (or 2.4) I receive an error:
urllib2.URLError: <urlopen error (111, 'Connection refused')>
The problem has something to do with the port :82 that is an essential part of the address. I have tried using a forwarding address with masking, etc., and nothing works.
Any idea why it would work in one environment and not in the other (ostensibly similar) environment? Any ideas how to get around this? I also tried Mechanize to no avail. Previous posts have suggested focusing on urllib2.HTTPBasicAuthHandler, but it works fine on my OS X environment without anything special.
Ideas are welcome.
Connection refused means that your operating system tried to contact the remote host, but got a "closed port" message.
Most likely, this is because of a firewall between GoDaddy and foo.bar.com. Most likely, foo.bar.com is only reachable from your computer or your local network, but it also could be GoDaddy preventing access to strange ports.
From a quick look at the GoDaddy support forums, it looks like they only support outgoing requests to ports 80 (HTTP) and 443 (HTTPS) on their shared hosts. See e.g.
http://support.godaddy.com/groups/web-hosting/forum/topic/curl-to-ports-other-than-80/
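You can confirm this from the GoDaddy host itself with a small outbound-connection probe (example.com and the ports below are just illustrative targets):

```python
import socket

def port_reachable(host, port, timeout=5):
    """Return True if an outbound TCP connection to host:port succeeds."""
    try:
        conn = socket.create_connection((host, port), timeout=timeout)
        conn.close()
        return True
    except OSError:
        return False

# On a shared host that only allows web ports outbound, port 80 would
# typically succeed while a nonstandard port like 82 is blocked.
print(port_reachable("example.com", 80))
print(port_reachable("example.com", 82))
```

If port 82 is indeed blocked, the workaround is to expose foo.bar.com's service on 80 or 443 (for example behind a reverse proxy) rather than fight the host's firewall.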
I'm trying to set up an ipython notebook server on an Ubuntu host machine - but can't seem to access it remotely. I set up the notebook server as per the tutorial, and launch it - everything seems fine. But going to https://my-host-ip:9999/ I get a timeout (error 118) message in the browser.
My intuition is that I need to open the appropriate port (9999 in the setup tutorial) on my host. How do I do this (safely) with Ubuntu? More generally, is there a debugging checklist I should go through at this point?
Did you try to run it as a public server (listening on '*')?
http://ipython.org/ipython-doc/dev/interactive/htmlnotebook.html#running-a-public-notebook-server
Don't forget to serve it over https and set a password.
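For reference, a minimal server section of the profile's ipython_notebook_config.py (as in the linked docs) looks roughly like this; the certificate path and password hash are placeholders:

```python
# ipython_notebook_config.py -- placeholder values throughout
c = get_config()

c.NotebookApp.ip = '*'                  # listen on all interfaces
c.NotebookApp.port = 9999
c.NotebookApp.open_browser = False
c.NotebookApp.certfile = u'/path/to/mycert.pem'  # serve over HTTPS
c.NotebookApp.password = u'sha1:<hash>'          # from IPython.lib.passwd()
```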
Is the port 9999 open?
I don't know why, but I had to set the IP to the IP of the host to make it work.
I'm having a weird problem. I have this Python application and when I try to open a url in the application, for instance urllib2.urlopen("http://google.com", None) I get the following error:
IOError: [Errno socket error] [Errno 8] nodename nor servname provided, or not known
However when I do the same thing on the python command line interpreter it works fine. The same python executable is being used for both the application and the command line.
nslookup google.com seems to work fine. I opened up wireshark and it looks like when the application tries to open google.com only a mDNS query goes out for "My-Name-MacBook-Pro.local". However, when the command line tries to open google.com a regular DNS query goes out for "google.com" I found if I hardcoded Google's IP in /etc/hosts then the request from the application finally started working.
It seems something weird must be altering how the application resolves domain names, but I have no idea what could be doing this.
I'm running Mac OSX 10.6.7 and Python 2.6.
Edit: I am not using a proxy to access the internet
First check that you don't have an HTTP_PROXY environment variable set, which could be preventing this (in which case, that would be a bad error message). Then try again, like:
import urllib
r = urllib.urlopen('http://www.google.com')
print r.read()