I'm using the REST API for a Cisco CMX device, and trying to write Python code which makes a GET request to the API for information. The code is as follows and is the same as that in the file except with the necessary information changed.
from http.client import HTTPSConnection
from base64 import b64encode
# Create HTTPS connection
c = HTTPSConnection("0.0.0.0")
# encode as Base64, then decode to ASCII (Python 3 stores it as a byte string;
# an ASCII value is needed for auth)
username_password = b64encode(b"admin:password").decode("ascii")
headers = {'Authorization': 'Basic {0}'.format(username_password)}
# connect and ask for resource
c.request('GET', '/api/config/v1/aaa/users', headers=headers)
# response
res = c.getresponse()
data = res.read()
However, I am continuously getting the following error:
Traceback (most recent call last):
File "/Users/finaris/PycharmProjects/test/test/test.py", line 14, in <module>
c.request('GET', '/api/config/v1/aaa/users', headers=headers)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1106, in request
self._send_request(method, url, body, headers)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1151, in _send_request
self.endheaders(body)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1102, in endheaders
self._send_output(message_body)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 934, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 877, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/http/client.py", line 1260, in connect
server_hostname=server_hostname)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 377, in wrap_socket
_context=self)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 752, in __init__
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 988, in do_handshake
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/ssl.py", line 633, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:645)
I also tried updating OpenSSL but that had no effect.
I had the same error and Google brought me to this question, so here is what I did, hoping that it helps others in a similar situation.
This is applicable to OS X.
First, I checked in the Terminal which version of OpenSSL I had:
$ python3 -c "import ssl; print(ssl.OPENSSL_VERSION)"
>> OpenSSL 0.9.8zh 14 Jan 2016
As my version of OpenSSL was too old, the accepted answer did not work.
So I had to update OpenSSL. To do this, I updated Python to the latest version (from version 3.5 to version 3.6) with Homebrew, following some of the steps suggested here:
$ brew update
$ brew install openssl
$ brew install python3
Then I had problems with the PATH and the version of Python being used, so I just created a new virtualenv, making sure that the newest version of Python was picked up:
$ virtualenv webapp --python=python3.6
Issue solved.
The only thing you have to do is to install requests[security] in your virtualenv. You should not have to use Python 3 (it should work in Python 2.7). Moreover, if you are using a recent version of macOS, you don't have to use homebrew to separately install OpenSSL either.
$ virtualenv --python=/usr/bin/python tempenv # uses system python
$ . tempenv/bin/activate
$ pip install requests
$ python
>>> import ssl
>>> ssl.OPENSSL_VERSION
'OpenSSL 0.9.8zh 14 Jan 2016' # this is the built-in openssl
>>> import requests
>>> requests.get('https://api.github.com/users/octocat/orgs')
requests.exceptions.SSLError: HTTPSConnectionPool(host='api.github.com', port=443): Max retries exceeded with url: /users/octocat/orgs (Caused by SSLError(SSLError(1, u'[SSL: TLSV1_ALERT_PROTOCOL_VERSION] tlsv1 alert protocol version (_ssl.c:590)'),))
$ pip install 'requests[security]'
$ python # install requests[security] and try again
>>> import requests
>>> requests.get('https://api.github.com/users/octocat/orgs')
<Response [200]>
requests[security] allows requests to use the latest version of TLS when negotiating the connection. The built-in openssl on macOS (0.9.8zh, as shown above) does not support TLS v1.2.
Before you install your own version of OpenSSL, ask this question: how is Google Chrome loading https://github.com?
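To see for yourself which OpenSSL your interpreter is linked against and whether it even exposes a TLS v1.2 constant, here is a minimal check using only the standard library:

import ssl

# Which OpenSSL is this interpreter built against?
print(ssl.OPENSSL_VERSION)
# PROTOCOL_TLSv1_2 is only present when Python and its OpenSSL can speak TLS v1.2
print("TLS v1.2 available:", hasattr(ssl, "PROTOCOL_TLSv1_2"))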
I believe TLSV1_ALERT_PROTOCOL_VERSION is alerting you that the server doesn't want to talk TLS v1.0 to you. Try to specify TLS v1.2 only by sticking in these lines:
import ssl
from http.client import HTTPSConnection
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
# Create HTTPS connection
c = HTTPSConnection("0.0.0.0", context=context)
Note, you may need sufficiently new versions of Python (2.7.9+ perhaps?) and possibly OpenSSL (I have "OpenSSL 1.0.2k 26 Jan 2017" and the above seems to work, YMMV)
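Putting that together with the code from the question (the host and credentials below are the question's placeholders, not real values):

import ssl
from base64 import b64encode
from http.client import HTTPSConnection

# Force TLS v1.2 for the handshake
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
c = HTTPSConnection("0.0.0.0", context=context)

username_password = b64encode(b"admin:password").decode("ascii")
headers = {'Authorization': 'Basic {0}'.format(username_password)}

c.request('GET', '/api/config/v1/aaa/users', headers=headers)
res = c.getresponse()
print(res.status, res.reason)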
None of the accepted answers pointed me in the right direction, and this is still the question that comes up when searching the topic, so here's my (partially) successful saga.
Background: I run a Python script on a Beaglebone Black that polls the cryptocurrency exchange Poloniex using the python-poloniex library. It suddenly stopped working with the TLSV1_ALERT_PROTOCOL_VERSION error.
Turns out that OpenSSL was fine, and trying to force a v1.2 connection was a huge wild goose chase - the library will use the latest version as necessary. The weak link in the chain was actually Python, which has only defined ssl.PROTOCOL_TLSv1_2, and therefore supported TLS v1.2, since version 3.4.
Meanwhile, the version of Debian on the Beaglebone considers Python 3.3 the latest. The workaround I used was to install Python 3.5 from source (3.4 might have eventually worked too, but after hours of trial and error I'm done):
sudo apt-get install build-essential checkinstall
sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev
wget https://www.python.org/ftp/python/3.5.4/Python-3.5.4.tgz
sudo tar xzf Python-3.5.4.tgz
cd Python-3.5.4
./configure
sudo make altinstall
Maybe not all those packages are strictly necessary, but installing them all at once saves a bunch of retries. The altinstall prevents the install from clobbering existing python binaries, installing as python3.5 instead, though that does mean you have to re-install additional libraries. The ./configure took a good five or ten minutes. The make took a couple of hours.
Now this still didn't work until I finally ran
sudo -H pip3.5 install requests[security]
This also installs pyOpenSSL, cryptography and idna. I suspect pyOpenSSL was the key, so maybe pip3.5 install -U pyopenssl would have been sufficient, but I've spent far too long on this already to make sure.
So in summary, if you get a TLSV1_ALERT_PROTOCOL_VERSION error in Python, it's probably because your client can't support TLS v1.2. To add support, you need at least the following:
OpenSSL 1.0.1
Python 3.4
requests[security]
This has got me past TLSV1_ALERT_PROTOCOL_VERSION, and now I get to battle with SSL23_GET_SERVER_HELLO instead.
Turns out this is back to the original issue of Python selecting the wrong SSL version. This can be confirmed by using this trick to mount a requests session with ssl_version=ssl.PROTOCOL_TLSv1_2. Without it, SSLv23 is used and the SSL23_GET_SERVER_HELLO error appears. With it, the request succeeds.
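For reference, the usual shape of that trick is a small transport adapter that pins the pool's ssl_version. This is only a sketch of that approach (the adapter class name is mine, and it may not be the exact code behind the lost link):

import ssl
import requests
from requests.adapters import HTTPAdapter

class Tlsv12Adapter(HTTPAdapter):
    # Force every HTTPS connection made through this session to negotiate TLS v1.2
    def init_poolmanager(self, *args, **kwargs):
        kwargs['ssl_version'] = ssl.PROTOCOL_TLSv1_2
        return super(Tlsv12Adapter, self).init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount('https://', Tlsv12Adapter())
# session.get(...) now uses TLS v1.2 for https:// URLs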
The final battle was to force TLSv1_2 to be picked when the request is made deep within a third party library. Both this method and this method ought to have done the trick, but neither made any difference. My final solution is horrible, but effective. I edited /usr/local/lib/python3.5/site-packages/urllib3/util/ssl_.py and changed
def resolve_ssl_version(candidate):
    """
    like resolve_cert_reqs
    """
    if candidate is None:
        return PROTOCOL_SSLv23

    if isinstance(candidate, str):
        res = getattr(ssl, candidate, None)
        if res is None:
            res = getattr(ssl, 'PROTOCOL_' + candidate)
        return res

    return candidate
to
def resolve_ssl_version(candidate):
    """
    like resolve_cert_reqs
    """
    if candidate is None:
        return ssl.PROTOCOL_TLSv1_2

    if isinstance(candidate, str):
        res = getattr(ssl, candidate, None)
        if res is None:
            res = getattr(ssl, 'PROTOCOL_' + candidate)
        return res

    return candidate
and voila, my script can finally contact the server again.
As of July 2018, PyPI now requires that clients connecting to it use TLS 1.2. This is an issue if you're using the version of Python shipped with macOS (2.7.10), because it only supports TLS 1.0. You can change the version of SSL that Python is using to fix the problem, or upgrade to a newer version of Python. Use Homebrew to install the new version of Python outside of the default library location.
brew install python@2
For Mac OS X
1) Update to Python 3.6.5 using the native app installer downloaded from the official Python language website https://www.python.org/downloads/
I've found that the installer takes care of updating the links and symlinks for the new Python a lot better than Homebrew.
2) Install a new certificate using "./Install Certificates.command" which is in the refreshed Python 3.6 directory
> cd "/Applications/Python 3.6/"
> sudo "./Install Certificates.command"
Another source of this problem: I found that in Debian 9, the Python httplib2 is hardcoded to insist on TLS v1.0. So any application that uses httplib2 to connect to a server that insists on better security fails with TLSV1_ALERT_PROTOCOL_VERSION.
I fixed it by changing
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
to
context = ssl.SSLContext()
in /usr/lib/python3/dist-packages/httplib2/__init__.py .
Debian 10 doesn't have this problem.
I got this problem too.
On macOS, here is the solution:
Step 1: brew reinstall python. Now you have Python 3.7 instead of the old Python.
Step 2: Rebuild your env based on Python 3.7. My path is /usr/local/Cellar/python/3.7.2/bin/python3.7.
Now you'll no longer be bothered by this problem.
I encountered this exact issue when I attempted gem install bundler, and I was confused by all the Python responses (since I was using Ruby). Here was my exact error:
ERROR: Could not find a valid gem 'bundler' (>= 0), here is why:
Unable to download data from https://rubygems.org/ - SSL_connect returned=1 errno=0 state=SSLv2/v3 read server hello A: tlsv1 alert protocol version (https://rubygems.org/latest_specs.4.8.gz)
My solution: I updated Ruby to the most recent version (2.6.5). Problem solved.
I ran into this issue using Flask with the flask_mqtt extension. The solution was to add this to the Python file:
app.config['MQTT_TLS_VERSION'] = ssl.PROTOCOL_TLSv1_2
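In context that amounts to something like the following minimal sketch. Only MQTT_TLS_VERSION comes from the fix above; the broker URL and the MQTT_TLS_ENABLED flag are assumptions about a typical Flask-MQTT setup:

import ssl
from flask import Flask
from flask_mqtt import Mqtt

app = Flask(__name__)
app.config['MQTT_BROKER_URL'] = 'broker.example.com'   # placeholder broker
app.config['MQTT_TLS_ENABLED'] = True                  # assumed typical TLS setting
app.config['MQTT_TLS_VERSION'] = ssl.PROTOCOL_TLSv1_2  # the actual fix
mqtt = Mqtt(app)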
Using pip install for any module on my Ubuntu 16.04 system with Python 2.7.11+ apparently throws this error:
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
What is wrong with pip? How could I reinstall it, if necessary?
Update: Full traceback is below
sunny#sunny:~$ pip install requests
Collecting requests
Exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/pip/basecommand.py", line 209, in main
status = self.run(options, args)
File "/usr/lib/python2.7/dist-packages/pip/commands/install.py", line 328, in run
wb.build(autobuilding=True)
File "/usr/lib/python2.7/dist-packages/pip/wheel.py", line 748, in build
self.requirement_set.prepare_files(self.finder)
File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 360, in prepare_files
ignore_dependencies=self.ignore_dependencies))
File "/usr/lib/python2.7/dist-packages/pip/req/req_set.py", line 512, in _prepare_file
finder, self.upgrade, require_hashes)
File "/usr/lib/python2.7/dist-packages/pip/req/req_install.py", line 273, in populate_link
self.link = finder.find_requirement(self, upgrade)
File "/usr/lib/python2.7/dist-packages/pip/index.py", line 442, in find_requirement
all_candidates = self.find_all_candidates(req.name)
File "/usr/lib/python2.7/dist-packages/pip/index.py", line 400, in find_all_candidates
for page in self._get_pages(url_locations, project_name):
File "/usr/lib/python2.7/dist-packages/pip/index.py", line 545, in _get_pages
page = self._get_page(location)
File "/usr/lib/python2.7/dist-packages/pip/index.py", line 648, in _get_page
return HTMLPage.get_page(link, session=self.session)
File "/usr/lib/python2.7/dist-packages/pip/index.py", line 757, in get_page
"Cache-Control": "max-age=600",
File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 480, in get
return self.request('GET', url, **kwargs)
File "/usr/lib/python2.7/dist-packages/pip/download.py", line 378, in request
return super(PipSession, self).request(method, url, *args, **kwargs)
File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 468, in request
resp = self.send(prep, **send_kwargs)
File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/sessions.py", line 576, in send
r = adapter.send(request, **kwargs)
File "/usr/share/python-wheels/CacheControl-0.11.5-py2.py3-none-any.whl/cachecontrol/adapter.py", line 46, in send
resp = super(CacheControlAdapter, self).send(request, **kw)
File "/usr/share/python-wheels/requests-2.9.1-py2.py3-none-any.whl/requests/adapters.py", line 376, in send
timeout=timeout
File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/connectionpool.py", line 610, in urlopen
_stacktrace=sys.exc_info()[2])
File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 228, in increment
total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
Ubuntu comes with a version of pip from the Precambrian, and this is how you have to upgrade it if you do not want to spend hours and hours debugging pip-related issues:
apt-get remove python-pip python3-pip
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
python3 get-pip.py
As you observed I included information for both Python 2.x and 3.x
If you are behind a proxy, you must do some extra configuration steps before starting the installation. You must set the environment variable http_proxy to the proxy address. Using bash this is accomplished with the command
export http_proxy="http://user:pass@my.site:port/"
You can also provide the
--proxy=[user:pass@]url:port
parameter to pip. The [user:pass@] portion is optional.
Updating setuptools has worked out fine for me.
sudo pip install --upgrade setuptools
First of all, this problem exists because of network issues, and uninstalling and reinstalling everything won't be of much help. Probably you are behind a proxy, and in that case you need to set the proxy.
But in my case, I was facing the problem because I wasn't behind a proxy. Generally, I work behind a proxy, but when working from home, I set the proxy to None in the Network settings.
But I was still getting the same errors even after removing the proxy settings.
So, when I did type
env | grep proxy
I found something like this :
http_proxy=http://127.0.0.1:1234/
And this was the reason I was still getting the very same error, even when I thought I had removed the proxy settings.
To unset this proxy, type
unset http_proxy
Follow the same approach for all the other entries, such as https_proxy.
What happens here is that the vendored versions of requests/urllib3 clash when imported in two different places (same code, but different names). If you then have a network error, pip doesn't retry fetching the wheel, but fails with the above error. See here for a deeper dive into this error.
For the solution with system pip, see above.
If you have this problem in a virtualenv built by python -m venv (which still copies the wheels from /usr/share/python-wheels, even if you have pip installed separately), the easiest way to "fix" it seems to be:
create the virtualenv: /usr/bin/python3.6 -m venv ...
install requests into the environment (this might raise the above error): <venv>/bin/pip install requests
remove the copied versions of requests which would be used by pip: rm <venv>/share/python-wheels/{requests,chardet,urllib3}-*.whl
Now a <venv>/bin/pip uses the installed version of requests which has urllib3 vendored.
Port 443 is not open. If you are on AWS, just allow custom TCP port 443 in your security group; otherwise, open port 443 for outbound connections.
Just upgrading pip worked for me:
pip install --upgrade pip
I had the same problem when installing a Raspberry Pi TFT from Adafruit with pitft.sh / adafruit-pitft.sh.
I am not happy with coding styles where an error from somewhere has to be interpreted somehow - as can be seen in the previous answers.
Remark: The TypeError exception in retry.py is obviously a bug, caused by an inappropriate assignment and calculation of an instance of the class Retry against an int with the default value of 10 - somewhere in the code...
It should be fixed either by adding an in-place operator, or by fixing the erroneous assignment.
So I tried to analyse and patch the error itself first. The actual error in my case is the same - retry.py called by pip.
The installation script adafruit-pitft.sh / pitft.sh uses urllib3, which itself tries to install nested dependencies via pip, so the same error occurs.
https://github.com/adafruit/Raspberry-Pi-Installer-Scripts/blob/master/adafruit-pitft.sh
https://github.com/adafruit/Raspberry-Pi-Installer-Scripts
adafruit-pitft.sh # or pitft.sh
...
_stacktrace=sys.exc_info()[2])
File "/usr/share/python-wheels/urllib3-1.13.1-py2.py3 none-any.whl/urllib3/util/retry.py", line 228, in increment
total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
For the current distribution (based on debian-9.6.0/stretch):
File "/usr/share/python-wheels/urllib3-1.19.1-py2.py3-none-any.whl/urllib3/util/retry.py", line 315, in increment
total -= 1
TypeError: unsupported operand type(s) for -=: 'Retry' and 'int'
The following - admittedly dirty :) - patch enables a sound error trace:
# File: retry.py - in "def increment(self, ...", about line 315
# original: total = self.total
# patch: quick-and-dirty-fix
# START:
if isinstance(self.total, Retry):
    self.total = self.total.total
if type(self.total) is not int:
    self.total = 2  # default is 10
# END:
# continue with original:
total = self.total
if total is not None:
    total -= 1

connect = self.connect
read = self.read
redirect = self.redirect
cause = 'unknown'
status = None
redirect_location = None

if error and self._is_connection_error(error):
    # Connect retry?
    if connect is False:
        raise six.reraise(type(error), error, _stacktrace)
    elif connect is not None:
        connect -= 1
The output with the temporary patch in place is (displayed twice...?):
Retrying (Retry(total=1, connect=None, read=None, redirect=None)) after connection broken by 'ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at/
Retrying (Retry(total=0, connect=None, read=None, redirect=None)) after connection broken by
'ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at/
Could not find a version that satisfies the requirement evdev (from versions: )
No matching distribution found for evdev
WARNING : Pip failed to install software!
So in my case two things actually caused the error; this may vary in other environments:
Missing evdev => try to install it
Failure to connect to a repo/dist containing evdev in order to download it => finally give up
My installation environment is offline, using an internal debian+raspbian mirror, thus I do not want to set the proxy...
So I proceeded with a manual installation of the missing component evdev:
Download evdev from PyPI (or e.g. from github.com):
https://pypi.org/project/evdev/
https://files.pythonhosted.org/packages/7e/53/374b82dd2ccec240b7388c65075391147524255466651a14340615aabb5f/evdev-1.1.2.tar.gz
Unpack and install it manually as the root user - for all local accounts, so it is detected as installed:
sudo su -
tar xf evdev-1.1.2.tar.gz
cd evdev-1.1.2
python setup.py install
Call install script again:
adafruit-pitft.sh # or pitft.sh
...Answer dialogues...
...that's it.
If you proceed online by direct PyPI access:
check your routing + firewall for access to pypi.org
set a proxy if required (http_proxy/https_proxy)
And it works..
Hope this helps in other cases too.
See also: issue - 35334: https://bugs.python.org/issue35334
See now also: issue - 1486: https://github.com/urllib3/urllib3/issues/1486
for file: https://github.com/urllib3/urllib3/blob/master/src/urllib3/util/retry.py
Check for network issues to bypass the exception-case code.
In my case, I was using a custom index; that index had no route, which would trigger the exception-case code. The exception-case bug still exists and still masks the real issue, however I was able to work around this by testing the connectivity with other tools such as nc -vzw1 myindex.example.org 443 and retrying when the network was up.
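If you prefer to run that connectivity test from Python, here is a small sketch (the host name is the placeholder from the nc example above):

import socket

def index_reachable(host="myindex.example.org", port=443, timeout=5):
    # True if a plain TCP connection to the index host can be established
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return True
    except OSError:
        return False

print(index_reachable())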
I was facing a similar issue while trying to install the awscli tool on an EC2 instance.
I changed the security group to allow port 443 inbound and outbound access, and that solved the issue for me.
I got this error when I was trying to create a virtualenv with the command virtualenv myVirtualEnv. I just added sudo before the command; it solved everything.
Solution:
1. sudo apt remove python-pip
2. pip3 install pip (or install pip by get-pip.py)
Why:
This error occurred on pip 8.0.1, which was installed by apt-get, and it happened only when the network was unstable.
If you have a pip installed with apt, it hides the pip you installed in other ways, so you should remove the apt one first.
I disconnected the network and tested the three versions 8.0.1, 9.0.3 and 10.x installed with pip3 or get-pip.py; no error occurred. So I think only the apt version of pip 8.0.1 has that bug; the others are OK.
I tried the solution answered above:
apt-get remove python-pip python3-pip
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
python3 get-pip.py
When I tried
python get-pip.py
python3 get-pip.py
I got this message
Could not install packages due to an EnvironmentError:
[Errno 13] Permission denied: '/usr/bin/pip3'
Consider using the --user option or check the permissions.
I did the following and it works
python3 -m venv env
source ./env/bin/activate
sudo apt-get update
apt-get remove python-pip python3-pip
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
python3 get-pip.py
pip3 install pip
sudo easy_install pip
pip install --upgrade pip
In my case, I had opened PyCharm in sudo mode and was running pip install nltk in the PyCharm terminal, which showed this error. Running it with sudo pip install solved the error.
I also had this issue. Initially, a proxy was set and worked fine. Then I connected to a network that doesn't go through a proxy. After unsetting the proxy, pip worked again.
unset http_proxy; unset https_proxy; unset HTTP_PROXY; unset HTTPS_PROXY
Bizarrely, if I remove the proxy from the environment and add it to the command line, it works for me. For example, to upgrade pip itself:
env http_proxy= https_proxy= pip install pip --upgrade --proxy 'http://proxy-url:80'
My issue was having the proxy in the environment. It seems that pip only honors the one passed as an argument.
This is the working solution I found for this problem:
sudo apt-get clean
cd /var/lib/apt
sudo mv lists lists.old
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update
For me, it turned out that wlan0 was down, which meant I was unable to connect out. Ensuring that wlan0 was up allowed pip / pip3 to work without issue.
Fixed it temporarily:
pip install requests -i http://a.b.com/pypi/simple --trusted-host a.b.com
Fixed it permanently:
Linux OS: add these lines to ~/.pip/pip.conf (create it if it does not exist):
[global]
index-url = http://a.b.com/pypi/simple
[install]
trusted-host = a.b.com
PS: replace http://a.b.com/pypi/simple with your own proxy/mirror address.
I'm trying to learn a bit of packet generation with scapy. It looks pretty cool. Following some documentation I'm doing this:
l3=IP(dst="192.168.0.1", src="192.168.0.2", tos=(46 << 2))
But I only get this error message:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.7/dist-packages/scapy/sendrecv.py", line 251, in send
__gen_send(conf.L3socket(*args, **kargs), x, inter=inter, loop=loop, count=count,verbose=verbose, realtime=realtime)
File "/usr/lib/python2.7/dist-packages/scapy/arch/linux.py", line 307, in __init__
self.ins = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(type))
File "/usr/lib/python2.7/socket.py", line 187, in __init__
_sock = _realsocket(family, type, proto)
error: [Errno 1] Operation not permitted
Running Scapy as root solved the problem, but that's not what I wanted. Is it because a normal user can't create a raw socket? If so, is there a solution?
Scapy needs root privileges to create raw sockets because it uses the Python socket library. Raw sockets are only allowed to be used "with an effective user ID of 0 or the CAP_NET_RAW capability", according to the Linux raw man pages.
I can't find what looks to be reliable documentation on setting the CAP_NET_RAW capability, but if you are looking for a workaround to run Scapy scripts that use raw sockets without root, that is what you need to do.
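A minimal reproduction of the underlying restriction, independent of Scapy (assuming Linux and Python 3; on the question's Python 2.7 the same failure surfaces as socket.error):

import socket

ETH_P_ALL = 0x0003  # from linux/if_ether.h

try:
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.close()
    print("raw socket created: running as root or with CAP_NET_RAW")
except PermissionError as exc:
    # The same "[Errno 1] Operation not permitted" as in the traceback above
    print("no raw-socket privilege:", exc)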
To run Scapy with just cap_net_raw privilege...
The safest and least complicated way I know is, in order:
Make a personal copy of the python binary:
$ sudo cp /usr/bin/python2.7 ~/python_netraw
Own it:
$ sudo chown <your-user-name> ~/python_netraw
Don't let anybody else run it:
$ chmod -x,u+x ~/python_netraw
Give it cap_net_raw capability:
$ sudo setcap cap_net_raw=eip ~/python_netraw
Run scapy with it:
$ ~/python_netraw -O /usr/bin/scapy
(Or use sudo each time you need to run Scapy with raw privileges.)
A dirty approach, possibly insecure: Directly give CAP_NET_RAW capability to Python:
sudo setcap cap_net_raw=eip $(readlink -f $(which python))
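To confirm that the interpreter you launched actually has the capability, you can inspect the effective capability set of the running process. A Linux-only sketch (CAP_NET_RAW is bit 13 per linux/capability.h):

# Run this inside the interpreter that was granted the capability
with open("/proc/self/status") as status:
    for line in status:
        if line.startswith("CapEff:"):
            cap_eff = int(line.split()[1], 16)
            print("CAP_NET_RAW effective:", bool(cap_eff & (1 << 13)))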
To run a temporary python environment (like for scapy) with cap_net_raw I found this works:
sudo -E capsh --caps="cap_setpcap,cap_setuid,cap_setgid+ep cap_net_raw+eip" --keep=1 --user="$USER" --addamb="cap_net_raw" -- -c /usr/bin/python3
adapted from the Arch Wiki
I'm getting a Windows error when I try to use pip or easy_install inside virtualenvs (Cygwin / Python 2.7 / Windows 7).
*** error: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions
This error shows up in other questions on SO, but they're all related to people trying to listen to port 80. In my case I'm trying to connect out, to port 80/443 - which shouldn't be nearly as restricted.
I can use pip with the main python, just not with the one in the virtualenv, so I don't think it's a library problem. The following is a test run, done in a console2 window with admin privileges.
$ which python
/cygdrive/c/Python27/python
I can reach pypi here:
$ pip install django
Downloading/unpacking django from https://pypi.python.org/packages/any/D/Dj....
^C (interrupted here)
virtualenv --version
1.11.4
making the virtualenv:
$ virtualenv testenv
New python executable in testenv\Scripts\python.exe
Installing setuptools, pip...done.
$ source testenv/Scripts/activate
The virtualenv is really activated (I can see sys.path is different when running the new Python).
(testenv)$ which python
/cygdrive/c/proj/testenv/Scripts/python
(testenv)$ which pip
/cygdrive/c/proj/testenv/Scripts/pip
(testenv)$ pip --version
pip 1.5.4 from C:\proj\testenv\lib\site-packages (python 2.7)
(testenv)$ pip install django
Cannot fetch index base URL https://pypi.python.org/simple/
Place breakpoint at c:\python27\Lib\socket.py : 576
(testenv)$ pip install django
the relevant code from around here in socket.py:
res=getaddrinfo(host, port, 0, SOCK_STREAM)
res
>>> [(2, 1, 0, '', ('103.245.222.184', 443))]
af, socktype, proto, canonname, sa = res[0]
sock = socket(af, socktype, proto)
sock
>>> <socket._socketobject object at 0x020505E0>
socket is created ok.
sock.connect(sa)
*** error: [Errno 10013] An attempt was made to access a socket in a way forbidden by its access permissions
socket library fails.
Since everything works except for the virtualenv, I think that somehow virtualenv isn't creating the Python executable in the right way, so that Windows isn't giving it enough permissions to even connect out.
I have no anti-virus software running, and the problem occurs even when I turn off the windows firewall.
Permissions: testenv, testenv/Scripts are 755. So is python & all exes in Scripts. Any ideas?
Where is the problem?
import nmap
I installed nmap and Python, and when I use import nmap there is no problem. But when I use:
nmap.PortScanner()
this error is thrown:
Traceback (most recent call last):
File "<pyshell#3>", line 1, in <module>
nmap.PortScanner()
File "./nmap/nmap.py", line 153, in __init__
raise PortScannerError('nmap program was not found in path. PATH is:{0}'.format(os.getenv('PATH')))
nmap.nmap.PortScannerError: 'nmap program was not found in path. PATH is: /usr/lib/lightdm/lightdm:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games'
For Windows users:
I would suggest first closing all terminals and IDLE or any other window you currently have opened when trying to run your script.
Next, open a command line and type
pip uninstall python-nmap
If you are unsure whether the Nmap binaries are installed on your current system, do a simple search for
nmap
from your Start menu. If it is installed, continue to the next step; if not, go to Nmap's official download page.
Download the Windows self-installer and run it. Note the directory it is installed to.
Go to that directory. For me it was
C:\Program Files (x86)\Nmap
Open your system's environment variables editor, usually found in
My PC > System Information > Advanced settings > Environment Variables
Or right-click My PC or My Computer (or whatever yours is called), select Properties, then Advanced settings, then Environment Variables at the bottom of the Advanced tab.
Select Path for both your user and the system.
Press Edit and enter the full path to your Nmap directory,
e.g. ;C:\Program Files (x86)\Nmap\
Press OK and exit the editor.
Now go back to your command line and enter: pip install python-nmap
Allow it to install, then restart your IDE and test your code again.
python-nmap seems to depend on nmap, which is the binary that does the actual network scanning and auditing.
You can check in a terminal if nmap is in your $PATH with the following command:
which nmap
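You can do the same check from inside Python (3.3+), which is useful because the PortScannerError above prints the PATH exactly as the interpreter sees it:

import shutil

# Prints the full path to the nmap binary, or None if it is not on the PATH
# visible to this interpreter
print(shutil.which("nmap"))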
Debian-like
You can install nmap in debian-like distros with:
apt-get install nmap
Arch Linux:
pacman -Sy nmap
Already installed nmap
If you're sure the nmap binary is installed, but you think it is not in your $PATH, you might have to add the directory where nmap is installed to your $PATH.
To do that, edit the .bashrc file in your user's directory, or /etc/bashrc (which will change for all users) and add the following:
export PATH="$PATH:/usr/local/nmap/bin"
replacing /usr/local/nmap/bin with the directory where the nmap binary is installed.
After changing the file, be sure to open a new shell session, or type exec bash to refresh it.
You also have to make sure that it has execute permission (chmod +x <file>).
When you execute:
nmap --version
You should see something like this:
Nmap version 6.46 ( http://nmap.org )
Platform: i686-pc-linux-gnu
Compiled with: liblua-5.2.3 openssl-1.0.1g libpcre-8.34 libpcap-1.5.3 nmap-libdnet-1.12 ipv6
Compiled without:
Available nsock engines: epoll poll select
If you do, nmap is installed and in your $PATH.
I have had the same problem. Just type in a terminal:
sudo apt-get install nmap
and problem solved.
I faced a similar issue while trying to run nm = nmap.PortScanner().
I tried most of the solutions given above, but they did not work for me. The thing that worked for me was installing nmap for Mac OS X using Homebrew (information at: http://brew.sh) and running the command
$ brew install nmap
Now nm = nmap.PortScanner() runs without the earlier error.
Running on a Raspberry Pi 3 with Jessie Lite.
I had to:
sudo apt-get update
sudo apt-get upgrade
then I could:
sudo apt-get install nmap
nmap --version
Note about nmap: I used nmap to scan the subnet 192.168.1.0/24, but it didn't seem to find ALL IPs. E.g. my laptop on 192.168.1.119 wasn't found, so I ended up using a combination of:
def ping(self, ip):
    # Use the system ping command with count of 1 and wait time of 1.
    ret = subprocess.call(['ping', '-c', '1', '-W', '1', ip],
                          stdout=open('/dev/null', 'w'),
                          stderr=open('/dev/null', 'w'))
    return ret == 0  # Return True if our ping command succeeds
inside a multithreaded Pinger
Pinger I got from: http://blog.boa.nu/2012/10/python-threading-example-creating-pingerpy.html
I created my own IpInfo class to store information and search for open ports on each IP, and here I use nmap: (Code is "work in progress", but you will get the idea. Ideas to tune performance would be nice)
import socket
import nmap
from time import strftime, gmtime

class IpInfo(object):
    ip = None
    hostname = None
    ports = []
    lastSeenAt = strftime("%Y-%m-%d %H:%M:%S", gmtime())

    def findHostName(self):
        # Reverse-DNS lookup of the stored IP
        if self.ip:
            self.hostname = str(socket.gethostbyaddr(self.ip)[0])
        else:
            raise NameError('IP missing')

    def findOpenPorts(self):
        print('findOpenPorts')
        nm = nmap.PortScanner()
        nm.scan(self.ip)
        nm.command_line()
        nm.scaninfo()
        for proto in nm[self.ip].all_protocols():
            print('----------')
            print('Protocol : %s' % proto)
            # Note: this was changed from the ['tcp'] literal to [proto]
            lport = sorted(nm[self.ip][proto].keys())
            for port in lport:
                if nm[self.ip][proto][port]['state'] == 'open':
                    self.ports.append(port)
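Hypothetical usage of the sketch above (the address is a placeholder):

info = IpInfo()
info.ip = '192.168.1.119'
info.findHostName()   # may raise if the address has no reverse DNS entry
info.findOpenPorts()
print(info.hostname, info.ports)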
For macOS users: simply use brew install nmap instead of using pip.
I have a perfect solution for this.
First type: apt-get remove nmap
Then: apt autoremove
Then: go to www.pypi.org
Search for python nmap and download the 0.6 version.
Extract it using the command: tar -zxvf filename
cd into the newly extracted directory.
Type: python setup.py install
And then
apt-get install nmap
And you are ready to go.
For Windows:
I found this helpful:
choco install nmap
You must run this from an elevated command prompt, preferably PowerShell.
I assume you have already done pip install python-nmap.
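A quick sanity check once both the Nmap binary and python-nmap are installed (using calls I believe python-nmap provides; treat this as a sketch):

import nmap

scanner = nmap.PortScanner()   # raises PortScannerError if nmap is still not on the PATH
print(scanner.nmap_version())  # e.g. a tuple like (7, 80)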