I'm using macOS Sierra. When I import builtwith I get the following error:
Daniels-MacBook-Pro:~ Daniel$ python
Python 3.5.2 |Anaconda 4.2.0 (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import builtwith
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/danielotero/anaconda3/lib/python3.5/site-packages/builtwith/__init__.py", line 43
except Exception, e:
^
SyntaxError: invalid syntax
What can I do to import it correctly?
This is because the builtwith package you installed was written for Python 2, not Python 3. It uses the Python 2 forms of print and except, and it relies on the urllib2 library, which was split into urllib.request and urllib.error in Python 3.
It's better to use Python 2 (Python 2.7) for this work; otherwise you have to modify the source code of builtwith: change every print statement into a print() function call, change except Exception, e into except Exception as e, and replace the urllib2 calls with their urllib.request and urllib.error equivalents.
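To illustrate the three kinds of edits (a sketch of the changes involved, not a full port of builtwith):

```python
# Python 2:
#     except Exception, e:
#         print e
# Python 3 (the "as" form also works in Python 2.6+):
try:
    raise ValueError("demo")
except Exception as e:
    print(e)  # print is a function in Python 3

# Python 2's urllib2 is split across two Python 3 modules:
from urllib.request import urlopen, Request   # was urllib2.urlopen / urllib2.Request
from urllib.error import URLError, HTTPError  # was urllib2.URLError / urllib2.HTTPError
```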
According to the module's issue tracker, it is not compatible with Python 3. The project owner says
This module was built with Python 2 in mind. Patches are welcome to also support Python 3, however would need to maintain backwards compatibility.
Since they don't appear to want to port it to Python 3 in order to remain backwards compatible, you should either use Python 2, look for another library, or attempt to port it yourself.
Related
I'm running Python 3.5.6 on a distribution where TLS versions below 1.2 have been compiled out of OpenSSL by passing these options to ./configure: no-ssl no-tls1 no-tls1_1 no-ssl3-method no-tls1-method no-tls1_1-method. The OpenSSL version is 1.1.1d. Python 3 is built from source at distro build time and linked against the version of OpenSSL included in the distro.
Everything builds successfully, but when I try to import the ssl library in Python, I get the following error:
$ python3
Python 3.5.6 (default, Mar 23 2020, 05:11:33)
[GCC 8.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import ssl
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.5/ssl.py", line 99, in <module>
import _ssl # if we can't import it, let the error propagate
ImportError: /usr/lib/python3.5/lib-dynload/_ssl.cpython-35m-aarch64-linux-gnu.so: undefined symbol: TLSv1_method
I don't understand why this error occurs at runtime. The only reference I can find in the Python 3.5.6 code to TLSv1_method is line 3088 of _ssl.c:
ctx = SSL_CTX_new(TLSv1_method());
Using no-tls1-method does compile out the implementation of TLSv1_method, and that line in the Python code is not guarded by any #ifdef. But I'd expect that to cause a failure at link time for the _ssl.cpython-35m-aarch64-linux-gnu.so module, not at runtime when Python tries to import the module. What's going on here, and is there a way to fix it without patching Python? I cannot upgrade the version of OpenSSL or Python in use.
It seems that my confusion resulted from misunderstanding how _ssl.cpython-35m-aarch64-linux-gnu.so links to OpenSSL. I assumed that it was statically linked, but it's actually dynamically linked to libssl.so. This is why the error occurs at runtime when the shared object is loaded.
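To see that lazy, run-time symbol resolution directly, you can ask the shared libssl for the symbol yourself with ctypes (an illustrative check I'm adding, not part of the original report):

```python
import ctypes
import ctypes.util

# Find the shared libssl the dynamic linker would load (name differs per platform)
libssl_name = ctypes.util.find_library("ssl")
if libssl_name is None:
    print("no shared libssl found")
else:
    libssl = ctypes.CDLL(libssl_name)
    try:
        libssl.TLSv1_method  # resolved at lookup time, like _ssl.so's reference
        print("TLSv1_method is exported by", libssl_name)
    except AttributeError:
        # The symbol was compiled out (e.g. OpenSSL built with no-tls1-method),
        # which is exactly why "import ssl" dies with an undefined symbol error
        print("TLSv1_method is missing from", libssl_name)
```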
It seems, then, that the only way to fix this without updating Python is to patch the call to TLSv1_method to use the generic TLS_method instead. I'll leave this question open for a few days though in case anyone has a better suggestion.
Edit: I filed a Python bug for this issue. It should be fixed in a future release.
I am trying to set up an ML model using fastai and have to do the following imports:
import fastai.models
import fastai.nlp
import fastai.dataset
However, the fastai imports give me the following error.
Python 2.7.15rc1 (default, Apr 15 2018, 21:51:34)
[GCC 7.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import fastai.nlp
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/fastai/nlp.py", line 172
if os.path.isdir(path): paths=glob(f'{path}/*.*')
^
SyntaxError: invalid syntax
Apparently, the character f in glob(f'{path}/*.*') is causing the error. I fixed that occurrence by removing the f, but there seem to be many more of these throughout the fastai library.
My current thought is that I am using an incorrect Python version. Could anyone give me some pointers?
Strings of the form:
f'{path}/*.*'
are called f-strings and were introduced in Python 3.6.
That's why you get the SyntaxError: in versions below Python 3.6 this syntax simply doesn't exist, so the parser rejects it.
So fastai is clearly written for Python 3.6 or higher.
When you take a look at the installation issues (you have to scroll down a bit),
you can see under Is My System Supported? the first point:
Python: You need to have python 3.6 or higher
So I'm afraid updating your python is the easiest way to solve the problem!
If you'd like to learn more about f-strings, you can take a look here: https://www.python.org/dev/peps/pep-0498/
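If updating Python is not an option, f-strings can be rewritten with str.format(), which works in Python 2.7 as well. A sketch based on the line quoted from fastai/nlp.py (the path value here is just a placeholder):

```python
from glob import glob

path = "/tmp"  # placeholder; in fastai this would be the dataset directory

# Python 3.6+ only:
#   paths = glob(f'{path}/*.*')

# Equivalent that runs on Python 2.7 and any Python 3:
paths = glob('{}/*.*'.format(path))

# Or with %-formatting:
paths = glob('%s/*.*' % path)
```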
Help! I am getting this error again and again in Light Table while I am trying to run Python code:
File "C:\Python34\Lib\site.py", line 176
file=sys.stderr)
^
SyntaxError: invalid syntax
This is code that comes with the installation.
I have no idea about the Light Table part, but the error you show is the one that you'd get if you were to somehow try to execute a Python 3 print function call under Python 2 (where print is a statement with a quirky syntax rather than a function). Lines 175-176 of site.py in the Python 3.4 distribution look like this (modulo leading indentation):
print("Error processing line {:d} of {}:\n".format(n+1, fullname),
file=sys.stderr)
and sure enough, if you try to execute that in a Python 2 interpreter you'll get a SyntaxError, with the cursor pointing to that same = sign:
Python 2.7.8 (default, Jul 3 2014, 06:13:58)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> print("Error processing line {:d} of {}:\n".format(n+1, fullname), file=sys.stderr)
File "<stdin>", line 1
print("Error processing line {:d} of {}:\n".format(n+1, fullname), file=sys.stderr)
^
SyntaxError: invalid syntax
I'd suggest looking closely at the settings for the Light Table Python plugin to see if anything's awry. You should also check the setting for your PYTHONPATH environment variable. If it includes a reference to the C:\Python34 directory and you're running Python 2, that could be the cause of the problem. Here's an example of the exact same problem on OS X, caused by starting Python 2 with a PYTHONPATH that refers to Python 3's library directory:
noether:~ mdickinson$ export PYTHONPATH=/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/
noether:~ mdickinson$ python2.7
File "/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site.py", line 176
file=sys.stderr)
^
SyntaxError: invalid syntax
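A quick way to confirm which interpreter an editor or shell is actually launching (a generic check, not specific to Light Table) is:

```python
import sys

print(sys.executable)        # full path of the interpreter actually running
print(sys.version_info[:2])  # e.g. (2, 7) vs (3, 4)
print(sys.path[:3])          # first few module search paths; a stray PYTHONPATH
                             # entry pointing at another Python's lib shows up here
```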
I've recently given up on MacPorts and moved to Homebrew. I'm trying to be able to import numpy and scipy. I seem to have installed everything correctly, but when I type python in the terminal, it seems to run the default Mac Python.
I'm on OSX 10.8.4
I followed this post: python homebrew by default
and tried to move the Homebrew directory to the front of my $PATH by entering
export PATH=/usr/local/bin:/usr/local/sbin:~/bin:$PATH
then "echo $PATH" returns
/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/opt/local/bin:/opt/local/sbin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:/opt/X11/bin
however when I look for where my python is by "which python", I get
/usr/bin/python
For some reason, when I import numpy in the interpreter it works, but not scipy.
Python 2.7.2 (default, Oct 11 2012, 20:14:37)
[GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> import scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named scipy
>>>
What do I need to do to get python to run as my homebrew-installed python? Should this fix my problem and allow me to import scipy?
Homebrew puts things in a /usr/local/Cellar/<appname> directory, if I'm not mistaken. You should find the Python bin directory in there and put it in your PATH ahead of /usr/bin.
For example, on my 10.8 system, python is located in /usr/local/Cellar/python/2.7.5/bin, and I put that directory before /usr/bin in my PATH variable.
I do the same whenever I want the Homebrew version of an app; sqlite is another example.
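Once the PATH is sorted out, a quick way to see which installation a module is imported from is its __file__ attribute (shown with a standard-library module here; the same check works for numpy and scipy):

```python
import sys
import json  # stand-in module; try numpy or scipy the same way

print(sys.executable)  # should now point at the Homebrew python under /usr/local
print(json.__file__)   # the directory reveals which installation supplied the module
```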
After a very thorough read of the Python's decimal module documentation, I still find myself puzzled by what happens when I divide a decimal.
In Python 2.4.6 (makes sense):
>>> import decimal
>>> decimal.Decimal(1000) / 10
Decimal("100")
In Python 2.5.6, Python 2.6.7, and Python 2.7.2 (puzzling):
>>> import decimal
>>> decimal.Decimal(1000) / 10
Decimal('0.00000-6930898827444486144')
More confusing yet, that result doesn't even appear to be valid:
>>> decimal.Decimal('0.00000-6930898827444486144')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/decimal.py", line 548, in __new__
"Invalid literal for Decimal: %r" % value)
File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/decimal.py", line 3844, in _raise_error
raise error(explanation)
decimal.InvalidOperation: Invalid literal for Decimal: '0.00000-6930898827444486144'
The result is the same using decimal.Decimal(1000) / decimal.Decimal(10), so it's not an issue with using an int as the divisor.
Part of the issue is clearly around precision:
>>> decimal.Decimal("1000.000") / decimal.Decimal("10.000")
Decimal('0.00000-6930898827444486144')
>>> decimal.Decimal("1000.000") / decimal.Decimal("10")
Decimal('0.000200376420520689664')
But there should be ample precision in decimal.Decimal("1000.000") to divide safely by 10 and get an answer that's at least in the right ballpark.
The fact that this behavior is unchanged through three major revisions of Python says to me that it is not a bug.
What am I doing wrong? What am I missing?
How can I divide a decimal (short of using Python 2.4)?
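For comparison, on an unaffected CPython build the same divisions come out as documented for the decimal module (shown as a sanity check, not a quote from the original thread):

```python
from decimal import Decimal, getcontext

print(getcontext().prec)                    # default context: 28 significant digits
print(Decimal(1000) / 10)                   # Decimal('100')
print(Decimal("1000.000") / Decimal("10"))  # numerically 100 (trailing zeros may vary)
```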
From your MacPorts bug, you have installed Xcode 4, and your Python 2.7.2 was built with the clang C compiler rather than gcc-4.2. There is at least one known problem with building with clang on OS X that has been fixed in Python subsequent to the 2.7.2 release. Either apply the patch or, better, ensure the build uses gcc-4.2. Something like this (untested!):
sudo bash
export CC=/usr/bin/gcc-4.2
port clean python27
port upgrade --force python27
prior to the build might work if MacPorts doesn't override it.
UPDATE: The required patch has now been applied to the MacPorts port files for Python 2. See https://trac.macports.org/changeset/87442
Just for the record:
For Python 2.7.3 compiled with clang (via Homebrew on OS X), this seems to be fixed.
Python 2.7.3 (default, Oct 10 2012, 13:00:00)
[GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import decimal
>>> decimal.Decimal(1000) / 10
Decimal('100')