It's considered bad Python to use imports like this:
import my_module
When you are doing a relative import and this would work:
from . import my_module
Is there a tool that can detect these non-dotted relative imports in my code and warn me so I could update them to the dotted syntax? My project has hundreds of Python modules and I would like to do this automatically. (Possibly such a tool would override __import__ and detect the bad imports as they happen when I run the program.)
Does anyone know of such tool?
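For what it's worth, a rough sketch of the __import__-override idea from the question could look like this on Python 2 (all names here are made up for illustration, and it only catches the simple case of a dotless import shadowing a sibling .py file):
import __builtin__
import os
import warnings
_real_import = __builtin__.__import__
def _warning_import(name, globals=None, locals=None, fromlist=None, level=-1):
    # level == -1 is Python 2's "relative first, then absolute" lookup
    if globals is not None and level == -1 and '.' not in name:
        importer = globals.get('__name__', '')
        pkg_dir = os.path.dirname(globals.get('__file__', ''))
        if '.' in importer and os.path.exists(os.path.join(pkg_dir, name + '.py')):
            warnings.warn('implicit relative import of %r inside %s' % (name, importer))
    return _real_import(name, globals, locals, fromlist, level)
__builtin__.__import__ = _warning_import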
[Reposted as an answer because it apparently did the trick]
2to3 will automatically convert them, because the dotted form is compulsory in Python 3.
Here's the relevant source code if you want to modify it for your purposes.
Alternatively, you could just run 2to3 with only that fixer: 2to3 -w -f import myproject/
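For illustration, assuming a package mypackage/ that contains both the module being fixed and a sibling util.py, the fixer rewrites the import roughly like this:
# before (implicit relative import, Python 2 only)
import util
# after running: 2to3 -w -f import mypackage/
from . import util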
pylint gives warnings about relative imports, along with tons of other stuff that's considered, for one reason or another, "bad Python".
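If you only care about this one check, a Python-2-era pylint can be narrowed down to it, roughly like so (W0403 is the relative-import message id in those versions; check your pylint's documentation to be sure):
pylint --disable=all --enable=W0403 mypackage/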
I have a specific problem which might require a general solution. I am currently learning Apache Thrift. I used this guide. I followed all the steps and I am getting an import error: Cannot import module UserManager. So the question is: how does Python's import lookup take place? Which directory is checked first? How does it move upwards?
How does sys.path.append('') work?
I found an answer for this here and followed the same steps, but I am still facing the same issue. Any ideas why? Is there anything more I should post that could help you debug this?
Help is appreciated.
On Windows, Python looks up modules from the Lib folder in the default Python path, for example "C:\Python34\Lib\". You can add your Python libraries in a custom folder ("mylib" or similar) in there, but you need a file to tell Python that it can import from there. This file is called __init__.py, and it can be completely empty. The directory structure should look like this:
mylib
    __init__.py
    myfolder
        __init__.py
        mymodule.py
(This is how every Python package works. For example urllib.request lives at "C:\Python34\Lib\urllib\request.py".)
You can import from the "mymodule.py" file by typing
import mylib.myfolder.mymodule
and then using
mylib.myfolder.mymodule.myfunction()
or you can use
from mylib.myfolder import mymodule
and then just use mymodule.myfunction().
You can now use sys.path.append to add a path to the list of folders Python searches for modules (note that this is not permanent; it only lasts for the current interpreter run). If the path of your modules should be static, you should consider putting them in the Lib folder. If that path is relative to your file, you can work out the path of the file you are executing and append to sys.path relative to it, but I recommend using relative imports instead.
If you consider doing that, I recommend reading the docs: https://docs.python.org/3/reference/import.html#submodules
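For example, a minimal sketch of the "relative to your file" approach, assuming the mylib folder from above sits next to the script doing the importing:
import os
import sys
# the directory containing this file; appending it is not permanent,
# it only lasts for the current interpreter run
here = os.path.dirname(os.path.abspath(__file__))
if here not in sys.path:
    sys.path.append(here)
import mylib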
If I got you right, you're using Python 3.3 from Blender but trying to include the 3.2 standard library. This is bound to give you a flurry of issues; you should not do that. Find another way. It's likely that Blender offers a way to use the 3.3 standard library (and that's 99% compatible with 3.2). Pure-Python third-party libraries can, of course, be included by fiddling with sys.path.
The specific issue you're seeing now is likely caused by the version difference. As people have pointed out in the comments, Python 3.3 doesn't find the _tkinter extension module. Although it is present (as it works from Python 3.2), it is most likely in a .so file with an ABI tag that is incompatible with Blender's Python 3.3, hence it won't even look at it (much like a module.txt is not considered for import module). This is a good thing. Extension modules are highly version-specific, slight ABI mismatches (such as between 3.2 and 3.3, or two 3.3 compiled with different options) can cause pretty much any kind of error, from crashes to memory leaks to silent data corruption or even something completely different.
You can verify whether this is the case via import _tkinter; print(_tkinter.__file__) in the 3.2 shell. Alternatively, _tkinter may live in a different directory entirely. Adding that directory to the path won't actually fix the real issue outlined above.
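For example, in the 3.2 shell that does work:
import _tkinter
print(_tkinter.__file__)   # e.g. .../lib-dynload/_tkinter.cpython-32mu.so; "32mu" is the ABI tag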
For any new readers coming along that are still having issues, try the following. This is cleaner than using sys.path.append, provided your app directory is structured so that the .py files containing the functions you import sit below the script that imports them. Let me illustrate.
Script that imports files: main.py
Function files named like: func1.py
main.py
/functionfolder
    __init__.py
    func1.py
    func2.py
The import code in your main.py file should look as follows:
from functionfolder import func1
from functionfolder import func2
As Agilix correctly stated, you must have an __init__.py file in your "functionfolder" (see directory illustration above).
In addition, this solved my issue with Pylance not resolving the import, and showing me a nagging error constantly. After a rabbit-hole of sifting through GitHub issues, and trying too many comparatively complicated proposed solutions, this ever-so-simple solution worked for me.
You may try declaring sys.path.append('/path/to/lib/python') before any import statements.
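For example (the path is just a placeholder):
import sys
sys.path.append('/path/to/lib/python')   # must run before the imports that need it
import mymodule   # hypothetical module living under /path/to/lib/python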
I just created an __init__.py file inside my new folder, so the directory is initialised, and it worked (:
I recently upgraded from python2.7 to python3 and think it may have screwed up some configurations. Now when I try to run a module, I get import errors. Let's say I have a directory structure like this:
/directory
/directory/__init__.py
/directory/run.py
/directory/app/db.py
/directory/app/views.py
/directory/app/__init__.py
with the following imports...
/directory/run.py says 'import app'
/directory/app/db.py says 'import views'
When I execute run.py, I get an error saying the module views cannot be found. However, if I go into /directory/app and execute db.py, then the import runs correctly. I've also found that if I change the /directory/app/db.py to say "from app import views" then it works correctly when executing run.py. However, this used to all work!
It seems like the import statements are not taking into account the folder it's being executed in. It seems like this wants me to base all of my imports out of the root folder, which seems incorrect and would take me time to change everything.
Any ideas as to what happened? This has been driving me crazy.
In Python 3, implicit relative imports have been removed; all imports need to be either absolute or use explicit relative imports.
This is not going to change: you need to replace them with either from app import views or from . import views.
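Concretely, in /directory/app/db.py that means one of the following:
# absolute import: works because /directory is on sys.path when you run run.py
from app import views
# or an explicit relative import: works because db.py lives inside the "app" package
from . import views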
Python 2.x and Python 3.x differ in so many ways it is usually extremely helpful to use 2to3 or another similar tool to "port" (convert) the code.
The issue you are running into likely has to do with the fact that Python 2 allows implicit relative imports while Python 3 requires absolute (or explicitly relative) imports. It is possible to change the import statements to make the imports work, but given the torrent of compatibility issues that will surely follow, I highly recommend using 2to3 and then making any final adjustments manually.
Good luck!
Specs: Python 2.7
I'm working on a project that has several modules, and I want to activate some features from the __future__ module in all of them. I would like to import all the features I need in one module, then import that single module into every other one, and have those features be active in all of them, or something to that effect.
I tried:
[A.py]
from __future__ import division
[B.py]
import A
print(1/2)
Running B.py, the division was still integer division. I tried:
[A.py]
print(1/2)
[B.py]
from __future__ import division
import A
Running B.py gave the same result. With both previous examples I also tried switching 'import A' by 'from A import *' with the same results.
I searched Google for a while, and found the best description about how the __future__ module works, obviously enough, on the Python documentation. There I could only find the assurance the features would be active in the module they were imported to, without any mention of how to do it globally.
So I'd like to know if there is a way of doing this, either the way I described, or creating some sort of runtime configuration file, or through some other means.
There's no way to do this in-language; you really can't make __future__ imports global in this sense. (Well, you probably can replace the normal import statements with something complicated around imp or something. See the Future statement documentation and scroll down to "Code compiled by…" But anything like this is almost certainly a bad idea.)
The reason is that from __future__ import division isn't really a normal import. Or, rather, it's more than a normal import. You actually do get a name called division that you can inspect, but just having that value has no effect—so passing it to other modules doesn't affect those modules. On top of the normal import, Python has special magic that detects __future__ imports at the top of a module, or in the interactive interpreter, and changes the way your code is compiled. See future for the "real import" part, and Future statements for the "magic" part, if you want all the details.
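A quick way to see this for yourself, using the A/B modules from the question (Python 2):
# B.py
import A
print(A.division)   # you really do get a value: a __future__._Feature instance
print(1 / 2)        # but this still prints 0; the compile-time flag doesn't propagate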
And there's no configuration file that lets you do this. But there is a command-line parameter:
python -Qnew main.py
This has the same effect as doing a from __future__ import division everywhere.
You can add this to the #! lines, or alias pyfuturediv='python -Qnew' (or even alias python='python -Qnew') in your shell, or whatever, which may be as good as a configuration file for your purposes.
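A quick sanity check at the shell (Python 2, output shown as comments):
python -c "print 1/2"          # 0, classic integer division
python -Qnew -c "print 1/2"    # 0.5, true division everywhere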
But really, if you want to make sure module B gets new-style division, you probably should have the __future__ declaration in B in the first place.
Or, of course, you could just write for Python 3.0+ instead of 2.3-2.7. (Note that some of the core devs were against having the command-line option at all, because "the right way to get feature X globally is to use a version of Python >= feature X's MandatoryRelease".) Or just use // whenever you actually mean floor division.
Another possibility is to use six, a module designed to let you write code that's almost Python 3.3 and have it work properly in 2.4-2.7 (and 3.0-3.2). For example, you don't get a print function, but you do get a print_ function that works exactly the same. You don't get Unicode literals, but you get u() fake literals—which, together with a UTF-8 encoding declaration in the source, is almost good enough. And it provides a whole lot of stuff that you can't get from __future__ as well—StringIO and BytesIO, exec as a function, the next function, etc.
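A small illustration, assuming six is installed:
from six import print_, StringIO
buf = StringIO()
print_("hello", "world", file=buf)   # print-as-a-function on both 2.x and 3.x
print_(buf.getvalue())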
If the problem is that you have 1000 source files, and it's a pain to edit them all, you could use sed, or use 3to2 with just the option that fixes division, or…
Another approach would be using isort. isort has a -a command line flag to add imports to files that you specify. Simply running isort without arguments will run it recursively on all python files in the current working directory and all subdirectories.
If, like me, you have a virtual environment inside that folder, and are using git (or have an equivalent way of listing only your files) and don't want to run it on all files inside that virtual environment, you can use something like:
git ls-tree -r HEAD --name-only | grep "\.py$" | xargs isort -a -y "from __future__ import division"
Where should a sys.path.append statement go, just after the standard Python module imports?
If I postpone it to the main function and import my own modules before it, I get an error (which is quite obvious). The Python style guide nowhere mentions the correct location for it.
It should go before the import or from statements that need it (which as you say is obvious). So for example a module could start with:
import sys
import os
import math
try:
    import foo
except ImportError:
    if 'foopath' in sys.path: raise
    sys.path.append('foopath')
    import foo
Note that I've made the append conditional (on the import failing and the specific module's path not being on sys.path yet) to avoid the risk of sys.path ending up with dozens of occurrences of string foopath, which would not be particularly helpful;-).
One reason this isn't mentioned in PEP 8 or other good Python style guides is that modifying sys.path isn't something you want to do in a real program; it makes your program less robust and portable. A better solution might be to put your package somewhere that would already be in sys.path or to define PYTHONPATH systemwide to include your package.
I often use a shell script to launch my python applications: I put my sys.path.insert (append) statement just after the "standard" python module imports in my "launch python script".
Using sys.path.insert(0, ...) puts your path at the front of the list, so it takes priority over the other entries.
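For example, in such a launch script (the "lib" folder name is just an example):
import os
import sys
# put the application's own lib directory first so it wins over
# any same-named module elsewhere on the path
here = os.path.dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(here, "lib"))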
I generally do it before importing anything. If you're worried that your module names might conflict with the Python stdlib names, then change your module names!
I think it's a matter of taste. But most people tend to put it right after import sys :-)
I prefer wrapping it in an extra function:
import sys
def importmod_abs(name, path):
    sys.path.append(path)      # temporarily extend the module search path
    try:
        return __import__(name)
    finally:
        sys.path.pop()         # ...and put sys.path back the way it was
... this way sys.path remains clean. Of course that's only applicable to certain module structures. Anyways, I'd import everything that works without altering sys.path first.
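Used roughly like this (module name and path are placeholders):
foo = importmod_abs('foo', '/opt/extra/modules')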
In a Python system for which I develop, we usually have this module structure.
mymodule/
mymodule/mymodule/feature.py
mymodule/test/feature.py
This allows our little testing framework to easily import test/feature.py and run unit tests. However, we now have the need for some shell scripts (which are written in Python):
mymodule/
mymodule/scripts/yetanotherfeature.py
mymodule/test/yetanotherfeature.py
yetanotherfeature.py is installed by the module Debian package into /usr/bin. But we obviously don't want the .py extension there. So, in order for the test framework to still be able to import the module I have to do this symbolic link thingie:
mymodule/
mymodule/scripts/yetanotherfeature
mymodule/scripts/yetanotherfeature.py # -> mymodule/scripts/yetanotherfeature
mymodule/test/yetanotherfeature.py
Is it possible to import a module by filename in Python, or can you think of a more elegant solution for this?
The imp module is used for this:
daniel@purplehaze:/tmp/test$ cat mymodule
print "woho!"
daniel@purplehaze:/tmp/test$ cat test.py
import imp
imp.load_source("apanapansson", "mymodule")
daniel@purplehaze:/tmp/test$ python test.py
woho!
daniel@purplehaze:/tmp/test$
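In the test framework you would typically bind the return value, along these lines (the path and the attribute name are illustrative):
import imp
yetanotherfeature = imp.load_source("yetanotherfeature", "/usr/bin/yetanotherfeature")
yetanotherfeature.main()   # call whatever the script actually exposes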
You could most likely use some trickery with import hooks, though I wouldn't recommend it. On the other hand, I would probably do it the other way around: have your .py scripts somewhere, and make extensionless symbolic links pointing to the .py files. That way your library could be anywhere, the test can import it normally (since it has the .py extension), and /usr/bin/yetanotherfeature points to it, so you can run it without the .py.
Edit: Nevermind this (at least the hooks part), the import imp solution looks very good to me :)
Check out the imp module:
http://docs.python.org/library/imp.html
This will allow you to load a module by filename. But I think your symbolic link is a more elegant solution.
Another option would be to use setuptools:
"...there’s no easy way to have a script’s filename match local conventions on both Windows and POSIX platforms. For another, you often have to create a separate file just for the “main” script, when your actual “main” is a function in a module somewhere... setuptools fixes all of these problems by automatically generating scripts for you with the correct extension, and on Windows it will even create an .exe file..."
https://pythonhosted.org/setuptools/setuptools.html#automatic-script-creation
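As a rough sketch matching the layout from the question (the main function name is hypothetical), the console_scripts entry point mechanism looks like this:
from setuptools import setup, find_packages
setup(
    name="mymodule",
    packages=find_packages(),
    entry_points={
        "console_scripts": [
            # generates a /usr/bin/yetanotherfeature wrapper that calls main()
            "yetanotherfeature = mymodule.scripts.yetanotherfeature:main",
        ],
    },
)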