I have a weird error re: imports in Python 2.7 when one script calls a different script which calls a different script - testbench.py imports user.py imports hardware.py
testbench.py runs a hardware testbench
user.py takes user input (Specifically, the serial number of the hardware to test)
hardware.py has some information about valid serial numbers (in the integer highestSerial)
user.py uses the hardware.highestSerial variable
In both cases, the workflow is like this:
> python
>>> import user
>>> help(user)
>>> exit()
> python testbench.py
CASE 1
\\testbench.py
\\user.py
\\hardware.py
\\__init__.py
\\hardware\\__init__.py
\\hardware\\hardwareList.txt
Output 1
No warnings from help(user)
Calling the script outputs: AttributeError: 'module' object has no attribute 'highestSerial'
CASE 2
\\testbench.py
\\user.py
\\hardware.py
\\__init__.py
\\hardware\\hardwareList.txt
Output 2
help(user) outputs: __warningregistry__ = {(Not importing directory 'hardware': missing _...
Calling the script works fine.
Difference between cases
In the first case, there is a hardware folder with __init__.py in it - there are no warnings, but the code breaks (because the attribute I'm looking for isn't in the folder's __init__.py)
In the second case, there is no hardware folder so I get a Not importing directory warning but the code works fine.
Now obviously I could just rename some things but do any of you know what is going on behind the scenes?
EDIT And things go completely crazy when I put hardware.py inside \hardware\ but we'll forget that scenario temporarily
EDIT 2 My thinking has been that I want to make a hardware.py script to access all the things in the \hardware\ folder - serial number lists, hardware types etc., none of which is in Python but rather in .txt files, .csv files etc. Is that an entirely mistaken way to do things?
You have both a hardware package and a hardware module. Don't. Rename one or the other; Python has to pick one of them when resolving the import.
In case 1, the hardware/__init__.py package is found and imported before the hardware.py module, and it appears you left the __init__.py file empty, so trying to access highestSerial raises an AttributeError.
In case 2, the hardware directory is inspected for an __init__.py file first, raising a warning to let you know that that file is missing; this is to prevent a common error made by beginning Python developers who forget to create that file.
Python then does find hardware.py and imports that instead.
You should not use directory names that match module names. Just rename hardware.py (and adjust your imports) or rename the directory.
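You can see which hardware actually won the import by checking the module's __file__; a minimal diagnostic sketch, run from the project root:

import hardware
print(hardware.__file__)
# Case 1: prints ...\hardware\__init__.py -- the package shadowed hardware.py,
# and the empty __init__.py has no highestSerial attribute.
# Case 2: prints ...\hardware.py -- the plain module was imported, so it works.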
I see in several old questions that around the switch from Python 2 to Python 3, implicit relative imports were not allowed. However, I ran into a problem that seems to suggest they do happen and am wondering if something changed or if my understanding is incorrect. Edit: To clarify, I thought the below would not be an implicit relative import, but it seems to act like it.
I made an empty file called queue.py and another file called uses_queue.py which contains:
from multiprocessing import Queue
q = Queue()
which crashes on execution saying ImportError: cannot import name 'Empty' from 'queue'; however, it will not crash if I delete the aforementioned empty file called queue.py. Changing from multiprocessing to from .multiprocessing, or to _multiprocessing or ._multiprocessing, didn't change anything.
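One way to confirm what is happening (a hedged diagnostic sketch; the traceback already hints that something inside multiprocessing does an absolute import of the stdlib queue module and finds the empty file instead):

import queue
print(queue.__file__)
# If this prints the path of your empty queue.py, the script's directory on
# sys.path is shadowing the standard-library queue module -- an absolute
# import being hijacked, not an implicit relative import.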
I've got a Python script.
I've had several functions in this script which I decided to move to a 'package' folder beside the main script.
In this folder, I created a *.py file where I put all my functions.
I've placed an empty __init__.py near this file within the 'package' folder.
When starting the code of my main script with:
from package_folder.my_functions import *
the script works well when calling every function from that file.
But when trying to import it directly:
import package_folder.my_functions
it doesn't seem to work as well as with the above technique.
The cause seems to be the fact that in the file my_functions.py, I have a function that needs another one, declared previously in that file.
I get this obscure error on the function that needs the other one:
TypeError: 'NoneType' object is not callable
Is this permissible, and if not, how should I manage this case?
It's generally not a good idea to use from module import *. Wildcard importing leads to namespace pollution; you import more names than you need, and if you accidentally refer to an imported name you may not get the NameError you wanted.
Also, if a future version of the library adds additional names, you could end up masking other names, leading to strange bugs:
Example
from my_mod1 import func1
from my_mod2 import *
If you upgrade my_mod2 and it now includes a my_mod2.func1, it'll replace the my_mod1.func1 import on the first line.
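A short sketch of the safer pattern, using the hypothetical module and function names from the example above:

import my_mod2                 # keep the module as an explicit namespace
from my_mod1 import func1      # import only the names you actually use

func1()           # always my_mod1's func1, regardless of upgrades
my_mod2.func1()   # explicitly my_mod2's version, if it ever gains one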
I have run into a strange problem, to which I cannot find an answer.
I want to use a file which may be located in different modules, with the same path names (the folders contain empty __init__.py files as well):
road1/pato/
road2/pato/modtest.py
where modtest contains simply a=1
A simple test script, test.py, contains:
import pato.modtest
print(pato.modtest.a)
and running
PYTHONPATH=road2/ python test.py
runs fine, as expected. What is confusing is that
PYTHONPATH=road1/:road2/ python test.py
gives an error
ImportError: No module named 'pato.modtest'
All the documentation I have read states that PYTHONPATH may contain multiple paths and it should be just fine; the running program just looks through them in order. In this case, however, adding a path whose pato/ package is empty in front seems to prevent reading from later paths. If this is expected behaviour, fine; I'd appreciate links to good docs about it.
You have a namespace clash.
According to your PYTHONPATH, when you import "pato.modtest" Python first checks whether "pato" or "pato.modtest" are already present in sys.modules.
As they are not, it then walks sys.path and tries the first entry, which in your case is "road1/".
It finds the package "pato" there, then looks for an attribute "modtest"; not having found one, it looks for a module road1/pato/modtest.py; not having found that either, it gives up without ever falling through to "road2/".
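You can confirm this from the interpreter with a small diagnostic sketch (run with the same PYTHONPATH):

import pato
print(pato.__path__)
# With road1/ first, this points at road1/pato, which has no modtest.py --
# and for regular packages the search never falls through to road2/pato.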
Hello generous SO'ers,
This is a somewhat complicated question, but hopefully relevant to the more general use of global objects from a child-module.
I am using some commercial software that provides a python library for interfacing with their application through TCP. (I can't post the code for their library I don't think.)
I am having an issue with calling an object from a child module, an issue that I think is more generally related to global variables or some such. Basically, the object's state is as expected when the child module is in the same directory as all the other modules (including the module that creates the object).
But when I move the offending child module into a subfolder, it can still access the object but the state appears to have been altered, and the object's connection to the commercial app doesn't work anymore.
Following some advice from this question on global vars, I have organized my module's files as so:
scriptfile.py
pyFIMM/
__init__.py # imports all the other files
__globals.py # creates the connection object used in most other modules
__pyfimm.py # main module functions, such as pyFIMM.connect()
__Waveguide.py # there are many of these files with various classes and functions
(...other files...)
PhotonDesignLib/
__init__.py # imports all files in this folder
pdPythonLib.py # the commercial library
proprietary/
__init__.py # imports all files in this folder
University.py # <-- The offending child-module with issues
pyFIMM/__init__.py imports the sub-files like so:
from __globals import * # import global vars & create FimmWave connection object `fimm`
from __pyfimm import * # import the main module
from __Waveguide import *
(...import the other files...)
from proprietary import * # imports the subfolder containing `University.py`
The __init__.py's in the subfolders "PhotonDesignLib" & "proprietary" both cause all files in the subfolders to be imported, so, for example, scriptfile.py would access my proprietary files as so: import pyFIMM.proprietary.University. This is accomplished via this hint, coded as follows in proprietary/__init__.py:
import os, glob
# Build __all__ from every .py file in this folder, stripping the ".py" suffix.
__all__ = [ os.path.basename(f)[:-3] for f in glob.glob(os.path.dirname(__file__)+"/*.py")]
(Numerous coders from a few different institutions will have their own proprietary code, so we can share the base code but keep our proprietary files/functions to ourselves this way, without having to change any base code/import statements. I now realize that, for the more static PhotonDesignLib folder, this is overkill.)
The file __globals.py creates the object I need to use to communicate with their commercial app, with this code (this is all the code in this file):
import PhotonDesignLib.pdPythonLib as pd # the commercial lib/object
global fimm
fimm = pd.pdApp() # <-- this is the offending global object
All of my sub-modules contain a from __globals import * statement, and are able to access the object fimm without specifically declaring it as a global var, without any issue.
So I run scriptfile.py, which has an import statement like from pyFIMM import *.
Most importantly, scriptfile.py initiates the TCP connection made to the application via fimm.connect() right at the top, before issuing any commands that require the communication, and all the other modules call fimm.Exec(<commands for app>) in various routines, which has been working swimmingly well - the fimm object has so far been accessible to all modules, and keeps its connection state without issue.
The issue I am running into is that the file proprietary/University.py can only successfully use the fimm object when it's placed in the pyFIMM root-level directory (i.e. the same folder as __globals.py etc.). But when University.py is imported from within the proprietary sub-folder, it gives me an "application not initialized" error when I use the fimm object, as if the object had been overwritten or re-initialized or something. The object still exists, it just isn't maintaining its connection state when called by this sub-module. (I've checked that it's not reinitialized in another module.)
If, after the script fails in proprietary/University.py, I use the console to send a command, e.g. pyFIMM.fimm.Exec(<command to app>), it communicates just fine!
I set proprietary/University.py to print a dir(fimm) as a test right at the beginning, which works fine and looks like the fimm object exists as expected, but a subsequent call in the same file to fimm.Exec() indicates that the object's state is not correct, returning the "application not initialized" error.
This almost looks like there are two fimm objects - one that the main python console (and pyFIMM modules) see, which works great, and another that proprietary/University.py sees which doesn't know that we called fimm.connect() already. Again, if I put University.py in the main module folder "pyFIMM" it works fine - the fimm.Exec() calls operate as expected!
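One way to test the two-objects hypothesis: Python caches modules by name in sys.modules, so if the same file is imported once as pyFIMM.__globals and once as a top-level __globals, it is executed twice and each copy builds its own fimm. A hedged diagnostic sketch:

import sys
print([name for name in sys.modules if name.endswith('__globals')])
# Seeing both 'pyFIMM.__globals' and '__globals' here would mean the module
# ran twice, creating two independent `fimm` objects.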
FYI proprietary/University.py imports the __globals.py file as so:
import sys, os, inspect
ScriptDir = inspect.currentframe().f_code.co_filename # get path to this module file
(ParentDir , tail) = os.path.split(ScriptDir) # strip the filename, leaving .../pyFIMM/proprietary
(ParentDir , tail) = os.path.split(ParentDir) # strip one more level, leaving the pyFIMM directory itself
sys.path.append(ParentDir) # add ParentDir to the python search path
from __globals import * # import global vars & FimmWave connection object
global fimm # This line makes no difference, was just trying it.
(FYI, Somewhere on SO it was stated that inspect was better than __file__, hence the code above.)
Why do you think having the sub-module in a sub-folder causes the object to lose its state?
I suspect the issue is either the way I instruct University.py to import __globals.py or the "import all files in this folder" method I used in proprietary/__init__.py. But I have little idea how to fix it!
Thank you for looking at this question, and thanks in advance for your insightful comments.
I'm looking to get the path of a module after os.chdir has been called.
In this example:
import os
os.chdir('/some/location')
import foo # foo is located in the current directory.
os.chdir('/other/location')
# How do I get the path of foo now? ..is this impossible?
..the foo.__file__ variable will be 'foo.py', as will inspect.stack()[0][1] -- yet, there's no way to know where 'foo.py' is located now, right?
What could I use, outside (or inside, without storing it as a variable at import time) of 'foo', which would allow me to discover the location of foo?
I'm attempting to build a definitive method to determine which file a module is executing from. Since I use IPython as a shell, this is something I could actually run into.
Example usage:
I have two versions of a project I'm working on, and I'm comparing their behavior during the process of debugging them. ..let's say they're in the directories 'proj1' and 'proj2'. ..which foo do I have loaded in the IPython interpreter again?
The ideal:
In [242]: from my_tools import loc
In [243]: loc(foo)
'/home/blah/projects/proj2/foo.py'
** As abarnert noted, that is not possible, as Python does not record the base directory location of relative imports. This will, however, work with normal (non-relative) imports.
** Also, regular Python (as opposed to IPython) does not allow imports from the current directory, but rather only from the module directory.
The information isn't available anymore, period. Tracebacks, the debugger, ipython magic, etc. can't get at it. For example:
# foo.py
def baz():
    1/0
$ ipython
In [1]: import foo
In [2]: os.chdir('/tmp')
In [3]: foo.baz()
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-5-a70d319d0d05> in <module>()
----> 1 foo.baz()
/private/tmp/foo.pyc in baz()
ZeroDivisionError: integer division or modulo by zero
So:
the foo.__file__ variable will be 'foo.py', as will inspect.stack()[0][1] -- yet, there's no way to know where 'foo.py' is located now, right?
Right. As you can see, Python treats it as a relative path, and (incorrectly) resolves it according to the current working directory whenever it needs an absolute path.
What could I use, outside (or inside, without storing it as a variable at import time) of 'foo', which would allow me to discover the location of foo?
Nothing. You have to store it somewhere.
The obvious thing to do is to store os.path.abspath(foo.__file__) from outside, or os.path.abspath(__file__) from inside, at import time. Not what you were hoping for, but I can't think of anything better.
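A minimal sketch of that workaround, assuming foo lives in the current directory at import time:

import os
import foo

foo_path = os.path.abspath(foo.__file__)  # resolve while the cwd is still right
os.chdir('/other/location')
print(foo_path)  # still points at foo's original location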
If you want to get tricky, you can build an import hook that modifies modules as they're imported, adding a new __abspath__ attribute or, more simply, changing __file__ to always be an abspath. This is easier with the importlib module in Python 3.1+.
As a quick proof of concept, I slapped together abspathimporter. After doing an import imppath, every further import you do that finds a normal .py file or package will absify its __file__.
I don't know whether it works for .so/.pyd modules, or .pyc modules without source. It definitely doesn't work for modules inside zipfiles, frozen modules, or anything else that doesn't use the stock FileFinder. It won't retroactively affect the paths of anything imported before it. It requires 3.3+, and is horribly fragile (most seriously, the FileFinder class or its hook function has to be the last thing in sys.path_hooks—which it is by default in CPython 3.3.0-3.3.1 on four Mac and linux boxes I tested, but certainly isn't guaranteed).
But it shows what you can do if you want to. And honestly, for playing around in IPython for the past 20 minutes or so, it's kind of handy.
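For illustration, a minimal, untested sketch of the same idea as a meta path finder (this assumes Python 3.4+, where find_spec and module specs exist, so it is a variation on the approach rather than the abspathimporter code itself):

import importlib.abc
import importlib.machinery
import os
import sys

class AbsPathFinder(importlib.abc.MetaPathFinder):
    def find_spec(self, name, path, target=None):
        # Delegate to the stock path-based finder, then absolutize the origin,
        # which becomes the new module's __file__.
        spec = importlib.machinery.PathFinder.find_spec(name, path)
        if spec is not None and spec.origin and not os.path.isabs(spec.origin):
            spec.origin = os.path.abspath(spec.origin)
        return spec

sys.meta_path.insert(0, AbsPathFinder())  # affects imports from here on

It has the same caveats: nothing imported earlier is fixed, and anything that bypasses the stock path finder is untouched.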
import os
import foo  # foo is found in the current directory
foodir = os.getcwd()  # record the directory foo was imported from
os.chdir('/other/location')
foodir now has the original directory stored in it...