Do unused imports in Python hamper performance? - python

Is there any effect of unused imports in a Python script?

You pollute your namespace with names that could shadow your own variables, and the unused modules still occupy some memory.
You also get a slightly longer startup time, since the program has to load each imported module anyway.
In any case, I would not get too obsessive about this: while you are still writing code, you could end up adding and deleting import os over and over as the code changes. Some IDEs, such as PyCharm, detect unused imports, so you can rely on them once your code is finished or nearly complete.
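If you want to see the (small) cost for yourself, a rough sketch along these lines works; the file name and the choice of json as the "unused" module are just examples:
# check_imports.py -- minimal sketch; 'json' stands in for any unused import
import sys
import json   # imported but never used below

print('json' in sys.modules)   # True: the module was loaded and cached anyway
print('json' in dir())         # True: the name also sits in this module's namespace

# startup cost per import can be inspected with:  python -X importtime check_imports.py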

"Unused" might be a bit harder to define than you think, for example this code in test.py:
import sys
import unused_stuff
sys.exit(0)
unused_stuff seems to be unused, but if it were to contain:
import __main__
def f(x): print("Oh no you don't")
__main__.sys.exit = f
Then running test.py doesn't do what you'd expect from a casual glance.

Related

Passing imports to a script from inside an external method

I have kind of a tricky question, which is difficult even to describe.
Suppose I have this script, which we will call master:
#in master.py
import slave as slv

def import_func():
    import time

slv.method(import_func)
I want to make sure method in slave.py, which looks like this:
#in slave.py
def method(import_func):
    import_func()
    time.sleep(10)
actually runs as if I had imported the time package. Currently it does not work; I believe this is because the import exists only in the scope of import_func().
Keep in mind that the rules of the game are:
I cannot import anything in slave.py outside method
I need to pass the imports which method needs through import_func() in master.py
the procedure must work for a variable number of imports inside method. In other words, method cannot know how many imports it will receive but needs to work nonetheless.
the procedure needs to work for any import possible. So options like pyforest are not suitable.
I know it can theoretically be done through importlib, but I would prefer a more straightforward idea, because if we have a lot of imports with different 'as' labels it would become extremely tedious and convoluted with importlib.
I know it is kind of a quirky question but I'd really like to know if it is possible. Thanks
What you can do is this in the master file:
#in master.py
import slave as slv

def import_func():
    import time
    return time

slv.method(import_func)
Now use the returned time value in the slave file:
#in slave.py
def method(import_func):
    time = import_func()
    time.sleep(10)
Why do you have to do this? Because of scoping. When import_func() is called inside slave.py, the import binds the name time only in import_func's local namespace. When the function returns, that local binding is gone: the module itself stays cached in sys.modules, but method has no name that refers to it.
By returning time from import_func(), you hand the module object back to the caller, so method can bind its own name to it and keep using it.
Now, how do you import more modules? Simple: return a list with multiple modules inside, or perhaps a dictionary for easy access by name. That's one way of doing it.
[Edit] Using a dictionary and importlib to pass multiple imports to slave.py:
master.py:
import slave as slv
import importlib

def master_import(packname, imports):
    imports[packname] = importlib.import_module(packname)

def import_func():
    imports = {}
    master_import('time', imports)
    return imports

slv.method(import_func)
slave.py:
#in slave.py
def method(import_func):
    imports = import_func()
    imports['time'].sleep(10)
This way, you can import any modules you want on the master.py side using the master_import() function, and pass them to the slave script.
Check this answer on how to use importlib.

Python Importing modules in a package

I currently have a module I created that has a number of functions.
It's getting quite large so I figured I should make it into a package and split the functions up to make it more manageable.
I'm just testing out how this all works before I do this for real so apologies if it seems a bit tenuous.
I've created a folder called pack_test and in it I have:
__init__.py
foo.py
bar.py
__init__.py contains:
__all__ = ['foo', 'bar']
from . import *
import subprocess
from os import environ
In the console I can write import pack_test as pt and this is fine, no errors.
Typing pt. followed by two tabs shows me that I can see pt.bar, pt.environ, pt.foo and pt.subprocess in there.
All good so far.
If I want to reference subprocess or environ in foo.py or bar.py how do I do it in there?
If in bar.py I have a function which just does return subprocess.call('ls') it errors saying NameError: name 'subprocess' is not defined. There must be something I'm missing which enables me to reference subprocess from the level above? Presumably, once I can get the syntax from that I can also just call environ in a similar way?
The alternative as I could see it would be to have import subprocess in both foo.py and bar.py but then this seems a bit odd to me to have it appear across multiple files when I could have it the once at a higher level, particularly if I went on to have a large number of files rather than just 2 in this example.
TL;DR:
__init__.py :
from . import foo
from . import bar
__all__ = ["foo", "bar"]
foo.py:
import subprocess
from os import environ
# your code here
bar.py
import subprocess
from os import environ
# your code here
There must be something I'm missing which enables me to reference subprocess from the level above?
Nope, this is the expected behaviour.
import loads a module (if it isn't loaded already), caches it in sys.modules (ditto), and binds the imported names in the current namespace. Each Python module has (or "is") its own namespace (there's no real "global" namespace). In other words, you have to import what you need in each module; if foo.py needs subprocess, it must explicitly import it.
This can seem a bit tedious at first, but in the long run it really helps maintainability: you just have to read the imports at the top of a module (PEP 8: always put all imports at the beginning of the module) to know where a name comes from.
Also, you should not use star imports (a.k.a. wildcard imports, from xxx import *) anywhere other than your Python shell (and even then...): they are a maintenance time bomb. Not only do you not know where each name comes from, it's also a sure way to silently rebind an already imported name. Imagine that your foo module defines a function func. Somewhere you have from foo import *; from bar import *, then later in the code a call to func. Now someone edits bar.py and adds a (distinct) func function, and suddenly your call fails, because you're no longer calling the func you expected. Now enjoy debugging this... and real-life examples are usually a bit more complex.
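A minimal sketch of that failure mode; the module and function names here are made up for illustration:
# foo.py (hypothetical)
def func():
    return "foo"

# bar.py (hypothetical) -- someone later adds a distinct func here
def func():
    return "bar"

# main.py
from foo import *
from bar import *      # silently rebinds func to bar's version
print(func())          # prints "bar", not the foo.func you meant to call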
So if you value your mental sanity, don't be lazy and don't try to be clever; just do the simple, obvious thing: explicitly import the names you're interested in at the top of your modules.
(been there, done that, etc.)
You could create modules.py containing
import subprocess
import os
Then in foo.py, or any of your files, just have:
from modules import *
The import statements in your files then stay static; just update modules.py when you want to make an additional module accessible to them all.

import module_name Vs __import__('module_name')

I am writing a python module and I am using many imports of other different modules.
I am a bit confused about whether I should import all the necessary modules at the top of the file or only when they are needed.
I also wanted to know the implications of both.
I come from a C++ background, so I am really thrilled with this feature and do not see any reason not to use __import__(), importing modules only when needed inside my functions.
Kindly throw some light on this.
To write less code, import a module at the first lines of the script, e.g.:
#File1.py
import os
#use os somewhere:
os.chdir(some_dir)
...
...
#use os somewhere else, you don't need to "import os" everywhere
os.environ.update(some_dict)
While sometimes you may need to import a module locally (e.g., in a function):
abc = 3

def foo():
    from some_module import abc  # importing inside foo avoids naming conflicts
    abc(...)  # calls the imported abc; nothing to do with the variable "abc" outside foo
Don't worry too much about the cost of calling foo() multiple times: an import statement loads a module only once. Once a module is imported, the module object is stored in the sys.modules dictionary, which acts as a cache, so running the same import statement again is just a lookup.
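A quick way to see that caching for yourself; the choice of json here is arbitrary:
import sys
import json                      # first import: the module is loaded and cached

print('json' in sys.modules)     # True

def foo():
    import json                  # later imports are just a sys.modules lookup
    return json.dumps({"a": 1})

print(foo())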
As @bruno desthuilliers mentioned, importing inside functions may not be that Pythonic and goes against PEP 8 (here's a discussion I found); you should stick to importing at the top of the file most of the time.
First, __import__ isn't usually needed anywhere. Its main purpose is to support dynamic importing of things that you don't know ahead of time (think plug-ins). You can easily use the import statement inside your function:
import sys

def foo():
    import this

if __name__ == "__main__":
    print(sys.version_info)
    foo()
The main advantage to importing everything up-front is that it is most customary. That's where people reading your code will go to see if something is imported or not. Also, you don't need to write import os in every function that uses os. The main downsides of this approach are that:
you can get yourself into unresolvable import loops (A imports B which imports A)
you pull everything into memory even if you aren't going to use it.
The second problem isn't typically an issue -- very rarely do you notice the performance or memory impact of an import.
If you run into the first problem, it's likely a symptom of poorly grouped code and the common stuff should be factored into a new module C which both A and B can use.
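A bare-bones sketch of such an import loop; the file names are hypothetical:
# a.py (hypothetical)
import b          # runs b.py before VALUE below is defined
VALUE = 1

# b.py (hypothetical)
import a
print(a.VALUE)    # AttributeError: partially initialized module 'a' has no attribute 'VALUE'

# main.py
import a          # triggers the loop; the fix is to move VALUE into a third module both can import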
Firstly, using imports inside functions goes against PEP 8.
Calling import is a relatively expensive operation even if the module is already loaded, so if your function is going to be called many times, the lazy-loading gain will not make up for that overhead.
Also, when you write import test, Python effectively does this:
test = __import__('test')
The only downside of imports at the top of the file is that the namespace can get polluted quickly, depending on the complexity of the file; but if your file is that complex, it's a sign of bad design.
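For the dynamic case mentioned above, where the module name is only known at runtime, a small sketch (the module name 'json' is arbitrary):
import importlib

name = 'json'                          # e.g. a plug-in name chosen at runtime
mod = __import__(name)                 # low-level form; returns the top-level package
mod = importlib.import_module(name)    # preferred; also handles dotted names like 'os.path'
print(mod.dumps({"a": 1}))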

Python architecture - import extra modules, or import modules in code execution section?

I have a module that defines a class which instantiates a class from one of two (or more) other modules. Below are a couple of code examples. In the first example, two modules are imported, but only one is used (one per instance of MyIo). In the second example, only the required module is imported. There may be one or more instances of MyIo in a higher level module.
I like that the second example only imports what is used, but I don't really like that the import is taking place in a 'code execution' section.
My questions are:
Which of the examples is a better architectural choice, and why?
Is there a penalty for importing modules that are not eventually used?
Are imports in code execution sections in Python considered 'bad form?'
This example imports both modules, but only uses one...
''' MyIo.py '''
...
...
from DevSerial import Device as DeviceSerial
from DevUSB import Device as DeviceUSB

class MyIo:
    def __init__(self, port):
        if port.lower() == 'usb':
            self.device = DeviceUSB()
        else:
            self.device = DeviceSerial(port)
...
...
The following imports only the module being used...
''' MyIo.py '''
...
...
class MyIo:
    def __init__(self, port):
        if port.lower() == 'usb':
            from DevUSB import Device
            self.device = Device()
        else:
            from DevSerial import Device
            self.device = Device(port)
...
...
As per PEP 8, all imports should be together at the top of the file. Having them spread throughout the file leads to software that is hard to maintain and debug.
The only performance overhead I can think of is at program startup - it has to load more modules. Once the program is running there shouldn't be any extra overhead.
To answer your questions:
The former. It makes it immediately obvious which other modules are used, whereas with the second you have to dig through the code to find all the dependencies.
Yes, but only at startup.
Yes.
Actually, even though you are importing the modules inside a function, they will still exist in sys.modules once your function is done executing, unless you delete them manually. So there's really no point in not importing them directly at the top of your code (as in example #1).
The most common use for imports that are not just placed at the top of the file is for situations where sibling modules represent different, mutually exclusive options: the best example is os.path, which is automatically swapped for the appropriate platform-specific module. Even there, it's common to do the differential import up at the top and not down in the code.
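For instance, an os.path-style differential import done once at the top might look roughly like this; the file name and the _path alias are illustrative:
# myio_paths.py -- sketch only: pick one implementation at import time
import os

if os.name == 'nt':
    import ntpath as _path       # Windows flavour
else:
    import posixpath as _path    # everything else

join = _path.join                # the rest of the module just uses the chosen implementation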

python refresh/reload

This is a very basic question - but I haven't been able to find an answer by searching online.
I am using python to control ArcGIS, and I have a simple python script, that calls some pre-written code.
However, when I make a change to the pre-written code, it does not appear to result in any change. I import this module, and have tried refreshing it, but nothing happens.
I've even moved the file it calls to another location, and the script still works fine. One thing I did yesterday was add the folder where all my python files are to the sys path (using sys.path.append('path')), and I wonder if that made a difference.
Thanks in advance, and sorry for the sloppy terminology.
It's unclear what you mean by "refresh", but the normal behaviour of Python is that you need to restart the software for it to take a new look at a Python module and reread it.
If your changes aren't picked up even after a restart, then it is due to one of two errors:
The timestamp on the .pyc file is incorrect and set to some time in the future.
You are actually editing the wrong file.
You can re-read a file without restarting the software by using the reload() command. Note that any variable pointing to anything in the module will need to be reimported after the reload. Something like this:
import themodule
from themodule import AClass
reload(themodule)
from themodule import AClass
One way to do this is to call reload.
Example: Here is the contents of foo.py:
def bar():
return 1
In an interactive session, I can do:
>>> import foo
>>> foo.bar()
1
Then in another window, I can change foo.py to:
def bar():
return "Hello"
Back in the interactive session, calling foo.bar() still returns 1, until I do:
>>> reload(foo)
<module 'foo' from 'foo.py'>
>>> foo.bar()
'Hello'
Calling reload is one way to ensure that your module is up-to-date even if the file on disk has changed. It's not necessarily the most efficient (you might be better off checking the last modification time on the file or using something like pyinotify before you reload), but it's certainly quick to implement.
One reason that Python doesn't read from the source module every time is that loading a module is (relatively) expensive -- what if you had a 300kb module and you were just using a single constant from the file? Python loads a module once and keeps it in memory, until you reload it.
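A rough sketch of the modification-time check mentioned above; the module foo and the variable names are placeholders, and on Python 3 reload lives in importlib:
import importlib
import os
import foo

_last_mtime = os.path.getmtime(foo.__file__)

def maybe_reload():
    global _last_mtime
    mtime = os.path.getmtime(foo.__file__)
    if mtime > _last_mtime:          # the source file changed since the last load
        importlib.reload(foo)
        _last_mtime = mtime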
If you are running in an IPython shell, then there are some magic commands that exist.
The IPython docs cover this feature called the autoreload extension.
Originally, I found this solution from Jonathan March's blog posting on this very subject (see point 3 from that link).
Basically all you have to do is the following, and changes you make are reflected automatically after you save:
In [1]: %load_ext autoreload
In [2]: %autoreload 2
In [3]: import MODULE
In [4]: my_class = MODULE.MyClass()
   ...: my_class.printham()
Out[4]: ham
In [5]: # make changes to printham and save
In [6]: my_class.printham()
Out[6]: hamlet
I used the following when importing all objects from within a module to ensure web2py was using my current code:
import buttons
import table
reload(buttons)
reload(table)
from buttons import *
from table import *
I'm not really sure that is what you mean, so don't hesitate to correct me. You are importing a module - let's call it mymodule.py - in your program, but when you change its contents, you don't see the difference?
Python will not look for changes in mymodule.py each time it is used; it loads it the first time, compiles it to bytecode and keeps it internally. It will normally also save the compiled bytecode (mymodule.pyc). The next time you start your program, it checks whether mymodule.py is more recent than mymodule.pyc and recompiles it if necessary.
If you need to, you can reload the module explicitly:
import mymodule

[... some code ...]

if userAskedForRefresh:
    reload(mymodule)
Of course, it is more complicated than that, and you may get side effects depending on what the rest of your program does with the module, for example if variables depend on classes defined in mymodule.
Alternatively, you could use the execfile function (or exec(), eval(), compile())
I had the exact same issue creating a geoprocessing script for ArcGIS 10.2. I had a python toolbox script, a tool script and then a common script. I had a parameter for Dev/Test/Prod in the tool that controlled which version of the code was run: Dev would run the code in the dev folder, test from the test folder and prod from the prod folder. Changes to the common dev script would not run when the tool was run from ArcCatalog. Closing ArcCatalog made no difference. Even though I selected Dev or Test, it would always run from the prod folder.
Adding reload(myCommonModule) to the tool script resolved this issue.
The behaviour differs between Python versions.
The following shows an example for Python 3.4 or above:
>>> import hello
>>> hello.hello_world()   # calls the hello_world function
HI !!
# Now changes are made to hello.py and a reload is needed
>>> import importlib
>>> importlib.reload(hello)
>>> hello.hello_world()
How are you?
For earlier Python versions like 2.x, use the built-in reload function as stated above.
Better still, use IPython, as it provides the autoreload feature.
