When I import a function I wrote myself from a module into a Python script, it takes about 6 seconds to load. The function contains only about 50 lines of code, but that shouldn't even matter since it has not been executed yet, right?
This is the script that loads the function:
#!/usr/bin/env python
import time
print(time.clock())
from os import chdir
print(time.clock())
from os.path import abspath, dirname
print(time.clock())
from Project.controllers.SpiderController import RunSpider
print(time.clock())
And the output is as follows:
0.193569
0.194114
0.194458
6.315348
I also tried to import the whole module but the result was the same.
What could be the cause of that?
Some side notes:
I use Python 2.7.9
The module uses the Scrapy framework
The Python script is running on a Raspberry Pi 1 Model B
"but that shouldn't even matter since it has not been executed yet, right?"
The code of the function itself is not executed, but the rest of the code in the file is executed. This is logical, since that file might contain decorators, library calls, inner constants, etc. It is even possible that the function is built dynamically (so that an algorithm constructs the function).
With from <module> import <item> you do an almost normal import, but you bind only one name: a reference to that single item in the package.
So it can take a long time if there is program code in the module (code that is not guarded by if __name__ == '__main__':) or if the module imports a large number of additional libraries.
It is for instance possible to construct a function like:
def foo(x):
    return x + 5

def bar(y):
    return y * 2

def qux(x):
    return foo(bar(x))
If you then run from module import qux, Python will first have to define foo and bar, since qux depends on them.
Furthermore, although the code itself is not executed, the interpreter will analyze the function: it will transform the source code into a syntax tree and do some analysis (which variables are local, etc.).
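As a rough illustration of that analysis step (a standalone sketch, not from the question), the built-in compile() does the same kind of work that happens for each def during an import, without ever running the function body:
import dis

source = "def foo(x):\n    return x + 5\n"
code = compile(source, "<example>", "exec")  # parse and compile to bytecode
dis.dis(code)                                # shows MAKE_FUNCTION / STORE_NAME steps, but foo is never called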
Finally, note that a package typically has an __init__.py file that initializes the package. That file is also executed and can take considerable time as well. For instance, some packages that use a database will already set up the connection to that database at import time, and it can take a while before the database responds.
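As a hypothetical illustration (the package and file names here are made up, not from the question), an __init__.py like this pays its setup cost the moment anything is imported from the package:
# mypackage/__init__.py  (hypothetical example)
import sqlite3

print("initializing mypackage...")          # runs at import time, not at call time
_connection = sqlite3.connect("cache.db")   # slow setup work done during the import

def get_connection():
    return _connection
Even a plain from mypackage import get_connection executes the print and the connect call before get_connection is ever used.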
I have kind of a tricky question, to the point that it is difficult even to describe it.
Suppose I have this script, which we will call master:
#in master.py
import slave as slv

def import_func():
    import time

slv.method(import_func)
I want to make sure method in slave.py, which looks like this:
#in slave.py
def method(import_func):
    import_func()
    time.sleep(10)
actually runs as if I had imported the time package. Currently it does not work, I believe because the import exists only in the scope of import_func().
Keep in mind that the rules of the game are:
I cannot import anything in slave.py outside method
I need to pass the imports which method needs through import_func() in master.py
the procedure must work for a variable number of imports inside method. In other words, method cannot know how many imports it will receive but needs to work nonetheless.
the procedure needs to work for any import possible. So options like pyforest are not suitable.
I know it can theoretically be done through importlib, but I would prefer a more straightforward idea, because if we have a lot of imports with different 'as' labels it would become extremely tedious and convoluted with importlib.
I know it is kind of a quirky question but I'd really like to know if it is possible. Thanks
What you can do is this in the master file:
#in master.py
import slave as slv

def import_func():
    import time
    return time

slv.method(import_func)
Now use the returned time value in the slave file:
#in slave.py
def method(import_func):
    time = import_func()
    time.sleep(10)
Why do you have to do this? Because of scoping. When import_func() is called from slave.py, the import statement binds time as a local name inside import_func. When the function terminates, that local name goes out of scope, so nothing in method can refer to the module any more (the module object itself stays cached in sys.modules, but method has no name bound to it).
By returning time from import_func(), you hand the module object to the caller, which can bind its own name to it and keep using it after import_func has finished executing.
Now, what about importing more modules? Simple: return a list with multiple modules inside, or a dictionary for easy access by name. That's one way of doing it.
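For example, a minimal sketch of the dictionary idea without importlib (the module names are just examples):
#in master.py
import slave as slv

def import_func():
    import time
    import random
    return {'time': time, 'random': random}  # hand the module objects over by name

slv.method(import_func)

#in slave.py
def method(import_func):
    mods = import_func()
    mods['time'].sleep(mods['random'].randint(1, 3))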
[Edit] Using a dictionary and importlib to pass multiple imports to slave.py:
master.py:
import slave as slv
import importlib

def master_import(packname, imports={}):
    imports[packname] = importlib.import_module(packname)

def import_func():
    imports = {}
    master_import('time', imports)
    return imports

slv.method(import_func)
slave.py:
#in slave.py
def method(import_func):
    imports = import_func()
    imports['time'].sleep(10)
This way, you can import any modules you want on the master.py side, using the master_import() function, and pass them to the slave script.
Check this answer on how to use importlib.
I have a Python program where a function imports another script and runs it. But the script gets run only the first time the function is called.
def Open_Generator(event):
    import PasswordGenerator
Any tips?
*The function is called using a button in a tkinter window
This is by design. A module is only loaded once per process. Trying to import a module more than once will cause Python to re-fetch the module object from the cache, but this won't cause the module's code to execute a second time.
Most well-designed modules won't do anything right away when you import them, or at least won't do anything obviously visible. Generally, if you want a module to do work, you need to call one of its functions.
I'm guessing your PasswordGenerator module has some code at the file-level scope. In other words, it has code that isn't inside a function. Try to move that code into a function. Then you can call that function from Open_Generator.
import PasswordGenerator

def Open_Generator(event):
    my_password = PasswordGenerator.generate_password()
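For instance, a hypothetical PasswordGenerator.py laid out that way could look like the sketch below (the function name and password logic are assumptions, not your actual module):
# PasswordGenerator.py  (hypothetical sketch)
import random
import string

def generate_password(length=12):
    # work that used to sit at file level now lives in a function,
    # so it runs every time the button handler calls it
    chars = string.ascii_letters + string.digits
    return ''.join(random.choice(chars) for _ in range(length))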
I am writing a Python module and I am using many imports of other modules.
I am a bit confused about whether I should import all the necessary dependent modules at the top of the file or do it only when necessary.
I also wanted to know the implications of both.
I come from a C++ background, so I am really thrilled with this feature and do not see any reason not to use __import__(), importing modules only when needed inside my functions.
Kindly throw some light on this.
To write less code, import a module in the first lines of the script, e.g.:
#File1.py
import os

#use os somewhere:
os.chdir(some_dir)
...
...
#use os somewhere else; you don't need to "import os" everywhere
os.environ.update(some_dict)
While sometimes you may need to import a module locally (e.g., in a function):
abc = 3

def foo():
    from some_module import abc  #importing inside foo avoids naming conflicts
    abc(...)  #calls the imported function; nothing to do with the variable "abc" outside foo
Don't worry too much about the time consumed when calling foo() multiple times, since an import statement loads a module only once. Once a module is imported, the module object is stored in the dictionary sys.modules, which acts as a cache, so running the same import statement again is essentially just a lookup.
As @bruno desthuilliers mentioned, importing inside functions may not be that Pythonic and it goes against PEP 8; here's a discussion I found. You should stick to importing at the top of the file most of the time.
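A quick way to see that caching in action (a standalone sketch, using json only as an example module):
import sys
import time

start = time.time()
import json                    # first import: the module's code actually runs
print('first import:  %.6f s' % (time.time() - start))

start = time.time()
import json                    # already cached: essentially a sys.modules lookup
print('second import: %.6f s' % (time.time() - start))

print('json' in sys.modules)   # True: the module object lives in the cache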
First, __import__ isn't usually needed anywhere. Its main purpose is to support dynamic importing of things that you don't know ahead of time (think plug-ins). You can easily use the import statement inside your function:
import sys

def foo():
    import this

if __name__ == "__main__":
    print sys.version_info
    foo()
The main advantage of importing everything up front is that it is the most customary approach. That's where people reading your code will go to see whether something is imported or not. Also, you don't need to write import os in every function that uses os. The main downsides of this approach are:
you can get yourself into unresolvable import loops (A imports B, which imports A)
you pull everything into memory even if you aren't going to use it.
The second problem isn't typically an issue -- very rarely do you notice the performance or memory impact of an import.
If you run into the first problem, it's likely a symptom of poorly grouped code, and the common stuff should be factored into a new module C which both A and B can use.
Firstly, using imports inside functions is a violation of PEP 8.
Calling import is still a relatively expensive operation even if the module is already loaded, so if your function is going to be called many times, this will not make up for any performance gain.
Also, when you call import test, Python roughly does this:
test = __import__('test')
The only downside of imports at the top of the file is that the namespace can get polluted very quickly, depending on the complexity of the file; but if your file is too complex, that is a sign of bad design anyway.
Python won't allow me to import classes into each other. I know there is no "package" solution in Python, so I'm not sure how this could be done. Take a look at my files' code:
file Main.py:
from Tile import tile

tile.assign()

class main:
    x = 0

    @staticmethod
    def assign():
        tile.x = 20
file Tile.py:
from Main import main

main.assign()

class tile:
    x = 0

    @staticmethod
    def assign():
        main.x = 20
I get the error "class tile can not be imported".
If file A imports file B, and file B imports file A, they would keep importing each other indefinitely if Python did not stop it; Python stops it by caching the partially loaded module, so the second import sees a module whose classes have not been defined yet.
You need to rethink your logic in a way that doesn't require this circular dependency. For example, a 3rd file could import both files and perform both assignments.
You have your import backwards, and the from needs to have the name of the module:
from Tile import tile
Python begins executing Main.py. It sees the import, and so goes to execute Tile.py. Note that it has not yet executed the class statement!
So Python begins executing Tile.py. It sees an import from Main, and it already has that module in memory, so it doesn't re-execute the code in Main.py (even worse things would go wrong if it did). It tries to pull out a variable main from the Main module, but the class statement binding main hasn't executed yet (we're still in the process of executing the import statement, above that line). So you get the error about there not being a class main in module Main (or Tile, if you started from there).
You could avoid that by importing the modules rather than importing classes out of the modules, and using qualified names, but then you'd fail one line down when Main.main doesn't work. Your code makes no sense in a dynamic language; you can't have both the definition of class main wait until after tile.assign has been called and the definition of class tile wait until after main.assign has been called.
If you really need this circular dependency (it's often, but not always, a sign that something has gone wrong at the design stage), then you need to separate out "scaffolding" like defining classes, functions, and variables from "execution", where you actually call classes and functions or use variables. Then your circular imports of the "scaffolding" will work even though none of the modules will be properly initialized while the importing is going on, and by the time you get to starting the "execution" everything will work.
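One possible way to apply that to the code above (just a sketch, and app.py is a made-up name for the third file) is to keep the class definitions import-safe, defer the cross-imports into the methods, and put the actual calls in a third file:
#in Main.py -- only "scaffolding", nothing is called at import time
class main:
    x = 0

    @staticmethod
    def assign():
        from Tile import tile  # deferred import: runs when assign() is called,
        tile.x = 20            # by which time Tile.py is fully initialized

#in Tile.py
class tile:
    x = 0

    @staticmethod
    def assign():
        from Main import main
        main.x = 20

#in app.py -- the "execution" phase lives here
from Main import main
from Tile import tile

main.assign()
tile.assign()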
If I have a Python module implemented as a directory (i.e., a package) that has both a top-level function run and a submodule run, can I count on from example import run to always import the function? Based on my tests that is the case at least with Python 2.6 and Jython 2.5 on Linux, but can I count on this generally? I tried to search for information about the import priorities but couldn't find anything.
Background:
I have a pretty large package that people generally run as a tool from the command line but also sometimes use programmatically. I would like to have simple entry points for both usages and am considering implementing them like this:
example/__init__.py:
def run(*args):
    print args # real application code belongs here
example/run.py:
import sys
from example import run
run(*sys.argv[1:])
The first entry point allows users to access the module from Python like this:
from example import run
run(args)
The latter entry point allows users to execute the module from the command line using both of the approaches below:
python -m example.run args
python path/to/example/run.py args
This both works great and covers everything I need. Before taking this into real use, I would like to know whether this is a sound approach that I can expect to work with all Python implementations on all operating systems.
I think this should always work; the function definition will shadow the module.
However, this also strikes me as a dirty hack. The clean way to do this would be
# __init__.py
# re-export run.run as run
from .run import run
i.e., a minimal __init__.py, with all the running logic in run.py:
# run.py
import sys

def run(*args):
    print args # real application code belongs here

if __name__ == "__main__":
    run(*sys.argv[1:])