How to stop the import of a Python module

Suppose I have a file my_plugin.py:
var1 = 1
def my_function():
    print("something")
and in my main program I import this plugin:
import my_plugin
Is there a way to silently disable this plugin with something like a return statement?
For example, I could "mask" the behavior of my_function like this:
def my_function():
    return
    print("something")
I am wondering if I can do this for the module as a way to turn it on and off depending on what I am trying to do with the overall project. So something like:
return # this is invalid, but something that says stop running this module
# but continue on with the rest of the python program
var1 = 1
def my_function():
    print("something")
I suppose I could just comment everything out and that would work... but I was wondering if something a little more concise exists
--- The purpose:
The thinking behind this is that I have a large-ish code base that is extensible by plugins. There is a plugins directory, so the main program looks in that directory and runs all the modules in there. The use case is just to put a little kill switch inside plugins that are causing problems, as an alternative to deleting or moving the file temporarily.

You can just conditionally import the module:
if thing == otherthing:
    import module
This is entirely valid syntax in Python. With this you can set a flag variable at the start of your project and import modules based on what you need in that project.
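Applied to the plugin use case, a minimal sketch (the flag and plugin names here are hypothetical, not from the question):
# set this flag near the top of the main program
ENABLE_MY_PLUGIN = False  # the kill switch: flip to True to re-enable

if ENABLE_MY_PLUGIN:
    import my_plugin
Alternatively, since the loader scans a directory and imports whatever it finds, raising ImportError at the top of a misbehaving plugin aborts its import, provided the loader wraps each plugin import in a try/except.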

Related

Passing imports to a script from inside an external method

I have kind of a tricky question, to the point that it is difficult even to describe it.
Suppose I have this script, which we will call master:
#in master.py
import slave as slv
def import_func():
    import time
slv.method(import_func)
I want to make sure method in slave.py, which looks like this:
#in slave.py
def method(import_func):
    import_func()
    time.sleep(10)
actually runs as if I had imported the time package. Currently it does not work, I believe because the import exists only in the scope of import_func().
Keep in mind that the rules of the game are:
I cannot import anything in slave.py outside method
I need to pass the imports which method needs through import_func() in master.py
the procedure must work for a variable number of imports inside method. In other words, method cannot know how many imports it will receive but needs to work nonetheless.
the procedure needs to work for any import possible. So options like pyforest are not suitable.
I know it can theoretically be done through importlib, but I would prefer a more straightforward idea, because if we have a lot of imports with different 'as' labels it would become extremely tedious and convoluted with importlib.
I know it is kind of a quirky question but I'd really like to know if it is possible. Thanks
What you can do is this in the master file:
#in master.py
import slave as slv
def import_func():
    import time
    return time
slv.method(import_func)
Now use the returned time value in the slave file:
#in slave.py
def method(import_func):
    time = import_func()
    time.sleep(10)
Why would you have to do this? Because of scoping. When import_func() is called from slave.py, the import statement binds the name time only in the function's local namespace, and that binding disappears when the function returns. (The module object itself stays cached in sys.modules and is not garbage-collected; what is lost is any name referring to it.)
By returning time from import_func(), you hand method() a reference to the module that it can bind to a name and keep using after the function finishes.
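A quick, illustrative way to see this:
import sys

def import_func():
    import time   # binds the name 'time' only inside this function
    return time

t = import_func()
print('time' in sys.modules)  # True: the module object is still cached
t.sleep(1)                    # usable through the returned reference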
Now, to import more modules? Simple. Return a list with multiple modules inside. Or maybe a dictionary for simple access. That's one way of doing it.
[Edit] Using a dictionary and importlib to pass multiple imports to slave.py:
master.py:
import slave as slv
import importlib
def master_import(packname, imports):
    imports[packname] = importlib.import_module(packname)
def import_func():
    imports = {}
    master_import('time', imports)
    return imports
slv.method(import_func)
slave.py:
#in slave.py
def method(import_func):
    imports = import_func()
    imports['time'].sleep(10)
This way, you can import any modules you want on the master.py side, using the master_import() function, and pass them to the slave script.
Check this answer on how to use importlib.
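As for the 'as' labels the question worries about, the dictionary key can serve as the alias. An illustrative extension of master_import() (the alias parameter is an addition for illustration, not part of the answer above):
import importlib

def master_import(packname, imports, alias=None):
    # the dictionary key doubles as the 'as' label
    imports[alias or packname] = importlib.import_module(packname)

imports = {}
master_import('os.path', imports, alias='osp')
print(imports['osp'].join('a', 'b'))  # same effect as: import os.path as osp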

python get the script which imported my script

I want to make my own programming language based on Python which will provide additional features that Python doesn't provide, for example multiline anonymous functions with custom syntax. I want my programming language to be simple to use: just import my script, which then reads the script file that imported it, processes its code, and stops any further execution of the calling script to prevent syntax errors...
Let's say there are 2 .py files, main.py and MyLanguage.py.
main.py imports MyLanguage.py.
Then how do I get the main.py file from MyLanguage.py, given that main.py can have another name (a dynamic name)?
Additional information:
I am using Python 3.4.4 on Windows 7.
Like Colonder, I believe the project you have in mind is far more difficult than you imagine.
But, to get you started, here is how to get the main.py file from inside MyLanguage.py. If your importing module looks like this
# main.py
import MyLanguage
if __name__ == "__main__":
print("Hello world from main.py")
and the module it is importing looks like this, in Python 3:
#MyLanguage.py
import inspect
def caller_discoverer():
    print('Importing file is', inspect.stack()[-1].filename)
caller_discoverer()
or (edit) like this, in Python 2:
#MyLanguage.py
import inspect
def caller_discoverer():
    print 'Importing file is', inspect.stack()[-1][1]
caller_discoverer()
then the output you will get when you run main.py is
Importing file is E:/..blahblahblah../StackOverflow-3.6/48034902/main.py
Hello world from main.py
I believe this answers the question you asked, though I don't think it goes very far towards achieving what you want. The reason for my scepticism is simple: the import statement expects a file containing valid Python, and if you want to import a file with your own non-Python syntax, then you are going to have to do some very clever stuff with import hooks. Without that, your program will simply fail at the import statement with a syntax error.
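For a flavour of what such a hook involves, here is a minimal, illustrative sketch (Python 3.4+; the module name, source file, and translation step are hypothetical placeholders):
import importlib.abc
import importlib.util
import sys

class MyLangFinder(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    def find_spec(self, name, path, target=None):
        if name == 'my_dsl':  # hypothetical module name
            return importlib.util.spec_from_loader(name, self)
        return None  # any other import: defer to the normal machinery

    def create_module(self, spec):
        return None  # use the default module creation

    def exec_module(self, module):
        source = open('my_dsl.lang').read()  # hypothetical source file
        translated = source  # your custom-syntax-to-Python translation goes here
        exec(compile(translated, 'my_dsl.lang', 'exec'), module.__dict__)

sys.meta_path.insert(0, MyLangFinder())
With the finder installed, import my_dsl would execute the translated source instead of failing with a syntax error.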
Best of luck.

Check name of running script file in module

I have 2 app files which import the same module:
#app1.py
import settings as s
# more code
#app2.py
import settings as s
# more code
In the module I need to check whether the first or the second app is running:
#settings.py
#pseudocode
if running app1.py:
    print('app1')
else:
    print('app2')
I checked the inspect module but have no idea.
Also, I am open to any better solutions.
EDIT: I feel a bit foolish (I guess it is easy)
I tried:
var = None
def foo(a):
    var = a
print (var)
but still None.
I'm not sure it is possible for an importee to know who imported it. Even if it were, it sounds like a code smell to me.
What you can do instead is delegate the decision of what actions are to be taken to app1 and app2, rather than having settings make that decision.
For example:
settings.py
def foo(value):
    if value == 'app1':
        pass  # do something
    else:
        pass  # do something else
app1.py
from settings import foo
foo('app1')
And so on.
To assign within the function and have it reflected in a global variable, use the global keyword. Example:
A.py
var = None
def foo(a):
    global var
    var = a
def print_var():
    print(var)
test.py
import A
A.print_var()
A.foo(123)
A.print_var()
Output:
None
123
Note that globals aren't recommended in general as a programming practice, so use them as little as possible.
I think your current approach is not the best way to solve your issue. You can solve this, too, by modifying settings.py slightly. You have two possible ways to go: either the solution of coldspeed, or using delegates. Either way, you have to store the code of your module inside functions.
Another way to solve this issue would be (depending on how many code lines depend on the app name) to pass a function/delegate to the function as a parameter like this:
#settings.py
def theFunction(otherParameters, callback):
    # do something
    callback()
#app1.py
from settings import theFunction
def clb():
    print("called from settings.py")
    # do something app-specific here
theFunction(otherParameters, clb)
This appears to be a cleaner solution compared to the inspect solution, as it allows a better separation of the two modules.
Whether you should choose the first or the second version depends highly on your application; maybe you could provide us with more information about the broader issue you are trying to solve.
As others have said, perhaps this is not the best way to achieve it. If you do want to though, how about using sys.argv to identify the calling module?
app:
import settings as s
settings:
import sys
import os
print(sys.argv[0])
# \\path\\to\\app.py
print(os.path.split(sys.argv[0])[-1])
# app.py
Of course, this gives you the file that was originally run from the command line, so if this is part of a further nested set of imports, this won't work for you.
This works for me.
import inspect
import os
curframe = inspect.currentframe()
calframe = inspect.getouterframes(curframe, 1)
if os.path.basename(calframe[1][1]) == 'app1.py':
    print('app1')
else:
    print('app2')

python: dynamically loading one-time plugins?

I'm writing a python application in which I want to make use of dynamic, one-time-runnable plugins.
By this I mean that at various times during the running of this application, it looks for python source files with special names in specific locations. If any such source file is found, I want my application to load it, run a pre-named function within it (if such a function exists), and then forget about that source file.
Later during the running of the application, that file might have changed, and I want my python application to reload it afresh, execute its method, and then forget about it, like before.
The standard import system keeps the module resident after the initial load, which means that subsequent "import" or "__import__" calls won't reload the same module after its initial import. Therefore, any changes to the Python code within this source file are ignored during its second through n-th imports.
In order for such packages to be loaded uniquely each time, I came up with the following procedure. It works, but it seems kind of "hacky" to me. Are there any more elegant or preferred ways of doing this? (note that the following is an over-simplified, illustrative example)
import sys
import imp
# The following module name can be anything, as long as it doesn't
# change throughout the life of the application ...
modname = '__whatever__'
def myimport(path):
    '''Dynamically load python code from "path"'''
    # get rid of previous instance, if it exists
    try:
        del sys.modules[modname]
    except:
        pass
    # load the module
    try:
        return imp.load_source(modname, path)
    except Exception, e:
        print 'exception: {}'.format(e)
        return None
path = '/path/to/plugin.py'
mymod = myimport(path)
if mymod is not None:
    # call the plugin function:
    try:
        mymod.func()
    except:
        print 'func() not defined in plugin: {}'.format(path)
Addendum: one problem with this is that func() runs within a separate module context, and it has no access to any functions or variables within the caller's space. I therefore have to do inelegant things like the following if I want func_one(), func_two() and abc to be accessible within the invocation of func():
def func_one():
    pass  # whatever
def func_two():
    pass  # whatever
abc = '123'
# Load the module as shown above, but before invoking mymod.func(),
# the following has to be done ...
mymod.func_one = func_one
mymod.func_two = func_two
mymod.abc = abc
# This is a PITA, and I'm hoping there's a better way to do all of
# this.
Thank you very much.
I use the following code to do this sort of thing.
Note that I don't actually import the code as a module, but instead execute the code in a particular context. This lets me define a bunch of API functions automatically available to the plugins without users having to import anything.
def load_plugin(filename, context):
    source = open(filename).read()
    code = compile(source, filename, 'exec')
    exec(code, context)
    return context['func']
context = { 'func_one': func_one, 'func_two': func_two, 'abc': abc }
func = load_plugin(filename, context)
func()
This method works in Python 2.6+ and Python 3.3+.
The approach you use is totally fine. For this question
one problem with this is that func() runs within a separate module context, and it has no access to any functions or variables within the caller's space.
It may be better to use the execfile function:
# main.py
def func1():
    print('func1 called')
exec(open('/path/to/plugin.py', 'r').read(), globals()) # this is similar to import except everything is done in the current module
#execfile('/path/to/plugin.py', globals()) # python 2 version
func()
Test it:
#/path/to/plugin.py
def func():
    func1()
Result:
python main.py
# func1 called
One potential problem with this approach is namespace pollution: every file is run in the current namespace, which increases the chance of name conflicts.
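If that pollution is a concern, a fresh, separate module namespace per load is also possible on Python 3.5+ with importlib.util; a sketch (replacing the deprecated imp.load_source from the question):
import importlib.util

def load_fresh(path, modname='__whatever__'):
    # builds a brand-new module object on every call; nothing is cached in sys.modules
    spec = importlib.util.spec_from_file_location(modname, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # runs the file's top-level code
    return module

mymod = load_fresh('/path/to/plugin.py')
if hasattr(mymod, 'func'):
    mymod.func()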

Python: intercept a class loading action

Summary: when a certain python module is imported, I want to be able to intercept this action, and instead of loading the required class, I want to load another class of my choice.
Reason: I am working on some legacy code. I need to write some unit test code before I start some enhancement/refactoring. The code imports a certain module which will fail in a unit test setting, however (because of a database server dependency).
Pseudo code:
from LegacyDataLoader import load_me_data
...
def do_something():
    data = load_me_data()
So, ideally, when Python executes the import line above in a unit test, an alternative class, say MockDataLoader, is loaded instead.
I am still using 2.4.3. I suppose there is an import hook I can manipulate.
Edit
Thanks a lot for the answers so far. They are all very helpful.
One particular type of suggestion is about manipulating PYTHONPATH. That does not work in my case, so I will elaborate on my particular situation here.
The original codebase is organised in this way
./dir1/myapp/database/LegacyDataLoader.py
./dir1/myapp/database/Other.py
./dir1/myapp/database/__init__.py
./dir1/myapp/__init__.py
My goal is to enhance the Other class in the Other module. But since it is legacy code, I do not feel comfortable working on it without strapping a test suite around it first.
Now I introduce this unit test code
./unit_test/test.py
The content is simply:
from myapp.database.Other import Other
def test1():
    o = Other()
    o.do_something()
if __name__ == "__main__":
    test1()
When the CI server runs the above test, the test fails. It is because class Other uses LegacyDataLoader, and LegacyDataLoader cannot establish a database connection to the db server from the CI box.
Now let's add a fake class as suggested:
./unit_test_fake/myapp/database/LegacyDataLoader.py
./unit_test_fake/myapp/database/__init__.py
./unit_test_fake/myapp/__init__.py
Modify the PYTHONPATH to
export PYTHONPATH=unit_test_fake:dir1:unit_test
Now the test fails for another reason:
File "unit_test/test.py", line 1, in <module>
from myapp.database.Other import Other
ImportError: No module named Other
It has something to do with the way Python resolves classes/attributes in a module.
You can intercept import and from ... import statements by defining your own __import__ function and assigning it to __builtin__.__import__ (make sure to save the previous value, since your override will no doubt want to delegate to it; and you'll need to import __builtin__ to get the builtin-objects module).
For example (Py2.4 specific, since that's what you're asking about), save in aim.py the following:
import __builtin__
realimp = __builtin__.__import__
def my_import(name, globals={}, locals={}, fromlist=[]):
    print 'importing', name, fromlist
    return realimp(name, globals, locals, fromlist)
__builtin__.__import__ = my_import
from os import path
and now:
$ python2.4 aim.py
importing os ('path',)
So this lets you intercept any specific import request you want, and alter the imported module[s] as you wish before you return them -- see the specs here. This is the kind of "hook" you're looking for, right?
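To actually substitute a module rather than just log the import, the override can short-circuit on the target name; an illustrative extension (Python 2, matching the answer above; MockDataLoader stands in for your fake module):
import __builtin__
import MockDataLoader
realimp = __builtin__.__import__
def my_import(name, globals={}, locals={}, fromlist=[]):
    if name == 'LegacyDataLoader':
        return MockDataLoader  # hand back the mock instead of the real module
    return realimp(name, globals, locals, fromlist)
__builtin__.__import__ = my_import
For from LegacyDataLoader import load_me_data to work, MockDataLoader must define a load_me_data of its own.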
There are cleaner ways to do this, but I'll assume that you can't modify the file containing from LegacyDataLoader import load_me_data.
The simplest thing to do is probably to create a new directory called testing_shims, and create a LegacyDataLoader.py file in it. In that file, define whatever fake load_me_data you like. When running the unit tests, put testing_shims into your PYTHONPATH environment variable as the first directory. Alternately, you can modify your test runner to insert testing_shims as the first value in sys.path.
This way, your file will be found when importing LegacyDataLoader, and your code will be loaded instead of the real code.
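A sketch of the sys.path variant (testing_shims as above; illustrative):
# in the test runner, before anything imports the real module
import sys
sys.path.insert(0, 'testing_shims')  # the shim directory wins the module search

import module_under_test  # LegacyDataLoader now resolves to the shim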
The import statement just grabs stuff from sys.modules if a matching name is found there, so the simplest thing is to make sure you insert your own module into sys.modules under the target name before anything else tries to import the real thing.
# in test code
import sys
import MockDataLoader
sys.modules['LegacyDataLoader'] = MockDataLoader
import module_under_test
There are a handful of variations on the theme, but that basic approach should work fine to do what you describe in the question. A slightly simpler approach would be this, using just a mock function to replace the one in question:
# in test code
import module_under_test
def mock_load_me_data():
    pass  # do mock stuff here
module_under_test.load_me_data = mock_load_me_data
That simply replaces the appropriate name right in the module itself, so when you invoke the code under test, presumably do_something() in your question, it calls your mock routine.
Well, if the import fails by raising an exception, you could put it in a try...except block:
try:
    from LegacyDataLoader import load_me_data
except: # put error that occurs here, so as not to mask actual problems
    from MockDataLoader import load_me_data
Is that what you're looking for? If it fails, but doesn't raise an exception, you could have it run the unit test with a special command line tag, like --unittest, like this:
import sys
if "--unittest" in sys.argv:
from MockDataLoader import load_me_data
else:
from LegacyDataLoader import load_me_data
