I have 2 app files that import the same module:
#app1.py
import settings as s
# more code
#app2.py
import settings as s
# more code
Inside the module I need to check whether the first or the second app is running:
#settings.py
#pseudocode
if running app1.py:
    print('app1')
else:
    print('app2')
I looked at the inspect module but have no idea how to do this.
I am also open to any better solutions.
EDIT: I feel a bit foolish (I guess it is easy). I tried:
var = None
def foo(a):
    var = a
print(var)
but still None.
I'm not sure it is possible for an importee to know who imported it. Even if it were, it sounds like a code smell to me.
Instead, you can delegate the decision of what actions to take to app1 and app2, rather than having settings make that decision.
For example:
settings.py
def foo(value):
    if value == 'app1':
        pass  # do something
    else:
        pass  # do something else
app1.py
from settings import foo

foo('app1')
And so on.
To assign within a function and have the assignment reflected in a module-level (global) variable, use the global statement. Example:
A.py
var = None
def foo(a):
    global var
    var = a
def print_var():
    print(var)
test.py
import A
A.print_var()
A.foo(123)
A.print_var()
Output:
None
123
Note that globals aren't recommended in general as a programming practice, so use them as little as possible.
I think your current approach is not the best way to solve your issue. You can solve this by modifying settings.py slightly. You have two possible ways to go: either the solution of coldspeed, or using delegates. Either way, you have to move your module's code inside functions.
Another way to solve this (depending on how many lines of code depend on the app name) is to pass a function/delegate as a parameter, like this:
#settings.py
def theFunction(otherParameter, callback):
    # do something
    callback()
#app1.py
from settings import theFunction

def clb():
    print("called from settings.py")
    # do something app specific here

theFunction(otherParameter, clb)
This appears to be a cleaner solution compared to the inspect solution, as it allows a better separation of the two modules.
Whether you should choose the first or the second version depends highly on the application; maybe you could provide more information about the broader issue you are trying to solve.
As others have said, perhaps this is not the best way to achieve it. If you do want to though, how about using sys.argv to identify the calling module?
app:
import settings as s
settings:
import sys
import os

print(sys.argv[0])
# \\path\\to\\app.py
print(os.path.split(sys.argv[0])[-1])
# app.py
Of course, this gives you the file that was originally run from the command line, so if the import happens somewhere in a further nested set of imports, this won't work for you.
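A closely related variant (a sketch, not from the original answer) reads the entry script off the __main__ module instead of sys.argv; it has the same nested-import limitation:
#settings.py
import os
import sys

# __main__ is the module Python creates for the originally launched script
main_module = sys.modules.get('__main__')
entry_script = os.path.basename(getattr(main_module, '__file__', ''))
if entry_script == 'app1.py':
    print('app1')
else:
    print('app2')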
This works for me.
import inspect
import os

curframe = inspect.currentframe()
calframe = inspect.getouterframes(curframe, 1)
# calframe[1][1] is the filename of the frame that performed the import
if os.path.basename(calframe[1][1]) == 'app1.py':
    print('app1')
else:
    print('app2')
Related
I have 2 different files. One is from the CI build:
build.py
ABC_ACTIVATE = False
def activate_abc():
    global ABC_ACTIVATE
    ABC_ACTIVATE = True
# Maybe some more very long code.
The other one is for customization:
customize.py
from build import *
activate_abc()
print(ABC_ACTIVATE)
The idea is to customize the activation for each environment with one function call instead of very long code. But it doesn't work, since ABC_ACTIVATE is always False.
It seems that the global variable does not carry its value over into the other file; perhaps some "circular dependency" problem.
So my question is: is there any better structural solution? The idea is still to activate via a function, with customize.py being the final settings step for the Apache build.
The global variable does not carry its value over into the other file; perhaps some "circular dependency" problem.
Once you import it, ABC_ACTIVATE becomes a separate name local to the importing script. Therefore, rebinding the variable in build.py won't be reflected in your other module.
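A minimal sketch of that binding behaviour (not from the original answer; it reuses the build.py shown above):
# customize.py (sketch)
import build
from build import ABC_ACTIVATE

build.activate_abc()
print(ABC_ACTIVATE)        # False: the name created by the from-import is a separate binding
print(build.ABC_ACTIVATE)  # True: the attribute on the module object was rebound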
So my question is: is there any better structural solution?
One thing you could do is create an intermediary function in build.py that returns the ABC_ACTIVATE Boolean:
def is_abc_activated():
    return ABC_ACTIVATE
and then import it like so:
from build import activate_abc, is_abc_activated

print(is_abc_activated())
activate_abc()
print(is_abc_activated())
Output:
False
True
Basically, this removes your wildcard import from build import *, which is an anti-idiom in Python. It also improves readability, since accessing ABC_ACTIVATE directly can be confusing about what exactly you're doing.
After some discussion, my friend found a rather hacky solution for it:
build.py:
ABC_ACTIVATE = False
def activate_abc(other_context):
    other_context.ABC_ACTIVATE = True
And in customize.py:
import sys
from build import *

activate_abc(sys.modules[__name__])
print(ABC_ACTIVATE)
It works now.
That looks like incorrect syntax for the function definition in build.py: the first { should be a :, and the second } is not needed, as Python uses indentation to delimit code blocks:
ACTIVATE = False
def activate():
    global ACTIVATE
    ACTIVATE = True
Maybe you could also do...
import build
build.activate()
...and then read build.ACTIVATE, since the name pulled in by a from ... import is a different variable bound into the current file's namespace, whereas build.py itself keeps using the variable in its own file.
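A minimal sketch of that suggestion (reusing the build.py above; not from the original answer):
# customize.py (sketch)
import build

build.activate()
print(build.ACTIVATE)  # True: attribute access reads the variable inside build.py itself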
Suppose I have a file my_plugin.py:
var1 = 1

def my_function():
    print("something")
and in my main program I import this plugin
import my_plugin
Is there a way to silently disable this plugin with something like a return statement?
For example, I could "mask" the behavior of my_function like this:
def my_function():
    return
    print("something")  # never reached
I am wondering if I can do this for the module as a way to turn it on and off depending on what I am trying to do with the overall project. So something like:
return  # this is invalid, but something that says "stop running this module,
        # but continue on with the rest of the python program"

var1 = 1

def my_function():
    print("something")
I suppose I could just comment everything out and that would work... but I was wondering if something a little more concise exists
--- The purpose:
The thinking behind this is that I have a large-ish code base that is extensible by plugins: the main program looks in a plugins directory and runs all the modules in there. The use case is to put a little kill switch inside plugins that are causing problems, as an alternative to temporarily deleting or moving the file.
You can just conditionally import the module:
if thing == otherthing:
    import module
This is entirely valid syntax in Python. With this, you can set a flag at the start of your project and import modules based on what you need in that project.
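A minimal sketch of that idea (the config module and flag name are assumptions, not from the original answer):
# config.py: central switches for optional plugins
ENABLE_MY_PLUGIN = False  # flip to True to re-enable the plugin

# main.py
import config

if config.ENABLE_MY_PLUGIN:
    import my_plugin
    my_plugin.my_function()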
I am writing functional tests using pytest for software that can run locally and in the cloud. I want to create 2 modules, each with the same module/fixture names, and have pytest load one or the other depending on whether I'm running the tests locally or in the cloud:
/fixtures
/fixtures/__init__.py
/fixtures/local_hybrids
/fixtures/local_hybrids/__init__.py
/fixtures/local_hybrids/foo.py
/fixtures/cloud_hybrids
/fixtures/cloud_hybrids/__init__.py
/fixtures/cloud_hybrids/foo.py
/test_hybrids/test_hybrids.py
foo.py (both of them):
import pytest

@pytest.fixture()
def my_fixture():
    return True
/fixtures/__init__.py:
if True:  # the real local/cloud check goes here
    import local_hybrids as hybrids
else:
    import cloud_hybrids as hybrids
/test_hybrids/test_hybrids.py:
from fixtures.hybrids.foo import my_fixture

def test_hybrid(my_fixture):
    assert my_fixture
The last code block doesn't work, of course, because import fixtures.hybrids looks at the file system instead of the "fake" namespace set up in __init__.py. from fixtures import hybrids does work, but then you cannot use the fixtures, as the names would involve dot notation.
I realize that I could play with pytest_generate_tests to alter the fixtures dynamically (maybe?), but I'd really hate managing each fixture manually from within that function... I was hoping the dynamic import (if x, import this, else import that) was standard Python; unfortunately, it clashes with the fixtures mechanism:
import fixtures

def test(fixtures.hybrids.my_fixture):  # of course it doesn't work :)
    ...
I could also import each fixture function one after the other in __init__.py; more legwork, but still a viable option to fool pytest and get fixture names without dots.
Show me the black magic. :) Can it be done?
I think in your case it's better to define a fixture named environment or some other nice name.
This fixture can be just a getter from os.environ['KEY'], or you can add a custom command line option to pytest and have the fixture read it.
What I'm trying to say is that you need to switch your thinking to dependency injection: everything should be a fixture. In your case (and in my plugin as well), the runtime environment should be a fixture, which is consulted by all other fixtures that depend on the environment.
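A minimal sketch of that idea (the TEST_ENV variable name and the placeholder return values are illustrative assumptions):
# conftest.py (sketch)
import os
import pytest

@pytest.fixture(scope='session')
def environment():
    # defaults to 'local' when TEST_ENV is not set
    return os.environ.get('TEST_ENV', 'local')

@pytest.fixture()
def my_fixture(environment):
    # the strings stand in for real local/cloud resources
    if environment == 'cloud':
        return 'cloud-resource'
    return 'local-resource'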
You might be missing something here: if you want to re-use those fixtures, you need to say so explicitly:
from fixtures.hybrids.foo import my_fixture

@pytest.mark.usefixtures('my_fixture')
def test_hybrid(my_fixture):
    assert my_fixture
In that case, you could tweak pytest as follows:
from local_hybrids import local_hybrids_fixture
from cloud_hybrids import cloud_hybrids_fixture

fixtures_to_test = {
    "local": None,
    "cloud": None
}

@pytest.mark.usefixtures("local_hybrids_fixture")
def test_add_local_fixture(local_hybrids_fixture):
    fixtures_to_test["local"] = local_hybrids_fixture

@pytest.mark.usefixtures("cloud_hybrids_fixture")
def test_add_cloud_fixture(cloud_hybrids_fixture):
    fixtures_to_test["cloud"] = cloud_hybrids_fixture

def test_on_fixtures():
    if cloud_enabled:
        fixture = fixtures_to_test["cloud"]
    else:
        fixture = fixtures_to_test["local"]
    ...
If there are better solutions around I am also interested ;)
I don't really think there is a "good way" of doing this in Python, but it is still possible with a small amount of hacking. You can update sys.path with the subfolder of fixtures you would like to use and import the fixtures directly. In its dirty form it looks like this:
for your fixtures/__init__.py:
if True:
    import local as hybrids
else:
    import cloud as hybrids

def update_path(module):
    from sys import path
    from os.path import join, pardir, abspath
    mod_dir = abspath(join(module.__file__, pardir))
    path.insert(0, mod_dir)

update_path(hybrids)
and in the client code (test_hybrids/test_hybrids.py):
import fixtures
from foo import spam

spam()
In other cases you can perform much more complex actions to fake-move all the modules/packages/functions etc. from your cloud/local folder directly into the fixtures' __init__.py. Still, I don't think it is worth the effort.
One more thing: black magic is not the best thing to rely on. I would recommend you use explicit dotted imports ("from Y import X") instead; that is a much more stable solution.
Use the pytest plugins feature and put your fixtures in separate modules. Then at runtime, select which plugin you'll be drawing from via a command line argument or an environment variable. It needs to be something global, because you need to set different pytest_plugins list assignments based on that global value.
Take a look at the section Conditional Plugins from this repo https://github.com/jxramos/pytest_behavior/tree/main/conditional_plugins
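A minimal sketch of that approach (the TEST_ENV variable and module paths are assumptions based on the layout above):
# conftest.py at the project root (sketch)
import os

# pytest imports the modules named in pytest_plugins and makes
# their fixtures available to all tests
if os.environ.get('TEST_ENV') == 'cloud':
    pytest_plugins = ['fixtures.cloud_hybrids.foo']
else:
    pytest_plugins = ['fixtures.local_hybrids.foo']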
Say that I have two Python files:
# importer.py
parameter = 4
import importee
and
# importee.py
print(parameter)
Can I, and if so how, access importer.py's parameter from importee.py?
I'm using an ugly workaround: borrowing (and polluting) the sys module:
# importer.py
import sys
sys.parameter = 4
import importee
and
# importee.py
import sys
print(sys.parameter)
Too ugly. I'm looking for a better solution.
The recommended way to achieve what I think you want is to declare a function in importee and call it from importer, e.g.:
# importer.py
import importee
importee.call_me(4)
and:
# importee.py
def call_me(parameter):
    print(parameter)
It is preferable to avoid performing any operations at global (module) scope, and especially to avoid print()ing anything there, but I suppose your minimal example doesn't match your real use case :).
By the way, the ugly workaround you mentioned is practically equivalent to using a separate configuration module. For example:
# importer.py
import config
config.param = 4
import importee

# importee.py
import config
print(config.param)

# config.py
param = 7  # some default
It's still nowhere near pretty, but it at least avoids meddling with system modules.
Summary: when a certain Python module is imported, I want to be able to intercept this action, and instead of loading the required class, load another class of my choice.
Reason: I am working on some legacy code. I need to write some unit test code before I start some enhancement/refactoring. The code imports a certain module which will fail in a unit test setting, however, because of a database server dependency.
Pseudocode:
from LegacyDataLoader import load_me_data
...
def do_something():
    data = load_me_data()
So, ideally, when Python executes the import line above in a unit test, an alternative class, say MockDataLoader, is loaded instead.
I am still using Python 2.4.3. I suppose there is an import hook I can manipulate.
Edit
Thanks a lot for the answers so far. They are all very helpful.
One particular type of suggestion is about manipulating PYTHONPATH. That does not work in my case, so I will elaborate on my particular situation here.
The original codebase is organised in this way
./dir1/myapp/database/LegacyDataLoader.py
./dir1/myapp/database/Other.py
./dir1/myapp/database/__init__.py
./dir1/myapp/__init__.py
My goal is to enhance the Other class in the Other module. But since it is legacy code, I do not feel comfortable working on it without strapping a test suite around it first.
Now I introduce this unit test code:
./unit_test/test.py
The content is simply:
from myapp.database.Other import Other

def test1():
    o = Other()
    o.do_something()

if __name__ == "__main__":
    test1()
When the CI server runs the above test, the test fails. It is because class Other uses LegacyDataLoader, and LegacyDataLoader cannot establish a database connection to the db server from the CI box.
Now let's add a fake class as suggested:
./unit_test_fake/myapp/database/LegacyDataLoader.py
./unit_test_fake/myapp/database/__init__.py
./unit_test_fake/myapp/__init__.py
Modify the PYTHONPATH to
export PYTHONPATH=unit_test_fake:dir1:unit_test
Now the test fails for another reason:
File "unit_test/test.py", line 1, in <module>
from myapp.database.Other import Other
ImportError: No module named Other
It has something to do with the way Python resolves classes/attributes in a module.
You can intercept import and from ... import statements by defining your own __import__ function and assigning it to __builtin__.__import__ (make sure to save the previous value, since your override will no doubt want to delegate to it; you'll also need to import __builtin__ to get the builtin-objects module).
For example (Py2.4 specific, since that's what you're asking about), save the following in aim.py:
import __builtin__

realimp = __builtin__.__import__

def my_import(name, globals={}, locals={}, fromlist=[]):
    print 'importing', name, fromlist
    return realimp(name, globals, locals, fromlist)

__builtin__.__import__ = my_import

from os import path
and now:
$ python2.4 aim.py
importing os ('path',)
So this lets you intercept any specific import request you want, and alter the imported module[s] as you wish before you return them -- see the documentation of the built-in __import__ for the details. This is the kind of "hook" you're looking for, right?
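To connect this to the question, a sketch of how the same hook could hand back the mock (it assumes a MockDataLoader module on the path that defines load_me_data; Py2.4-era style as above):
import __builtin__
import MockDataLoader

realimp = __builtin__.__import__

def my_import(name, globals={}, locals={}, fromlist=[]):
    # hand back the mock whenever the legacy module is requested;
    # Python then does the getattr for "from ... import load_me_data"
    if name == 'LegacyDataLoader':
        return MockDataLoader
    return realimp(name, globals, locals, fromlist)

__builtin__.__import__ = my_import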
There are cleaner ways to do this, but I'll assume that you can't modify the file containing from LegacyDataLoader import load_me_data.
The simplest thing to do is probably to create a new directory called testing_shims, and create a LegacyDataLoader.py file in it. In that file, define whatever fake load_me_data you like. When running the unit tests, put testing_shims into your PYTHONPATH environment variable as the first directory. Alternatively, you can modify your test runner to insert testing_shims as the first value in sys.path.
This way, your file will be found when importing LegacyDataLoader, and your code will be loaded instead of the real code.
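A sketch of the test-runner variant (the directory name is the one suggested above; the path is assumed relative to the working directory):
# at the top of the test runner (sketch)
import sys

# make testing_shims the first entry so its LegacyDataLoader.py
# shadows the real module for every subsequent import
sys.path.insert(0, 'testing_shims')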
The import statement just grabs stuff from sys.modules if a matching name is found there, so the simplest thing is to make sure you insert your own module into sys.modules under the target name before anything else tries to import the real thing.
# in test code
import sys
import MockDataLoader
sys.modules['LegacyDataLoader'] = MockDataLoader
import module_under_test
There are a handful of variations on the theme, but that basic approach should work fine to do what you describe in the question. A slightly simpler approach would be this, using just a mock function to replace the one in question:
# in test code
import module_under_test

def mock_load_me_data():
    # do mock stuff here
    pass

module_under_test.load_me_data = mock_load_me_data
That simply replaces the appropriate name right in the module itself, so when you invoke the code under test, presumably do_something() in your question, it calls your mock routine.
Well, if the import fails by raising an exception, you could put it in a try...except block:
try:
    from LegacyDataLoader import load_me_data
except ImportError:  # put the error that actually occurs here, so as not to mask real problems
    from MockDataLoader import load_me_data
Is that what you're looking for? If it fails but doesn't raise an exception, you could run the unit test with a special command line flag, like --unittest:
import sys

if "--unittest" in sys.argv:
    from MockDataLoader import load_me_data
else:
    from LegacyDataLoader import load_me_data