I have 2 different files:
One is from the CI build:
build.py
ABC_ACTIVATE = False

def activate_abc():
    global ABC_ACTIVATE
    ABC_ACTIVATE = True

# Maybe some more very long code.
The other is from the customization:
customize.py
from build import *

activate_abc()
print(ABC_ACTIVATE)
The idea is to customize the activation for each environment with one function call instead of very long code. But it doesn't work, since ABC_ACTIVATE is always False.
It seems the global variable does not share the same context in the other file, possibly some "cyclical dependencies" problem.
So my question is: is there a better structural solution? The idea is still to activate via a function, and customize.py would be the last setting for the Apache build.
Once you import it, ABC_ACTIVATE becomes a local name in the context of that script. Mutating the variable in build.py therefore won't be reflected in your other module. As for a better structure:
One thing you could do is create an intermediary function in build.py that returns the ABC_ACTIVATE boolean:
def is_abc_activated():
    return ABC_ACTIVATE
and then import it like so:
from build import activate_abc, is_abc_activated
print(is_abc_activated())
activate_abc()
print(is_abc_activated())
Output:
False
True
Basically, this removes the wildcard import from build import *, which is an anti-idiom in Python. It will also increase readability, since accessing ABC_ACTIVATE directly can be confusing about what exactly you're doing.
After some discussion, my friend found a quite hacky solution for it:
build.py:
ABC_ACTIVATE = False

def activate_abc(other_context):
    other_context.ABC_ACTIVATE = True
And in customize.py:
import sys

from build import *

activate_abc(sys.modules[__name__])
print(ABC_ACTIVATE)
It works now.
That looks like incorrect syntax for a function definition in build.py: the first { should be a :, and the second } is not needed, as Python uses indentation to signify code blocks:
ACTIVATE = False

def activate():
    global ACTIVATE
    ACTIVATE = True
Maybe you could also do...
import build
build.activate()
...since the code in build.py uses the variable in its own module, whereas your imported variable is a different name: the star import copies the binding into the current file's namespace at import time.
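A minimal sketch of that module-attribute approach (assuming the build.py with activate_abc shown above); because the name stays attached to the build module object, the mutation is visible everywhere:
customize.py
import build

build.activate_abc()         # mutates the global inside the build module
print(build.ABC_ACTIVATE)    # True: the attribute is read off the module object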
Related
I got a hint to use optional requirements and a conditional import to provide a function that can use pandas or not, depending on whether it's available.
See here for reference:
https://stackoverflow.com/a/74862141/10576322
This solution works, but when I test this code I always get poor coverage, since pandas is either imported or not. So even if I configure hatch to create environments for both tests, it looks like the tests don't cover this if/else function definition sufficiently.
Is there a proper way around this, e.g. to combine the two results? Or can I tell coverage that the result is expected for that block of code?
Code
The module looks like this:
try:
    import pandas as pd
    PANDAS_INSTALLED = True
except ImportError:
    PANDAS_INSTALLED = False

if PANDAS_INSTALLED:
    def your_function(...):
        # magic with pandas
        return output
else:
    def your_function(...):
        # magic without pandas
        return output
The idea is that the two versions of the function work exactly the same apart from the inner procedures. So everybody, no matter where, can use my_module.my_function and doesn't need to start writing code that depends on the environment.
The same is true for testing. I can write tests for my_module.my_function; if the venv has pandas I am testing one part of it, and if not the test exercises the other part.
from mypackage import my_module

def test_my_function():
    res = 'foo'
    assert my_module.my_function() == res
That is working fine, but coverage evaluation is complicated.
Paths to solution
So far I am aware of two solutions.
1. mocking the behavior
@TYZ suggested always having pandas as a dependency for testing and mocking the global variable.
I tried that, but it didn't work as I expected. The reason is that I can of course mock the PANDAS_INSTALLED variable, but the function definition already took place during import and is no longer affected by the variable.
I tried to check if I can mock the import in another test module, but didn't succeed.
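A hedged sketch of one way to force the ImportError branch from within a pandas-equipped venv (the test name is illustrative): setting a module's sys.modules entry to None makes a subsequent import of it raise ImportError, and importlib.reload re-executes the module body so the try/except runs again.
import importlib
import sys
from unittest import mock

from mypackage import my_module

def test_my_function_without_pandas():
    with mock.patch.dict(sys.modules, {'pandas': None}):
        importlib.reload(my_module)   # re-runs the try/except with pandas blocked
        assert my_module.PANDAS_INSTALLED is False
        assert my_module.my_function() == 'foo'
    importlib.reload(my_module)       # restore the module's real state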
2. defining venvs with and without pandas and combining results
I found that coverage and pytest-cov have the ability to append coverage results across environments or to combine different results.
In a first test I changed the pytest-cov script in hatch to include --cov-append. That worked, but it's totally global. That means if I get complete coverage in Python 3.8 but for whatever reason the switch doesn't work in Python 3.9, I wouldn't see it.
What I would like to do is combine the different results by some logic inherited from hatch's test matrix, like coverage combine py38.core py38.pandas, and the same for 3.9. Then I would see whether I have the same coverage for all tested versions.
I guess there are possible solutions for that with tox, but maybe I don't need to pull in another tool.
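A hedged sketch of that combine workflow with plain coverage commands (the data-file names are illustrative; the COVERAGE_FILE environment variable tells coverage where to write its data):
$ COVERAGE_FILE=.coverage.py38.core coverage run -m pytest     # venv without pandas
$ COVERAGE_FILE=.coverage.py38.pandas coverage run -m pytest   # venv with pandas
$ coverage combine .coverage.py38.core .coverage.py38.pandas   # merges into .coverage
$ coverage report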
If it is a test case you're writing, shouldn't the behavior you're testing be the same regardless of whether pandas is installed? From the original question it appears you'd have the function defined either way. The intent of your unit test then ought to be: "given these parameters, test whether the return value/behavior is this".
That said, if you want coverage with and without pandas, my recommendation would be to declare two differently named functions (which can be imported and unit tested separately), while your runtime function is assigned depending on the flag set in the import block. Something like:
# your_code.py
try:
    import pandas as pd
    PANDAS_INSTALLED = True
except ImportError:
    PANDAS_INSTALLED = False

def _using_pandas(...):
    ...

def _not_using_pandas(...):
    ...

do_something = _using_pandas if PANDAS_INSTALLED else _not_using_pandas

__all__ = ['do_something']
# -------------
# your_tests.py
try:
    import pandas as pd
    PANDAS_INSTALLED = True
except ImportError:
    PANDAS_INSTALLED = False

import pytest

from your_code import _using_pandas, _not_using_pandas, do_something

@pytest.mark.skipif(not PANDAS_INSTALLED, reason='pandas not installed')
def test_code_using_pandas(...):
    ...

@pytest.mark.skipif(PANDAS_INSTALLED, reason='pandas installed')
def test_code_not_using_pandas(...):
    ...

def test_do_something(...):
    # test behavior independent of imports
    ...
Suppose I have a file my_plugin.py:
var1 = 1

def my_function():
    print("something")
and in my main program I import this plugin
import my_plugin
Is there a way to silently disable this plugin with something like a return statement?
For example, I could "mask" the behavior of my_function like this:
def my_function():
    return
    print("something")
I am wondering if I can do this for the module as a way to turn it on and off depending on what I am trying to do with the overall project. So something like:
return  # this is invalid, but something that says "stop running this module,
        # but continue on with the rest of the python program"

var1 = 1

def my_function():
    print("something")
I suppose I could just comment everything out and that would work... but I was wondering if something a little more concise exists
--- The purpose:
The thinking behind this is that I have a largeish code base that is extensible by plugins. There is a plugins directory, so the main program looks in the directory and runs all the modules that are in there. The use case was just to put a little kill switch inside plugins that are causing problems, as an alternative to deleting or temporarily moving the file.
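For that directory-scanning setup, a hedged sketch of a loader that honors a per-plugin kill switch (the pkgutil-based discovery and the ENABLED convention are assumptions, not the asker's actual loader; plugins would opt out by setting ENABLED = False at module level):
import importlib
import pkgutil

# assumes plugins/ is an importable package (contains __init__.py)
for mod_info in pkgutil.iter_modules(['plugins']):
    module = importlib.import_module(f'plugins.{mod_info.name}')
    if getattr(module, 'ENABLED', True):   # a missing flag means enabled
        module.my_function()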
You can just conditionally import the module:
if thing == otherthing:
    import module
This is entirely valid syntax in Python. With this you can set a flag variable at the start of your project that imports modules based on what you need in that project.
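Applied to the plugin from the question, a minimal sketch of that flag-controlled import (ENABLE_MY_PLUGIN is an illustrative name):
ENABLE_MY_PLUGIN = False   # flip to True to re-enable the plugin

if ENABLE_MY_PLUGIN:
    import my_plugin
    my_plugin.my_function()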
I have 2 app files that import the same module:
# app1.py
import settings as s
# more code

# app2.py
import settings as s
# more code
I need the module to check whether the first or the second app is running:
# settings.py
# pseudocode
if running app1.py:
    print('app1')
else:
    print('app2')
I looked at the inspect module, but I have no idea.
Also, I am open to any better solutions.
EDIT: I feel a bit foolish (I guess it is easy)
I tried:
var = None

def foo(a):
    var = a   # assigns a new local variable, not the global one

print(var)
but still None.
I'm not sure it is possible for an importee to know who imported it. Even if it were, it sounds like a code smell to me.
Instead, you can delegate the decision about what actions are to be taken to app1 and app2, instead of having settings make that decision.
For example:
settings.py
def foo(value):
    if value == 'app1':
        ...  # do something
    else:
        ...  # do something else
app1.py
from settings import foo
foo('app1')
And so on.
To assign within the function and have it reflected in a global variable, declare the name global inside the function. Example:
A.py
var = None

def foo(a):
    global var
    var = a

def print_var():
    print(var)
test.py
import A
A.print_var()
A.foo(123)
A.print_var()
Output:
None
123
Note that globals aren't recommended in general as a programming practice, so use them as little as possible.
I think your current approach is not the best way to solve your issue. You can also solve this by modifying settings.py slightly. You have two possible ways to go: either coldspeed's solution, or using delegates. Either way, you have to put the code of your module inside functions.
Another way to solve this issue would be (depending on how many lines of code depend on the app name) to pass a function/delegate to the function as a parameter, like this:
# settings.py
def theFunction(otherParameters, callback):
    # do something
    callback()

# app1.py
from settings import theFunction

def clb():
    print("called from settings.py")
    # do something app-specific here

theFunction(otherParameters, clb)
This appears to be a cleaner solution compared to the inspect solution, as it allows a better separation of the two modules.
Whether you should choose the first or the second version depends highly on the application; maybe you could provide us with more information about the broader issue you are trying to solve.
As others have said, perhaps this is not the best way to achieve it. If you do want to, though, how about using sys.argv to identify the calling module?
app:
import settings as s

settings:
import os
import sys

print(sys.argv[0])
# \\path\\to\\app.py
print(os.path.split(sys.argv[0])[-1])
# app.py
Of course, this gives you the file that was originally run from the command line, so if this is part of a further nested set of imports, this won't work for you.
This works for me.
import inspect
import os

curframe = inspect.currentframe()
calframe = inspect.getouterframes(curframe, 1)

if os.path.basename(calframe[1][1]) == 'app1.py':
    print('app1')
else:
    print('app2')
I've been trying to evaluate a simple "integrate(x,x)" statement from within Python, by following the Sage instructions for importing Sage into Python. Here's my entire script:
#!/usr/bin/env sage -python
from sage.all import *

def main():
    integrate(x, x)
    pass

main()
When I try to run it from the command line, I get this error thrown:
NameError: global name 'x' is not defined
I've tried adding var(x) to the script, tried global x, and tried replacing integrate(x,x) with sage.integrate(x,x), but I can't seem to get it to work; I always get an error thrown.
The command I'm using is ./sage -python /Applications/path_to/script.py
I can't seem to understand what I'm doing wrong here.
Edit: I have a feeling it has something to do with the way I've "imported" Sage. I have a folder, let's call it folder 1, and inside folder 1 are the "sage" folder and "script.py".
I am thinking this because typing "sage." doesn't bring up any autocomplete options.
The name x is not imported by import sage.all. To define a variable x, you need to issue a var statement, like this:
var('x')
integrate(x,x)
or, better,
x = SR.var('x')
integrate(x,x)
The second example does not automagically inject the name x into the global scope, so you have to explicitly assign the result to a variable yourself.
Here's what Sage does (see the file src/sage/all_cmdline.py):
from sage.all import *
from sage.calculus.predefined import x
If you put these lines in your Python file, then integrate(x,x) will work. (In fact, sage.calculus.predefined just defines x using the var function from sage.symbolic.ring; this just calls SR.var, as suggested in the other answer. But if you want to really imitate Sage's initialization process, these two lines are what you need.)
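Putting it together, a minimal sketch of the corrected script (the printed antiderivative assumes Sage's usual output form):
#!/usr/bin/env sage -python
from sage.all import *
from sage.calculus.predefined import x

def main():
    print(integrate(x, x))  # 1/2*x^2

main()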
I'm sure this is very simple, but I've been unable to get it working correctly. I need my main Python script to call another Python script and pass variables from the original script to the script I've called.
So, for a simplistic example, my first script is:
first.py
x = 5
import second
and my second script is:
second.py
print(x)
and I would expect it to print x but I get
NameError: name 'x' is not defined
I'm not sure if import is the right way to achieve this, but if someone could shed light on it in a simple way, that would be great!
thanks,
EDIT
After reading the comments I thought I would expand on my question. Aswin Murugesh's answer fixes the import problem I was having; however, the solution does not have the desired outcome, as I cannot seem to pass items in a list this way.
In first.py I have a list which I process as follows:
for index, item in enumerate(my_list, start=1):
    # call second.py, passing the current list item
I wanted to pass each item in the list to a second Python file for further processing (web scraping). I didn't want to do this in first.py, as it is meant to be the main 'scan' program that calls other programs. I hope this now makes more sense.
Thanks for the comments thus far.
When you call a script, the calling script can access the namespace of the called script. (In your case, first can access the namespace of second.) However, what you are asking for is the other way around. Your variable is defined in the calling script, and you want the called script to access the caller's namespace.
An answer is already stated in this SO post, in the question itself:
Access namespace of calling module
But I will just explain it here in your context.
To get what you want in your case, start off the called script with the following line:
from __main__ import *
This allows it to access the namespace (all variables and functions) of the caller script.
So now your calling script is, as before:
x = 5
import second
and the called script is:
from __main__ import *

print(x)
This should work fine.
Use the following scripts:
first.py:
x = 5

second.py:
import first

print(first.x)
This will print the x value. Imported script data should always be referenced with the script name, as in first.x.
To avoid namespace pollution, import the variables you want individually: from __main__ import x, and so on. Otherwise you'll end up with naming conflicts you weren't aware of.
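For the list-processing case in the question's edit, a cleaner pattern than importing for side effects is to expose a function in second.py and pass each item explicitly; a minimal sketch (the function and list names are illustrative):
second.py
def process(item):
    # further per-item processing (e.g. web scraping) would go here
    print(item)

first.py
import second

my_list = ['a', 'b', 'c']
for item in my_list:
    second.process(item)   # pass each list item explicitly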
Try using exec.
Python 3.5:
first.py
x = 5
exec(open('second.py').read())
second.py
print(x)
You can also pass x by using:
x = 5
myVars = {'x': x}
exec(open('second.py').read(), myVars)
Not sure if this is a good way.
Finally,
I created a package for Python to solve this problem.
Install Guli from pip:
$ pip install guli
Guli doesn't require installing any additional pip packages.
With the package you can:
Pass variables between different Python scripts.
Pass variables between the main process and another (multiprocessing) process.
Use variables within the same script.
Create / Delete / Edit GuliVariables.
Example
import multiprocessing
import time

import guli

string = guli.GuliVariable("hello").get()
print(string)  # returns empty string ""

def my_function():
    ''' change the value from another process '''
    guli.GuliVariable("hello").setValue(4)

multiprocessing.Process(target=my_function).start()

time.sleep(0.01)  # delay after starting the process, to catch the update

string = guli.GuliVariable("hello").get()
print(string)  # now returns the updated value (4)
Hope I solved the problem for many people!