How to set dynamic global variables in Python? - python

Let's say I have a file a.py, and a.py has a variable var.
In file b.py, I import a.py and set a.var = "hello".
Then I import a.py in c.py and try to access a.var, but I get None.
How do I make the change visible in the other file? It looks as though two different instances of the module are involved, but how do I fix that?
Edit:
What I am trying to do here is create a config file that consists of this:
CONFIG = {
    'client_id': 'XX',
    'client_secret': 'XX',
    'redirect_uri': 'XX'
}

auth_token = ""
auth_url = ""
access_point = ""
Now, let's say I use config.py in the script a.py and set config.access_point = "web".
If I then import config.py in another file, I want that change to be reflected instead of getting "" back.
Edit:
A text file seems like an easy way out. I could also use the ConfigParser module. But isn't it a bit much if a file has to be read on every server request?

As a preliminary: a second import statement, even from another module, does not re-execute the code in the imported module's source file if that module has already been loaded. Instead, importing an already loaded module just gives you access to its existing namespace.
Thus, if you dynamically change variables in your module a, even from code in another module, other modules will in fact see the changed values.
Consider this test example:
testa.py:
print "Importing module a"
var = ""
testb.py:
import testa
testa.var = "test"
testc.py:
import testa
print(testa.var)
Now in the python interpreter (or the main script, if you prefer):
>>> import testb
Importing module a
>>> import testc
test
>>>
So you see that the change to var made when importing testb is indeed visible in testc.
I would suspect that your problem lies in whether, and in what order, your code is actually executed.
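Applied to your config.py, the module itself can serve as the shared state. Here is a minimal sketch using the names from your edit; note that the change is only visible once the code that sets it has actually run, and that you should read it as config.access_point rather than doing from config import access_point (which would copy the old value into the importing module's namespace):
config.py:
CONFIG = {
    'client_id': 'XX',
    'client_secret': 'XX',
    'redirect_uri': 'XX'
}
access_point = ""
a.py:
import config
config.access_point = "web"   # mutates the one shared module object
c.py:
import a          # running a.py sets config.access_point
import config
print(config.access_point)    # prints "web"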

In file a.py define a class:
class A:
    class_var = False

    def __init__(self):
        self.object_var = False
Then in b.py import this and instantiate an object of class A:
from a import A
a = A()
Now you can set attributes on the instance a:
a.object_var = True
And you can set attributes on the class A itself:
A.class_var = True
If you now check:
>>> a.class_var
True
>>> a.object_var
True
>>> another_instance = A()
>>> another_instance.object_var
False
>>> another_instance.class_var
True
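If you prefer this approach for the config use case, a class attribute gives you one shared value without needing an instance. A minimal sketch reusing the names from the question (the Config class name is just illustrative):
config.py:
class Config:
    access_point = ""
a.py:
from config import Config
Config.access_point = "web"   # set on the class, so every importer sees it
c.py:
from config import Config
print(Config.access_point)    # "web", provided the code in a.py has already run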

Well, I'd use a text file to hold the values.
a.py:
with open("values.txt") as f:
    var = f.read()  # you can also read only a certain number of bytes
So whenever you import it, var is initialized from the text file, and whenever you want to change the value of var, just change the contents of the text file.
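If the per-request file read is the concern, one option is to read the file once at import time and only write it back when the value changes. A rough sketch, assuming values.txt stores a single value (the set_var helper is just an illustration):
a.py:
with open("values.txt") as f:
    var = f.read().strip()

def set_var(new_value):
    global var
    var = new_value
    with open("values.txt", "w") as f:  # persist so the next process sees it too
        f.write(new_value)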

Related

Why isn't my class's init function running?

I'm struggling to understand the __init__ function for a class when calling it from another file. This has been asked a lot, and I must be having an off day, because I can't get this to work at all! Well, I take that back: if I only use __init__ or don't use it at all, it works. It's probably something dumb and obvious that I'm missing. Here's what's up:
Folder structure:
root/
    controller/
        __init__.py
    main.py
File 1 - main.py:
from controller import appkey, appconf
# log file gets created in here #
app_process = appkey.generate(logfile)
File 2 - controller/__init__.py:
from public import printlog

class appkey(object):
    def __init__(self, logfile):
        self.logfile = logfile

    def generate(self):
        printlog.log(message=f'Generating runtime key for application instance.',
                     ## <<<< ERROR HAPPENS HERE | AttributeError: 'str' object has no attribute 'logfile' >>>> ##
                     file=self.logfile,  ### <--
                     level='info')
        try:
            <<<< stuff >>>>
            return run_, key_
        except:
            <<<< mayday | exit >>>>
Visual Studio shows self: 'runtime/application_logs/12072021-122743' as a variable on entering controller/__init__.py, but it never becomes "self.logfile".
I appreciate the feedback. Thank you!
The issue is that you are not creating an appkey object anywhere; you are attempting to call generate on the class itself rather than on an instance of the class. You are also passing logfile to the generate method when you should be passing it to the constructor.
You can change your code in main.py to this or something similar:
# Call the constructor first to create an appkey object, named 'a'
a = appkey(logfile)
# Now call generate on it
app_process = a.generate()
You would need to first create the class object, then call the generate method in main.py.
from controller import appkey, appconf
# log file gets created in here #
appkey_obj = appkey(logfile)
app_process = appkey_obj.generate()
You set self.logfile (i.e. the logfile instance variable) when you create the object. Since it is an instance variable, you do not need to pass it into the generate method.
You are getting that error because when you call appkey.generate(logfile) on the class, the self parameter of generate is bound to the logfile string instead of to an appkey instance. Normally self refers to the object the method was called on, but since you passed logfile positionally, self is the string, and self.logfile effectively asks that string for a logfile attribute, which it does not have.
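Putting the two fixes together, a minimal sketch of how the class and the call site line up (the printlog call and the key-generation body are omitted since those details aren't shown):
controller/__init__.py:
class appkey(object):
    def __init__(self, logfile):
        self.logfile = logfile   # stored once, on the instance

    def generate(self):
        # use self.logfile here; no need to pass it in again
        ...
main.py:
from controller import appkey
# log file gets created in here #
app_process = appkey(logfile).generate()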

Is there any way to access a variable in one function of a file from another file

I have 2 files, prgm.py and test.py.
prgm.py:
def move(self):
    H = newtest.myfunction()
    i = H.index(z)
    user = newuser.my_function()
    print(user[i])
How will I get user[i] in the other file, test.py?
Use an import statement in the other file, like this: from prgm import move
Note: For this to work, both files need to be in the same folder, or the path to the file you are importing needs to be on your PYTHONPATH.
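If the files are not in the same folder, one option is to extend sys.path at runtime before importing (a quick sketch; the path is illustrative and should point at the folder containing prgm.py):
import sys
sys.path.append("/path/to/folder/containing/prgm")
from prgm import move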
Instead of printing the result, you can simply return it. In the second file, you just import the function from this source file and call it.
Given the situation, move is actually an instance method, so you need to import the whole class and instantiate it in the second file.
prgm.py
class Example:
    def move(self):
        H = newtest.myfunction()
        i = H.index(z)
        user = newuser.my_function()
        return user[i]
test.py
from prgm import Example
example = Example()
user = example.move()
# do things with user

Call a function from a different file where the file name and function name are read from a list

I have multiple functions stored in different files, and both the file names and the function names are stored in lists. Is there any way to call the required function without conditional statements?
For example, file1 has the functions function11 and function12:
def function11():
    pass

def function12():
    pass
file2 has functions function21 and function22
def function21():
    pass

def function22():
    pass
and I have the lists
file_name = ["file1", "file2", "file1"]
function_name = ["function12", "function22", "function12"]
I will get the list index from a different function; based on that, I need to call the corresponding function and get its output.
If the other function will give you a list index directly, then you don't need to deal with the function names as strings. Instead, directly store (without calling) the functions in the list:
import file1, file2
functions = [file1.function12, file2.function22, file1.function12]
And then call them once you have the index:
functions[index]()
There are ways to do what is called "reflection" in Python and get from the string to a matching-named function. But they solve a problem that is more advanced than what you describe, and they are more difficult (especially if you also have to work with the module names).
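For completeness, that string-based lookup would look roughly like this, using importlib.import_module plus getattr (only reasonable if the strings come from a trusted source):
import importlib

module = importlib.import_module(file_name[index])    # e.g. "file1"
func = getattr(module, function_name[index])          # e.g. "function12"
result = func()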
If you have a "whitelist" of functions and modules that are allowed to be called from the config file, but still need to find them by string, you can explicitly create the mapping with a dict:
allowed_functions = {
    'file1': {
        'function11': file1.function11,
        'function12': file1.function12
    },
    'file2': {
        'function21': file2.function21,
        'function22': file2.function22
    }
}
And then invoke the function:
try:
    func = allowed_functions[module_name][function_name]
except KeyError:
    raise ValueError("this function/module name is not allowed")
else:
    func()
The most advanced approach is needed if you have to load code from a "plugin" module created by the author. You can use the standard library importlib package to take the string name, find the file to import as a module, and import it dynamically. It looks something like:
import os
from importlib.util import spec_from_file_location, module_from_spec

# Look for the file at the specified path, figure out the module name
# from the base file name, import it and make a module object.
def load_module(path):
    folder, filename = os.path.split(path)
    basename, extension = os.path.splitext(filename)
    spec = spec_from_file_location(basename, path)
    module = module_from_spec(spec)
    spec.loader.exec_module(module)
    assert module.__name__ == basename
    return module
This is still unsafe, in the sense that it can look anywhere on the file system for the module. Better if you specify the folder yourself, and only allow a filename to be used in the config file; but then you still have to protect against hacking the path by using things like ".." and "/" in the "filename".
(I have a project that does something like this. It chooses the paths from a whitelist that is also under the user's control, so I have to warn my users not to trust the path-whitelist file from each other. I also search the directories for modules, and then make a whitelist of plugins that may be used, based only on plugins that are in the directory - so no funny games with "..". And I'm still worried I forgot something.)
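One way to guard against path tricks like ".." is to only accept a bare file name and join it onto a folder you control; a rough sketch of that check (PLUGIN_DIR and the helper name are illustrative):
import os

PLUGIN_DIR = "/srv/myapp/plugins"   # hypothetical, fixed plugin folder

def safe_plugin_path(filename):
    # Accept "urltitle.py"; reject "../evil.py", "/etc/passwd", etc.
    if os.path.basename(filename) != filename:
        raise ValueError("plugin must be a bare file name")
    return os.path.join(PLUGIN_DIR, filename)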
Once you have a module name, you can get a function from it by name like:
dynamic_module = load_module(some_path)
try:
    func = getattr(dynamic_module, function_name)
except AttributeError:
    raise ValueError("function not in module")
At any rate, there is no reason to eval anything, or generate and import code based on user input. That is most unsafe of all.
Another alternative. This is not much safer than an eval(), however: someone with access to the lists you read from the config file could inject malicious code into the names you import, e.g.
'from subprocess import call; call(["rm", "-rf", "./*"])'
Code:
import re

# You must first create a directory named "test_module"
# (you can do this with code if needed).
# Python recognizes a package by the existence of an __init__.py.
# It will execute that __init__.py at the "import" statement, and you can
# access the names it imports.
m = ["os", "sys", "subprocess"]  # Modules to import from
f = ["getcwd", "exit", "call; call('do', '---terrible-things')"]  # Methods to import

# Create an __init__.py
with open("./test_module/__init__.py", "w") as FH:
    for count in range(0, len(m), 1):
        # Writes "from module import method" to __init__.py
        line = "from {} import {}\n".format(m[count], f[count])
        # !!!! SANITIZE THE LINE !!!!
        if not re.match("^from [a-zA-Z0-9._]+ import [a-zA-Z0-9._]+$", line):
            print("The line '{}' is suspicious. Will not be entered into __init__.py!!".format(line))
            continue
        FH.write(line)

import test_module
print(test_module.getcwd())
OUTPUT:
The line 'from subprocess import call; call('do', '---terrible-things')' is suspicious. Will not be entered into __init__.py!!
/home/rightmire/eclipse-workspace/junkcode
I'm not 100% sure I'm understanding the need; maybe add more detail to the question.
Is something like this what you're looking for?
m = ["os"]
f = ["getcwd"]
command = ''.join([m[0], ".", f[0], "()"])
# Put in some minimum sanity checking and sanitization!!!
if ";" in command or <other dangerous string> in command:
print("The line '{}' is suspicious. Will not run".format(command))
sys.exit(1)
print("This will error if the method isnt imported...")
print(eval(''.join([m[0], ".", f[0], "()"])) )
OUTPUT:
This will error if the method isnt imported...
/home/rightmire/eclipse-workspace/junkcode
As pointed out by @KarlKnechtel, having commands come in from an external file is a gargantuan security risk!

Dynamically reload a class definition in Python

I've written an IRC bot using Twisted and now I've gotten to the point where I want to be able to dynamically reload functionality.
In my main program, I do from bots.google import GoogleBot and I've looked at how to use reload to reload modules, but I still can't figure out how to do dynamic re-importing of classes.
So, given a Python class, how do I dynamically reload the class definition?
Reload is unreliable and has many corner cases where it may fail. It is suitable for reloading simple, self-contained scripts. If you want to dynamically reload your code without restarting, consider using forkloop instead:
http://opensourcehacker.com/2011/11/08/sauna-reload-the-most-awesomely-named-python-package-ever/
You cannot reload the module using reload(module) when using the from X import Y form. You'd have to do something like reload(sys.modules['module']) in that case.
This might not necessarily be the best way to do what you want, but it works!
import bots.google

class BotClass(irc.IRCClient):
    def __init__(self):
        global plugins
        plugins = [bots.google.GoogleBot()]

    def privmsg(self, user, channel, msg):
        global plugins
        parts = msg.split(' ')
        trigger = parts[0]
        if trigger == '!reload':
            reload(bots.google)
            plugins = [bots.google.GoogleBot()]
            print "Successfully reloaded plugins"
I figured it out, here's the code I use:
import re

def reimport_class(self, cls):
    """
    Reload and reimport class "cls". Return the new definition of the class.
    """
    # Get the fully qualified name of the class.
    from twisted.python import reflect
    full_path = reflect.qual(cls)
    # Naively parse the module name and class name.
    # Can be done much better...
    match = re.match(r'(.*)\.([^\.]+)', full_path)
    module_name = match.group(1)
    class_name = match.group(2)
    # This is where the good stuff happens.
    mod = __import__(module_name, fromlist=[class_name])
    reload(mod)
    # The (reloaded definition of the) class itself is returned.
    return getattr(mod, class_name)
Better yet, run the plugins in a subprocess and supervise that subprocess; when the files change, reload the plugin process.
Edit: cleaned up.
You can use sys.modules to dynamically reload modules based on user input.
Say that you have a folder with multiple plugins such as:
module/
    cmdtest.py
    urltitle.py
    ...
You can use sys.modules in this way to load or reload modules based on user input:
import sys

if 'module.' + userinput in sys.modules:
    reload(sys.modules['module.' + userinput])
else:
    print('Module not loaded. Cannot reload; importing instead.')
    try:
        module = __import__('module.' + userinput)
        module = sys.modules['module.' + userinput]
    except ImportError:
        print('Error when trying to load %s' % userinput)
When you do a from ... import ..., it binds the object into the importing module's namespace, so all you need to do is re-import it. However, since the module is already loaded, re-importing would just give you the same version of the class, so you need to reload the module first. So this should do it:
from bots.google import GoogleBot
...
# do stuff
...
reload(bots.google)
from bots.google import GoogleBot
If for some reason you don't know the module name, you can get it from GoogleBot.__module__.
import sys

def reload_class(class_obj):
    module_name = class_obj.__module__
    module = sys.modules[module_name]
    pycfile = module.__file__
    modulepath = pycfile.replace(".pyc", ".py")
    code = open(modulepath, 'rU').read()
    # Compile first so that a syntax error aborts before the reload.
    compile(code, module_name, "exec")
    module = reload(module)
    return getattr(module, class_obj.__name__)
There is a lot of error checking you could add to this, and if you're using global variables in the module you will probably have to figure out what happens to them.
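Usage would be along these lines; SomeClass is just a stand-in for whatever class you want to refresh, and note that instances created before the reload keep pointing at the old definition:
SomeClass = reload_class(SomeClass)   # rebind the name to the reloaded class
obj = SomeClass()                     # new instances use the new definition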

Python variable assigned by an outside module is accessible for printing but not for assignment in the target module

I have two files, one is in the webroot, and another is a bootstrap located one folder above the web root (this is CGI programming by the way).
The index file in the web root imports the bootstrap, assigns a variable on it, then calls a function to initialize the application. Everything up to here works as expected.
Now, in the bootstrap file I can print the variable, but when I try to assign a value to it an error is thrown. If you take away the assignment statement, no errors are thrown.
I'm really curious how the scoping works in this situation. I can print the variable, but I can't assign to it. This is on Python 3.
index.py
# Import modules
import sys
import cgitb;
# Enable error reporting
cgitb.enable()
#cgitb.enable(display=0, logdir="/tmp")
# Add the application root to the include path
sys.path.append('path')
# Include the bootstrap
import bootstrap
bootstrap.VAR = 'testVar'
bootstrap.initialize()
bootstrap.py
def initialize():
    print('Content-type: text/html\n\n')
    print(VAR)
    VAR = 'h'
    print(VAR)
Thanks.
Edit: The error message
UnboundLocalError: local variable 'VAR' referenced before assignment
args = ("local variable 'VAR' referenced before assignment",)
with_traceback = <built-in method with_traceback of UnboundLocalError object at 0x00C6ACC0>
try this:
def initialize():
    global VAR
    print('Content-type: text/html\n\n')
    print(VAR)
    VAR = 'h'
    print(VAR)
Without 'global VAR', Python treats VAR as a local variable inside initialize and gives you "UnboundLocalError: local variable 'VAR' referenced before assignment".
Don't declare it global; pass it in instead, and return a value if you need a new one, like this:
def initialize(a):
    print('Content-type: text/html\n\n')
    print(a)
    return 'h'
----
import bootstrap
b = bootstrap.initialize('testVar')
