Import a library inside a function for multi-threading - python

I have a library that uses quite a few global variables, that I'd like to use in a multi-threaded application, however what I'd like to know is if I import the library inside a function, will the library's global variables etc. be separate copies, so that they don't corrupt each other?

No. There will only be a single instance of the 'global' variables (presumably defined at the top level of the module).
A module is only ever executed once; importing it a second time simply binds the existing module object (cached in sys.modules) into the importing namespace.
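A minimal sketch of this caching behaviour, using the standard math module (the attribute name shared_flag is made up for the demonstration):

```python
import sys

# A module is created and executed once; later imports reuse the
# cached object from sys.modules.
import math
first = sys.modules["math"]

import math as math_again  # no re-execution, just another binding
assert math_again is first

# Mutating a module attribute is therefore visible through every
# binding, in every thread. (shared_flag is a made-up attribute.)
math.shared_flag = "set by one importer"
print(math_again.shared_flag)  # set by one importer
```

This is exactly why importing the library inside each thread's function gives no isolation: every import resolves to the same module object.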

No. Module-level globals exist once per process, so if those variables are mutated by multiple threads without locking, the behaviour will be unpredictable.
I would refactor your code into a set of objects that removes the use of globals, and possibly also implement locking if you intend to share the same objects.
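A minimal sketch of that refactoring, assuming the globals can be wrapped in a class (Counter here is a hypothetical stand-in for the library's state):

```python
import threading

class Counter:
    """Wraps what was a module-level global behind a lock."""
    def __init__(self):
        self._value = 0                 # previously a module-level global
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:                # serialize every mutation
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

shared = Counter()

def worker():
    for _ in range(1000):
        shared.increment()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared.value)  # 4000
```

Alternatively, give each thread its own Counter instance and skip the lock entirely; the point is that state lives on objects you control, not in a single shared module namespace.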

Call function in python module in temporary context?

I'm currently extending a project that does not implement any classes. I need to call a function from one module that has side effects on global variables in that module. If I simply import the module and call the function, this has side effects on the rest of the program, which I don't want.
Solutions that I thought about so far:
Modify the module so that it's a class: This would break compatibility with the existing code which I want to avoid.
Save the state of the module at the beginning of the method and restore it at the end: This could have side effects because there is multi-threading involved.
Copy the whole module: Probably the best option, but I want to avoid code duplication.
Is there a better option to achieve what I want to do?
This situation sounds like a classic XY problem (https://en.wikipedia.org/wiki/XY_problem).
Assume that there exists a module, and that module has a function, and that function is hard coded to change state that is maintained with module level variables.
In this situation there probably does not exist any sensible way to programmatically avoid the function changing the module level variables, unless the function explicitly supports assigning custom context.
You can create an ad-hoc solution that fixes the problem for a known variable, but a general-purpose solution sounds impractical compared to restructuring the code.
Without knowing more details it is hard to suggest anything specifically, except for restructuring the module.
Some options:
Use classes
Make the function support custom context by providing locals() from elsewhere
Make a clone of function without side effects
Save state of known module level variables, call the function, restore state
Interestingly enough, you can automate the last option if you can be sure that the side effects exist within a known scope; dir(your_module) or vars(your_module) provides programmatic access to the module's names.
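A hedged sketch of that automation. Everything here is made up for the demonstration: noisy stands in for the third-party module, bump for its side-effecting function, and call_with_restored_state is a hypothetical helper. As the question itself notes, this is not thread-safe: another thread could observe or overwrite the module state mid-call.

```python
import copy
import types

# Build a stand-in for the side-effecting module in-memory.
noisy = types.ModuleType("noisy")
noisy.counter = 0

def bump():
    noisy.counter += 1  # the unwanted side effect on module state

noisy.bump = bump

def call_with_restored_state(module, func, *args, **kwargs):
    """Snapshot module-level names, call func, then restore them."""
    saved = {k: copy.deepcopy(v) for k, v in vars(module).items()
             if not k.startswith("__") and not callable(v)}
    try:
        return func(*args, **kwargs)
    finally:
        vars(module).update(saved)      # put the snapshot back

call_with_restored_state(noisy, noisy.bump)
print(noisy.counter)  # 0 again after the restore
```

Note the deepcopy: without it, in-place mutation of a list or dict would leak through the "snapshot" because only the reference was saved.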
Ultimately I think the problem in and of itself is proof that you should consider restructuring. This is coming from someone who is hell bent on the ability to abuse python by doing stuff like walking up the callstack to change variables in parent-of-parent-of-parent-of-parent to force program state. I even made debugger capable of doing that (https://github.com/hirsimaki-markus/SEAPIE)
Yet I still think breaking compatibility and restructuring is the more sensible long-term solution. Of the ad-hoc solutions, I would suggest creating side-effect-free versions of the necessary functions.

How are the contents of the builtins module available in the global namespace without import in Python?

I've been using Python for a good period of time, but I have never found out how built-in functions work. In other words, how are they available without any module being imported to use them? What if I want to add to them (locally)?
This may seem naive, but I haven't really found any answer that comprehensively explains how built-in functions, global variables, etc. are available to us when developing a script.
In a nutshell, where do we include the builtins module?
I have encountered this question. But it gives a partial answer to my question.
The not-implementation-details part of the answer is that the builtins module, or __builtin__ in Python 2, provides access to the built-ins namespace. If you want to modify the built-ins (you usually shouldn't), setting attributes on builtins is how you'd go about it.
The implementation details part of the answer is that Python keeps track of built-ins in multiple ways. For example, each frame object keeps track of the built-in namespace it's using, which may be different from other frames' built-in namespaces. You can access this through a frame's f_builtins attribute. When a LOAD_GLOBAL instruction fails to find a name in the frame's globals, it looks in the frame's builtins. There's also a __builtins__ global variable in most global namespaces, but it's not directly used for built-in variable lookup; instead, it's used to initialize f_builtins in certain situations during frame object creation. There's also a builtins reference in the global PyInterpreterState, which is used as default builtins if there's no current frame object.
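The mechanics above can be observed directly. This sketch checks that the current frame's f_builtins is the builtins module's dict, then shows the LOAD_GLOBAL fallback in action (the name answer is made up; mutating builtins is for demonstration only):

```python
import builtins
import sys

# Each frame records the built-in namespace it resolves names from;
# in ordinary code it is simply the builtins module's dict.
assert sys._getframe().f_builtins is vars(builtins)

# Setting an attribute on builtins makes the name resolvable in any
# function, because LOAD_GLOBAL falls back to the built-ins dict when
# the name is missing from globals(). (Usually a bad idea in real code.)
builtins.answer = 42

def uses_builtin_fallback():
    return answer  # not local, not in globals() -> found in builtins

result = uses_builtin_fallback()
print(result)  # 42
del builtins.answer  # clean up the demonstration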

How do I define a variable across all modules without needing an import?

I have a situation (just for development purposes) where I would like to define a variable globally - not just module-global but everywhere, without requiring an import statement (or the global keyword). I understand this is typically a bad idea, but it's fine for my temporary application.
This seems to work:
import builtins
builtins.__dict__['global_variable_name'] = 123
print(global_variable_name)

Python2.7 - Is writing to an imported variable atomic?

My project has the following structure:
A coordinator script that imports 2 modules and makes a thread for the main function of each.
I have a variable I need to share between the two imported modules. Module 1 writes it, and Module 2 reads it (asynchronously). Since the read and the write are each atomic, it should be thread-safe. In Python, though, global variables are module-scoped, so a plain global-variable approach does not work.
My intention is to add this shared variable to my config module and have both modules import it (it's probably super dirty, but my Python knowledge is limited). I think this would at least let me work around the scoping, but I don't know whether atomicity is maintained when the shared variable is imported.
Thanks!
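A hedged sketch of the config-module approach the question describes. The config module is built in-memory with types.ModuleType so the example is self-contained; the crucial detail is that both sides must use attribute access (config.shared_value), because `from config import shared_value` copies the binding once and never sees later writes:

```python
import threading
import types

# Stand-in for a shared config module.
config = types.ModuleType("config")
config.shared_value = 0

def writer():
    for i in range(5):
        # One attribute store per iteration; in CPython this is a
        # single bytecode operation and effectively atomic under the GIL.
        config.shared_value = i

writer_thread = threading.Thread(target=writer)
writer_thread.start()
writer_thread.join()

# A reader that looks the value up through the module sees the update:
print(config.shared_value)  # 4
```

Atomicity of the store is unchanged by how the module was imported; what importing affects is only which binding the reader observes, which is why attribute access through the module is the safe pattern here.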

python adding variables to imported modules

I wasn't looking at what I had previously written on the line, so I accidentally declared a variable in ipython as:
np.zerosn=10
Surprisingly, this was allowed. I thought that maybe it was because you can use periods in variable names, but that is not the case. So I'm wondering what is actually happening. Is this adding a new variable to the numpy module?
Yes.
In general, (most/many) python objects have dynamic attribute spaces, and you can stick whatever you want onto them whenever you want. And modules are just objects. Their attribute space is essentially the same as their global scope.
Pure python functions are another (perhaps surprising) example of something onto which you can stick arbitrary attributes, though these are not associated with the function's local scope.
Most 'builtin' types (i.e. those implemented in extension modules, rather than those found in the builtins module) and their instances do not have dynamic attribute spaces. Neither do pure-Python classes that define __slots__.
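A self-contained sketch of all three cases, using a throwaway module built with types.ModuleType in place of numpy (the names mod, f and Slotted are made up for the demonstration):

```python
import types

# Modules are plain objects with a writable attribute space:
mod = types.ModuleType("demo")
mod.zerosn = 10          # same mechanism as the accidental `np.zerosn = 10`
print(mod.zerosn)        # 10

# Pure-Python functions accept arbitrary attributes too:
def f():
    pass
f.call_count = 0

# But instances of classes with __slots__ (like many built-in types)
# reject new attributes:
class Slotted:
    __slots__ = ("x",)

try:
    Slotted().y = 1
    slot_allowed = True
except AttributeError:
    slot_allowed = False
print(slot_allowed)      # False
```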
