I would like to use thread local storage across multiple classes.
But threading.local() gives me a new instance of the storage every time I call it. Is there a way to use
threading.local().__setattr__('r_id', uuid.uuid4())
and an equivalent way to somehow access the same value in a completely different class?
threading.local().__getattribute__('r_id')
Here you can find your answer:
https://stackoverflow.com/questions/104983/what-is-thread-local-storage-in-python-and-why-do-i-need-it
Here you can find something interesting as well:
Efficient way to determine whether a particular function is on the stack in Python
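A minimal sketch of the usual pattern (the module and class names below are made up for illustration): create the threading.local() object once, at module level, and have every class refer to that same instance instead of calling threading.local() again.
# shared_context.py  (hypothetical module imported by all the classes)
import threading
import uuid

request_local = threading.local()   # created once; every importer shares this object

class RequestTagger:
    def tag(self):
        # Each thread gets its own r_id attribute on the shared object.
        request_local.r_id = uuid.uuid4()

class RequestReader:
    def current_id(self):
        return getattr(request_local, "r_id", None)

RequestTagger().tag()
print(RequestReader().current_id())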
I have an object that wraps some Active Directory functions which are used quite frequently in my codebase. I have a convenience function to create it, but each call creates a new SSL connection, which is slow and inefficient. In some places I can improve this by creating it once and passing it to functions in a loop, but this is not always convenient.
The class is state-free so it's thread-safe and could be shared within each Django instance. It should maintain its AD connection for at least a few minutes, and ideally not longer than an hour. There are also other non-AD objects I would like to do the same with.
I have used the various cache types, including in-memory caching, but is it appropriate to use these for functional objects? I thought they were only meant for (serializable) data.
Alternatively: is there a Django-suitable pattern for service locators or connection pooling like you often see in Java apps?
Thanks,
Joel
I have found a solution that appears to work well: it uses a Python feature (function attributes) that is similar to a static variable in Java.
def get_ad_service():
    if "ad_service" not in get_ad_service.__dict__:
        logger.debug("Creating AD service")
        get_ad_service.ad_service = CompanyADService(settings.LDAP_SERVER_URL,
                                                     settings.LDAP_USER_DN,
                                                     settings.LDAP_PASSWORD)
        logger.debug("Created AD service")
    else:
        logger.debug("Returning existing AD service")
    return get_ad_service.ad_service
My code already calls this function to get an instance of the AD service so I don't have to do anything further to make it persistent.
I found this and similar solutions here: What is the Python equivalent of static variables inside a function?
Happy to hear alternatives :)
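One possible alternative, sketched under the assumption that you're on Python 3 and keeping the same CompanyADService and settings names as above: functools.lru_cache can memoize the factory so the service is built once and reused on every later call.
import functools

@functools.lru_cache(maxsize=None)
def get_ad_service():
    # The body runs only on the first call; later calls return the cached instance.
    return CompanyADService(settings.LDAP_SERVER_URL,
                            settings.LDAP_USER_DN,
                            settings.LDAP_PASSWORD)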
I am trying to reduce SQLite3 DB lookups in Python. The system I am implementing this on has only 1 GB of RAM. I want to store the current DB values somewhere from where I can retrieve them without consulting the DB again and again. One thing to keep in mind is that the trigger point of each of my Python scripts (processes) is different, and there is no master script; in other words, I am not controlling all of my scripts from one point.
What I have already considered:
I don't want to save/retrieve the data via a file, because I want to avoid read/write operations. In a nutshell, I don't want to go through a file at all (so a plain no to the pickle and shelve Python modules).
I also cannot use in-memory cache modules like memcached or Beaker, both because of the memory-size limitation and because these modules are intended for server-side development, whereas I am working on standalone scripts (an IoT device).
I cannot use singleton classes because of the limitations of namespaces and scope. As soon as the scope of one script ends, the singleton instance vanishes, so I cannot keep one instance alive across all of my Python scripts. Static variables and static methods don't help either, because the instance does not stick around: everything is volatile and goes back to its initialized value instead of the current DB values the next time I import the singleton class in one of my other scripts.
Because the trigger point of each of my Python scripts is different, global variables are also ruled out. Global variables have to be initialized with some value, whereas I want them to hold the current DB values.
I also cannot do memory segmentation, as Python does not allow me to do so.
What else can I do?
Is there any Python library, or any other language's library, that lets me keep the current DB values so that, instead of looking them up in the SQLite3 DB, I can get them from there without doing any read/write operation? (By read/write operation I mean loading from the hard drive or SD card.)
Thanks in advance; any help from you is highly appreciated.
I want to export a function via RPC, so I am using Pyro4 for Python. This works so far. Now I want that function to also work on data that belongs to the RPC server. Is this possible? If so, how?
#!/usr/bin/env python3
import Pyro4
import myrpcstuff

listIwantToWorkWith = ["apples", "bananas", "oranges"]

rpcthing = myrpcstuff.myrpcFunction()
daemon = Pyro4.Daemon()
uri = daemon.register(rpcthing)
daemon.requestLoop()
What do I have to write in myrpcstuff.myrpcFunction() to access listIwantToWorkWith, or do I have to mark the list global?
This isn't a Pyro-specific question; it is a general Python question about how to share data between functions or modules.
Pass the data you want to work on to the objects that need access to it. You can do this via parameters, or by creating a custom class and passing it the data via its __init__, as in the sketch below. This is all basic Python.
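For instance, a minimal sketch of that idea (the exposed class and its method names are invented for illustration, not part of the question's code):
#!/usr/bin/env python3
import Pyro4

@Pyro4.expose
class FruitService:
    def __init__(self, fruits):
        # The server-side data lives as instance state.
        self._fruits = fruits

    def all_fruits(self):
        return list(self._fruits)

    def add_fruit(self, name):
        self._fruits.append(name)
        return len(self._fruits)

listIwantToWorkWith = ["apples", "bananas", "oranges"]

daemon = Pyro4.Daemon()
uri = daemon.register(FruitService(listIwantToWorkWith))
print("Server ready, URI:", uri)
daemon.requestLoop()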
If you want to stay within the Pyro realm, though, have a look at the examples that come with it to see how you can do certain things.
So, I have two objects that represent a user's settings and a user's data. What I'd like to do is have some kind of monitor running that saves the settings/data whenever I change them in my code. I could surely just include a function call after every change, but, as an interesting concept, I was wondering whether such an idea is possible.
The first solution that comes to mind is spinning off a thread that makes a copy of the objects and then loops on a timer, each time checking whether the objects differ from the copy.
Would such a plan be feasible? Is there a better way? The data object could theoretically be kind of large because it stores some cached images and lots of text; would I be causing noticeable performance issues by doing this?
There is no way in Python to monitor changes to a simple variable. However, you can do exactly what you want with the member variables of a class instance.
Look up "property descriptors".
Here's a nice explanation: http://nbviewer.ipython.org/urls/gist.github.com/ChrisBeaumont/5758381/raw/descriptor_writeup.ipynb
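As a rough sketch of the idea (the class name and the save hook below are hypothetical), a plain property whose setter triggers the save looks like this:
class UserSettings:
    def __init__(self, theme, save_callback):
        self._save = save_callback
        self._theme = theme

    @property
    def theme(self):
        return self._theme

    @theme.setter
    def theme(self, value):
        self._theme = value
        # Persist automatically every time the attribute is assigned.
        self._save(self)

settings = UserSettings("dark", save_callback=lambda s: print("saving theme:", s.theme))
settings.theme = "light"   # prints "saving theme: light"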
Let's say that I have a Python module to control a videoconference system. In that module I have some global variables and functions to control the states of the videoconference, the calls, a phone book, etc.
To start the control system, the module self-executes a function to initialize the videoconference (Ethernet connection, polling states and so on).
Now, if I need to start controlling a second videoconference system, I'm not sure how to approach the problem. I thought about making the videoconference module a class and creating two instances (one for each videoconference system) and then initializing both. The problem is that I don't really need two instances of a videoconference class, since I won't do anything with those objects: I only need to initialize the systems, and after that I don't need to call them or keep them around for anything else.
Example code:
Videoconference.py
class Videoconference:
    def __init__(self):
        self.state = 0
        # Initialization code
Main.py
from Videoconference import Videoconference
vidC1 = Videoconference()
vidC2 = Videoconference()
# vidC1 and vidC2 will never be used again
So, the question is: should I convert the videoconference module to a class and create instances (as in the example), even if I'm not going to use them for anything apart from the initialization process? Or is there another solution that doesn't involve creating a class?
Perhaps this is a matter of preference, but I think having a class in the above case is the safer bet. Often I'll write a function and, when it gets too complicated, think that I should have created a class (and often do so), but I've never created a class that was too simple and thought, "this is too easy, why didn't I just write a function?"
Even if you have one object instead of two, it often helps readability to create a class. For example:
vid = VideoConference()
# vid.initialize_old_system() # Suppose you have an old system that you no longer use
# But want to keep its method for reference
vid.initialize_new_system()
vid.view_call_history(since=yesterday)
This sounds like the perfect use case for a VideoConferenceSystem object. You say you have globals (ew!) that govern state (yuck!) and functions that you call for control.
Sounds to me like you've got the chance to convert all of that into an object whose attributes hold state and whose methods mutate it. It sounds like you should be refactoring more than just the initialization code, so those vidC1 and vidC2 objects are useful; see the sketch below.
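A rough sketch of what that refactoring might look like (every attribute and method name here is invented for illustration):
class VideoConferenceSystem:
    def __init__(self, host):
        # State that previously lived in module-level globals.
        self.host = host
        self.state = 0
        self.phone_book = {}
        self._initialize()

    def _initialize(self):
        # Ethernet connection, polling setup, etc. would go here.
        self.state = 1

    def call(self, number):
        if self.state != 1:
            raise RuntimeError("system not initialized")
        print(f"{self.host}: calling {number}")

systems = [VideoConferenceSystem("10.0.0.1"), VideoConferenceSystem("10.0.0.2")]
systems[0].call("555-0100")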
I think you're approaching this problem the right way in your example. In this way, you can have multiple video conferences, each of which may have different attribute states (e.g. vidC1.conference_duration, etc.).