Static Objects in Python

I have a complex set of classes that I call through a REST API. When I make calls through the REST interface, the class objects are created, and they either persist or die at the end of the call. I want these objects to stay in memory so I don't have to create and destroy them every time I make a call. Any suggestions on how I can do this? A friend suggested something like a static class, but I can't see how to achieve that in Python.
Any help will be appreciated.

Without knowing more about how your server is implemented, I suggest using some kind of cache, e.g. memcached. You can use python-memcached to interface with it, or a framework-specific layer such as those for Django, Bottle, Flask et al.
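If the objects only need to survive within a single long-running server process, the simplest version of the "static" idea is a lazily built module-level singleton. This is a sketch, not code from the question; the class and function names are illustrative:

```python
class HeavyObject:
    """Stand-in for the poster's expensive-to-construct class."""
    def __init__(self):
        self.ready = True

_instance = None

def get_heavy_object():
    """Create the object once per process and reuse it on later calls."""
    global _instance
    if _instance is None:
        _instance = HeavyObject()
    return _instance
```

This only helps if the server keeps its worker processes alive between requests; with process-per-request hosting, an external cache such as memcached is the way to go.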

Related

The correct way to create a new instance using pythoncom and force early binding

Spent a little too much time trying to figure this out by myself...
I'm working with an FEA app called Simcenter Femap. My program needs to create N new instances of it, after getting some data from the base instance, for some asyncio fun. I can't even start on the asyncio part because I can't force early binding on the new instances.
What is working for me at this point:
Created a makepy wrapper, called it PyFemap as the Femap help suggests, and imported it.
Connected to a running instance:
femap_object = pythoncom.connect('femap.model')
feAppBaseInstance = PyFemap.model(femap_object)
Every method of every Femap object works perfectly fine after this.
I am able to create instances using DispatchEx('femap.model') and invoke methods that don't require data conversion.
But for the rest of the methods to work, I need to force early binding on these instances through the already existing wrapper (as I see it).
"Python Programming on Win32" suggests using gencache.EnsureModule to create a wrapper and link it to the created instance. But when I try to do that through the type library's CLSID, I get an error that it isn't registered. Is there really no way to do it with the wrapper I already created?
I also tried doing all of this with comtypes. Some parts work better for me that way, some worse, but the end result is the same. If I may, I'd like to ask how to do it with comtypes too.
Sorry if I'm missing something really obvious.
I recommend using pythoncom.New(...) instead of .connect(...).
I'll post the solution since I solved the issue.
It is actually really obvious. I ended up using pythoncom.New(...) for multiple instances, but I think other methods would work just as well if you only need one. The issue was that I didn't attach the wrapper head class (model) to these new instances, which was pretty silly of me.
To create a new instance:
    femap_object = pythoncom.new('femap.model')
To assign a win32 wrapper (PyFemap) to it:
    new_instance = PyFemap.model(femap_object)

Using an inherited Flask class: how to type it

OK, so first of all, this is not my code; I'm simply maintaining it. It's a jukebox, written in Python with Flask, and the main Flask app is actually an instance of an inherited Flask class.
This Jukebox(Flask) class is declared in __init__.py, but there is more code elsewhere that uses the app, especially to access some shared values (through Flask.current_app, though I've seen that it might be better to use Flask.g).
I simply want to know whether there is a way to type Flask.current_app so that my IDE knows it's a Jukebox object, which would make it easier to work with.
If you know anything about that, it'd be wonderful.
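One common approach (a sketch, not an answer from this thread) is a small typed accessor built on typing.cast, which does nothing at runtime but tells the IDE and type checker what current_app really is. The Jukebox class below is a minimal stand-in for the project's real one:

```python
from typing import cast

from flask import Flask, current_app

class Jukebox(Flask):
    """Minimal stand-in for the project's inherited Flask class."""
    shared_value = "current playlist"

def get_jukebox() -> Jukebox:
    # cast() is a no-op at runtime; it only tells the IDE/type checker
    # that current_app is really the Jukebox created in __init__.py.
    return cast(Jukebox, current_app)
```

Code elsewhere then calls get_jukebox() instead of touching Flask.current_app directly and gets completion on Jukebox attributes. Note that current_app is actually a proxy; if you ever need the real object rather than the proxy, Flask's documented current_app._get_current_object() unwraps it.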

How to keep a shared functional object in memory in Django?

I have an object that wraps some Active Directory functions which are used quite frequently in my codebase. I have a convenience function to create it, but each call creates a new SSL connection, which is slow and inefficient. In some places I can work around this by passing the object into functions in a loop, but that is not always convenient.
The class is state-free, so it's thread-safe and could be shared within each Django instance. It should maintain its AD connection for at least a few minutes, and ideally no longer than an hour. There are also other, non-AD objects I would like to treat the same way.
I have used the various cache types, including in-memory; is it appropriate to use these for functional objects? I thought they were only meant for (serializable) data.
Alternatively: is there a Django-suitable pattern for service locators or connection pooling, like you often see in Java apps?
Thanks,
Joel
I have found a solution that appears to work well: a Python feature similar to a static variable in Java.
def get_ad_service():
    if "ad_service" not in get_ad_service.__dict__:
        logger.debug("Creating AD service")
        get_ad_service.ad_service = CompanyADService(settings.LDAP_SERVER_URL,
                                                     settings.LDAP_USER_DN,
                                                     settings.LDAP_PASSWORD)
        logger.debug("Created AD service")
    else:
        logger.debug("Returning existing AD service")
    return get_ad_service.ad_service
My code already calls this function to get an instance of the AD service so I don't have to do anything further to make it persistent.
I found this and similar solutions here: What is the Python equivalent of static variables inside a function?
Happy to hear alternatives :)
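A tidier spelling of the same "static variable" trick is to memoize the factory with functools.lru_cache. The service class below is a stand-in for CompanyADService, with a construction counter added to make the behaviour visible:

```python
import functools

class ADService:
    """Stand-in for CompanyADService; counts constructions for clarity."""
    constructed = 0

    def __init__(self):
        ADService.constructed += 1

@functools.lru_cache(maxsize=None)
def get_ad_service():
    # The body runs once per process; later calls return the cached instance.
    return ADService()
```

On Python 3.9+, functools.cache is a shorthand for lru_cache(maxsize=None). Note that, like the function-attribute version, this keeps the instance for the life of the process rather than expiring it after an hour.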

wx.DirDialog and App Store Sandboxing

As I understand it, with the sandboxing Apple added you can no longer write outside your sandbox, but using NSOpenPanel you can ask the user to specify a directory and then write there. For example, there is this wrapper to simplify things: https://github.com/leighmcculloch/AppSandboxFileAccess
But I have a Python app using wx that I need to extend to support this, and as I understand it wx.DirDialog doesn't offer anything like this. https://wxpython.org/Phoenix/docs/html/wx.DirDialog.html?highlight=dirdialog#wx-dirdialog
Is there some other class for this?
P.S. I'm quite new to wx+Python, so maybe there is some other option, like integrating those Obj-C classes and using them instead of DirDialog? I would like to avoid that if possible, though.

Large(ish) django application architecture

How does one properly structure a larger django website such as to retain testability and maintainability?
In the best django spirit (I hope) we started out by not caring too much about decoupling between different parts of our website. We did separate it into different apps, but those depend rather directly upon each other, through common use of model classes and direct method calls.
This is getting quite entangled. For example, one of our actions/services looks like this:
def do_apply_for_flat(user, flat, bid_amount):
    assert can_apply(user, flat)
    application = Application.objects.create(
        user=user, flat=flat, amount=bid_amount,
        status=Application.STATUS_ACTIVE)
    events.logger.application_added(application)
    mails.send_applicant_application_added(application)
    mails.send_lessor_application_received(application)
    return application
The function does not only perform the actual business process; it also handles event logging and sends mails to the involved users. I don't think there's anything inherently wrong with this approach. Yet it's getting more and more difficult to reason about the code, and even to test the application, as it's getting harder to separate the parts intellectually and programmatically.
So, my question is, how do the big boys structure their applications such that:
Different parts of the application can be tested in isolation
Testing stays fast by only enabling parts that you really need for a specific test
Code coupling is reduced
My take on the problem would be to introduce a centralized signal hub (just a bunch of django signals in a single python file) which the single django apps may publish or subscribe to. The above example function would publish an application_added event, which the mails and events apps would listen to. Then, for efficient testing, I would disconnect the parts I don't need. This also increases decoupling considerably, as services don't need to know about sending mails at all.
But I'm unsure, and thus very interested in the accepted practice for these kinds of problems.
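The hub idea can be sketched without any Django machinery; in the real project, django.dispatch.Signal would play the role of this toy publish/subscribe registry, and all names below are illustrative:

```python
from collections import defaultdict

# event name -> list of handler callables
_subscribers = defaultdict(list)

def subscribe(event, handler):
    """Register a handler for an event; apps do this once at startup."""
    _subscribers[event].append(handler)

def publish(event, **payload):
    """Deliver the payload to every subscriber of the event."""
    for handler in _subscribers[event]:
        handler(**payload)

# The mails app subscribes without the service knowing about it...
notified = []
subscribe("application_added", lambda application: notified.append(application))

# ...and the service publishes without knowing who listens.
publish("application_added", application="flat-42")
```

Disconnecting a part for a test is then just a matter of not subscribing it (or removing its handler from the registry).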
For testing, you should mock your dependencies. The logging and mailing components, for example, should be mocked during unit testing of the views. I would usually use python-mock; this allows your views to be tested independently of the logging and mailing components, and vice versa. Just assert that your views make the right service calls, and mock the return value/side effect of each call.
You should also avoid touching the database in tests. Try to use in-memory objects as much as possible: instead of Application.objects.create(), defer the save() to the caller, so that you can test the services without actually having an Application in the database. Alternatively, patch out the save() method so it won't actually save, but that's much more tedious.
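As a concrete sketch of the mocking advice (the names mirror the question, but the mails class here is a stand-in for the real module):

```python
from unittest import mock

class mails:
    """Stand-in for the project's mails module."""
    @staticmethod
    def send_applicant_application_added(application):
        raise RuntimeError("would send a real mail")

def do_apply_for_flat(application):
    # Business logic elided; only the collaborator call matters here.
    mails.send_applicant_application_added(application)
    return application

# In a test, patch the mail call and assert it was made correctly;
# patch.object restores the original attribute when the block exits.
with mock.patch.object(mails, "send_applicant_application_added") as sent:
    result = do_apply_for_flat("application-1")

sent.assert_called_once_with("application-1")
```

On Python 3, python-mock's API is available in the standard library as unittest.mock, so no extra dependency is needed.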
Transfer some parts of your app to different microservices. This will make some parts of your app focused on doing one or two things right (e.g. event logging, emails). Code coupling is also reduced and different parts of the site can be tested in isolation as well.
The microservice architecture style involves developing a single application as a collection of smaller services that usually communicate via an API.
You might need to use a smaller framework like Flask.
For more information on microservices:
http://martinfowler.com/articles/microservices.html
http://aurelavramescu.blogspot.com/2014/06/user-microservice-python-way.html
First, try to break down your big task into smaller classes. Connect them with ordinary method calls or Django signals.
If you feel that the sub-tasks are independent enough, you can implement them as several Django applications in the same project. See the Django tutorial, which describes relation between applications and projects.
