What's the most elegant way to fetch data from an external API while staying faithful to the Single Responsibility Principle? Where exactly should the call be made?
Assume I've got a POST /foo endpoint which, after being called, should trigger a call to the external API, fetch some data from it, and save it in my local DB.
Should I add the call in the view? Or the Model?
I usually put external API calls into a dedicated services.py module (at the same level as the models.py you're planning to save results into, or in a common app if none of the existing apps are logically related).
Inside that module you can define a class called something like MyExternalService and give it all the methods you need for fetching, posting, removing, etc., just as you would with a DRF API view.
Also remember to handle exceptions properly (timeouts, connection errors, error response codes) by defining custom exception classes.
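As a minimal sketch of that layout, assuming the requests library and a made-up endpoint URL (both are illustrations, not anything Django prescribes):

    # services.py -- sits next to models.py
    import requests

    class ExternalServiceError(Exception):
        """Base exception for failures of the external API wrapper."""

    class ExternalServiceTimeout(ExternalServiceError):
        pass

    class MyExternalService:
        BASE_URL = "https://api.example.com"  # assumed endpoint

        def fetch_foo(self, foo_id):
            try:
                response = requests.get(
                    "%s/foo/%s" % (self.BASE_URL, foo_id), timeout=5)
                response.raise_for_status()  # turn 4xx/5xx into exceptions
            except requests.Timeout:
                raise ExternalServiceTimeout("external API timed out")
            except requests.RequestException as exc:
                raise ExternalServiceError(str(exc))
            return response.json()

Your POST /foo view then just calls MyExternalService().fetch_foo(...) and saves the result through the model layer, so each piece keeps a single responsibility.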
I'm designing a Flask app that graphs some weather data for several cities. It makes sense to me to use a "City" class that handles the fetching and parsing of the data every time the page is loaded. However, what I'm not sure about is how Flask would handle these instances. Is Flask "smart" enough to know to release the memory for these instances after the page is served? Or will it just gradually consume more and more memory?
Alternatively, would I just be able to create a single global class instance for each city OUTSIDE of the "@app.route" functions that I could use whenever a page is requested?
The deployment server will be Windows IIS using FastCGI, in case that matters at all.
Flask is "just" a framework. It is still executed and managed by the normal Python interpreter, so the question of "how Flask would handle these instances" doesn't really arise: once a request is served and nothing references the instances any more, they are garbage-collected just as in any other Python program.
Define classes and use their instances as you would in any other Python project or snippet; however, it might be worth thinking about where to define them.
Defining the class inside a route makes little sense, since the class would be redefined every time a request is received, but the mechanics are otherwise exactly the same.
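A minimal sketch of the module-level approach (the fetch method and the city names are placeholders for illustration):

    from flask import Flask, jsonify

    app = Flask(__name__)

    class City(object):
        def __init__(self, name):
            self.name = name

        def fetch_weather(self):
            # Fetch and parse this city's data; details elided.
            return {"city": self.name}

    # Built once at import time and reused for every request.
    CITIES = {name: City(name) for name in ("London", "Oslo", "Tokyo")}

    @app.route("/weather/<name>")
    def weather(name):
        return jsonify(CITIES[name].fetch_weather())

Instances created inside a view function, by contrast, become unreachable when the request ends and are reclaimed by the interpreter as usual.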
What we'd like to do is iterate over all shared forms on a Google account (potentially hundreds) and, using Python, call the getEditResponseUrl() function for each FormResponse. That function seems to be accessible only via Google Apps Script, which we could presumably call from Python using execute(), per https://developers.google.com/apps-script/api/quickstart/python.
However, it seems the Apps Script with this behavior would need to be attached to each shared form first, or else the function in question won't exist as far as the client library is concerned. Is there a way to call, for example, getEditResponseUrl() and the other functions belonging to the FormResponse and FormApp objects from some Python client library or HTTP API, rather than having them available exclusively in the context of a Google Apps Script explicitly set up on the form beforehand? Thanks!
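For reference, calling into Apps Script from Python along the lines of the linked quickstart looks roughly like this sketch (SCRIPT_ID and getEditUrlsForForm are placeholders; the script project must already exist and define that function):

    from googleapiclient.discovery import build

    SCRIPT_ID = "..."  # script/deployment ID, placeholder

    def get_edit_urls(creds, form_id):
        service = build("script", "v1", credentials=creds)
        request = {
            "function": "getEditUrlsForForm",  # hypothetical Apps Script function
            "parameters": [form_id],
        }
        response = service.scripts().run(
            scriptId=SCRIPT_ID, body=request).execute()
        if "error" in response:
            raise RuntimeError(response["error"])
        return response["response"].get("result")

Note that this only works because the function lives in an Apps Script project, which is exactly the limitation the question asks how to avoid.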
I have some code (a celery task) which makes a call via urllib to a Django view. The code for the task and the view are both part of the same Django project.
I'm testing the task, and need it to be able to contact the view and get data back from it during the test, so I'm using a LiveServerTestCase. In theory, I set up the database in the setUp function of my test case (I add a list of product instances) and then call the task; it does some stuff and then calls the Django view through urllib (hitting the dev server set up by the LiveServerTestCase), getting a JSON list of product instances back.
In practice, though, it looks like the products I add in setUp aren't visible to the view when it's called. It looks like the test case code is using one database (test_<my_database_name>) and the view running on the dev server is accessing another (the urllib call successfully contacts the view but can't find the product I've asked for).
Any ideas why this may be the case?
Might be relevant: we're testing on a MySQL db instead of the default sqlite.
Heading off two questions (but interested in comments if you think we're doing this wrong):
I know it seems weird that the task accesses the view using urllib. We do this because the task usually calls one of a series of third-party APIs to get info about a product, and if it cannot reach them, it falls back to our own Django database of products. The code that makes the urllib call is generic and agnostic of which case we're dealing with.
These are integration tests, so we'd prefer to actually make the urllib call rather than mock it out.
The celery workers are still feeding off the dev database even though the test server brings up other databases, because that is what they were told to do in the settings file.
One fix would be to make a separate settings_test.py file that specifies the test database name, and to bring up celery workers from setUp using subprocess.check_output, consuming from a special queue used only for testing. Those workers would then read from the test database rather than the dev database.
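A rough sketch of that arrangement; the project name myproject, the queue name, and the worker name are placeholders, and "celery multi" detaches the workers, so check_output returns promptly:

    # settings_test.py -- layered on top of the normal settings
    from settings import *  # noqa

    DATABASES["default"]["NAME"] = "test_" + DATABASES["default"]["NAME"]

    # tests.py
    import os
    import subprocess
    from django.test import LiveServerTestCase

    class TaskIntegrationTest(LiveServerTestCase):
        def setUp(self):
            super(TaskIntegrationTest, self).setUp()
            # ... create product instances in the test database here ...
            env = dict(os.environ, DJANGO_SETTINGS_MODULE="settings_test")
            subprocess.check_output(
                ["celery", "multi", "start", "testworker",
                 "-A", "myproject", "-Q", "testing"],
                env=env)

        def tearDown(self):
            subprocess.check_output(
                ["celery", "multi", "stopwait", "testworker"])
            super(TaskIntegrationTest, self).tearDown()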
I am considering a multi-tenant environment where I can have each tenant access a different subdomain and then potentially allocate a namespace based on that domain.
For instance,
tenantA.mydomain.com
tenantB.mydomain.com
Then I would want to have namespace tenantA for all tenantA data and tenantB for all tenantB data.
From the docs, it sounds like I would accomplish this in my appengine_config.py file and do something like this:
    from google.appengine.api import namespace_manager

    def namespace_manager_default_namespace_for_request():
        this_namespace = get_namespace_from_subdomain()
        return this_namespace
First question, is this a reasonable/good approach?
Second question: it's unclear what variables are available in this scope. Any pointers on how to implement the get_namespace_from_subdomain() function?
Finally, if there were some functionality I wanted to provide that crosses namespaces, could this still be achieved with a global namespace? For instance, say a user has an account with multiple tenants and I want to give a view of his activity across all tenants.
It is possible and reasonable to use multi-tenancy in your app based on subdomains, though in my experience you should also allow overriding the namespace with a URL parameter.
e.g.
tenantB.mydomain.com/?tenant=tenantA => namespace=tenantA
This will make your life a lot easier and will enable you to test your newest App Engine versions on *.appspot.com before moving them to production (especially if you are planning on SSL access).
Once you set the namespace, only the entities under that namespace will be available; you can change the namespace via code whenever you want, and the scope doesn't matter.
For the subdomain, you can parse it out of one of the client's request headers (the Host header).
You can write whatever you want to the global namespace and access it whenever you want via code. For the scenario you described, you would save the user activity in the global namespace.
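Putting the subdomain parsing and the URL-parameter override together, appengine_config.py might look roughly like this sketch (the Host-header parsing and the tenant query parameter are assumptions about your setup; only the hook name itself comes from the App Engine docs):

    import os
    import urlparse

    def _get_namespace_from_subdomain():
        # tenantA.mydomain.com -> "tenantA"
        host = os.environ.get("HTTP_HOST", "")
        return host.split(".")[0] if host else ""

    def namespace_manager_default_namespace_for_request():
        # Allow ?tenant=tenantA to override the subdomain, which helps
        # when testing on *.appspot.com.
        query = urlparse.parse_qs(os.environ.get("QUERY_STRING", ""))
        if "tenant" in query:
            return query["tenant"][0]
        return _get_namespace_from_subdomain()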
Also, take a look at the official Python example for using namespaces from the GAE team:
https://github.com/GoogleCloudPlatform/appengine-guestbook-namespaces-python
It gives you everything you need to get started.
I don't think this is the best approach to take. The problem is that you're tightly coupling your application to the infrastructure. Domain and subdomain are just an easier way to access a machine bound to a specific IP address; I would classify domain names as part of the infrastructure, not really part of the application. If you go with the approach above, you're introducing knowledge about the infrastructure into the application and thus making your application less flexible. What happens if you, for some reason, decide sometime in the future that your client A should use clientA.mydomain.com? Or keyClientA.myotherdomain.com? Or you want to allow client A to use their own domain name, i.e. support.clientA.com? If your application does not know anything about domains and infrastructure setup, then it's a lot easier to just reconfigure the DNS server and get that portability.
If I had this scenario, I would have some kind of mapping of URLs to a tenant id, and then use that tenant id as a namespace name. This way you can easily map different URLs to a tenant id. You can even map multiple URLs to the same tenant id and expose the same application on multiple URLs. Your mapping can be stored in a simple config file or even in the AppEngine datastore itself within global namespace. If the config is stored in the AppEngine datastore, you can have your admin section of the application (or even another AppEngine module) which you can use to update config in real time.
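A sketch of that mapping approach, with a hypothetical hostname table (the same lookup could just as well read from a config file or a datastore kind in the global namespace):

    import os

    # Hostname -> tenant id; multiple hostnames may map to one tenant.
    TENANT_MAP = {
        "tenanta.mydomain.com": "tenantA",
        "support.clienta.com": "tenantA",
        "tenantb.mydomain.com": "tenantB",
    }

    def namespace_manager_default_namespace_for_request():
        host = os.environ.get("HTTP_HOST", "").lower()
        # Unknown hosts fall back to the empty (global) namespace.
        return TENANT_MAP.get(host, "")

Because the application only ever sees tenant ids, pointing a new domain at an existing tenant is a pure DNS-plus-config change.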
I'm in the middle of trying to create a Django website to access data in a MySQL database. The intention is also to create a UI in Dojo (JavaScript). I would also like the Django backend to provide web services (RPC for Python functions) to allow remote access to the MySQL database. So, for example, if someone wants to use Perl scripts to access the database (and possibly additional functionality, like calculations based on data in the database), they can do so in their native language (Perl).
Now ideally, the web-services API would be the same for JavaScript as for any other remote service that wants to access these services. I've found that JSON-RPC is a good way to go for this, as JavaScript typically has built-in support for it, in addition to its numerous other benefits. Also, a lot of people seem to prefer SOAP to JSON.
I've seen several ways to do this:
1) Create a unique URI for each function that you would like to access:
https://code.djangoproject.com/wiki/JSONRPCServerMiddleware
2) Create one point of access, and pass the method name in the JSON package. In this particular example an SMD is automatically generated.
https://code.djangoproject.com/wiki/Jsonrpc
The issue with (1) is that if there are many functions to be accessed, there will be many URIs, which does not seem like an elegant solution. The issue with (2) is that each method name has to be compared against a list of all functions, which is not elegant either.
Is there no way that we can take the advantages of (1) and (2) to create an interface such that:
- Only one URI is used as a point of access
- Functions are called directly (without having to be compared against a list of functions)
Any help with this will be really appreciated. Thanks!
What about using a REST API?
One possibility to do the comparisons would be to use a dict like so:
    def func1(someparams):
        # do something
        return True

    def func2(someparams):
        # do something else
        return True

    FUNCS = {'func1': func1,
             'func2': func2}
Then, when you get the API call, you look the method name up in the dict and call the function from there; anything not in the dict gets the 404 handler.
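Sketched as a single Django view over the FUNCS dict above (the error handling is simplified; a full JSON-RPC endpoint would also echo the request id and use the standard error codes):

    import json
    from django.http import Http404, HttpResponse

    def rpc_endpoint(request):
        payload = json.loads(request.body)
        func = FUNCS.get(payload["method"])
        if func is None:
            raise Http404("unknown method: %s" % payload["method"])
        result = func(*payload.get("params", []))
        return HttpResponse(json.dumps({"result": result}),
                            content_type="application/json")

A dict lookup is a direct O(1) dispatch rather than a comparison against a list of functions, so this keeps a single URI while satisfying both requirements above.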
It sounds like what you really want is an RPC server of some kind (SOAP, say, using soaplib) that is written in Python and uses your application's data model, plus whatever other APIs you have constructed to handle the business logic.
So I might implement the web service with soaplib and have it call into the data model and other Python modules as needed. People wanting to access your web application's data would use the SOAP service, while the web application itself would use the underlying data model + APIs directly (your web app could use the SOAP service too, but it would be slower).
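A rough sketch of such a service with soaplib (the service class, method, and product lookup are made up, and the module paths follow soaplib 2.x's documented style; soaplib has since been superseded by spyne, so verify against the version you install):

    import soaplib
    from soaplib.core.service import DefinitionBase, soap
    from soaplib.core.model.primitive import String
    from soaplib.core.server import wsgi

    class ProductService(DefinitionBase):
        @soap(String, _returns=String)
        def get_product(self, sku):
            # Call into the Django data model / business logic here.
            return "product data for %s" % sku

    # Expose the service as a WSGI app alongside the Django site.
    soap_app = soaplib.core.Application([ProductService], "tns")
    wsgi_app = wsgi.Application(soap_app)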