Connection mirroring in Django tests - python

I have a multi-database Django 1.10 setup with JSON fixtures for the connections. For example, my configuration looks like this:
DATABASES = {
    'default': {
        'NAME': 'default',
        ...,
        'TEST': {
            'NAME': 'test_default',
        }
    },
    'second': {
        'NAME': 'second',
        ...,
        'TEST': {
            'NAME': 'test_second',
            'MIRROR': 'default',
        }
    }
}
When Django bootstraps the testing environment, it loads TestCase.fixtures only into the non-mirrored connections (in my case, only into test_default).
When a test case then tries to fetch a model stored on the second connection, it fails with DoesNotExist.
This happens because the fixtures are loaded into the first connection inside a transaction that is never committed, since savepoints are used between test cases.
Consequently, every test that assumes the mirrored connections see the same data as the master connection will fail!
This looks like a problem with Django's test bootstrap algorithm.
It is also possible that I am doing something completely wrong.
Why does Django not load fixtures into the mirrored connections as well?
-- OR --
Why does Django not start the transactions after loading the fixtures?
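For reference, a minimal sketch of the kind of test that hits this. The model and fixture names are hypothetical; multi_db = True is the Django 1.10 way of asking the test runner to set up and wrap every configured database in transactions:

# tests.py -- hypothetical model/fixture names, for illustration only
from django.test import TestCase

from myapp.models import Widget  # assumed model that lives on the 'second' database


class MirroredConnectionTest(TestCase):
    multi_db = True                  # enable all configured databases for this test
    fixtures = ['widgets.json']      # loaded only into the non-mirrored connections

    def test_widget_is_visible_via_mirror(self):
        # The fixture data is written on 'default' inside an uncommitted
        # transaction, so reading through the 'second' (mirror) connection
        # fails with Widget.DoesNotExist, as described above.
        Widget.objects.using('second').get(pk=1)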

Cannot modify a cache value from a Celery task

Description:
I want to have a cached value (let's call it a flag) to know when a Celery task finishes execution.
I have a view that the frontend polls until the flag turns to False.
Code:
settings.py:
...
MEMCACHED_URL = os.getenv('MEMCACHED_URL', None)  # Cache of devel or production
if MEMCACHED_URL:
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': MEMCACHED_URL,
        }
    }
else:
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
            'LOCATION': 'unique-snowflake',
        }
    }
api/views.py:
def a_view(request):
    # Do some stuff
    cache.add(generated_flag_key, True)
    tasks.my_celery_task.apply_async([argument_1, ..., generated_flag_key])
    # Checking here with cache.get(generated_flag_key), the value is True.
    # Do other stuff.
tasks.py:
@shared_task
def my_celery_task(argument_1, ..., flag_cache_key):
    # Do stuff
    cache.set(flag_cache_key, False)  # Checking here with
                                      # cache.get(flag_cache_key), the
                                      # flag_cache_key value is False
views.py:
def get_cached_value(request, cache_key):
    value = cache.get(cache_key)  # This remains True until the cache key
                                  # expires.
Problem:
If I run the task synchronously everything works as expected. When I run the task asynchronously, the cache key stays the same (as expected) and it is correctly passed around through those 3 methods, but the cached value doesn't seem to be updated between the task and the view.
If you run your tasks asynchronously, they execute in different processes, which means that with the LocMemCache backend the task and the view do not share storage (each process has its own in-memory cache).
Following @Linovia's answer and a dive into Django's documentation, I am now using django-redis as a workaround for my case.
The only thing that needs to change is the CACHES settings (and an active Redis server of course!):
settings.py:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/1",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
Now there is a single cache store shared by every process.
django-redis is a well-documented library and one can follow the instructions to make it work.
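With a process-shared backend in place, the original flow works as intended. A minimal sketch of the polling view (the function and key names mirror the question; the JSON response shape is an assumption):

# views.py -- sketch only; the JsonResponse payload shape is an assumption
from django.core.cache import cache
from django.http import JsonResponse


def get_cached_value(request, cache_key):
    # With Redis (or memcached) behind django.core.cache, this now reflects
    # the cache.set(flag_cache_key, False) performed inside the Celery worker.
    value = cache.get(cache_key)
    return JsonResponse({'flag': value})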

How to connect an MS Access table as the backend for a Django form?

I want to use an MS Access table as the backend for a Django form. I found that there is a django-pyodbc package, but I am having trouble connecting through it. I also understand this is not the best option, but it is what I have to work with right now, and it is only going to be used internally. Has anyone successfully connected them and can show an example?
DATABASES = {
    'default': {
        'ENGINE': '',
        'HOST': '',
        'NAME': '',
        'OPTIONS': {
            'host_is_server': True
        },
    }
}
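The question is unanswered here, but one pragmatic route, assuming the Django ORM cannot target Access directly, is to skip DATABASES entirely and read the table with plain pyodbc through the Access ODBC driver, then feed the rows into the form. The file path, table and column names below are placeholders:

# access_backend.py -- a sketch, not a tested integration; the path, table and
# column names are placeholders, and the Microsoft Access ODBC driver must be installed.
import pyodbc

ACCESS_CONN_STR = (
    r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
    r'DBQ=C:\data\internal.accdb;'
)


def fetch_choices():
    """Return (value, label) pairs suitable for a Django form ChoiceField."""
    with pyodbc.connect(ACCESS_CONN_STR) as conn:
        cursor = conn.cursor()
        cursor.execute('SELECT id, name FROM SomeTable')
        return [(row.id, row.name) for row in cursor.fetchall()]

A form could then be populated with something like forms.ChoiceField(choices=fetch_choices()).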

How can I make my Django app use SQLite on the development server but not on the production server

I am using PostgreSQL on my production server, not that it should matter to the question.
I hear there is an easy way to set things up so that, even though both environments pull from the same repo, each uses the correct database for the environment it is in.
The simplest approach would be:
if DEBUG:
    # My debug config
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            ...
        }
    }
else:
    # My production config
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            ...
        }
    }
In most of the projects at my work we use an even cleaner approach. We don't have a settings.py; instead, settings is a package with several modules. It looks like this:
# proj/app/settings/__init__.py
from .common import *          # proj/app/settings/common.py
from .something_else import *  # proj/app/settings/something_else.py
try:
    from .development import *
    # if successful, we're in the development environment;
    # inside development.py you can redefine everything,
    # including DATABASES
except ImportError:
    # we don't have settings/development.py,
    # so we're on production
    assert DEBUG is False
Then proj/app/settings/development.py is only present on the development machines and contains all the dev-related config.
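As an illustration only, such a development.py might look like this (an assumption: BASE_DIR and the rest of the base config are defined in common.py, and the file stays out of the production deploy):

# proj/app/settings/development.py -- dev-only overrides; a sketch
import os

from .common import BASE_DIR  # assumes common.py defines BASE_DIR

DEBUG = True

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
    }
}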
Unless you want to run with DEBUG enabled in production, this solution not only gives you the correct DATABASES setting, it also protects you from accidentally deploying a DEBUG-enabled project to production:
if DEBUG:
    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.sqlite3',
            'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
        }
    }
else:
    DATABASES = {
        # your production DB settings
    }

Mongoengine inconsistent behaviour: aggregate() returns a Cursor object on Arch Linux but a string on Ubuntu

I have been using mongoengine to handle MongoDB along with BottlePy.
Recently I used one of its aggregation APIs.
data.postsByYear = Post._get_collection().aggregate([
    {
        '$match': {
            'isDraft': False
        }
    },
    {
        '$sort': {
            'published': -1
        }
    },
    {
        '$group': {
            '_id': {'year': {'$year': '$published'}},
            'posts': {'$push': {'title': '$title', 'id': '$_id'}}
        },
    },
    {
        '$sort': {
            '_id': -1
        }
    }
])
On my Arch Linux machine with Python 2.7, the output was a cursor object, just like the docs describe:
In the mongo shell, if the cursor returned from the db.collection.aggregate() is not assigned to a variable using the var keyword, then the mongo shell automatically iterates the cursor up to 20 times. See Cursors for cursor behavior in the mongo shell and Iterate a Cursor in the mongo Shell for handling cursors in the mongo shell.
But on my Ubuntu EC2 server on Amazon, the response of the function is a string. I am not sure whether this is a pymongo bug or a mongoengine bug, so I did not post it as an issue.
I just want to make sure it is not a mistake of mine.
Also, is there a workaround for this?
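One likely cause (an assumption on my part, not confirmed in the thread) is a different pymongo version on the two machines: before pymongo 3.0, Collection.aggregate() returned a plain dict with a 'result' key, while newer versions return a CommandCursor. Regardless of the cause, a defensive normalization keeps the rest of the code independent of what aggregate() returns:

# Defensive sketch (assumption: the difference comes from the pymongo version);
# normalize aggregate() output to a plain list of documents either way.
raw = Post._get_collection().aggregate(pipeline)  # 'pipeline' is the stage list shown above

if isinstance(raw, dict):
    # pymongo < 3.0 returns {'ok': 1.0, 'result': [...]}
    data.postsByYear = raw.get('result', [])
else:
    # pymongo >= 3.0 returns a CommandCursor; materialize it once
    data.postsByYear = list(raw)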

How to do a custom insert inside a python-eve app

I have some custom Flask routes in an Eve app that need to communicate with a telnet device and return a result, but I also want to pre-populate some resources with data retrieved from this telnet device, like so:
@app.route("/get_vlan_description", methods=['POST'])
def get_vlan_description():
    switch = prepare_switch(request)
    result = dispatch_switch_command(switch, 'get_vlan_description')
    # TODO: populate vlans resource with result data and return status
My settings.py looks like this:
SERVER_NAME = '127.0.0.1:5000'
DOMAIN = {
    'vlans': {
        'schema': {  # field definitions go under 'schema' in Eve's DOMAIN
            'id': {
                'type': 'integer',
                'required': True,
                'unique': True
            },
            'subnet': {
                'type': 'string',
                'required': True
            },
            'description': {
                'type': 'boolean',
                'default': False
            }
        }
    }
}
I'm having trouble finding docs or source code for how to access a mongo resource directly and insert this data.
Have you looked into the on_insert hook? From the documentation:
When documents are about to be stored in the database, both on_insert(resource, documents) and on_insert_<resource>(documents) events are raised. Callback functions could hook into these events to arbitrarily add new fields, or edit existing ones. on_insert is raised on every resource being updated while on_insert_<resource> is raised when the <resource> endpoint has been hit with a POST request. In both circumstances, the event will be raised only if at least one document passed validation and is going to be inserted. documents is a list and only contains documents ready for insertion (payload documents that did not pass validation are not included).
So, if I get what you want to achieve, you could have something like this:
def telnet_service(resource, documents):
    """
    Fetch data from the telnet device;
    update 'documents' accordingly.
    """
    pass

app = Eve()
app.on_insert += telnet_service

if __name__ == "__main__":
    app.run()
Note that this way you don't have to mess with the database directly as Eve will take care of that.
If you don't want to store the telnet data but only send it back along with the fetched documents, you can hook to on_fetch instead.
Lastly, if you really want to use the data layer directly, you can use app.data.driver as seen in this example snippet.
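For completeness, a rough sketch of that direct approach, assuming the default Mongo data layer (the collection name 'vlans' matches the DOMAIN above; the document contents are placeholders):

# Sketch only: inserting into the 'vlans' collection through Eve's Mongo layer.
# app.data.driver is the underlying PyMongo wrapper when the Mongo data layer is used.
with app.app_context():
    vlans = app.data.driver.db['vlans']
    vlans.insert_one({
        'id': 42,                  # placeholder values
        'subnet': '10.0.42.0/24',
        'description': False,
    })

Note that documents inserted this way bypass Eve's validation and meta fields, which is why the hooks or post_internal are usually preferable.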
Use post_internal.
Usage example:
from run import app
from eve.methods.post import post_internal

payload = {
    "firstname": "Ray",
    "lastname": "LaMontagne",
    "role": ["contributor"]
}

with app.test_request_context():
    x = post_internal('people', payload)
    print(x)
