Connect to Redis from Django view - python

I found a solution for connecting a Django view to Redis:
import redis
cacheDB = redis.StrictRedis()
cacheDB.sadd("new_post", post.id)
But when I add this code to the view, my page loads with a ~2 second delay. Does it create a new connection on every request? Or maybe it's because of my Win7 test platform...
My modules: redis, redis_cache, django_redis.
in settings.py:
CACHES = {
    "default": {
        "BACKEND": "redis_cache.cache.RedisCache",
        "LOCATION": "127.0.0.1:6379:1",
        "OPTIONS": {
            "CLIENT_CLASS": "redis_cache.client.DefaultClient",
        }
    }
}
SESSION_ENGINE = 'redis_sessions.session' # for djcelery
And there is nothing for Redis in INSTALLED_APPS; maybe I missed something here?
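For reference, a minimal sketch of reusing a single client or connection pool instead of constructing a new one inside the view on every request (assuming django_redis, which is listed above, or plain redis-py; the function and variable names are illustrative):
# Option 1: reuse the pooled client that django_redis already manages for the "default" cache.
from django_redis import get_redis_connection

def add_post_view(request, post_id):
    con = get_redis_connection("default")  # borrows from an existing connection pool
    con.sadd("new_post", post_id)

# Option 2: plain redis-py with a module-level pool, created once at import time.
import redis

POOL = redis.ConnectionPool(host="127.0.0.1", port=6379, db=1)

def add_post_view_plain(request, post_id):
    client = redis.StrictRedis(connection_pool=POOL)  # cheap: reuses POOL's sockets
    client.sadd("new_post", post_id)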

Related

Flasgger does not load when hostname has a path

I have a Flask application and I've integrated Flasgger for documentation. When I run my app locally, I can access swagger at http://127.0.0.1:5000/apidocs. But when it's deployed to our dev environment, the hostname is https://services.company.com/my-flask-app. And when I add /apidocs at the end of that URL, swagger does not load.
This is how I've configured swagger:
swagger_config = {
    "headers": [],
    "specs": [
        {
            "endpoint": "APISpecification",
            "route": "/APISpecification",
            "rule_filter": lambda rule: True,  # all in
            "model_filter": lambda tag: True,  # all in
        }
    ],
    "static_url_path": "/flasgger_static",
    "specs_route": "/apidocs/",
    "url_prefix": "/my-flask-app",  # TODO - redo this for INT deployment
}
When I run this locally, I can access Swagger at http://127.0.0.1:5000/my-flask-app/apidocs/#/, but on my dev environment it would presumably end up at https://services.company.com/my-flask-app/my-flask-app/api-docs. When I check the console, Flasgger tries to get the CSS from https://services.company.com/, not https://services.company.com/my-flask-app
Any ideas on how I can resolve this?
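One approach often suggested when the app sits behind a reverse proxy under a path prefix: drop url_prefix from the Flasgger config and let the WSGI layer pick the prefix up from the proxy instead. A sketch, assuming the proxy sets the X-Forwarded-Prefix header to /my-flask-app (if your proxy does not send that header, this alone will not fix it):
from flask import Flask
from flasgger import Swagger
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)

# Trust the proxy's forwarded headers, including X-Forwarded-Prefix, so that
# url_for() and the Flasgger static assets resolve under /my-flask-app.
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1)

swagger_config = {
    "headers": [],
    "specs": [
        {
            "endpoint": "APISpecification",
            "route": "/APISpecification",
            "rule_filter": lambda rule: True,
            "model_filter": lambda tag: True,
        }
    ],
    "static_url_path": "/flasgger_static",
    "specs_route": "/apidocs/",
    # no "url_prefix": the proxy supplies the /my-flask-app prefix
}
swagger = Swagger(app, config=swagger_config)
The intent is that the local URL stays http://127.0.0.1:5000/apidocs/ while the deployed one becomes https://services.company.com/my-flask-app/apidocs/, with the CSS requested under the prefix as well.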

hosting flask_restplus Swagger UI on Herokuapp says "No API Definition provided."

I'm trying to host the Swagger UI of Flask-RESTPlus on a Heroku server. It builds successfully, and the Heroku logs also say "Build succeeded".
But when I check the actual deployment, there's just a message on the page saying
No API definition provided.
By the way, the Swagger UI loads successfully in the browser when run locally.
Following is a sample code snippet for the Swagger UI:
from flask import Flask
from flask_restplus import Resource, Api
import os

app = Flask(__name__)
api = Api(app)

@api.route('/hello')
class HelloWorld(Resource):
    def get(self):
        return {'hello': 'world'}

if __name__ == '__main__':
    port = int(os.environ.get("PORT", 5000))
    app.run(host="0.0.0.0", port=port, debug=True)
So what am I doing wrong here? Is there any way to host a simple, minimal flask_restplus Swagger UI on Heroku? Any help is appreciated, thanks.
EDIT
Following is the content of the swagger.json
{
    "swagger": "2.0",
    "basePath": "/",
    "paths": {
        "/hello": {
            "get": {
                "responses": {
                    "200": {
                        "description": "Success"
                    }
                },
                "operationId": "get_hello_world",
                "tags": [
                    "default"
                ]
            }
        }
    },
    "info": {
        "title": "API",
        "version": "1.0"
    },
    "produces": [
        "application/json"
    ],
    "consumes": [
        "application/json"
    ],
    "tags": [
        {
            "name": "default",
            "description": "Default namespace"
        }
    ],
    "responses": {
        "ParseError": {
            "description": "When a mask can't be parsed"
        },
        "MaskError": {
            "description": "When any error occurs on mask"
        }
    }
}
Also if it helps, this is what's inside the Procfile
web: python app.py
Posting what worked for me, just in case someone has the same concern in the future.
I changed the Procfile from
web: python app.py
to
web: gunicorn app:app
and then the Swagger UI front page also started showing up on Heroku. Previously the endpoints were still accessible, but the front page, i.e. the Swagger UI page, wasn't showing up. Making this change got it working.
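For completeness: gunicorn picks up Heroku's $PORT environment variable on its own, so web: gunicorn app:app is sufficient; an equivalent, more explicit Procfile line (purely illustrative) would be
web: gunicorn app:app --bind 0.0.0.0:$PORT
Either way, the if __name__ == '__main__' block is never executed under gunicorn, so the app.run(...) settings no longer apply on Heroku.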

Cannot seem to modify cache value from celery task

Description:
I want to have a cached value (let's call it a flag) to know when a celery task finishes execution.
I have a view for the frontend to poll this flag until it turns to False.
Code:
settings.py:
...
MEMCACHED_URL = os.getenv('MEMCACHED_URL', None)  # Cache of devel or production

if MEMCACHED_URL:
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
            'LOCATION': MEMCACHED_URL,
        }
    }
else:
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
            'LOCATION': 'unique-snowflake',
        }
    }
api/views.py:
from django.core.cache import cache  # the same cache object used in tasks.py

def a_view(request):
    # Do some stuff
    cache.add(generated_flag_key, True)
    tasks.my_celery_task.apply_async([argument_1, ..., generated_flag_key])
    # Checking here with cache.get(generated_flag_key), the value is True.
    # Do other stuff.
tasks.py:
from celery import shared_task
from django.core.cache import cache

@shared_task
def my_celery_task(argument_1, ..., flag_cache_key):
    # Do stuff
    cache.set(flag_cache_key, False)  # Checking here with
                                      # cache.get(flag_cache_key), the
                                      # flag_cache_key value is False
views.py:
def get_cached_value(request, cache_key):
    value = cache.get(cache_key)  # This remains True until the cache key
                                  # expires.
Problem:
If I run the task synchronously everything works as expected. When I run the task asynchronously, the cache key stays the same (as expected) and it is correctly passed around through those 3 methods, but the cached value doesn't seem to be updated between the task and the view.
If you run your tasks asynchronously, they run in different processes, which means that with the LocMemCache backend the task and the view will not use the same storage (each process has its own memory).
Following @Linovia's answer and a dive into Django's documentation, I am now using django-redis as a workaround for my case.
The only thing that needs to change is the CACHES settings (and an active Redis server of course!):
settings.py:
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": 'redis://127.0.0.1:6379/1',
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        }
    }
}
Now there is a single cache storage shared by all processes.
django-redis is a well-documented library and one can follow the instructions to make it work.
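With a shared backend in place, the original flag flow works unchanged across processes; a minimal sketch using Django's cache API (the key name and timeout are illustrative):
from django.core.cache import cache

FLAG_KEY = "my-task-finished-flag"  # illustrative key name

# In the view: set the flag before dispatching the task.
cache.add(FLAG_KEY, True, timeout=600)

# In the celery task (worker process): flip the flag when the work is done.
cache.set(FLAG_KEY, False, timeout=600)

# In the polling view: both processes now read the same Redis-backed value.
still_running = cache.get(FLAG_KEY)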

How to authenticate a WAMP connection via a ticket in python

I'm trying to connect to a WAMP bus from a different application that has certain roles configured. The roles are authenticated with a static ticket, so I believe that I need to declare what role I want to connect as and what the associated ticket is. I'm writing this in Python and have most of the component set up, but I can't find any documentation about how to do this sort of authentication.
from autobahn.twisted.component import Component, run
COMP = Component(
    realm=u"the-realm-to-connect",
    transports=u"wss://this.is.my.url/topic",
    authentication={
        # This is where I need help
        # u"ticket"?
        # u"authid"?
    }
)
Without the authentication, I'm able to connect to and publish to the WAMP bus when it is running locally on my computer, but that one is configured to allow anonymous users to publish. My production WAMP bus does not allow anonymous users to publish, so I need to authenticate what role this is connecting as. The Autobahn|Python documentation implies that it can be done in Python, but I've only been able to find examples of how to do it in JavaScript/JSON in Crossbar.io's documentation.
The documentation is not very up to date.
With the Component, ticket authentication needs to be configured like this:
from autobahn.twisted.component import Component, run
component = Component(
    realm=u"the-realm-to-connect",
    transports=u"wss://this.is.my.url/topic",
    authentication={
        "ticket": {
            "authid": "username",
            "ticket": "secrettoken"
        }
    },
)
Here are some examples that may be helpful:
https://github.com/crossbario/crossbar-examples/tree/master/authentication
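A minimal way to start the component defined above and confirm that the ticket was accepted (the handler name is illustrative):
from autobahn.twisted.component import run

@component.on_join
def joined(session, details):
    # details.authrole is the role the router assigned after verifying the ticket
    print("joined as authid={} with role={}".format(details.authid, details.authrole))

if __name__ == "__main__":
    run([component])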
I think you need to use the WAMP-Ticket dynamic authentication method.
WAMP-Ticket dynamic authentication is a simple cleartext challenge
scheme. A client connects to a realm under some authid and requests
authmethod = ticket. Crossbar.io will "challenge" the client, asking
for a ticket. The client sends the ticket, and Crossbar.io will in
turn call a user implemented WAMP procedure for the actual
verification of the ticket.
So you need to create an additional component to authenticate users:
from pprint import pprint

from twisted.internet.defer import inlineCallbacks

from autobahn.twisted.wamp import ApplicationSession
from autobahn.wamp.exception import ApplicationError

class AuthenticatorSession(ApplicationSession):

    @inlineCallbacks
    def onJoin(self, details):

        def authenticate(realm, authid, details):
            # PRINCIPALS_DB: mapping of authid -> {'ticket': ..., 'role': ...}, defined elsewhere
            ticket = details['ticket']
            print("WAMP-Ticket dynamic authenticator invoked: realm='{}', authid='{}', ticket='{}'".format(realm, authid, ticket))
            pprint(details)

            if authid in PRINCIPALS_DB:
                if ticket == PRINCIPALS_DB[authid]['ticket']:
                    return PRINCIPALS_DB[authid]['role']
                else:
                    raise ApplicationError("com.example.invalid_ticket", "could not authenticate session - invalid ticket '{}' for principal {}".format(ticket, authid))
            else:
                raise ApplicationError("com.example.no_such_user", "could not authenticate session - no such principal {}".format(authid))

        try:
            yield self.register(authenticate, 'com.example.authenticate')
            print("WAMP-Ticket dynamic authenticator registered!")
        except Exception as e:
            print("Failed to register dynamic authenticator: {0}".format(e))
and add the authentication method in the configuration:
"transports": [
{
"type": "web",
"endpoint": {
"type": "tcp",
"port": 8080
},
"paths": {
"ws": {
"type": "websocket",
"serializers": [
"json"
],
"auth": {
"ticket": {
"type": "dynamic",
"authenticator": "com.example.authenticate"
}
}
}
}
}
]

Django Channels Group.send not working in python console?

I tried Group(groupname).send in the python console and it does not seem to work. Why is this?
This is my consumers.py arrangement:
from channels import Group

def ws_connect(message):
    message.reply_channel.send({"accept": True})
    Group(secure_group).add(message.reply_channel)

def ws_receive(message):
    # Nothing to do here
    Group(secure_group).send({
        "text": "Received {}".format(message.content['text'])
    })

def ws_disconnect(message):
    Group(secure_group).discard(message.reply_channel)
Routing:
from channels.routing import route
from App.consumers import (
    ws_connect,
    ws_receive,
    ws_disconnect,
)

channel_routing = [
    route("websocket.connect", ws_connect),
    route("websocket.receive", ws_receive),
    route("websocket.disconnect", ws_disconnect),
]
Terminal commands:
from channels import Group
#import secure_group here
Group(secure_group).send({ "text": "Tester" })
None of my clients ever received the text.
CHANNEL_LAYERS config:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgiref.inmemory.ChannelLayer",
        "ROUTING": "App.routing.channel_routing",
    },
}
The in-memory channel layer doesn't support cross-process communication, so you can't perform a Group send from another process, such as a separate terminal. Switch to the Redis backend and you will be able to send the message.
From the docs: In-Memory
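A sketch of the corresponding settings change, assuming Channels 1.x with the asgi_redis package installed and a Redis server running locally:
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "asgi_redis.RedisChannelLayer",
        "CONFIG": {
            "hosts": [("localhost", 6379)],
        },
        "ROUTING": "App.routing.channel_routing",
    },
}
With this backend, Group(secure_group).send(...) from a separate terminal or worker process reaches connected clients, because every process talks to the same Redis instance.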
