Redis key management - Python

So I'm working with Redis in a Python app. Any advice on key management? I try to keep all Redis calls in one location, but something seems off about hardcoding keys everywhere. Tips?

I use a set of wrapper classes that handle key generation, something like:
public User Get(string username) {
    return redis.Get("user:" + username);
}
I have an instance of each of these classes globally available, so I just need to call:
Server.Users.Get(username);
That's in .NET of course, but something similar should work in any language. An additional advantage of this approach over using a generic mapping tool is that it provides a good place to put things like connection sharing and indexing.
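A rough Python equivalent of the same idea. The names (`UserStore`, `FakeRedis`) are illustrative, and the in-memory stand-in exists only so the sketch runs without a Redis server; a real app would pass in a shared redis-py client instead:

```python
class UserStore:
    """Wraps key construction so "user:<name>" never leaks out of this class."""

    def __init__(self, redis_client):
        self.redis = redis_client

    def _key(self, username):
        return "user:" + username

    def get(self, username):
        return self.redis.get(self._key(username))

    def set(self, username, data):
        self.redis.set(self._key(username), data)


class FakeRedis:
    """In-memory stand-in for a redis-py client, for demonstration only."""

    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value


users = UserStore(FakeRedis())
users.set("alice", "serialized-user-data")
users.get("alice")  # -> "serialized-user-data"
```

As the answer notes, the wrapper is also a natural home for connection sharing and secondary indexing, since every Redis call for a given entity passes through one class.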

Well, key names are comparable to classes and attributes in your application, so some degree of 'repetition' will happen.
However, if you find you're repeating a lot of code to actually build your keys, then you might want to look at a mapper. In Python there's Redisco.
Or if you need something simpler, in Ruby there's Nest (porting it to Python should be trivial). It allows you to DRY up your references to keys:
users = Nest.new("User")
user = users[1]
# => "User:1"
user[:name]
# => "User:1:name"
redis.set(user[:name], "Foo")
# or even:
user[:name].set("Foo")
user[:name].get
# => "Foo"
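The key-building half of Nest fits in a few lines of Python. This `Nest` class is a hypothetical sketch, not a published package, and it leaves out Nest's forwarding of Redis commands like set/get:

```python
class Nest(str):
    """A string that builds colon-separated key paths, like Ruby's Nest."""

    def __getitem__(self, part):
        # Appending a segment yields a new Nest, so lookups chain.
        return Nest("%s:%s" % (self, part))


users = Nest("User")
user = users[1]   # "User:1"
user["name"]      # "User:1:name"
```

Because `Nest` subclasses `str`, the result can be passed straight to any redis-py call, e.g. `redis.set(user["name"], "Foo")`.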

How to access self.context property using a variable in flask

I want to access a property that exists in self.context using a variable. I have a variable named "prop"; it holds the name of a property that is already set in self.context. I am using the Flask-RESTPlus framework.
prop = 'binding'
If I try to access the property as below, it gives me an error:
Object is not subscriptable
I want to know if there is any way to get the value? Like doing this:
print(self.context[prop])
I have found only one solution, and I don't know whether it's correct. I tried this:
self.context.__getattribute__(prop)
There are two ways to do this, the simplest is using getattr:
getattr(self.context, prop)
This function internally calls __getattribute__ so it's the same as your code, just a little neater.
However, you still have the problem that you probably only want to access some of the context, while not risking editing values set by different parts of your application. A much better way would be to store a dict in context:
self.context.attributes = {}
self.context.attributes["binding"] = False
print(self.context.attributes[prop])
This way, only the variables stored in that dict can be reached dynamically, and they can't collide with attributes used directly by your application code.
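Both approaches side by side, using a bare object in place of the Flask-RESTPlus context so the sketch runs standalone:

```python
class FakeContext:
    """Stand-in for self.context; attribute access works the same way."""
    pass


ctx = FakeContext()
prop = "binding"

# Dynamic attribute lookup, equivalent to ctx.__getattribute__(prop):
ctx.binding = True
getattr(ctx, prop)    # -> True

# Dict-based container: only keys placed here are reachable dynamically.
ctx.attributes = {"binding": False}
ctx.attributes[prop]  # -> False
```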

Zope AdvancedQuery ICatalogTool

I have an ICatalogTool and a catalog that I can query using AdvancedQuery. I want to learn how to use this tool and which queries I can use to find something in that catalog.
I have an example of usage of this tool:
results = ICatalogTool(dc).search(query=Eq('id', self._object.ip))
# Eq is the "equals" operator in AdvancedQuery
# dc is an instance of the DeviceList class
# self._object.ip is some IP to search for
I have read the documentation and found that each function like Eq takes some index. So I want to know which indexes other than 'id' are in my catalog. How can I look that up? Are there any tools for introspection?
Look in the Zope Management Interface in the Indexes tab. Otherwise, you can list index names programmatically by calling the indexes() method of the catalog object.
IMHO, you should familiarize yourself with the basic query interface (calling the searchResults() method with queries specified as mappings) before attempting to use the AdvancedQuery add-on.

Pythonic alternatives to if/else statements

I'm trying to avoid a complex web of if/elif/else statements.
What Pythonic approaches can I use to accomplish this?
Here's what I want to achieve:
My script will receive a slew of different URLs:
youtube.com, hulu.com, netflix.com, instagram.com, imgur.com, etc., possibly thousands of different domains.
I will have a function/set of different instructions that will be called for each distinct site.
so....
if urlParsed.netloc == "www.youtube.com":
    youtube()
if urlParsed.netloc == "hulu.com":
    hulu()
and so on for hundreds of lines....
Is there any way to avoid this pattern: if site X, call funcA; if site Y, call funcB?
I want to use this as a real-world lesson to learn some advanced structuring of my Python code.
So please feel free to guide me towards something fundamental to Python or programming in general.
Use a dispatch dictionary mapping domain to function:
def default_handler(parsed_url):
    pass

def youtube_handler(parsed_url):
    pass

def hulu_handler(parsed_url):
    pass

handlers = {
    'www.youtube.com': youtube_handler,
    'hulu.com': hulu_handler,
}

handler = handlers.get(urlParsed.netloc, default_handler)
handler(urlParsed)
Python functions are first-class objects and can be stored as values in a dictionary just like any other object.
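Putting the dispatch together with urllib.parse; the handler bodies here are placeholders that just return a label so the flow is visible:

```python
from urllib.parse import urlparse


def default_handler(parsed_url):
    return "default"


def youtube_handler(parsed_url):
    return "youtube"


def hulu_handler(parsed_url):
    return "hulu"


handlers = {
    "www.youtube.com": youtube_handler,
    "hulu.com": hulu_handler,
}


def dispatch(url):
    # netloc is the domain part of the URL, e.g. "www.youtube.com"
    parsed = urlparse(url)
    handler = handlers.get(parsed.netloc, default_handler)
    return handler(parsed)


dispatch("https://www.youtube.com/watch?v=abc")  # -> "youtube"
dispatch("https://example.com/page")             # -> "default"
```

New domains then cost one dictionary entry each instead of another if branch.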
You can use a dict:
myDict = {"www.youtube.com": youtube, "hulu.com": hulu}
...
...
if urlParsed.netloc in myDict:
    myDict[urlParsed.netloc]()
else:
    print("Unknown URL")

What's a good way to load and store data in global constants for caching in Django?

How do you normally load and store stuff from the DB in global constants for caching during initialisation? The global constants will not change again later.
Do you just make the DB query during load time and put it in a constant, or use a lazy loading mechanism of some sort?
What I have in mind is code in the global scope like this:
SPECIAL_USER_GROUP = Group.objects.get(name='very special users')
OTHER_THING_THAT_DOESNT_CHANGE = SomeDbEnum.objects.filter(is_enabled=True)
# several more items like this
I ran into issues doing that when running tests using an empty test database. An option would be to put all the needed data in fixtures, but I want to avoid coupling each individual test with irrelevant data they don't need.
Would the following be considered good style?
@memoize
def get_special_user_group():
    return Group.objects.get(name='very special users')
Or would a generic reusable mechanism be preferred?
Django has a cache framework that you could use.
http://docs.djangoproject.com/en/dev/topics/cache/
It has a low-level caching API that does what you want.
from django.core.cache import cache
cache.set('my_key', 'hello, world!', 30)
cache.get('my_key')
To use it, you'd do something like:
value = cache.get("key")
if value is None:
    value = some_expensive_operation()
    cache.set("key", value)
return value
Using something like this will give you more flexibility in the future.
An option would be to put all the needed data in fixtures,
Good thinking.
but I want to avoid coupling each individual test with irrelevant data they don't need.
Then define smaller fixtures.
If necessary, use the TestCase setUp method to create the necessary database row.
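The @memoize idea from the question maps directly onto the standard library's functools.lru_cache. In this sketch the ORM call is replaced by a stub so the deferred, run-once behaviour is visible without a database:

```python
from functools import lru_cache

query_count = []


@lru_cache(maxsize=None)
def get_special_user_group():
    # Stand-in for Group.objects.get(name='very special users')
    query_count.append(1)
    return "very special users"


get_special_user_group()
get_special_user_group()
len(query_count)  # -> 1: the "query" ran only on the first call
```

Because the lookup is deferred until first use, importing the module no longer touches the database, which sidesteps the empty-test-database problem; lru_cache also exposes cache_clear() so tests can reset the cached value.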

Duplicate an AppEngine Query object to create variations of a filter without affecting the base query

In my AppEngine project I have a need to use a certain filter as a base then apply various different extra filters to the end, retrieving the different result sets separately. e.g.:
base_query = MyModel.all().filter('mainfilter', 123)
Then I need to use the results of various sub queries separately:
subquery1 = base_query.filter('subfilter1', 'xyz')
# Do something with subquery1 results here
subquery2 = base_query.filter('subfilter2', 'abc')
# Do something with subquery2 results here
Unfortunately, filter() affects the state of the base_query Query instance, rather than just returning a modified copy. Is there any way to duplicate the Query object and use it as a base? Is there perhaps a standard Python way of duplicating an object that could be used here?
The extra filters are actually applied by the results of different forms dynamically within a wizard, and they use the 'running total' of the query in their branch to assess whether to ask further questions.
Obviously I could pass around a rudimentary stack of filter criteria, but I'd rather use the Query itself if possible, as it adds simplicity and elegance to the solution.
There's no officially approved (i.e., not likely to break) way to do this. Simply creating the query afresh from the parameters when you need it is your best option.
As Nick said, you'd better create the query again, but you can still avoid repeating yourself. A good way to do that would be like this:
# inside a request handler
def create_base_query():
    return MyModel.all().filter('mainfilter', 123)

subquery1 = create_base_query().filter('subfilter1', 'xyz')
# Do something with subquery1 results here
subquery2 = create_base_query().filter('subfilter2', 'abc')
# Do something with subquery2 results here
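A toy stand-in (not the real AppEngine Query class) makes the difference concrete; filter() here mutates in place, just as described in the question:

```python
class Query:
    """Minimal model of a query whose filter() mutates the instance."""

    def __init__(self):
        self.filters = []

    def filter(self, name, value):
        self.filters.append((name, value))  # mutates in place
        return self


def create_base_query():
    return Query().filter("mainfilter", 123)


# Each call builds a fresh object, so the sub-filters stay independent:
q1 = create_base_query().filter("subfilter1", "xyz")
q2 = create_base_query().filter("subfilter2", "abc")
q1.filters  # -> [("mainfilter", 123), ("subfilter1", "xyz")]
q2.filters  # -> [("mainfilter", 123), ("subfilter2", "abc")]
```

Had both sub-filters been applied to one shared instance, the second query would silently carry the first query's filter as well.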
