I have list-type data in Redis, and there are so many keys that they can't all be fetched at once. I tried to use the python redis lrange function to fetch them in batches, e.g. 1000 at a time, but it doesn't seem to work: it always returns empty. It seems lrange treats * as a literal character; how should I do this?
conn.lrange(f'test-{id}-*', 0, 1000)
conn.lrange() and the underlying Redis LRANGE command return the elements of a given list, not the keys of a database: your code returns an empty array because the given key does not exist.
To retrieve the keys of a database you should use the SCAN command, exposed by conn.scan().
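For example, a minimal sketch using redis-py's scan_iter() helper, which wraps SCAN and drives the cursor for you (host/port here are illustrative):

import redis

conn = redis.StrictRedis(host='localhost', port=6379, db=0)

# SCAN matches key names against a glob pattern; scan_iter
# iterates the cursor so you receive every matching key.
for key in conn.scan_iter(match='test-*'):
    # each key names an actual list, so LRANGE works on it
    batch = conn.lrange(key, 0, 999)
    print(key, len(batch))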
I am trying to write a simple python script to collect certain command outputs from mongodb, as part of validating the database before and after backup.
Below is the script.
import pymongo
import json
client = pymongo.MongoClient('localhost',username='admin',password='admin')
db = client['mydb']
collections = list(db.list_collection_names())
for i in collections:
    print(db.$i.estimated_document_count())
All the collections are stored in the list called collections, and I want to run a for loop over it so that I can get the document count of each collection. I know the last print statement here is wrong. How do I get it right? I want $i to be substituted with the collection name during each iteration so that I can get the document count of that collection.
When I run print(db.audit.estimated_document_count()) it gives me the document count of the audit collection. But how do I iterate through the list in a for loop and substitute the value of i in the command?
Also, for validating backup/restore is there any other commands that I should run against database to verify backup/restore?
You cannot use a computed value as an identifier in a "dot" expression, at least not without resorting to dirty tricks.
What you can do is find some other mechanism for getting a collection given its name. According to the tutorial and API reference, you can use db['foo'] instead of db.foo.
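Applied to your loop, a minimal sketch (reusing the client and collections list from your script):

for name in collections:
    # dictionary-style access accepts a name held in a variable
    print(name, db[name].estimated_document_count())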
I want to get the count of values given the key schema.
I have a set in Redis whose key is 'sample:key:schema'.
I want to get total number of values associated with this key.
Currently, I do the following and it works
import redis
redis_client = redis.StrictRedis(host='localhost', port=6379, db=0)
key_schema = 'sample:key:schema'
count_of_values = len(redis_client.smembers(key_schema))
Is there a better way to get the counts directly without having to fetch all the records and count them?
You don't have to fetch the members with smembers and apply len afterwards. You can use scard for this; it is exposed under the same name in the python client.
This is from the official redis documentation
Returns the set cardinality (number of elements) of the set stored at key.
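Applied to the code from the question, the smembers/len pair becomes a single server-side call:

count_of_values = redis_client.scard(key_schema)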
I am having trouble with the parameter of an SNMP query in a python script. An SNMP query takes an OID as a parameter. The OID I use here is written in the code below; used alone in a query, it should return a list of states for the interfaces of the IP addresses I am querying.
What I want is to use that OID with a variable appended to it in order to get one very precise piece of information (if I use the OID alone I will only get a list of things that would only complicate my problem).
The query goes like this:
oid = "1.3.6.1.4.1.2011.5.25.119.1.1.3.1.2."
variable = "84.79.84.79"
query = session.get(oid + variable)
Here, this query returns a corrupted SNMPObject because, during the configuration of the device I am querying, another number is inserted between these two elements of the parameter, for reasons that do not really matter here.
Below is a screenshot showing some examples of an SNMP request that takes only the OID above as a parameter, without the variable appended; you can see that my variable varies, and so does the highlighted additional number:
Basically what I am looking for here is the response, but unfortunately I cannot predict, for each IP address I am querying, what that "random" number will be.
I could use a loop that tries 20 or 50 queries and only saves the response of the one that works, but that's ugly. What would be better is some built-in function or library that would just say to the query:
"SNMP query on that OID, with any integer appended to it, and with my variable appended to that".
I definitely don't want to generate a random int, as it is already generated in the configuration of the device I am querying; I just want to avoid looping to get a proper response to a precise query.
I hope that was clear enough.
Something like this should work:
from random import randint
variable = "84.79.84.79"
numbers = "1.3.6.1.4.1.2011.5.25.119.1.1.3.1.2"
query = session.get('.'.join([numbers, str(randint(1, 100)), variable]))
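If guessing at the inserted number is not acceptable, another approach is to walk the subtree under the base OID and keep the entry whose OID ends with your variable. This is only a sketch: it assumes an easysnmp-style Session with a walk() method whose items expose the numeric OID via oid and oid_index, which may differ in your SNMP library:

variable = "84.79.84.79"
base_oid = "1.3.6.1.4.1.2011.5.25.119.1.1.3.1.2"

# walk() returns every entry under base_oid, whatever number the
# device configuration inserted; match the tail against the variable
for item in session.walk(base_oid):
    full_oid = '{}.{}'.format(item.oid, item.oid_index)  # assumed attribute layout
    if full_oid.endswith(variable):
        result = item
        break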
In a Django project, I'm using redis as a fast backend.
I can LPUSH multiple values in a redis list like so:
lpush(list_name,"1","2","3")
However, I can't do it if I try
values_list = ["1","2","3"]
lpush(list_name,values_list)
For the record, this doesn't return an error. Instead it creates a list list_name with a single value, e.g. ['["1","2","3"]']. This is not usable if one later does AnObject.objects.filter(id__in=values_list). Nor does it work if one does AnObject.objects.filter(id__in=values_list[0]) (error: invalid literal for int() with base 10: '[').
What's the best way to LPUSH numerous values in bulk into a redis list (a python example is preferred)?
lpush(list_name, *values_list)
This unpacks the contents of values_list and passes them as separate arguments.
If you have an enormous number of values to insert, you can pipeline them, or use the command-line tool redis-cli --pipe to populate a database.
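A minimal pipelining sketch (a redis-py connection named conn and a chunk size of 1000 are illustrative choices, not requirements):

import redis

conn = redis.StrictRedis(host='localhost', port=6379, db=0)
values_list = [str(i) for i in range(100000)]

pipe = conn.pipeline()
# push the values in chunks so no single LPUSH call grows too large
for i in range(0, len(values_list), 1000):
    pipe.lpush('list_name', *values_list[i:i + 1000])
pipe.execute()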
Is there a way to store multiple values in a Redis list at the same time? I can only find a way to insert one value at a time into a list.
I've been looking at the following commands documentation: http://redis.io/commands
update: I have created a ticket for this feature.
You can use a pipeline. If you're using redis-py, check out the comparison below; I ran this on an AWS ElastiCache Redis instance and found the following:
import redis
rs = redis.StrictRedis(host='host', port=6379, db=0)
q='test_queue'
Ran in 0.17s for 10,000 vals
def multi_push(q, vals):
    pipe = rs.pipeline()
    for val in vals:
        pipe.lpush(q, val)
    pipe.execute()
Ran in 13.20s for 10,000 vals
def seq_push(q, vals):
    for val in vals:
        rs.lpush(q, val)
~ 78x faster.
Yeah, it doesn't look like that's possible. You might be able to use MULTI (transactions) to store multiple values in an atomic sequence.
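In redis-py, a MULTI/EXEC block is what pipeline(transaction=True) issues under the hood, so a minimal sketch of that idea (a connection named rs is assumed) would be:

pipe = rs.pipeline(transaction=True)  # commands are wrapped in MULTI ... EXEC
pipe.lpush('list_name', '1')
pipe.lpush('list_name', '2')
pipe.lpush('list_name', '3')
pipe.execute()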
You can if you run 2.4. While it isn't marked stable yet, it should be soon, IIRC. That said, I'm running off trunk and it's been rock solid for me with many gigs of data and a high rate of churn. For more details on variadic commands see 2.4 and other news.
For lists, not that I am aware of, but depending on your data volume, it might be efficient to re-cast your data to use Redis's HMSET command (hash multi-set), which does indeed give you multiple inserts in a single call:
HMSET V3620 UnixTime 1309312200 UID 64002 username "doug" level "noob"
As you'd expect, HMSET creates a Redis hash keyed to V3620. The key follows the HMSET command, followed by multiple field-value pairs:
HMSET key field1 value1 field2 value2
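For completeness, a minimal redis-py sketch of the same call (recent redis-py versions expose this through hset with a mapping argument; the connection rs is assumed):

rs.hset('V3620', mapping={
    'UnixTime': 1309312200,
    'UID': 64002,
    'username': 'doug',
    'level': 'noob',
})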