How to search inside a JSON file with Redis? - python

Here I set a JSON object as the value of a key in Redis. Later I want to search within the JSON stored in Redis. My search key will always be a JSON fragment like in the example below, and I want to match it against the stored JSON.
Currently I do this by iterating and comparing; instead I want Redis to do the search. How can I do that?
import json
import os

import redis

rd = redis.StrictRedis(host="localhost", port=6379, db=0)
if not rd.get("mykey"):
    with open(os.path.join(BASE_DIR, "my_file.json")) as fl:
        data = json.load(fl)
    rd.set("mykey", json.dumps(data))
else:
    key_values = json.loads(rd.get("mykey"))
search_json_key = {
    "key": "value",
    "key2": {
        "key": "val"
    }
}
# Here I search by iterating and comparing; instead I want Redis to do it
for i in key_values['all_data']:
    if json.dumps(i) == json.dumps(search_json_key):
        pass  # return the match here
# mykey format looks like this:
{
    "all_data": [
        {
            "key": "value",
            "key2": {
                "key": "val"
            }
        },
        {
            "key": "value",
            "key2": {
                "key": "val"
            }
        },
        {
            "key": "value",
            "key2": {
                "key": "val"
            }
        }
    ]
}

To search JSON with Redis you have two options: you can use the FT.CREATE command to create an index that you can then query with FT.SEARCH (while the docs show the CLI syntax, you can call rd.ft().create_index() / rd.ft().search() from your Python script), or you can check out the Redis OM client for Python, which will take care of some of that for you.
Either way you'll have to rework your data a bit to take full advantage of Redis' search capabilities.
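For example, here is a minimal sketch of the FT.CREATE / FT.SEARCH route with redis-py. It assumes the RediSearch and RedisJSON modules are loaded, and that you rework the data so each element of all_data is stored under its own key (the index name, key prefix, and field aliases below are made up for illustration):
from redis.commands.search.field import TextField
from redis.commands.search.indexDefinition import IndexDefinition, IndexType
from redis.commands.search.query import Query

# Store each element of all_data under its own key so it can be indexed
for n, item in enumerate(data["all_data"]):
    rd.json().set(f"item:{n}", "$", item)

# Create an index over the fields you want to match on
rd.ft("item_idx").create_index(
    (
        TextField("$.key", as_name="key"),
        TextField("$.key2.key", as_name="key2_key"),
    ),
    definition=IndexDefinition(prefix=["item:"], index_type=IndexType.JSON),
)

# Let Redis do the matching instead of iterating in Python
results = rd.ft("item_idx").search(Query("@key:value @key2_key:val"))
print(results.docs)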

Related

Automatically entering next JSON level using Python in a similar way to JQ in bash

I am trying to use Python to extract pricePerUnit from JSON. There are many entries; this is just two of them:
{
    "terms": {
        "OnDemand": {
            "7Y9ZZ3FXWPC86CZY": {
                "7Y9ZZ3FXWPC86CZY.JRTCKXETXF": {
                    "offerTermCode": "JRTCKXETXF",
                    "sku": "7Y9ZZ3FXWPC86CZY",
                    "effectiveDate": "2020-11-01T00:00:00Z",
                    "priceDimensions": {
                        "7Y9ZZ3FXWPC86CZY.JRTCKXETXF.6YS6EN2CT7": {
                            "rateCode": "7Y9ZZ3FXWPC86CZY.JRTCKXETXF.6YS6EN2CT7",
                            "description": "Processed translation request in AWS GovCloud (US)",
                            "beginRange": "0",
                            "endRange": "Inf",
                            "unit": "Character",
                            "pricePerUnit": {
                                "USD": "0.0000150000"
                            },
                            "appliesTo": []
                        }
                    },
                    "termAttributes": {}
                }
            },
            "CQNY8UFVUNQQYYV4": {
                "CQNY8UFVUNQQYYV4.JRTCKXETXF": {
                    "offerTermCode": "JRTCKXETXF",
                    "sku": "CQNY8UFVUNQQYYV4",
                    "effectiveDate": "2020-11-01T00:00:00Z",
                    "priceDimensions": {
                        "CQNY8UFVUNQQYYV4.JRTCKXETXF.6YS6EN2CT7": {
                            "rateCode": "CQNY8UFVUNQQYYV4.JRTCKXETXF.6YS6EN2CT7",
                            "description": "$0.000015 per Character for TextTranslationJob:TextTranslationJob in EU (London)",
                            "beginRange": "0",
                            "endRange": "Inf",
                            "unit": "Character",
                            "pricePerUnit": {
                                "USD": "0.0000150000"
                            },
                            "appliesTo": []
                        }
                    },
                    "termAttributes": {}
                }
            }
        }
    }
}
The issue I run into is that the keys, which in this sample are 7Y9ZZ3FXWPC86CZY, CQNY8UFVUNQQYYV4.JRTCKXETXF, and CQNY8UFVUNQQYYV4.JRTCKXETXF.6YS6EN2CT7, are changing strings that I cannot just type out as I parse the dictionary.
I have python code that works for the first level of these random keys -
with open('index.json') as json_file:
    data = json.load(json_file)
json_keys = list(data['terms']['OnDemand'].keys())
# Get the region
for i in json_keys:
    print(data['terms']['OnDemand'][i])
However, this is tedious, as I would need to run the same code three times to get the other keys like 7Y9ZZ3FXWPC86CZY.JRTCKXETXF and 7Y9ZZ3FXWPC86CZY.JRTCKXETXF.6YS6EN2CT7, since the string changes with each JSON entry.
Is there a way that I can just tell python to automatically enter the next level of the JSON object, without having to parse all keys, save them, and then iterate through them? Using JQ in bash I can do this quite easily with jq -r '.terms[][][]'.
If you are really sure that there is exactly one key-value pair on each level, you can try the following:
def descend(x, depth):
    # Take the (only) value at each level, `depth` times
    for i in range(depth):
        x = next(iter(x.values()))
    return x
You can use dict.values() to iterate over the values of a dict, and next(iter(d.values())) to get the first (and only) value of a dict.
for demand in data['terms']['OnDemand'].values():
    next_level = next(iter(demand.values()))
    print(next_level)
If you expect a number of children other than one at the second level, you can simply nest the for loops:
for demand in data['terms']['OnDemand'].values():
    for sub_demand in demand.values():
        print(sub_demand)
If you are interested in the keys too, you can use the dict.items() method to iterate over keys and values at the same time:
for demand_key, demand in data['terms']['OnDemand'].items():
    for sub_demand_key, sub_demand in demand.items():
        print(demand_key, sub_demand_key, sub_demand)
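Putting the pieces together for the pricing JSON above, a short sketch (assuming, as in the sample, exactly one offer term per SKU and one price dimension per term) that extracts every pricePerUnit:
for demand in data['terms']['OnDemand'].values():
    term = descend(demand, 1)  # skip the "<sku>.<offerTermCode>" level
    rate = descend(term['priceDimensions'], 1)  # skip the rateCode level
    print(rate['pricePerUnit']['USD'])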

How do I automate finding and replacing a JSON attribute?

This is an example of a JSON database that I will work with in my Python code.
{
    "name1": {
        "file": "abc",
        "delimiter": "n"
    },
    "name2": {
        "file": "def",
        "delimiter": "n"
    }
}
Pretend that a user of my code presses a GUI button that is supposed to change the name of "name1" to whatever the user typed into a textbox.
How do I change "name1" to a custom string without manually copying and pasting the entire JSON database into my actual code? I want the code to load the JSON database and change the name by itself.
Load the JSON object into a dict. Grab the name1 entry. Create a new entry with the desired key and the same value. Delete the original entry. Dump the dict back to your JSON file.
This is likely not the best way to perform the task, though: you could also use sed on Linux, or its Windows equivalent (depending on your installed tools), to make the same simple stream-edit change.
If I understand the task correctly, here is an example:
import json

user_input = input('Name: ')

with open("db.json") as f:
    db = json.load(f)

# Replace the "name1" key, keeping its value
db[user_input] = db.pop('name1')

with open("db.json", 'w') as f:
    json.dump(db, f)
You can use the object_hook parameter that json.loads() accepts to detect JSON objects (dictionaries) that have an entry associated with the old key, and re-associate its value with the new key as they're encountered.
This can be implemented as a function as follows:
import json

def replace_key(json_repr, old_key, new_key):
    def decode_dict(a_dict):
        try:
            entry = a_dict.pop(old_key)
        except KeyError:
            pass  # Old key not present - no change needed.
        else:
            a_dict[new_key] = entry
        return a_dict
    return json.loads(json_repr, object_hook=decode_dict)
data = '''{
    "name1": {
        "file": "abc",
        "delimiter": "n"
    },
    "name2": {
        "file": "def",
        "delimiter": "n"
    }
}
'''
new_data = replace_key(data, 'name1', 'custom string')
print(json.dumps(new_data, indent=4))
Output:
{
    "name2": {
        "file": "def",
        "delimiter": "n"
    },
    "custom string": {
        "file": "abc",
        "delimiter": "n"
    }
}
I got the basic idea from @Mike Brennan's answer to another JSON-related question, How to get string objects instead of Unicode from JSON?

How do I return an upper field in a JSON with python?

So, I need some help returning an ID having found a certain string. My JSON looks something like this:
{
    "id": "id1"
    "field1": {
        "subfield1": {
            "subrield2": {
                "subfield3": {
                    "subfield4": [
                        "string1",
                        "string2",
                        "string3"
                    ]
                }
            }
        }
    }
    "id": "id2"
    "field1": {
        "subfield1": {
            "subrield2": {
                "subfield3": {
                    "subfield4": [
                        "string4",
                        "string5",
                        "string6"
                    ]
                }
            }
        }
    }
}
Now, I need to get the ID from a certain string, for example:
For "string5" I need to return "id2"
For "string2" I need to return "id1"
In order to find these strings I have used the objectpath Python module like this: json_Tree.execute('$..subfield4')
After doing an analysis on a huge number of strings, I need to return the ones that meet my criteria. I have the strings that I need (for example "string3"), but now I have to return the IDs.
Thank you!!
Note: I don't have a lot of experience with coding, I just started a few months ago to work on a project in Python and I have been stuck on this for a while
Making some assumptions about the actual structure of the data as being:
[
    {
        "id": "id1",
        "subfield1": {
            "subfield2": {
                "subfield3": {
                    "subfield4": [
                        "string1",
                        "string2",
                        "string3"
                    ]
                }
            }
        }
    }
    // And so on
]
And assuming that each string1, string2, etc. appears under only one id, you can construct this mapping like so:
from typing import List

data: List[dict]  # The JSON parsed as a list of dicts
string_to_id_mapping = {}
for record in data:
    for string in record["subfield1"]["subfield2"]["subfield3"]["subfield4"]:
        string_to_id_mapping[string] = record["id"]
assert string_to_id_mapping["string3"] == "id1"
If each string can appear under multiple ids, then the following will catch all of them:
from collections import defaultdict
from typing import List

data: List[dict]  # The JSON parsed as a list of dicts
string_to_id_mapping = defaultdict(set)
for record in data:
    for string in record["subfield1"]["subfield2"]["subfield3"]["subfield4"]:
        string_to_id_mapping[string].add(record["id"])
assert string_to_id_mapping["string3"] == {"id1"}
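Once the mapping is built, answering the original question is a plain dictionary lookup, e.g. (assuming records for both ids were present in data):
print(string_to_id_mapping["string2"])  # {'id1'}
print(string_to_id_mapping["string5"])  # {'id2'}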

How to get this json object in python?

So I want to get the first key of this JSON using Python 3.7, without knowing its name.
Here is the JSON:
{
    "intent": [
        {
            "confidence": 0.99313362101529,
            "value": "sendmessage"
        }
    ],
    "wikipedia_search_query": [
        {
            "suggested": true,
            "confidence": 0.93804001808167,
            "value": "message",
            "type": "value"
        }
    ],
    "messenger_recipient": [
        {
            "confidence": 0.93138399364195,
            "value": "me",
            "type": "value"
        }
    ]
}
EDIT:
I want to compare the name of the first key like so:
if(jsonobj[0] == "wikipedia_search_query")
    dosomething()
While Python 3.6+ does maintain insertion order on dictionaries, there's no guarantee that your incoming JSON will be in the order you expect. That being said, if you can guarantee the insertion order, here's a working example.
import json
js = """{
"intent":[
{
"confidence":0.99313362101529,
"value":"sendmessage"
}
],
"wikipedia_search_query":[
{
"suggested":true,
"confidence":0.93804001808167,
"value":"message",
"type":"value"
}
],
"messenger_recipient":[
{
"confidence":0.93138399364195,
"value":"me",
"type":"value"
}
]
}"""
json_data = json.loads(js)
first_key = next(iter(json_data))
first_value = json_data[first_key]
print(first_key)
print(first_value)
Output
intent
[{'confidence': 0.99313362101529, 'value': 'sendmessage'}]
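With first_key in hand, the comparison from the question becomes (dosomething() being the hypothetical handler from the question):
if first_key == "wikipedia_search_query":
    dosomething()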

Elasticsearch/Python - Re-index data after changing the mappings?

I'm a little stuck on how to re-index data in Elasticsearch after a mapping or a data type has been changed.
According to the Elasticsearch docs:
Pull the documents in from your old index, using a scrolled search and index them into the new index using the bulk API. Many of the client APIs provide a reindex() method which will do all of this for you. Once you are done, you can delete the old index.
This is my old mapping
{
    "test-index2": {
        "mappings": {
            "business": {
                "properties": {
                    "address": {
                        "type": "nested",
                        "properties": {
                            "country": {
                                "type": "string"
                            },
                            "full_address": {
                                "type": "string"
                            }
                        }
                    }
                }
            }
        }
    }
}
New index mapping; I'm changing full_address -> location_address:
{
    "test-index2": {
        "mappings": {
            "business": {
                "properties": {
                    "address": {
                        "type": "nested",
                        "properties": {
                            "country": {
                                "type": "string"
                            },
                            "location_address": {
                                "type": "string"
                            }
                        }
                    }
                }
            }
        }
    }
}
I'm using the Python client for Elasticsearch:
https://elasticsearch-py.readthedocs.org/en/master/helpers.html#elasticsearch.helpers.reindex
from elasticsearch import Elasticsearch
from elasticsearch.helpers import reindex
es = Elasticsearch(["es.node1"])
reindex(es, "source_index", "target_index")
However, this only transfers the data from one index to another.
How can I use it to change the mappings/data types for my case above?
It's straightforward if you use scan & scroll and the Bulk API, both already implemented in the Python Elasticsearch client:
First, fetch all the documents with the scan & scroll helper.
Loop through and make the necessary modifications to each document.
Insert the modified documents into a new index using the Bulk API.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()

# Use the scan&scroll helper to fetch all documents from your old index
res = helpers.scan(es, query={
    "query": {
        "match_all": {}
    },
    "size": 1000
}, index="old_index")

new_insert_data = []
# Change the mapping and everything else by looping through all your documents
for x in res:
    x['_index'] = 'new_index'
    # Rename "full_address" to "location_address" inside the nested "address" object
    x['_source']['address']['location_address'] = x['_source']['address'].pop('full_address')
    # The score is not needed for indexing
    x.pop('_score', None)
    # Add the modified document to the list
    new_insert_data.append(x)

print(new_insert_data)

# Use the Bulk API to insert the list of your modified documents into the new index
helpers.bulk(es, new_insert_data)
es.indices.refresh(index="new_index")
The reindex() API simply "moves" documents from one index to another. There is no way it can detect or infer that the field name full_address in documents of the old index should become location_address in documents of the new index. I doubt any standard Elasticsearch client provides an API that can do what you want. The only way I can think of achieving this is through additional custom logic on the client side: maintain a dictionary mapping field names in the old index to field names in the new index, read documents from the old index, and index the corresponding documents into the new index with the field names renamed via that dictionary.
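For what it's worth, a sketch of that client-side approach, built on the same scan/bulk helpers as the answer above (the field_map contents, the target index name, and the nested "address" layout are assumptions based on the mappings in the question):
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
# Assumed dictionary of old field names to new ones
field_map = {"full_address": "location_address"}

def renamed_docs():
    for hit in helpers.scan(es, index="test-index2", query={"query": {"match_all": {}}}):
        source = hit["_source"]
        # Rename keys of the nested "address" object according to field_map
        address = source.get("address", {})
        source["address"] = {field_map.get(k, k): v for k, v in address.items()}
        yield {"_index": "new-test-index", "_source": source}

helpers.bulk(es, renamed_docs())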
After updating the mapping, this can be done by updating the existing documents using the Bulk API.
POST /_bulk
{"update":{"_id":"59519","_type":"asset","_index":"assets"}}
{"doc":{"facility_id":491},"detect_noop":false}
Note: the 'detect_noop' flag controls whether updates that change nothing are detected and skipped as no-ops.
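For reference, a sketch of the same bulk update through the Python client (the document id and field come from the REST example above; '_type' is omitted, as it is deprecated on Elasticsearch 7+):
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch()
actions = [{
    "_op_type": "update",
    "_index": "assets",
    "_id": "59519",
    "doc": {"facility_id": 491},
    "detect_noop": False,
}]
helpers.bulk(es, actions)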
