How to loop through each firebase database child with python?

I have this firebase data structure
I want to print all the keys under the Firebase-generated keys (i.e. from 3030-00-809-7702 to newNSN), so I use this code:
Inventory = db.child("Inventories").get()
for business in Inventory.each():
    businessid = business.key()
    productdb = db.child("Inventories").child(businessid).get()
    for product in productdb.each():
        productid = product.key()
print(businessid)
print(productid)
but what I get is this
so only the last keys are being printed instead of all the keys. What am I doing wrong, and how can I get it to print all the keys?

Just print the value at the current tree to get the whole thing:
inventory = db.child("Inventories").get()
for business in inventory.each():
    print(business.val())
Or you can iterate it, although requesting N items from Firebase for N children is really inefficient.
inventorydb = db.child("Inventories")
for businessid in inventorydb.shallow().get().each():
    productdb = inventorydb.child(businessid)
    # print the ids
    print([id for id in productdb.shallow().get()])
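A middle ground, sketched under the assumption that the Pyrebase client from the question is in use: fetch "Inventories" once, then walk the returned snapshot with .each() and .val() to print every business and product key without issuing extra requests.
inventory = db.child("Inventories").get()
for business in inventory.each():
    businessid = business.key()
    # business.val() is the nested dict of products for this business
    products = business.val()
    for productid in products:
        print(businessid, productid)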

Related

If Python doesn't find a certain value inside JSON, append something to a list

I'm making a script with Python to search for competitors with a Google API.
Just so you can see how it works:
First I make a request and save the data as JSON:
# make the http GET request to Scale SERP
api_result = requests.get('https://api.scaleserp.com/search', params)
# Save data inside Json
dados = api_result.json()
Then I create some lists for position, title, domain and so on, and a for loop to append my competitors' data to those lists:
# Create the lists
sPositions = []
sDomains = []
sUrls = []
sTitles = []
sDescription = []
sType = []
# Loop to look for information about competitors
for sCompetitors in dados['organic_results']:
    sPositions.append(sCompetitors['position'])
    sDomains.append(sCompetitors['domain'])
    sUrls.append(sCompetitors['link'])
    sTitles.append(sCompetitors['title'])
    sDescription.append(sCompetitors['snippet'])
    sType.append(sCompetitors['type'])
The problem is that not every entry in my JSON is going to have the same keys. Some of them won't have the "domain" value, so I need something like: when there is no 'domain' value, append 'no domain' to the sDomains list.
I'd be glad if anyone could help.
Thanks!!
You should use the get method of dicts so you can set a default value in case the key doesn't exist:
for sCompetitors in dados['organic_results']:
    sPositions.append(sCompetitors.get('position', 'no position'))
    sDomains.append(sCompetitors.get('domain', 'no domain'))
    sUrls.append(sCompetitors.get('link', 'no link'))
    sTitles.append(sCompetitors.get('title', 'no title'))
    sDescription.append(sCompetitors.get('snippet', 'no snippet'))
    sType.append(sCompetitors.get('type', 'no type'))
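As a small illustration (hypothetical competitor dict, not taken from the API), dict.get() returns the default only when the key is missing:
competitor = {'position': 3, 'title': 'Example result'}
print(competitor.get('domain', 'no domain'))      # 'no domain', because the key is missing
print(competitor.get('position', 'no position'))  # 3, the existing value is returned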

How do you iterate over a set or a list in Flask and PyMongo?

I have produced a set of matching IDs from a database collection that looks like this:
{ObjectId('5feafffbb4cf9e627842b1d9'), ObjectId('5feaffcfb4cf9e627842b1d8'), ObjectId('5feb247f1bb7a1297060342e')}
Each ObjectId represents an ID on a collection in the DB.
I got that list by doing this: (which incidentally I also think I am doing wrong, but I don't yet know another way)
# Find all question IDs
question_list = list(mongo.db.questions.find())
all_questions = []
for x in question_list:
    all_questions.append(x["_id"])
# Find all con IDs that match the question IDs
con_id = list(mongo.db.cons.find())
con_id_match = []
for y in con_id:
    con_id_match.append(y["question_id"])
matches = set(con_id_match).intersection(all_questions)
print("matches", matches)
print("all_questions", all_questions)
print("con_id_match", con_id_match)
And that brings up all the IDs that are associated with a match such as the three at the top of this post. I will show what each print prints at the bottom of this post.
Now I want to get each ObjectId separately as a variable so I can search for these in the collection.
mongo.db.cons.find_one({"con": matches})
Where matches (which will probably need to be a new variable) will be each of the ObjectIds that match the DB reference.
So, how do I separate the ObjectIds in matches so that I can iterate over them one at a time? I tried a for loop, but it threw an error; I guess I am writing it wrong for a set. Thanks for the help.
Print Statements:
**matches** {ObjectId('5feafffbb4cf9e627842b1d9'), ObjectId('5feaffcfb4cf9e627842b1d8'), ObjectId('5feb247f1bb7a1297060342e')}
**all_questions** [ObjectId('5feafb52ae1b389f59423a91'), ObjectId('5feafb64ae1b389f59423a92'), ObjectId('5feaffcfb4cf9e627842b1d8'), ObjectId('5feafffbb4cf9e627842b1d9'), ObjectId('5feb247f1bb7a1297060342e'), ObjectId('6009b6e42b74a187c02ba9d7'), ObjectId('6010822e08050e32c64f2975'), ObjectId('601d125b3c4d9705f3a9720d')]
**con_id_match** [ObjectId('5feb247f1bb7a1297060342e'), ObjectId('5feafffbb4cf9e627842b1d9'), ObjectId('5feaffcfb4cf9e627842b1d8')]
Usually you can just use the find method, which yields documents one by one, and filter the documents during iteration in Python like this:
# fetch only ids
question_ids = {question['_id'] for question in mongo.db.questions.find({}, {'_id': 1})}
matches = []
for con in mongo.db.cons.find():
    con_id = con['question_id']
    if con_id in question_ids:
        matches.append(con_id)
        # you can process the matched and loaded con here
print(matches)
If you have a huge amount of data, you can take a look at the aggregation framework.
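For example, here is a sketch of that approach with a $lookup stage (assuming the same mongo.db.cons / mongo.db.questions collections and the question_id field from the question):
# Join each con to its question on the database side and keep only the ones that matched.
pipeline = [
    {"$lookup": {
        "from": "questions",        # join cons.question_id -> questions._id
        "localField": "question_id",
        "foreignField": "_id",
        "as": "question",
    }},
    {"$match": {"question": {"$ne": []}}},   # drop cons with no matching question
    {"$project": {"question_id": 1}},
]
matches = [doc["question_id"] for doc in mongo.db.cons.aggregate(pipeline)]
print(matches)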

Problems storing information of JSON into dictionary for loop in python

I'm new to APIs and web development, so I'm sorry if my question is very basic :(.
I want to create a web page to search for food recipes based on the ingredients they contain. I'm using two query URLs to obtain the information because I need to access two JSON files: the first one to obtain the id of each recipe based on the ingredient searched by the user, and the second one to obtain the information of each recipe based on the id returned by the first URL.
The code I have is this one:
# Function that returns the ids of recipes that contain the word queried by the user.
def ids(query):
    try:
        api_key = os.environ.get("API_KEY")
        response = requests.get(f"https://api.spoonacular.com/recipes/autocomplete?apiKey={api_key}&query={urllib.parse.quote_plus(query)}")
        response.raise_for_status()
    except requests.RequestException:
        return response
    try:
        ids = []
        quotes = response.json()
        for quote in quotes:
            ids.append(quote['id'])
        return ids
    except (KeyError, TypeError, ValueError):
        return None
# Save in a list named "ids" the ids of recipes that contain the ingredient chicken
ids = ids("chicken")
# Function that returns the different recipe options based on the ids.
def lookup(ids):
    for ID in ids:
        try:
            api_key = os.environ.get("API_KEY")
            response = requests.get(f"https://api.spoonacular.com/recipes/{ID}/information?apiKey={api_key}&includeNutrition=false")
            response.raise_for_status()
        except requests.RequestException:
            return response
The main issue is that I don't know how to store the information returned in response. As you may notice, inside the "lookup" function I use a loop to get a response for every ID contained in the list ids, so I'll obtain one response per ID (for instance, if I have 6 ids, I'll obtain 6 different responses with 6 different JSON payloads).
Finally, the info I want to store is this:
quote = response.json()
results = {'id':quote["id"],'title':quote["title"],'url':quote["sourceUrl"]}
This is the link with a sample of the data and the url used to obtain the json
https://spoonacular.com/food-api/docs#Get-Recipe-Information
I'm stuck trying to store this information, located inside the different JSON responses, in a dictionary using Python.
Any kind of help would be amazing!!
You would best use a dict with a structure matching the recipes you get back.
Assuming the API returns name, duration and difficulty, that these are fields you will use later, and that your program also stores other data besides recipes, you could use a dict. If this is not the case, simply use a list of dicts that represent single recipes.
# just a dummy setup to simulate getting different recipes back from the API
one_response = {"name": "Chicken and Egg", "duration": 14, "difficulty": "easy"}
another_response = {"name": "Chicken square", "duration": 100, "difficulty": "hard"}

def get_recipe(id):
    if id == 1:
        return one_response
    else:
        return another_response

ids = [1, 2]
# Other information could live here as well, captured somewhere else.
# If you don't have this, simply use a list with the recipe dicts inside.
queried_recipes = {"recipes": []}
for i in ids:
    # Here you simply add a recipe to your recipes dict
    queried_recipes["recipes"].append(get_recipe(i))
print(queried_recipes)
OUT: {'recipes': [{'name': 'Chicken and Egg', 'duration': 14, 'difficulty': 'easy'}, {'name': 'Chicken square', 'duration': 100, 'difficulty': 'hard'}]}
print(queried_recipes["recipes"][0]["duration"])
OUT: 14
You may want to use https://spoonacular.com/food-api/docs#Get-Recipe-Information-Bulk instead. That will get you all the information you want in one JSON document without having to loop through repeated calls to https://api.spoonacular.com/recipes/{ID}/information.
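A sketch of that bulk approach, hedged because the exact endpoint path should be checked against the linked docs (this assumes a /recipes/informationBulk route that accepts a comma-separated ids parameter):
import os
import requests

def lookup_bulk(ids):
    # One request for all recipes instead of one request per id.
    api_key = os.environ.get("API_KEY")
    response = requests.get(
        "https://api.spoonacular.com/recipes/informationBulk",
        params={"apiKey": api_key, "ids": ",".join(str(i) for i in ids)},
    )
    response.raise_for_status()
    return [
        {"id": q["id"], "title": q["title"], "url": q["sourceUrl"]}
        for q in response.json()
    ]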
However, to answer the original question:
def lookup(ids):
    api_key = os.environ.get("API_KEY")
    results = []
    for ID in ids:
        response = requests.get(f"https://api.spoonacular.com/recipes/{ID}/information?apiKey={api_key}&includeNutrition=false")
        response.raise_for_status()
        quote = response.json()
        result = {'id': quote["id"], 'title': quote["title"], 'url': quote["sourceUrl"]}
        results.append(result)
    return results

Getting results from nested json with variable dict keys in python

I would like to extract "date", "account", "extended_tweet" and "location" from a nested JSON file whose dict keys run from 1 to 500.
How can I loop through all 500 dicts with my code snippet in Python 3.8?
Right now I just get the same result for the first dict 500 times.
data = json.load(json_data)
json_data.close()
for i in list(range(0, 501)):
    date = data['1']['created_at']
    account = data['1']['user']['name']
    extended_tweet = data['1']['retweeted_status']['extended_tweet']['full_text']
    location = data['1']['user']['location']
    print(date, account, extended_tweet, location)
Try this:
for i in range(1, 501):
    date = data[str(i)]['created_at']
    account = data[str(i)]['user']['name']
    extended_tweet = data[str(i)]['retweeted_status']['extended_tweet']['full_text']
    location = data[str(i)]['user']['location']
    print(date, account, extended_tweet, location)
Faults in your code:
You were looping through the range but not using the loop counter to build the key. You need the keys '1', '2', ... on successive iterations, but you were using the same '1' every time.
You need to use range(1, 501); instead you were using range(0, 501), which would also try to fetch the key '0'.
An improvement, not a fault: you don't need list(range()); you can use range() directly for looping.
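If the keys are not guaranteed to be exactly '1' through '500', a minimal alternative sketch (field names taken from the question) is to iterate the dict itself:
# Walk whatever keys the file actually contains instead of assuming '1'..'500'.
for key, tweet in data.items():
    date = tweet['created_at']
    account = tweet['user']['name']
    extended_tweet = tweet['retweeted_status']['extended_tweet']['full_text']
    location = tweet['user']['location']
    print(date, account, extended_tweet, location)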

python: using Sets and .intersection on a file of twitter followers & friends ids

I have a Couch DB with followers and friends ids of a single twitter user. Friends are identified under the group “friend_edges” and followers under “follower_edges”.
I am trying to find ids of those who are both followers and friends (at the same time) of that user.
In order to do that, I was asked to convert the lists of followers and friends into sets, and then use the intersection operation between sets, like set1.intersection(set2).
Below is my code. It returns only 2 as the number of friends who are also followers. Since the dataset has almost 2,000 ids, I'm positive this value is wrong.
Can someone tell me what is wrong with my code? I appreciate your guidance, but although there are many ways to program these tasks, I do need to use sets and .intersection, so please try and help me using those only... =)
from twitter_login import oauth_login
from twitter_DB import load_from_DB
from sets import Set

def friends_and_followers(users):
    # open a list for friends and another for followers
    friends_list, followers_list = [], []
    # find the users id under the label "friend_edges"
    if id in users["friend_edges"]:
        # loop in the "friend_edges" group and find the id's values
        for value in id:
            # add value to the list of friends
            friends_list += value
    # put the rest of the ids under the followers' list
    else:
        followers_list += value
    return friends_list, followers_list
    print friends_list, followers_list
    # convert list of friends into a set
    flist = set(friends_list)
    # convert list of followers into a set
    follwlist = set(followers_list)
if __name__ == '__main__':
    twitter_api = oauth_login()
    # check couchdb to look at this database
    DBname = 'users-thatguy-+-only'
    # load all the tweets
    ff_results = load_from_DB(DBname)
    # show number loaded
    print 'number loaded', len(ff_results)
    # iterate over values in the file
    for user_id in ff_results:
        # run the function over the values
        both_friends_followers = friends_and_followers(user_id)
        print "Friends and Followers of that guy: ", len(both_friends_followers)
The reason you get a length of two is that you return this:
return friends_list, followers_list
which is a tuple of two lists; you then take the length of that tuple, which is two.
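A tiny illustration of that point (hypothetical stand-in function, just to show the shape of the return value):
def friends_and_followers_stub():
    # stands in for the real function, which also returns two lists
    return [1, 2, 3], [4, 5]

result = friends_and_followers_stub()
print(len(result))                   # 2: the length of the tuple, not of either list
friends, followers = result          # unpack to work with each list separately
print(len(friends), len(followers))  # 3 2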
I managed to convert from dictionary to set by extracting the values and adding those to a list using list.append(), as follows:
if 'friend_edges' in doc.keys():
    flist = []
    for x in doc['friend_edges']:
        flist.append(x)
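Building on that, here is a minimal Python 3 sketch of the sets-based intersection itself (assuming each CouchDB document carries 'friend_edges' and 'follower_edges' keyed by user id, as described in the question):
# The keys of the edge dicts are the user ids, so set() over them gives id sets.
friends = set(doc.get('friend_edges', {}))
followers = set(doc.get('follower_edges', {}))
both = friends.intersection(followers)
print("Friends and followers of that guy:", len(both))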
