I have a mongodb collection that looks something like the following:
{
    "_id": ObjectId("123456789"),
    "continent_name": "Europe",
    "continent_id": "001",
    "countries": [
        {
            "country_name": "France",
            "country_id": "011",
            "cities": [
                { "city_name": "Paris", "city_id": "101" },
                { "city_name": "Marseille", "city_id": "102" }
            ]
        },
        {
            "country_name": "England",
            "country_id": "012",
            "cities": [
                { "city_name": "London", "city_id": "201" },
                { "city_name": "Bath", "city_id": "202" }
            ]
        }
    ]
}
And so on for other continents>countries>cities.
I'm unclear on what approach to take when updating this collection.
Let's say I run my data collection again and discover a new city in England, giving a cities array of [London, Bath, Manchester]. How do I update Europe > England > cities Pythonically without touching France?
Is it possible to search where(continent=Europe && country=England)?
My current working theory is to do something like the following:
def mongo_add_document(continent, country, cities):
    data = {
        "continent_name": continent["name"],
        "continent_id": continent["id"],
        "countries": [
            {
                "country_name": country["name"],
                "country_id": country["id"],
                "cities": [
                    {"city_name": city["name"], "city_id": city["id"]} for city in cities
                ]
            }
        ]
    }
    db.cities.find_one_and_update(
        {"continent_id": continent["id"]},
        data,
        upsert=True
    )
But my concern is this will overwrite other countries in the continent document.
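That concern is justified: a plain replacement document swaps out the whole countries array. An updateOne with the positional $ operator instead targets only the matched country; in the mongo shell: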
db.getCollection('cities').updateOne(
    { "continent_name": "Europe",
      "countries.country_name": "England",
      "countries.cities.city_name": { $ne: "Manchester" } },
    { $push: { "countries.$.cities": { "city_name": "Manchester", "city_id": "whatever" } } }
)
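And a minimal PyMongo sketch of the same update, assuming a pymongo database handle (the connection string and database name are placeholders):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["mydb"]  # placeholder database name

# $ resolves to the index of the matched "countries" element, so only
# England's cities array is pushed to; France is left untouched.
# The $ne clause makes the update a no-op if Manchester is already present.
db.cities.update_one(
    {
        "continent_name": "Europe",
        "countries.country_name": "England",
        "countries.cities.city_name": {"$ne": "Manchester"},
    },
    {"$push": {"countries.$.cities": {"city_name": "Manchester", "city_id": "whatever"}}},
)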
I am using Github's GraphQL API to fetch some issue details.
I used Python Requests to fetch the data locally.
This is what output.json looks like:
{
"data": {
"viewer": {
"login": "some_user"
},
"repository": {
"issues": {
"edges": [
{
"node": {
"id": "I_kwDOHQ63-s5auKbD",
"title": "test issue 1",
"number": 146,
"createdAt": "2023-01-06T06:39:54Z",
"closedAt": null,
"state": "OPEN",
"updatedAt": "2023-01-06T06:42:00Z",
"comments": {
"edges": [
{
"node": {
"id": "IC_kwDOHQ63-s5R2XCV",
"body": "comment 01"
}
},
{
"node": {
"id": "IC_kwDOHQ63-s5R2XC9",
"body": "comment 02"
}
}
]
},
"labels": {
"edges": []
}
},
"cursor": "Y3Vyc29yOnYyOpHOWrimww=="
},
{
"node": {
"id": "I_kwDOHQ63-s5auKm8",
"title": "test issue 2",
"number": 147,
"createdAt": "2023-01-06T06:40:34Z",
"closedAt": null,
"state": "OPEN",
"updatedAt": "2023-01-06T06:40:34Z",
"comments": {
"edges": []
},
"labels": {
"edges": [
{
"node": {
"name": "food"
}
},
{
"node": {
"name": "healthy"
}
}
]
}
},
"cursor": "Y3Vyc29yOnYyOpHOWripvA=="
}
]
}
}
}
}
The JSON was put inside a list using:
result = response.json()["data"]["repository"]["issues"]["edges"]
And then this list was put inside a DataFrame
import pandas as pd
df = pd.DataFrame(result, columns=['node', 'cursor'])
df
These are the contents of the data frame
| id | title | number | createdAt | closedAt | state | updatedAt | comments | labels |
|----|-------|--------|-----------|----------|-------|-----------|----------|--------|
| I_kwDOHQ63-s5auKbD | test issue 1 | 146 | 2023-01-06T06:39:54Z | None | OPEN | 2023-01-06T06:42:00Z | {'edges': [{'node': {'id': 'IC_kwDOHQ63-s5R2XCV', 'body': 'comment 01'}}, {'node': {'id': 'IC_kwDOHQ63-s5R2XC9', 'body': 'comment 02'}}]} | {'edges': []} |
| I_kwDOHQ63-s5auKm8 | test issue 2 | 147 | 2023-01-06T06:40:34Z | None | OPEN | 2023-01-06T06:40:34Z | {'edges': []} | {'edges': [{'node': {'name': 'food'}}, {'node': {'name': 'healthy'}}]} |
I would like to split/explode the comments and labels columns.
The values in these columns are nested dictionaries.
I would like there to be as many rows for a single issue as there are comments and labels.
I would like to flatten out the data frame, so this involves split/explode and concat.
There are several Stack Overflow answers that cover this topic, and I have tried the code from several of them.
I cannot paste the links to those questions, because Stack Overflow marks my question as spam due to too many links.
But these are the steps I have tried:
df3 = df2['comments'].apply(pd.Series)
Drill down further
df4 = df3['edges'].apply(pd.Series)
df4
Drill down further
df5 = df4['node'].apply(pd.Series)
df5
The last statement above gives me KeyError: 'node'.
I understand this is because 'node' is not a key in the DataFrame.
But how else can I split this dictionary and concatenate all the columns back onto my issue rows?
This is how I would like the output to look:
| id | title | number | createdAt | closedAt | state | updatedAt | comments | labels |
|----|-------|--------|-----------|----------|-------|-----------|----------|--------|
| I_kwDOHQ63-s5auKbD | test issue 1 | 146 | 2023-01-06T06:39:54Z | None | OPEN | 2023-01-06T06:42:00Z | comment 01 | Null |
| I_kwDOHQ63-s5auKbD | test issue 1 | 146 | 2023-01-06T06:39:54Z | None | OPEN | 2023-01-06T06:42:00Z | comment 02 | Null |
| I_kwDOHQ63-s5auKm8 | test issue 2 | 147 | 2023-01-06T06:40:34Z | None | OPEN | 2023-01-06T06:40:34Z | Null | food |
| I_kwDOHQ63-s5auKm8 | test issue 2 | 147 | 2023-01-06T06:40:34Z | None | OPEN | 2023-01-06T06:40:34Z | Null | healthy |
If dct is your dictionary from the question you can try:
df = pd.DataFrame(d['node'] for d in dct['data']['repository']['issues']['edges'])
df['comments'] = df['comments'].str['edges']
df = df.explode('comments')
df['comments'] = df['comments'].str['node'].str['body']
df['labels'] = df['labels'].str['edges']
df = df.explode('labels')
df['labels'] = df['labels'].str['node'].str['name']
print(df.to_markdown(index=False))
Prints:
| id | title | number | createdAt | closedAt | state | updatedAt | comments | labels |
|----|-------|--------|-----------|----------|-------|-----------|----------|--------|
| I_kwDOHQ63-s5auKbD | test issue 1 | 146 | 2023-01-06T06:39:54Z | | OPEN | 2023-01-06T06:42:00Z | comment 01 | nan |
| I_kwDOHQ63-s5auKbD | test issue 1 | 146 | 2023-01-06T06:39:54Z | | OPEN | 2023-01-06T06:42:00Z | comment 02 | nan |
| I_kwDOHQ63-s5auKm8 | test issue 2 | 147 | 2023-01-06T06:40:34Z | | OPEN | 2023-01-06T06:40:34Z | nan | food |
| I_kwDOHQ63-s5auKm8 | test issue 2 | 147 | 2023-01-06T06:40:34Z | | OPEN | 2023-01-06T06:40:34Z | nan | healthy |
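Why this works: on an object-dtype column, pandas' .str accessor indexes into each element, so .str['edges'] and .str['node'] drill into the stored dicts, and explode then turns each list element into its own row while repeating the issue columns.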
@andrej-kesely has answered my question.
I have selected his response as the answer for this question.
I am now posting a consolidated script that includes my poor code and Andrej's great code.
In this script I want to fetch details from GitHub's GraphQL API and put them inside pandas.
The primary source for this script is this gist.
And a major chunk of the remaining code is an answer by @andrej-kesely.
Now onto the consolidated script.
First import the necessary packages and set headers
import requests
import json
import pandas as pd
headers = {"Authorization": "token <your_github_personal_access_token>"}
Now define the query that will fetch data from github.
In my particular case, I am fetching issue details from a particular repo; it can be something else for you.
query = """
{
viewer {
login
}
repository(name: "your_github_repo", owner: "your_github_user_name") {
issues(states: OPEN, last: 2) {
edges {
node {
id
title
number
createdAt
closedAt
state
updatedAt
comments(first: 10) {
edges {
node {
id
body
}
}
}
labels(orderBy: {field: NAME, direction: ASC}, first: 10) {
edges {
node {
name
}
}
}
}
cursor
}
}
}
}
"""
Execute the query and save the response
def run_query(query):
    request = requests.post('https://api.github.com/graphql', json={'query': query}, headers=headers)
    if request.status_code == 200:
        return request.json()
    else:
        raise Exception("Query failed to run by returning code of {}. {}".format(request.status_code, query))
result = run_query(query)
And now is the trickiest part.
In my query response, there are several nested dictionaries.
I would like to split them - more details in my question above.
This magic code from @andrej-kesely does that for you.
df = pd.DataFrame(d['node'] for d in result['data']['repository']['issues']['edges'])
df['comments'] = df['comments'].str['edges']
df = df.explode('comments')
df['comments'] = df['comments'].str['node'].str['body']
df['labels'] = df['labels'].str['edges']
df = df.explode('labels')
df['labels'] = df['labels'].str['node'].str['name']
print(df)
I have a JSON structured like this:
{
"data": [
{
"groups": {
"data": [
{
"group_name": "Wedding planning - places and bands (and others) to recommend!",
"date_joined": "2009-03-12 01:01:08.677427"
},
{
"group_name": "Harry Potter and the Deathly Hollows",
"date_joined": "2009-01-15 01:38:06.822220"
},
{
"group_name": "Xbox , Playstation, Wii - console fans",
"date_joined": "2010-04-02 04:02:58.078934"
}
]
},
"id": "0"
},
{
"groups": {
"data": [
{
"group_name": "Lost&Found (Strzegom)",
"date_joined": "2010-02-01 14:13:34.551920"
},
{
"group_name": "Tennis, Squash, Badminton, table tennis - looking for sparring partner (Strzegom)",
"date_joined": "2008-09-24 17:29:43.356992"
}
]
},
"id": "1"
}
]
}
How does one parse JSON in this form? Should I try building a class resembling this format? My desired output is a CSV where the index is the "id"; in the first column I have the most recently joined group, in the second column the second most recently joined group, and so on.
Meaning the result of this would be:
   most recent                              second most recent
0  Xbox , Playstation, Wii - console fans   Wedding planning - places and bands (and others) to recommend!
1  Lost&Found (Strzegom)                    Tennis, Squash, Badminton, table tennis - looking for sparring partner (Strzegom)
A solution could look like this:
import json
import time

import pandas as pd

# assumes the JSON from the question is saved in a file
with open('data.json') as f:
    data = json.load(f)

result = []
# number of group_name entries per id; for this example it is [3, 2]
max_element_group_name = [len(data['data'][i]['groups']['data']) for i in range(len(data['data']))]
max_element_group_name.sort()
for i in range(len(data['data'])):
    # get the id for each groups entry
    id = data['data'][i]['id']
    # sort the groups by date_joined, newest first
    sorted_groups_by_date = sorted(
        data['data'][i]['groups']['data'],
        key=lambda x: time.strptime(x['date_joined'], '%Y-%m-%d %H:%M:%S.%f'),
        reverse=True)
    # take as many group names as the smallest group count, here 2
    group_names = [sorted_groups_by_date[j]['group_name'] for j in range(max_element_group_name[0])]
    # append the id together with its group names
    result.append([id] + group_names)
# create a df from the list
df = pd.DataFrame(result, columns=['id', 'most recent', 'second most recent'])
# it could be better.
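To get the CSV the question asks for, with id as the index, one more step would presumably do (the filename is a placeholder):

# write the result out with "id" as the index, as requested
df.set_index('id').to_csv('groups.csv')  # placeholder filename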
I'm reading a DataFrame and converting it into a JSON file. I'm using Python 3 and pandas 0.25.3. I already got some help from you guys (Manipulating data of Pandas dataframe), but I have some questions about the code and how it works.
My dataframe:
id  label      id_customer  label_customer  part_number  number_client
6   Sao Paulo  CUST-99992   Brazil          7897         982
6   Sao Paulo  CUST-99992   Brazil          888          12
92  Hong Kong  CUST-88888   China           147          288
Code:
import pandas as pd
data = pd.read_excel(path)
data[["part_number","number_client"]] = data[["part_number","number_client"]].astype(str)
f = lambda x: x.split('_')[0]
j =(data.groupby(["id","label","id_customer","label_customer"])['part_number','number_client']
.apply(lambda x: x.rename(columns=f).to_dict('r')).reset_index(name='Number')
.groupby(["id", "label"])[ "id_customer", "label_customer", "Number"]
.apply(lambda x: x.rename(columns=f).to_dict('r')).reset_index(name='Customer')
.to_json(orient='records'))
print (j)
The JSON I'm getting:
[{
"id": 6,
"label": "Sao Paulo",
"Customer": [{
"id": "CUST-99992",
"label": "Brazil",
"number": [{
"part": "7897",
"client": "982"
},
{
"part": "888",
"client": "12"
}
]
}]
},
{
"id": 92,
"label": "Hong Kong",
"Customer": [{
"id": "CUST-888888",
"label": "China",
"number": [{
"part": "147",
"client": "288"
}]
}]
}
]
1st Question: The lambda and apply functions are splitting my column names whenever a _ is found. This is just a piece of my dataframe, and there are some columns whose names I'd like to preserve, e.g. I want to get part_number and number_client instead of part and client in my JSON structure. How can I fix this?
2nd Question: I can have different lists with the same key name. E.g. in the Customer list I have a part_number key, but I can also have a key with the same name, holding another value, inside another list, e.g. part_number inside a test list.
3rd Question: In my complete dataframe, I have a column called Additional_information which holds a simple text. I have to get a structure like this:
...
"Additional_information":[{
{
"text": "testing",
}
},
{
"text": "testing again",
}
]
for a dataframe like this:
id  label      id_customer  label_customer  part_number  number_client  Additional_information
6   Sao Paulo  CUST-99992   Brazil          7897         982            testing
6   Sao Paulo  CUST-99992   Brazil          7897         982            testing again
What should I change?
1st Question:
You can write a custom function for the rename, e.g. like:
def f(x):
    vals = ['part_number', 'number_client']
    if x in vals:
        return x
    else:
        return x.split('_')[0]
2nd Question
If I understand correctly, the keys in the final JSON are created from the columns of the original DataFrame, and also by the name parameter of reset_index in my solution. If you want some other logic for changing the keys (column names), it is possible to adapt the first solution.
3rd Question
In the original solution, to_json is changed to to_dict so that the final list of dicts can be modified, e.g. by appending the text info; json.dumps is then used for the JSON in the last step:
import json
def f(x):
    vals = ['part_number', 'number_client']
    if x in vals:
        return x
    else:
        return x.split('_')[0]
d =(data.groupby(["id","label","id_customer","label_customer"])['part_number','number_client']
.apply(lambda x: x.rename(columns=f).to_dict('r')).reset_index(name='Number')
.groupby(["id", "label"])[ "id_customer", "label_customer", "Number"]
.apply(lambda x: x.rename(columns=f).to_dict('r')).reset_index(name='Customer')
.to_dict(orient='records'))
#print (d)
d1 = (data[['Additional_information']].rename(columns={'Additional_information':'text'})
.to_dict(orient='records'))
d1 = {'Additional_information':d1}
print (d1)
{'Additional_information': [{'text': 'testing'}, {'text': 'testing again'}]}
d.append(d1)
#print (d)
j = json.dumps(d)
#print (j)
I have an item registered on DynamoDB with the following structure:
{
"OwnerID":"12312wqeq",
"license":"23423werwegdf",
"MaintenanceList":{
"10-11-2018":{
"garage" : "lopcars",
"city" : "NY",
"country" "USA",
"location" : "1929-1927 Fulton St Brooklyn"
}
}
}
I need to add a new Maintenance to the list, and I tried this:
response = table.update_item(
    Key={
        "OwnerID": "12312wqeq",
        "license": "23423werwegdf",
    },
    UpdateExpression="SET #d1 = :dt",
    ExpressionAttributeValues={
        ':dt': {
            "12-11-2019": {
                "garage": "Crazycars",
                "city": "NY",
                "country": "USA",
                "location": "120 E Suffolk Ave Central Islip",
            }
        }
    },
    ExpressionAttributeNames={
        '#d1': 'MaintenanceList'
    },
    ReturnValues="UPDATED_NEW"
)
but it overwrites the MaintenanceList attribute, and I need it to look like this after the update:
{
"OwnerID":"12312wqeq",
"license":"23423werwegdf",
"MaintenanceList":{
"10-11-2018":{
"garage" : "lopcars",
"city" : "NY",
"country" "USA",
"location" : "1929-1927 Fulton St Brooklyn"
},
"12-11-2019":{
"garage" : "Crazycars",
"city" : "NY",
"country" "USA",
"location" : "120 E Suffolk Ave Central Islip"
}
}
}
The SET MaintenanceList = :dt expression indeed replaces the value of the attribute called MaintenanceList. If you want the content of this attribute to be a hash table and add to it, you need to update it using a nested attribute path, as explained in this DynamoDB documentation. For example, do something like SET #d1.#date = :dt.
However, note that keeping a hash table inside a single attribute's value is problematic - its total size is strictly limited (to 400KB) and you'll also pay for the entire item size every time you read or write a small part of it.
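A minimal boto3 sketch of that fix, reusing the table handle from the question (#date is just a placeholder name for the new map key):

response = table.update_item(
    Key={
        "OwnerID": "12312wqeq",
        "license": "23423werwegdf",
    },
    # Writing to a nested path touches only this one key of the map,
    # leaving the existing "10-11-2018" entry in place.
    UpdateExpression="SET #d1.#date = :dt",
    ExpressionAttributeNames={
        "#d1": "MaintenanceList",
        "#date": "12-11-2019",
    },
    ExpressionAttributeValues={
        ":dt": {
            "garage": "Crazycars",
            "city": "NY",
            "country": "USA",
            "location": "120 E Suffolk Ave Central Islip",
        }
    },
    ReturnValues="UPDATED_NEW",
)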
I have a nested JSON, and I want to transform it into a pandas DataFrame. I was able to normalize it with json_normalize.
However, there are still JSON layers within the DataFrame, which I also want to unpack. How can I do this in the best way? I will likely have to deal with this a few more times within the project I am currently doing.
The JSON I have is the following:
{
"data": {
"allOpportunityApplication": {
"data": [
{
"id": "111111111",
"opportunity": {
"programme": {
"short_name": "XX"
}
},
"person": {
"home_lc": {
"name": "NAME"
}
},
"standards": [
{
"constant_name": "constant1",
"standard_option": {
"option": "true"
}
},
{
"constant_name": "constant2",
"standard_option": {
"option": "true"
}
}
]
}
]
}
}
}
I used json_normalize:
standards_df = json_normalize(
standard_json['allOpportunityApplication']['data'],
record_path=['standards'],
meta=['id','person','opportunity']
)
With that I get a dataframe with the columns: constant_name, standard_option, id, person, opportunity. The problem is that standard_option, person, and opportunity contain JSON, each with a single value inside.
The current output and expected output for each column are as follows.
Standard_option
Currently an item in the column "standard_option" looks like:
{'option': 'true'}
I want it to be just true
Person
Currently an item in the column "person" looks like:
{'home_lc': {'name': 'NAME'}}
I want it to look like: NAME
Opportunity
Currently an item in the column "opportunity" looks like:
{'programme': {'short_name': 'XX'}}
I want it to look like: XX
Might not be the best way, but I think it works.
standards_df['person'] = (standards_df.loc[:, 'person']
.apply(lambda x: x['home_lc']['name']))
standards_df['opportunity'] = (standards_df.loc[:, 'opportunity']
.apply(lambda x: x['programme']['short_name']))
   constant_name  standard_option.option  id         person  opportunity
0  constant1      true                    111111111  NAME    XX
1  constant2      true                    111111111  NAME    XX
standard_option was already fine when I ran your code.
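If it had still been a raw dict column, the same pattern used above for person and opportunity would presumably apply:

# hypothetical: only needed if standard_option were still a dict column
standards_df['standard_option'] = standards_df['standard_option'].str['option']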