Explode function - python

This is my first question on here. I have searched around on here and throughout the web and I seem unable to find the answer to my question. I'm trying to explode a list in a JSON file out into multiple columns and rows. Everything I have tried so far has proven unsuccessful.
I am doing this over multiple JSON files within a directory, in order to have the dataframe print out like this.
Goal:
did | Version | Nodes | rds | time | c | sc | f | uc
(four such rows, one per fusage entry, with did, Version, Nodes, and rds repeated on each)
Instead I get this in my dataframe:
did | Version | Nodes | rds | fusage
(one row per file, with everything in fusage left in a single column)
Example of the JSON I'm working with (the JSON structure will not change):
{
    "did": "123456789",
    "mId": "1a2b3cjsks",
    "timestamp": "2021-11-26T11:10:58.322000",
    "beat": {
        "did": "123456789",
        "collectionTime": "2010-05-26 11:10:58.004783+00",
        "Nodes": 6,
        "Version": "v1.4.6-2",
        "rds": "0.00B",
        "fusage": [
            {
                "time": "2010-05-25",
                "c": "string",
                "sc": "string",
                "f": "string",
                "uc": "int"
            },
            {
                "time": "2010-05-19",
                "c": "string",
                "sc": "string",
                "f": "string",
                "uc": "int"
            },
            {
                "t": "2010-05-23",
                "c": "string",
                "sc": "string",
                "f": "string",
                "uc": "int"
            },
            {
                "time": "2010-05-23",
                "c": "string",
                "sc": "string",
                "f": "string",
                "uc": "int"
            }
        ]
    }
}
My end goal is to get the dataframe out to a CSV in order to be ingested. I appreciate everyone's help looking at this.
Using Python 3.8.10 & pandas 1.3.4.
Python code below:
import csv
import glob
import json
import os

import pandas as pd

tempdir = '/dir/to/files/json_temp'
json_files = os.path.join(tempdir, '*.json')
file_list = glob.glob(json_files)

dfs = []
for file in file_list:
    with open(file) as f:
        data = pd.json_normalize(json.loads(f.read()))
        dfs.append(data)

df = pd.concat(dfs, ignore_index=True)
df.explode('fusage')
print(df)

If you're going to use the explode function, then afterwards apply pd.Series over the column containing the fusage lists (beat.fusage, since json_normalize flattens the nested keys) to obtain a Series for each list item.
/dir/to/files
├── example-v1.4.6-2.json
└── example-v2.2.2-2.json
...
for file in file_list:
    with open(file) as f:
        data = pd.json_normalize(json.loads(f.read()))
        dfs.append(data)

df = pd.concat(dfs, ignore_index=True)

# explode the list column, then expand each dict into its own columns
fusage_list = df.explode('beat.fusage')['beat.fusage'].apply(pd.Series)
df = pd.concat([df, fusage_list], axis=1)
# show desired columns
df = df[['did', 'beat.Version', 'beat.Nodes', 'beat.rds', 'time', 'c', 'sc', 'f', 'uc']]
print(df)
Output from df
did beat.Version beat.Nodes beat.rds time c sc f uc
0 123456789 v1.4.6-2 6 0.00B 2010-05-25 string string string int
0 123456789 v1.4.6-2 6 0.00B 2010-05-19 string string string int
0 123456789 v1.4.6-2 6 0.00B NaN string string string int
0 123456789 v1.4.6-2 6 0.00B 2010-05-23 string string string int
1 123777777 v2.2.2-2 4 0.00B 2010-05-25 string string string int
1 123777777 v2.2.2-2 4 0.00B 2010-05-19 string string string int
1 123777777 v2.2.2-2 4 0.00B NaN string string string int
1 123777777 v2.2.2-2 4 0.00B 2010-05-23 string string string int
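Since the stated end goal is a CSV, the finished frame can be written straight out; a minimal follow-up (the out.csv filename is just a placeholder):

df.to_csv('out.csv', index=False)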

Related

Convert http text response to pandas dataframe [duplicate]

I want to convert the text below into a pandas dataframe. Is there a way to use a pre-built or built-in pandas parser for the conversion? I can write a custom parsing function, but I want to know if there is a pre-built and/or fast solution.
In this example, the dataframe should result in two rows, one each for ABC & PQR:
{
    "data": [
        {
            "ID": "ABC",
            "Col1": "ABC_C1",
            "Col2": "ABC_C2"
        },
        {
            "ID": "PQR",
            "Col1": "PQR_C1",
            "Col2": "PQR_C2"
        }
    ]
}
You've listed everything you need as tags. Use json.loads to get a dict from the string:
import json

import pandas as pd

d = json.loads('''{
    "data": [
        {
            "ID": "ABC",
            "Col1": "ABC_C1",
            "Col2": "ABC_C2"
        },
        {
            "ID": "PQR",
            "Col1": "PQR_C1",
            "Col2": "PQR_C2"
        }
    ]
}''')
df = pd.DataFrame(d['data'])
print(df)
Output:
ID Col1 Col2
0 ABC ABC_C1 ABC_C2
1 PQR PQR_C1 PQR_C2
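Since the question mentions an HTTP text response: json.loads accepts response.text directly, and for a flat list of records like this, pd.json_normalize produces the same frame in one call. A small sketch building on the d parsed above:

df = pd.json_normalize(d['data'])
# same output:
#     ID    Col1    Col2
# 0  ABC  ABC_C1  ABC_C2
# 1  PQR  PQR_C1  PQR_C2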

Converting Excel to JSON using Pandas in Python 3.9

This is my first ever post here so go easy! :) I am attempting to convert data from Excel to JSON using the Python Pandas library.
I have data in Excel that looks like the table below. The columns labelled "Unnamed: x" are blank; I used those headers because that's how they come out when converting to JSON. There are around 20 tests formatted like the sample below:
Unnamed: 1   Unnamed: 2   Unnamed: 3   Unnamed: 4
Test 1       Menu         Setting      Value
             Menu1        Setting1     Value1
Test 2       A            B            C
             1            2            3
I would like to put these into JSON to look something like this:
{
    "Test 1": [
        {
            "Menu": "Menu1",
            "Setting": "Setting1",
            "Value": "Value1"
        }
    ]
}
And so on...
I can convert the current code to JSON (but not in the format detailed above), and I have been experimenting with creating different Pandas dataframes in Python. At the moment the JSON data I get looks something like this:
"3":[
{
"Unnamed: 0":"Test1",
"Unnamed: 1":"Menu",
"Unnamed: 2":"Setting",
"Unnamed: 2":"Value"
}
"4":[
{
"Unnamed: 1":"Menu1",
"Unnamed: 2":"Setting1",
"Unnamed: 2":"Value1"
}
So I am doing some manual work (copying and pasting) to set it up in the desired format.
Here is my current code:
import pandas
# Pointing to file location and specifying the sheet name to convert
excel_data_fragment = pandas.read_excel('C:\\Users\\user_name\\tests\\data.xls', sheet_name='Tests')
# Converting to data frame
df = pandas.DataFrame(excel_data_fragment)
# This will get the values in Column A and removes empty values
test_titles = df['Unnamed: 0'].dropna(how="all")
# This is the first set of test values
columnB = df['Unnamed: 1'].dropna(how="all")
# Saving original data in df and removing rows which contain all NaN values to mod_df
mod_df = df.dropna(how="all")
# Converting data frame with NaN values removed to json
df_json = mod_df.apply(lambda x: [x.dropna()], axis=1).to_json()
print(mod_df)
Your Excel sheet is basically composed of several distinct subtables put together (one for each test). The way I would process them in pandas is to use groupby and then handle each group as a table. DataFrame.to_dict will be your friend here to output JSON-able objects.
First, here is some sample data that resembles what you have provided:
import pandas as pd

rows = [
    [],
    [],
    ["Test 1", "Menu", "Setting", "Value"],
    [None, "Menu1", "Setting1", "Value1"],
    [None, "Menu2", "Setting2", "Value2"],
    [],
    [],
    ["Test 2", "A", "B", "C"],
    [None, 1, 2, 3],
    [None, 4, 5, 6],
]
df = pd.DataFrame(rows, columns=[f"Unnamed: {i}" for i in range(1, 5)])
df looks like:
Unnamed: 1 Unnamed: 2 Unnamed: 3 Unnamed: 4
0 None None None None
1 None None None None
2 Test 1 Menu Setting Value
3 None Menu1 Setting1 Value1
4 None Menu2 Setting2 Value2
5 None None None None
6 None None None None
7 Test 2 A B C
8 None 1 2 3
9 None 4 5 6
Then use the following snippet, which cleans up all the missing values in df and turns each subtable into a dict.
# Remove entirely empty rows
df = df.dropna(how="all")
# Fill missing values in column 1
df["Unnamed: 1"] = df["Unnamed: 1"].fillna(method="ffill")

def process_group(g):
    # Drop first column
    g = g.drop("Unnamed: 1", axis=1)
    # Use first row as column names
    g = g.rename(columns=g.iloc[0])
    # Drop first row
    g = g.drop(g.index[0])
    # Convert to dict
    return g.to_dict(orient="records")

output = df.groupby("Unnamed: 1").apply(process_group).to_dict()
In the end, output is equal to:
{
    "Test 1": [
        {
            "Menu": "Menu1",
            "Setting": "Setting1",
            "Value": "Value1"
        },
        {
            "Menu": "Menu2",
            "Setting": "Setting2",
            "Value": "Value2"
        }
    ],
    "Test 2": [
        {
            "A": 1,
            "B": 2,
            "C": 3
        },
        {
            "A": 4,
            "B": 5,
            "C": 6
        }
    ]
}
You can finally get the JSON string by simply using:
import json
output_str = json.dumps(output)
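To run the same snippet on the real workbook instead of the hand-built sample, something along these lines should slot in at the top (path and sheet name taken from the question; header=None plus a rename reproduces the positional "Unnamed: i" labels the snippet expects — a sketch, not tested against the actual file):

import pandas as pd

df = pd.read_excel('C:\\Users\\user_name\\tests\\data.xls', sheet_name='Tests', header=None)
df.columns = [f"Unnamed: {i}" for i in range(1, len(df.columns) + 1)]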

Why is a string integer read incorrectly with pandas.read_json?

I am not one for hyperbole, but I am really stumped by this error and I am sure you will be too.
Here is a simple JSON object:
[
    {
        "id": "7012104767417052471",
        "session": -1332751885,
        "transactionId": "515934477",
        "ts": "2019-10-30 12:15:40 AM (+0000)",
        "timestamp": 1572394540564,
        "sku": "1234",
        "price": 39.99,
        "qty": 1,
        "ex": [
            {
                "expId": 1007519,
                "versionId": 100042440,
                "variationId": 100076318,
                "value": 1
            }
        ]
    }
]
Now I saved the file as ex.json and then executed the following Python code:
import pandas as pd
df = pd.read_json('ex.json')
When I look at the dataframe, the value of my id has changed from "7012104767417052471" to "7012104767417052160".
Does anyone understand why Python does this? I tried it in Node.js and even Excel, and it looks fine everywhere else.
If I do this I get the right id:
import json

from pandas import json_normalize

with open('Siva.json') as data_file:
    data = json.load(data_file)
df = json_normalize(data)
But I want to understand why pandas doesn't parse this JSON correctly.
This is a known issue that has been open since 2018-04-04:
read_json reads large integers as strings incorrectly if dtype not explicitly mentioned #20608
As stated in the issue, explicitly designate the dtype to get the correct number:
import pandas as pd
df = pd.read_json('test.json', dtype={'id': 'int64'})
id session transactionId ts timestamp sku price qty ex
7012104767417052471 -1332751885 515934477 2019-10-30 12:15:40 AM (+0000) 2019-10-30 00:15:40.564 1234 39.99 1 [{'expId': 1007519, 'versionId': 100042440, 'variationId': 100076318, 'value': 1}]
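The root cause, for the record: without an explicit dtype, pandas routes the value through float64, which can only represent integers exactly up to 2**53, so the trailing digits get rounded. A quick demonstration in plain Python:

print(int(float("7012104767417052471")))
# 7012104767417052160  <- the same wrong id the dataframe showed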

Original dict/json from a pd.io.json.json_normalize() dataframe row

I have a pandas dataframe with rows created from dicts, using pd.io.json.json_normalize(). The values (not the keys/column names) in the dataframe have been modified. I want to retrieve a dict, with the same nested format the original dict had, from a row of the dataframe.
sample = {
    "A": {
        "a": 7
    },
    "B": {
        "a": "name",
        "z": {
            "dD": 20,
            "f_f": 3,
        }
    }
}
df = pd.io.json.json_normalize(sample, sep='__')
As expected, df.columns returns:
Index(['A__a', 'B__a', 'B__z__dD', 'B__z__f_f'], dtype='object')
I want to "reverse" the process now.
I can guarantee no string in the original dict(key or value) has a '__' as a substring and neither starts or ends with '_'
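One way to sketch the reversal, relying on the '__' guarantee above (an illustrative approach, not a pandas built-in): split each flattened column name on the separator and rebuild the nesting from a single row.

def row_to_nested_dict(row):
    # Rebuild a nested dict from a flattened row (a Series) by splitting
    # each column name on the '__' separator used by json_normalize
    result = {}
    for col, value in row.items():
        *parents, leaf = col.split('__')
        d = result
        for key in parents:
            d = d.setdefault(key, {})
        d[leaf] = value
    return result

row_to_nested_dict(df.iloc[0])
# {'A': {'a': 7}, 'B': {'a': 'name', 'z': {'dD': 20, 'f_f': 3}}}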

Load a dataframe from a single json object

I have the following json object:
{
    "Name": "David",
    "Gender": "M",
    "Date": "2014-01-01",
    "Address": {
        "Street": "429 Ford",
        "City": "Oxford",
        "State": "DE",
        "Zip": 1009
    }
}
How would I load this into a pandas dataframe so that it orients itself as:
name gender date address
David M 2014-01-01 {...}
What I'm trying now is:
pd.read_json(file)
But it orients it as four records instead of one.
You should read it as a Series and then (optionally) convert to a DataFrame:
df = pd.DataFrame(pd.read_json(file, typ='series')).T
df.shape
#(1, 4)
If your JSON file is composed of one JSON object per line (not an array, not a pretty-printed JSON object), then you can use:
df = pd.read_json(file, lines=True)
and it will do what you want. If the file contains:
{"Name": "David","Gender": "M","Date": "2014-01-01","Address": {"Street": "429 Ford","City": "Oxford","State": "DE","Zip": 1009}}
on one line, then you get a single-row dataframe with the nested Address object kept together in one column.
If you use
df = pd.read_json(file, orient='records')
you can load as 1 key per column, but the sub-keys will be split up into multiple rows.
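For completeness, pd.json_normalize flattens the nested Address into dotted columns in a single call; a minimal sketch, assuming the object is first loaded with json.load (the file path is a placeholder):

import json

import pandas as pd

with open('file.json') as f:  # placeholder path
    record = json.load(f)

df = pd.json_normalize(record)
# one row, with columns: Name, Gender, Date,
# Address.Street, Address.City, Address.State, Address.Zip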
