import pandas as pd
df = pd.DataFrame([{"email": "test#gmail.com"}])
is_upper = lambda x: x.upper() if isinstance(x, str) else x
df = df.applymap(trim_strings)
a = df.to_dict("records")
The response I get:
[{'email': 'test#gmail.com'}]
The response I expected:
[{'email': 'TEST#GMAIL.COM'}]
What can be the issue here?
To get the expected output, try this:
df['email'] = df['email'].str.upper()
Your df:
            email
0  TEST#GMAIL.COM
To get the dictionary:
foo_dict = df.to_dict()
foo_dict
{'email': {0: 'TEST#GMAIL.COM'}}
Full code:
import pandas as pd
df = pd.DataFrame([{"email": "test#gmail.com"}])
df['email'] = df['email'].str.upper()
foo_dict = df.to_dict()
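Note that the likely bug in your original snippet is that applymap is passed trim_strings (defined elsewhere) rather than the is_upper lambda you defined, so nothing gets uppercased. As a minimal sketch of the element-wise route (on pandas 2.1+ use DataFrame.map, since applymap is deprecated there):

import pandas as pd

df = pd.DataFrame([{"email": "test#gmail.com"}])
is_upper = lambda x: x.upper() if isinstance(x, str) else x

# apply the lambda to every cell; use df.map(is_upper) on pandas >= 2.1
df = df.applymap(is_upper)
print(df.to_dict("records"))  # [{'email': 'TEST#GMAIL.COM'}]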
The dataframe is created with the Join_Date and Name columns:
data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
I have split the df by Join_Date; can each group be written to Excel, date by date, using a for loop?
df_split = [df[df['Join_Date'] == i] for i in df['Join_Date'].unique()]
Expected result: each Join_Date group written to the same sheet, one below the other, with blank rows in between.
You can use the ExcelWriter class in pandas:
import pandas as pd
import xlsxwriter
data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
df_split = [df[df['Join_Date'] == i] for i in df['Join_Date'].unique()]
writer = pd.ExcelWriter("example.xlsx", engine='xlsxwriter')
skip_rows = 0
for part in df_split:
    # write each group two rows below the end of the previous one
    part.to_excel(writer, sheet_name='Sheet1', startcol=2, startrow=2 + skip_rows, index=False)
    skip_rows += part.shape[0] + 2
writer.close()
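If instead you want one sheet per date, here is a minimal sketch, under the assumption that each group's Join_Date value is an acceptable sheet name:

import pandas as pd

data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
        'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
df_split = [df[df['Join_Date'] == i] for i in df['Join_Date'].unique()]

with pd.ExcelWriter("example_by_sheet.xlsx", engine='xlsxwriter') as writer:
    for part in df_split:
        # name each sheet after the group's Join_Date value
        part.to_excel(writer, sheet_name=part['Join_Date'].iloc[0], index=False)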
You can do this with plain pandas methods, like this (you can add an empty line if you really need it):
import pandas as pd
data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
def add_header(x):
    x.loc[-1] = 'Join_date', 'Name'
    return x.sort_index().reset_index(drop=True)
df_split = df.groupby(['Join_Date'], group_keys=False)
df_group = df_split.apply(add_header)
df_group.to_excel('output.xlsx', index=False, header=False)
You can add the empty line by editing the add_header function: rows inserted at index -1 and 0.5 sort before and between the existing rows, and reset_index then renumbers them:
def add_header(x):
    x.loc[-1] = ' ', ' '
    x = x.sort_index().reset_index(drop=True)
    x.loc[0.5] = 'Join_date', 'Name'
    x = x.sort_index().reset_index(drop=True)
    return x
I have this code; it works, but I want to get a different result.
import argparse
import pandas as pd

parser = argparse.ArgumentParser()
parser.add_argument('-f', '--fields', help='csv fields', type=lambda s: [str(item) for item in s.split(',')])
fields = parser.parse_args().fields
print(fields)
df = pd.read_csv('data/file.csv', usecols=fields)
print(df.to_json(orient='index'))
Run command python main.py --fields date,campaign,clicks
Result:
['date', 'campaign', 'clicks']
{"0":{"date":"2022-01-06","campaign":"retageting APAC","clicks":1}}
It should return the data in JSON format in a "data" envelope.
Need result:
['date', 'campaign', 'clicks']
{"data":[{"date":"2022-01-06","campaign":"retageting APAC","clicks":1}]}
How to do this?
Create a dictionary with orient='records' first, then convert it to JSON with json.dumps:
import json
import pandas as pd

d = {"0":{"date":"2022-01-06","campaign":"retageting APAC","clicks":1}}
df = pd.DataFrame.from_dict(d, orient='index')
print({"data":df.to_dict(orient='records')})
{'data': [{'date': '2022-01-06', 'campaign': 'retageting APAC', 'clicks': 1}]}
print(json.dumps({"data":df.to_dict(orient='records')}))
{"data": [{"date": "2022-01-06", "campaign": "retageting APAC", "clicks": 1}]}
I am trying to compare two JSON files and then write another JSON with the column names and the differences as yes or no. I am using pandas and numpy.
Below are sample files. In reality these JSONs are dynamic, meaning we don't know upfront how many keys there will be.
Input files:
fut.json:
[
    {
        "AlarmName": "test",
        "StateValue": "OK"
    }
]
curr.json:
[
    {
        "AlarmName": "test",
        "StateValue": "OK"
    }
]
Below is the code I have tried:
import json
import pandas as pd
import numpy as np

with open(r"c:\csv\fut.json", 'r+') as f:
    data_b = json.load(f)
with open(r"c:\csv\curr.json", 'r+') as f:
    data_a = json.load(f)
df_a = pd.json_normalize(data_a)
df_b = pd.json_normalize(data_b)
_, df_a = df_b.align(df_a, fill_value=np.NaN)
_, df_b = df_a.align(df_b, fill_value=np.NaN)
with open(r"c:\csv\report.json", 'w') as _file:
    for col in df_a.columns:
        df_temp = pd.DataFrame()
        df_temp[col + '_curr'], df_temp[col + '_fut'], df_temp[col + '_diff'] = df_a[col], df_b[col], np.where((df_a[col] == df_b[col]), 'No', 'Yes')
        #[df_temp.rename(columns={c:'Missing'}, inplace=True) for c in df_temp.columns if df_temp[c].isnull().all()]
        df_temp.fillna('Missing', inplace=True)
        with pd.option_context('display.max_colwidth', -1):
            _file.write(df_temp.to_json(orient='records'))
Expected output:
[
    {
        "AlarmName_curr": "test",
        "AlarmName_fut": "test",
        "AlarmName_diff": "No"
    },
    {
        "StateValue_curr": "OK",
        "StateValue_fut": "OK",
        "StateValue_diff": "No"
    }
]
Actual output: I am not able to parse it in a JSON validator. The problem is below: the "][" should be replaced by "," to get valid JSON, and I don't know why it prints like that:
[{"AlarmName_curr":"test","AlarmName_fut":"test","AlarmName_diff":"No"}][{"StateValue_curr":"OK","StateValue_fut":"OK","StateValue_diff":"No"}]
Edit1:
I tried the below as well:
_file.write(df_temp.to_json(orient='records',lines=True))
Now I get JSON which is again not parsable: the ',' is missing, and unless I manually add a ',' between the two dicts and '[' and ']' at the beginning and end, it does not parse:
[{"AlarmName_curr":"test","AlarmName_fut":"test","AlarmName_diff":"No"}{"StateValue_curr":"OK","StateValue_fut":"OK","StateValue_diff":"No"}]
Honestly, pandas is overkill for this... however:
load the dataframes as you did
concat them as columns and rename the columns
do the calcs and map the booleans to the desired Yes/No
to_json() returns a string, so use json.loads() to get it back into a list/dict; filter columns to get to your required format
import json
import pandas as pd

data_b = [
    {
        "AlarmName": "test",
        "StateValue": "OK"
    }
]
data_a = [
    {
        "AlarmName": "test",
        "StateValue": "OK"
    }
]
df_a = pd.json_normalize(data_a)
df_b = pd.json_normalize(data_b)
df = pd.concat([df_a, df_b], axis=1)
df.columns = [c + "_curr" for c in df_a.columns] + [c + "_fut" for c in df_a.columns]
# a difference exists where the two values are not equal
df["AlarmName_diff"] = df["AlarmName_curr"] != df["AlarmName_fut"]
df["StateValue_diff"] = df["StateValue_curr"] != df["StateValue_fut"]
df = df.replace({True: "Yes", False: "No"})
js = json.loads(df.loc[:, [c for c in df.columns if c.startswith("Alarm")]].to_json(orient="records"))
js += json.loads(df.loc[:, [c for c in df.columns if c.startswith("State")]].to_json(orient="records"))
js
output
[{'AlarmName_curr': 'test', 'AlarmName_fut': 'test', 'AlarmName_diff': 'No'},
 {'StateValue_curr': 'OK', 'StateValue_fut': 'OK', 'StateValue_diff': 'No'}]
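Since you said the keys are dynamic, here is a hedged sketch that builds the same structure for whatever columns appear; it assumes both files normalize to a single row, as in your samples:

import json
import pandas as pd

def diff_report(data_a, data_b):
    df_a = pd.json_normalize(data_a)
    df_b = pd.json_normalize(data_b)
    # align so both frames share the same columns, filling gaps with NaN
    df_a, df_b = df_a.align(df_b)
    report = []
    for col in df_a.columns:
        a, b = df_a[col].iloc[0], df_b[col].iloc[0]
        report.append({col + "_curr": a,
                       col + "_fut": b,
                       col + "_diff": "No" if a == b else "Yes"})
    return report

print(json.dumps(diff_report(data_a, data_b), indent=2))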
I am able to import data from a JSON file using this code:
import requests
from pandas.io.json import json_normalize
url = "https://datameetgeobk.s3.amazonaws.com/image_list.json"
resp = requests.get(url=url)
df = json_normalize(resp.json()['Images'])
df.head()
But the column "BlockDeviceMappings" is actually a list, and each item has DeviceName and Ebs parameters, which are strings and dicts. How do I further normalize my dataframe to include all the details in separate columns?
My screenshot does not match the one shown in the answer: the Ebs column (second from left) is a dictionary.
import requests
import pandas as pd
url = "https://datameetgeobk.s3.amazonaws.com/image_list.json"
resp = requests.get(url=url)
resp = resp.json()
What you have so far:
df = pd.json_normalize(resp['Images'])
Exploding BlockDeviceMappings into rows while keeping the other top-level keys as meta columns:
inner_keys = [x for x in resp['Images'][0].keys() if x != 'BlockDeviceMappings']
df_bdm = pd.json_normalize(resp['Images'], record_path=['BlockDeviceMappings'], meta=inner_keys, errors='ignore')
Separate bdm_df:
bdm_df = pd.json_normalize(resp['Images'], record_path=['BlockDeviceMappings'])
You will no doubt wonder why df has 39995 entries, while bdm_df has 131691 entries. This is because BlockDeviceMappings is a list of dicts of varying lengths:
bdm_len = [len(x) for x in df.BlockDeviceMappings]
max(bdm_len)
>>> 31
Sample BlockDeviceMappings entry:
[{'DeviceName': '/dev/sda1',
'Ebs': {'DeleteOnTermination': True,
'SnapshotId': 'snap-0aac2591b85fe677e',
'VolumeSize': 80,
'VolumeType': 'gp2',
'Encrypted': False}},
{'DeviceName': 'xvdb',
'Ebs': {'DeleteOnTermination': True,
'SnapshotId': 'snap-0bd8d7828225924a7',
'VolumeSize': 80,
'VolumeType': 'gp2',
'Encrypted': False}}]
df_bdm.head()
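As a quick check on the nesting, json_normalize also flattens the inner Ebs dicts into dotted column names; a minimal sketch on the sample entry above:

import pandas as pd

sample = [{'DeviceName': '/dev/sda1',
           'Ebs': {'DeleteOnTermination': True,
                   'SnapshotId': 'snap-0aac2591b85fe677e',
                   'VolumeSize': 80,
                   'VolumeType': 'gp2',
                   'Encrypted': False}}]
# nested dicts become 'Ebs.DeleteOnTermination', 'Ebs.SnapshotId', ...
print(pd.json_normalize(sample).columns.tolist())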
I have a list of places from an Excel file which I would like to enrich with geonames IDs. Starting from the Excel file I made a pandas DataFrame, and I would like to use the values from the DF as params in my request.
Here is the script I made:
import pandas as pd
import requests
import json
require_cols = [1]
required_df = pd.read_excel('grp.xlsx', usecols = require_cols)
print(required_df)
url = 'http://api.geonames.org/searchJSON?'
params = {'username': "XXXXXXXX",
          'name_equals': (required_df),
          'maxRows': "1"}
e = requests.get(url, params=params)
pretty_json = json.loads(e.content)
print (json.dumps(pretty_json, indent=2))
The problem is related to the definition of this parameter:
'name_equals': (required_df)
I would like to use the places (around 15k) from the DF as the param, iteratively retrieve the related geonames IDs, and write the output to a separate Excel file.
The simple request works:
import requests
import json
url = 'http://api.geonames.org/searchJSON?'
params = {'username': "XXXXXXX",
          'name_equals': "Aire",
          'maxRows': "1"}
e = requests.get(url, params=params)
pretty_json = json.loads(e.content)
print (json.dumps(pretty_json, indent=2))
#print(e.content)
As does the definition of the pandas DataFrame:
# import pandas lib as pd
import pandas as pd
require_cols = [0,1]
# only read specific columns from an excel file
required_df = pd.read_excel('grp.xlsx', usecols = require_cols)
print(required_df)
I also tried via SPARQL without results, so I decided to go via Python.
Thanks for your time.
You can use a for-loop:
import pandas as pd
df = pd.DataFrame({'Places': ['London', 'Paris', 'Berlin']})
for item in df['Places']:
    print('requests for:', item)
    # ... rest of code ...
or df.apply():
import pandas as pd
def run(item):
    print('requests for:', item)
    # ... rest of code ...
    return 'result for ' + item
df = pd.DataFrame({'Places': ['London', 'Paris', 'Berlin']})
df['Results'] = df['Places'].apply(run)
Thanks @furas for your reply.
I solved it like this:
import pandas as pd
import requests
import json

url = 'http://api.geonames.org/searchJSON?'
df = pd.read_excel('Book.xlsx', sheet_name='Sheet1', usecols="B")
for item in df.place_name:
    df.place_name.head()
    params = {'username': "XXXXXX",
              'name_equals': item,
              'maxRows': "1"}
    e = requests.get(url, params=params)
    pretty_json = json.loads(e.content)
    for item in pretty_json["geonames"]:
        print(json.dumps(item["geonameId"], indent=2))
        with open('data.json', 'w', encoding='utf-8') as f:
            json.dump(item["geonameId"], f, ensure_ascii=False, indent=4)
#print(e.content)
The only problem now is the JSON output: with print I get the complete list of IDs, but when I write the output to a file I get just the last ID from the list.
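That happens because open('data.json', 'w') inside the loop truncates and rewrites the file on every iteration, so only the last ID survives. Here is a minimal sketch that collects the IDs first and writes them once (place_name and the XXXXXX username are the placeholders from your script):

import pandas as pd
import requests
import json

url = 'http://api.geonames.org/searchJSON?'
df = pd.read_excel('Book.xlsx', sheet_name='Sheet1', usecols="B")

ids = []
for place in df.place_name:
    params = {'username': "XXXXXX", 'name_equals': place, 'maxRows': "1"}
    resp = requests.get(url, params=params).json()
    for hit in resp["geonames"]:
        ids.append(hit["geonameId"])

# write the full list once, after the loop
with open('data.json', 'w', encoding='utf-8') as f:
    json.dump(ids, f, ensure_ascii=False, indent=4)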