The dataframe is created with the Join_Date and Name columns:
data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
        'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
I have split the df by Join_Date; can it be printed to Excel date by date using a for loop?
df_split = [df[df['Join_Date'] == i] for i in df['Join_Date'].unique()]
Expected result: each date's rows written as a separate block in the sheet.
You can use pandas' ExcelWriter:
import pandas as pd
import xlsxwriter
data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
        'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
df_split = [df[df['Join_Date'] == i] for i in df['Join_Date'].unique()]
writer = pd.ExcelWriter("example.xlsx", engine='xlsxwriter')
skip_rows = 0
for part in df_split:  # `part` avoids shadowing the original df
    part.to_excel(writer, sheet_name='Sheet1', startcol=2, startrow=2+skip_rows, index=False)
    skip_rows += part.shape[0] + 2
writer.close()
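As a side note, pd.ExcelWriter also works as a context manager, which saves and closes the file automatically. A minimal sketch of the same loop:

with pd.ExcelWriter("example.xlsx", engine='xlsxwriter') as writer:
    skip_rows = 0
    for part in df_split:
        part.to_excel(writer, sheet_name='Sheet1', startcol=2, startrow=2+skip_rows, index=False)
        skip_rows += part.shape[0] + 2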
You can use plain pandas methods to do it, like this (you can add an empty line if you really need it):
import pandas as pd
data = {'Join_Date': ['2023-01', '2023-01', '2023-02', '2023-03'],
        'Name': ['Tom', 'Amy', 'Peter', 'Nick']}
df = pd.DataFrame(data)
def add_header(x):
    x.loc[-1] = 'Join_Date', 'Name'
    return x.sort_index().reset_index(drop=True)
df_split = df.groupby(['Join_Date'], group_keys=False)
df_group = df_split.apply(add_header)
df_group.to_excel('output.xlsx', index=False, header=False)
You can add the empty line by editing the add_header function like:
def add_header(x):
    x.loc[-1] = ' ', ' '
    x = x.sort_index().reset_index(drop=True)
    x.loc[0.5] = 'Join_Date', 'Name'
    x = x.sort_index().reset_index(drop=True)
    return x
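If "date by date" should instead mean one sheet per date, a small variation of the ExcelWriter approach handles it. A sketch (the output file name is just an example; sheet names are taken from the Join_Date values):

with pd.ExcelWriter('output_by_date.xlsx') as writer:
    for date, group in df.groupby('Join_Date'):
        group.to_excel(writer, sheet_name=str(date), index=False)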
I would like to save sorted data from A to Z by column C in my Excel file.
My code:
### EXCEL
from openpyxl import Workbook # needed for Workbook() below

# Set column names
A = 'SURNAME'
B = 'NAME'
C = 'sAMAccountName'
# Set worksheet
wb = Workbook() # create excel workbook
ws_01 = wb.active # Grab the active worksheet
ws_01.title = "all inf" # Set the title of the worksheet
# Set first row
title_row = 1
ws_01.cell(title_row, 1, A) # cell(row, col, value)
ws_01.cell(title_row, 2, B)
ws_01.cell(title_row, 3, C)
data_row = 2
for user in retrieved_users:
    attributes = user['attributes']
    sAMAccountName = attributes['sAMAccountName']
    if user_validation(sAMAccountName):
        A = attributes['sn']
        B = attributes['givenName']
        C = sAMAccountName
        ws_01.cell(data_row, 1, str(A))
        ws_01.cell(data_row, 2, str(B))
        ws_01.cell(data_row, 3, str(C))
        data_row = data_row + 1
# Save it in an Excel file
decoded_users_all_inf = root_path + reports_dir + users_all_inf_excel_file
wb.save(decoded_users_all_inf)
Where and what do I have to put in my code to get this?
If you want to sort retrieved_users, then you can use the built-in list.sort with a key to access the sAMAccountName.
retrieved_users = [
    {"attributes": {"sn": "a", "givenName": "Alice", "sAMAccountName": "z"}},
    {"attributes": {"sn": "b", "givenName": "Bob", "sAMAccountName": "x"}},
    {"attributes": {"sn": "c", "givenName": "Charlie", "sAMAccountName": "y"}},
]
retrieved_users.sort(key=lambda d: d["attributes"]["sAMAccountName"])
After sorting, retrieved_users contains:
[{'attributes': {'sn': 'b', 'givenName': 'Bob', 'sAMAccountName': 'x'}},
{'attributes': {'sn': 'c', 'givenName': 'Charlie', 'sAMAccountName': 'y'}},
{'attributes': {'sn': 'a', 'givenName': 'Alice', 'sAMAccountName': 'z'}}]
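If the sort should ignore case, str.lower can be folded into the key (assuming the values are strings):

retrieved_users.sort(key=lambda d: d["attributes"]["sAMAccountName"].lower())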
On another note, you can do ws.append(row) to append entire rows at a time rather than doing ws.cell(row, col, value) three times:
wb = Workbook()
ws = wb.active
ws.append(('SURNAME', 'NAME', 'sAMAccountName'))
is equivalent to
wb = Workbook()
ws = wb.active
ws.cell(1, 1, 'SURNAME')
ws.cell(1, 2, 'NAME')
ws.cell(1, 3, 'sAMAccountName')
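With append, the data loop also shrinks to one call per user. A sketch reusing your user_validation helper:

for user in retrieved_users:
    attributes = user['attributes']
    if user_validation(attributes['sAMAccountName']):
        ws.append((str(attributes['sn']), str(attributes['givenName']), str(attributes['sAMAccountName'])))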
I am trying to upper-case the string values in my dataframe:
import pandas as pd
df = pd.DataFrame([{"email": "test#gmail.com"}])
is_upper = lambda x: x.upper() if isinstance(x, str) else x
df = df.applymap(trim_strings)
a = df.to_dict("records")
The response I get:
[{'email': 'test#gmail.com'}]
The response I expected:
[{'email': 'TEST#GMAIL.COM'}]
What can be the issue here?
The immediate issue is that you apply trim_strings instead of the is_upper function you defined. For a single column, the vectorized str accessor is the simplest fix; to get the expected output, try this:
df['email'] = df['email'].str.upper()
Your df:
            email
0  TEST#GMAIL.COM
To get the dictionary:
foo_dict = df.to_dict()
foo_dict
{'email': {0: 'TEST#GMAIL.COM'}}
Full code:
import pandas as pd
df = pd.DataFrame([{"email": "test#gmail.com"}])
df['email'] = df['email'].str.upper()
foo_dict = df.to_dict()
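Alternatively, the original applymap approach works once is_upper is the function actually being applied, and to_dict("records") returns exactly the shape you expected. A sketch (note that in recent pandas, applymap is deprecated in favour of DataFrame.map):

import pandas as pd

df = pd.DataFrame([{"email": "test#gmail.com"}])
is_upper = lambda x: x.upper() if isinstance(x, str) else x
df = df.applymap(is_upper) # the question passed trim_strings here by mistake
print(df.to_dict("records")) # [{'email': 'TEST#GMAIL.COM'}]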
I have two excel files from two different wholesalers with products and stock quantity information.
Some of the products in the two files are common, so they exist in both files.
The number of products in the files is different e.g. the first has 65000 products and the second has 9000 products.
I need to iterate through the products of the first file based on the common column 'EAN CODE' and check whether the current product also exists in the EAN column of the 2nd file.
Afterwards, check which product has the lower price (and stock > 0) and write that product's matching row to a separate output Excel file.
import os
import re
from datetime import datetime
import pandas
from utils import recognize_excel_type
dataframes = []
input_directory = 'in'
for file in os.listdir(input_directory):
    file_path = os.path.join(input_directory, file)
    if file.lower().endswith('xlsx') or file.lower().endswith('xls'):
        dataframes.append(pandas.read_excel(file_path))
    elif file.lower().endswith('csv'):
        dataframes.append(pandas.read_csv(file_path, delimiter=';'))
combined_dataframe = pandas.DataFrame(columns=['Price', 'Stock', 'EAN Code'])
for dataframe in dataframes:
    this_type = recognize_excel_type(dataframe)
    if this_type == 'DIFOX':
        dataframe.rename(columns={
            'retail price': 'Price',
            'availability (steps)': 'Stock',
            'EAN number 1': 'EAN Code',
        }, inplace=True)
    elif this_type == 'ECOM_VGA':
        # in this layout the real header row sits on the third line
        headers = dataframe.iloc[2]
        dataframe = dataframe[3:]
        dataframe.columns = headers
        dataframe.rename(columns={
            'Price (€)': 'Price',
            'Stock': 'Stock',
            'EAN Code': 'EAN Code',
        }, inplace=True)
    elif this_type == 'MAXCOM':
        dataframe.rename(columns={
            'VK-Preis': 'Price',
            'Verfügbar': 'Stock',
            'EAN-Code': 'EAN Code',
        }, inplace=True)
    else:
        continue
    # keep only the shared columns and stack this supplier onto the combined frame
    # (DataFrame.append was removed in pandas 2.0, so use pandas.concat instead)
    tuned_dataframe = dataframe[combined_dataframe.columns]
    combined_dataframe = pandas.concat([combined_dataframe, tuned_dataframe], ignore_index=True)
combined_dataframe.dropna(inplace=True)
combined_dataframe['Stock'].replace('> ?', '', inplace=True, regex=True)
combined_dataframe['Price'].replace('> ?', '', inplace=True, regex=True)
combined_dataframe = combined_dataframe.astype({'Stock': 'int32', 'Price': 'float32'})
combined_dataframe = combined_dataframe[combined_dataframe['Stock'] > 0]
combined_dataframe = combined_dataframe.loc[combined_dataframe.groupby('EAN Code')['Price'].idxmin()]
combined_dataframe.to_excel('output_backup/output-{}.xlsx'.format(datetime.now().strftime('%Y-%m-%d')), index=False)
if os.path.exists('output/output.xlsx'):
    os.remove('output/output.xlsx')
combined_dataframe.to_excel('output/output.xlsx', index=False)
print('Output saved to output directory')
for file in os.listdir(input_directory):
    file_path = os.path.join(input_directory, file)
    os.remove(file_path)
print('All input files removed')
I have a data frame which contains 3 columns (Issue id, Creator, Versions). I need to extract the rows which do not contain the value "<JIRA Version" in the "versions" column (the third and fifth rows in my case; similarly there could be multiple such rows in the data frame).
Below is the code I'm trying, but it actually prints all the rows from the data frame. Any help/suggestions are appreciated.
allissues = []
for i in issues:
    d = {
        'Issue id': i.id,
        'creator': i.fields.creator,
        'resolution': i.fields.resolution,
        'status.name': i.fields.status.name,
        'versions': i.fields.versions,
    }
    allissues.append(d)
df = pd.DataFrame(allissues, columns=['Issue id', 'creator', 'versions'])
matchers = ['<JIRA Version']
for ind in df.values:
    if matchers not in df.values:
        print(df['versions'][ind], df['Issue id'][ind])
Some minor changes to your code:
allissues = []
for i in issues:
    d = {
        'Issue id': i.id,
        'creator': i.fields.creator,
        'resolution': i.fields.resolution,
        'status.name': i.fields.status.name,
        'versions': i.fields.versions,
    }
    allissues.append(d)
df = pd.DataFrame(allissues, columns=['Issue id', 'creator', 'versions'])
matchers = '<JIRA Version'
for ind, row in df.iterrows():
    if matchers not in str(row['versions']):
        print(row['versions'], row['Issue id'])
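Alternatively, the loop can be replaced with a vectorized filter. A sketch, casting versions to str first since it may hold JIRA version objects rather than plain strings:

mask = ~df['versions'].astype(str).str.contains('<JIRA Version', regex=False)
print(df.loc[mask, ['versions', 'Issue id']])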
Python "Exec" command is not passing local values in exec shell. I thought this should be a simple question but all seem stumped. Here is a repeatable working version of the problem ... it took me a bit to recreate a working problem (my files are much larger than examples shown here, there are up to 10-dfs per loop, often 1800 items per df )
EXEC was only passing "PRODUCT" (as opposed to "PRODUCT.AREA" before I added "["{ind_id}"]" and then also it also shows an error "<string> in <module>".
datum_0 = {'Products': ['Stocks', 'Bonds', 'Notes'], 'PRODUCT.AREA': ['10200', '50291', '50988']}
df_0 = pd.DataFrame(datum_0, columns=['Products', 'PRODUCT.AREA'])
datum_1 = {'Products': ['Stocks', 'Bonds', 'Notes'], 'PRODUCT.CODE': ['66', '55', '22']}
df_1 = pd.DataFrame(datum_1, columns=['Products', 'PRODUCT.CODE'])
df_0
summary = {'Prodinfo': ['PRODUCT.AREA', 'PRODUCT.CODE']}
df_list = pd.DataFrame(summary, columns=['Prodinfo'])
df_list
# Create a rankings column for the Prodinfo tables
for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    print(row, ind_id)
    exec(f'df_{row}["rank"] = df_{row}["{ind_id}"].rank(ascending=True) ')
Of course it's this last line that is throwing the exec errors. Any ideas? Have you got a working global or local variable assignment that fixes it? etc... thanks!
I would use a list to keep all DataFrames:
all_df = [] # list
all_df.append(df_0)
all_df.append(df_1)
and then I wouldn't need exec:
for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    print(row, ind_id)
    all_df[row]["rank"] = all_df[row][ind_id].rank(ascending=True)
Alternatively, I would use a dictionary:
all_df = {} # dict
all_df['PRODUCT.AREA'] = df_0
all_df['PRODUCT.CODE'] = df_1
and then I need neither exec nor df_list:
for key, df in all_df.items():
    df["rank"] = df[key].rank(ascending=True)
Minimal working code with a list:
import pandas as pd
all_df = [] # list
datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988']
}
all_df.append( pd.DataFrame(datum) )

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.CODE': ['66', '55', '22']
}
all_df.append( pd.DataFrame(datum) )
#print( all_df[0] )
#print( all_df[1] )
print('--- before ---')
for df in all_df:
    print(df)
summary = {'Prodinfo': ['PRODUCT.AREA', 'PRODUCT.CODE']}
df_list = pd.DataFrame(summary, columns=['Prodinfo'])
#print(df_list)
for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    #print(row, ind_id)
    all_df[row]["rank"] = all_df[row][ind_id].rank(ascending=True)
print('--- after ---')
for df in all_df:
    print(df)
Minimal working code with a dict:
import pandas as pd
all_df = {} # dict
datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988']
}
all_df['PRODUCT.AREA'] = pd.DataFrame(datum)

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.CODE': ['66', '55', '22']
}
all_df['PRODUCT.CODE'] = pd.DataFrame(datum)
print('--- before ---')
for df in all_df.values():
    print(df)
for key, df in all_df.items():
    df["rank"] = df[key].rank(ascending=True)
print('--- after ---')
for df in all_df.values():
    print(df)
Frankly, for two dataframes I wouldn't waste time on df_list and a for-loop:
import pandas as pd
datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988']
}
df_0 = pd.DataFrame(datum)

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.CODE': ['66', '55', '22']
}
df_1 = pd.DataFrame(datum)
print('--- before ---')
print( df_0 )
print( df_1 )
df_0["rank"] = df_0['PRODUCT.AREA'].rank(ascending=True)
df_1["rank"] = df_1['PRODUCT.CODE'].rank(ascending=True)
print('--- after ---')
print( df_0 )
print( df_1 )
And I would probably even put everything in one dataframe:
import pandas as pd
df = pd.DataFrame({
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988'],
    'PRODUCT.CODE': ['66', '55', '22'],
})
print('--- before ---')
print( df )
#df["rank PRODUCT.AREA"] = df['PRODUCT.AREA'].rank(ascending=True)
#df["rank PRODUCT.CODE"] = df['PRODUCT.CODE'].rank(ascending=True)
for name in ['PRODUCT.AREA', 'PRODUCT.CODE']:
    df[f"rank {name}"] = df[name].rank(ascending=True)
print('--- after ---')
print( df )
Result:
--- before ---
  Products PRODUCT.AREA PRODUCT.CODE
0   Stocks        10200           66
1    Bonds        50291           55
2    Notes        50988           22
--- after ---
  Products PRODUCT.AREA PRODUCT.CODE  rank PRODUCT.AREA  rank PRODUCT.CODE
0   Stocks        10200           66                1.0                3.0
1    Bonds        50291           55                2.0                2.0
2    Notes        50988           22                3.0                1.0
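One caveat, since the PRODUCT values above are strings: rank() compares object columns lexicographically, which happens to match numeric order here but would not in general (e.g. '9' sorts after '10'). If numeric ranking is intended, converting first is safer:

for name in ['PRODUCT.AREA', 'PRODUCT.CODE']:
    df[f"rank {name}"] = pd.to_numeric(df[name]).rank(ascending=True)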
As expected, this was an easy fix. Thanks to the answerers, who gave me much to think about ...
Kudos to @holdenweb and his answer at ... Create multiple dataframes in loop
dfnew = {} # CREATE A DICTIONARY!!! - THIS WAS THE TRICK I WAS MISSING
df_ = {}
for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    dfnew[row] = df_[row] # or pd.read_csv(csv_file) or database_query or ...
    dfnew[row].dropna(inplace=True)
    dfnew[row]["rank"] = dfnew[row][ind_id].rank(ascending=True)
Works well and is very simple...