How to write to an Excel sheet only those rows which match a condition using Python pandas

I have a data frame which contains 3 columns (Issue id, Creator, Versions). I need to extract the rows which do not contain the value "<JIRA Version" in the "versions" column (the third and fifth rows in my case; similarly, there could be multiple such rows in the data frame).
Below is the code I'm trying, but it actually prints all the rows from the data frame. Any help/suggestions are appreciated.
allissues = []
for i in issues:
    d = {
        'Issue id': i.id,
        'creator': i.fields.creator,
        'resolution': i.fields.resolution,
        'status.name': i.fields.status.name,
        'versions': i.fields.versions,
    }
    allissues.append(d)

df = pd.DataFrame(allissues, columns=['Issue id', 'creator', 'versions'])
matchers = ['<JIRA Version']
for ind in df.values:
    if matchers not in df.values:
        print(df['versions'][ind], df['Issue id'][ind])

Some minor changes to your code:
allissues = []
for i in issues:
    d = {
        'Issue id': i.id,
        'creator': i.fields.creator,
        'resolution': i.fields.resolution,
        'status.name': i.fields.status.name,
        'versions': i.fields.versions,
    }
    allissues.append(d)

df = pd.DataFrame(allissues, columns=['Issue id', 'creator', 'versions'])
matchers = '<JIRA Version'
for ind, row in df.iterrows():
    if matchers not in row.versions:
        print(row['versions'], row['Issue id'])
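Since the goal in the title is to write the matching rows to an Excel sheet rather than print them, a boolean mask plus to_excel avoids the row loop entirely. A minimal sketch, assuming the versions values can be rendered as strings for the substring test (the output name filtered.xlsx is arbitrary):

# keep rows whose 'versions' text does NOT contain the marker
mask = ~df['versions'].astype(str).str.contains('<JIRA Version', regex=False)
# write only the matching rows to Excel
df[mask].to_excel('filtered.xlsx', index=False)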

Create different dataframe inside of a 'for' loop

I have a dataset that looks something like the following. I would like to create dataframes that contain only the texts for each author; for example, as you can see, df1 contains only texts from author0, etc. Is there any way to do that for many authors?
import pandas as pd

data = {
    'text': ['text0', 'text1', 'text2'],
    'author': ['author0', 'author1', 'author1'],
    'title': ['Comunicación', 'Administración', 'Ventas']
}
df = pd.DataFrame(data)

df1 = df[df["author"]=="author0"]
df2 = df[df["author"]=="author1"]
I have tried this, but it's not working:
list_author = df['author'].unique().tolist()
for i in list_author:
    dt_str(i) = dt[dt["author"]=="i"]
It would be helpful if the data frames had names of the form df_'author' (e.g. df_George).
If you want to have separate dataframes per author, use a dictionary with the author names as the keys. See the below example:
import pandas as pd

data = {
    'text': ['text0', 'text1', 'text2'],
    'author': ['author0', 'author1', 'author1'],
    'title': ['Comunicación', 'Administración', 'Ventas']
}
df = pd.DataFrame(data)

df_dict = {}
for author in df['author'].unique():
    df_dict[author] = df[df['author']==author]

print(df_dict.keys())
# dict_keys(['author0', 'author1'])

print(df_dict['author0'])
#     text   author         title
# 0  text0  author0  Comunicación

print(df_dict['author1'])
#     text   author           title
# 1  text1  author1  Administración
# 2  text2  author1          Ventas
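As a shorter alternative, the same dictionary can be built in one step with groupby, which yields (key, sub-frame) pairs; a minimal sketch over the same df:

# each group is the sub-frame for one author
df_dict = {author: group for author, group in df.groupby('author')}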

How can I refactor my code to return a collection of dictionaries?

def read_data(service_client):
    data = list_data(domain, realm)  # This returns a data frame
    building_data = []
    building_names = {}
    all_buildings = {}
    for elem in data.iterrows():
        building = elem[1]['building_name']
        region_id = elem[1]['region_id']
        bandwith = elem[1]['bandwith']
        building_id = elem[1]['building_id']
        return {
            'Building': building,
            'Region Id': region_id,
            'Bandwith': bandwith,
            'Building Id': building_id,
        }
Basically, I am only able to return a single dictionary value from the iteration in this example. I have tried printing it as well, among other things.
I am trying to find a way to store a dictionary from each iteration and return them all, instead of just returning one. Does anyone know any way to achieve this?
You may replace your for-loop with the following to get all dictionaries in a list.
naming = {
    'building_name': 'Building',
    'region_id': 'Region Id',
    'bandwith': 'Bandwith',
    'building_id': 'Building Id',
}
return [
    row[list(naming.values())].to_dict()
    for idx, row in data.rename(naming, axis=1).iterrows()
]
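The row loop can also be dropped entirely: after renaming, to_dict(orient='records') produces the same list of dictionaries in one call. A minimal sketch under the same assumption that data is the frame returned by list_data:

# rename to the output keys, keep only those columns, emit a list of dicts
return data.rename(columns=naming)[list(naming.values())].to_dict(orient='records')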

Check two excel files for common products with Python Pandas and pick the product with the lowest price

I have two excel files from two different wholesalers with products and stock quantity information.
Some of the products in the two files are common, so they exist in both files.
The number of products in the files is different e.g. the first has 65000 products and the second has 9000 products.
I need to iterate through the products of the first file based on the common column 'EAN CODE' and check whether the current product also exists in the EAN column of the 2nd file.
Afterwards, check which product has the lower price (and stock > 0) and print that product's matching row to another output Excel file.
import os
import re
from datetime import datetime

import pandas
from utils import recognize_excel_type

dataframes = []
input_directory = 'in'

# Load every Excel/CSV file found in the input directory
for file in os.listdir(input_directory):
    file_path = os.path.join(input_directory, file)
    if file.lower().endswith('xlsx') or file.lower().endswith('xls'):
        dataframes.append(
            pandas.read_excel(file_path)
        )
    elif file.lower().endswith('csv'):
        dataframes.append(
            pandas.read_csv(file_path, delimiter=';')
        )

# Normalize each wholesaler's column names into a common schema
combined_dataframe = pandas.DataFrame(columns=['Price', 'Stock', 'EAN Code'])
for dataframe in dataframes:
    this_type = recognize_excel_type(dataframe)
    if this_type == 'DIFOX':
        dataframe.rename(columns={
            'retail price': 'Price',
            'availability (steps)': 'Stock',
            'EAN number 1': 'EAN Code',
        }, inplace=True)
        tuned_dataframe = pandas.DataFrame(
            dataframe[combined_dataframe.columns],
        )
        combined_dataframe = combined_dataframe.append(tuned_dataframe, ignore_index=True)
    elif this_type == 'ECOM_VGA':
        # The real headers are on the third row of this file type
        headers = dataframe.iloc[2]
        dataframe = dataframe[3:]
        dataframe.columns = headers
        dataframe.rename(columns={
            'Price (€)': 'Price',
            'Stock': 'Stock',
            'EAN Code': 'EAN Code',
        }, inplace=True)
        tuned_dataframe = pandas.DataFrame(
            dataframe[combined_dataframe.columns],
        )
        combined_dataframe = combined_dataframe.append(tuned_dataframe, ignore_index=True)
    elif this_type == 'MAXCOM':
        dataframe.rename(columns={
            'VK-Preis': 'Price',
            'Verfügbar': 'Stock',
            'EAN-Code': 'EAN Code',
        }, inplace=True)
        tuned_dataframe = pandas.DataFrame(
            dataframe[combined_dataframe.columns],
        )
        combined_dataframe = combined_dataframe.append(tuned_dataframe, ignore_index=True)

# Clean up: drop incomplete rows and strip '> ' prefixes such as '> 100'
combined_dataframe.dropna(inplace=True)
combined_dataframe['Stock'].replace('> ?', '', inplace=True, regex=True)
combined_dataframe['Price'].replace('> ?', '', inplace=True, regex=True)
combined_dataframe = combined_dataframe.astype(
    {'Stock': 'int32', 'Price': 'float32'}
)

# Keep only in-stock products, then the cheapest offer per EAN code
combined_dataframe = combined_dataframe[combined_dataframe['Stock'] > 0]
combined_dataframe = combined_dataframe.loc[combined_dataframe.groupby('EAN Code')['Price'].idxmin()]

combined_dataframe.to_excel('output_backup/output-{}.xlsx'.format(datetime.now().strftime('%Y-%m-%d')), index=False)
if os.path.exists('output/output.xlsx'):
    os.remove("output/output.xlsx")
combined_dataframe.to_excel('output/output.xlsx', index=False)
print('Output saved to output directory')

for file in os.listdir(input_directory):
    file_path = os.path.join(input_directory, file)
    os.remove(file_path)
print('All input files removed')
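One caveat on the script above: DataFrame.append was deprecated in pandas 1.4 and removed in pandas 2.0. On current versions the same accumulation can be expressed with pandas.concat; a minimal sketch, assuming each frame has already been renamed to the common schema:

# collect the normalized frames and concatenate once, instead of
# appending to combined_dataframe inside the loop
tuned_frames = [frame[['Price', 'Stock', 'EAN Code']] for frame in dataframes]
combined_dataframe = pandas.concat(tuned_frames, ignore_index=True)

Concatenating once at the end is also much faster, since every append copies the whole accumulated frame.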

How to pass list-like using .reindex as doing it in .loc has been deprecated?

I have a dataframe with multiple fields, and I want to use some column values to recreate a new dataframe as a JSON object:
Street   City         State  Zip_Code
24 St.   Kansas City  KS     12345-213
...      ...          ...    ...
In order to do so, I was using .loc and .apply like this in Python:
def address_x(vals):
    val = {
        'street': None if not str(vals[0]) else vals[0],
        'city': None if not str(vals[1]) else vals[1],
        'state': None if not str(vals[2]) else state(vals[2]),
        'postal_code': postal_code(str(vals[3]))
    }
    return val

def transform(dataset):
    df = pd.DataFrame()
    df['address'] = dataset.loc[['Street', 'City', 'State', 'Zip_Code']].apply(address_x, axis=1)
    return df

obj = s3client.get_object(Bucket=bucket, Key=key)
new_df = transform(pd.read_csv(io.BytesIO(obj['Body'].read()), delimiter='|', sep='|'))
new_df.to_json('TEST.json', orient='records', lines=True)
That gives me this error message: KeyError: 'Passing list-likes to .loc or [] with any missing labels is no longer supported, see https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike'
I am trying to use df['address'] = dataset.reindex(['STREET', 'CITY', 'STATE', 'ZIP CODE']).apply(lambda x: address_x(x)), but it just stores all values as null instead of this:
{"address":{
"street": "24 St.",
"city": "Kansas City",
"state": "Kansas",
"postal_code": 12345-213}
}
The input is a regular CSV file that uses '|' as the separator between all of its columns; the four above are just a sample of them.
I then store it as JSON, and currently the output looks like {"address":{"street":null,"city":null,"state":null,"postal_code":null}} for each record, instead of populating the JSON with the CSV values.
Change to:
def address_x(vals):
    val = {
        'street': None if not str(vals['Street']) else vals['Street'],
        'city': None if not str(vals['City']) else vals['City'],
        'state': None if not str(vals['State']) else state(vals['State']),
        'postal_code': postal_code(str(vals['Zip_Code']))
    }
    return val

df['address'] = dataset[['Street', 'City', 'State', 'Zip_Code']].apply(address_x, axis=1)
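For what it's worth, this also explains the all-null JSON from the reindex attempt: reindex operates on the row index by default, and the uppercase labels ('STREET', 'ZIP CODE') don't match the actual column names anyway, so every looked-up value comes back NaN. A toy illustration of the two selection styles (hypothetical data, not from the question):

import pandas as pd

df = pd.DataFrame({'Street': ['24 St.'], 'City': ['Kansas City']})
print(df[['Street', 'City']])   # a list of column names selects columns
# df.loc[['Street', 'City']]    # a list passed to .loc is a row-label lookup: KeyError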

Python Exec not passing full variables to exec shell - with working errors

Python "Exec" command is not passing local values in exec shell. I thought this should be a simple question but all seem stumped. Here is a repeatable working version of the problem ... it took me a bit to recreate a working problem (my files are much larger than examples shown here, there are up to 10-dfs per loop, often 1800 items per df )
EXEC was only passing "PRODUCT" (as opposed to "PRODUCT.AREA" before I added "["{ind_id}"]" and then also it also shows an error "<string> in <module>".
import pandas as pd

datum_0 = {'Products': ['Stocks', 'Bonds', 'Notes'], 'PRODUCT.AREA': ['10200', '50291', '50988']}
df_0 = pd.DataFrame(datum_0, columns=['Products', 'PRODUCT.AREA'])

datum_1 = {'Products': ['Stocks', 'Bonds', 'Notes'], 'PRODUCT.CODE': ['66', '55', '22']}
df_1 = pd.DataFrame(datum_1, columns=['Products', 'PRODUCT.CODE'])

df_0

summary = {'Prodinfo': ['PRODUCT.AREA', 'PRODUCT.CODE']}
df_list = pd.DataFrame(summary, columns=['Prodinfo'])
df_list

# Create a rankings column for the Prodinfo tables
for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    print(row, ind_id)
    exec(f'df_{row}["rank"] = df_{row}["{ind_id}"].rank(ascending=True) ')
Of course, it's this last line that is throwing the exec errors. Any ideas? Do you have a working global or local variable assignment that fixes it? Thanks!
I would use a list to keep all DataFrames:
all_df = []  # list
all_df.append(df_0)
all_df.append(df_1)
and then I would not need exec:
for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    print(row, ind_id)
    all_df[row]["rank"] = all_df[row][ind_id].rank(ascending=True)
Alternatively, I would use a dictionary:
all_df = {}  # dict
all_df['PRODUCT.AREA'] = df_0
all_df['PRODUCT.CODE'] = df_1
and then I don't need exec or df_list:
for key, df in all_df.items():
    df["rank"] = df[key].rank(ascending=True)
Minimal working code with list
import pandas as pd

all_df = []  # list

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988']
}
all_df.append( pd.DataFrame(datum) )

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.CODE': ['66', '55', '22']
}
all_df.append( pd.DataFrame(datum) )

#print( all_df[0] )
#print( all_df[1] )

print('--- before ---')
for df in all_df:
    print(df)

summary = {'Prodinfo': ['PRODUCT.AREA', 'PRODUCT.CODE']}
df_list = pd.DataFrame(summary, columns=['Prodinfo'])
#print(df_list)

for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    #print(row, ind_id)
    all_df[row]["rank"] = all_df[row][ind_id].rank(ascending=True)

print('--- after ---')
for df in all_df:
    print(df)
Minimal working code with dict
import pandas as pd

all_df = {}  # dict

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988']
}
all_df['PRODUCT.AREA'] = pd.DataFrame(datum)

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.CODE': ['66', '55', '22']
}
all_df['PRODUCT.CODE'] = pd.DataFrame(datum)

print('--- before ---')
for df in all_df.values():
    print(df)

for key, df in all_df.items():
    df["rank"] = df[key].rank(ascending=True)

print('--- after ---')
for df in all_df.values():
    print(df)
Frankly, for two dataframes I wouldn't waste time on df_list and a for-loop:
import pandas as pd

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988']
}
df_0 = pd.DataFrame(datum)

datum = {
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.CODE': ['66', '55', '22']
}
df_1 = pd.DataFrame(datum)

print('--- before ---')
print( df_0 )
print( df_1 )

df_0["rank"] = df_0['PRODUCT.AREA'].rank(ascending=True)
df_1["rank"] = df_1['PRODUCT.CODE'].rank(ascending=True)

print('--- after ---')
print( df_0 )
print( df_1 )
And I would probably even put everything in one dataframe:
import pandas as pd

df = pd.DataFrame({
    'Products': ['Stocks', 'Bonds', 'Notes'],
    'PRODUCT.AREA': ['10200', '50291', '50988'],
    'PRODUCT.CODE': ['66', '55', '22'],
})

print('--- before ---')
print( df )

#df["rank PRODUCT.AREA"] = df['PRODUCT.AREA'].rank(ascending=True)
#df["rank PRODUCT.CODE"] = df['PRODUCT.CODE'].rank(ascending=True)

for name in ['PRODUCT.AREA', 'PRODUCT.CODE']:
    df[f"rank {name}"] = df[name].rank(ascending=True)

print('--- after ---')
print( df )
Result:
--- before ---
  Products PRODUCT.AREA PRODUCT.CODE
0   Stocks        10200           66
1    Bonds        50291           55
2    Notes        50988           22
--- after ---
  Products PRODUCT.AREA PRODUCT.CODE  rank PRODUCT.AREA  rank PRODUCT.CODE
0   Stocks        10200           66                1.0                3.0
1    Bonds        50291           55                2.0                2.0
2    Notes        50988           22                3.0                1.0
As expected, this was an easy fix. Thanks to the answerers, who gave me much to think about...
Kudos to @holdenweb and his answer at Create multiple dataframes in loop.
dfnew = {}  # CREATE A DICTIONARY!!! - THIS WAS THE TRICK I WAS MISSING
df_ = {}

for rows in df_list.itertuples():
    row = rows.Index
    ind_id = df_list.loc[row]['Prodinfo']
    dfnew[row] = df_[row]  # or pd.read_csv(csv_file) or database_query or ...
    dfnew[row].dropna(inplace=True)
    dfnew[row]["rank"] = dfnew[row][ind_id].rank(ascending=True)
It works well and is very simple...
