Extract certain columns in Python with logic

I am trying to extract certain columns from my data.
Below is the list of columns that I want to extract:
GA01020, GA01030, GA01040, GA01050, GA01060, GA01070, GA01080, GA01090, GA01100
I have written code that gets me columns up to GA01090, but I can't find a way to also get GA01100.
Can you please help me find a way to get all 9 columns?
engineCount = 8
engineData = {}
for no in range(engineCount):
    engineNo = str(no + 1)
    cols = ['GA010' + str(i) + str(engineNo) for i in range(20, engineCount * 10 + 11, 10)]

You mean like this?
engineCount = 8
cols = ['GA0' + str(i+1000) + str(engineCount) for i in range(20,engineCount*10+21,10)]

OK, now that I understand your question better, try this:
# Assuming that you have your data sets as dataframes in a list or dict
data_sets = [df1, df2, df3, df4]
cols = ['GA01020', 'GA01030', 'GA01040', 'GA01050', 'GA01060', 'GA01070', 'GA01080', 'GA01090', 'GA01100']
new_set = []
for data in data_sets:
    new_set.append(data.loc[:, data.columns.isin(cols)])
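If you'd rather have a one-liner, DataFrame.filter does the same membership selection and returns the columns in the order given in cols; a sketch reusing data_sets and cols from above:
# keeps only the listed columns, in the order they appear in cols
new_set = [data.filter(items=cols) for data in data_sets]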

Related

How to use wide_to_long (Pandas)

I have this code, which I thought would reformat the dataframe so that columns with the same column name (ignoring the digit suffix) would be combined with their duplicates.
# Function that splits a dataframe into two separate dataframes, one with all unique
# columns and one with all duplicates
def sub_dataframes(dataframe):
    # Extract common prefix -> remove trailing digits
    columns = dataframe.columns.str.replace(r'\d*$', '', regex=True).to_series().value_counts()
    # Split columns
    unq_cols = columns[columns == 1].index
    # All columns from dataframe that are not in unq_cols
    dup_cols = dataframe.columns[~dataframe.columns.isin(unq_cols)]
    return dataframe[unq_cols], dataframe[dup_cols]

unq_df = sub_dataframes(df)[0]
dup_df = sub_dataframes(df)[1]
print("Unique columns:\n\n{}\n\nDuplicate columns:\n\n{}".format(unq_df.columns.tolist(), dup_df.columns.tolist()))
Output:
Unique columns:
['total_tracks', 'popularity']
Duplicate columns:
['t_dur0', 't_dur1', 't_dur2', 't_dance0', 't_dance1', 't_dance2', 't_energy0', 't_energy1', 't_energy2',
't_key0', 't_key1', 't_key2', 't_speech0', 't_speech1', 't_speech2', 't_acous0', 't_acous1', 't_acous2',
't_ins0', 't_ins1', 't_ins2', 't_live0', 't_live1', 't_live2', 't_val0', 't_val1', 't_val2', 't_tempo0',
't_tempo1', 't_tempo2']
Then I tried to use wide_to_long to combine columns with the same name:
cols = unq_df.columns.tolist()
temp = (pd.wide_to_long(dataset.reset_index(), stubnames=['t_dur', 't_dance', 't_energy', 't_key', 't_mode',
                                                          't_speech', 't_acous', 't_ins', 't_live', 't_val',
                                                          't_tempo'], i=['index'] + cols, j='temp', sep='t_')
          .reset_index().groupby(cols, as_index=False).mean())
temp
Which gave me this output (shown as an image in the original post). I tried to look at this question, but the dataframe that's returned has "Nothing to show". What am I doing wrong here? How do I fix this?
EDIT
Here is an example of how I've done it "by hand" (shown in the original post), but I am trying to do it more efficiently using the built-in functions.
The desired output is the dataframe that is shown last.
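For reference, here is a minimal wide_to_long sketch on made-up toy data with the same suffix pattern. Since the suffixes are bare digits (t_dur0, t_dur1, ...), the default sep='' is what applies; passing sep='t_' makes pandas look for columns such as 't_durt_0', find none, and return an empty frame, which is likely why there is "Nothing to show":
import pandas as pd

# Toy data (made-up values) mirroring the t_<name><digit> column pattern
df = pd.DataFrame({
    'id': [1, 2],
    't_dur0': [10, 20], 't_dur1': [11, 21],
    't_key0': [1, 2], 't_key1': [3, 4],
})

# Bare digit suffixes -> keep the default sep=''
long_df = pd.wide_to_long(df, stubnames=['t_dur', 't_key'], i='id', j='track')
print(long_df.reset_index())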

Concatenating lists and removing duplicates

I have three spreadsheets that all have a month/year column. There is some overlap between the spreadsheets (i.e. one covers 1998 to 2015 and one covers 2012 to 2020). I want a combined list of all the months/years with no duplicates. I have achieved this, but I feel there must be a cleaner way to do so.
Dataframes are somewhat similar:
Month    VALUE
1998M01  1
1998M02  2
import pandas as pd
unemp8315 = pd.read_csv('Unemployment 19832015.csv')
unemp9821 = pd.read_csv('Unemployment 19982021.csv')
unempcovid = pd.read_csv('Unemployment Covid.csv')
print(unemp8315)
print(unemp9821)
print(unempcovid)
monthlist = []
for i in unemp8315['Month']:
    monthlist.append(i)

monthlist2 = []
for b in unemp9821['Month']:
    monthlist2.append(b)

monthlist3 = []
for c in unempcovid['Month']:
    monthlist3.append(c)

full_month_list = monthlist + monthlist2 + monthlist3
fullpd = pd.DataFrame(data=full_month_list)
clean_month_list = fullpd.drop_duplicates()
print(clean_month_list)
There's no need to iterate over every single entry; you can simply concatenate the dataframes, select the Month column, and drop the duplicates there:
fullpd = pd.concat([unemp8315, unemp9821, unempcovid], axis=0)
clean_month_list = fullpd['Month'].drop_duplicates()
You can do something like this:
files = ['Unemployment 19832015.csv',
         'Unemployment 19982021.csv',
         'Unemployment Covid.csv']
dfs = [pd.read_csv(file)["Month"] for file in files]
clean_month_list = pd.concat(dfs).drop_duplicates()
Could you load them into a dictionary instead of a list, i.e. dict[month] = value?
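A minimal sketch of that dictionary idea, assuming each CSV has the Month and VALUE columns shown above (later files simply overwrite earlier duplicates):
import pandas as pd

files = ['Unemployment 19832015.csv',
         'Unemployment 19982021.csv',
         'Unemployment Covid.csv']

month_values = {}
for file in files:
    df = pd.read_csv(file)
    for month, value in zip(df['Month'], df['VALUE']):
        month_values[month] = value  # a duplicate month is overwritten, not repeated

print(len(month_values), 'unique months')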

Python remove everything after specific string and loop through all rows in multiple columns in a dataframe

I have a file full of URL paths like below spanning across 4 columns in a dataframe that I am trying to clean:
Path1 = ["https://contentspace.global.xxx.com/teams/Australia/WA/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FWA%2FDocuments%2FIn%20Scope&FolderCTID\
=0x012000EDE8B08D50FC3741A5206CD23377AB75&View=%7B287FFF9E%2DD60C%2D4401%2D9ECD%2DC402524F1D4A%7D"]
I want to remove everything after a specific string, which I defined as "string1", and I would like to loop through all 4 columns in the dataframe, defined as "df_MasterData":
string1 = "&FolderCTID"
import pandas as pd
df_MasterData = pd.read_excel(FN_MasterData)
cols = ['Column_A', 'Column_B', 'Column_C', 'Column_D']
for i in cols:
    # Objective: delete everything after "&FolderCTID"
    string1 = "&FolderCTID"
    # Method 1
    df_MasterData[i] = df_MasterData[i].str.split(string1).str[0]
    # Method 2
    df_MasterData[i] = df_MasterData[i].str.split(string1).str[1].str.strip()
    # Method 3
    df_MasterData[i] = df_MasterData[i].str.split(string1)[:-1]
I searched Google and found similar solutions, but none of them worked.
Can any guru shed some light on this? Any assistance is appreciated.
Added below are a few example rows from columns A and B:
Column_A = ['https://contentspace.global.xxx.com/teams/Australia/NSW/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FNSW%2FDocuments%2FIn%20Scope%2FA%20I%20TOPPER%20GROUP&FolderCTID=\
0x01200016BC4CE0C21A6645950C100F37A60ABD&View=%7B64F44840%2D04FE%2D4341%2D9FAC%2D902BB54E7F10%7D',\
'https://contentspace.global.xxx.com/teams/Australia/Victoria/Documents/Forms/AllItems.aspx?RootFolder\
=%2Fteams%2FAustralia%2FVictoria%2FDocuments%2FIn%20Scope&FolderCTID=0x0120006984C27BA03D394D9E2E95FB\
893593F9&View=%7B3276A351%2D18C1%2D4D32%2DADFF%2D54158B504FCC%7D']
Column_B = ['https://contentspace.global.xxx.com/teams/Australia/WA/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FWA%2FDocuments%2FIn%20Scope&FolderCTID=0x012000EDE8B08D50FC3741A5\
206CD23377AB75&View=%7B287FFF9E%2DD60C%2D4401%2D9ECD%2DC402524F1D4A%7D',\
'https://contentspace.global.xxx.com/teams/Australia/QLD/Documents/Forms/AllItems.aspx?RootFolder=%\
2Fteams%2FAustralia%2FQLD%2FDocuments%2FIn%20Scope%2FAACO%20GROUP&FolderCTID=0x012000E689A6C1960E8\
648A90E6EC3BD899B1A&View=%7B6176AC45%2DC34C%2D4F7C%2D9027%2DDAEAD1391BFC%7D']
This is how I would do it:
First, declare a variable with your target columns.
Then use stack() and str.split to get your target output.
Finally, unstack and assign the output back to your original df.
cols_to_slice = ['Column_A', 'Column_B', 'Column_C', 'Column_D']
string1 = "&FolderCTID"
df[cols_to_slice].stack().str.split(string1, expand=True)[0].unstack(1)  # [0] keeps the part before string1
If you want to replace these columns in your target df, simply do:
df[cols_to_slice] = df[cols_to_slice].stack().str.split(string1, expand=True)[0].unstack(1)
You could first get the position of the string with str.find:
indexes = df_MasterData[i].str.find(string1)
# add len(string1) to the positions if you want to keep "&FolderCTID" itself in the result:
# indexes = len(string1) + df_MasterData[i].str.find(string1)
Since the cut position differs per row, slice each value individually:
df_MasterData[i] = [s[:idx] if idx != -1 else s
                    for s, idx in zip(df_MasterData[i], indexes)]
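For what it's worth, Method 1 from the question already keeps everything before the match when run on its own; in the loop above it is immediately overwritten by Methods 2 and 3. A small self-contained check on a hypothetical URL:
import pandas as pd

string1 = "&FolderCTID"
s = pd.Series(['https://example.com/AllItems.aspx?RootFolder=foo&FolderCTID=0x012000&View=bar'])  # hypothetical URL
print(s.str.split(string1).str[0])
# 0    https://example.com/AllItems.aspx?RootFolder=foo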

Performing similar analysis on multiple dataframes

I am reading data from multiple dataframes.
Since the indexing and inputs are different for each one, I need to repeat the pairing and analysis, and I need dataframe-specific outputs. This pushes me to copy-paste and repeat the code.
Is there a fast way to refer to multiple dataframes to do the same analysis?
DF1= pd.read_csv('DF1 Price.csv')
DF2= pd.read_csv('DF2 Price.csv')
DF3= pd.read_csv('DF3 Price.csv') # These CSV's contain main prices
DF1['ParentPrice'] = FamPrices['Price1'] # These CSVs contain second prices
DF2['ParentPrice'] = FamPrices['Price2']
DF3['ParentPrice'] = FamPrices['Price3']
DF1['Difference'] = DF1['ParentPrice'] - DF1['Price'] # Price difference is the output
DF2['Difference'] = DF2['ParentPrice'] - DF2['Price']
DF3['Difference'] = DF3['ParentPrice'] - DF3['Price']
It is possible to parametrize strings using f-strings, available in Python >= 3.6. In an f-string you can insert the string representation of a variable's value inside the string, as in:
>>> a = 3
>>> s = f"{a} is larger than 1"
>>> print(s)
3 is larger than 1
Your code would become:
list_of_DF = []
for symbol in ["1", "2", "3"]:
    df = pd.read_csv(f"DF{symbol} Price.csv")
    df['ParentPrice'] = FamPrices[f'Price{symbol}']
    df['Difference'] = df['ParentPrice'] - df['Price']
    list_of_DF.append(df)
Then DF1 would be list_of_DF[0], and so on.
As I mentioned, this answer is only valid if you are using Python 3.6 or later.
For the third part, I'd suggest creating something like:
DFS = [DF1, DF2, DF3]

def create_difference(dataframe):
    dataframe['Difference'] = dataframe['ParentPrice'] - dataframe['Price']

for dataframe in DFS:
    create_difference(dataframe)
For the second part there is no super-convenient short way that I can think of, except maybe:
for i in range(len(DFS)):
    DFS[i]['ParentPrice'] = FamPrices[f'Price{i + 1}']  # Price1..Price3 line up with DFS[0]..DFS[2]
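Combining both ideas, a compact sketch (this assumes FamPrices and the frames DF1..DF3 exist as in the question):
DFS = [DF1, DF2, DF3]
# enumerate(..., start=1) lines each frame up with Price1, Price2, Price3
for i, df in enumerate(DFS, start=1):
    df['ParentPrice'] = FamPrices[f'Price{i}']
    df['Difference'] = df['ParentPrice'] - df['Price']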

Pandas: how to add a dataframe inside a cell of another dataframe?

I have an empty dataframe like the following:
simReal2013 = pd.DataFrame(index = np.arange(0,1,1))
Then I read some .csv files as dataframes.
stat = np.arange(0, 5)
xv = [0.005, 0.01, 0.05]
br = [0.001, 0.005]
for i in xv:
    for j in br:
        I = 0
        for s in stat:
            string = 'results/2013/real/run_%d_%f_%f_15.0_10.0_T0_RealNet.csv' % (s, i, j)
            sim = pd.read_csv(string, sep=' ')
            I += np.array(sim.I)
        sim.I = I / 5
        col = '%f_%f' % (i, j)
        simReal2013.insert(0, col, sim)
I would like to add the dataframe that I read in a cell of simReal2013. In doing so I get the following error:
ValueError: Wrong number of items passed 9, placement implies 1
Yes, putting a dataframe inside a dataframe is probably not the way you want to go, but if you must, this is one way to do it:
df_in = pd.DataFrame([[1, 2, 3]] * 2)
d = {}
d['a'] = df_in
df_out = pd.DataFrame([d])
type(df_out.loc[0, "a"])
>>> pandas.core.frame.DataFrame
Maybe a dictionary of dataframes would suffice for your use.
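A minimal sketch of that suggestion, keyed with the same '%f_%f' format the question builds (the stand-in frame here is made up):
import pandas as pd

sim_results = {}                                # key -> dataframe
sim = pd.DataFrame({'I': [1.0, 2.0, 3.0]})      # stand-in for one averaged run
sim_results['%f_%f' % (0.005, 0.001)] = sim     # key is '0.005000_0.001000'

print(type(sim_results['0.005000_0.001000']))   # <class 'pandas.core.frame.DataFrame'>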
