How to find and replace substrings at the end of column headers - python

I have the following columns, among others, in my dataframe: 'dom_pop', 'an_dom_n', 'an_dom_ncmplt'. Equivalent columns exist in multiple dataframes, with the suffix changing. For example, in another dataframe they may be named 'pa_pop', 'an_pa_n', 'an_pa_ncmplt'. I want to append '_kwh' to these columns across all my dataframes.
I wrote the following code:
cols = ['_n$', '_ncmplt', '_pop']  # the $ is added to indicate a string ending in _n
filterfuel = 'kwh'
for c in cols:
    dfdom.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfdom.columns]
    dfpa.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfpa.columns]
    dfsw.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfsw.columns]
'_kwh' gets appended to the _ncmplt and _pop columns, but not to the _n column. If I remove the $, _n gets appended, but then _ncmplt ends up looking like 'an_dom_n_kwh_cmplt'.
For dfdom the corrected names should look like 'dom_pop_kwh', 'an_dom_n_kwh', 'an_dom_ncmplt_kwh'.
Why is $ not being recognised as an end-of-string anchor?

The $ is not recognised because the built-in str.replace used in your list comprehension treats the pattern as a literal string, not as a regex. You can instead use np.where with a regex:
import numpy as np

cols = ['_n$', '_ncmplt', '_pop']
filterfuel = 'kwh'
pattern = fr"(?:{'|'.join(cols)})"
for df in [dfdom, dfpa, dfsw]:
    df.columns = np.where(df.columns.str.contains(pattern, regex=True),
                          df.columns + f"_{filterfuel}", df.columns)
Output:
>>> pattern
'(?:_n$|_ncmplt|_pop)'
# dfdom = pd.DataFrame([[0]*4], columns=['dom_pop', 'an_dom_n', 'an_dom_ncmplt', 'hello'])
# After:
>>> dfdom
   dom_pop_kwh  an_dom_n_kwh  an_dom_ncmplt_kwh  hello
0            0             0                  0      0
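As a side note (not part of the original answer), here is a minimal sketch of an alternative that keeps the per-suffix replace idea: pandas' Index.str.replace with regex=True does honour the $ anchor, unlike the built-in str.replace. This assumes the same dfdom, dfpa, dfsw and cols as above:
cols = ['_n$', '_ncmplt', '_pop']
filterfuel = 'kwh'
for df in [dfdom, dfpa, dfsw]:
    for c in cols:
        # wrap the pattern in a group and re-insert the match before the suffix,
        # so '_n$' produces '_n_kwh' rather than a literal '_n$_kwh'
        df.columns = df.columns.str.replace(f'({c})', rf'\1_{filterfuel}', regex=True)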

Related

how to extract all repeating patterns from a string into a dataframe

I have a dataframe with the equipment codes of certain trucks; this is a similar list of lists of the cells:
x = [['A0B','A1C','A1Z','A2E','A5C','B1B','B1F','B1H','B2A'],
     ['A0A','A0B','A1C','A1Z','A2I','A5L','B1B','B1F','B1H','B2A','B2X','B3H','B4L','B5E','B5J','C0G','C1W','C5B','C5D'],
     ['A0B','A1C','A1Z','A2E','A5C','B1B','B1F','B1H','B2A','B2X','B4L','B5C','B5I','C0A','C1J','C5B','C5D','C6C','C6J','C6Q']]
I want to extract all the values that match "B", for example ("B1B,B1F,B1H"); ("B1B,B1F,B1H,B2A,B2X,B3H"); ("B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I"). I tried this code, but every row has a different length:
sublista = ['B1B','B1F','B1H','B2A','B2X','B4L','B5C','B5I']
df3 = pd.DataFrame(columns=['FIN', 'Equipmentcodes', 'AQUATARDER', 'CAJA'])
for elemento in sublista:
    df_aux = df2[df2['Equipmentcodes'].str.contains(elemento, case=False)]
    df_aux['CAJA'] = elemento
    df3 = df3.append(df_aux, ignore_index=True)
Assuming your column contains strings, you could use a regex:
df['selected'] = (df['code']
                  .str.extractall(r'\b(B[^,]*)\b')[0]
                  .groupby(level=0).apply(','.join)
                  )
example input:
x = ['A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A',
'A0A,A0B,A1C,A1Z,A2I,A5L,B1B,B1F,B1H,B2A,B2X,B3H,B4L,B5E,B5J,C0G,C1W,C5B,C5D',
'A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I,C0A,C1J,C5B,C5D,C6C,C6J,C6Q']
df = pd.DataFrame({'code': x})
output:
selected code
0 B1B,B1F,B1H,B2A A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A
1 B1B,B1F,B1H,B2A,B2X,B3H,B4L,B5E,B5J A0A,A0B,A1C,A1Z,A2I,A5L,B1B,B1F,B1H,B2A,B2X,B3H,B4L,B5E,B5J,C0G,C1W,C5B,C5D
2 B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I,C0A,C1J,C5B,C5D,C6C,C6J,C6Q
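For reference (not part of the original answer), a similar result can be sketched with str.findall plus str.join, assuming the same df as above:
# findall returns a list of all comma-delimited tokens starting with "B";
# joining them back gives one comma-separated string per row
df['selected'] = df['code'].str.findall(r'\b(B[^,]*)\b').str.join(',')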

Pandas read_csv for a no quote file

I'm trying to read a file that doesn't have any quotes, which is causing an inconsistent number of fields per row.
Data looks as follows:
col_a, col_b
abc, inc., 5
xyz corb, 10
Since there are no quotes around "abc, inc.", this is causing the first row to get split into 3 values, but it should actually be just 2 values.
This column is not necessarily in the first position, and there can be another bad column like this. The data has around 250 columns.
I'm reading this using pd.read_csv, how can this be resolved?
Thanks!
It's not a valid CSV, but since there is only one column with the errant commas, you can process it with the csv module and fix the slice that holds too many column values. When a row has too many cells, assume the extras come from the unescaped comma.
import pandas as pd
import csv

def split_badrows(fileobj, bad_col, total_cols):
    """Iterate rows, collapsing extra columns at bad_col"""
    for row in csv.reader(fileobj):
        row = [cell.strip() for cell in row]
        extras = len(row) - total_cols
        if extras > 0:
            # collapse slice at troubled column into single value
            extras += 1  # python slice doesn't include right endpoint
            row[bad_col] = ", ".join(row[bad_col:bad_col+extras])
            del row[bad_col+1:bad_col+extras]
        yield row

def df_from_badtext(fileobj, bad_col):
    """Make pandas.DataFrame from badly formatted text"""
    columns = [cell.strip() for cell in next(fileobj).split(",")]
    total_cols = len(columns)
    return pd.DataFrame(split_badrows(fileobj, bad_col, total_cols),
                        columns=columns)

# test
open("testme.txt", "w").write("""col_a, col_b
abc, inc., 5
xyz corb, 10""")

df = df_from_badtext(open("testme.txt"), bad_col=0)
print(df)
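For reference, the test above should print something along these lines:
       col_a col_b
0  abc, inc.     5
1   xyz corb    10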
Split the data into a list, then transform it to a dataframe.
import re
import pandas as pd

data = '''col_a, col_b
abc, inc., 5
xyz corb, 10''' + '\n'

reArr = re.findall(r'(.*),([^,]+)\n', data)
df = pd.DataFrame(reArr[1:], columns=reArr[0])
print(df)
       col_a  col_b
0  abc, inc.      5
1   xyz corb     10
EDIT:
Thanks to tdelaney's comment below, see if this works:
pd.read_csv('foo.csv', delimiter=r",(?!( [\w\d]*).,)").dropna(axis=1)
OLD:
Using delimiter=",(?!.*,)" in read_csv seems to solve this for me.
EDIT (after updated question with an additional column):
Solution 1:
You can create a function with the bad column as a parameter and use split and concat to correct the dataframe depending on that bad column. Please note that the bad_col parameter in my function is the column number, where we start counting at 1, rather than 0 (e.g. 1, 2, 3, etc. instead of 0, 1, 2, etc.):
import pandas as pd
import numpy as np
from io import StringIO

data = StringIO('''
col, col_a, col_b
000, abc, inc., 5
111, xyz corb, 10
''')

df = pd.read_csv(data, sep="|")

def fix_csv(df, bad_col):
    cols = df.columns.str.split(', ')[0]
    x = len(cols) - bad_col
    tmp = df.iloc[:, 0].str.split(', ', expand=True, n=x)
    df = pd.concat([tmp.iloc[:, 0],
                    tmp.iloc[:, -1].str.rsplit(', ', expand=True, n=x)],
                   axis=1)
    df.columns = cols
    return df

fix_csv(df, bad_col=2)
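With the sample data above, fix_csv(df, bad_col=2) should return something like:
   col      col_a col_b
0  000  abc, inc.     5
1  111   xyz corb    10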
Solution 2 (this is if you have issues in multiple columns and you need to use more brute force):
From the comments, it sounds like there could be multiple columns affected, as you mentioned only 1 "so far".
As such, this might be a little bit of a project to clean up the data. The following code can give you an idea of how to do that. The bottom line is that you can create two different dataframes: 1) the first dataframe has the minimum number of commas (i.e. the rows without any issues); 2) the other dataframe has all of the rows with issues. I've shown how you can clean the data to get the correct number of columns, then change the data back and concat the two dataframes.
import pandas as pd
import numpy as np
from io import StringIO

data = StringIO('''
col, col_a, col_b
000, abc, inc., 5
111, xyz corb, 10
''')

df = pd.read_csv(data, sep="|")
cols = df.columns.str.split(', ')[0]
s = df.iloc[:, 0].str.count(',')

df1 = df.copy()[s.eq(s.min())]
df1 = df1.iloc[:, 0].str.split(', ', expand=True)
df1.columns = cols

df2 = df.copy()[s.gt(s.min())]
# inspect this dataframe manually to see how many rows are affected, which columns, etc.
# clean up df2 with some .replace so all rows have the same number of commas
original = [', inc.', ', corp.']
temp = [' inc.', ' corp.']
df2.iloc[:, 0] = df2.iloc[:, 0].replace(original, temp, regex=True)
df2 = df2.iloc[:, 0].str.split(', ', expand=True)
df2.columns = cols
# clean up df2 by changing back to the original values
df2['col_a'] = df2['col_a'].replace(temp, original, regex=True)  # you can do this with other columns as well

df3 = pd.concat([df1, df2]).sort_index()
df3
Out[1]:
col col_a col_b
0 000 abc, inc. 5
1 111 xyz corb 10
Solution 3: Previous Solution (for original question when problem was only in first column - for reference)
You can read in with sep="|" as that | character is not in your .csv, so it reads all of the data into one column.
The main assumption of my solution is that the problematic column is the first column only. I use rsplit(', ') and limit the number of splits to the total number of columns minus 1 (with the example data, this is 2-1=1). Hopefully this works with your actual data, or at least gives you some idea. If your data is separated by "," instead of ", ", adjust my splits accordingly.
import pandas as pd
import numpy as np
from io import StringIO
data = StringIO('''
col_a, col_b
abc, inc., 5
xyz corb, 10
''')
df = pd.read_csv(data, sep="|")
cols = df.columns.str.split(', ')[0]
x = len(cols) - 1
df = df.iloc[:,0].str.rsplit(', ', expand=True, n=x)
df.columns = cols
df
Out[1]:
col_a col_b
0 abc, inc. 5
1 xyz corb 10

Python remove everything after specific string and loop through all rows in multiple columns in a dataframe

I have a file full of URL paths like below spanning across 4 columns in a dataframe that I am trying to clean:
Path1 = ["https://contentspace.global.xxx.com/teams/Australia/WA/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FWA%2FDocuments%2FIn%20Scope&FolderCTID\
=0x012000EDE8B08D50FC3741A5206CD23377AB75&View=%7B287FFF9E%2DD60C%2D4401%2D9ECD%2DC402524F1D4A%7D"]
I want to remove everything after a specific string, which I defined as "string1", and I would like to loop through all 4 columns in the dataframe, defined as "df_MasterData":
string1 = "&FolderCTID"
import pandas as pd
df_MasterData = pd.read_excel(FN_MasterData)
cols = ['Column_A', 'Column_B', 'Column_C', 'Column_D']
for i in cols:
    # Objective: Replace "&FolderCTID", delete all string after
    string1 = "&FolderCTID"
    # Method 1
    df_MasterData[i] = df_MasterData[i].str.split(string1).str[0]
    # Method 2
    df_MasterData[i] = df_MasterData[i].str.split(string1).str[1].str.strip()
    # Method 3
    df_MasterData[i] = df_MasterData[i].str.split(string1)[:-1]
I did search on Google and found similar solutions, but none of them work.
Can any guru shed some light on this? Any assistance is appreciated.
Added below are a few example rows in columns A and B for these URLs:
Column_A = ['https://contentspace.global.xxx.com/teams/Australia/NSW/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FNSW%2FDocuments%2FIn%20Scope%2FA%20I%20TOPPER%20GROUP&FolderCTID=\
0x01200016BC4CE0C21A6645950C100F37A60ABD&View=%7B64F44840%2D04FE%2D4341%2D9FAC%2D902BB54E7F10%7D',\
'https://contentspace.global.xxx.com/teams/Australia/Victoria/Documents/Forms/AllItems.aspx?RootFolder\
=%2Fteams%2FAustralia%2FVictoria%2FDocuments%2FIn%20Scope&FolderCTID=0x0120006984C27BA03D394D9E2E95FB\
893593F9&View=%7B3276A351%2D18C1%2D4D32%2DADFF%2D54158B504FCC%7D']
Column_B = ['https://contentspace.global.xxx.com/teams/Australia/WA/Documents/Forms/AllItems.aspx?\
RootFolder=%2Fteams%2FAustralia%2FWA%2FDocuments%2FIn%20Scope&FolderCTID=0x012000EDE8B08D50FC3741A5\
206CD23377AB75&View=%7B287FFF9E%2DD60C%2D4401%2D9ECD%2DC402524F1D4A%7D',\
'https://contentspace.global.xxx.com/teams/Australia/QLD/Documents/Forms/AllItems.aspx?RootFolder=%\
2Fteams%2FAustralia%2FQLD%2FDocuments%2FIn%20Scope%2FAACO%20GROUP&FolderCTID=0x012000E689A6C1960E8\
648A90E6EC3BD899B1A&View=%7B6176AC45%2DC34C%2D4F7C%2D9027%2DDAEAD1391BFC%7D']
This is how I would do it:
First, declare a variable with your target columns.
Then use stack() and str.split to get your target output.
Finally, unstack and reapply the output to your original df.
cols_to_slice = ['Column_A', 'Column_B', 'Column_C', 'Column_D']
string1 = "&FolderCTID"
df[cols_to_slice].stack().str.split(string1, expand=True)[0].unstack(1)
If you want to replace these columns in your target df, then simply do -
df[cols_to_slice] = df[cols_to_slice].stack().str.split(string1, expand=True)[0].unstack(1)
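As a quick sanity check (not from the original answer), here is a minimal sketch on made-up data; the column names and toy URLs are placeholders:
import pandas as pd

string1 = "&FolderCTID"
cols_to_slice = ['Column_A', 'Column_B']
df = pd.DataFrame({
    'Column_A': ['https://example.com/a?RootFolder=x&FolderCTID=0x0123&View=1'],
    'Column_B': ['https://example.com/b?RootFolder=y&FolderCTID=0x0456&View=2'],
})

# index 0 keeps the part before string1, i.e. everything after it is dropped
df[cols_to_slice] = (df[cols_to_slice].stack()
                     .str.split(string1, expand=True)[0]
                     .unstack(1))
print(df)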
You should first get the index of the string using:
indexes = len(string1) + df_MasterData[i].str.find(string1)
# this gives the position just past the end of string1;
# if you don't want to keep string1 itself in the result, use this instead:
indexes = df_MasterData[i].str.find(string1)
Now slice each value up to its index (note that .str[:n] only accepts a scalar bound, so slice row by row):
df_MasterData[i] = [s[:n] for s, n in zip(df_MasterData[i], indexes)]
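A minimal sketch of this find-based approach on made-up data (the column name and URL are placeholders):
import pandas as pd

string1 = "&FolderCTID"
df_MasterData = pd.DataFrame({
    'Column_A': ['https://example.com/a?RootFolder=x&FolderCTID=0x0123&View=1'],
})

indexes = df_MasterData['Column_A'].str.find(string1)  # position where string1 starts
df_MasterData['Column_A'] = [s[:n] for s, n in zip(df_MasterData['Column_A'], indexes)]
print(df_MasterData['Column_A'].iloc[0])
# https://example.com/a?RootFolder=x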

Split dataframe by certain condition but keep the original dataframe

I have a dataframe "bb" like this:
Response Unique Count
I love it so much! 246_0 1
This is not bad, but can be better. 246_1 2
Well done, let's do it. 247_0 1
If Count is larger than 1, I would like to split the string and make the dataframe "bb" become this (expected result):
Response Unique
I love it so much! 246_0
This is not bad 246_1_0
but can be better. 246_1_1
Well done, let's do it. 247_0
My code:
bb = pd.DataFrame(bb[bb['Count'] > 1].Response.str.split(',').tolist(),
                  index=bb[bb['Count'] > 1].Unique).stack()
bb = bb.reset_index()[[0, 'Unique']]
bb.columns = ['Response', 'Unique']
bb = bb.replace('', np.nan)
bb = bb.dropna()
print(bb)
But the result is like this:
Response Unique
0 This is not bad 246_1
1 but can be better. 246_1
How can I keep the original dataframe in this case?
First split only the values matching the condition into a new helper Series, then add counter values with GroupBy.cumcount, but only for duplicated index values identified by Index.duplicated:
s = df.loc[df.pop('Count') > 1, 'Response'].str.split(',', expand=True).stack()
df1 = df.join(s.reset_index(drop=True, level=1).rename('Response1'))
df1['Response'] = df1.pop('Response1').fillna(df1['Response'])
mask = df1.index.duplicated(keep=False)
df1.loc[mask, 'Unique'] += df1[mask].groupby(level=0).cumcount().astype(str).radd('_')
df1 = df1.reset_index(drop=True)
print (df1)
Response Unique
0 I love it so much! 246_0
1 This is not bad 246_1_0
2 but can be better. 246_1_1
3 Well done! 247_0
EDIT: If you need _0 for all other values too, remove the mask:
s = df.loc[df.pop('Count') > 1, 'Response'].str.split(',', expand=True).stack()
df1 = df.join(s.reset_index(drop=True, level=1).rename('Response1'))
df1['Response'] = df1.pop('Response1').fillna(df1['Response'])
df1['Unique'] += df1.groupby(level=0).cumcount().astype(str).radd('_')
df1 = df1.reset_index(drop=True)
print (df1)
Response Unique
0 I love it so much! 246_0_0
1 This is not bad 246_1_0
2 but can be better. 246_1_1
3 Well done! 247_0_0
Step-wise, we can solve this problem as follows:
Split your dataframe by Count.
Use this function (explode_str, from the linked answer, shown below) to explode the string to rows.
Group by the index and use cumcount to get the correct Unique column values.
Finally, concat the dataframes together again.
df1 = df[df['Count'].ge(2)] # all rows which have a count 2 or higher
df2 = df[df['Count'].eq(1)] # all rows which have count 1
df1 = explode_str(df1, 'Response', ',') # explode the string to rows on comma delimiter
# Create the correct unique column
df1['Unique'] = df1['Unique'] + '_' + df1.groupby(df1.index).cumcount().astype(str)
df = pd.concat([df1, df2]).sort_index().drop('Count', axis=1).reset_index(drop=True)
Response Unique
0 I love it so much! 246_0
1 This is not bad 246_1_0
2 but can be better. 246_1_1
3 Well done! 247_0
Function used from linked answer:
def explode_str(df, col, sep):
    s = df[col]
    i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
    return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
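Not from the original answers, but on more recent pandas versions (0.25+) a similar result can be sketched with Series.str.split plus DataFrame.explode instead of the explode_str helper; this assumes the same Response/Unique/Count layout as the question:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'Response': ["I love it so much!",
                 "This is not bad, but can be better.",
                 "Well done, let's do it."],
    'Unique': ['246_0', '246_1', '247_0'],
    'Count': [1, 2, 1],
})

# split only the rows whose Count is larger than 1, then explode the lists to rows
to_split = df['Count'] > 1
out = df.copy()
out.loc[to_split, 'Response'] = out.loc[to_split, 'Response'].str.split(',')
out = out.explode('Response')
out['Response'] = out['Response'].str.strip()

# append a _<n> counter to Unique, but only for rows that were actually split
counter = out.groupby(level=0).cumcount()
mask = out.index.duplicated(keep=False)
out['Unique'] = np.where(mask, out['Unique'] + '_' + counter.astype(str), out['Unique'])

out = out.drop(columns='Count').reset_index(drop=True)
print(out)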

How can I split a column into 2 in the correct way?

I am web-scraping tables from a website and putting them into an Excel file.
My goal is to split a column into 2 columns in the correct way.
The column I want to split: "FLIGHT"
I want this form:
First example: KL744 --> KL and 0744
Second example: BE1013 --> BE and 1013
So, I need to separate the FIRST 2 characters (into the first column), and after that the next 1-4 characters. If there are 4, that's okay and I keep them; if 3, I want to put a 0 before them; if 2, I want to put 00 before them (so my goal is to get 4 characters/digits in the second column).
How Can I do this?
Here is my relevant code, which already contains some formatting code:
df2 = pd.DataFrame(datatable, columns=cols)
df2["UPLOAD_TIME"] = datetime.now()
mask = np.column_stack([df2[col].astype(str).str.contains(r"Scheduled", na=True) for col in df2])
df3 = df2.loc[~mask.any(axis=1)]
if os.path.isfile("output.csv"):
    df1 = pd.read_csv("output.csv", sep=";")
    df4 = pd.concat([df1, df3])
    df4.to_csv("output.csv", index=False, sep=";")
else:
    df3.to_csv("output.csv", index=False, sep=";")
Here is the Excel screenshot of my table (not included):
You can use indexing with str together with zfill:
df = pd.DataFrame({'FLIGHT':['KL744','BE1013']})
df['a'] = df['FLIGHT'].str[:2]
df['b'] = df['FLIGHT'].str[2:].str.zfill(4)
print (df)
FLIGHT a b
0 KL744 KL 0744
1 BE1013 BE 1013
I believe in your code you need:
df2 = pd.DataFrame(datatable,columns = cols)
df2['a'] = df2['FLIGHT'].str[:2]
df2['b'] = df2['FLIGHT'].str[2:].str.zfill(4)
df2["UPLOAD_TIME"] = datetime.now()
...
...
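Not part of the original answer, but an equivalent split can also be sketched with a regex via str.extract, assuming the airline prefix is always exactly two letters:
import pandas as pd

df = pd.DataFrame({'FLIGHT': ['KL744', 'BE1013']})

# group 1: the two-letter carrier code; group 2: the flight number, zero-padded to 4 digits
df[['a', 'b']] = df['FLIGHT'].str.extract(r'^([A-Za-z]{2})(\d{1,4})$')
df['b'] = df['b'].str.zfill(4)
print(df)
#    FLIGHT   a     b
# 0   KL744  KL  0744
# 1  BE1013  BE  1013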
