Extract integers after double space with regex - python

I have a dataframe from which I want to extract the part after a double space. For all rows in column NAME there is a double white space after the company name, before the integers.
   NAME                                   INVESTMENT  PERCENT
0  APPLE COMPANY A  57 638 232 stocks     OIL LTD     0.12322
1  BANANA 1 COMPANY B  12 946 201 stocks  GOLD LTD    0.02768
2  ORANGE COMPANY C  8 354 229 stocks     GAS LTD     0.01786
df = pd.DataFrame({
    'NAME': ['APPLE COMPANY A  57 638 232 stocks',
             'BANANA 1 COMPANY B  12 946 201 stocks',
             'ORANGE COMPANY C  8 354 229 stocks'],
    'PERCENT': [0.12322, 0.02768, 0.01786]
})
I had this earlier, but it also includes integers that are part of the company name:
df['STOCKS']=df['NAME'].str.findall(r'\b\d+\b').apply(lambda x: ''.join(x))
Instead I tried to split on double spaces:
df['NAME'].str.split(r'(\s{2})')
which gives output:
0 [APPLE COMPANY A, , 57 638 232 stocks]
1 [BANANA 1 COMPANY B, , 12 946 201 stocks]
2 [ORANGE COMPANY C, , 8 354 229 stocks]
However, I want the integers that occur after double spaces to be joined/merged and put into a new column.
                 NAME  PERCENT    STOCKS
0     APPLE COMPANY A  0.12322  57638232
1  BANANA 1 COMPANY B  0.02768  12946201
2    ORANGE COMPANY C  0.01786   8354229
How can I modify my second function to do what I want?

Following the original logic you may use
df['STOCKS'] = df['NAME'].str.extract(r'\s{2,}(\d+(?:\s\d+)*)', expand=False).str.replace(r'\s+', '', regex=True)
df['NAME'] = df['NAME'].str.replace(r'\s{2,}\d+(?:\s\d+)*\s+stocks', '', regex=True)
Output:
                 NAME  PERCENT    STOCKS
0     APPLE COMPANY A  0.12322  57638232
1  BANANA 1 COMPANY B  0.02768  12946201
2    ORANGE COMPANY C  0.01786   8354229
Details
\s{2,}(\d+(?:\s\d+)*) extracts the first run of whitespace-separated digit chunks that follows two or more whitespace characters, and .str.replace(r'\s+', '', regex=True) then removes any whitespace left inside the extracted text.
.str.replace(r'\s{2,}\d+(?:\s\d+)*\s+stocks', '', regex=True) updates the text in the NAME column: it removes two or more whitespaces, the whitespace-separated digit chunks, and then 1+ whitespaces and the word stocks. If other words can follow, the trailing \s+stocks may be replaced with .*.
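Assembled into a runnable sketch (reusing the sample frame from the question, with the double spaces restored):

```python
import pandas as pd

# Note: the double space after each company name is what the regex relies on.
df = pd.DataFrame({
    'NAME': ['APPLE COMPANY A  57 638 232 stocks',
             'BANANA 1 COMPANY B  12 946 201 stocks',
             'ORANGE COMPANY C  8 354 229 stocks'],
    'PERCENT': [0.12322, 0.02768, 0.01786],
})

# Extract the digit chunks after 2+ whitespaces, then strip internal spaces.
df['STOCKS'] = (df['NAME']
                .str.extract(r'\s{2,}(\d+(?:\s\d+)*)', expand=False)
                .str.replace(r'\s+', '', regex=True))
# Remove the extracted part (and the trailing word) from NAME.
df['NAME'] = df['NAME'].str.replace(r'\s{2,}\d+(?:\s\d+)*\s+stocks', '', regex=True)
print(df)
```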

Another pandas approach, which will cast STOCKS to numeric type:
df_split = (df['NAME'].str.extractall(r'^(?P<NAME>.+)\s{2}(?P<STOCKS>[\d\s]+)')
                      .reset_index(level=1, drop=True))
df_split['STOCKS'] = pd.to_numeric(df_split.STOCKS.str.replace(r'\D', '', regex=True))
Assign these columns back into your original DataFrame:
df[['NAME', 'STOCKS']] = df_split[['NAME', 'STOCKS']]
                 NAME    STOCKS  PERCENT
0     APPLE COMPANY A  57638232  0.12322
1  BANANA 1 COMPANY B  12946201  0.02768
2    ORANGE COMPANY C   8354229  0.01786

You can use lookbehind and lookahead operators.
''.join(re.findall(r'(?<=\s{2})(.*)(?=stocks)', string)).replace(' ', '')
This captures all characters between the double space and the word stocks, then replaces the remaining spaces with the empty string.
Another solution using string slicing (note the double space inside find):
df["NAME"].apply(lambda x: x[x.find('  ')+2:x.find('stocks')-1].replace(' ', ''))
Reference:-
Look_behind
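For a single string, the lookaround approach runs like this (a sketch using one sample value from the question):

```python
import re

s = 'APPLE COMPANY A  57 638 232 stocks'
# everything between the double space and the word 'stocks',
# with the remaining spaces stripped out
stocks = ''.join(re.findall(r'(?<=\s{2})(.*)(?=stocks)', s)).replace(' ', '')
print(stocks)
```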

You can try
df['STOCKS'] = df['NAME'].str.split('  ').str[1].str.replace(' stocks', '').str.replace(' ', '')
df['NAME'] = df['NAME'].str.split('  ').str[0]
Note the element-wise .str[...] indexing: a plain [1] would index the Series itself, not the list inside each row.

The STOCKS column can be built without regex by splitting on the double space:
df['STOCKS'] = df['NAME'].apply(lambda x: ''.join(x.split('  ')[1].split(' ')[:-1]))
df['NAME'] = df['NAME'].str.replace(r'\s{2}\d+(?:\s\d+)*.*', '', regex=True)


Can't properly replace blank values using pandas

I'm a python beginner, so I'm practicing some data analysis using pandas in a dataframe with a list of restaurants with a Michelin star (restaurants_df).
When I show, for example, the first 5 rows I notice that in the "price" column (object type) of row 4 I have a blank value:
In [ ]: restaurants_df.head()
Out[ ]:
name year latitude longitude city region zipCode cuisine price
0 Kilian Stuba 2019 47.348580 10.17114 Kleinwalsertal Austria 87568 Creative $
1 Pfefferschiff 2019 47.837870 13.07917 Hallwang Austria 5300 Classic cuisine $
2 Esszimmer 2019 47.806850 13.03409 Salzburg Austria 5020 Creative $
3 Carpe Diem 2019 47.800010 13.04006 Salzburg Austria 5020 Market cuisine $
4 Edvard 2019 48.216503 16.36852 Wien Austria 1010 Modern cuisine
Then I check how many NaN values are in each column. In the case of the price column there are 151 values:
In [ ]: restaurants_df.isnull().sum()
Out[ ]: name 0
year 0
latitude 0
longitude 0
city 2
region 0
zipCode 149
cuisine 0
price 151
dtype: int64
After, I replace those values with the string "No Price", and confirm that all values have been replaced.
In [ ]: restaurants_df["price"].fillna("No Price", inplace = True)
restaurants_df.isnull().sum()
Out[ ]: name 0
year 0
latitude 0
longitude 0
city 0
region 0
zipCode 0
cuisine 0
price 0
dtype: int64
However, when I show the first 5 rows, the problem persists.
In [ ]: restaurants_df.head()
Out[ ]:
name year latitude longitude city region zipCode cuisine price
0 Kilian Stuba 2019 47.348580 10.17114 Kleinwalsertal Austria 87568 Creative $
1 Pfefferschiff 2019 47.837870 13.07917 Hallwang Austria 5300 Classic cuisine $
2 Esszimmer 2019 47.806850 13.03409 Salzburg Austria 5020 Creative $
3 Carpe Diem 2019 47.800010 13.04006 Salzburg Austria 5020 Market cuisine $
4 Edvard 2019 48.216503 16.36852 Wien Austria 1010 Modern cuisine
Any idea why this is happening and how I can solve it? Thanks in advance!
Viewing the dataset over at Kaggle shows that the first four restaurants have 5 '$' while the fifth has 4 '$'. Thus, I'm guessing that the Jupyter notebook is just not displaying all the '$' visually; internally the data is correct.
To double-check, try running
df.price
and see what you get. I think this has to do with Jupyter's HTML handler when it tries to display four dollar signs, which get interpreted as LaTeX math delimiters. You can look at this issue that is similar to yours.
If you're bothered by this, simply replace the '$' symbols with a number using something like
df.replace({'price': {'$': 1, '$$': 2, '$$$': 3, '$$$$': 4, '$$$$$': 5}})
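For example, with a small frame mimicking two of the rows in question (names taken from the sample output; dollar counts per the Kaggle data):

```python
import pandas as pd

restaurants_df = pd.DataFrame({'name': ['Kilian Stuba', 'Edvard'],
                               'price': ['$$$$$', '$$$$']})
# map the dollar-sign strings to plain integers, so no renderer is involved
restaurants_df = restaurants_df.replace(
    {'price': {'$': 1, '$$': 2, '$$$': 3, '$$$$': 4, '$$$$$': 5}})
print(restaurants_df)
```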
What I understand is that you are dealing with both blank values and null values. These are handled differently. Check out this question to understand how to handle them.
I don't think pandas will recognize cells containing '' as null. For instance:
df2 = pd.DataFrame(np.array([[1, 2, ''], [4, 5, 6], [7, 8, 9]]),
                   columns=['a', 'b', 'c'])
then:
df2.isnull()
a b c
0 False False False
1 False False False
2 False False False
see here, and try:
pd.options.mode.use_inf_as_na = True
EDIT:
You could also try replacing with:
df2.replace({'': 'No Price'}, inplace=True)
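Put together with the example frame above, a minimal sketch:

```python
import pandas as pd
import numpy as np

# np.array coerces everything to strings, so the blank cell is '' rather than NaN
df2 = pd.DataFrame(np.array([[1, 2, ''], [4, 5, 6], [7, 8, 9]]),
                   columns=['a', 'b', 'c'])
# empty strings are not NaN, so fillna() misses them; replace them directly
df2.replace({'': 'No Price'}, inplace=True)
print(df2)
```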
EDIT2: I believe @AKareem has the solution, but to expand, you can use this to escape the LaTeX (note each '$' needs its own backslash):
restaurants_df.replace({'price': {
    '$': r'\$',
    '$$': r'\$\$',
    '$$$': r'\$\$\$',
    '$$$$': r'\$\$\$\$',
    '$$$$$': r'\$\$\$\$\$'}},
    inplace=True)
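A generic alternative to enumerating every price level is to prefix each '$' with a backslash (a sketch; the single-column frame here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({'price': ['$', '$$$$', '$$$$$']})
# prefix every '$' with a backslash so Jupyter's Markdown/LaTeX
# renderer displays the signs literally instead of starting math mode
df['price'] = df['price'].str.replace('$', r'\$', regex=False)
print(df['price'].tolist())
```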

Use Excel sheet to create dictionary in order to replace values

I have an Excel file with product names. The first row holds the categories (A1: Water, B1: Soft Drinks) and each cell below is a product in that category (A2: Sparkling, A3: Still, B2: Coca Cola, B3: Orange Juice, B4: Lemonade, etc.). I want to keep this list in a viewable format (not comma-separated, etc.), as that makes it very easy for anybody to update the product names (a second person runs the script without understanding it).
If it helps, I can also save the Excel file as CSV, and I can also move the categories from the top row to the first column.
I would like to replace the cells of a dataframe (df) with the product categories. For example, Coca Cola would become Soft Drinks. If a product is not in the Excel file it should not be replaced (e.g. Cookie).
print(df)
Product Quantity
0 Coca Cola 1234
1 Cookie 4
2 Still 333
3 Chips 88
Expected Outcome:
print (df1)
Product Quantity
0 Soft Drinks 1234
1 Cookie 4
2 Water 333
3 Snacks 88
Use DataFrame.melt with DataFrame.dropna (or DataFrame.stack) to build a helper Series, and then use Series.replace:
s = df1.melt().dropna().set_index('value')['variable']
Alternative:
s = df1.stack().reset_index(name='v').set_index('v')['level_1']
df['Product'] = df['Product'].replace(s)
#if performance is important
#df['Product'] = df['Product'].map(s).fillna(df['Product'])
print (df)
Product Quantity
0 Soft Drinks 1234
1 Cookie 4
2 Water 333
3 Snacks 88
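A self-contained sketch, assuming the Excel sheet reads into a frame with one column per category (the df1/df names follow the question; the 'Snacks' column is illustrative):

```python
import pandas as pd

# hypothetical category sheet as read from Excel: one column per category,
# products listed below each header, shorter columns padded with NaN/None
df1 = pd.DataFrame({'Water': ['Sparkling', 'Still'],
                    'Soft Drinks': ['Coca Cola', 'Orange Juice'],
                    'Snacks': ['Chips', None]})
df = pd.DataFrame({'Product': ['Coca Cola', 'Cookie', 'Still', 'Chips'],
                   'Quantity': [1234, 4, 333, 88]})

# product -> category lookup Series
s = df1.melt().dropna().set_index('value')['variable']
# map known products to their category, keep unknown ones as-is
df['Product'] = df['Product'].map(s).fillna(df['Product'])
print(df)
```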

Removing non-alphanumeric symbols in dataframe

How do I remove non-alphanumeric characters from the values in the dataframe? I only managed to convert everything to lower case.
def doubleAwardList(self):
    dfwinList = pd.DataFrame()
    dfloseList = pd.DataFrame()
    dfwonandLost = pd.DataFrame()
    # self.dfWIN... and self.dfLOSE... is just the function used to call the files chosen by user
    groupby_name = self.dfWIN.groupby("name")
    groupby_nameList = self.dfLOSE.groupby("name _List")
    list4 = []
    list5 = []
    notAwarded = "na"
    for x, group in groupby_name:
        if x != notAwarded:
            list4.append(str.lower(str(x)))
    dfwinList = pd.DataFrame(list4)
    for x, group in groupby_nameList:
        list5.append(str.lower(str(x)))
    dfloseList = pd.DataFrame(list5)
Data sample: basically I mainly need to remove the full stops and hyphens, as I need to compare the names to another file; the naming isn't very consistent, so I had to remove the non-alphanumeric characters for a much more accurate result.
creative-3
smart tech pte. ltd.
nutritive asia
asia's first
desired result:
creative 3
smart tech pte ltd
nutritive asia
asia s first
Use DataFrame.replace only and add whitespace to pattern:
df = df.replace('[^a-zA-Z0-9 ]', '', regex=True)
If one column - Series:
df = pd.DataFrame({'col': ['creative-3', 'smart tech pte. ltd.',
                           'nutritive asia', "asia's first"],
                   'col2': range(4)})
print (df)
col col2
0 creative-3 0
1 smart tech pte. ltd. 1
2 nutritive asia 2
3 asia's first 3
df['col'] = df['col'].replace('[^a-zA-Z0-9 ]', '', regex=True)
print (df)
col col2
0 creative3 0
1 smart tech pte ltd 1
2 nutritive asia 2
3 asias first 3
EDIT:
If there are multiple columns, it is possible to select only object (i.e. string) columns and, if necessary, cast them to strings:
cols = df.select_dtypes('object').columns
print (cols)
Index(['col'], dtype='object')
df[cols] = df[cols].astype(str).replace('[^a-zA-Z0-9 ]', '', regex=True)
print (df)
col col2
0 creative3 0
1 smart tech pte ltd 1
2 nutritive asia 2
3 asias first 3
Why not just the below (this assumes df is a single column, i.e. a Series; I also made it lower-case, btw):
df = df.replace('[^a-zA-Z0-9]', '', regex=True).str.lower()
Then now:
print(df)
Will get the desired data-frame
Update:
For a whole DataFrame of strings, try:
df = df.apply(lambda x: x.str.replace('[^a-zA-Z0-9]', '', regex=True).str.lower())
If only one column, do:
df['your col'] = df['your col'].str.replace('[^a-zA-Z0-9]', '', regex=True).str.lower()

Pandas dataframe: count number of string value is in row for specific ID

I have the following use case:
I want to make a dataframe where, for each row, I have a column showing how many interactions there have been for this ID (user) across the categories. The hardest part for me is that matches must not be double-counted, while a match in just one of the categories is enough to count as 1.
So for example I have:
richtingen id
0 Marketing, Sales 1110
1 Marketing, Sales 1110
2 Finance 220
3 Marketing, Engineering 1110
4 IT 3300
Now I want to create a third column where I can see how many times this ID has interacted with any of these categories in total. Each comma-separated value is a category of its own, so for example "Marketing, Sales" is the two categories Marketing and Sales. To get a +1 you only need a match with another row where the ID is the same and one of the categories matches; for index 0 the result would be 3 (indexes 0, 1 and 3 match). The output for the example should be:
richtingen id freq
0 Marketing, Sales 1110 3
1 Marketing, Sales 1110 3
2 Finance 220 1
3 Marketing, Engineering 1110 3
4 IT 3300 1
The hard part for me seems to be that I can't split all categories into new rows, as then you might start double-counting. For example, index 0 matches both Marketing and Sales of index 1, and I want that to add 1, not 2.
The code I have so far is:
df['freq'] = df.groupby(['id', 'richtingen'])['id'].transform('count')
this only matches identical combination of categories though.
Other things I've tried:
- creating a new column with all vacancies split into an array:
df['splitted'] = df.richtingen.apply(lambda x: str(x.split(",")))
and then the plan was to use something along this code in combination with groupby on id to count number of times it is true per item:
if any(t < 0 for t in x):
# do something
I couldn't get this to work either.
I tried splitting categories in new rows, or columns but then got an issue of double counting.
For example using code suggested:
df['richtingen'].str.split(', ',expand=True)
Gives me the following:
0 1 id
0 Marketing Sales 1110
1 Marketing Sales 1110
2 dDD None 220
3 Marketing Engineering 1110
4 ddsad None 3300
But then I would need code that goes over every row, checks the ID, lists the values in the columns, checks whether they are contained in any of the other columns (where the ID is the same), and if one of them matches adds 1 to freq. I suspect this might be possible with groupby, but I can't figure it out.
(Solution suggested by Jezrael below):
If you need to count unique categories per id: first split, create a MultiIndex Series by stack, and last use SeriesGroupBy.nunique with map for the new column of the original DataFrame.
I think this solution is close, but at the moment it counts the total number of unique categories, not the unique number of interactions with categories. For example, the output at index 2 here is 2, while it should be 1 (as the user only interacted with the categories once).
richtingen id freq
0 Marketing, Sales 1110 3
1 Marketing, Sales 1110 3
2 Finance, Accounting 220 2
3 Marketing, Engineering 1110 3
4 IT 3300 1
Hope I made myself clear and anyone knows how to fix this! In total there will be around 13 categories, always in one cell, but divided by a comma.
For msr_003:
id richtingen freq_x freq_y
0 220 Finance, IT 0 2
1 1110 Finance, IT 1 2
2 1110 Marketing, Sales 2 4
3 1110 Marketing, Sales 3 4
4 220 Marketing 4 1
5 220 Finance 5 2
6 1110 Marketing, Sales 6 4
7 3300 IT 7 1
8 1110 Marketing, IT 8 4
If you need to count unique categories per id: first split, create a MultiIndex Series by stack, and last use SeriesGroupBy.nunique with map for the new column of the original DataFrame:
s = (df.set_index('id')['richtingen']
.str.split(', ',expand=True)
.stack()
.groupby(level=0)
.nunique())
print (s)
id
220 1
1110 3
3300 1
dtype: int64
df['freq'] = df['id'].map(s)
print (df)
richtingen id freq
0 Marketing, Sales 1110 3
1 Marketing, Sales 1110 3
2 Finance 220 1
3 Marketing, Engineering 1110 3
4 IT 3300 1
Detail:
print (df.set_index('id')['richtingen'].str.split(', ',expand=True).stack())
id
1110 0 Marketing
1 Sales
0 Marketing
1 Sales
220 0 Finance
1110 0 Marketing
1 Engineering
3300 0 IT
dtype: object
I just modified your code as below.
count_unique = pd.DataFrame(
    {'richtingen': ["Finance, IT", "Finance, IT", "Marketing, Sales",
                    "Marketing, Sales", "Marketing", "Finance",
                    "Marketing, Sales", "IT", "Marketing, IT"],
     'id': [220, 1110, 1110, 1110, 220, 220, 1110, 3300, 1110]})
count_unique['freq'] = list(range(0, len(count_unique)))
grp = (count_unique.groupby(['richtingen', 'id'])
                   .agg({'freq': 'count'})
                   .reset_index(level=[0, 1]))
pd.merge(count_unique, grp, on=('richtingen', 'id'), how='left')
I am not that into pandas, but I think you may have some luck by adding 13 new columns based on richtingen, each column containing one or no category. You can use DataFrame.apply or a similar function to compute the values when creating the columns.
Then you can take it from there by OR-ing stuff...
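That indicator-column idea can be sketched with str.get_dummies; the overlaps helper is hypothetical, and it counts, per row, how many rows of the same id share at least one category, which matches the expected output in the question:

```python
import pandas as pd

df = pd.DataFrame({'richtingen': ['Marketing, Sales', 'Marketing, Sales', 'Finance',
                                  'Marketing, Engineering', 'IT'],
                   'id': [1110, 1110, 220, 1110, 3300]})

# one boolean indicator column per category
d = df['richtingen'].str.get_dummies(sep=', ').astype(bool)

def overlaps(i):
    """Count rows with the same id that share at least one category with row i."""
    same_id = df['id'] == df.loc[i, 'id']
    # row-wise AND with row i's categories, then check for any overlap
    return (d[same_id] & d.loc[i]).any(axis=1).sum()

df['freq'] = [overlaps(i) for i in df.index]
print(df)
```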

Pandas - how to remove spaces in each column in a dataframe?

I'm trying to remove spaces, apostrophes, and double quotes from each column's data using this for loop
for c in data.columns:
    data[c] = data[c].str.strip().replace(',', '').replace('\'', '').replace('\"', '').strip()
but I keep getting this error:
AttributeError: 'Series' object has no attribute 'strip'
data is the data frame and was obtained from an excel file
xl = pd.ExcelFile('test.xlsx');
data = xl.parse(sheetname='Sheet1')
Am I missing something? I added the str but that didn't help. Is there a better way to do this.
I don't want to use the column labels, like so data['column label'], because the text can be different. I would like to iterate each column and remove the characters mentioned above.
incoming data:
id city country
1 Ontario Canada
2 Calgary ' Canada'
3 'Vancouver Canada
desired output:
id city country
1 Ontario Canada
2 Calgary Canada
3 Vancouver Canada
UPDATE: using your sample DF:
In [80]: df
Out[80]:
id city country
0 1 Ontario Canada
1 2 Calgary ' Canada'
2 3 'Vancouver Canada
In [81]: df.replace(r'[,\"\']','', regex=True).replace(r'\s*([^\s]+)\s*', r'\1', regex=True)
Out[81]:
id city country
0 1 Ontario Canada
1 2 Calgary Canada
2 3 Vancouver Canada
OLD answer:
you can use DataFrame.replace() method:
In [75]: df.to_dict('r')
Out[75]:
[{'a': ' x,y ', 'b': 'a"b"c', 'c': 'zzz'},
{'a': "x'y'z", 'b': 'zzz', 'c': ' ,s,,'}]
In [76]: df
Out[76]:
a b c
0 x,y a"b"c zzz
1 x'y'z zzz ,s,,
In [77]: df.replace(r'[,\"\']','', regex=True).replace(r'\s*([^\s]+)\s*', r'\1', regex=True)
Out[77]:
a b c
0 xy abc zzz
1 xyz zzz s
r'\1' is a backreference to the first numbered capturing RegEx group
data[c] does not return a single value; it returns a Series (a whole column of data), which has no strip method of its own.
You can apply the string operations to an entire column via the .str accessor, or element-wise with df.apply.
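Spelled out with the .str accessor, looping over the columns (a sketch; note astype(str) also turns the id column into strings):

```python
import pandas as pd

df = pd.DataFrame({'id': [1, 2, 3],
                   'city': ['Ontario', 'Calgary ', "'Vancouver"],
                   'country': ['Canada', " ' Canada'", 'Canada']})

for c in df.columns:
    # cast to str first so non-string columns don't break the .str accessor,
    # then drop commas/quotes/apostrophes and trim surrounding whitespace
    df[c] = (df[c].astype(str)
                  .str.replace(r"[,'\"]", '', regex=True)
                  .str.strip())
print(df)
```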
