I have the following data, where I would like to extract the value after source= from each cell. Is there a way to create a general regex function that I can apply to other columns as well, to extract the word after the equals sign?
Data Data2
source=book social-media=facebook
source=book social-media=instagram
source=journal social-media=facebook
I'm using Python and I have tried the following:
df['Data'].astype(str).str.replace(r'[a-zA-Z]\=', '', regex=True)
but it didn't work.
You can try this:
df.replace(r'[a-zA-Z]+-?[a-zA-Z]+=', '', regex=True)
It gives you the following result :
Data Data2
0 book facebook
1 book instagram
2 journal facebook
Regex is not required in this situation:
print(df['Data'].apply(lambda x : x.split('=')[-1]))
print(df['Data2'].apply(lambda x : x.split('=')[-1]))
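Since the question asks for a general function that can be applied to other columns, here is a minimal sketch of such a helper (the helper name and the loop over the sample columns are my own, not from the question):

```python
import pandas as pd

# Sketch of a reusable helper: strip everything up to and including
# the last '=' in each cell, so the same function works on any column.
def extract_value(series):
    return series.astype(str).str.replace(r'^.*=', '', regex=True)

df = pd.DataFrame({
    'Data': ['source=book', 'source=book', 'source=journal'],
    'Data2': ['social-media=facebook', 'social-media=instagram', 'social-media=facebook'],
})
for col in ['Data', 'Data2']:
    df[col] = extract_value(df[col])
print(df)
```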
You have to repeat the character class 1 or more times and you don't have to escape the equals sign.
What you can do is make the match a bit broader matching all characters except a whitespace char or an equals sign.
Then set the result to the new value.
import pandas as pd
data = [
"source=book",
"source=journal",
"social-media=facebook",
"social-media=instagram"
]
df = pd.DataFrame(data, columns=["Data"])
df['Data'] = df['Data'].astype(str).str.replace(r'[^\s=]+=', '', regex=True)
print(df)
Output
Data
0 book
1 journal
2 facebook
3 instagram
If there has to be a value after the equals sign, you can also use str.extract
df['Data'] = df['Data'].astype(str).str.extract(r'[^\s=]+=([^\s=]+)')
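A quick sketch of the difference (the 'source=' row is an invented edge case, not from the question): str.extract yields NaN where no value follows the equals sign, instead of leaving the cell unchanged:

```python
import pandas as pd

# Sketch: str.extract returns NaN when there is no value after the
# equals sign, or no equals sign at all.
df = pd.DataFrame({'Data': ['source=book', 'source=', 'journal']})
out = df['Data'].astype(str).str.extract(r'[^\s=]+=([^\s=]+)')[0]
print(out)
```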
I have a pandas dataframe of postcodes which have been concatenated with the two-letter country code. Some of these are Brazilian postcodes and I want to replace the last three characters of any postcode which starts with 'BR' with '000'.
import pandas as pd
data = ['BR86037-890', 'GBBB7', 'BR86071-570','BR86200-000','BR86026-480','BR86082-701', 'GBCW9', 'NO3140']
df = pd.DataFrame(data, columns=['postcode'])
I have tried the below, but it is not changing any of the postcodes:
if df['postcode'].str.startswith('BR').all():
    df["postcode"] = df["postcode"].str.replace(r'.{3}$', '000')
Use str.replace with a capturing group:
df['postcode'] = df['postcode'].str.replace(r'(BR.*)...', r'\g<1>000', regex=True)
# or, more generic
df['postcode'] = df['postcode'].str.replace(r'(BR.*).{3}', r'\g<1>'+'0'*3, regex=True)
Output:
postcode
0 BR86037-000
1 GBBB7
2 BR86071-000
3 BR86200-000
4 BR86026-000
5 BR86082-000
6 GBCW9
7 NO3140
The code is not working because df['postcode'].str.startswith('BR').all() collapses the element-wise check into a single boolean: it is True only if every postcode starts with 'BR'. Since some do not, the condition is False and the replacement line never runs.
Try this:
data = ['BR86037-890', 'GBBB7', 'BR86071-570','BR86200-000','BR86026-480','BR86082-701', 'GBCW9', 'NO3140']
df = pd.DataFrame(data, columns=['postcode'])
mask = df['postcode'].str.startswith('BR')
df.loc[mask, 'postcode'] = df.loc[mask, 'postcode'].str.replace(r'.{3}$', '000', regex=True)
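A short sketch of why the original if branch never executed: startswith is element-wise, and .all() collapses it to a single False here (using a subset of the sample postcodes):

```python
import pandas as pd

# startswith gives one boolean per row; .all() reduces them to one value.
data = ['BR86037-890', 'GBBB7', 'BR86071-570', 'NO3140']
df = pd.DataFrame(data, columns=['postcode'])
mask = df['postcode'].str.startswith('BR')
print(mask.tolist())  # element-wise result
print(mask.all())     # single False, so the original `if` body never ran
```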
I have a pandas DataFrame that's being read in from a CSV that has hostnames of computers including the domain they belong to along with a bunch of other columns. I'm trying to strip out the Domain information such that I'm left with ONLY the Hostname.
DataFrame ex:
name
domain1\computername1
domain1\computername45
dmain3\servername1
dmain3\computername3
domain1\servername64
....
I've tried using both str.strip() and str.replace() with a regex as well as a string literal, but I can't seem to target the domain information correctly.
Examples of what I've tried thus far:
df['name'].str.strip('.*\\')
df['name'].str.replace('.*\\', '', regex = True)
df['name'].str.replace(r'[.*\\]', '', regex = True)
df['name'].str.replace('domain1\\\\', '', regex = False)
df['name'].str.replace('dmain3\\\\', '', regex = False)
None of these seem to make any changes when I spit the DataFrame out using logging.debug(df)
You are already close to the answer, just use:
df['name'] = df['name'].str.replace(r'.*\\', '', regex = True)
which simply adds an r-string prefix to one of the patterns you already tried.
Without the r-string, '.*\\' is the three-character string .*\, which ends with a lone backslash and is not a valid regex. With the r-string, r'.*\\' is the four-character pattern .*\\, which the regex engine interprets as any run of characters followed by one literal backslash, as you expect.
Output:
0 computername1
1 computername45
2 servername1
3 computername3
4 servername64
Name: name, dtype: object
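A minimal sketch of the escaping rules described above, using plain re rather than pandas:

```python
import re

# The non-raw string '.*\\' is the 3-character pattern .*\ with a
# trailing lone backslash, which the regex engine rejects.
try:
    re.compile('.*\\')
except re.error as e:
    print('invalid pattern:', e)

# The raw string r'.*\\' is the 4-character pattern .*\\, i.e. one
# literal backslash, so the domain prefix is stripped as intended.
print(re.sub(r'.*\\', '', r'domain1\computername1'))
```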
You can use .str.split:
df["name"] = df["name"].str.split("\\", n=1).str[-1]
print(df)
Prints:
name
0 computername1
1 computername45
2 servername1
3 computername3
4 servername64
No regex approach with ntpath.basename:
import pandas as pd
import ntpath
df = pd.DataFrame({'name':[r'domain1\computername1']})
df["name"] = df["name"].apply(ntpath.basename)
Results: computername1.
With rsplit:
df["name"] = df["name"].str.rsplit('\\').str[-1]
I'm wondering if someone in the community could help with the following:
Aim: regex-replace substrings in a pandas DataFrame, based on a dictionary I pass as an argument. The key:value replacement should only take place if the dict key is found as a standalone substring (not as part of a word). By standalone substring I mean it starts after a whitespace.
e.g.:
mapping = {
"sweatshirt":"sweat_shirt",
"sweat shirt":"sweat_shirt",
"shirt":"shirts"
}
df = pd.DataFrame([
    ["men sweatshirt"],
    ["men sweat shirt"],
    ["yellow shirt"]
])
df = df.replace(mapping,regex=True)
expected result:
the substring "shirt" within "sweatshirt" should NOT be replaced with "shirts", because there it is part of another word and not a standalone value (\b)
NOTE:
the dictionary I pass is rather long, so ideally there is a way to encode the standalone requirement (\b) as part of the dict I pass to df.replace(dict, regex=True)
Thanks upfront
You can use
df[0].str.replace(fr"\b(?:{'|'.join(mapping)})\b", lambda x: mapping[x.group()], regex=True)
The regex will look like \b(?:sweatshirt|shirt)\b, it will match sweatshirt or shirt as whole words. The match will be passed to a lambda and the corresponding value will be fetched using mapping[x.group()].
Multiword Search Term Update
Since you may have multiword terms to search in the mapping keys, you should make sure the longest search terms come first in the alternation group. That is, \b(?:abc def|abc)\b and not \b(?:abc|abc def)\b.
import pandas as pd
mapping = {
"sweat shirt": "sweat_shirt",
"shirt": "shirts"
}
df = pd.DataFrame([
["men sweatshirt"],
["men sweat shirt"]
])
rx = fr"\b(?:{'|'.join(sorted(mapping, key=len, reverse=True))})\b"
df[0].str.replace(rx, lambda x: mapping[x.group()], regex=True)
Output:
0 men sweatshirt
1 men sweat_shirt
Name: 0, dtype: object
Include the white-space in your pattern! :)
mapping = {
" sweatshirt":" sweat_shirt",
" shirt":" shirts"
}
df = pd.DataFrame([
    ["men sweatshirt"]
])
df = df.replace(mapping, regex=True)
Try this code-
mapping = {
" sweatshirt":" sweat_shirt",
" shirt":" shirts"
}
import pandas as pd
df = pd.DataFrame({'ID': ["men sweatshirt", "black shirt"]})
df = df.apply(lambda x: ' '+x, axis=1).replace(mapping,regex=True).ID.str.strip()
print(df)
I have a dataframe which has some id's. I want to check the pattern of those column values.
Here is how the column looks like-
id: {ASDH12HK,GHST67KH,AGSH90IL,THKI86LK}
I want to write code that can distinguish letters from digits in the pattern above and display an output like 'SSSS99SS', where 'S' represents a letter and '9' represents a digit. This is a large dataset, so I can't predefine the positions of the letters and digits; I want the code to work out the positions itself. I am new to Python, so any leads will be helpful!
You can try something like:
my_string = "ASDH12HK"
def decode_pattern(my_string):
    my_string = ''.join('9' if s.isdigit() else s for s in my_string)
    my_string = ''.join('S' if s.isalpha() else s for s in my_string)
    return my_string

decode_pattern(my_string)
Output:
'SSSS99SS'
You can apply this to the column in your dataframe as well as below:
import pandas as pd
df = pd.DataFrame(['ASDH12HK','GHST67KH','AGSH90IL','THKI86LK', 'SOMEPATTERN123'], columns=['id'])
df['pattern'] = df['id'].map(decode_pattern)
df
Output:
id pattern
0 ASDH12HK SSSS99SS
1 GHST67KH SSSS99SS
2 AGSH90IL SSSS99SS
3 THKI86LK SSSS99SS
4 SOMEPATTERN123 SSSSSSSSSSS999
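As an alternative sketch, the same S/9 encoding can be done with two chained vectorized regex replacements instead of a per-character loop (this variant is my own, not from the question):

```python
import pandas as pd

# Map every letter to 'S', then every digit to '9', column-wide.
df = pd.DataFrame(['ASDH12HK', 'SOMEPATTERN123'], columns=['id'])
df['pattern'] = (df['id']
                 .str.replace(r'[A-Za-z]', 'S', regex=True)
                 .str.replace(r'\d', '9', regex=True))
print(df)
```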
You can use a regular expression:
import re

st = "SSSS99SSSS"
a = re.match("[A-Za-z]{4}[0-9]{2}[A-Za-z]{4}", st)
It will return a match if the string starts with 4 letters followed by 2 digits and then 4 more letters, so you can use this on your df to filter the rows.
You can use the function findall() from the re module:
import re
text = "ASDH12HK,GHST67KH,AGSH90IL,THKI86LK"
result = re.findall("[A-Za-z]{4}[0-9]{2}[A-Za-z]{2}", text)
print(result)
I have a column with addresses, and sometimes it contains characters I want to remove: ' (apostrophe), " (double quote), and , (comma).
I would like to replace these characters with a space in one shot. I'm using pandas, and this is the code I have so far to replace one of them.
test['Address 1'].map(lambda x: x.replace(',', ''))
Is there a way to modify this code so I can replace all of these characters in one shot? Sorry for being a noob, but I would like to learn more about pandas and regex.
Your help will be appreciated!
You can use str.replace:
test['Address 1'] = test['Address 1'].str.replace(r"[\"\',]", '', regex=True)
Sample:
import pandas as pd
test = pd.DataFrame({'Address 1': ["'aaa",'sa,ss"']})
print (test)
Address 1
0 'aaa
1 sa,ss"
test['Address 1'] = test['Address 1'].str.replace(r"[\"\',]", '', regex=True)
print (test)
Address 1
0 aaa
1 sass
Here's the pandas solution:
To apply it to an entire DataFrame, use df.replace. Don't forget the \ character to escape the apostrophe.
Example:
import pandas as pd
df = #some dataframe
df.replace('\'','', regex=True, inplace=True)
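Since the question actually asked for spaces rather than deletion, here is a one-shot sketch that replaces all three characters with a space via a single character class (sample data borrowed from the earlier answer):

```python
import pandas as pd

# One character class covers apostrophe, double quote and comma,
# replaced with a space in a single pass.
test = pd.DataFrame({'Address 1': ["'aaa", 'sa,ss"']})
test['Address 1'] = test['Address 1'].str.replace(r'["\',]', ' ', regex=True)
print(test)
```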