Remove grave accent from IDs - python

I have an ID column with a grave accent in it, like `1234ABC40, and I want to remove just that character from the column while keeping the DataFrame form.
I tried this on the column only. The file is read into a DataFrame x with multiple columns; id is the column I want to fix.
import pandas as pd
import unidecode

x = pd.read_csv(r'C:\filename.csv', index_col=False)
id = str(x['id'])
id2 = unidecode.unidecode(id)
id3 = id2.replace('`', '')
This converts the column to a str, but I want the cleaned values back in the DataFrame form.

DataFrames have their own replace() method. Note that for partial (substring) replacements you must pass regex=True:
import pandas as pd
d = {'id': ["12`3", "32`1"], 'id2': ["004`", "9`99"]}
df = pd.DataFrame(data=d)
df["id"] = df["id"].replace('`','', regex=True)
print(df)
    id   id2
0  123  004`
1  321  9`99
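To get the cleaned values back into the DataFrame rather than a single string, apply the replacement to the column itself; a minimal sketch, assuming the CSV from the question is read into x:
import pandas as pd

x = pd.read_csv(r'C:\filename.csv', index_col=False)             # path from the question
x['id'] = x['id'].astype(str).str.replace('`', '', regex=False)  # x stays a DataFrame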


How to replace last three characters of a string in a column if it starts with character

I have a pandas dataframe of postcodes which have been concatenated with the two-letter country code. Some of these are Brazilian postcodes and I want to replace the last three characters of any postcode which starts with 'BR' with '000'.
import pandas as pd
data = ['BR86037-890', 'GBBB7', 'BR86071-570','BR86200-000','BR86026-480','BR86082-701', 'GBCW9', 'NO3140']
df = pd.DataFrame(data, columns=['postcode'])
I have tried the below, but it is not changing any of the postcodes:
if df['postcode'].str.startswith('BR').all():
    df["postcode"] = df["postcode"].str.replace(r'.{3}$', '000')
Use str.replace with a capturing group:
df['postcode'] = df['postcode'].str.replace(r'(BR.*)...', r'\g<1>000', regex=True)
# or, more generic
df['postcode'] = df['postcode'].str.replace(r'(BR.*).{3}', r'\g<1>'+'0'*3, regex=True)
Output:
      postcode
0  BR86037-000
1        GBBB7
2  BR86071-000
3  BR86200-000
4  BR86026-000
5  BR86082-000
6        GBCW9
7       NO3140
The code is not working because df['postcode'].str.startswith('BR').all() returns a single boolean indicating whether all postcodes in the column start with 'BR', so the replacement either runs on every row or not at all.
Try this:
import pandas as pd

data = ['BR86037-890', 'GBBB7', 'BR86071-570', 'BR86200-000', 'BR86026-480', 'BR86082-701', 'GBCW9', 'NO3140']
df = pd.DataFrame(data, columns=['postcode'])
mask = df['postcode'].str.startswith('BR')
df.loc[mask, 'postcode'] = df.loc[mask, 'postcode'].str.replace(r'.{3}$', '000', regex=True)
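For reference, the same row-wise fix can be written with numpy.where (a sketch; regex=True is assumed for pattern replacement on current pandas):
import numpy as np

df['postcode'] = np.where(df['postcode'].str.startswith('BR'),
                          df['postcode'].str.replace(r'.{3}$', '000', regex=True),
                          df['postcode'])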

KEGG Drug database Python script

I have a drug database saved in a SINGLE column of a CSV file that I can read with pandas. The file contains 750000 rows, and its records are divided by "///"; the column also ends with "///". Every row seems to end with ";".
I would like to split it into multiple columns in order to create a structured database. Capitalized keywords (drug information) like "ENTRY", "NAME", etc. would become the headers of these new columns.
So it has some structure, although the records can contain a different number and kind of fields, meaning some records will just have NaN in some cells. I have never worked with such an SQL-like format, and it is difficult to reproduce it in pandas code, too. Please see the screenshots for more information.
An example of desired output would look like this:
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "ENTRY": ["001", "002", "003"],
    "NAME": ["water", "ibuprofen", "paralen"],
    "FORMULA": ["H2O", "C5H16O85", "C14H24O8"],
    "COMPONENT": [np.nan, np.nan, "paracetamol"]})
I am guessing there will be .split() involved, based on the CAPITALIZED words? A Python 3 solution would be appreciated; it could help a lot of people. Thanks!
I helped as much as I could:
import pandas as pd
cols = ['ENTRY', 'NAME', 'FORMULA', 'COMPONENT']
# We create an additional dataframe.
dfi = pd.DataFrame()
# We read the file, get two columns and leave only the necessary lines.
df = pd.read_fwf(r'drug', header=None, names=['Key', 'Value'])
df = df[df['Key'].isin(cols)]
# To "flip" the dataframe, we first prepare an additional column
# with indexing by groups from one 'ENTRY' row to another.
dfi['Key1'] = dfi['Key'] = df[(df['Key'] == 'ENTRY')].index
dfi = dfi.set_index('Key1')
df = df.join(dfi, lsuffix='_caller', rsuffix='_other')
df.fillna(method="ffill", inplace=True)
df = df.astype({"Key_other": "Int64"})
# Change the shape of the table.
df = df.pivot(index='Key_other', columns='Key_caller', values='Value')
df = df.reindex(columns=cols)
# We clean up the resulting dataframe a little.
df['ENTRY'] = df['ENTRY'].str.split(r'\s+', expand=True)[0]
df.reset_index(drop=True, inplace=True)
pd.set_option('display.max_columns', 10)
Small code refactoring:
import pandas as pd
cols = ['ENTRY', 'NAME', 'FORMULA', 'COMPONENT']
# We read the file, get two columns and leave only the necessary lines.
df = pd.read_fwf(r'C:\Users\ф\drug\drug', header=None, names=['Key', 'Value'])
df = df[df['Key'].isin(cols)]
# To "flip" the dataframe, we first prepare an additional column
# with indexing by groups from one 'ENTRY' row to another.
df['Key_other'] = None
df.loc[(df['Key'] == 'ENTRY'), 'Key_other'] = df[(df['Key'] == 'ENTRY')].index
df['Key_other'].fillna(method="ffill", inplace=True)
# Change the shape of the table.
df = df.pivot(index='Key_other', columns='Key', values='Value')
df = df.reindex(columns=cols)
# We clean up the resulting dataframe a little.
df['ENTRY'] = df['ENTRY'].str.split(r'\s+', expand=True)[0]
df['NAME'] = df['NAME'].str.split(r'\(', expand=True)[0]
df.reset_index(drop=True, inplace=True)
pd.set_option('display.max_columns', 10)
print(df)
Key     ENTRY                                       NAME             FORMULA COMPONENT
0      D00001                                      Water                 H2O       NaN
1      D00002                                     Nadide       C21H28N7O14P2       NaN
2      D00003                                     Oxygen                  O2       NaN
3      D00004                             Carbon dioxide                 CO2       NaN
4      D00005                Flavin adenine dinucleotide       C27H33N9O15P2       NaN
...       ...                                        ...                 ...       ...
11983  D12452  Fostroxacitabine bralpamide hydrochloride  C22H30BrN4O8P. HCl       NaN
11984  D12453                                Guretolimod        C24H34F3N5O4       NaN
11985  D12454                               Icenticaftor        C12H13F6N3O3       NaN
11986  D12455                             Lirafugratinib         C28H24FN7O2       NaN
11987  D12456               Lirafugratinib hydrochloride    C28H24FN7O2. HCl       NaN

[11988 rows x 4 columns]
It still needs a little polishing, which I leave to you.
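For reference, the KEGG flat file itself uses a fixed-width keyword field and terminates each record with "///", so the rows can also be built directly in plain Python; a sketch, where the file path and the 12-character keyword field width are assumptions about the source file:
import pandas as pd

cols = ['ENTRY', 'NAME', 'FORMULA', 'COMPONENT']
records = []
record, key = {}, None

with open('drug') as fh:                       # hypothetical path to the KEGG flat file
    for line in fh:
        if line.startswith('///'):             # end of one drug record
            if record:
                records.append(record)
            record, key = {}, None
            continue
        if line[:12].strip():                  # a new keyword starts within the first 12 characters
            key = line[:12].strip()
        if key in cols:
            value = line[12:].strip()
            record[key] = (record[key] + ' ' + value) if key in record else value

df = pd.DataFrame(records, columns=cols)
df['ENTRY'] = df['ENTRY'].str.split().str[0]   # keep only the ID part of the ENTRY line
print(df.head())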

Pandas read_csv for a no quote file

I'm trying to read a file that doesn't have any quotes, which causes rows to have an inconsistent number of fields.
Data looks as follows:
col_a, col_b
abc, inc., 5
xyz corb, 10
Since there are no quotes around "abc, inc.", this is causing the first row to get split into 3 values, but it should actually be just 2 values.
This column is not necessarily in the first position, and there can be another bad column like this. The data has around 250 columns.
I'm reading this using pd.read_csv, how can this be resolved?
Thanks!
It's not a CSV, but since there is only one column with the errant commas you can process it with the csv module and fix the slice that holds too many column values. When a row has too many cells, assume the extras come from the unescaped comma.
import pandas as pd
import csv

def split_badrows(fileobj, bad_col, total_cols):
    """Iterate rows, collapsing extra columns at bad_col"""
    for row in csv.reader(fileobj):
        row = [cell.strip() for cell in row]
        extras = len(row) - total_cols
        if extras > 0:
            # collapse the slice at the troubled column into a single value
            extras += 1  # python slice doesn't include right endpoint
            row[bad_col] = ", ".join(row[bad_col:bad_col + extras])
            del row[bad_col + 1:bad_col + extras]
        yield row

def df_from_badtext(fileobj, bad_col):
    """Make pandas.DataFrame from badly formatted text"""
    columns = [cell.strip() for cell in next(fileobj).split(",")]
    total_cols = len(columns)
    return pd.DataFrame(split_badrows(fileobj, bad_col, total_cols),
                        columns=columns)

# test
open("testme.txt", "w").write("""col_a, col_b
abc, inc., 5
xyz corb, 10""")
df = df_from_badtext(open("testme.txt"), bad_col=0)
print(df)
Split the data into a list, then transform it into a dataframe:
csv = '''col_a, col_b
abc, inc., 5
xyz corb, 10'''+'\n'
import re
import pandas as pd
reArr = re.findall('(.*),([^,]+)\n',csv)
df=pd.DataFrame(reArr[1:],columns=reArr[0])
print(df)
       col_a  col_b
0  abc, inc.      5
1   xyz corb     10
EDIT:
Thanks to tdelaney's comment below, see if this works:
pd.read_csv('foo.csv',delimiter=",(?!( [\w\d]*).,)").dropna(axis=1)
OLD:
Using delimiter=",(?!.*,)" in read_csv seems to solve this for me.
EDIT (after updated question with an additional column):
Solution 1:
You can create a function that takes the bad column as a parameter and uses split and concat to correct the dataframe depending on that bad column. Note that the bad_col parameter in my function is 1-based (1, 2, 3, etc. rather than 0, 1, 2, etc.):
import pandas as pd
import numpy as np
from io import StringIO

data = StringIO('''
col, col_a, col_b
000, abc, inc., 5
111, xyz corb, 10
''')
df = pd.read_csv(data, sep="|")

def fix_csv(df, bad_col):
    cols = df.columns.str.split(', ')[0]
    x = len(cols) - bad_col
    tmp = df.iloc[:, 0].str.split(', ', expand=True, n=x)
    df = pd.concat([tmp.iloc[:, 0],
                    tmp.iloc[:, -1].str.rsplit(', ', expand=True, n=x)],
                   axis=1)
    df.columns = cols
    return df

fix_csv(df, bad_col=2)
Solution 2 (this is if you have issues in multiple columns and you need to use more brute force):
From the comments, it sounds like there could be multiple affected columns, since you mentioned only one "so far".
As such, this might be a bit of a project to clean up the data. The following code can give you an idea of how to do that. The bottom line is that you can create two different dataframes: 1) the first dataframe holds the rows with the minimum number of commas (i.e. the rows without any issues); 2) the other dataframe holds all of the problematic rows. I've shown how you can clean the data to get the correct number of columns, then change the data back and concat the two dataframes.
import pandas as pd
import numpy as np
from io import StringIO
data = StringIO('''
col, col_a, col_b
000, abc, inc., 5
111, xyz corb, 10
''')
df = pd.read_csv(data, sep="|")
cols = df.columns.str.split(', ')[0]
s = df.iloc[:,0].str.count(',')
df1 = df.copy()[s.eq(s.min())]
df1 = df1.iloc[:,0].str.split(', ', expand=True)
df1.columns = cols
df2 = df.copy()[s.gt(s.min())]
#inspect this dataframe manually to see how many rows affected, which columns, etc.
#cleanup df2 with some .replace so all equal commas
original = [', inc.', ', corp.']
temp = [' inc.', ' corp.']
df2.iloc[:,0] = df2.iloc[:,0].replace(original, temp, regex=True)
df2 = df2.iloc[:,0].str.split(', ', expand=True)
df2.columns = cols
#cleanup df2 by changing back to original values
df2['col_a'] = df2['col_a'].replace(temp, original, regex=True) # you can do this with other columns as well
df3 = pd.concat([df1, df2]).sort_index()
df3
Out[1]:
   col      col_a col_b
0  000  abc, inc.     5
1  111   xyz corb    10
Solution 3: previous solution (for the original question, when the problem was only in the first column; kept for reference)
You can read in with sep="|" as that | character is not in your .csv, so it reads all of the data into one column.
The main assumption of my solution is that the problematic column is the first one. I use rsplit(', ') and limit the number of splits to the total number of columns minus 1 (with the example data, 2 - 1 = 1). Hopefully this works with your actual data, or at least gives you an idea. If your data is separated by ',' rather than ', ', adjust the split strings accordingly.
import pandas as pd
import numpy as np
from io import StringIO
data = StringIO('''
col_a, col_b
abc, inc., 5
xyz corb, 10
''')
df = pd.read_csv(data, sep="|")
cols = df.columns.str.split(', ')[0]
x = len(cols) - 1
df = df.iloc[:,0].str.rsplit(', ', expand=True, n=x)
df.columns = cols
df
Out[1]:
       col_a col_b
0  abc, inc.     5
1   xyz corb    10
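On recent pandas (1.4 or newer), a similar repair can also be done at read time through the python engine's on_bad_lines callable; this is a sketch, assuming the errant commas sit in the first column and the expected column count is known up front (it reuses the testme.txt file written in the first answer):
import pandas as pd

EXPECTED_COLS = 2   # the two-column example from the question

def fix_bad_line(fields):
    # Called only for rows that split into too many fields;
    # merge the extras back into the first column.
    fields = [f.strip() for f in fields]
    extras = len(fields) - EXPECTED_COLS
    return [", ".join(fields[:extras + 1])] + fields[extras + 1:]

df = pd.read_csv("testme.txt", engine="python", skipinitialspace=True,
                 on_bad_lines=fix_bad_line)
print(df)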

How to match values of a dataframe with another dataframe in Python? [duplicate]

I am merging two CSVs (data frames) using the code below:
import pandas as pd
a = pd.read_csv(file1,dtype={'student_id': str})
df = pd.read_csv(file2)
c=pd.merge(a,df,on='test_id',how='left')
c.to_csv('test1.csv', index=False)
I have the following CSV files
file1:
test_id, student_id
1, 01990
2, 02300
3, 05555
file2:
test_id, result
1, pass
3, fail
after merge
test_id, student_id , result
1, 1990, pass
2, 2300,
3, 5555, fail
If you notice, student_id has a 0 at the beginning and is supposed to be treated as text, but after merging and using the to_csv function it is converted to numeric and the leading 0 is removed.
How can I keep the column as "text" even after to_csv?
I think it's the to_csv function that saves it back as numeric.
I added dtype={'student_id': str} while reading the CSV, but when saving with to_csv it is converted to numeric again.
Caveat: please use merge or join. This answer is provided to give perspective on the flexibility pandas gives you and how many different ways there are to answer the same question.
a = pd.read_csv('file1.csv', converters=dict(student_id=str), skipinitialspace=True)
df = pd.read_csv('file2.csv')
results = pd.concat(
    [d.set_index('test_id') for d in [a, df]],
    axis=1, join='outer'
).reset_index()
It's not dropping the leading zero on the merge, it's dropping it on the read_csv. You can fix this by specifying that the column is a string at import time:
a = pd.read_csv('file1.csv', dtype={'student_id': str}, skipinitialspace=True)
The important part is the dtype parameter: you are telling pandas to import this column as a string. skipinitialspace is set to True because the column headers are defined with spaces after the commas, so we strip them:
test_id, student_id
        ^ the student_id column starts here, at the space
The final code looks like this:
a = pd.read_csv('file1.csv', dtype={'student_id': str}, skipinitialspace=True)
df = pd.read_csv('file2.csv')
results = a.merge(df, how='left', on='test_id')
With the results dataframe looking like this:
   test_id student_id result
0        1      01990   pass
1        2      02300    NaN
2        3      05555   fail
Then when you run to_csv your result should be:
test_id,student_id, result
1,01990, pass
2,02300,
3,05555, fail
Solution with join; first read_csv with the dtype parameter to convert student_id to string, and remove whitespace with skipinitialspace:
df1 = pd.read_csv(file1, dtype={'student_id': str}, skipinitialspace=True)
df2 = pd.read_csv(file2, skipinitialspace=True)
df = df1.join(df2.set_index('test_id'), on='test_id')
print (df)
   test_id student_id result
0        1      01990   pass
1        2      02300    NaN
2        3      05555   fail
a = pd.read_csv(file1, dtype={'test_id': object})
b = pd.read_csv(file2, dtype={'test_id': object})

In [28]: pd.merge(a, b, on='test_id', how='left')
Out[28]:
  test_id student_id result
0      01       1990   pass
1      02       2300    NaN
2     003       5555   fail
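To see the whole round trip in one place, here is a small sketch using StringIO stand-ins for file1 and file2 (the file contents are taken from the question):
from io import StringIO
import pandas as pd

file1 = StringIO("test_id,student_id\n1,01990\n2,02300\n3,05555\n")
file2 = StringIO("test_id,result\n1,pass\n3,fail\n")

a = pd.read_csv(file1, dtype={'student_id': str})
df = pd.read_csv(file2)

c = pd.merge(a, df, on='test_id', how='left')
print(c.dtypes)                       # student_id stays object (string)
c.to_csv('test1.csv', index=False)    # leading zeros are written as-is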

How can I split a column into 2 in the correct way?

I am web-scraping tables from a website and writing them to an Excel file.
My goal is to split a column into 2 columns in the correct way.
The column I want to split: "FLIGHT".
I want this form:
First example: KL744 --> KL and 0744
Second example: BE1013 --> BE and 1013
So I need to separate the FIRST 2 characters (into the first column) and then the remaining characters, which can be 1-4 characters long. If there are 4, that's fine and I keep them; if 3, I want to put a 0 in front; if 2, I want to put 00 in front (so my goal is to get 4 characters/digits in the second column).
How Can I do this?
Here is my relevant code, which already contains some formatting:
df2 = pd.DataFrame(datatable, columns=cols)
df2["UPLOAD_TIME"] = datetime.now()
mask = np.column_stack([df2[col].astype(str).str.contains(r"Scheduled", na=True) for col in df2])
df3 = df2.loc[~mask.any(axis=1)]
if os.path.isfile("output.csv"):
    df1 = pd.read_csv("output.csv", sep=";")
    df4 = pd.concat([df1, df3])
    df4.to_csv("output.csv", index=False, sep=";")
else:
    df3.to_csv("output.csv", index=False, sep=";")
Here is a screenshot of my table in Excel:
You can use string indexing with str together with zfill:
df = pd.DataFrame({'FLIGHT':['KL744','BE1013']})
df['a'] = df['FLIGHT'].str[:2]
df['b'] = df['FLIGHT'].str[2:].str.zfill(4)
print (df)
   FLIGHT   a     b
0   KL744  KL  0744
1  BE1013  BE  1013
I believe your code needs:
df2 = pd.DataFrame(datatable,columns = cols)
df2['a'] = df2['FLIGHT'].str[:2]
df2['b'] = df2['FLIGHT'].str[2:].str.zfill(4)
df2["UPLOAD_TIME"] = datetime.now()
...
...
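An alternative sketch with str.extract, in case the carrier code is not always exactly two letters (the pattern below is an assumption about the data):
import pandas as pd

df = pd.DataFrame({'FLIGHT': ['KL744', 'BE1013']})
# capture leading letters as the carrier code and trailing digits as the flight number
parts = df['FLIGHT'].str.extract(r'(?P<a>[A-Za-z]+)(?P<b>\d+)')
df['a'] = parts['a']
df['b'] = parts['b'].str.zfill(4)
print(df)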
