I have a column in a pandas df that has this format: "1_A01_1_1_NA". I want to extract the text that is between the underscores, e.g. "A01", "1", "1" and "NA". I tried to use left, right and mid, but the problem is that at some point the column value changes into something like "11_B40_11_8_NA".
PS: the df has 7510 rows.
Use str.split:
import pandas as pd

df = pd.DataFrame({'Col1': ['1_A01_1_1_NA', '11_B40_11_8_NA']})
out = df['Col1'].str.split('_', expand=True)
Output:
>>> out
    0    1   2  3   4
0   1  A01   1  1  NA
1  11  B40  11  8  NA
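If you want to keep the split pieces alongside the original column, you can name them and join them back; the column names here are made up for illustration:
out.columns = ['prefix', 'well', 'num1', 'num2', 'suffix']  # hypothetical names
df = df.join(out)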
The function you are looking for is pandas.Series.str.split().
You should be able to take your nasty column as a Series and use the str.split("_", expand=True) method. The "expand" keyword is exactly what you need to make new columns out of the results (splitting on the "_" character, not at any specific index).
So, something like this:
First we need to create a little bit of nonsense like yours.
(Please forgive my messy and meandering code, I'm still new)
import pandas as pd
from random import choice
import string
# Creating Nonsense Data Frame
def make_nonsense_codes():
"""
Returns a string of nonsense like '11_B40_11_8_NA'
"""
nonsense = "_".join(
[
"".join(choice(string.digits) for i in range(2)),
"".join(
[choice(string.ascii_uppercase),
"".join([choice(string.digits) for i in range(2)])
]
),
"".join(choice(string.digits) for i in range(2)),
choice(string.digits),
"NA"
]
)
return nonsense
my_nonsense_df = pd.DataFrame(
{"Nonsense" : [make_nonsense_codes() for i in range(5)]}
)
print(my_nonsense_df)
#          Nonsense
# 0  25_S91_13_1_NA
# 1  80_O54_58_4_NA
# 2  01_N98_68_3_NA
# 3  88_B37_14_9_NA
# 4  62_N65_73_7_NA
Now we can select our "Nonsense" column, and use str.split().
# Wrangling the nonsense column with series.str.split()
wrangled_nonsense_df = my_nonsense_df["Nonsense"].str.split("_", expand = True)
print(wrangled_nonsense_df)
#     0    1   2  3   4
# 0  25  S91  13  1  NA
# 1  80  O54  58  4  NA
# 2  01  N98  68  3  NA
# 3  88  B37  14  9  NA
# 4  62  N65  73  7  NA
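If you only ever need specific pieces, a regular expression with str.extract is an alternative to splitting. This is a sketch, and the named groups are made-up labels you would adapt to your real codes:
import pandas as pd

df = pd.DataFrame({'Col1': ['1_A01_1_1_NA', '11_B40_11_8_NA']})
pattern = r'(?P<prefix>\d+)_(?P<well>[A-Z]\d+)_(?P<n1>\d+)_(?P<n2>\d+)_(?P<suffix>\w+)'
parts = df['Col1'].str.extract(pattern)  # one column per named group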
I'm a Data Science beginner and have the following task. I have a huge list of data and need to pick the rows containing Scope_list, but also the 3 rows that follow each match. The row Scope_list appears 1 to x times in the list, see below. Selecting the Scope_list rows is no problem for me, but getting the next 3 rows is.
df_new = df.loc[df['parts'] == 'Scope_list']
df_new
which gets all indexes and rows where the value in column parts is Scope_list:
         parts
0   Scope_list
10  Scope_list
18  Scope_list
but I need not only the row "Scope_list" but also the next 3 rows, like:
          parts
0    Scope_list
1   Light_front
2          Box1
3        Cable1
4    Scope_list
5   Light_front
6        Cable1
7     Connector
8    Scope_list
9    Light_left
10         Box2
11       Cable3
So that's a part of my df:
import pandas as pd
df = pd.DataFrame(['Scope_list', 'Light_front', 'Box1', 'Cable1', 'Connector', 'Switch', 'Info_list', 'can be used for 1', '456 not used','','Scope_list', 'Light_front', 'Cable1', 'Connector', 'Code_list', '345,456,567', '567', '', 'Scope_list', 'Light_left', 'Box2', 'Cable3', 'Switch3'], columns = ['parts'])
Maybe somebody can give me a hint; any help would be great. I use Jupyter Notebook and Python 3.
First get the indexes where 'Scope_list' is the value and then get the next 3 values:
scope_idx = df.loc[df.parts == 'Scope_list'].index
out = df.loc[[e for lst in [range(idx, idx + 4) for idx in
scope_idx] for e in lst]].copy()
out = out.reset_index(drop=True)
print(out)
          parts
0    Scope_list
1   Light_front
2          Box1
3        Cable1
4    Scope_list
5   Light_front
6        Cable1
7     Connector
8    Scope_list
9    Light_left
10         Box2
11       Cable3
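Equivalently, the nested comprehension can be written with numpy, which some find easier to read; a sketch, not part of the original answer:
import numpy as np

# scope_idx is [0, 10, 18] for the sample data, so this selects
# positions 0-3, 10-13 and 18-21
positions = np.concatenate([np.arange(i, i + 4) for i in scope_idx])
out = df.loc[positions].reset_index(drop=True)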
indexes = df[df['parts'].str.contains('Scope_list')].index
pd.concat([df.iloc[idx:idx + 4] for idx in indexes])  # each match plus the next 3 rows
I hope this works fine. You can also wrap this code in a function: just pass the keyword you want to search for, the column name, and how many rows to show after each match.
def func(column_name: str, keyword: str, show_items_after_keyword: int):
    indexes = df[df[column_name].str.contains(keyword)].index
    # keep each keyword row plus the requested number of rows after it
    result = pd.concat([df.iloc[idx:idx + show_items_after_keyword + 1] for idx in indexes])
    return result
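For example, with the df from the question, keeping each Scope_list row plus the three rows after it:
out = func('parts', 'Scope_list', 3)
print(out)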
For example, I have a pandas dataframe like this, where the Hash_Value column holds lists of hashes:
         Hash_Value   Name
0            [8a43]   abc1
1       [79e2,b183]   abc2
2            [f82a]   abc3
3            [b183]   abc4
4  [eaa7,5ea9,1cee]   abc5
5            [5ea9]   abc6
6       [1cee,26ea]   abc7
7            [79e2]   abc8
8            [8a43]   abc9
9            [26ea]  abc10
Ignoring the "Name" column, I want a dataframe that adds an "ID" column, labelling the hashes of the same group with their "ID", as in the expected outputs below.
Here, we traverse each row: we encounter "8a43" and assign ID 1 to it, and wherever we find the same hash value we assign ID 1. Then we move on to the next row and encounter 79e2 and b183. We then traverse all the rows, and wherever we find these values we store their ID as 2. Now the issue arises when we reach "abc7": it gets ID 4, as 1cee was previously encountered in "abc5". But I also want that in rows after the current one, wherever I find "26ea", ID 4 is assigned to those as well.
I hope all this makes sense. If not, feel free to reach out to me via comments or message; I will clear it up quickly.
Solution using dict
import numpy as np
import pandas as pd
hashvalues = list(df['Hash_Value'])
dic, i = {}, 1
id_list = []
for hashlist in hashvalues:
# convert to list
if isinstance(hashlist, str):
hashlist = hashlist.replace('[','').replace(']', '')
hashlist = hashlist.split(',')
# check if the hash is unknown
if hashlist[0] not in dic:
# Assign a new id
dic[hashlist[0]] = i
k = i
i += 1
else:
# if known use existing id
k = dic[hashlist[0]]
for h in hashlist[1:]:
# set id of the rest of the list hashes
# equal to the first hashes's id
dic[h] = k
id_list.append(k)
else:
id_list.append(np.nan)
df['ID'] = id_list
print(df)
               Hash   Name  ID
0            [8a43]   abc1   1
1       [79e2,b183]   abc2   2
2            [f82a]   abc3   3
3            [b183]   abc4   2
4  [eaa7,5ea9,1cee]   abc5   4
5            [5ea9]   abc6   4
6       [1cee,26ea]   abc7   4
7            [79e2]   abc8   2
8            [8a43]   abc9   1
9            [26ea]  abc10   4
Use networkx to build the groups of common values: collect the hashes of each row into a graph, take the connected components as a dictionary, then select the first value of each Hash_Value list by str and use Series.map:
#if necessary convert to lists
#df['Hash_Value'] = df['Hash_Value'].str.strip('[]').str.split(', ')
import networkx as nx
G=nx.Graph()
for l in df['Hash_Value']:
nx.add_path(G, l)
new = list(nx.connected_components(G))
print (new)
[{'8a43'}, {'79e2', 'b183'}, {'f82a'}, {'5ea9', '1cee', '26ea', 'eaa7'}]
mapped = {node: cid for cid, component in enumerate(new) for node in component}
df['ID'] = df['Hash_Value'].str[0].map(mapped) + 1
print (df)
           Hash_Value   Name  ID
0              [8a43]   abc1   1
1        [79e2, b183]   abc2   2
2              [f82a]   abc3   3
3              [b183]   abc4   2
4  [eaa7, 5ea9, 1cee]   abc5   4
5              [5ea9]   abc6   4
6        [1cee, 26ea]   abc7   4
7              [79e2]   abc8   2
8              [8a43]   abc9   1
9              [26ea]  abc10   4
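To see why add_path is enough here: each row's hashes become a chain of edges, so any two rows sharing a hash collapse into one connected component. A tiny standalone illustration with made-up hashes:
import networkx as nx

G = nx.Graph()
nx.add_path(G, ['a', 'b'])  # row 1: edge a-b
nx.add_path(G, ['b', 'c'])  # row 2 shares 'b', so a, b, c merge
print(list(nx.connected_components(G)))  # [{'a', 'b', 'c'}]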
When you print a pandas DataFrame, which calls DataFrame.to_string, it normally inserts a minimum of 2 spaces between the columns. For example, this code
import pandas as pd
df = pd.DataFrame( {
"c1" : ("a", "bb", "ccc", "dddd", "eeeeee"),
"c2" : (11, 22, 33, 44, 55),
"a3235235235": [1, 2, 3, 4, 5]
} )
print(df)
outputs
       c1  c2  a3235235235
0       a  11            1
1      bb  22            2
2     ccc  33            3
3    dddd  44            4
4  eeeeee  55            5
which has a minimum of 2 spaces between each column.
I am copying DataFrames printed on the console and pasting them into documents, and I have received feedback that they are hard to read: people would like more spaces between the columns.
Is there a standard way to do that?
I see no option in either DataFrame.to_string or pandas.set_option.
I have done a web search, and not found an answer. This question asks how to remove those 2 spaces, while this question asks why sometimes only 1 space is between columns instead of 2 (I also have seen this bug, hope someone answers that question).
My hack solution is to define a function that converts a DataFrame's columns to type str, and then prepends each element with a string of the specified number of spaces.
This code (added to the code above)
def prependSpacesToColumns(df: pd.DataFrame, n: int = 3):
spaces = ' ' * n
# ensure every column name has the leading spaces:
if isinstance(df.columns, pd.MultiIndex):
for i in range(df.columns.nlevels):
levelNew = [spaces + str(s) for s in df.columns.levels[i]]
df.columns.set_levels(levelNew, level = i, inplace = True)
else:
df.columns = spaces + df.columns
# ensure every element has the leading spaces:
df = df.astype(str)
df = spaces + df
return df
dfSp = prependSpacesToColumns(df, 3)
print(dfSp)
outputs
          c1     c2     a3235235235
0          a     11               1
1         bb     22               2
2        ccc     33               3
3       dddd     44               4
4     eeeeee     55               5
which is the desired effect.
But I think that pandas surely must have some simple built-in standard way to do this. Did I miss it?
Also, the solution needs to handle a DataFrame whose columns are a MultiIndex. To continue the code example, consider this modification:
idx = (("Outer", "Inner1"), ("Outer", "Inner2"), ("Outer", "a3235235235"))
df.columns = pd.MultiIndex.from_tuples(idx)
You can accomplish this through formatters; it takes a bit of code to create the dictionary {'col_name': format_string}. Find the max character length in each column or the length of the column header, whichever is greater, add some padding, and then pass a formatting string.
Use partial from functools, as the formatters expect a one-parameter function, yet we need to specify a different width for each column.
Sample Data
import pandas as pd
df = pd.DataFrame({"c1": ("a", "bb", "ccc", "dddd", 'eeeeee'),
"c2": (1, 22, 33, 44, 55),
"a3235235235": [1,2,3,4,5]})
Code
from functools import partial
# Formatting string
def get_fmt_str(x, fill):
return '{message: >{fill}}'.format(message=x, fill=fill)
# Max character length per column
s = df.astype(str).agg(lambda x: x.str.len()).max()
pad = 6 # How many spaces between
fmts = {}
for idx, c_len in s.items():  # iteritems() was removed in newer pandas
    # Deal with MultiIndex tuples or simple string labels.
if isinstance(idx, tuple):
lab_len = max([len(str(x)) for x in idx])
else:
lab_len = len(str(idx))
fill = max(lab_len, c_len) + pad - 1
fmts[idx] = partial(get_fmt_str, fill=fill)
print(df.to_string(formatters=fmts))
            c1       c2       a3235235235
0            a       11                 1
1           bb       22                 2
2          ccc       33                 3
3         dddd       44                 4
4       eeeeee       55                 5
# MultiIndex Output
       Outer
      Inner1   Inner2   a3235235235
0          a       11             1
1         bb       22             2
2        ccc       33             3
3       dddd       44             4
4     eeeeee       55             5
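For simple cases you may also get close enough with to_string's col_space argument, which sets a minimum width per column; it pads every column to a fixed width rather than adding a fixed gap, but needs no formatter machinery:
print(df.to_string(col_space=15))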
I have generated a dataframe containing all the possible two-lead combinations of electrocardiogram (ECG) leads with itertools, using the code below:
source = [ 'I-s', 'II-s', 'III-s', 'aVR-s', 'aVL-s', 'aVF-s', 'V1-s', 'V2-s', 'V3-s', 'V4-s', 'V5-s', 'V6-s', 'V1Long-s', 'IILong-s', 'V5Long-s', 'Information-s' ]
target = [ 'I-t', 'II-t', 'III-t', 'aVR-t', 'aVL-t', 'aVF-t', 'V1-t', 'V2-t', 'V3-t', 'V4-t', 'V5-t', 'V6-t', 'V1Long-t', 'IILong-t', 'V5Long-t', 'Information-t' ]
import pandas as pd
from itertools import product

test = pd.DataFrame(list(product(source, target)), columns=['source', 'target'])
The test dataframe contains 256 rows (16 × 16), covering all possible two-lead combinations.
The value for each combination is initialised to zero as follows:
test['value'] = 0
The test df looks like this:
   source  target  value
0     I-s     I-t      0
1     I-s    II-t      0
2     I-s   III-t      0
3     I-s   aVR-t      0
4     I-s   aVL-t      0
...
I have another dataframe called diagramDF that contains the combinations where the value column is non-zero. diagramDF is significantly smaller than the test dataframe:
      source    target  value
0        I-s      II-t    137
1       II-s       I-t      3
2       II-s     III-t     81
3       II-s  IILong-t     13
4       II-s      V1-t     21
5      III-s      II-t      3
6      III-s     aVF-t     19
7   IILong-s      II-t     13
8   IILong-s  V1Long-t    353
9       V1-s     aVL-t     11
10  V1Long-s  IILong-t    175
11  V1Long-s      V3-t      4
12  V1Long-s     aVF-t      4
13      V2-s      V3-t      8
14      V3-s      V2-t      6
15      V3-s      V6-t      2
16      V5-s     aVR-t      5
17      V6-s     III-t      4
18     aVF-s     III-t     79
19     aVF-s  V1Long-t    235
20     aVL-s       I-t      1
21     aVL-s     aVF-t     16
22     aVR-s     aVL-t      1
Note that the first two columns, source and target, use the same notation in both dataframes.
I have tried to replace the zero values of the test dataframe with the non-zero values of diagramDF using merge, like below:
df = pd.merge(test, diagramDF, how='left', on=['source', 'target'])
However, I get an error informing me that:
ValueError: The column label 'source' is not unique. For a
multi-index, the label must be a tuple with elements corresponding to
each level
Is there something that I am getting wrong? Is there a more efficient and fast way to do this?
Might help; note that merge does not accept on together with left_index/right_index, so merge on the key columns only:
pd.merge(test, diagramDF, how='left', on=['source', 'target'])
Check this:
test = test.reset_index()
diagramDF = diagramDF.reset_index()
new = pd.merge(test, diagramDF, how='left', on=['source', 'target'])
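One follow-up point, as an assumption about the next step rather than part of the answers above: both frames carry a value column, so a successful merge yields suffixed columns that still need combining:
new = pd.merge(test, diagramDF, how='left', on=['source', 'target'],
               suffixes=('_test', '_diagram'))
# prefer diagramDF's value where present, else keep test's zero
new['value'] = new['value_diagram'].fillna(new['value_test']).astype(int)
new = new.drop(columns=['value_test', 'value_diagram'])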
I was hoping you would be able to help me solve a small problem.
I am using a small device that prints out two properties that I save to a file. The device rasters in X and Y direction to form a grid. I am interested in plotting the relative intensity of these two properties as a function of the X and Y dimensions. I record the data in 4 columns that are comma separated (X, Y, property 1, property 2).
The grid is examined in lines, so for each Y value it will move from X1 to X2, which are several millimeters apart. Then it moves to the next line and starts over.
I am able to process the data in Python with pandas/numpy, but it doesn't work too well when there are any missing rows (which unfortunately does happen).
I have attached a sample of the output (and annotated the problems):
44,11,500,1
45,11,120,2
46,11,320,3
47,11,700,4
New << used as my Y axis separator
44,12,50,5
45,12,100,6
46,12,1500,7
47,12,2500,8
Sometimes, however, a line or a few will be missing, making it impossible to process and plot. Currently I have not been able to fix it automatically and have to do it manually. The bad output looks like this:
44,11,500,1
45,11,120,2
46,11,320,3
47,11,700,4
New << used as my Y axis separator
45,12,100,5 << missing 44,12...
46,12,1500,6
47,12,2500,7
I know the number of lines I expect since I know my range of X and Y.
What would be the best way to deal with this? Currently I manually enter the missing X and Y values and populate property 1 and 2 with values of 0. This can be time consuming and I would like to automate it. I have two questions.
Question 1: How can I automatically fill in my missing data with the corresponding values of X and Y and two zeros? This could be obtained from a pre-generated array of X and Y values that correspond to the experimental range.
Question 2: Is there a better way to split the file into separate arrays for plotting (rather than using the 'New' line?) For instance, by having a 'if' function that will output each line between X(start) and X(end) to a separate array? I've tried doing that but with no success.
I've attached my current (crude) code:
import numpy as np
import pandas as pd

df = pd.read_csv('FileName.csv', delimiter=',', skiprows=0)
rows = [-1] + np.where(df['X']=='New')[0].tolist() + [len(df.index)]
dff = {}
for i, r in enumerate(rows[:-1]):
dff[i] = df[r+1: rows[i+1]]
maxY = len(dff)
data = []
data2 = []
for yaxes in range(0, maxY):
    data2.append(dff[yaxes].iloc[:, 2])  # .ix is deprecated; use .iloc
<data2 is then used for plotting using matplotlib>
To answer my Question 1, I was thinking about using the 'reindex' and 'reset_index' functions, however I haven't managed to make them work.
I would appreciate any suggestions.
Does this meet what you want?
Q1: fill X using reindex, and the other columns using fillna.
Q2: passing each separated chunk to read_csv via StringIO is easier (shown here for Python 3).
import numpy as np
import pandas as pd
from io import StringIO

# read file and split the input
with open('temp.csv') as f:
    chunks = f.read().split('New')

# read csv as separated dataframes, using first column as index
dfs = [pd.read_csv(StringIO(chunk), header=None, index_col=0) for chunk in chunks]
def pad(df):
# reindex, you should know the range of x
df = df.reindex(np.arange(44, 48))
# pad y from forward / backward, assuming y should have the single value
    df[1] = df[1].bfill()
    df[1] = df[1].ffill()
# padding others
df = df.fillna(0)
# revert index to values
return df.reset_index(drop=False)
dfs = [pad(df) for df in dfs]
dfs[0]
#     0   1    2  3
# 0  44  11  500  1
# 1  45  11  120  2
# 2  46  11  320  3
# 3  47  11  700  4
# dfs[1]
#     0   1     2  3
# 0  44  12     0  0
# 1  45  12   100  5
# 2  46  12  1500  6
# 3  47  12  2500  7
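For Question 2, dfs is already one dataframe per scan line, so pulling a property out for plotting is direct; column 2 holds property 1 after the reset_index above:
data2 = [d[2] for d in dfs]  # property 1 for each Y line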
First Question
I've included print statements inside the function to explain how it works.
In [89]:
def replace_missing(df , Ids ):
    # check what are the missing values
missing = np.setdiff1d(Ids , df[0])
if len(missing) > 0 :
missing_df = pd.DataFrame(data = np.zeros( (len(missing) , 4 )))
#print('---missing df---')
#print(missing_df)
missing_df[0] = missing
#print('---missing df---')
#print(missing_df)
missing_df[1].replace(0 , df[1].iloc[0] , inplace = True)
#print('---missing df---')
#print(missing_df)
df = pd.concat([df , missing_df])
#print('---final df---')
#print(df)
return df
In [91]:
Ids = np.arange(44,48)
final_df = df1.groupby(df1[1] , as_index = False).apply(replace_missing , Ids).reset_index(drop = True)
final_df
Out[91]:
    0   1     2  3
0  44  11   500  1
1  45  11   120  2
2  46  11   320  3
3  47  11   700  4
4  45  12   100  5
5  46  12  1500  6
6  47  12  2500  7
7  44  12     0  0
Second question
In [92]:
group = final_df.groupby(final_df[1])
In [99]:
separate = [group.get_group(key) for key in group.groups.keys()]
separate[0]
Out[104]:
    0   1    2  3
0  44  11  500  1
1  45  11  120  2
2  46  11  320  3
3  47  11  700  4
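Either way, the per-line groups plot directly with matplotlib; a sketch, where columns 0, 1 and 2 are X, Y and property 1 as above:
import matplotlib.pyplot as plt

for g in separate:
    g = g.sort_values(0)  # restore X order (appended rows land at the end)
    plt.plot(g[0], g[2], label='Y = {}'.format(g[1].iloc[0]))
plt.xlabel('X')
plt.ylabel('Property 1')
plt.legend()
plt.show()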