Don't understand output of Pandas.Series.from_csv()

I have three txt files with data, four columns of numbers each. I need to load them into one data frame of dimension [3, n], where n is the length of a column. Because I need only one column from each file, I decided to use the Series.from_csv() function, but I cannot comprehend the output.
Here is the code I wrote:
names = glob.glob("*.txt")
for i in names:
    rank = pd.Series.from_csv(i, sep=" ", index_col=3)
    print(rank)
This prints one column of my data (that's good), but also a second column filled entirely with zeros, like this:
0.039157 0
0.039001 0
0.038524 0
0.038579 0
0.038385 0
What I find more bizarre is that when I use
rank = pd.Series.from_csv(i, sep=" ", index_col=3).values
I get this:
[0 0 0 ..., 0 0 0]
[0 0 0 ..., 0 0 0]
[0 0 0 ..., 0 0 0]
So does that mean these zeros were the values read from the files? Then what is the first column from before? I have tried many methods, but I have failed to understand this.

The left-hand column in your printout is the Series index (built from column 3 because of index_col=3); the zeros on the right are the actual values, which is why .values shows nothing but zeros. I think you can use the more common read_csv with delim_whitespace=True and usecols for filtering the column; first append all DataFrames to the list dfs, then use concat:
dfs = []
names = glob.glob("*.txt")
for i in names:
    rank = pd.read_csv(i, delim_whitespace=True, usecols=[3])
    print(rank)
    dfs.append(rank)
df = pd.concat(dfs, axis=1)
Or with sep='\s+', where the separator is arbitrary whitespace:
dfs = []
names = glob.glob("*.txt")
for i in names:
    rank = pd.read_csv(i, sep='\s+', usecols=[3])
    print(rank)
    dfs.append(rank)
df = pd.concat(dfs, axis=1)
You can also use a list comprehension:
files = glob.glob("*.txt")
dfs = [pd.read_csv(fp, delim_whitespace=True, usecols=[3]) for fp in files]
df = pd.concat(dfs, axis=1)
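If the goal is the [3, n] shape mentioned in the question (one row per file), a minimal sketch, assuming whitespace-separated files with no header row, is to label each selected column with its source file and then transpose:
import glob
import pandas as pd

files = glob.glob("*.txt")
series = []
for fp in files:
    # usecols=[3] keeps only the fourth column; header=None treats every row as data
    col = pd.read_csv(fp, delim_whitespace=True, usecols=[3], header=None)[3]
    col.name = fp  # label the column with its source file
    series.append(col)

df = pd.concat(series, axis=1).T  # shape (3, n): one row per file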

Related

Adding Column to Pandas Dataframes from Python Generator

I'm reading a number of csv files into Python using glob matching, and I would like to add the filename as a column in each of the dataframes. I'm currently matching on a pattern and then using a generator to read in the files, like so:
base_list_of_files = glob.glob(matching_pattern)
loaded_csv_data_frames = (pd.read_csv(csv, encoding='latin-1') for csv in base_list_of_files)
for idx, df in enumerate(loaded_csv_data_frames):
    df['file_origin'] = base_list_of_files[idx]
combined_data = pd.concat(loaded_csv_data_frames)
However, I get the error ValueError: No objects to concatenate when I come to do the concatenation. Why does adding the column iteratively break the list of dataframes?
Generators can only be iterated over once; at the end they raise a StopIteration exception, which is automatically handled by the for loop. If you try to consume them again they just raise StopIteration immediately, as demonstrated here:
def consume(gen):
    while True:
        try:
            print(next(gen))
        except StopIteration:
            print("Stop iteration")
            break
>>> gen = (i for i in range(2))
>>> consume(gen)
0
1
Stop iteration
>>> consume(gen)
Stop iteration
That's why you get the ValueError when you try to use loaded_csv_data_frames for a second time.
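One direct fix, sketched from the code in the question, is to materialise the generator into a list, so the frames survive both the modification loop and the concatenation:
loaded_csv_data_frames = [pd.read_csv(csv, encoding='latin-1') for csv in base_list_of_files]
for fp, df in zip(base_list_of_files, loaded_csv_data_frames):
    df['file_origin'] = fp  # each frame is modified in place
combined_data = pd.concat(loaded_csv_data_frames)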
I cannot replicate your example, but here is something that should be similar enough:
df1 = pd.DataFrame(0, columns=["a", "b"], index=[0, 1])
df2 = pd.DataFrame(1, columns=["a", "b"], index=[0, 1])
loaded_csv_data_frames = iter((df1, df2)) # Pretend that these are read from a csv file
base_list_of_files = iter(("df1.csv", "df2.csv")) # Pretend these file names come from glob
You can add the file of origin as a key when you concatenate. Add names too to give titles to your index levels.
>>> df = pd.concat(loaded_csv_data_frames, keys=base_list_of_files, names=["file_origin", "index"])
>>> df
                   a  b
file_origin index
df1.csv     0      0  0
            1      0  0
df2.csv     0      1  1
            1      1  1
If you want file_origin to be one of your columns, just reset the first level of the index.
>>> df.reset_index("file_origin")
      file_origin  a  b
index
0         df1.csv  0  0
1         df1.csv  0  0
0         df2.csv  1  1
1         df2.csv  1  1

how to extract all repeating patterns from a string into a dataframe

I have a dataframe with the equipment codes of certain trucks; this is a similar list of lists of the cells:
x = [['A0B','A1C','A1Z','A2E','A5C','B1B','B1F','B1H','B2A'],
     ['A0A','A0B','A1C','A1Z','A2I','A5L','B1B','B1F','B1H','B2A','B2X','B3H','B4L','B5E','B5J','C0G','C1W','C5B','C5D'],
     ['A0B','A1C','A1Z','A2E','A5C','B1B','B1F','B1H','B2A','B2X','B4L','B5C','B5I','C0A','C1J','C5B','C5D','C6C','C6J','C6Q']]
I want to extract all the values that match "B", for example ("B1B,B1F,B1H"); ("B1B,B1F,B1H,B2A,B2X,B3H"); ("B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I"). I tried this code, but every line has a different length:
sublista = ['B1B','B1F','B1H','B2A','B2X','B4L','B5C','B5I']
df3 = pd.DataFrame(columns=['FIN', 'Equipmentcodes', 'AQUATARDER', 'CAJA'])
for elemento in sublista:
    df_aux = df2[df2['Equipmentcodes'].str.contains(elemento, case=False)]
    df_aux['CAJA'] = elemento
    df3 = df3.append(df_aux, ignore_index=True)
Assuming your column contains strings, you could use a regex:
df['selected'] = (df['code']
                  .str.extractall(r'\b(B[^,]*)\b')[0]
                  .groupby(level=0).apply(','.join)
                  )
example input:
x = ['A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A',
     'A0A,A0B,A1C,A1Z,A2I,A5L,B1B,B1F,B1H,B2A,B2X,B3H,B4L,B5E,B5J,C0G,C1W,C5B,C5D',
     'A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I,C0A,C1J,C5B,C5D,C6C,C6J,C6Q']
df = pd.DataFrame({'code': x})
output:
selected code
0 B1B,B1F,B1H,B2A A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A
1 B1B,B1F,B1H,B2A,B2X,B3H,B4L,B5E,B5J A0A,A0B,A1C,A1Z,A2I,A5L,B1B,B1F,B1H,B2A,B2X,B3H,B4L,B5E,B5J,C0G,C1W,C5B,C5D
2 B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I A0B,A1C,A1Z,A2E,A5C,B1B,B1F,B1H,B2A,B2X,B4L,B5C,B5I,C0A,C1J,C5B,C5D,C6C,C6J,C6Q
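An alternative sketch without a regex: since the codes are comma separated, split each string and keep the entries that start with "B":
df['selected'] = df['code'].str.split(',').apply(
    lambda codes: ','.join(c for c in codes if c.startswith('B')))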

Split a dataframe into two dataframes using the first column's string values in Python

I have two .txt files where I want to separate the data frame into two parts using the first column's value. If the value is less than "H1000", it should go in the first dataframe; if it is greater than or equal to "H1000", it should go in the second dataframe. The first column's values start with H followed by four digits. I want to ignore the H when comparing the numbers against 1000 in Python.
Here is what I have tried, but it is not working:
ht_data = all_dfs.index[all_dfs.iloc[:, 0] == "H1000"][0]
print(ht_data)
Here is my code:
if (".txt" in str(path_txt).lower()) and path_txt.is_file():
txt_files = [Path(path_txt)]
else:
txt_files = list(Path(path_txt).glob("*.txt"))
for fn in txt_files:
all_dfs = pd.read_csv(fn,sep="\t", header=None) #Reading file
all_dfs = all_dfs.dropna(axis=1, how='all') #Drop the columns where all columns are NaN
all_dfs = all_dfs.dropna(axis=0, how='all') #Drop the rows where all columns are NaN
print(all_dfs)
ht_data = all_dfs.index[all_dfs.iloc[:, 0] == "H1000"][0]
print(ht_data)
df_h = all_dfs[0:ht_data] # Head Data
df_t = all_dfs[ht_data:] # Tene Data
Can anyone help me achieve this task in Python?
Assuming this data:
import pandas as pd

data = pd.DataFrame(
    [
        ["H0002", "Version", "5"],
        ["H0003", "Date_generated", "8-Aug-11"],
        ["H0004", "Reporting_period_end_date", "19-Jun-11"],
        ["H0005", "State", "AW"],
        ["H1000", "Tene_no/Combined_rept_no", "E75/3794"],
        ["H1001", "Tenem_holder Magnetic Resources", "NL"],
    ],
    columns=["id", "col1", "col2"],
)
We can create a mask of values over and under a preset threshold, like 1000:
mask = data["id"].str.strip("H").astype(int) < 1000
df_h = data[mask]
df_t = data[~mask]
If you want to compare values of the format val = HXXXX where X is a digit represented as a character, try this:
val = 'H1003'
val_cmp = int(val[1:])  # drop the leading 'H'
if val_cmp < 1000:
    pass  # goes in the first dataframe
else:
    pass  # goes in the second dataframe
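Applied to the file-reading loop from the question, the same mask idea splits all_dfs directly (a sketch assuming the first column always holds an H followed by digits):
num = all_dfs.iloc[:, 0].str.strip("H").astype(int)
df_h = all_dfs[num < 1000]   # head data
df_t = all_dfs[num >= 1000]  # tene data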

How to find and replace substrings at the end of column headers

I have the following columns, among others, in my dataframe: 'dom_pop', 'an_dom_n', 'an_dom_ncmplt'. Equivalent columns exist in multiple dataframes, with the suffix changing. For example, in another dataframe they may be called 'pa_pop', 'an_pa_n', 'an_pa_ncmplt'. I want to append '_kwh' to these columns across all my dataframes.
I wrote the following code:
cols = ['_n$', '_ncmplt', '_pop']  # the $ is added to indicate a string ending in _n
filterfuel = 'kwh'
for c in cols:
    dfdom.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfdom.columns]
    dfpa.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfpa.columns]
    dfsw.columns = [col.replace(f'{c}', f'{c}_{filterfuel}') for col in dfsw.columns]
kwh gets appended to the _ncmplt and _pop columns, but not the _n column. If I remove the $, _n gets appended, but then _ncmplt looks like 'an_dom_n_kwh_cmplt'.
For df dom, the corrected names should look like 'dom_pop_kwh', 'an_dom_n_kwh', 'an_dom_ncmplt_kwh'.
Why is $ not being recognised as an end-of-string anchor?
str.replace on a plain Python string does literal substring replacement, so the $ is searched for as an ordinary character rather than interpreted as a regex anchor. You can instead use np.where with a real regex:
import numpy as np

cols = ['_n$', '_ncmplt', '_pop']
filterfuel = 'kwh'
pattern = fr"(?:{'|'.join(cols)})"
for df in [dfdom, dfpa, dfsw]:
    df.columns = np.where(df.columns.str.contains(pattern, regex=True),
                          df.columns + f"_{filterfuel}", df.columns)
Output:
>>> pattern
'(?:_n$|_ncmplt|_pop)'
# dfdom = pd.DataFrame([[0]*4], columns=['dom_pop', 'an_dom_n', 'an_dom_ncmplt', 'hello'])
# After:
>>> dfdom
   dom_pop_kwh  an_dom_n_kwh  an_dom_ncmplt_kwh  hello
0            0             0                  0      0
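An equivalent sketch staying within pandas: Index.str.replace with a genuine regex, re-inserting the matched suffix through a capture group (anchoring all three suffixes to the end of the name is my assumption here):
for df in [dfdom, dfpa, dfsw]:
    df.columns = df.columns.str.replace(r'(_n|_ncmplt|_pop)$', r'\1_kwh', regex=True)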

create names of dataframes in a loop

I need to give names to previously defined dataframes.
I have a list of dataframes :
liste_verif = (dffreesurfer, total, qcschizo)
And I would like to give them a name by doing something like:
for h in liste_verif:
    h.name = str(h)
Would that be possible ?
When I test this code, it doesn't work: instead of considering h as a dataframe, Python considers each column of my dataframe.
I would like the name of my dataframe to be 'dffreesurfer', 'total' etc...
You can use a dict comprehension and map the DataFrames to the names in list L:
dffreesurfer = pd.DataFrame({'col1': [7,8]})
total = pd.DataFrame({'col2': [1,5]})
qcschizo = pd.DataFrame({'col2': [8,9]})
liste_verif = (dffreesurfer, total, qcschizo)
L = ['dffreesurfer','total','qcschizo']
dfs = {L[i]:x for i,x in enumerate(liste_verif)}
print(dfs['dffreesurfer'])
   col1
0     7
1     8
print(dfs['total'])
   col2
0     1
1     5
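The same mapping can be built a bit more directly with zip, pairing each name with its DataFrame in order:
dfs = dict(zip(L, liste_verif))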
