raise H2OValueError(message=message, var_name=vname, skip_frames=1) - Python

I'm trying to set column names, but I encounter an error:
H2OValueError: Argument names
Code:
index_columns_names = ["Date"]
generator_output_columns_names = ["GenOut"]
generator_v_columns_names = ["GenVar"]
turb_bearing_vib_columns_names =["TurbBearingVib"+str(i) for i in range(1,6)]
gen_bearing_vib_columns_names = ["GenBearingVib"+str(i) for i in range(7,9)]
input_file_column_names = index_columns_names + generator_output_columns_names + generator_v_columns_names + turb_bearing_vib_columns_names + gen_bearing_vib_columns_names
data = h2o.upload_file("data\Data_SLA_Unit_1_2018.csv")
data.set_names(input_file_column_names);
How can I fix this problem?

Based on your naming convention, are you expecting input_file_column_names to be a list of 12 strings? When we print it, we see the following 10 column names:
['Date',
'GenOut',
'GenVar',
'TurbBearingVib1',
'TurbBearingVib2',
'TurbBearingVib3',
'TurbBearingVib4',
'TurbBearingVib5',
'GenBearingVib7',
'GenBearingVib8']
In H2O-3 version 3.22.1.3, data.set_names(input_file_column_names) worked successfully for any dataset that had 10 columns, but gave the following error if the number of columns was more or less than the number of strings:
H2OValueError: Argument names (= ['Date', 'GenOut', 'GenVar']) does not satisfy the condition len(names) == self.ncol
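In other words, set_names() requires exactly one name per column. A minimal sketch of a pre-check you could add (assuming input_file_column_names is built as in the question; the file path is taken from it):
import h2o

h2o.init()
data = h2o.upload_file("data/Data_SLA_Unit_1_2018.csv")  # forward slash avoids escape issues

# set_names() demands len(names) == data.ncol, so compare before renaming
if len(input_file_column_names) == data.ncol:
    data.set_names(input_file_column_names)
else:
    # adjust the ranges that build the name lists until the counts match:
    # range(1, 6) yields 5 names, range(7, 9) yields only 2
    raise ValueError("expected {} names, frame has {} columns".format(
        len(input_file_column_names), data.ncol))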

Related

Fill pandas dataframe with a for loop

I have 4 dataframes for 4 newspapers (newspaper1, newspaper2, newspaper3, newspaper4),
each of which has a single column for the author name.
Now I'd like to merge these 4 dataframes into one with 5 columns: author, plus newspaper1, newspaper2, newspaper3 and newspaper4, which contain 1/0 values (1 if the author writes for that newspaper).
import pandas as pd

listOfMedia = [newspaper1, newspaper2, newspaper3, newspaper4]
merged = pd.DataFrame(columns=['author', 'newspaper1', 'newspaper2', 'newspaper3', 'newspaper4'])
While this loop does what I intended (it fills the merged df's author column with the names):
for item in listOfMedia:
    merged.author = item.author
I can't figure out how to fill the newspaper columns with the 1/0 values...
for item in listOfMedia:
    if item == newspaper1:
        merged['newspaper1'] = '1'
    elif item == newspaper2:
        merged['newspaper2'] = '1'
    elif item == newspaper3:
        merged['newspaper3'] = '1'
    else:
        merged['newspaper4'] = '1'
I keep getting this error:
During handling of the above exception, another exception occurred:
TypeError: attrib() got an unexpected keyword argument 'convert'
I tried to google that error, but it didn't help me identify the problem.
What am I missing here? I also think there must be a smarter way to fill the newspaper/author matrix, but I can't seem to figure out even this simple way. I am using a Jupyter notebook.
Actually, you are setting all rows to 1, so use:
for col in merged.columns:
    merged[col].values[:] = 1
I've taken a guess at what I think your dataframes look like.
newspaper1 = pd.DataFrame({'author': ['author1', 'author2', 'author3']})
newspaper2 = pd.DataFrame({'author': ['author1', 'author2', 'author4']})
newspaper3 = pd.DataFrame({'author': ['author1', 'author2', 'author5']})
newspaper4 = pd.DataFrame({'author': ['author1', 'author2', 'author6']})
Firstly we will copy the dataframes so we don't affect the originals:
newspaper1_temp = newspaper1.copy()
newspaper2_temp = newspaper2.copy()
newspaper3_temp = newspaper3.copy()
newspaper4_temp = newspaper4.copy()
Next we replace the index of each dataframe with the author name:
newspaper1_temp.index = newspaper1['author']
newspaper2_temp.index = newspaper2['author']
newspaper3_temp.index = newspaper3['author']
newspaper4_temp.index = newspaper4['author']
Then we concatenate these dataframes (matching them together by the index we set):
merged = pd.concat([newspaper1_temp, newspaper2_temp, newspaper3_temp, newspaper4_temp], axis=1)
merged.columns = ['newspaper1', 'newspaper2', 'newspaper3', 'newspaper4']
And finally we replace NaNs with 0 and then set the non-zero entries (they will still contain the author names) to 1:
merged = merged.fillna(0)
merged[merged != 0] = 1
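For reference, a more compact sketch of the same idea, assuming the four example dataframes above (the helper names here are my own, not from the question):
import pandas as pd

papers = {'newspaper1': newspaper1, 'newspaper2': newspaper2,
          'newspaper3': newspaper3, 'newspaper4': newspaper4}

# one Series of 1s per paper, indexed by author name
indicators = {name: pd.Series(1, index=df['author']) for name, df in papers.items()}

# outer-join on author; authors missing from a paper become NaN, then 0
merged = pd.concat(indicators, axis=1).fillna(0).astype(int)
merged.index.name = 'author'
merged = merged.reset_index()
This assumes each author appears at most once per newspaper; duplicate authors within one paper would need a drop_duplicates() first.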

Counting the repeated values in one column based on another column

Using pandas, I am dealing with the following CSV data:
f,f,f,f,f,t,f,f,f,t,f,t,g,f,n,f,f,t,f,f,f,f,f,f,f,f,f,f,f,f,f,f,f,t,t,t,nowin
t,f,f,f,f,f,f,f,f,f,t,f,g,f,b,f,f,t,f,f,f,f,f,t,f,t,f,f,f,f,f,f,f,t,f,n,won
t,f,f,f,t,f,f,f,t,f,t,f,g,f,b,f,f,t,f,f,f,t,f,t,f,t,f,f,f,f,f,f,f,t,f,n,won
f,f,f,f,f,f,f,f,f,f,t,f,g,f,b,f,f,t,f,f,f,f,f,t,f,t,f,f,f,f,f,f,f,t,f,n,nowin
t,f,f,f,t,f,f,f,t,f,t,f,g,f,b,f,f,t,f,f,f,t,f,t,f,t,f,f,f,f,f,f,f,t,f,n,won
f,f,f,f,f,f,f,f,f,f,t,f,g,f,b,f,f,t,f,f,f,f,f,t,f,t,f,f,f,f,f,f,f,t,f,n,win
For this part of the raw data, I was trying to return something like:
Column1_name -- t -- count of nowin = 0
Column1_name -- t -- count of won = 3
Column1_name -- f -- count of nowin = 2
Column1_name -- f -- count of win = 1
Based on this idea (get dataframe row count based on conditions), I was thinking of doing something like this:
print(df[df.target == 'won'].count())
However, this always returns the same number of "won" rows based on the last column, without taking into account whether the first column is an "f" or a "t". In other words, I was hoping for something in pandas that would produce the idea of a "group by" from SQL, grouping based on, for example, the 1st and last columns.
Should I keep pursuing this idea, or should I simply start using for loops?
If you need it, here is the rest of my code:
import pandas as pd

url = "https://archive.ics.uci.edu/ml/machine-learning-databases/chess/king-rook-vs-king-pawn/kr-vs-kp.data"
df = pd.read_csv(url, names=[
    'bkblk', 'bknwy', 'bkon8', 'bkona', 'bkspr', 'bkxbq', 'bkxcr', 'bkxwp', 'blxwp', 'bxqsq', 'cntxt', 'dsopp', 'dwipd',
    'hdchk', 'katri', 'mulch', 'qxmsq', 'r2ar8', 'reskd', 'reskr', 'rimmx', 'rkxwp', 'rxmsq', 'simpl', 'skach', 'skewr',
    'skrxp', 'spcop', 'stlmt', 'thrsk', 'wkcti', 'wkna8', 'wknck', 'wkovl', 'wkpos', 'wtoeg', 'target'
])
features = ['bkblk', 'bknwy', 'bkon8', 'bkona', 'bkspr', 'bkxbq', 'bkxcr', 'bkxwp', 'blxwp', 'bxqsq', 'cntxt', 'dsopp', 'dwipd',
            'hdchk', 'katri', 'mulch', 'qxmsq', 'r2ar8', 'reskd', 'reskr', 'rimmx', 'rkxwp', 'rxmsq', 'simpl', 'skach', 'skewr',
            'skrxp', 'spcop', 'stlmt', 'thrsk', 'wkcti', 'wkna8', 'wknck', 'wkovl', 'wkpos', 'wtoeg', 'target']
# number of lines
#tot_of_records = np.size(my_data,0)
#tot_of_records = np.unique(my_data[:,1])
#for item in my_data:
#    item[:,0]
num_of_won = 0
num_of_nowin = 0
for item in df.target:
    if item == 'won':
        num_of_won = num_of_won + 1
    else:
        num_of_nowin = num_of_nowin + 1
print(num_of_won)
print(num_of_nowin)
print(df[df.target == 'won'].count())
#print(df[:1])
#print(df.bkblk.to_string(index=False))
#print(df.target.unique())
#ini_entropy = (() + ())
This could work:
outdf = df.apply(lambda x: pd.crosstab(index=df.target, columns=x).to_dict())
Basically, we go over each feature column and build a crosstab of it against the target column.
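If you prefer the SQL "group by" framing from the question, groupby gives the same counts directly. A sketch using the df defined above ('bkblk' is just the first feature column):
# count each (value, target) combination for one feature column
print(df.groupby(['bkblk', 'target']).size())

# or for every feature at once (features[:-1] skips 'target' itself)
for col in features[:-1]:
    print(df.groupby([col, 'target']).size())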
Hope this helps! :)

Python - Pandas library returns wrong column values after parsing a CSV file

SOLVED: I found the solution by myself. It turns out that when you want to retrieve specific columns by their names, you should pass the names in the order they appear inside the csv (which is really stupid for a library that is intended to save a developer some parsing time, IMO). Correct me if I am wrong, but I don't see an option to get a specific column's values by its name if the columns are passed in a different order...
I am trying to read a comma separated value file with Python and then parse it using the pandas library. Since the file has many values (columns) that are not needed, I make a list of the column names I do need.
Here's a look at the csv file format:
Div,Date,HomeTeam,AwayTeam,FTHG,FTAG,FTR,HTHG,HTAG,HTR,Attendance,Referee,HS,AS,HST,AST,HHW,AHW,HC,AC,HF,AF,HO,AO,HY,AY,HR,AR,HBP,ABP,GBH,GBD,GBA,IWH,IWD,IWA,LBH,LBD,LBA,SBH,SBD,SBA,WHH,WHD,WHA
E0,19/08/00,Charlton,Man City,4,0,H,2,0,H,20043,Rob Harris,17,8,14,4,2,1,6,6,13,12,8,6,1,2,0,0,10,20,2,3,3.2,2.2,2.9,2.7,2.2,3.25,2.75,2.2,3.25,2.88,2.1,3.2,3.1
E0,19/08/00,Chelsea,West Ham,4,2,H,1,0,H,34914,Graham Barber,17,12,10,5,1,0,7,7,19,14,2,3,1,2,0,0,10,20,1.47,3.4,5.2,1.6,3.2,4.2,1.5,3.4,6,1.5,3.6,6,1.44,3.6,6.5
E0,19/08/00,Coventry,Middlesbrough,1,3,A,1,1,D,20624,Barry Knight,6,16,3,9,0,1,8,4,15,21,1,3,5,3,1,0,75,30,2.15,3,3,2.2,2.9,2.7,2.25,3.2,2.75,2.3,3.2,2.75,2.3,3.2,2.62
E0,19/08/00,Derby,Southampton,2,2,D,1,2,A,27223,Andy D'Urso,6,13,4,6,0,0,5,8,11,13,0,2,1,1,0,0,10,10,2,3.1,3.2,1.8,3,3.5,2.2,3.25,2.75,2.05,3.2,3.2,2,3.2,3.2
E0,19/08/00,Leeds,Everton,2,0,H,2,0,H,40010,Dermot Gallagher,17,12,8,6,0,0,6,4,21,20,6,1,1,3,0,0,10,30,1.65,3.3,4.3,1.55,3.3,4.5,1.55,3.5,5,1.57,3.6,5,1.61,3.5,4.5
E0,19/08/00,Leicester,Aston Villa,0,0,D,0,0,D,21455,Mike Riley,5,5,4,3,0,0,5,4,12,12,1,4,2,3,0,0,20,30,2.15,3.1,2.9,2.3,2.9,2.5,2.35,3.2,2.6,2.25,3.25,2.75,2.4,3.25,2.5
E0,19/08/00,Liverpool,Bradford,1,0,H,0,0,D,44183,Paul Durkin,16,3,10,2,0,0,6,1,8,8,5,0,1,1,0,0,10,10,1.25,4.1,7.2,1.25,4.3,8,1.35,4,8,1.36,4,8,1.33,4,8
This list is passed to the names parameter of pandas.read_csv(). See the code:
# Returns an array of the column names needed for our raw data table
def cols_to_extract():
    cols_to_use = [None] * RawDataCols.COUNT
    cols_to_use[RawDataCols.DATE] = 'Date'
    cols_to_use[RawDataCols.HOME_TEAM] = 'HomeTeam'
    cols_to_use[RawDataCols.AWAY_TEAM] = 'AwayTeam'
    cols_to_use[RawDataCols.FTHG] = 'FTHG'
    cols_to_use[RawDataCols.HG] = 'HG'
    cols_to_use[RawDataCols.FTAG] = 'FTAG'
    cols_to_use[RawDataCols.AG] = 'AG'
    cols_to_use[RawDataCols.FTR] = 'FTR'
    cols_to_use[RawDataCols.RES] = 'Res'
    cols_to_use[RawDataCols.HTHG] = 'HTHG'
    cols_to_use[RawDataCols.HTAG] = 'HTAG'
    cols_to_use[RawDataCols.HTR] = 'HTR'
    cols_to_use[RawDataCols.ATTENDANCE] = 'Attendance'
    cols_to_use[RawDataCols.HS] = 'HS'
    cols_to_use[RawDataCols.AS] = 'AS'
    cols_to_use[RawDataCols.HST] = 'HST'
    cols_to_use[RawDataCols.AST] = 'AST'
    cols_to_use[RawDataCols.HHW] = 'HHW'
    cols_to_use[RawDataCols.AHW] = 'AHW'
    cols_to_use[RawDataCols.HC] = 'HC'
    cols_to_use[RawDataCols.AC] = 'AC'
    cols_to_use[RawDataCols.HF] = 'HF'
    cols_to_use[RawDataCols.AF] = 'AF'
    cols_to_use[RawDataCols.HFKC] = 'HFKC'
    cols_to_use[RawDataCols.AFKC] = 'AFKC'
    cols_to_use[RawDataCols.HO] = 'HO'
    cols_to_use[RawDataCols.AO] = 'AO'
    cols_to_use[RawDataCols.HY] = 'HY'
    cols_to_use[RawDataCols.AY] = 'AY'
    cols_to_use[RawDataCols.HR] = 'HR'
    cols_to_use[RawDataCols.AR] = 'AR'
    return cols_to_use

# Extracts raw data from the raw data csv and populates the raw match data table in the database
def extract_raw_data(csv):
    # Clear the database table if it has any logs
    # if MatchRawData.objects.count != 0:
    #     MatchRawData.objects.delete()
    cols_to_use = cols_to_extract()
    # Read and parse the csv file
    parsed_csv = pd.read_csv(csv, delimiter=',', names=cols_to_use, header=0)
    for col in cols_to_use:
        values = parsed_csv[col].values
        for val in values:
            print(str(col) + ' --------> ' + str(val))
Where RawDataCols is an IntEnum:
class RawDataCols(IntEnum):
    DATE = 0
    HOME_TEAM = 1
    AWAY_TEAM = 2
    FTHG = 3
    HG = 4
    FTAG = 5
    AG = 6
    FTR = 7
    RES = 8
    ...
The column names are obtained using it. That part of the code works fine; the correct column name is obtained. But after trying to get its values using
values = parsed_csv[col].values
pandas returns the values of a wrong column. The wrong column's index is around 13 positions away from the one I am trying to get. What am I missing?
You can select columns by name. Just use the following line:
values = parsed_csv[["Column Name", "Column Name2"]]
Or you can select by index:
cols = [1, 2, 3, 4]
values = parsed_csv[parsed_csv.columns[cols]]
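It may also be worth noting why the original code misbehaved: the names parameter of pd.read_csv() relabels columns positionally, it does not select them, so a list in a different order than the file shifts every label. To pull out a subset of columns by header name regardless of order, usecols is the parameter designed for that. A minimal sketch (the file name here is assumed):
import pandas as pd

# usecols selects columns by header name, so the order of this list
# does not matter; names=, by contrast, assigns labels by position
wanted = ['Date', 'HomeTeam', 'AwayTeam', 'FTHG', 'FTAG', 'FTR']
parsed_csv = pd.read_csv('football.csv', usecols=wanted)
print(parsed_csv['HomeTeam'].values)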

Python - ValueError: could not broadcast input array from shape (5) into shape (2)

I have written some code which takes in my dataframe, which consists of two columns: one is a string and the other is an idea count. The code takes in the dataframe, tries several delimiters, and cross-references the result with the count to check it is using the correct one. The result I am looking for is a new column called "Ideas" which contains the list of broken-out ideas. My code is below:
import re

import pandas as pd


def getIdeas(row):
    s = str(row[0])
    ic = row[1]
    # Try to break on delimiters like ";;"
    my_dels = [";;", ";", ",", "\\", "//"]
    for d in my_dels:
        ideas = s.split(d)
        if len(ideas) == ic:
            return ideas
    # Try to break on numbers "N)"
    ideas = re.split(r'[0-9]\)', s)
    if len(ideas) == ic:
        return ideas
    ideas = []
    return ideas


# k = getIdeas(str_contents3, idea_count3)
xl = pd.ExcelFile("data/Total Dataset.xlsx")
df = xl.parse("Sheet3")
df1 = df.iloc[:, 1:3]
df1 = df1.loc[df1.iloc[:, 1] != 0]
df1["Ideas"] = df1.apply(getIdeas, axis=1)
When I run this I get an error:
ValueError: could not broadcast input array from shape (5) into shape (2)
Could someone tell me how to fix this?
You have two options with apply and axis=1: either you return a single value, or you return a list whose length matches your number of columns. If you match the number of columns, it will be broadcast to the entire row; if you return a single value, apply will return a pandas Series.
One workaround is not to use apply:
result = []
for idx, row in df1.iterrows():
    result.append(getIdeas(row))
df1['Ideas'] = result
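If you would rather keep apply, newer pandas versions (0.23 and up) accept a result_type argument that stops the row-length broadcast; a sketch, assuming a recent pandas:
# result_type='reduce' always builds a Series of objects, so a returned
# list lands in a single cell instead of being broadcast across the row
df1["Ideas"] = df1.apply(getIdeas, axis=1, result_type='reduce')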

read_hdf where fails: 'all of the variable references must be a reference to an axis...'

I'm stuck on the following:
log_iter = pd.read_hdf(FN, dspath,
                       where=[pd.Term('hashID', '=', idList)],
                       iterator=True,
                       chunksize=3000)
The dspath dataset has 35 columns and can be quite large, causing a MemoryError.
So I am trying the iterator/chunksize route. But the where= clause is failing with:
ValueError: The passed where expression: [hashID=[147685,...,147197]]
contains an invalid variable reference
all of the variable references must be a reference to
an axis (e.g. 'index' or 'columns'), or a data_column
The currently defined references are: ** list of column names **
The problem is that hashID is not in the list of column names. Yet, if I do
read_hdf(FN, dspath).columns
hashID is in the columns. Any suggestions? My goal is to read in all rows (all 35 columns) whose hashID is in idList.
Update: the following works, and shows that hashID does exist as a column once the dataset is read in.
def dsIterator(self, q, idList):
    hID = u'hashID'
    FN = self.db._hdf_FN()
    dspath = self.getdatasetname(q)
    log_iter = pd.read_hdf(FN, dspath,
                           # where=[pd.Term(u'logid_hashID', '=', idList)],
                           iterator=True,
                           chunksize=30000)
    n_all = 0
    retDF = None
    for dfChunk in log_iter:
        goodChunk = dfChunk.loc[dfChunk[hID].isin(idList)]
        if retDF is None:
            retDF = goodChunk
        else:
            retDF = pd.concat([retDF, goodChunk], ignore_index=True)
        n_all += dfChunk[hID].count()
    n_ret = retDF[hID].count()
    return retDF
Does
log_iter = pd.read_hdf(FN, dspath,
                       where=['logid_hashID={:d}'.format(id_) for id_ in idList],
                       iterator=True,
                       chunksize=3000)
work?
If idList is large, this might be a bad idea.
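The likely root cause, for what it's worth: where= can only filter on the index or on columns that were declared as data_columns when the table was written, which is why hashID is absent from "the currently defined references" even though it shows up after a full read. Rewriting the store with hashID declared should make the query work; a sketch, reusing FN and dspath from the question (and assuming one full read fits in memory for the rewrite):
# one-time rewrite, declaring hashID as a queryable data column
df = pd.read_hdf(FN, dspath)
df.to_hdf(FN, dspath, format='table', data_columns=['hashID'])

# now where= can reference hashID; '=' against a list acts as an isin filter
log_iter = pd.read_hdf(FN, dspath,
                       where='hashID=%s' % list(idList),
                       iterator=True,
                       chunksize=3000)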
