On a fresh installation of Anaconda under Ubuntu... I am preprocessing my data in various ways prior to a classification task using Scikit-Learn.
from sklearn import preprocessing
scaler = preprocessing.MinMaxScaler().fit(train)
train = scaler.transform(train)
test = scaler.transform(test)
This all works fine, but if I have a new sample (temp below) that I want to classify (and thus want to preprocess in the same way):
temp = [1,2,3,4,5,5,6,....................,7]
temp = scaler.transform(temp)
Then I get a deprecation warning...
DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17
and will raise ValueError in 0.19. Reshape your data either using
X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1)
if it contains a single sample.
So the question is how should I be rescaling a single sample like this?
I suppose an alternative (not a very good one) would be...
temp = [temp, temp]
temp = scaler.transform(temp)
temp = temp[0]
But I'm sure there are better ways.
Just listen to what the warning is telling you:
Reshape your data either using X.reshape(-1, 1) if your data has a single feature/column, or X.reshape(1, -1) if it contains a single sample.
For your example (a single sample with more than one feature/column):
temp = np.array(temp).reshape(1, -1)
For a single feature/column:
temp = np.array(temp).reshape(-1, 1)
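As a quick illustration (a minimal sketch; the training data here is made up, standing in for the question's train array):
import numpy as np
from sklearn import preprocessing

train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaler = preprocessing.MinMaxScaler().fit(train)

temp = [1.5, 15.0]                    # one new sample with two features
temp = np.array(temp).reshape(1, -1)  # shape (1, 2): one row = one sample
print(scaler.transform(temp))         # no deprecation warning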
Well, it actually looks like the warning is telling you what to do.
As part of the uniform interface of sklearn.pipeline stages, as a rule of thumb:
when you see X, it should be an np.array with two dimensions
when you see y, it should be an np.array with a single dimension.
Here, therefore, you should consider the following:
import numpy as np

temp = [1,2,3,4,5,5,6,....................,7]
# This makes it into a 2d array with a single row (one sample, many features)
temp = np.array(temp).reshape((1, len(temp)))
temp = scaler.transform(temp)
This might help:
temp = [[1,2,3,4,5,6,.....,7]]
On a pandas object, .values.reshape(-1,1) will be accepted without alerts/warnings.
On a plain array, .reshape(-1,1) will be accepted, but with a deprecation warning.
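For context, a minimal sketch of the pandas variant (the Series here is made up):
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5, 6, 7])
col = s.values.reshape(-1, 1)  # 2-D column vector, shape (7, 1)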
I faced the same issue and got the same deprecation warning. I was using a numpy array of shape [23, 276] when I got the message. I tried reshaping it as per the warning and got nowhere. Then I selected each row from the numpy array (as I was iterating over it anyway) and assigned it to a list variable. It then worked without any warning.
array = []             # a plain Python list
array.append(temp[0])  # temp is the (23, 276) array; temp[0] is one row
Then you can use the Python list object (here 'array') as an input to sklearn functions. Not the most efficient solution, but it worked for me.
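The same idea as a one-liner (a minimal sketch, assuming scaler is the fitted scaler from above): wrapping a single 1-D row in a list gives sklearn the 2-D input it expects.
row_scaled = scaler.transform([temp[0]])[0]  # a list of one row is 2-D; [0] takes the row back out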
You can always reshape, like:
temp = np.array([1,2,3,4,5,5,6,7])
temp = temp.reshape(len(temp), 1)
because the major issue is that your temp.shape is:
(8,)
and you need
(8, 1)
-1 is the unknown dimension of the array. Read more about the "newshape" parameter in the numpy.reshape documentation.
# X is a 1-d ndarray
# If you want a COLUMN vector (many/one/unknown samples, 1 feature)
X = X.reshape(-1, 1)
# If you want a ROW vector (1 sample, many/one/unknown features)
X = X.reshape(1, -1)
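A quick sanity check of the two shapes (a minimal sketch):
import numpy as np

X = np.array([1, 2, 3, 4])
print(X.reshape(-1, 1).shape)  # (4, 1): column vector
print(X.reshape(1, -1).shape)  # (1, 4): row vector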
# Example: single-feature linear regression on pandas columns
from sklearn.linear_model import LinearRegression

X = df[['x_1']]
X_n = X.values.reshape(-1, 1)
y = df['target']
y_n = y.values
model = LinearRegression()
model.fit(X_n, y_n)
y_pred = pd.Series(model.predict(X_n), index=X.index)
I am reading functions from an existing file using the h5py library.
readFile = h5py.File('File', 'r')
Using readFile.keys() I obtained the list of the functions stored in 'File'. One of these functions is the function phi. To print the function phi, I did
phi = numpy.array(readFile['phi'])[:,0,:,:]
In [:,0,:,:], the indices reflect how the data is stored: [blocks, z, y, x]. z = 0 because it is a 2D case. x is divided into 2 blocks, and y is divided into 2 blocks. Each x block is divided into nxb cells (x1, x2, ..., x20), and each y block into nyb cells. (nxb and nyb can also be obtained directly from the file using h5py, as they are also stored in it. The domain of the data is stored in the file as well and is called ['bounding box'].)
Then, the grid is coded as:
nxb = numpy.array(readFile['integer scalars'])[0][1]
nyb = numpy.array(readFile['integer scalars'])[1][1]
nblocks = 4  # number of blocks (2 in x times 2 in y)
X = numpy.zeros([nblocks, nxb, nyb])
Y = numpy.zeros([nblocks, nxb, nyb])
for block in range(nblocks):
    x_min, x_max = numpy.array(readFile['bounding box'])[block,0,:]
    y_min, y_max = numpy.array(readFile['bounding box'])[block,1,:]
    X[block,:,:], Y[block,:,:] = numpy.meshgrid(numpy.linspace(x_min,x_max,nxb),
                                                numpy.linspace(y_min,y_max,nyb))
My question is that I am trying to restructure the data (see the figure). I want to bring the data of block 2 up above the data of block 1, not next to it. This means that I need to create new coordinates i' and j' related to the old coordinates i and j. I tried this, but it is not working:
for i in range(len(X)):
    for j in range(len(Y)):
        i_new = i - len(X[0:1,:,:])
        j_new = j + len(Y[0:1,:,:])
        phi[i_new, j_new] = phi[i, j]
When working with HDF5 data, it's important to understand your data schema before you start writing code. Here are my initial observations and suggestions.
Your question is a little hard to follow. (For example, you are using the term "functions" to describe HDF5 datasets.) HDF5 organizes data in datasets and groups. Your data of interest is in 2 datasets: 'phi' and 'integer scalars'.
You can simplify the code that accesses the datasets as NumPy arrays using the following:
with h5py.File('File','r') as readFile:
    # to get the axis dimensions for 'phi':
    print(f"Shape of Dataset phi: {readFile['phi'].shape}")
    phi_ds = readFile['phi']      # to get a dataset object
    phi_arr = readFile['phi'][()] # to read dataset as a numpy array
    # to get the axis dimensions for 'integer scalars'
    nxb, nyb = readFile['integer scalars'].shape
I don't understand what you mean by "blocks". Are you referring to the axis dimensions? Also, why are you using meshgrid? If you simply want to change dimensions, use NumPy's .reshape() method to change the axis dimensions of the array.
Here is a simple example that creates a 2x2 dataset, then reads it into a new array and reshapes it to 4x1. I think this is what you want to do. Change the values of a0 and a1 if you want to increase the size. The reshape operation reads the shape from the first array and reshapes the new array to (N, 1), where N is your nxb*nyb value.
with h5py.File('SO_72340647.h5','w') as h5f:
    a0, a1 = 2, 2
    arr = np.arange(a0*a1).reshape(a0,a1)
    h5f.create_dataset('ds_2x2', data=arr)

with h5py.File('SO_72340647.h5','r') as h5f:
    print(f"Shape of Dataset ds_2x2: {h5f['ds_2x2'].shape}")
    ds_arr = h5f['ds_2x2'][()]
    print(ds_arr)
    ds0, ds1 = ds_arr.shape
    new_arr = ds_arr.reshape(ds0*ds1, 1)
    print(f"Shape of new (reshaped) array: {new_arr.shape}")
    print(new_arr)
Note: h5py dataset objects "behave like" Numpy arrays. So, you frequently don't have to read into an array to use the data.
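For example (a minimal sketch, reusing the 'phi' dataset name from the question):
with h5py.File('File','r') as readFile:
    # slicing the dataset object directly reads only the selected data
    phi = readFile['phi'][:, 0, :, :]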
I am working on a classification problem in python and would like to scale the dataset in the first step.
I have 3463 images, each with a dimension of (40, 90, 3), respectively (x, y, channel). Overall, the array has a dimension of (3463, 40, 90, 3).
How can I use the StandardScaler correctly, and how can I display the image?
Code:
#------------- Image Preprocessing -----------------------------------
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler

Eingangsbilder2 = np.asarray(Eingangsbilder2)
print("Image-dim: ", Eingangsbilder2.shape)
scalers = {}
for x in range(0, len(Eingangsbilder2)):
    for i in range(0, Eingangsbilder2[x].shape[2]):
        scalers[i] = StandardScaler()
        Eingangsbilder2[x][:, :, i] = scalers[i].fit_transform(Eingangsbilder2[x][:, :, i])
plt.imshow(Eingangsbilder2[2010])
You can get rid of the for loop altogether by applying z-scoring, which is equivalent to scikit-learn's StandardScaler, along the first "image number" axis:
import scipy.stats
Eingangsbilder2 = scipy.stats.zscore(Eingangsbilder2, axis=0)
Hint: In Python you can simply write range(len(Eingangsbilder2)), since indexing (unlike in MATLAB) always starts at 0.
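A quick check of what the one-liner does (a minimal sketch; random data stands in for Eingangsbilder2):
import numpy as np
import scipy.stats

imgs = np.random.rand(10, 40, 90, 3)            # stand-in for the image array
imgs_scaled = scipy.stats.zscore(imgs, axis=0)  # standardize across the image axis
print(imgs_scaled.mean(axis=0).max())           # ~0 at every pixel/channel position
print(imgs_scaled.std(axis=0).max())            # ~1 at every pixel/channel position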
I am trying to perform a k-prototypes clustering on mixed data (categorical and numeric). My input file is a csv which looks like this (it contains 300000 rows):
Unnamed: 0.1,market,vendor_name,price,ship_from,category_cl
0,mark,03welle,1.79367196,DE,Drugs
1,aruna,03welle,0.05880975,DE,Drugs
2,ny,03welle,0.11344859,DE,Drugs
3,mi,03welle,0.18655316,DE,Drugs
I am trying to implement k-prototypes clustering, as it can cluster mixed data. The problem is that I am getting an error and I cannot understand it (and of course fix it). I am using the code I found in the project's repo:
import numpy as np
print("initialising")
syms = np.genfromtxt('pameteliko.csv', dtype=str, delimiter='\t')[:, 0]
print("******")
print(syms)
X = np.genfromtxt('pameteliko.csv', dtype=object, delimiter='\t')[:, 1:]
print("################")
X[:, 0] = X[:, 0].astype(float)
from kmodes.kprototypes import KPrototypes
kproto = KPrototypes(n_clusters=6, init='Cao', verbose=2)
clusters = kproto.fit_predict(X, categorical=[1, 2])
#Print cluster centroids of the trained model.
print(kproto.cluster_centroids_)
#Print training statistics
print(kproto.cost_)
print(kproto.n_iter_)
(The prints are there for debugging purposes). I am getting the following error:
IndexError: too many indices for array
I also have some doubts regarding syms and X. Any help would be really appreciated.
Change the delimiter from '\t' to ','
syms = np.genfromtxt('pameteliko.csv', dtype=str, delimiter=',')[:, 0]
print("******")
print(syms)
X = np.genfromtxt('pameteliko.csv', dtype=object, delimiter=',')[:, 1:]
because you are using a comma-separated values file. I hope it works!
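One more thing worth checking (an assumption based on the sample shown in the question, which begins with a header row): np.genfromtxt would otherwise read that header line in as data, so skip_header=1 may also be needed:
import numpy as np

# assumption: the first line of pameteliko.csv is the header row shown above
syms = np.genfromtxt('pameteliko.csv', dtype=str, delimiter=',', skip_header=1)[:, 0]
X = np.genfromtxt('pameteliko.csv', dtype=object, delimiter=',', skip_header=1)[:, 1:]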
I'd like to normalize my training set before passing it to my NN, so instead of doing it manually (subtracting the mean and dividing by the std), I tried keras.utils.normalize() and I am amazed by the results I got.
Running this:
import numpy as np
from keras.utils import normalize

r = np.random.rand(3000) * 1000
nr = normalize(r)
print(np.mean(r))
print(np.mean(nr))
print(np.std(r))
print(np.std(nr))
print(np.min(r))
print(np.min(nr))
print(np.max(r))
print(np.max(nr))
Results in this:
495.60440066771866
0.015737914577213984
291.4440194021
0.009254802974329002
0.20755517410064872
6.590913227674956e-06
999.7631481267636
0.03174747238214018
Unfortunately, the docs don't explain what's happening under the hood. Can you please explain what it does and if I should use keras.utils.normalize instead of what I would have done manually?
It is not the kind of normalization you expect. Actually, it uses np.linalg.norm() under the hood to normalize the given data using Lp-norms:
def normalize(x, axis=-1, order=2):
    """Normalizes a Numpy array.

    # Arguments
        x: Numpy array to normalize.
        axis: axis along which to normalize.
        order: Normalization order (e.g. 2 for L2 norm).

    # Returns
        A normalized copy of the array.
    """
    l2 = np.atleast_1d(np.linalg.norm(x, order, axis))
    l2[l2 == 0] = 1
    return x / np.expand_dims(l2, axis)
For example, in the default case, it normalizes the data using L2-normalization (i.e. the sum of the squared elements equals one).
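You can verify this directly (a minimal sketch):
import numpy as np
from keras.utils import normalize

r = np.random.rand(3000) * 1000
nr = normalize(r)
print(np.sum(nr ** 2))  # ~1.0: the squared elements sum to one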
You can either use this function, or if you don't want to do mean and std normalization manually, you can use StandardScaler() from sklearn or even MinMaxScaler().
I'm training a Python (2.7.11) classifier for text classification, and while running it I'm getting a deprecation warning and I don't know which line in my code is causing it. The code works fine and gives me the results...
\AppData\Local\Enthought\Canopy\User\lib\site-packages\sklearn\utils\validation.py:386: DeprecationWarning: Passing 1d arrays as data is deprecated in 0.17 and will raise ValueError in 0.19. Reshape your data either using X.reshape(-1, 1) if your data has a single feature or X.reshape(1, -1) if it contains a single sample.
My code:
def main():
    data = []
    folds = 10
    ex = [[] for x in range(0, 10)]
    results = []
    for i, f in enumerate(sys.argv[1:]):
        data.append(csv.DictReader(open(f, 'r'), delimiter='\t'))
    for f in data:
        for i, datum in enumerate(f):
            ex[i % folds].append(datum)
    #print ex
    for held_out in range(0, folds):
        l = []
        cor = []
        l_test = []
        cor_test = []
        vec = []
        vec_test = []
        for i, fold in enumerate(ex):
            for line in fold:
                if i == held_out:
                    l_test.append(line['label'].rstrip("\n"))
                    cor_test.append(line['text'].rstrip("\n"))
                else:
                    l.append(line['label'].rstrip("\n"))
                    cor.append(line['text'].rstrip("\n"))
        vectorizer = CountVectorizer(ngram_range=(1,1), min_df=1)
        X = vectorizer.fit_transform(cor)
        for c in cor:
            tmp = vectorizer.transform([c]).toarray()
            vec.append(tmp[0])
        for c in cor_test:
            tmp = vectorizer.transform([c]).toarray()
            vec_test.append(tmp[0])
        clf = MultinomialNB()
        clf.fit(vec, l)
        result = accuracy(l_test, vec_test, clf)
        print result

if __name__ == "__main__":
    main()
Any idea which line raises this warning?
Another issue is that running this code with different data sets gives me the exact same accuracy, and I can't figure out what causes this.
I also want to use this model in another Python process. I looked at the documentation and found an example using the pickle library, but not for joblib. So I tried following the same code, but this gave me errors:
clf = joblib.load('model.pkl')
pred = clf.predict(vec);
Also, if my data is a CSV file with this format: "label \t text \n", what should be in the label column of the test data?
Thanks in advance
Your 'vec' input into clf.fit(vec, l) needs to be of type [[]], not just []. This is a quirk that I always forget when I fit models.
Just adding an extra set of square brackets should do the trick!
The offending line is:
pred = clf.predict(vec);
I used this in my code and it worked:
import numpy as np

# This makes it into a 2d array
temp = [2, 70, 90, 1]  # an instance
temp = np.array(temp).reshape((1, -1))
print(model.predict(temp))
Two solutions, with the same philosophy: turn your data from 1-D into 2-D.
Just add brackets:
vec = [vec]
Or reshape your data:
import numpy as np
vec = np.array(vec).reshape(1, -1)
If you want to find out where the warning is coming from, you can temporarily promote warnings to exceptions. This will give you a full traceback and thus the lines where your program encountered the warning.
import warnings

with warnings.catch_warnings():
    warnings.simplefilter("error")
    main()
If you run the program from the command line, you can also use the -W flag (e.g. python -W error yourscript.py). More information on warning handling can be found in the Python documentation.
I know this answers only one part of your question, but did you debug your code?
Since 1-D arrays are deprecated, try passing a 2-D array as the parameter. This might help.
clf = joblib.load('model.pkl')
pred = clf.predict([vec]);
The predict method expects a 2-D array. You can watch this video; I have also located the exact time: https://youtu.be/KjJ7WzEL-es?t=2602. You have to change from [] to [[]].
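To make the [] to [[]] point concrete (a minimal sketch with made-up data):
import numpy as np
from sklearn.naive_bayes import MultinomialNB

X = np.array([[1, 0, 2], [0, 1, 1]])
y = np.array(['a', 'b'])
clf = MultinomialNB().fit(X, y)

vec = [1, 0, 2]            # a single sample as a flat 1-D list
print(clf.predict([vec]))  # wrapping it gives a 2-D, one-row input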