Alternative to scikit-learn's LabelBinarizer in R - python

I'm doing machine learning for time series prediction and I need to transform dates into vectors of zeros and ones.
If I decide that the relevant information in a date is the day of the week on which the observation was made, I'd like a time series of vectors of length 7, each containing a single "1": in the first slot if it's a Monday, the second if it's a Tuesday, and so on.
For example, I'd like an input like "2015-12-22 22:48:00" to be transformed into
0 1 0 0 0 0 0
if the relevant information is that it's a Tuesday, or into
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0
if it's that the observation was made at 10 p.m. (a vector of length 24 with the "1" in slot 23, counting from one, for hour 22).
The LabelBinarizer() from sklearn.preprocessing does this nicely in Python, and I've looked for an equivalent in R but haven't found one. Do any of you happen to know what I'm looking for?
Here is the LabelBinarizer() documentation: http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelBinarizer.html
Right now I'm doing this in Python, where Hour is a time series of the exact hours at which my observations were made:
import sklearn.preprocessing as pp

lbday = pp.LabelBinarizer()    # defaults: neg_label=0, pos_label=1
lbday.fit(list(range(24)))     # the valid labels are the hours 0 to 23
Hour = lbday.transform(Hour)   # one row of 24 zeros and ones per observation
Then I export a csv of the binarized dates, which I read with R.
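The day-of-week case follows the same pattern. A minimal sketch, assuming pandas is used for the date parsing:

import pandas as pd
import sklearn.preprocessing as pp

dates = pd.to_datetime(pd.Series(["2015-12-22 22:48:00"]))
lbweek = pp.LabelBinarizer()
lbweek.fit(list(range(7)))                      # Monday=0 ... Sunday=6
weekday = lbweek.transform(dates.dt.dayofweek)  # [[0 1 0 0 0 0 0]], a Tuesday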
Thank you!

Try this:
binarizer <- function(levels){
  f <- function(v){
    m <- matrix(0, nrow=length(v), ncol=length(levels))
    vf <- as.numeric(factor(v, levels=levels))
    m[cbind(1:length(v), vf)] <- 1
    colnames(m) <- levels
    m
  }
  f
}
Example:
> ab = binarizer(letters[1:5]) # valid values a to e
> ab(c("a","e","a"))
     a b c d e
[1,] 1 0 0 0 0
[2,] 0 0 0 0 1
[3,] 1 0 0 0 0

Related

AttributeError: 'Series' object has no attribute 'to_coo'

I am trying to use a Naive Bayes classifier from the sklearn module to classify whether movie reviews are positive. I am using a bag of words as the features for each review and a large dataset with sentiment scores attached to reviews.
df_bows = pd.DataFrame.from_records(bag_of_words)
df_bows = df_bows.fillna(0).astype(int)
This code creates a pandas dataframe which looks like this:
The Rock is destined to ... Staggeringly ’ ve muttering dissing
0 1 1 1 1 2 ... 0 0 0 0 0
1 2 0 1 0 0 ... 0 0 0 0 0
2 0 0 0 0 0 ... 0 0 0 0 0
3 0 0 1 0 4 ... 0 0 0 0 0
4 0 0 0 0 0 ... 0 0 0 0 0
I then try to fit this data frame against the sentiment of each review using this code:
from sklearn.naive_bayes import MultinomialNB

nb = MultinomialNB()
nb = nb.fit(df_bows, movies.sentiment > 0)
However, I get an error which says:
AttributeError: 'Series' object has no attribute 'to_coo'
This is what the df movies looks like.
sentiment text
id
1 2.266667 The Rock is destined to be the 21st Century's ...
2 3.533333 The gorgeously elaborate continuation of ''The...
3 -0.600000 Effective but too tepid biopic
4 1.466667 If you sometimes like to go to the movies to h...
5 1.733333 Emerges as something rare, an issue movie that...
Can you help with this?
When you try to fit the MultinomialNB model, sklearn's input-checking routine tests whether df_bows is sparse. If it is, as in our case, the dataframe needs to be converted to the 'Sparse' dtype. Here is how I fixed it:
df_bows = pd.DataFrame.from_records(bag_of_words)
# Keep NaN values and convert to Sparse type
sparse_bows = df_bows.astype('Sparse')
nb = nb.fit(sparse_bows, movies['sentiment'] > 0)
Link to the pandas docs: pandas.Series.sparse.to_coo
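An alternative that sidesteps pandas sparse dtypes entirely is to hand scikit-learn a scipy sparse matrix directly. A sketch, assuming the dense bag-of-words fits in memory:

from scipy.sparse import csr_matrix

X = csr_matrix(df_bows.fillna(0).values)  # dense counts -> compressed sparse rows
nb = nb.fit(X, movies['sentiment'] > 0)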

pandas DataFrame: match a dataframe and a dict by intervals

I have a question concerning DataFrames. I have a DataFrame with intervals of 0.1 s and the features belonging to each interval. I want to add a column containing the prediction from a previous algorithm (is this interval silence or sound?). I have a dictionary with all predicted silence intervals per audio recording. My DataFrame looks like this; here it is filtered on audio_id == 0 and ordered on interval_x.
audio_id interval_x interval_y predicted_value
0 0 0.579367 0.679367 0
1 0 0.679367 0.779367 0
2 0 0.779367 0.879367 0
3 0 0.879367 0.979367 0
4 0 0.979367 1.079367 0
... ... ... ... ...
518 0 50.805830 50.905830 0
519 0 50.905830 51.005830 0
520 0 51.005830 51.105830 0
521 0 51.105830 51.205830 0
522 0 51.205830 51.212938 0
My dictionary containing the silence intervals looks like this:
{'0': [[1.4501383219954658, 2.058138321995466],
[3.298138321995466, 4.762138321995465],
[7.682138321995467, 8.266138321995465],
[11.266138321995466, 11.938138321995465],
[13.242138321995466, 13.706138321995466],
[16.73013832199547, 17.82613832199547],
[24.53813832199547, 25.130138321995467],
[26.394138321995467, 27.042138321995466],
[28.21013832199547, 28.722138321995466]],
'1': [[0.0, 0.31253968253968023],
[4.296539682539681, 5.040539682539681],
[8.64053968253968, 9.296539682539679],
and so on for each audio file.
What is an efficient way to do this?
Here's a solution using merge_asof to match each interval to the closest preceding silence start. d is the dictionary from the question and intervals is the data frame.
silent_times = pd.DataFrame.from_records(
    [(file, from_time, to_time)
     for file, values in d.items()
     for from_time, to_time in values],
    columns=["audio_id", "from_time", "to_time"])
silent_times.audio_id = silent_times.audio_id.astype(int)

pieces = []
for inx in intervals.audio_id.unique():
    intervals_slice = intervals[intervals.audio_id == inx]
    silent_times_slice = silent_times[silent_times.audio_id == inx]
    t = pd.merge_asof(intervals_slice, silent_times_slice,
                      left_on="interval_x", right_on="from_time")
    t.loc[(t.interval_x >= t.from_time) & (t.interval_y <= t.to_time),
          "predicted_value"] = 1
    pieces.append(t)
res = pd.concat(pieces)  # DataFrame.append was removed in pandas 2.0
For the dataframe from the question and the following silence intervals:
d = {'0': [
[1.4501383219954658, 2.058138321995466],
[3.298138321995466, 4.762138321995465],
[7.682138321995467, 8.266138321995465],
[50.01, 51.01]
],
'1': [
[0.0, 0.31253968253968023],
[4.296539682539681, 5.040539682539681],
[8.64053968253968, 9.296539682539679]]}
the result is:
print(res[["audio_id_x", "interval_x", "interval_y", "predicted_value"]])
audio_id_x interval_x interval_y predicted_value
0 0 0.579367 0.679367 0
1 0 0.679367 0.779367 0
2 0 0.779367 0.879367 0
3 0 0.879367 0.979367 0
4 0 0.979367 1.079367 0
5 0 50.805830 50.905830 1
6 0 50.905830 51.005830 1
7 0 51.005830 51.105830 0
8 0 51.105830 51.205830 0
9 0 51.205830 51.212938 0
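As a possible simplification (a sketch, not what the output above was produced with): merge_asof can handle the per-recording grouping itself through its by parameter, as long as both frames are sorted on the merge keys. audio_id then stays a single column instead of being suffixed to audio_id_x/audio_id_y:

t = pd.merge_asof(intervals.sort_values("interval_x"),
                  silent_times.sort_values("from_time"),
                  left_on="interval_x", right_on="from_time",
                  by="audio_id")
t.loc[(t.interval_x >= t.from_time) & (t.interval_y <= t.to_time),
      "predicted_value"] = 1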

How to split a list using two nested conditions

Basically I have a list of 0s and 1s. Each value in the list represents a data sample from one hour, so 24 entries make up a single day. I want to capture the first time the data cycles from 0s to 1s and back to 0s within a span of 24 hours (or vice versa, from 1s to 0s and back to 1s).
signal = [1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,1]
expected output:
signal = [1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,1,1,0,0,0]
output = [0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0]
# ^ cycle.1:day.1 |dayline ^cycle.1:day.2
In the output list, a 1 means one cycle was completed at that position of the signal list; every other position is 0. Only one cycle per day should be flagged, which is why there is a single 1 per day.
I don't know how to split the list this way, so can someone please help?
It seems to me that what you are trying to do is split your data into blocks of 24, and then find either the first rising edge or the first falling edge, depending on the first hour in that block.
Below I have tried to distill my understanding of what you are trying to accomplish into the following function. It takes a numpy array containing zeros and ones, as in your example, checks what the first hour in the day is, and decides which type of edge to look for.
It detects an edge using np.diff, which gives an array containing -1s, 0s, and 1s. We then look for the first index of either a -1 (falling edge) or a 1 (rising edge). The function returns that index; if no edge is found, it returns the index of the last element, or nothing.
For more info, see the numpy docs for the features used here: np.diff, np.ndarray.nonzero, np.array_split.
import numpy as np

def get_cycle_index(day):
    '''
    Returns the first index of a cycle as defined by nipun vats.
    If no cycle is found, returns nothing.
    '''
    first_hour = day[0]
    if first_hour == 0:
        edgetype = -1
    else:
        edgetype = 1
    # pad with the last sample so diff has one entry per hour
    edges = np.diff(np.r_[day, day[-1]])
    if (edges == edgetype).any():
        return (edges == edgetype).nonzero()[0][0]
    elif (day.sum() == day.size) or day.sum() == 0:
        return
    else:
        return day.size - 1
Below is an example of how you might use this function in your case.
import numpy as np

_data = [1,1,1,1,1,0,0,0,0,0,1,1,1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]
#_data = np.random.randint(0, 2, 280, dtype='int')
data = np.array(_data, 'int')

# split the data into a set of 'day' blocks
blocks = np.array_split(data, np.arange(24, data.size, 24))

_output = []
for i, day in enumerate(blocks):
    print(f'day {i}')
    buffer = np.zeros(day.size, dtype='int')
    print('\tsignal:', *day, sep=' ')
    cycle_index = get_cycle_index(day)
    if cycle_index is not None:   # 0 is a valid cycle index
        buffer[cycle_index] = 1
    print('\toutput:', *buffer, sep=' ')
    _output.append(buffer)

output = np.concatenate(_output)
print('\nfinal output:\n', *output, sep=' ')
This yields the following output:
day 0
signal: 1 1 1 1 1 0 0 0 0 0 1 1 1 1 1 0 0 0 1 1 1 1 1 0
output: 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
day 1
signal: 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
output: 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
day 2
signal: 0 0 0 0 0 0
output: 0 0 0 0 0 0
final output:
0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Extend a matrix by interpolating zeros

I am trying to write Python code to extend a matrix in the way shown below:
Given Matrix:
1 2
3 4
Now I want to convert it to the following:
1 0 0 2 0 0
0 0 0 0 0 0
0 0 0 0 0 0
3 0 0 4 0 0
0 0 0 0 0 0
0 0 0 0 0 0
I am trying to do the same for a matrix of dimensions 60x80. I tried numpy.insert(), but for a larger matrix I cannot apply the same approach (it requires too much hardcoding). I need some suggestions on how to do this kind of interpolation.
You can use the step part of a slice to achieve this, if you preallocate the result:
import numpy as np

repeat = 3
result = np.zeros((arr.shape[0]*repeat, arr.shape[1]*repeat))
result[::repeat, ::repeat] = arr   # fill every repeat-th row and column from arr
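For the 2x2 example from the question this reproduces the requested 6x6 matrix (a quick check; pass dtype=arr.dtype to np.zeros if you want integer output rather than floats):

import numpy as np

arr = np.array([[1, 2], [3, 4]])
repeat = 3
result = np.zeros((arr.shape[0]*repeat, arr.shape[1]*repeat), dtype=arr.dtype)
result[::repeat, ::repeat] = arr
print(result)
# [[1 0 0 2 0 0]
#  [0 0 0 0 0 0]
#  [0 0 0 0 0 0]
#  [3 0 0 4 0 0]
#  [0 0 0 0 0 0]
#  [0 0 0 0 0 0]]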

python, read '.dat' file with different columns for each line

I need to extract some data from a .dat file, which I usually do with
import numpy as np
file = np.loadtxt('blablabla.dat')
Here my data are not separated by a specific delimiter; the columns have predefined widths, and some lines have no value in some of the columns.
Here is a sample to make this clear:
3 0 36 0 0 0 0 0 0 0 99.
-2 0 0 0 0 0 0 0 0 0 99.
2 0 0 0 0 0 0 0 0 0 .LA.0?. 3.
5 0 0 0 0 2 4 0 0 0 .SAS7?. 99.
-5 0 0 0 0 0 0 0 0 0 99.
99 0 0 0 0 0 0 0 0 0 .S..3*. 3.5
My little code above gives the error:
# Convert each value according to its column and store
ValueError: Wrong number of columns at line 3
Does anyone have an idea of how to read this kind of data?
numpy.genfromtxt seems to be what you want; it lets you specify field widths for each column and treats missing data as NaN.
For this case:
import numpy as np
data = np.genfromtxt('blablabla.dat',delimiter=[2,3,4,3,3,2,3,4,5,3,8,5])
If you want to keep information in the string part of the file, you could read twice and specify the usecols parameter:
import numpy as np
number_data = np.genfromtxt('blablabla.dat', delimiter=[2,3,4,3,3,2,3,4,5,3,8,5],
                            usecols=(0,1,2,3,4,5,6,7,8,9,11))
string_data = np.genfromtxt('blablabla.dat', delimiter=[2,3,4,3,3,2,3,4,5,3,8,5],
                            usecols=(10,), dtype=str)
What you essentially need is the list of character positions that are blank in every line; those positions serve as delimiters.
This will get you started:
In [108]: table = ''' 3 0 36 0 0 0 0 0 0 0 99.
.....: -2 0 0 0 0 0 0 0 0 0 99.
.....: 2 0 0 0 0 0 0 0 0 0 .LA.0?. 3.
.....: 5 0 0 0 0 2 4 0 0 0 .SAS7?. 99.
.....: -5 0 0 0 0 0 0 0 0 0 99.
.....: 99 0 0 0 0 0 0 0 0 0 .S..3*. 3.5'''.split('\n')
In [110]: max_row_len = max(len(row) for row in table)
In [111]: from functools import reduce  # reduce is not a builtin in Python 3
In [117]: spaces = reduce(lambda res, row: res.intersection(idx for idx, c in enumerate(row) if c == ' '), table, set(range(max_row_len)))
This starts from the set of all character positions up to the length of the longest row; reduce then keeps only the positions that contain a space in every row.
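One way to finish the job from there (a sketch): group the shared space positions into contiguous runs, treat the gaps between runs as columns, and slice each row accordingly.

cols, start = [], None
for i in range(max_row_len + 1):
    if i < max_row_len and i not in spaces:
        if start is None:          # a data column begins here
            start = i
    elif start is not None:        # a data column ends here
        cols.append((start, i))
        start = None

# slicing past the end of a short row simply yields '' for missing columns
parsed = [[row[a:b].strip() for a, b in cols] for row in table]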
