I'd like to perform a convolution in a Lambda layer, but I can't get it to work in any way.
kernel = [1.0,2.0,1.0] # weighted moving average
x = [  # history_size=5, num_features=10
    [1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0],
    [2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0],
    [3.0,3.0,3.0,3.0,3.0,3.0,3.0,3.0,3.0,3.0],
    [4.0,4.0,4.0,4.0,4.0,4.0,4.0,4.0,4.0,4.0],
    [5.0,5.0,5.0,5.0,5.0,5.0,5.0,5.0,5.0,5.0],
]
k = tf.constant(kernel, dtype=tf.float32)
y = tf.nn.conv1d(x, k, stride=1, padding='SAME')
I realize the dimensions are not correct in the example above, but that is my data's actual format. The training samples have a shape of (history_size, num_features), and the kernel has to convolve along history_size, each feature separately. Any help would be appreciated. I cannot find an example of how to perform tf.nn.conv1d manually.
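For reference, a minimal sketch of the shapes tf.nn.conv1d expects for this layout might look like the following, assuming the kernel should slide along history_size separately for each feature (treating the feature columns as batch elements is an assumption, not part of the original code):

import tensorflow as tf

kernel = [1.0, 2.0, 1.0]  # weighted moving average
x = tf.constant([[float(r)] * 10 for r in range(1, 6)])  # (history_size=5, num_features=10)

# tf.nn.conv1d expects input of shape (batch, width, channels) and a
# filter of shape (filter_width, in_channels, out_channels).
# Treating each feature column as its own batch element lets the same
# 1-D kernel slide along history_size for every feature separately.
xt = tf.transpose(x)[:, :, tf.newaxis]                                # (num_features, history_size, 1)
k = tf.constant(kernel, dtype=tf.float32)[:, tf.newaxis, tf.newaxis]  # (3, 1, 1)
y = tf.nn.conv1d(xt, k, stride=1, padding='SAME')                     # (num_features, history_size, 1)
y = tf.transpose(tf.squeeze(y, axis=-1))                              # back to (history_size, num_features)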
You could use numpy.convolve() for this.
import numpy as np
kernel = [1.0,2.0,1.0] # weighted moving average
x = [  # history_size=5, num_features=10
    [1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0,1.0],
    [2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0,2.0],
    [3.0,3.0,3.0,3.0,3.0,3.0,3.0,3.0,3.0,3.0],
    [4.0,4.0,4.0,4.0,4.0,4.0,4.0,4.0,4.0,4.0],
    [5.0,5.0,5.0,5.0,5.0,5.0,5.0,5.0,5.0,5.0],
]
output = []
for i in range(len(x)):
    output.append(list(np.convolve(x[i], kernel, mode='same')))
output
'''
[[3.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 4.0, 3.0],
[6.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 8.0, 6.0],
[9.0, 12.0, 12.0, 12.0, 12.0, 12.0, 12.0, 12.0, 12.0, 9.0],
[12.0, 16.0, 16.0, 16.0, 16.0, 16.0, 16.0, 16.0, 16.0, 12.0],
[15.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 20.0, 15.0]]
'''
You could try changing the mode to whichever fits you best, according to the documentation.
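If you prefer to avoid the explicit Python loop, the same per-row convolution can be written in a single call with numpy.apply_along_axis (a small variant on the loop above, not a different method):

import numpy as np

output = np.apply_along_axis(lambda row: np.convolve(row, kernel, mode='same'), axis=1, arr=np.asarray(x))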
Related
I am trying to train an LSTM with a dataset in which both the input and the output are sequences of numbers of different lengths. Each number in the input represents a timestep. Example of input and output:
Input:
ent
229 [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 2.0, 2.0, ...
511 [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 3.0, 3.0, ...
110 [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 4.0, 4.0, ...
243 [2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0, 3.0, 3.0, ...
334 [3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0, 2.0, 2.0, ...
Output:
sal
229 [6.0, 7.0, 3.0, 0.0, 1.0, 4.0, 5.0, 2.0]
511 [0.0, 1.0, 6.0, 7.0, 2.0, 4.0, 5.0, 6.0, 7.0]
110 [3.0, 5.0, 0.0, 1.0, 5.0, 6.0, 7.0, 3.0]
243 [3.0, 6.0, 7.0, 4.0, 6.0, 7.0, 0.0, 1.0, 4.0]
334 [6.0, 7.0, 3.0, 4.0, 3.0, 5.0, 4.0]
When training the model, this error always appears:
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
model = keras.Sequential()
model.add(layers.Input(shape=(None, 200)))
model.add(layers.LSTM(20))
Should I select a different NN or include padding?
I have also tried changing the dimension to:
ent
229 [[3.0], [3.0], [3.0], [3.0], [3.0], [3.0], [3....
Do you know what I could do?
Traceback (most recent call last):
at block 8, line 8
at /opt/python/envs/default/lib/python3.8/site-packages/keras/utils/traceback_utils.py, line 67, in error_handler(*args, **kwargs)
at /opt/python/envs/default/lib/python3.8/site-packages/tensorflow/python/framework/constant_op.py, line 106, in convert_to_eager_tensor(value, ctx, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type numpy.ndarray).
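One common way to handle ragged sequences like this is to pad them to a common length and mask the padded steps. A minimal sketch, assuming one value per timestep and -1.0 as a padding value that does not occur in the data (the sample sequences and layer sizes below are illustrative, not taken from the question):

import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Hypothetical ragged sequences standing in for the 'ent' column
ent = [[3.0, 3.0, 3.0, 2.0], [0.0, 0.0, 3.0], [2.0, 2.0, 2.0, 4.0, 4.0]]

X = pad_sequences(ent, padding='post', value=-1.0, dtype='float32')  # (batch, max_len)
X = X[..., np.newaxis]                                               # (batch, timesteps, features=1)

model = keras.Sequential([
    layers.Input(shape=(None, 1)),
    layers.Masking(mask_value=-1.0),  # the LSTM ignores the padded timesteps
    layers.LSTM(20),
])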
I am making a grouped bar chart of proficiency levels on a standardized test. Here is my code:
bush_prof_boy = bush.groupby(['BOY Prof'])['BOY Prof'].count()
bush_prof_pct_boy = bush_prof_boy/bush['BOY Prof'].count() * 100
bush_prof_eoy = bush.groupby(['EOY Prof'])['EOY Prof'].count()
bush_prof_pct_eoy = bush_prof_eoy/bush['EOY Prof'].count() * 100
labels = ['Remedial', 'Below Proficient', 'Proficient', 'Advanced']
fig, ax = plt.subplots()
rects1 = ax.bar(x - width/2, bush_prof_pct_boy, width, label='BOY', color='mediumorchid')
rects2 = ax.bar(x + width/2, bush_prof_pct_eoy, width, label='EOY', color='teal')
ax.set_ylabel('% of Students at Proficiency Level', fontsize=18)
ax.set_title('Bushwick Middle Change in Proficiency Levels', fontsize=25)
ax.set_xticks(x)
ax.set_xticklabels(labels, fontsize=25)
ax.legend(fontsize=25)
plt.yticks(fontsize=15)
plt.figure(figsize=(5,15))
plt.show()
"BOY" stands for "Beginning of Year" and "EOY" "End of Year" so the bar graph is intended to show percent of students who fell into each proficiency level at the beginning and end of the year. The graph looks alright but when I drill into the numbers, I can see that the labels for EOY are incorrect. Here is my graph:
The percentages for BOY are graphed correctly, but the EOY ones are paired with the wrong labels. Here are the actual percentages, which I am certain are correct:
BOY %
Advanced 14.0
Below Proficient 38.0
Proficient 34.0
Remedial 14.0
EOY %
Advanced 39.0
Below Proficient 18.0
Proficient 32.0
Remedial 11.0
Using data from Kaggle: Brooklyn NY Schools
Calculating the bar groups separately can be problematic. It is better to make the calculations within one dataframe, shape the dataframe, and then plot, because this will ensure the bars are plotted in the correct groups.
Since no data is provided, this begins with wide form numeric data and then cleans and shapes the dataframe.
Numeric values are converted to categorical with .cut
The dataframe is converted to long form with .melt, and then .groupby is used to calculate the percentage within each 'Tested' group (BOY/EOY)
Reshaped with .pivot, and plotted with pandas.DataFrame.plot
Tested in python 3.8, pandas 1.3.1, and matplotlib 3.4.2
Imports, Load and Clean the DataFrame
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import numpy as np
# data
data = {'BOY': [11.0, 11.0, 11.0, 11.0, 11.0, 8.0, 11.0, 14.0, 12.0, 13.0, 11.0, 14.0, 10.0, 9.0, 10.0, 10.0, 10.0, 12.0, 12.0, 13.0, 12.0, 11.0, 9.0, 12.0, 16.0, 12.0, 12.0, 12.0, 15.0, 10.0, 10.0, 10.0, 8.0, 11.0, 12.0, 14.0, 10.0, 8.0, 11.0, 12.0, 14.0, 12.0, 13.0, 15.0, 13.0, 8.0, 8.0, 11.0, 10.0, 11.0, 13.0, 11.0, 13.0, 15.0, 10.0, 8.0, 10.0, 9.0, 8.0, 11.0, 13.0, 11.0, 8.0, 11.0, 15.0, 11.0, 12.0, 17.0, 12.0, 11.0, 18.0, 14.0, 15.0, 16.0, 7.0, 11.0, 15.0, 16.0, 13.0, 13.0, 13.0, 0.0, 11.0, 15.0, 14.0, 11.0, 13.0, 16.0, 14.0, 12.0, 8.0, 13.0, 13.0, 14.0, 7.0, 10.0, 16.0, 10.0, 13.0, 10.0, 14.0, 8.0, 16.0, 13.0, 12.0, 14.0, 12.0, 14.0, 16.0, 15.0, 13.0, 13.0, 10.0, 14.0, 8.0, 10.0, 10.0, 11.0, 12.0, 10.0, 12.0, 14.0, 17.0, 13.0, 14.0, 16.0, 15.0, 13.0, 16.0, 9.0, 16.0, 15.0, 11.0, 11.0, 15.0, 14.0, 12.0, 15.0, 11.0, 16.0, 14.0, 14.0, 15.0, 14.0, 14.0, 14.0, 16.0, 15.0, 12.0, 12.0, 14.0, 15.0, 13.0, 14.0, 13.0, 17.0, 14.0, 13.0, 14.0, 13.0, 13.0, 12.0, 10.0, 15.0, 14.0, 12.0, 12.0, 14.0, 12.0, 14.0, 13.0, 15.0, 13.0, 14.0, 14.0, 12.0, 11.0, 15.0, 14.0, 14.0, 10.0], 'EOY': [16.0, 16.0, 16.0, 14.0, 10.0, 14.0, 16.0, 14.0, 15.0, 15.0, 15.0, 11.0, 11.0, 15.0, 10.0, 14.0, 17.0, 14.0, 9.0, 15.0, 14.0, 16.0, 14.0, 13.0, 11.0, 13.0, 12.0, 14.0, 15.0, 13.0, 14.0, 15.0, 12.0, 19.0, 9.0, 13.0, 11.0, 14.0, 17.0, 17.0, 14.0, 13.0, 14.0, 10.0, 16.0, 15.0, 12.0, 11.0, 12.0, 14.0, 15.0, 10.0, 15.0, 14.0, 14.0, 15.0, 18.0, 15.0, 10.0, 10.0, 15.0, 15.0, 13.0, 15.0, 19.0, 13.0, 18.0, 20.0, 21.0, 17.0, 18.0, 17.0, 18.0, 17.0, 12.0, 16.0, 15.0, 18.0, 19.0, 17.0, 20.0, 11.0, 18.0, 19.0, 11.0, 12.0, 17.0, 20.0, 17.0, 15.0, 13.0, 18.0, 14.0, 17.0, 12.0, 12.0, 16.0, 12.0, 14.0, 15.0, 14.0, 10.0, 20.0, 13.0, 18.0, 20.0, 11.0, 20.0, 17.0, 20.0, 13.0, 17.0, 15.0, 18.0, 14.0, 13.0, 13.0, 18.0, 10.0, 13.0, 12.0, 18.0, 20.0, 20.0, 16.0, 18.0, 15.0, 20.0, 22.0, 18.0, 21.0, 18.0, 18.0, 18.0, 17.0, 16.0, 19.0, 16.0, 20.0, 19.0, 19.0, 20.0, 20.0, 14.0, 18.0, 20.0, 20.0, 18.0, 16.0, 21.0, 20.0, 18.0, 15.0, 14.0, 17.0, 19.0, 21.0, 14.0, 18.0, 15.0, 18.0, 21.0, 19.0, 17.0, 16.0, 16.0, 15.0, 20.0, 19.0, 16.0, 21.0, 17.0, 19.0, 15.0, 18.0, 20.0, 18.0, 20.0, 18.0, 16.0, 16.0]}
df = pd.DataFrame(data)
# replace numbers with categorical labels; could also create new columns
labels = ['Remedial', 'Below Proficient', 'Proficient', 'Advanced']
bins = [1, 11, 13, 15, np.inf]
df['BOY'] = pd.cut(x=df.BOY, labels=labels, bins=bins, right=True)
df['EOY'] = pd.cut(x=df.EOY, labels=labels, bins=bins, right=True)
# melt the relevant columns into a long form
dfm = df.melt(var_name='Tested', value_name='Proficiency')
# set the categorical label order, which makes the xaxis labels print in the specific order
dfm['Proficiency'] = pd.Categorical(dfm['Proficiency'], labels, ordered=True)
Groupby, Percent Calculation, and Shape for Plotting
# groupby and get the value counts
dfg = dfm.groupby('Tested')['Proficiency'].value_counts().reset_index(level=1, name='Size').rename({'level_1': 'Proficiency'}, axis=1)
# divide by the Tested value counts to get the percent
dfg['percent'] = dfg['Size'].div(dfm.Tested.value_counts()).mul(100).round(1)
# reshape to plot
dfp = dfg.reset_index().pivot(index='Proficiency', columns='Tested', values='percent')
# display(dfp)
Tested BOY EOY
Proficiency
Remedial 34.8 9.9
Below Proficient 28.7 12.7
Proficient 27.1 25.4
Advanced 8.8 51.9
Plot
ax = dfp.plot(kind='bar', figsize=(15, 5), rot=0, color=['orchid', 'teal'])
# formatting
ax.yaxis.set_major_formatter(mtick.PercentFormatter())
ax.set_ylabel('Students at Proficiency Level', fontsize=18)
ax.set_xlabel('')
ax.set_title('Bushwick Middle Change in Proficiency Levels', fontsize=25)
ax.set_xticklabels(ax.get_xticklabels(), fontsize=25)
ax.legend(fontsize=25)
_ = plt.yticks(fontsize=15)
# add bar labels
for p in ax.containers:
    ax.bar_label(p, fmt='%.1f%%', label_type='edge', fontsize=12)
# pad the spacing between the number and the edge of the figure
ax.margins(y=0.2)
Note that the bar labels match dfp.
I have a numpy array called expected which is a list of lists of lists.
expected = [[[45.0, 10.0, 10.0], [110.0, 10.0, 8.0], [60.0, 10.0, 5.0], [170.0, 10.0, 4.0]], [[-80.0, 20.0, 10.0], [97.0, 15.0, 12.0], [5.0, 20.0, 8.0], [93.0, 10.0, 8.0], [12.0, 5.0, 15.0], [-88.0, 10.0, 10.0], [176.0, 10.0, 8.0]]]
I want to put it through a loop without hardcoding indices, so it is applicable to lists of different lengths.
When the loop runs for the first time, I want it to compute this:
horizontal_exp = expected[0][0][1]*expected[0][0][2]*np.cos(np.deg2rad(expected[0][0][0]))
Then I want the next iteration to compute this:
horizontal_exp = expected[1][1][1]*expected[1][1][2]*np.cos(np.deg2rad(expected[1][1][0]))
And the following iteration to compute this:
horizontal_exp = expected[2][2][1]*expected[2][2][2]*np.cos(np.deg2rad(expected[2][2][0]))
and so on until it has worked through all the rows in each section.
I don't understand why the 'i' index never worked.
In the end I want horizontal_expected to be a list of lists,
e.g.
expected = [ [12,21,23,34], [12,32,54,65,76,87,65] ] # These are not the values I'm just giving an example
where the [12,21,23,24] corresponds to the [[45.0, 10.0, 10.0], [110.0, 10.0, 8.0], [60.0, 10.0, 5.0], [170.0, 10.0, 4.0]]
and the [12,32,54,65,76,87,65] corresponds to the [[-80.0, 20.0, 10.0], [97.0, 15.0, 12.0], [5.0, 20.0, 8.0], [93.0, 10.0, 8.0], [12.0, 5.0, 15.0], [-88.0, 10.0, 10.0], [176.0, 10.0, 8.0]]
I'm unsure how to do this. I know you have to append within a for loop, but how do you separate the results into a list of lists?
horizontal_expected = []
for i in list(range(len(expected[i]))):
    horizontal_exp = expected[i][i][1]*expected[i][i][2]*np.cos(np.deg2rad(expected[i][i][0]))
    horizontal_expected.append(horizontal_exp)
print(horizontal_expected)
The reason you don't see the desired output is that, even though expected is a nested list, you are only iterating over the inner lists. You first need to iterate over the outer list and then iterate over each nested list inside it:
import numpy as np
expected = [ [[45.0, 10.0, 10.0], [110.0, 10.0, 8.0], [60.0, 10.0, 5.0], [170.0, 10.0, 4.0]], [[-80.0, 20.0, 10.0], [97.0, 15.0, 12.0], [5.0, 20.0, 8.0], [93.0, 10.0, 8.0], [12.0, 5.0, 15.0], [-88.0, 10.0, 10.0], [176.0, 10.0, 8.0]] ]
horizontal_expected = []
for i in range(len(expected)):
    tmp_list = []
    for j in range(len(expected[i])):
        horizontal_exp = expected[i][i][1]*expected[i][i][2]*np.cos(np.deg2rad(expected[i][i][0]))
        tmp_list.append(horizontal_exp)
    horizontal_expected.append(tmp_list)
print(horizontal_expected)
The output of that is a list of lists:
>>> print(horizontal_expected)
[[70.71067811865476, 70.71067811865476, 70.71067811865476, 70.71067811865476], [-21.936481812926527, -21.936481812926527, -21.936481812926527, -21.936481812926527, -21.936481812926527, -21.936481812926527, -21.936481812926527]]
As you can see, it holds a value for each element of the input lists, but within each inner list the value is the same. This is due to the way your indexing was set up.
You want the indices to be updated based on the level of the loop:
horizontal_exp = expected[i][j][1]*expected[i][j][2]*np.cos(np.deg2rad(expected[i][j][0]))
The full working code would look like this:
import numpy as np
expected = [ [[45.0, 10.0, 10.0], [110.0, 10.0, 8.0], [60.0, 10.0, 5.0], [170.0, 10.0, 4.0]], [[-80.0, 20.0, 10.0], [97.0, 15.0, 12.0], [5.0, 20.0, 8.0], [93.0, 10.0, 8.0], [12.0, 5.0, 15.0], [-88.0, 10.0, 10.0], [176.0, 10.0, 8.0]] ]
horizontal_expected = []
for i in range(len(expected)):
    tmp_list = []
    for j in range(len(expected[i])):
        horizontal_exp = expected[i][j][1]*expected[i][j][2]*np.cos(np.deg2rad(expected[i][j][0]))
        tmp_list.append(horizontal_exp)
    horizontal_expected.append(tmp_list)
print(horizontal_expected)
And the output:
>>> print(horizontal_expected)
[[70.71067811865476, -27.361611466053496, 25.000000000000007, -39.39231012048832], [34.72963553338608, -21.936481812926527, 159.39115169467928, -4.186876499435507, 73.36107005503543, 3.489949670250108, -79.80512402078594]]
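Since each inner row has the form [angle_deg, a, b], the same computation can also be vectorized per block with NumPy (an equivalent variant, not part of the original loop-based answer):

import numpy as np

horizontal_expected = []
for block in expected:
    arr = np.asarray(block, dtype=float)  # shape (n_rows, 3): [angle_deg, a, b]
    horizontal_expected.append(list(arr[:, 1] * arr[:, 2] * np.cos(np.deg2rad(arr[:, 0]))))
print(horizontal_expected)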
I have a tensor like below
x = tf.Variable(tf.truncated_normal([batch, input], stddev=0.1))
Assume that batch = 99 and input = 5, and I would like to split it up into smaller tensors.
If x is below:
[[1.0, 2.0, 3.0, 4.0, 5.0]
[2.0, 3.0, 4.0, 5.0, 6.0]
[3.0, 4.0, 5.0, 6.0, 7.0]
[4.0, 5.0, 6.0, 7.0, 8.0]
.........................
.........................
.........................
[44.0, 55.0, 66.0, 77.0, 88.0]
[55.0, 66.0, 77.0, 88.0, 99.0]]
I want to split it up into two tensors:
[[1.0, 2.0, 3.0, 4.0, 5.0]
[2.0, 3.0, 4.0, 5.0, 6.0]
[3.0, 4.0, 5.0, 6.0, 7.0]]
and
[[4.0, 5.0, 6.0, 7.0, 8.0]
.........................
.........................
[44.0, 55.0, 66.0, 77.0, 88.0]
[55.0, 66.0, 77.0, 88.0, 99.0]]
I don't know how to use tf.split to split along the row dimension.
An expedient way would be to call tf.slice twice.
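A minimal sketch of what that could look like, assuming the first tensor should hold the first three rows (the random stand-in for x and the exact split point are assumptions):

import tensorflow as tf

x = tf.random.normal([99, 5])                      # stand-in for the (batch, input) variable

first = tf.slice(x, begin=[0, 0], size=[3, -1])    # rows 0..2, all columns -> shape (3, 5)
second = tf.slice(x, begin=[3, 0], size=[-1, -1])  # rows 3..98 -> shape (96, 5)

# tf.split can also split along axis 0 when the row counts are known:
first, second = tf.split(x, num_or_size_splits=[3, 96], axis=0)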
I'm writing a python script.
I have a list of numbers:
b = [55.0, 54.0, 54.0, 53.0, 52.0, 51.0, 50.0, 49.0, 48.0, 47.0,
45.0, 45.0, 44.0, 43.0, 41.0, 40.0, 39.0, 39.0, 38.0, 37.0, 36.0, 35.0, 34.0, 33.0, 32.0, 31.0, 30.0, 28.0, 27.0, 27.0, 26.0, 25.0, 24.0, 23.0, 22.0, 22.0, 20.0, 19.0, 18.0, 17.0, 16.0, 15.0, 14.0, 13.0, 11.0, 11.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
I need to parse the list and see if it contains 50. If it does not, I have to search for one number less, 49. If that is not there, I have to look for 48. I can do this down to 47.
In Python, is there a one-liner I can do this with, or can I use a lambda for this?
You could use min() and abs():
>>> b = [55.0, 54.0, 54.0, 53.0, 52.0, 51.0, 50.0, 49.0, 48.0, 47.0, 45.0, 45.0, 44.0, 43.0, 41.0, 40.0, 39.0, 39.0, 38.0, 37.0, 36.0, 35.0, 34.0, 33.0, 32.0, 31.0, 30.0, 28.0, 27.0, 27.0, 26.0, 25.0, 24.0, 23.0, 22.0, 22.0, 20.0, 19.0, 18.0, 17.0, 16.0, 15.0, 14.0, 13.0, 11.0, 11.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
>>> min(b, key=lambda x:abs(x-50))
50.0
>>> min(b, key=lambda x:abs(x-20.1))
20.0
max(i for i in b if i <= 50)
It will raise a ValueError if there are no elements that match the condition.
max(filter(lambda i: i<=50, b))
or, to handle a list where all elements are above 50:
max(filter(lambda i: i<=50, b) or [None])
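As a small addition (not part of the original answers), the default keyword of max() also handles the empty case without an exception:

max((i for i in b if i <= 50), default=None)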
You can do this with a generator expression and max.
max(n for n in b if n >= 47 and n <= 50)
highestValue = max(b)
lowestValue = min(b)
if 50 in b:
    pass
Three different ways of finding numbers: the highest, the lowest, and whether 50 is in the mix.
And if you need to check whether multiple numbers are in your huge list, say you need to know whether 50, 30, and 40 are in there:
set(b).issuperset(set([50, 40, 30]))
A one-liner without any lambda (raises ValueError if no value is found):
max((x for x in b if 46 < x <= 50))
or a version that returns None in this case:
from itertools import chain
max(chain((x for x in b if 46 < x <= 50), (None,)))