Select the first n list items within a for loop - Python

What I have at the moment is the loop below, which lights each LED on a strand of 24 in turn.
while True:
    for i in range(24):
        pixels = [(0, 0, 0)] * numLEDs
        pixels[i] = (100, 100, 100)
        client.put_pixels(pixels)
        time.sleep(0.02)
What I would like is for the previously lit LEDs to stay on in each iteration, so that the number of lit LEDs increases by one each time round.
I assumed I could simply select from the start of the list to the i'th item, as below.
However, this returns a "TypeError: 'int' object is not iterable".
I'm not really clear on why this doesn't work.
while True:
    for i in range(24):
        pixels = [(0, 0, 0)] * numLEDs
        pixels[:i] = (100, 100, 100)
        client.put_pixels(pixels)
        time.sleep(0.02)
While I've got your attention: is there a better way to time these loops than using time.sleep()? Although I am using threading, the sleeps still cause some delays when the LED patterns change.
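On the timing aside: one common pattern, sketched here without any LED library (the frame callback is a stand-in for whatever calls client.put_pixels), is to sleep until an absolute deadline computed from time.monotonic() rather than sleeping a fixed interval. The frame rate then stays steady even when the loop body takes variable time, and each frame is a natural point to check for pattern changes.

```python
import time

PERIOD = 0.02  # seconds per frame

def run_frames(n_frames, do_frame):
    """Call do_frame(i) once per PERIOD, sleeping until an absolute
    deadline so the loop does not drift when frames take variable time."""
    next_tick = time.monotonic()
    for i in range(n_frames):
        do_frame(i)
        next_tick += PERIOD
        # sleep only for the remaining slice of this frame's period;
        # if the frame overran, skip sleeping rather than accumulating lag
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

Each iteration could also poll a flag or queue so that a pattern change takes effect on the very next frame instead of waiting out a long fixed sleep.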

The problem is that you are forever overwriting your current pixel state with all zeroes. If you define the pixel structure outside of your infinite while loop, and then adjust one at a time only, it should fix your problem. Try something like this:
numLEDs = 24
pixels = [(0, 0, 0)] * numLEDs

while True:
    for i in range(numLEDs):
        pixels[i] = (100, 100, 100)
        client.put_pixels(pixels)
        time.sleep(0.02)

In your first example, the loop turns off the LEDs by setting them to (0,0,0). Instead, why not use this:
while True:
    for i in range(24):
        pixels = [(100, 100, 100)] * i + [(0, 0, 0)] * (numLEDs - i)
        client.put_pixels(pixels)
        time.sleep(0.02)
This sets the first i elements of the list to (100, 100, 100) and the remaining numLEDs - i elements to (0, 0, 0).
If i = 5 and numLEDs = 15, you will get this output:
[(100, 100, 100), (100, 100, 100), (100, 100, 100), (100, 100, 100), (100, 100, 100), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)]
As you can see, this will leave the others on.

If you want to light ALL the LEDs in sequence, then turn them all off at once, and so on:
# constants
ON = (100, 100, 100)
OFF = (0, 0, 0)
N = 24

n = 0
while True:
    if n % N == 0:
        px = [OFF] * N
        client.put_pixels(px)
        time.sleep(0.02)
    px[n % N] = ON
    client.put_pixels(px)
    time.sleep(0.02)
    n += 1

This answer deals with the task of keeping two consecutive LEDs lit at all times, so that if you mount your LED strand in the shape of a circle you see a double light moving around the circle at roughly 2 Hz.
I answered under a false impression of what the OP actually asked, and I have given another answer that addresses the real question, but I'd like to leave this one here because it has its beauty, at least in the eyes of the beholder...
You want to compute, using simple modular arithmetic, which LED must be turned OFF and which must be turned ON, using the trick (which I hope is allowed under your requirements) of starting with a partially lit board.
# constants
ON = (100, 100, 100)
OFF = (0, 0, 0)
N = 24

# initial state of the LEDs
px = [OFF] * N
client.put_pixels(px)  # no LEDs on

# set initial conditions for the iteration
px[0] = ON
n = -1

# the never-ending loop
while True:
    n += 1
    i, j = (n - 1) % N, (n + 1) % N
    px[i], px[j] = OFF, ON
    client.put_pixels(px)
    time.sleep(0.02)
Before the 1st iteration, LED 0 is ON in the px list; after the 1st iteration two LEDs are ON; in every further iteration you turn one LED OFF and another ON, so at every moment exactly two LEDs are ON.

Related

How to vectorize this matrix building operation?

Problem:
Vectorize the building of a matrix where certain elements are a function determined by whether or not a particular triple index exists in a dictionary. Other elements are 0.
Code:
This is simple to implement using for loops:
import numpy as np

# specify matrix size. In this example it is 4*4 but they get much, much bigger
mat_size = 4
# create empty numpy array
matt = np.zeros((mat_size, mat_size))
# index_list is a list of tuples which are subtracted element-wise in the for
# loops to form a new triple index used to access the coming dictionary;
# the list of tuples gets larger as the matrix size increases
index_list = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 0, 2), (0, 1, 1), (0, 2, 0), (1, 1, 0)]
# the dictionary of functions:
#   keys are tuples of integers
#   values are the functions evaluated to form the matrix
# (there are more keys in the actual problem; this is a minimal working example)
func_dict = {(0, 0, 0): lambda a: a + 10,
             (0, 0, -2): lambda a: a - 3,
             (0, 0, -1): lambda a: a**5,
             (0, 0, 1): lambda a: a*10 + 5,
             (0, 1, 0): lambda a: a/3 + 2,
             (0, 1, -2): lambda a: a*2 + a**4,
             (0, 1, -1): lambda a: a/4 * 76,
             (0, 1, 1): lambda a: a*19 / 3,
             }
# only loop over half the matrix as it is always symmetric
for H in range(mat_size):
    for J in range(H, mat_size):
        x = index_list[J][0] - index_list[H][0]  # subtract first tuple elements
        y = index_list[J][1] - index_list[H][1]  # subtract second tuple elements
        z = index_list[J][2] - index_list[H][2]  # subtract third tuple elements
        # check if (x, y, z) is in the dictionary
        if (x, y, z) in func_dict:
            matt[H, J] = func_dict[x, y, z](3.43)
        else:
            matt[H, J] = 0
Output:
An upper triangular (with diagonal) matrix:
[[13.43       39.3        3.14333333  0.        ]
 [ 0.        13.43       65.17       39.3       ]
 [ 0.         0.         13.43        0.        ]
 [ 0.         0.          0.         13.43      ]]
I parallelized the outer loop, which is the implementation I have been using, and it has been fast enough until now. The matrices are becoming larger, and after profiling, this is the limiting step that I need to speed up, as it gets called repeatedly.
I can feel this problem will allow for vectorization, but several attempts have always resulted in a loop that ends up being comparable to the parallelized version.
Is there an effective way to vectorize this problem, removing any loop(s)?
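One possible direction, sketched below with the first mat_size entries of index_list and only a subset of func_dict from the question: build all pairwise index differences at once with broadcasting, then loop over the (small, fixed-size) dictionary instead of the n² matrix cells, filling every matching cell with one masked assignment.

```python
import numpy as np

mat_size = 4
index_list = [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 0, 2)]
func_dict = {(0, 0, 0): lambda a: a + 10,
             (0, 0, 1): lambda a: a*10 + 5,
             (0, 1, 0): lambda a: a/3 + 2,
             (0, 1, -1): lambda a: a/4 * 76}

idx = np.array(index_list)                 # shape (n, 3)
# diff[H, J] = index_list[J] - index_list[H], for all pairs at once
diff = idx[None, :, :] - idx[:, None, :]   # shape (n, n, 3)

matt = np.zeros((mat_size, mat_size))
for key, func in func_dict.items():        # loop over the dict, not the matrix
    hit = (diff == key).all(axis=-1)       # (n, n) mask of cells matching key
    matt[hit] = func(3.43)
matt = np.triu(matt)                       # keep the upper triangle + diagonal
```

With these inputs, matt reproduces the upper-triangular output shown in the question; whether this beats the parallelized loops in practice depends on how the dictionary size grows relative to the matrix size.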

Wrong difference returned between two PixelAccess objects

So I have a function which takes as parameters two PixelAccess objects, which are essentially two images converted to multi-dimensional arrays of pixels of the form image1pixels[x, y]. It subtracts each pair of pixel tuples across the width and height of both images, appends each difference tuple c to a list, and then returns the sum of all the tuples in the list.
Here is the function:
def difference(pix1, pix2):
    size = width, height = img.size
    result = 0
    array = []
    for x in range(width):
        for y in range(height):
            c = tuple(map(sub, pix2[x, y], pix1[x, y]))
            array.append(c)
    result = abs(add(map(sum, array)))
    return result
To give an idea, this is what is printed when I print c:
(0, 0, 0)
(0, 0, 0)
(0, 0, 0)
(-253, -253, -253)
(-210, -210, -210)
(-168, -168, -168)
(-147, -147, -147)
(-48, -48, -48)
(-13, -13, -13)
(-29, -29, -29)
(-48, -48, -48)
(-48, -48, -48)
(0, 0, 0)
(0, 0, 0)
(0, 0, 0)
I have to compare two images using this function; the expected difference is 17988, but my function returns 9174.
I just want to know if my logic is wrong or if I'm coding this the wrong way, knowing that Python is not my primary everyday language.
Thanks in advance.
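For what it's worth, the mismatch is most likely because abs() is applied once to the grand total, so negative differences cancel positive ones before the absolute value is taken. A sketch that instead sums per-channel absolute differences (assuming pix1[x, y] and pix2[x, y] return RGB tuples, as PIL's load() does, and passing width and height in rather than reading a global img):

```python
def difference(pix1, pix2, width, height):
    """Sum of per-channel absolute differences between two images."""
    total = 0
    for x in range(width):
        for y in range(height):
            # abs() per channel, so +d and -d no longer cancel out
            total += sum(abs(a - b) for a, b in zip(pix1[x, y], pix2[x, y]))
    return total
```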

How to convert array values 0 and 255 to corresponding 0 and 1 array

I have an image represented as a numpy array with values of 0 and 255 (no other values in that range). What is the best way to convert it to a 0 and 1 array?
my_array = np.array([255,255,0,0])
my_array = my_array / 255
Will output
array([ 1., 1., 0., 0.])
In other words, it will work to normalize all values in the range of 0-255 (even though you said it's the only 2 values, it will work for everything in between as well, while keeping the ratios)
Sounds like a job for numpy.clip:
>>> a = np.array([0, 255, 0, 255, 255, 0])
>>> a.clip(max=1)
array([0, 1, 0, 1, 1, 0])
From the docs:
Given an interval, values outside the interval are clipped to the interval edges. For example, if an interval of [0, 1] is specified, values smaller than 0 become 0, and values larger than 1 become 1.
Because there are so many answers that give the right answer, I just wanted to test the different approaches and decide which is the best in terms of computational cost. I wrote the following code that creates a given data set, which is an image of 0 and 255 values, placed at random, and then I study the mean elapsed time for each proposed algorithm, varying the number of pixels of the image (note that I use the mean to reduce the noise in the measurement):
import numpy as np
import time

times1_all = []
times2_all = []
times3_all = []
for i in xrange(20):
    times1 = []
    times2 = []
    times3 = []
    xsizes = np.arange(100, 10000, 500)
    print len(xsizes)
    for xsize in xsizes:
        # create the dataset
        ysize = xsize
        xrand = np.random.randint(0, xsize, xsize)
        yrand = np.random.randint(0, ysize, xsize)
        a = np.zeros([xsize, ysize])
        a[xrand, yrand] = 255

        start = time.time()
        b = (a == 255).astype('int')
        stop = time.time()
        time1 = stop - start

        start = time.time()
        b = a / 255
        stop = time.time()
        time2 = stop - start

        start = time.time()
        b = a.clip(max=1)
        stop = time.time()
        time3 = stop - start
        print time3

        times1.append(time1)
        times2.append(time2)
        times3.append(time3)
        print 'Elapsed times --> (1)/(1)=%.2f, (2)/(1)=%.2f, (3)/(1)=%.2f' % (time1/time1, time2/time1, time3/time1)
    times1_all.append(times1)
    times2_all.append(times2)
    times3_all.append(times3)

times1_mean = np.mean(times1_all, axis=0)
times2_mean = np.mean(times2_all, axis=0)
times3_mean = np.mean(times3_all, axis=0)
The results of this test are shown in the image below, which shows the elapsed time of the different algorithms as a function of the number of image pixels (I only quote the side length in pixels on the x-axis). As expected, the bigger the image, the longer it takes to do the job. However, there are clear systematic differences amongst the algorithms. For any number of pixels, the algorithms proposed by @randomir and @Ofer perform better than the one proposed by @user1717828. According to this metric, @Ofer's and @randomir's answers are equivalent.
You can mask (either with >0 or ==255 or really anything else) then convert to type int:
>>> npa = np.array([0, 255, 0, 255, 255, 255, 0])
>>> npa
array([  0, 255,   0, 255, 255, 255,   0])
>>> (npa > 0).astype('int')
array([0, 1, 0, 1, 1, 1, 0])

Python: how to loop through a numpy array at C speed and store some positions

I am new to Python, numpy and OpenCV. I am playing with the first example of the Harris corner detector from here. My objective is to get an ordered list of all the corners. With this simple code I am able to get the X and Y coordinates of the corners and their values:
height, width, depth = img.shape
print height, width
for i in range(0, height):  # looping at Python speed
    for j in range(0, width):
        if dst[i, j] > 0.9 * dst.max():
            print i, j, dst[i, j]
However, it is dead slow. I don't know what this is called, but apparently with numpy one can loop through arrays at C speed and even assign values, for example:
img[0:height, 0:width, 0:depth] = 0
Can I loop through an array and assign the position of interesting values in another variable? I.e. can I use this on my code to make it faster?
You can get a mask of elements that would pass the IF conditional statement. Next up, if you need the indices that would pass the condition, use np.where or np.argwhere on the mask. For the valid dst elements, index dst with the same mask, thus using boolean indexing. The implementation would look something like this -
mask = dst > 0.9*dst.max()
out = np.column_stack((np.argwhere(mask),dst[mask]))
If you would like to get those three printed outputs separately, you could do -
I,J = np.where(mask)
valid_dst = dst[mask]
Finally, if you would like to edit the 3D array img based on the 2D mask, you could do -
img[mask] = 0
This way, you would change the corresponding elements in img across all channels in one go.
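A self-contained toy run of the above, with a small random array standing in for dst (the Harris response), so the shapes and indexing are easy to verify:

```python
import numpy as np

np.random.seed(0)
dst = np.random.rand(5, 5)                 # stand-in for the Harris response map

mask = dst > 0.9 * dst.max()               # boolean mask: the IF condition
out = np.column_stack((np.argwhere(mask),  # rows of (i, j) indices...
                       dst[mask]))         # ...paired with the passing values

I, J = np.where(mask)                      # the same indices as separate arrays
valid_dst = dst[mask]                      # the values that passed the threshold
```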
First of all, if you are using Python 2.X you should use xrange instead of range; this speeds things up. range in Python 3.X has the same implementation as xrange had in Python 2.X.
If you want to iterate over numpy arrays, why not use numpy's ndenumerate?
>>> # create a sample img array of size 2x2 with RGB values,
>>> # so the shape is (2, 2, 3)
>>> img = np.random.randint(0, 255, (2, 2, 3))
>>> img
array([[[228,  61, 154],
        [108,  25,  52]],
       [[237, 207, 127],
        [246, 223, 101]]])
>>> # iterate over the values, getting the index and value at the same time;
>>> # additionally, no nasty nested for loops
>>> for key, val in np.ndenumerate(img):
...     print key, val
(0, 0, 0) 228
(0, 0, 1) 61
(0, 0, 2) 154
(0, 1, 0) 108
(0, 1, 1) 25
(0, 1, 2) 52
(1, 0, 0) 237
(1, 0, 1) 207
(1, 0, 2) 127
(1, 1, 0) 246
(1, 1, 1) 223
(1, 1, 2) 101

Create new numpy array-scalar of flexible dtype

I have a working solution to my problem, but after trying different things I was astounded that I couldn't find a better one. It all boils down to creating a single flexible-dtype value for comparing and inserting into an array.
I have a 24-bit RGB image (so 8 bits for each of R, G, and B) as an array. It turns out that for some operations it is best to treat it as a 3D array of shape HxWx3, while at other times it is best to treat it as a structured array with the dtype([('R',uint8),('G',uint8),('B',uint8)]). One example is relabeling the image colors so that every unique color is given a different value. I do this with the following code:
# given im as an array of HxWx3, dtype=uint8
from numpy import dtype, uint8, unique, insert, searchsorted

rgb_dtype = dtype([('R', uint8), ('G', uint8), ('B', uint8)])
im = im.view(dtype=rgb_dtype).squeeze()  # need squeeze to remove the third dim
values = unique(im)
if tuple(values[0]) != (0, 0, 0):
    values = insert(values, 0, 0)  # value 0 needs to always be (0, 0, 0)
labels = searchsorted(values, im)
This works beautifully; however, I tried to make the if statement look nicer and just couldn't find a way. So let's look at the comparison first:
>>> values[0]
(0, 0, 0)
>>> values[0] == 0
False
>>> values[0] == (0, 0, 0)
False
>>> values[0] == array([0, 0, 0])
False
>>> values[0] == array([uint8(0), uint8(0), uint8(0)]).view(dtype=rgb_dtype)[0]
True
>>> values[0] == zeros((), dtype=rgb_dtype)
True
But what if you wanted something besides (0, 0, 0) or (1, 1, 1), and something that does not look ridiculous? It seems like there should be an easier way to construct this, like rgb_dtype.create((0,0,0)).
Next, with the insert statement, you need to insert 0 for (0, 0, 0). For other values this really does not work: for example, inserting (1, 2, 3) actually inserts (1, 1, 1), (2, 2, 2), (3, 3, 3).
So in the end, is there a nicer way? Thanks!
I could make insert() work for your case by doing the following (note that [0] is used instead of 0):
values = insert(values, [0], (1,2,3))
giving (for example):
array([(0, 1, 3), (0, 0, 0), (0, 0, 4), ..., (255, 255, 251), (255, 255, 253), (255, 255, 255)],
      dtype=[('R', 'u1'), ('G', 'u1'), ('B', 'u1')])
Regarding another way to do your if, you can do this:
str(values[0]) == str((0,0,0))
or, perhaps more robustly:
eval(str(values[0])) == eval(str((0, 0, 0)))
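For what it's worth, on current NumPy a less fragile alternative to the str()/eval() round-trip is to build the comparison value directly from a plain tuple: passing a tuple with a structured dtype gives a 0-d array, and indexing it with () extracts a scalar of that dtype. A sketch (rgb_dtype as defined in the question; equality of structured scalars with matching dtypes is assumed to behave as in recent NumPy releases):

```python
import numpy as np

rgb_dtype = np.dtype([('R', np.uint8), ('G', np.uint8), ('B', np.uint8)])
values = np.array([(0, 0, 0), (10, 20, 30)], dtype=rgb_dtype)

# a tuple maps onto one structured record, giving a 0-d array;
# indexing with () extracts the scalar, directly comparable to values[0]
black = np.array((0, 0, 0), dtype=rgb_dtype)[()]

print(values[0] == black)   # True
print(values[1] == black)   # False
```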
