GIMP Python line drawing very slow

pdb.gimp_paintbrush_default seems to be very slow (several seconds for 500 dots using a standard brush; lines are worse, obviously). Is this just the way it is? Is there a way to speed things up when drawing straight lines with the user-selected brush?
Python-Fu console code:
from random import randint

img = gimp.image_list()[0]
drw = pdb.gimp_image_active_drawable(img)
width = pdb.gimp_image_width(img)
height = pdb.gimp_image_height(img)
point_number = 500
while point_number > 0:
    x = randint(0, width)
    y = randint(0, height)
    pdb.gimp_paintbrush_default(drw, 2, [x, y])
    point_number -= 1

I've been working on something very similar and ran into this problem also. Here's one technique that I found that made my function about 5 times faster:
Create a temporary image
Copy the layer you are working with to the temporary image
Do the drawing on the temporary layer
Copy the temporary layer on top of the original layer
I believe this speeds stuff up because GIMP doesn't have to draw the edits to the screen, but I'm not 100% sure. Here's my function:
import random
from gimpfu import *  # provides gimp, pdb, BRUSH_GENERATED_CIRCLE and PAINT_CONSTANT

def splotches(img, layer, size, variability, quantity):
    gimp.context_push()
    img.undo_group_start()
    width = layer.width
    height = layer.height
    temp_img = pdb.gimp_image_new(width, height, img.base_type)
    temp_img.disable_undo()
    temp_layer = pdb.gimp_layer_new_from_drawable(layer, temp_img)
    temp_img.insert_layer(temp_layer)
    brush = pdb.gimp_brush_new("Splotch")
    pdb.gimp_brush_set_hardness(brush, 1.0)
    pdb.gimp_brush_set_shape(brush, BRUSH_GENERATED_CIRCLE)
    pdb.gimp_brush_set_spacing(brush, 1000)
    pdb.gimp_context_set_brush(brush)
    for i in range(quantity):
        random_size = size + random.randrange(variability)
        x = random.randrange(width)
        y = random.randrange(height)
        pdb.gimp_context_set_brush_size(random_size)
        pdb.gimp_paintbrush(temp_layer, 0.0, 2, [x, y, x, y], PAINT_CONSTANT, 0.0)
        gimp.progress_update(float(i) / float(quantity))
    temp_layer.flush()
    temp_layer.merge_shadow(True)
    # Delete the original layer and copy the new layer in its place
    new_layer = pdb.gimp_layer_new_from_drawable(temp_layer, img)
    name = layer.name
    img.remove_layer(layer)
    pdb.gimp_item_set_name(new_layer, name)
    img.insert_layer(new_layer)
    gimp.delete(temp_img)
    img.undo_group_end()
    gimp.context_pop()
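In case it helps, here is a hypothetical call from the Python-Fu console (the size/variability/quantity values are made up, and it assumes an image with an active layer is open):

img = gimp.image_list()[0]
splotches(img, img.active_layer, 20, 10, 500)  # size, variability, quantity
gimp.displays_flush()  # push the result to the canvas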

Related

How do I properly fill outside a curved shape in pycairo?

I am trying to fill the image area outside of a custom curved shape in Pycairo, but I'm struggling to achieve this. I have managed to get the result I need by stroking the shape with a large thickness and drawing multiple shapes of increasing size on top of each other, but this solution is inefficient (I care about efficiency because I will need to draw 1200 shapes quickly, which currently takes 1 minute). I think there might be a way to use a mask or a clip or something similar, but I can't find anything online that helps. If there is a way to specify that the stroke is drawn only outside the path, rather than on both sides, that could also be a solution.
Does anyone out there know of a better way to achieve this?
Here's the code I use to draw a curved shape. The calculate_curve_handles function just returns two curve handles between the two sides of the shape, based on the curve_point_1 and curve_point_2 offsets, and the polygon function returns the vertex locations for an N-sided polygon:
vertices = polygon(num_sides, shape_radius + (scale * (line_thickness - 20)),
                   rotation, [x + offset[0], y + offset[1]])
for i in range(len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    cr.move_to(start_point[0], start_point[1])
    if i == len(vertices) - 1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i + 1][0], vertices[i + 1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1],
                end_point[0], end_point[1])
cr.set_line_cap(cairo.LINE_CAP_ROUND)
cr.fill()
This is the desired result, achieved with many stroked objects layered on top of each other:
This is what I get when I try to use cr.fill() on the curved path:
OK, I just figured out that if I move the move_to() call outside of the for loop over the vertices, the shape is drawn properly.
Then, by setting the fill rule with cr.set_fill_rule(cairo.FILL_RULE_EVEN_ODD) and drawing a large rectangle behind the shape, I can get the desired effect in even less time.
cr.move_to(vertices[0][0], vertices[0][1])
for i in range(0, len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    if i == len(vertices) - 1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i + 1][0], vertices[i + 1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1],
                end_point[0], end_point[1])
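For illustration, here is a minimal self-contained sketch of that even-odd trick (the circle and the 400x400 surface are stand-ins, not my actual shape code): filling the shape's path together with an enclosing rectangle under FILL_RULE_EVEN_ODD paints only the area between them, i.e. everything outside the shape.

import math
import cairo

surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 400, 400)
cr = cairo.Context(surface)
cr.set_fill_rule(cairo.FILL_RULE_EVEN_ODD)
cr.rectangle(0, 0, 400, 400)           # large rectangle behind the shape
cr.arc(200, 200, 100, 0, 2 * math.pi)  # stand-in for the curved path
cr.set_source_rgb(0, 0, 0)
cr.fill()                              # fills the rectangle minus the shape
surface.write_to_png("outside_fill.png")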
I found a solution that works for now. Basically, for every side of the shape, I find a point that extends along the vector from the centre of the object through the vertex to well outside the drawing area. Then I fill each line segment as a separate shape:
import numpy as np

def calculate_bounds(start_point, end_point, centre_point):
    direction = np.subtract(start_point, centre_point)
    normalised_dir = direction / np.sqrt(np.sum(direction ** 2))
    bound_1 = start_point + normalised_dir * 5000
    direction = np.subtract(end_point, centre_point)
    normalised_dir = direction / np.sqrt(np.sum(direction ** 2))
    bound_2 = end_point + normalised_dir * 5000
    return bound_1, bound_2
Then the code for drawing the polygon is:
for i in range(0, len(vertices)):
    start_point = [vertices[i][0], vertices[i][1]]
    cr.move_to(start_point[0], start_point[1])
    if i == len(vertices) - 1:
        end_point = [vertices[0][0], vertices[0][1]]
    else:
        end_point = [vertices[i + 1][0], vertices[i + 1][1]]
    point_1, point_2 = calculate_curve_handles(start_point, end_point,
                                               curve_point_1_offset, curve_point_2_offset)
    cr.curve_to(point_1[0], point_1[1], point_2[0], point_2[1],
                end_point[0], end_point[1])
    bound_1, bound_2 = calculate_bounds(start_point, end_point,
                                        [x + offset[0], y + offset[1]])
    cr.line_to(bound_2[0], bound_2[1])
    cr.line_to(bound_1[0], bound_1[1])
    cr.fill_preserve()
    cr.stroke()

How to code up image stitching software for these 'simple' images?

TLDR:
Need help calculating the overlap region between 2 graphs.
So I'm trying to stitch these 2 images:
Since I know that the images I will be stitching definitely come from the same image, I feel that I should be able to code this up myself. Using libraries like OpenCV feels a little like overkill for me for this task.
My current idea is that I can simplify this task by doing the following steps for each image:
Load image using PIL
Convert image to black and white (PIL image mode “L”)
[Optional: crop images to overlapping region by inspection by eye]
Create vector row_sum, which is a sum of each row
[Optional: log row_sum, to reduce the size of values we're working with]
Plot row_sum.
This would reduce the (potentially) (3×2)-dimensional problem (three RGB channels for each pixel of the 2D image) to a (1×2)-dimensional problem (one grayscale value per pixel of the 2D image). Summing across the rows then reduces it to a 1D problem.
I used the following code to implement the above:
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

class Stitcher():
    def combine_2(self, img1, img2):
        # thr1, thr2 = self.get_cropped_bw(img1, 115, img2, 80)
        thr1, thr2 = self.get_cropped_bw(img1, 0, img2, 0)
        row_sum1 = np.log(thr1.sum(1))
        row_sum2 = np.log(thr2.sum(1))
        self.plot_4x4(thr1, thr2, row_sum1, row_sum2)

    def get_cropped_bw(self, img1, img1_keep_from, img2, img2_keep_till):
        im1 = Image.open(img1).convert("L")
        im2 = Image.open(img2).convert("L")
        data1 = (np.array(im1)[img1_keep_from:]
                 if img1_keep_from != 0 else np.array(im1))
        data2 = (np.array(im2)[:img2_keep_till]
                 if img2_keep_till != 0 else np.array(im2))
        return data1, data2

    def plot_4x4(self, thr1, thr2, row_sum1, row_sum2):
        fig, ax = plt.subplots(2, 2, sharey="row", constrained_layout=True)
        ax[0, 0].imshow(thr1, cmap="Greys")
        ax[0, 1].imshow(thr2, cmap="Greys")
        ax[1, 0].plot(row_sum1, "k.")
        ax[1, 1].plot(row_sum2, "r.")
        ax[1, 0].set(
            xlabel="Index Value",
            ylabel="Row Sum",
        )
        plt.show()

imgs = (r"combine\imgs\test_image_part_1.jpg",
        r"combine\imgs\test_image_part_2.jpg")
s = Stitcher()
s.combine_2(*imgs)
This gave me this graph:
(I've added in those yellow boxes, to indicate the overlap regions.)
This is the bit I'm stuck at. I want to find exactly:
the index value of the left-side of the yellow box for the 1st image and
the index value of the right-side of the yellow box for the 2nd image.
I define the overlap region as the longest range for which the end of the 1st graph 'matches' the start of the 2nd graph. When searching for the overlap region, what should I do if the row sum values aren't exactly the same (for example, if one is the other scaled by some factor)?
I feel like this is a problem where dot products could be used to measure the similarity between the 2 graphs, but I can't think of how to implement this.
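One rough, untested sketch of that dot-product idea (not from the original post): for each candidate overlap length, compare the tail of the first profile with the head of the second using a mean-subtracted, normalised dot product; because both vectors are normalised, a constant scale factor on either profile cancels out.

import numpy as np

def find_overlap(row_sum1, row_sum2, min_len=10):
    # score each candidate overlap length with a normalised dot product
    best_len, best_score = 0, -1.0
    for n in range(min_len, min(len(row_sum1), len(row_sum2)) + 1):
        a = row_sum1[-n:] - row_sum1[-n:].mean()  # end of graph 1
        b = row_sum2[:n] - row_sum2[:n].mean()    # start of graph 2
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue
        score = float(np.dot(a, b)) / denom       # scale-invariant similarity
        if score > best_score:
            best_len, best_score = n, score
    return best_len, best_score

Whether the best score or the longest length above some score threshold is the right criterion depends on how noisy the profiles are.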
I had a lot more fun with this than I expected. I wrote this using OpenCV, but that's just to load and show the images; everything else is done with numpy, so swapping this to PIL shouldn't be too difficult.
I'm using a brute-force matcher. I also wrote a random-start hillclimber that runs in much less time, but I can't guarantee it'll find the correct answer since the gradient space isn't smooth. I won't include it in my code since it's long and janky, but if you really need the time efficiency I can add it back in later.
I added a random crop and some salt and pepper noise to the images to test for robustness.
The brute-force matcher operates on the idea that we don't know which section of the two images overlap, so we need to convolve the smaller image over the larger image from left to right, top to bottom. This means our search space is:
horizontal = small_width + big_width
vertical = small_height + big_height
area = horizontal * vertical
This will grow very quickly with image size. I motivate the algorithm by giving it points for having a larger overlap, but it loses more points for having differences in color for the overlapped area.
Here are some pictures from an execution of this program
import cv2
import numpy as np
import random

# randomly snips edges
def randCrop(image, maxMargin):
    # margins start at 1: a margin of 0 would make the -0 slice empty
    c = [random.randint(1, maxMargin) for a in range(4)]
    return image[c[0]:-c[1], c[2]:-c[3]]

# adds noise to image
def saltPepper(image, minNoise, maxNoise):
    h, w = image.shape
    randNum = random.randint(minNoise, maxNoise)
    for a in range(randNum):
        x = random.randint(0, w-1)
        y = random.randint(0, h-1)
        image[y, x] = random.randint(0, 255)
    return image

# evaluate layout
def getScore(one, two):
    # do raw subtraction
    left = one - two
    right = two - one
    sub = np.minimum(left, right)
    return np.count_nonzero(sub)

# return 2d random position within range
def randPos(img, big_shape):
    th, tw = big_shape
    h, w = img.shape
    x = random.randint(0, tw - w)
    y = random.randint(0, th - h)
    return [x, y]

# overlays small image onto big image
def overlay(small, big, pos):
    # unpack
    h, w = small.shape
    x, y = pos
    # copy and place
    copy = big.copy()
    copy[y:y+h, x:x+w] = small
    return copy

# calculates overlap region
def overlap(one, two, pos_one, pos_two):
    # unpack
    h1, w1 = one.shape
    h2, w2 = two.shape
    x1, y1 = pos_one
    x2, y2 = pos_two
    # set edges
    l1 = x1
    l2 = x2
    r1 = x1 + w1
    r2 = x2 + w2
    t1 = y1
    t2 = y2
    b1 = y1 + h1
    b2 = y2 + h2
    # go
    left = max(l1, l2)
    right = min(r1, r2)
    top = max(t1, t2)
    bottom = min(b1, b2)
    return [left, right, top, bottom]

# wrapper for overlay + getScore
def fullScore(one, two, pos_one, pos_two, big_empty):
    # check positions
    x, y = pos_two
    h, w = two.shape
    th, tw = big_empty.shape
    if y+h > th or x+w > tw or x < 0 or y < 0:
        return -99999999
    # overlay
    temp_one = overlay(one, big_empty, pos_one)
    temp_two = overlay(two, big_empty, pos_two)
    # get overlap
    l, r, t, b = overlap(one, two, pos_one, pos_two)
    temp_one = temp_one[t:b, l:r]
    temp_two = temp_two[t:b, l:r]
    # score
    diff = getScore(temp_one, temp_two)
    score = (r-l) * (b-t)
    score -= diff*2
    return score

# do brute force
def bruteForce(one, two):
    # calculate search space
    # unpack size
    h, w = one.shape
    one_size = h*w
    h, w = two.shape
    two_size = h*w
    # small and big
    if one_size < two_size:
        small = one
        big = two
    else:
        small = two
        big = one
    # unpack size
    sh, sw = small.shape
    bh, bw = big.shape
    total_width = bw + sw * 2
    total_height = bh + sh * 2
    # set up empty images
    empty = np.zeros((total_height, total_width), np.uint8)
    # set global best
    best_score = -999999
    best_pos = None
    # start scrolling
    ybound = total_height - sh
    xbound = total_width - sw
    for y in range(ybound):
        print("y: " + str(y) + " || " + str(empty.shape))
        for x in range(xbound):
            # get score
            score = fullScore(big, small, [sw, sh], [x, y], empty)
            # show
            # prog = overlay(big, empty, [sw,sh])
            # prog = overlay(small, prog, [x,y])
            # cv2.imshow("prog", prog)
            # cv2.waitKey(1)
            # compare
            if score > best_score:
                best_score = score
                best_pos = [x, y]
                print("best_score: " + str(best_score))
    return best_pos, [sw, sh], small, big, empty

# do a step of hill climber
def hillStep(one, two, best_pos, big_empty, step):
    # make a step
    new_pos = best_pos[1][:]
    new_pos[0] += step[0]
    new_pos[1] += step[1]
    # get score
    return fullScore(one, two, best_pos[0], new_pos, big_empty), new_pos

# hunt around for good position
# let's do a random-start hillclimber
def randHill(one, two, shape):
    # set up empty images
    big_empty = np.zeros(shape, np.uint8)
    # set global best
    g_best_score = -999999
    g_best_pos = None
    # lets do 200 iterations
    iters = 200
    for a in range(iters):
        # progress check
        print(str(a) + " of " + str(iters))
        # start with random position
        h, w = two.shape[:2]
        pos_one = [w, h]
        pos_two = randPos(two, shape)
        # get score
        best_score = fullScore(one, two, pos_one, pos_two, big_empty)
        best_pos = [pos_one, pos_two]
        # hill climb (only on second image)
        while True:
            # end condition: no step improves score
            end_flag = True
            # 8-way
            for y in range(-1, 1+1):
                for x in range(-1, 1+1):
                    if x != 0 or y != 0:
                        # get score and update
                        score, new_pos = hillStep(one, two, best_pos, big_empty, [x, y])
                        if score > best_score:
                            best_score = score
                            best_pos[1] = new_pos[:]
                            end_flag = False
            # end
            if end_flag:
                break
            else:
                # show
                # prog = overlay(one, big_empty, best_pos[0])
                # prog = overlay(two, prog, best_pos[1])
                # cv2.imshow("prog", prog)
                # cv2.waitKey(1)
                pass
        # check for new global best
        if best_score > g_best_score:
            g_best_score = best_score
            g_best_pos = best_pos[:]
            print("top score: " + str(g_best_score))
    return g_best_score, g_best_pos

# load both images
top = cv2.imread("top.jpg")
bottom = cv2.imread("bottom.jpg")
top = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)
bottom = cv2.cvtColor(bottom, cv2.COLOR_BGR2GRAY)

# randomly crop
top = randCrop(top, 20)
bottom = randCrop(bottom, 20)

# randomly add noise
saltPepper(top, 200, 1000)
saltPepper(bottom, 200, 1000)

# set up max image (assume no overlap whatsoever)
tw = 0
th = 0
h, w = top.shape
tw += w
th += h
h, w = bottom.shape
tw += w*2
th += h*2

# do random-start hill climb
_, best_pos = randHill(top, bottom, (th, tw))

# show
empty = np.zeros((th, tw), np.uint8)
pos1, pos2 = best_pos
image = overlay(top, empty, pos1)
image = overlay(bottom, image, pos2)

# do brute force
# small_pos, big_pos, small, big, empty = bruteForce(top, bottom)
# image = overlay(big, empty, big_pos)
# image = overlay(small, image, small_pos)

# recolor overlap
h, w = empty.shape
color = np.zeros((h, w, 3), np.uint8)
l, r, t, b = overlap(top, bottom, pos1, pos2)
color[:, :, 0] = image
color[:, :, 1] = image
color[:, :, 2] = image
color[t:b, l:r, 0] += 100

# show images
cv2.imshow("top", top)
cv2.imshow("bottom", bottom)
cv2.imshow("overlayed", image)
cv2.imshow("Color", color)
cv2.waitKey(0)
Edit: I added in the random-start hillclimber

PyTorch: Vectorizing patch selection from a batch of images

Suppose I have a batch of images as a tensor, for example:
images = torch.zeros(64, 3, 1024, 1024)
Now, I want to select a patch from each of those images. All the patches are of the same size, but have different starting positions for each image in the batch.
size_x = 100
size_y = 100
# start positions must be integer tensors, since they are used as indices
start_x = torch.zeros(64, dtype=torch.long)
start_y = torch.zeros(64, dtype=torch.long)
I can achieve the desired result like this:
result = []
for i in range(images.shape[0]):
    result.append(images[i, :, start_x[i]:start_x[i]+size_x,
                         start_y[i]:start_y[i]+size_y])
result = torch.stack(result, dim=0)
The question is -- is it possible to do the same thing faster, without a loop? Perhaps there is some form of advanced indexing, or a PyTorch function that can do this?
You can use torch.take to get rid of the for loop, but first an array of indices should be created with this function:
import numpy as np

def convert_inds(img_a, img_b, patch_a, patch_b, start_x, start_y):
    all_patches = np.zeros((len(start_x), 3, patch_a, patch_b))
    patch_src = np.zeros((patch_a, patch_b))
    inds_src = np.arange(patch_b)
    patch_src[:] = inds_src
    for ind, info in enumerate(zip(start_x, start_y)):
        x, y = info
        if x + patch_a + 1 > img_a: return False
        if y + patch_b + 1 > img_b: return False
        start_ind = img_b * x + y
        end_ind = img_b * (x + patch_a - 1) + y
        col_src = np.linspace(start_ind, end_ind, patch_b)[:, None]
        all_patches[ind, :] = patch_src + col_src
    return all_patches.astype(int)  # np.int is deprecated; plain int works
As you can see, this function essentially creates the indices for each patch you want to slice. With this function, the problem can be easily solved by
size_x = 100
size_y = 100
start_x = torch.zeros(64)
start_y = torch.zeros(64)
images = torch.zeros(64, 3, 1024, 1024)
selected_inds = convert_inds(1024,1024,100,100,start_x,start_y)
selected_inds = torch.tensor(selected_inds)
res = torch.take(images,selected_inds)
UPDATE
OP's observation is correct: the approach above is not faster than a naive loop. To avoid rebuilding the indices every time, here is another solution based on unfold.
First, build a tensor of all the possible patches
# create all possible patches
all_patches = images.unfold(2,size_x,1).unfold(3,size_y,1)
Then, slice the desired patches from all_patches
img_ind = torch.arange(images.shape[0])
selected_patches = all_patches[img_ind,:,start_x,start_y,:,:]
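As a quick sanity check (a sketch with assumed shapes, not from the answer above), the unfold-based gather should match the loop version, provided start_x and start_y are integer (long) tensors within bounds:

import torch

images = torch.randn(4, 3, 64, 64)
size_x = size_y = 5
start_x = torch.randint(0, 64 - size_x + 1, (4,))
start_y = torch.randint(0, 64 - size_y + 1, (4,))

# loop version
looped = torch.stack([images[i, :, start_x[i]:start_x[i]+size_x,
                             start_y[i]:start_y[i]+size_y]
                      for i in range(images.shape[0])])

# unfold version: all_patches[n, c, i, j] is the (size_x, size_y) patch
# whose top-left corner is at (i, j)
all_patches = images.unfold(2, size_x, 1).unfold(3, size_y, 1)
gathered = all_patches[torch.arange(images.shape[0]), :, start_x, start_y, :, :]

print(torch.equal(looped, gathered))  # expected: True

Note that unfold returns a view of every possible patch, so calling .contiguous() or .clone() on all_patches would materialise far more memory than the selected patches need.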

Creating an image from a dictionary using PIL

I have a dictionary that maps coordinate tuples (in the range (0, 0) to (199, 199)) to grayscale values (integers between 0 and 255). Is there a good way to create a PIL Image that has the specified values at the specified coordinates? I'd prefer a solution that only uses PIL to one that uses scipy.
You can try image.putpixel() to change the color of a pixel at a particular position. Example code -
from PIL import Image
from random import randint

d = {(x, y): randint(0, 255) for x in range(200) for y in range(200)}
im = Image.new('L', (200, 200))
for i in d:
    im.putpixel(i, d[i])
im.save('blah.png')
It gave me a result like -
You could do it with putpixel(), but that could potentially involve tens of thousands of calls; how much this matters depends on how many coordinate tuples are defined in the dictionary. I've included the method shown in each of the current answers for comparison (including my own before any benchmarking was added, though just now I made a small change to how it initializes the data buffer, which measurably sped it up).
To make a level playing field, for testing purposes the input dictionary randomly selects only ½ of the possible pixels in the image to define and allows the rest to be set to a default background color. Anand S Kumar's answer currently doesn't do the latter, but the slightly modified version shown below does.
All produce the same image from the data.
from __future__ import print_function
import sys
from textwrap import dedent
import timeit

N = 100  # number of executions of each algorithm
R = 3    # number of repetitions of executions

# common setup for all algorithms - is not included in algorithm timing
setup = dedent("""
    from random import randint, sample, seed
    from PIL import Image
    seed(42)
    background = 0  # default color of pixels not defined in dictionary
    width, height = 200, 200
    # create test dict of input data defining half of the pixel coords in image
    coords = sample([(x,y) for x in xrange(width) for y in xrange(height)],
                    width * height // 2)
    d = {coord: randint(0, 255) for coord in coords}
""")

algorithms = {
    "Anand S Kumar": dedent("""
        im = Image.new('L', (width, height), color=background)  # set bgrd
        for i in d:
            im.putpixel(i, d[i])
    """),

    "martineau": dedent("""
        data = bytearray([background] * width * height)
        for (x, y), v in d.iteritems():
            data[x + y*width] = v
        im = Image.frombytes('L', (width, height), str(data))
    """),

    "PM 2Ring": dedent("""
        data = [background] * width * height
        for i in d:
            x, y = i
            data[x + y * width] = d[i]
        im = Image.new('L', (width, height))
        im.putdata(data)
    """),
}

# execute and time algorithms, collecting results
timings = [
    (label,
     min(timeit.repeat(algorithms[label], setup=setup, repeat=R, number=N)),
    ) for label in algorithms
]

print('fastest to slowest execution speeds (Python {}.{}.{})\n'.format(
          *sys.version_info[:3]),
      ' ({:,d} executions, best of {:d} repetitions)\n'.format(N, R))

longest = max(len(timing[0]) for timing in timings)  # length of longest label
ranked = sorted(timings, key=lambda t: t[1])  # ascending sort by execution time
fastest = ranked[0][1]
for timing in ranked:
    print("{:>{width}} : {:9.6f} secs, rel speed {:4.2f}x, {:6.2f}% slower".
          format(timing[0], timing[1], round(timing[1]/fastest, 2),
                 round((timing[1]/fastest - 1) * 100, 2), width=longest))
Output:
fastest to slowest execution speeds (Python 2.7.10)
(100 executions, best of 3 repetitions)
martineau : 0.255203 secs, rel speed 1.00x, 0.00% slower
PM 2Ring : 0.307024 secs, rel speed 1.20x, 20.31% slower
Anand S Kumar : 1.835997 secs, rel speed 7.19x, 619.43% slower
As martineau suggests, putpixel() is OK when you're modifying a few random pixels, but it's not so efficient for building whole images. My approach is similar to his, except that I use a list of ints and .putdata(). Here's some code to test these 3 different approaches:
from __future__ import print_function  # needed on Python 2 for the print() calls below
from PIL import Image
from random import seed, randint

width, height = 200, 200
background = 0
seed(42)
d = dict(((x, y), randint(0, 255)) for x in range(width) for y in range(height))

algorithm = 2
print('Algorithm', algorithm)
if algorithm == 0:
    im = Image.new('L', (width, height))
    for i in d:
        im.putpixel(i, d[i])
elif algorithm == 1:
    buff = bytearray((background for _ in xrange(width * height)))
    for (x, y), v in d.items():
        buff[y*width + x] = v
    im = Image.frombytes('L', (width, height), str(buff))
elif algorithm == 2:
    data = [background] * width * height
    for i in d:
        x, y = i
        data[x + y * width] = d[i]
    im = Image.new('L', (width, height))
    im.putdata(data)

#im.show()
fname = 'qrand%d.png' % algorithm
im.save(fname)
print(fname, 'saved')
Here are typical timings on my 2GHz machine running Python 2.6.6
$ time ./qtest.py
Algorithm 0
qrand0.png saved
real 0m0.926s
user 0m0.768s
sys 0m0.040s
$ time ./qtest.py
Algorithm 1
qrand1.png saved
real 0m0.733s
user 0m0.548s
sys 0m0.020s
$ time ./qtest.py
Algorithm 2
qrand2.png saved
real 0m0.638s
user 0m0.520s
sys 0m0.032s

Backprop implementation issue

What I am supposed to do: I have a black and white image (100x100 px):
I am supposed to train a backpropagation neural network with this image. The inputs are the x, y coordinates of the image (from 0 to 99) and the output is either 1 (white) or 0 (black).
Once the network has learned, I would like it to reproduce the image based on its weights and get the closest possible image to the original.
Here is my backprop implementation:
import os
import math
import Image  # old-style PIL import
import random
from random import sample

#------------------------------ class definitions

class Weight:
    def __init__(self, fromNeuron, toNeuron):
        self.value = random.uniform(-0.5, 0.5)
        self.fromNeuron = fromNeuron
        self.toNeuron = toNeuron
        fromNeuron.outputWeights.append(self)
        toNeuron.inputWeights.append(self)
        self.delta = 0.0 # delta value, this will accumulate and after each training cycle used to adjust the weight value

    def calculateDelta(self, network):
        self.delta += self.fromNeuron.value * self.toNeuron.error

class Neuron:
    def __init__(self):
        self.value = 0.0      # the output
        self.idealValue = 0.0 # the ideal output
        self.error = 0.0      # error between output and ideal output
        self.inputWeights = []
        self.outputWeights = []

    def activate(self, network):
        x = 0.0
        for weight in self.inputWeights:
            x += weight.value * weight.fromNeuron.value
        # sigmoid function
        if x < -320:
            self.value = 0
        elif x > 320:
            self.value = 1
        else:
            self.value = 1 / (1 + math.exp(-x))

class Layer:
    def __init__(self, neurons):
        self.neurons = neurons

    def activate(self, network):
        for neuron in self.neurons:
            neuron.activate(network)

class Network:
    def __init__(self, layers, learningRate):
        self.layers = layers
        self.learningRate = learningRate # the rate at which the network learns
        self.weights = []
        for hiddenNeuron in self.layers[1].neurons:
            for inputNeuron in self.layers[0].neurons:
                self.weights.append(Weight(inputNeuron, hiddenNeuron))
            for outputNeuron in self.layers[2].neurons:
                self.weights.append(Weight(hiddenNeuron, outputNeuron))

    def setInputs(self, inputs):
        self.layers[0].neurons[0].value = float(inputs[0])
        self.layers[0].neurons[1].value = float(inputs[1])

    def setExpectedOutputs(self, expectedOutputs):
        self.layers[2].neurons[0].idealValue = expectedOutputs[0]

    def calculateOutputs(self, expectedOutputs):
        self.setExpectedOutputs(expectedOutputs)
        self.layers[1].activate(self) # activation function for hidden layer
        self.layers[2].activate(self) # activation function for output layer

    def calculateOutputErrors(self):
        for neuron in self.layers[2].neurons:
            neuron.error = (neuron.idealValue - neuron.value) * neuron.value * (1 - neuron.value)

    def calculateHiddenErrors(self):
        for neuron in self.layers[1].neurons:
            error = 0.0
            for weight in neuron.outputWeights:
                error += weight.toNeuron.error * weight.value
            neuron.error = error * neuron.value * (1 - neuron.value)

    def calculateDeltas(self):
        for weight in self.weights:
            weight.calculateDelta(self)

    def train(self, inputs, expectedOutputs):
        self.setInputs(inputs)
        self.calculateOutputs(expectedOutputs)
        self.calculateOutputErrors()
        self.calculateHiddenErrors()
        self.calculateDeltas()

    def learn(self):
        for weight in self.weights:
            weight.value += self.learningRate * weight.delta

    def calculateSingleOutput(self, inputs):
        self.setInputs(inputs)
        self.layers[1].activate(self)
        self.layers[2].activate(self)
        #return round(self.layers[2].neurons[0].value, 0)
        return self.layers[2].neurons[0].value

#------------------------------ initialize objects etc

inputLayer = Layer([Neuron() for n in range(2)])
hiddenLayer = Layer([Neuron() for n in range(10)])
outputLayer = Layer([Neuron() for n in range(1)])
learningRate = 0.4
network = Network([inputLayer, hiddenLayer, outputLayer], learningRate)

# let's get the training set
os.chdir("D:/stuff")
image = Image.open("backprop-input.gif")
pixels = image.load()
bbox = image.getbbox()
width = 5  #bbox[2] # image width
height = 5 #bbox[3] # image height

trainingInputs = []
trainingOutputs = []
b = w = 0
for x in range(0, width):
    for y in range(0, height):
        if (0, 0, 0, 255) == pixels[x, y]:
            color = 0
            b += 1
        elif (255, 255, 255, 255) == pixels[x, y]:
            color = 1
            w += 1
        trainingInputs.append([float(x), float(y)])
        trainingOutputs.append([float(color)])
print "\nOriginal image ... Black:"+str(b)+" White:"+str(w)+"\n"

#------------------------------ let's train

for i in range(500):
    for j in range(len(trainingOutputs)):
        network.train(trainingInputs[j], trainingOutputs[j])
    network.learn()
    for w in network.weights:
        w.delta = 0.0

#------------------------------ let's check

b = w = 0
for x in range(0, width):
    for y in range(0, height):
        out = network.calculateSingleOutput([float(x), float(y)])
        if 0.0 == round(out):
            color = (0, 0, 0, 255)
            b += 1
        elif 1.0 == round(out):
            color = (255, 255, 255, 255)
            w += 1
        pixels[x, y] = color
        #print out
print "\nAfter learning the network thinks ... Black:"+str(b)+" White:"+str(w)+"\n"
Obviously, there is some issue with my implementation. The above code returns:
Original image ... Black:21 White:4
After learning the network thinks ... Black:25 White:0
It does the same thing if I use a larger training set (I'm testing with just 25 pixels from the image above for now): it claims all pixels should be black after learning.
Now, if I use a manual training set like this instead:
trainingInputs = [
    [0.0, 0.0],
    [1.0, 0.0],
    [2.0, 0.0],
    [0.0, 1.0],
    [1.0, 1.0],
    [2.0, 1.0],
    [0.0, 2.0],
    [1.0, 2.0],
    [2.0, 2.0]
]
trainingOutputs = [
    [0.0],
    [1.0],
    [1.0],
    [0.0],
    [1.0],
    [0.0],
    [0.0],
    [0.0],
    [1.0]
]
#------------------------------ let's train
for i in range(500):
    for j in range(len(trainingOutputs)):
        network.train(trainingInputs[j], trainingOutputs[j])
    network.learn()
    for w in network.weights:
        w.delta = 0.0

#------------------------------ let's check
for inputs in trainingInputs:
    print network.calculateSingleOutput(inputs)
The output is for example:
0.0330125791296 # this should be 0, OK
0.953539182136 # this should be 1, OK
0.971854575477 # this should be 1, OK
0.00046146137467 # this should be 0, OK
0.896699762781 # this should be 1, OK
0.112909223162 # this should be 0, OK
0.00034058462280 # this should be 0, OK
0.0929886299643 # this should be 0, OK
0.940489647869 # this should be 1, OK
In other words, the network guessed all pixels right (both black and white). Why does it say all pixels should be black when I use actual pixels from the image instead of the hard-coded training set above?
I tried changing the number of neurons in the hidden layer (up to 100 neurons) with no success.
This is a homework.
This is also a continuation of my previous question about backprop.
It's been a while, but I did get my degree in this stuff, so I think hopefully some of it has stuck.
From what I can tell, you're overloading your middle-layer neurons with the input set. That is, your input set consists of 10,000 discrete input values (100 px x 100 px), and you're attempting to encode those 10,000 values into 10 neurons. This level of encoding is hard (I suspect it's possible, but certainly hard); at the least, you'd need a LOT of training (more than 500 runs) to get it to reproduce reasonably. Even with 100 neurons in the middle layer, you're looking at a relatively dense compression level (100 pixels to 1 neuron).
As to what to do about these problems: well, that's tricky. You can increase your number of middle neurons dramatically, and you'll get a reasonable effect, but of course it will take a long time to train. However, I think there might be a different solution: if possible, consider using polar coordinates instead of cartesian coordinates for the input. Quick eyeballing of the input pattern indicates a high level of symmetry, and effectively you'd be looking at a linear pattern with a repeated, predictable deformation along the angular coordinate, which seems like it would encode nicely in a small number of middle-layer neurons.
This stuff is tricky; going for a general solution for pattern encoding (as your original solution does) is very complex, and can usually (even with large numbers of middle layer neurons) require a lot of training passes; on the other hand, some advance heuristic task breakdown and a little bit of problem redefinition (i.e. advance converting from cartesian to polar coordinates) can give good solutions for well defined problem sets. Therein, of course, is the perpetual rub; general solutions are hard to come by, but slightly more specified solutions can be quite nice indeed.
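To make the polar-coordinate idea concrete, here is a minimal sketch (an illustration, assuming a 100x100 image centred at its middle) of converting the inputs before feeding them to the network:

import math

def to_polar(x, y, width=100, height=100):
    # shift the origin to the image centre
    dx = x - (width - 1) / 2.0
    dy = y - (height - 1) / 2.0
    r = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    # scale both inputs to roughly [0, 1] so the sigmoid units don't saturate
    r_max = math.hypot(width / 2.0, height / 2.0)
    return [r / r_max, (theta + math.pi) / (2 * math.pi)]

# e.g. trainingInputs.append(to_polar(x, y)) instead of [float(x), float(y)]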
Interesting stuff, in any event!
