How do I concatenate two matrices in Python OpenCV?

How do I concatenate two matrices into one matrix? The resulting matrix should have the same height as the two input matrices, and its width should equal the sum of the widths of the two input matrices.
I am looking for a pre-existing method that will perform the equivalent of this code:
def concatenate(mat0, mat1):
    # Assume that mat0 and mat1 have the same height
    res = cv.CreateMat(mat0.height, mat0.width + mat1.width, mat0.type)
    for x in xrange(res.height):
        for y in xrange(mat0.width):
            cv.Set2D(res, x, y, mat0[x, y])
        for y in xrange(mat1.width):
            cv.Set2D(res, x, y + mat0.width, mat1[x, y])
    return res

If you are using OpenCV, you already have NumPy, so you can use the NumPy function np.hstack((img1, img2)) to do this.
For example:
import cv2
import numpy as np
# Load two images of same size
img1 = cv2.imread('img1.jpg')
img2 = cv2.imread('img2.jpg')
both = np.hstack((img1,img2))
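np.hstack requires the heights to match (np.vstack, the vertical counterpart, requires matching widths). A quick sketch with an explicit guard:
# minimal sketch: check the height match before stacking
assert img1.shape[0] == img2.shape[0], "np.hstack needs equal heights"
both = np.hstack((img1, img2))      # side by side
stacked = np.vstack((img1, img2))   # one above the other (needs equal widths)
cv2.imwrite('both.jpg', both)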

You should use the NumPy-based OpenCV interface: the legacy API uses cvmat, but NumPy arrays are really easy to work with.
As suggested by @abid-rahman-k, you can use hstack (which I didn't know about), but I had used this:
h1, w1 = img.shape[:2]
h2, w2 = img1.shape[:2]
nWidth = w1+w2
nHeight = max(h1, h2)
hdif = (h1 - h2) // 2  # integer offset needed for slicing
newimg = np.zeros((nHeight, nWidth, 3), np.uint8)
newimg[hdif:hdif+h2, :w2] = img1
newimg[:h1, w2:w1+w2] = img
But if you want to work with legacy code, this should help.
Let's assume that the height of img0 is greater than the height of image:
nW = img0.width+image.width
nH = img0.height
newCanvas = cv.CreateImage((nW,nH), cv.IPL_DEPTH_8U, 3)
cv.SetZero(newCanvas)
yc = (img0.height-image.height)/2
cv.SetImageROI(newCanvas,(0,yc,image.width,image.height))
cv.Copy(image, newCanvas)
cv.ResetImageROI(newCanvas)
cv.SetImageROI(newCanvas,(image.width,0,img0.width,img0.height))
cv.Copy(img0,newCanvas)
cv.ResetImageROI(newCanvas)

OpenCV has in-built functions for concatenating images vertically/horizontally:
cv2.vconcat()
cv2.hconcat()
Note: While concatenating, the images must have matching dimensions (the same height for hconcat, the same width for vconcat) and the same type, or else you will come across an error message similar to: error: (-215:Assertion failed)....
Code:
img = cv2.imread('flower.jpg', 1)
# concatenate images vertically
vertical_concat = cv2.vconcat([img, img])
# concatenate images horizontally
horizontal_concat = cv2.hconcat([img, img])
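If the images do not share a height, one workaround (a sketch, not a built-in OpenCV helper) is to resize them to a common height first:
def hconcat_resize(images, interpolation=cv2.INTER_AREA):
    # resize every image to the smallest height, preserving aspect ratio,
    # so cv2.hconcat's equal-height requirement is satisfied
    h_min = min(im.shape[0] for im in images)
    resized = [cv2.resize(im, (int(im.shape[1] * h_min / im.shape[0]), h_min),
                          interpolation=interpolation)
               for im in images]
    return cv2.hconcat(resized)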

I know this question is old, but I stumbled across it because I was looking to concatenate arrays that are two-dimensional (not just concatenate them in one dimension).
np.hstack will not do this.
Assuming you have two 640x480 images that are two-dimensional (i.e. single-channel), use dstack. Note the cv2.IMREAD_GRAYSCALE flag, so imread returns a 2-D array:
a = cv2.imread('imgA.jpg', cv2.IMREAD_GRAYSCALE)
b = cv2.imread('imgB.jpg', cv2.IMREAD_GRAYSCALE)
a.shape # prints (480,640)
b.shape # prints (480,640)
imgBoth = np.dstack((a,b))
imgBoth.shape # prints (480,640,2)
imgBothH = np.hstack((a,b))
imgBothH.shape # prints (480,1280)
# = not what I wanted, first dimension not preserved
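If you later need the two frames back from the depth-stacked array, np.dsplit undoes the dstack (a short sketch):
a_back, b_back = np.dsplit(imgBoth, 2)
a_back.shape  # (480, 640, 1); np.squeeze drops the trailing axis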

Related

Storing pixel values of a greyscale video, averaging them and then showing the resulting video

n = 3
array = np.ones((n, n)) / (n * n)
n = array.shape[0] * array.shape[1]
# cap (the VideoCapture), borderType and k are defined earlier in the script
while True:
    ret, frame = cap.read()
    if ret is True:
        print("newframe")
        gframe = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        dst = cv2.copyMakeBorder(gframe, 1, 1, 1, 1, borderType, None, None)
        blur = cv2.blur(dst, (3, 3))
        if k == 1:
            lastframe = gframe
            curframe = gframe
            nextframe = gframe
            newFrame = gframe
            k = 0
        else:
            lf = ndimage.convolve(lastframe, array, mode='constant', cval=0.0)
            cf = ndimage.convolve(curframe, array, mode='constant', cval=0.0)
            nf = ndimage.convolve(nextframe, array, mode='constant', cval=0.0)
            lastframe = curframe
            curframe = nextframe
            nextframe = gframe
            b = np.zeros((3, 528, 720))
            b[0] = lf
            b[1] = cf
            b[2] = nf
            result = np.mean(b, axis=0)
        cv2.imshow('frame', result)
        cv2.imshow('frame2', gframe)
I am trying to add all the pixel values in each 3x3 neighbourhood and then average them. I need to do that for every pixel and every frame, replacing the primary pixel with the averaged one. However, the way I am trying to do it is really slow and not very accurate.
This sounds like a convolution.
import numpy as np
from scipy import ndimage
a = np.random.random((5, 5))
a
[[0.14742615 0.83548453 0.67433445 0.59162829 0.21160044]
[0.1700598 0.89074466 0.84155171 0.65092969 0.3842437 ]
[0.22662423 0.2266929 0.47757456 0.34480112 0.06261333]
[0.89402116 0.00101947 0.90503461 0.93112109 0.44817247]
[0.21788789 0.3338606 0.07323461 0.28944439 0.91217591]]
Convolution with a 3x3 window:
n = 3
k = np.ones((n, n)) / (n * n)
n = k.shape[0] * k.shape[1]
b = ndimage.convolve(a, k, mode='constant', cval=0.0)
b
[[0.22707946 0.39551126 0.49829704 0.3726987 0.2042669 ]
[0.27744803 0.49894366 0.61486021 0.47103081 0.24953517]
[0.26768469 0.51481368 0.58549664 0.56067136 0.31354238]
[0.21112292 0.37288334 0.39808704 0.4937969 0.33203648]
[0.16075435 0.26945093 0.28152386 0.39546479 0.28676821]]
Now you just have to do it for the current frame, and the two prior frames.
-------- EDIT: For three frames -----------
For 3D you could write a convolution function like in this post, but it's quite complex as it uses FFTs.
If you just want to average across three frames, you could do:
f1 = np.random.random((5, 5)) # Frame1
f2 = np.random.random((5, 5)) # Frame2
f3 = np.random.random((5, 5)) # Frame3
n = 3
k = np.ones((n, n)) / (n * n)
n = k.shape[0] * k.shape[1]
b0 = ndimage.convolve(f1, k, mode='constant', cval=0.0)
b1 = ndimage.convolve(f2, k, mode='constant', cval=0.0)
b2 = ndimage.convolve(f3, k, mode='constant', cval=0.0)
# Create a 3D Matrix, with each fame placed along the first dimension
b = np.zeros((3, 5, 5))
b[0] = b0
b[1] = b1
b[2] = b2
# Take the average across the first dimension (across frames)
result = np.mean(b, axis=0)
There probably is a more elegant solution than this, but it gets the job done.
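A slightly more compact variant of the same averaging (a sketch using np.stack instead of preallocating b):
# stack the three blurred frames along a new first axis, then average it away
result = np.mean(np.stack([b0, b1, b2]), axis=0)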
-------- EDIT: For Movies -----------
Based on all the questions in the comments I've decided to attempt to add some more code to help with implementation.
Firstly I'm starting out with these 7 consecutive stills from a movie:
I have not verified that the following code is bug proof or actually returns the correct result.
import cv2
import numpy as np
from scipy import ndimage
# this is a function to do the previous code
def mean_frames(frames, kernel):
    b = np.zeros(frames.shape)
    for i in range(frames.shape[0]):
        b[i] = ndimage.convolve(frames[i], kernel, mode='constant', cval=0.0)
    # average across the frame dimension; np.mean already divides by the
    # frame count, so no extra division is needed here
    b = np.mean(b, axis=0)
    return b
mean_N = 3  # frames to average
# read in 1 file to get dimensions (root is the folder prefix for the stills)
im = cv2.imread(f'{root}1.png', cv2.IMREAD_GRAYSCALE)
# set up a numpy matrix that will hold mean_N frames at a time
frames = np.zeros((mean_N, im.shape[0], im.shape[1]))
avg_frames = []  # list to store our averaged frames
count = 0  # counter to position frames in 1st dim of 3D matrix for avg
k = np.ones((3, 3)) / (3 * 3)  # kernel for 2D convolution
for j in range(1, 8):  # 7 images
    file_name = root + str(j) + '.png'
    im = cv2.imread(file_name, cv2.IMREAD_GRAYSCALE)
    frames[count, ::] = im  # store in 3D matrix
    # if loaded more than min req. for avg, we average
    if j >= mean_N:
        # average and store to list
        avg_frames.append(mean_frames(frames, k))
    # if the count is mean_N - 1, that means we need to replace
    # the 0th matrix in frames so that we are doing a 'moving avg'
    if count == (mean_N - 1):
        count = 0
    else:
        count += 1  # increase position in 0th dim for 3D matrix storage
# output averaged frames
for i, f in enumerate(avg_frames):
    cv2.imwrite(f'{path}output{i}.jpg', f)
Then looking at the folder, there are 5 files (as expected for a moving average of 3 frames over 7 stills).
Looking at before and after (image 3 versus averaged image #1): the result is not only in grayscale (as expected) but also seems quite dark. Perhaps some brightening would make things look better/more apparent.
Your question is very interesting. I see that you use many loops in this function, so let's analyze the processing, first for a single frame.
You want to add all the pixel values in a 3x3 neighbourhood, so image interpolation suits this case well. In OpenCV we use resize() to interpolate pixels in an image, and INTER_NEAREST is the best fit for this situation (its formula appeared here as an image in the original post).
Now you have the enlarged image. Then you want to replace the primary pixel with the averaged one, for every pixel and every frame, and for that average filtering is the better solution: the filter visits every pixel.
A minimal example:
Interpolation
img = cv2.resize(img, (img.shape[1] * 3, img.shape[0] * 3), interpolation=cv2.INTER_NEAREST)
Filter
img = cv2.blur(img, (3, 3))
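A self-contained sketch of that resize-then-blur pipeline on a synthetic frame (the 3x factor and the 120x160 size are arbitrary assumptions):
import cv2
import numpy as np
frame = np.random.randint(0, 256, (120, 160), dtype=np.uint8)  # fake grayscale frame
# enlarge 3x with nearest-neighbour interpolation
up = cv2.resize(frame, (frame.shape[1] * 3, frame.shape[0] * 3),
                interpolation=cv2.INTER_NEAREST)
# then average every 3x3 neighbourhood
smoothed = cv2.blur(up, (3, 3))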

List loses shape when appending a NumPy array

I'm using PIL to load images and then transform them into NumPy arrays. Then I have to create a new image based on a list of images, so I append all the arrays to a list and then transform the list back into an array. The shape of the list of images should have 4 dimensions (n_images, height, width, rgb_channels). I'm using this code:
def gallery(array, ncols=4):
    nindex, height, width, intensity = array.shape
    nrows = nindex // ncols
    # want result.shape = (height*nrows, width*ncols, intensity)
    result = (array.reshape(nrows, ncols, height, width, intensity)
              .swapaxes(1, 2)
              .reshape(height*nrows, width*ncols, intensity))
    return result

def make_array(dim_x):
    for i in range(dim_x):
        print('series', i)
        series = []
        for j in range(TIME_STEP-1):
            print('photo', j)
            aux = np.asarray(Image.open(dirpath+'/images/pre_images/series_{0}_Xquakemap_{1}.jpg'.format(i, j)).convert('RGB'))
            print(np.shape(aux))
            series.append(aux)
            print(np.shape(series))
        im = Image.fromarray(gallery(np.array(series)))
        im.save(dirpath+'/images/gallery/series_{0}_Xquakemap.jpg'.format(i))
        im_shape = (im.size)

make_array(n_photos)
# n_photos is the total number of photos in dirpath
The problem is that when the append to the series list happens, the shape of the image (the NumPy array added) gets lost, so reshaping the array in the gallery function fails. A snippet of the output for the code above:
...
series 2
photo 0
(585, 619, 3)
(1, 585, 619, 3)
photo 1
(587, 621, 3)
(2,)
photo 2
(587, 621, 3)
(3,)
photo 3
(587, 621, 3)
(4,)
...
As you can see, when appending the second photo the list loses a dimension. This is weird because the code works for the first two iterations, which use essentially the same images. I tried using np.stack() but the error remains.
I also find this issue on Github but I think it doesn't apply to this case even if the behavior is similar.
Working on Ubuntu 18, Python 3.7.3 and Numpy 1.16.2.
Edit: added what @kwinkunks asked.
In the second function, I think you need to move series = [] to before the outer loop.
Here's my reproduction of the problem:
import numpy as np
from PIL import Image

TIME_STEP = 3

def gallery(array, ncols=4):
    """Stitch images together."""
    nindex, height, width, intensity = array.shape
    nrows = nindex // ncols
    result = array.reshape(nrows, ncols, height, width, intensity)
    result = result.swapaxes(1, 2)
    result = result.reshape(height*nrows, width*ncols, intensity)
    return result

def make_array(dim_x):
    """Make an image from a list of arrays."""
    series = []  # <<<<<<<<<<< This is the line you need to check.
    for i in range(dim_x):
        for j in range(TIME_STEP - 1):
            aux = np.ones((100, 100, 3)) * np.random.randint(0, 256, 3)
            series.append(aux.astype(np.uint8))
    im = Image.fromarray(gallery(np.array(series)))
    return im

make_array(4)
This results in a stitched gallery of the random tiles.

How to combine dimensions in numpy array?

I'm using OpenCV to read images into numpy.array, and they have the following shape.
import cv2
import os
import numpy

def readImages(path):
    imgs = []
    for file in os.listdir(path):
        if file.endswith('.png'):
            img = cv2.imread(file)
            imgs.append(img)
    imgs = numpy.array(imgs)
    return (imgs)
imgs = readImages(...)
print imgs.shape # (100, 718, 686, 3)
Each image has 718x686 pixels; there are 100 images.
I don't want to work on 718x686 separately; I'd like to combine the pixel dimensions into a single one. That is, the shape should look like (100, 492548, 3). Is there any way, either in OpenCV (or any other library) or in NumPy, that allows me to do that?
Without modifying your reading function:
imgs = readImages(...)
print imgs.shape # (100, 718, 686, 3)
# flatten axes -2 and -3, using -1 to autocalculate the size
pixel_lists = imgs.reshape(imgs.shape[:-3] + (-1, 3))
print pixel_lists.shape # (100, 492548, 3)
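If you later need the original layout back, reshape again (a sketch assuming the 718x686 dimensions are known):
imgs_restored = pixel_lists.reshape(pixel_lists.shape[:-2] + (718, 686, 3))
# imgs_restored.shape == (100, 718, 686, 3)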
In case anyone wants it, here's a general way of doing this:
import functools
import numpy as np

def combine_dims(a, i=0, n=1):
    """
    Combines dimensions of numpy array `a`,
    starting at index `i`,
    folding the next `n` dimensions into it
    """
    s = list(a.shape)
    combined = functools.reduce(lambda x, y: x * y, s[i:i+n+1])
    return np.reshape(a, s[:i] + [combined] + s[i+n+1:])
With this function you could use it like this:
imgs = combine_dims(imgs, 1) # combines dimension 1 and 2
# imgs.shape = (100, 718*686, 3)
def combine_dims(a, start=0, count=2):
    """ Reshapes numpy array a by combining count dimensions,
    starting at dimension index start """
    s = a.shape
    return numpy.reshape(a, s[:start] + (-1,) + s[start+count:])
This function does what you need in a more general way.
imgs = combine_dims(imgs, 1) # combines dimension 1 and 2
# imgs.shape == (100, 718*686, 3)
It works by using numpy.reshape, which turns an array of one shape into an array with the same data viewed as another shape. The target shape is just the initial shape with the dimensions to be combined replaced by -1. numpy uses -1 as a flag to indicate that it should work out for itself how big that dimension should be (based on the total number of elements).
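A tiny demonstration of that -1 behaviour (zeros standing in for real images):
import numpy as np
a = np.zeros((100, 718, 686, 3))
b = np.reshape(a, (100, -1, 3))  # numpy infers 718*686 = 492548
print(b.shape)                   # (100, 492548, 3)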
This code is essentially a simplified version of Multihunter's answer, but my edit was rejected and hinted that it should be a separate answer. So there you go.
import cv2
import os
import numpy as np

def readImages(path):
    imgs = np.empty((0, 492548, 3))
    for file in os.listdir(path):
        if file.endswith('.png'):
            img = cv2.imread(file)
            img = img.reshape((1, 492548, 3))
            imgs = np.append(imgs, img, axis=0)
    return (imgs)
imgs = readImages(...)
print imgs.shape # (100, 492548, 3)
The trick is to reshape and append to a numpy array. It's not good practice to hardcode the length of the vector (492548), so if I were you I'd add a line that calculates this number and puts it in a variable for use in the rest of the script.
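A sketch of that suggestion, deriving the flattened length from the image itself instead of hardcoding it (variable names are illustrative):
n_pixels = img.shape[0] * img.shape[1]  # 718 * 686 = 492548 for these images
img = img.reshape((1, n_pixels, 3))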

scipy.ndimage.interpolation.zoom uses nearest-neighbor-like algorithm for scaling-down

While testing scipy's zoom function, I found that the results of scaling down an array are similar to the nearest-neighbour algorithm, rather than averaging. This increases noise drastically and is generally suboptimal for many applications.
Is there an alternative that does not use a nearest-neighbour-like algorithm and will properly average the array when downsizing? While coarsegraining works for integer scaling factors, I would need non-integer scaling factors as well.
Test case: create a random 100*M x 100*M array, for M = 2..20.
Downscale the array by the factor of M in three ways:
1) by taking the mean in MxM blocks
2) by using scipy's zoom with a scaling factor of 1/M
3) by taking the first point within each MxM block
The resulting arrays have the same mean and the same shape, but scipy's array has a variance as high as the nearest-neighbour one. Taking a different order for scipy.zoom does not really help.
import scipy.ndimage.interpolation
import numpy as np
import matplotlib.pyplot as plt

mean1, mean2, var1, var2, var3 = [], [], [], [], []
values = range(1, 20)  # down-scaling factors
for M in values:
    N = 100  # size of an array
    a = np.random.random((N*M, N*M))  # large array
    b = np.reshape(a, (N, M, N, M))
    b = np.mean(np.mean(b, axis=3), axis=1)
    assert b.shape == (N, N)  # coarsegrained array
    c = scipy.ndimage.interpolation.zoom(a, 1./M, order=3, prefilter=True)
    assert c.shape == b.shape
    d = a[::M, ::M]  # picking one point within each MxM block
    assert b.shape == d.shape
    mean1.append(b.mean())
    mean2.append(c.mean())
    var1.append(b.var())
    var2.append(c.var())
    var3.append(d.var())

plt.plot(values, mean1, label="Mean coarsegraining")
plt.plot(values, mean2, label="Mean scipy.zoom")
plt.plot(values, var1, label="Variance coarsegraining")
plt.plot(values, var2, label="Variance zoom")
plt.plot(values, var3, label="Variance nearest neighbour")
plt.xscale("log")
plt.yscale("log")
plt.legend(loc=0)
plt.show()
EDIT: Performance of scipy.ndimage.zoom on a real noisy image is also very poor
The original image is here http://wiz.mit.edu/lena_noisy.png
The code that produced it:
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage.interpolation import zoom
im = Image.open("/home/magus/Downloads/lena_noisy.png")
im = np.array(im)
plt.subplot(131)
plt.title("Original")
plt.imshow(im, cmap="Greys_r")
plt.subplot(132)
im2 = zoom(im, 1 / 8.)
plt.title("Scipy zoom 8x")
plt.imshow(im2, cmap="Greys_r", interpolation="none")
im.shape = (64, 8, 64, 8)
im3 = np.mean(im, axis=3)
im3 = np.mean(im3, axis=1)
plt.subplot(133)
plt.imshow(im3, cmap="Greys_r", interpolation="none")
plt.title("averaging over 8x8 blocks")
plt.show()
Nobody posted a working answer, so I will post the solution I currently use. It is not the most elegant, but it works.
import numpy as np
import scipy.ndimage
def zoomArray(inArray, finalShape, sameSum=False,
              zoomFunction=scipy.ndimage.zoom, **zoomKwargs):
    """
    Normally, one can use scipy.ndimage.zoom to do array/image rescaling.
    However, scipy.ndimage.zoom does not coarsegrain images well. It basically
    takes nearest neighbor, rather than averaging all the pixels, when
    coarsegraining arrays. This increases noise. Photoshop doesn't do that, and
    performs some smart interpolation-averaging instead.

    If you were to coarsegrain an array by an integer factor, e.g. 100x100 ->
    25x25, you just need to do block-averaging, that's easy, and it reduces
    noise. But what if you want to coarsegrain 100x100 -> 30x30?

    Then my friend you are in trouble. But this function will help you. This
    function will blow up your 100x100 array to a 120x120 array using
    scipy.ndimage zoom. Then it will coarsegrain a 120x120 array by
    block-averaging in 4x4 chunks.

    It will do it independently for each dimension, so if you want a 100x100
    array to become a 60x120 array, it will blow up the first and the second
    dimension to 120, and then block-average only the first dimension.

    Parameters
    ----------
    inArray: n-dimensional numpy array (1D also works)
    finalShape: resulting shape of an array
    sameSum: bool, preserve a sum of the array, rather than values.
             By default, values are preserved.
    zoomFunction: by default, scipy.ndimage.zoom. You can plug your own.
    zoomKwargs: a dict of options to pass to zoomFunction.
    """
    inArray = np.asarray(inArray, dtype=np.double)
    inShape = inArray.shape
    assert len(inShape) == len(finalShape)
    mults = []  # multipliers for the final coarsegraining
    for i in range(len(inShape)):
        if finalShape[i] < inShape[i]:
            mults.append(int(np.ceil(inShape[i] / finalShape[i])))
        else:
            mults.append(1)
    # shape to which to blow up
    tempShape = tuple([i * j for i, j in zip(finalShape, mults)])

    # stupid zoom doesn't accept the final shape. Carefully crafting the
    # multipliers to make sure that it will work.
    zoomMultipliers = np.array(tempShape) / np.array(inShape) + 0.0000001
    assert zoomMultipliers.min() >= 1

    # applying scipy.ndimage.zoom
    rescaled = zoomFunction(inArray, zoomMultipliers, **zoomKwargs)

    for ind, mult in enumerate(mults):
        if mult != 1:
            sh = list(rescaled.shape)
            assert sh[ind] % mult == 0
            newshape = sh[:ind] + [sh[ind] // mult, mult] + sh[ind + 1:]
            rescaled.shape = newshape
            rescaled = np.mean(rescaled, axis=ind + 1)
    assert rescaled.shape == finalShape

    if sameSum:
        extraSize = np.prod(finalShape) / np.prod(inShape)
        rescaled /= extraSize
    return rescaled
myar = np.arange(16).reshape((4,4))
rescaled = zoomArray(myar, finalShape=(3, 5))
print(myar)
print(rescaled)
FWIW, I found that order=1 at least preserves the mean a lot better than the default order=3 (as expected, really).
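Since zoomKwargs is forwarded to the zoom function, that variant is just (a sketch):
rescaled = zoomArray(myar, finalShape=(3, 5), order=1)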

Numpy Error - Only On Linux

The following bit of Python takes two images and performs an 'alpha composite' of them, or in other words, sticks one on top of the other, and returns a single image. The code isn't really something I quite grasp, as it came from another Stack Overflow answer.
import numpy as np
import Image

def alpha_composite(src, dst):
    src = np.asarray(src)
    dst = np.asarray(dst)
    out = np.empty(src.shape, dtype='float')
    alpha = np.index_exp[:, :, 3:]
    rgb = np.index_exp[:, :, :3]
    src_a = src[alpha] / 255.0
    dst_a = dst[alpha] / 255.0
    out[alpha] = src_a + dst_a * (1 - src_a)
    old_setting = np.seterr(invalid='ignore')
    out[rgb] = (src[rgb] * src_a + dst[rgb] * dst_a * (1 - src_a)) / out[alpha]
    np.seterr(**old_setting)
    out[alpha] *= 255
    # clip in place (np.clip returns a new array unless out= is given)
    np.clip(out, 0, 255, out=out)
    # astype('uint8') maps np.nan (and np.inf) to 0
    out = out.astype('uint8')
    out = Image.fromarray(out, 'RGBA')
    return out
It works great on Windows, but as soon as I move it over to Ubuntu Server, it gives me the following error:
File "ImageStitcher.py", line 21, in alpha_composite
src_a = src[alpha]/255.0
IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index
I'm using the same version of PIL and the same version of numpy on both.
Any idea what might be going on here?
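One thing worth checking (a diagnostic sketch, not a confirmed fix): old PIL builds that lack the array interface make np.asarray return a 0-d object array, which reproduces exactly this IndexError. The file name here is illustrative:
src = np.asarray(Image.open('top.png'))
print(src.ndim, src.shape, src.dtype)
# expected: 3, (H, W, 4), uint8 -- a 0-d object array here means PIL and
# numpy are not cooperating, and indexing with [:, :, 3:] will fail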
