Arguments to cv2::imshow - python

Edit: the original title, "convert numpy array to cvmat", was a mistake - OpenCV's less-than-useful error message and my not reading the docs!
With OpenCV 2, the Python interface now uses NumPy arrays by default.
cvimage = cv2.imread("image.png") #using OpenCV 2
type(cvimage)
Out: numpy.ndarray #dtype is uint8
pltimage = plt.imread("image.png") #using Matplotlib
type(pltimage)
Out: numpy.ndarray #dtype is float
plt.imshow(cvimage) # works great
cv2.imshow(cvimage)
TypeError: Required argument 'mat' (pos 2) not found
Since cv2 uses NumPy arrays by default, there is no longer any cv::Mat constructor and NumPy has no functions to convert to a cv::Mat array.
Any ideas?

The function has the following docstring: imshow(winname, mat) -> None.
You can see the doc string by typing cv2.imshow.__doc__ in the interpreter.
Try cv2.imshow('Image', cvimage).
tl;dr: In the original question, the first argument, the window name, was missing; imshow takes two parameters and only one was supplied.
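For completeness, a minimal sketch of the corrected call (assuming the same "image.png" from the question is on disk):
import cv2

cvimage = cv2.imread("image.png")   # numpy.ndarray, dtype uint8
cv2.imshow("Image", cvimage)        # window name first, then the image array
cv2.waitKey(0)                      # keep the window open until a key is pressed
cv2.destroyAllWindows()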

The question technically asks how to convert a NumPy array (what cv2 uses) into a Mat object (what the old cv module uses). For anyone who is interested, this can be done by:
mat_array = cv.fromarray(numpy_array)
where mat_array is a Mat object, and numpy_array is a NumPy array or image.
I would suggest staying away from the older cv structures where possible. NumPy arrays offer much better performance than implementations in native Python.

The Mat object was needed because C/C++ lacked a standard, native implementation of matrices.
However, NumPy's array is a perfect replacement for that functionality, so the cv2 module accepts NumPy arrays (numpy.ndarray) wherever the docs indicate a matrix.
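For illustration, a short sketch (my example, not from the original answer) showing a plain numpy.ndarray passed straight into cv2 functions with no conversion step:
import cv2
import numpy as np

img = np.zeros((200, 300, 3), dtype=np.uint8)             # a blank BGR image is just a numpy array
cv2.rectangle(img, (50, 50), (250, 150), (0, 255, 0), 2)  # cv2 draws into the array in place
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)              # returns another numpy.ndarray
print(type(gray), gray.shape)                             # <class 'numpy.ndarray'> (200, 300)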


How can I make my Python program use 4 bytes for an int instead of 24 bytes?

To save memory, I want to use fewer bytes (4) for each int I have instead of 24.
I looked at structs, but I don't really understand how to use them.
https://docs.python.org/3/library/struct.html
When I do the following:
myInt = struct.pack('I', anInt)
sys.getsizeof(myInt) doesn't return 4 like I expected.
Is there something that I am doing wrong? Is there another way for Python to save memory for each variable?
ADDED: I have 750,000,000 integers in an array that I wish to be able to use given an index.
If you want to hold many integers in an array, use a NumPy ndarray. NumPy is a very popular third-party package that stores arrays far more compactly than plain Python does. Adding NumPy to the standard library was considered, but it was kept separate so that it can be updated more frequently than Python itself. NumPy is one of the reasons Python has become so popular for data science and other scientific uses.
Numpy's np.int32 type uses four bytes for an integer. Declare your array full of zeros with
import numpy as np
myarray = np.zeros((750000000,), dtype=np.int32)
Or if you just want the array and do not want to spend any time initializing the values,
myarray = np.empty((750000000,), dtype=np.int32)
You then fill and use the array as you like. There is some Python overhead for the complete array, so the array's size will be slightly larger than 4 * 750000000, but the size will be close.
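As a rough check of the memory math (a smaller array is used here so the sketch runs quickly; the per-element cost is the same):
import sys
import numpy as np

small = np.zeros((1_000_000,), dtype=np.int32)
print(small.nbytes)          # 4000000: exactly 4 bytes per element
print(sys.getsizeof(small))  # slightly more: the data plus the ndarray object's own overhead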

zip does not work for imshow: TypeError: Image data cannot be converted to float

I saw similar questions elsewhere, but they came from different code. My case is to transpose the data with zip, then use imshow:
import matplotlib.pyplot as plt
a=[[1,2,3],[4,5,6]]
img_data=zip(*a)
plt.imshow(img_data)
I got
TypeError: Image data cannot be converted to float
In Python 3, zip returns an iterator object, not a container such as a list or array. What you want to do is convert the zip object to a format that imshow understands. There are a couple of options.
Option 1
Convert to a list -
img_data = list(zip(*a))
plt.imshow(img_data)
Option 2
Convert a to a numpy array and transpose. Since you're using zip to the same effect, this makes sense.
import numpy as np
plt.imshow(np.array(a).T)

Do numpy methods work correctly on numbers too big to fit numpy dtypes?

I would like to know whether numbers bigger than what int64 or float128 can hold are correctly processed by numpy functions.
EDIT: I mean numpy functions applied to numbers/Python objects outside of any numpy array, like using an np function in a list comprehension that applies to the contents of a list of int128s.
I can't find anything about that in their docs, but I really don't know what to think or expect. From tests, it seems to work, but I want to be sure, and a few trivial tests won't help with that. So I come here for knowledge:
If the np framework does not handle such big numbers, are its functions able to deal with them anyway?
EDIT: sorry, I wasn't clear. Please see the edit above
Thanks in advance.
See the Extended Precision heading in the NumPy documentation. For very large numbers, you can also create an array with dtype set to 'object', which essentially lets you use the NumPy framework on the large numbers, but with lower performance than native types. As has been pointed out, though, this will break when you try to call a function not supported by the particular objects stored in the array.
import numpy as np
arr = np.array([10**105, 10**106], dtype='object')
But the short answer is that you can and will get unexpected behavior when using these large numbers unless you take special care to account for them.
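As a small sketch of what does work with the object dtype (exact Python-integer arithmetic, no overflow), keeping in mind the caveat above that many ufuncs will not:
import numpy as np

arr = np.array([10**105, 10**106], dtype='object')
print(arr + 1)     # element-wise, using Python's arbitrary-precision ints
print(arr.sum())   # reductions built on + stay exact as well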
When storing a number into a numpy array with a dtype not sufficient to store it, you will get truncation or an error
arr = np.empty(1, dtype=np.int64)
arr[0] = 2**65
arr
Gives OverflowError: Python int too large to convert to C long.
arr = np.empty(1, dtype=np.float16)
arr[0] = 2**64
arr
Gives inf (and no error)
arr[0] = 2**15 + 2
arr
Gives [ 32768.] (i.e., 2**15), so truncation occurred. It would be harder for this to happen with float128...
You can have numpy arrays of python objects, which could be a python integer which is too big to fit in np.int64. Some of numpy's functionality will work, but many functions call underlying c code which will not work. Here is an example:
import numpy as np
a = np.array([123456789012345678901234567890]) # a has dtype object now
print((a*2)[0]) # Works and gives the right result
print(np.exp(a)) # Does not work, because "'int' object has no attribute 'exp'"
Generally, most functionality will probably be lost for your extremely large numbers. Also, as has been pointed out, when you have an array with a dtype of np.int64 or similar, you will have overflow problems when you increase your array elements beyond that type's limit. With numpy, you have to be careful about what your array's dtype is!
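One way to be careful about a dtype is to query its limits explicitly before relying on it; a quick sketch:
import numpy as np

print(np.iinfo(np.int64).max)      # 9223372036854775807, the largest int64
print(np.finfo(np.float64).max)    # about 1.8e308, the largest finite float64

x = 123456789012345678901234567890
print(x > np.iinfo(np.int64).max)  # True: this value needs dtype=object (with the limitations above)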

image not displayed correctly when scaled with a decimal

I am using OpenCV to read and display an image. I am trying to do a scalar multiplication but it is being displayed very differently for two similar approaches:
img = cv2.imread('C:/Python27/user_scripts/images/g1.jpg', -1)
cv2.imshow('img_scaled1', 0.5*img)
cv2.waitKey(0)
cv2.imshow('img_scaled2', img/2)
cv2.waitKey(0)
In the first case, hardly anything is displayed; the second case works fine.
It seems to me that imshow() does not support numpy arrays of floats.
I want to use the first method. Can somebody help?
There are a lot of pitfalls when working with images. This one looks like a type issue.
imshow accepts uint8 arrays with values in [0, 256) (256 excluded) and float arrays with values in [0.0, 1.0]. When you do 0.5*img on a uint8 image, you get a float array whose values fall outside that range, so there is no guarantee on the result.
A solution is to cast the array back to the uint8 type (the first form needs numpy imported as np):
cv2.imshow('img_scaled1', (0.5*img).astype(np.uint8))
or
cv2.imshow('img_scaled1', (0.5*img).astype('uint8'))
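The other route the range comment above points to is to rescale into [0.0, 1.0] and keep a float image; a sketch assuming the same img from the question:
import cv2

img = cv2.imread('C:/Python27/user_scripts/images/g1.jpg', -1)
cv2.imshow('img_scaled_float', 0.5 * (img / 255.0))  # float values stay inside [0.0, 1.0]
cv2.waitKey(0)
cv2.destroyAllWindows()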

Why does the numpy angle function return values even for masked array values

If you try the following code segment
import numpy as np
import numpy.ma as ma
a = np.random.random(100) + 1j*np.random.random(100)
mask = np.ones_like(a, dtype='bool')
mask[0:9] = False
a = ma.masked_array(a, mask)
phase = np.angle(a)
The phase array will not be masked. The angle function will return values for the whole array, even for the masked out values. Am I doing something wrong here or is this the way it should be? If so, why?
Had a quick look at the numpy source, and it might be a bug/not implemented yet.
It's listed as a "missing feature (work in progress)" on the numpy.ma page, issue #1: http://projects.scipy.org/numpy/wiki/MaskedArray.
The problem is that a number of unary functions such as np.angle, np.quantile call [np.]asarray in the source, which strips out the mask.
As the devs explain in the page I linked to, if these functions used ma.asarray instead of np.asarray they'd work, but they don't :(.
I guess this is a patch yet to be submitted?
As a temporary workaround, np.angle basically calls np.arctan2(a.imag,a.real) (optionally multiplying by 180/pi to get degrees), so you could use that.
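A sketch of that workaround, following the question's setup: the masked array's .real and .imag views keep the mask, and ma.arctan2 is mask-aware, so the result stays masked.
import numpy as np
import numpy.ma as ma

a = np.random.random(100) + 1j * np.random.random(100)
mask = np.ones_like(a, dtype='bool')
mask[0:9] = False
a = ma.masked_array(a, mask)

phase = ma.arctan2(a.imag, a.real)  # same values as np.angle, but the mask survives
print(phase.mask[:12])              # first 9 entries unmasked (False), the rest masked (True)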
