Different result between OpenCV convertTo in C++ and manual conversion in Python

I'm trying to port code from C++ to Python. At some point a frame is extracted from a .oni recording (OpenNI2), scaled to 8 bit and saved as a jpg.
The C++ code uses OpenCV's convertTo, which is not exposed in the Python bindings, so following the documentation I'm trying to do the same operation manually, but something is wrong.
This is the C++ code:
cv::Mat depthImage8;
double maxVal = 650.0;
double minVal = 520.0;
depthImage.convertTo(depthImage8, CV_8UC1, 255.0/(maxVal - minVal), -minVal * 255.0/(maxVal - minVal));
cv::imwrite(dst_folder + "/" + std::to_string(DepthFrameIndex) + "_8bit.jpg", depthImage8);
which produces the expected image.
This is the Python version:
depth_scale_factor = 255.0 / (650.0-520.0)
depth_scale_beta_factor = -520.0*255.0/(650.0-520.0)
depth_uint8 = (depth_array*depth_scale_factor+depth_scale_beta_factor).astype('uint8')
This code also runs, but the images it generates differ from the C++ output, while the original 16UC1 frames are identical (already checked; they match pixel by pixel), so something must be wrong in the conversion.

Thanks to the comments I came up with the solution. As stated by users michelson and Dan Masek, OpenCV performs a saturate_cast operation, while numpy doesn't. So in order to get the same result, the Python version must be:
depth_uint8 = depth_array*depth_scale_factor+depth_scale_beta_factor
depth_uint8[depth_uint8>255] = 255
depth_uint8[depth_uint8<0] = 0
depth_uint8 = depth_uint8.astype('uint8')
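Equivalently, the clamping can be written with np.clip; a minimal sketch of the same saturation, using the scale factors from above:
import numpy as np

# Clamp to [0, 255] before the cast, mirroring OpenCV's saturate_cast
depth_uint8 = np.clip(depth_array*depth_scale_factor + depth_scale_beta_factor, 0, 255).astype(np.uint8)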

Related

Problem getting old series data (Python for Finance)

I've converted this formula (ZLEMA Moving Average), but I have issues with "Data(Lag Days Ago)": it seems it can't go back to find the value. Here's the function, but unfortunately it doesn't produce the desired result.
def fzlema(source, period):
    zxLag = period / 2 if (period / 2) == np.round(period / 2) else (period - 1) / 2
    zxLag = int(zxLag)
    zxEMAData = source + (source - source.iloc[zxLag])  # Probably the error is in this line
    zlema = zxEMAData.ewm(span=period, adjust=False).mean()
    zlema = np.round(zlema, 2)
    return zlema

zlema = fzlema(dataframe['close'], 50)
To be clear, the script runs without errors, but the output doesn't match what TradingView calculates.
I tried using iloc[..] and tail(..), but neither returns exact results.
I can use the pandas and numpy libraries.
Any thoughts?
SOLVED:
Simply use source.shift(zxLag) instead of source.iloc[zxLag], as sketched below.
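For completeness, a sketch of the corrected function (note that both branches of the original lag computation reduce to integer division by 2):
import numpy as np

def fzlema(source, period):
    zxLag = period // 2  # covers both the even and odd cases of the original expression
    # shift(zxLag) looks up the value zxLag rows back for every row,
    # rather than the single fixed value that iloc[zxLag] returned
    zxEMAData = source + (source - source.shift(zxLag))
    zlema = zxEMAData.ewm(span=period, adjust=False).mean()
    return np.round(zlema, 2)

zlema = fzlema(dataframe['close'], 50)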

PyTorch C++ extension: How to index tensor and update it?

I'm creating a PyTorch C++ extension and after much research I can't figure out how to index a tensor and update its values. I found out how to iterate over a tensor's entries using the data_ptr() method, but that's not applicable to my use case.
Given a matrix M, a list of lists (blocks) of index pairs P, and a function f: dtype(M)^2 -> dtype(M)^2 that takes two values and returns two new values.
I'm trying to implement the following pseudo code:
for each block B in P:
    for each row R in M:
        for each index-pair (i,j) in B:
            M[R,i], M[R,j] = f(M[R,i], M[R,j])
Ultimately, this code is going to run on the GPU using CUDA, but since I don't have any experience with that, I wanted to first write a pure C++ program and then convert it.
Can anyone suggest how to do this or how to convert the algorithm to do something equivalent?
What I wanted to do can be done using the tensor.accessor<scalar_dtype, num_dimensions>() method. When executing on the GPU, use packed_accessor64<scalar_dtype, num_dimensions, torch::RestrictPtrTraits>() or packed_accessor32<scalar_dtype, num_dimensions, torch::RestrictPtrTraits>() instead (depending on the size of your tensor).
// Create a matrix and grab a CPU accessor for element-wise reads and writes
auto matrix = torch::rand({10, 8});
auto num_rows = matrix.size(0);
auto a = matrix.accessor<float, 2>();
for (auto i = 0; i < num_rows; ++i) {
    auto x = a[i][some_index];
    auto new_x = some_function(x);
    a[i][some_index] = new_x;  // writes go back through the accessor
}
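For reference, a minimal sketch of the original pseudo code written against the Python API; it assumes f returns two fresh tensors (e.g. via clone()) rather than views into M:
import torch

def apply_pairs(M, P, f):
    # M: 2-D tensor; P: list of blocks, each a list of (i, j) index pairs
    for block in P:
        for i, j in block:
            # whole-column indexing covers "for each row R in M" in one step
            new_i, new_j = f(M[:, i], M[:, j])
            M[:, i] = new_i
            M[:, j] = new_j
    return M

# Example: f swaps the two values
M = torch.rand(4, 6)
M = apply_pairs(M, [[(0, 1), (2, 3)]], lambda a, b: (b.clone(), a.clone()))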

Using astropy.fits and numpy to apply coincidence corrections to SWIFT fits image

This question may be a little specialist, but hopefully someone can help. I normally use IDL, but for developing a pipeline I'm looking to use Python to improve running times.
My FITS file handling setup is as follows:
import numpy
from astropy.io import fits

# Directory: /Users/UCL_Astronomy/Documents/UCL/PHASG199/M33_UVOT_sum/UVOTIMSUM/M33_sum_epoch1_um2_norm.img
# Open the UVOTIMSUM file once and close it after extracting the relevant values:
with fits.open('...') as ima_norm_um2:
    # Call the header and data from the UVOTIMSUM file with the relevant keyword extensions:
    ima_norm_um2_hdr = ima_norm_um2[0].header
    ima_norm_um2_data = ima_norm_um2[0].data
    # Individual dimensions for number of x pixels and number of y pixels:
    nxpix_um2_ext1 = ima_norm_um2_hdr['NAXIS1']
    nypix_um2_ext1 = ima_norm_um2_hdr['NAXIS2']
    # Compute the size of the images (you can also do this manually rather than calling these keywords from the header):
    corrfact_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
    coincorr_um2_ext1 = numpy.zeros((ima_norm_um2_hdr['NAXIS2'], ima_norm_um2_hdr['NAXIS1']))
    # Check that the dimensions are all the same:
    print(corrfact_um2_ext1.shape)
    print(coincorr_um2_ext1.shape)
    print(ima_norm_um2_data.shape)

# Make a new image file to save the correction factors:
hdu_corrfact = fits.PrimaryHDU(corrfact_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_corrfact]).writeto('.../M33_sum_epoch1_um2_corrfact.img')
# Make a new image file to save the corrected image to:
hdu_coincorr = fits.PrimaryHDU(coincorr_um2_ext1, header=ima_norm_um2_hdr)
fits.HDUList([hdu_coincorr]).writeto('.../M33_sum_epoch1_um2_coincorr.img')
I'm looking to then apply the following corrections:
# Define the variables from Poole et al. (2008) "Photometric calibration of the Swift ultraviolet/optical telescope":
alpha = 0.9842000
ft = 0.0110329
a1 = 0.0658568
a2 = -0.0907142
a3 = 0.0285951
a4 = 0.0308063

for i in range(nxpix_um2_ext1 - 1): #do begin
    for j in range(nypix_um2_ext1 - 1): #do begin
        if (numpy.less_equal(i, 4) | numpy.greater_equal(i, nxpix_um2_ext1-4) | numpy.less_equal(j, 4) | numpy.greater_equal(j, nxpix_um2_ext1-4)): #then begin
            #UVM2
            corrfact_um2_ext1[i,j] == 0
            coincorr_um2_ext1[i,j] == 0
        else:
            xpixmin = i-4
            xpixmax = i+4
            ypixmin = j-4
            ypixmax = j+4
            #UVM2
            ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax])
            xvec_UVM2 = ft*ima_UVM2sum
            fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2*xvec_UVM2) + (a3*xvec_UVM2*xvec_UVM2*xvec_UVM2) + (a4*xvec_UVM2*xvec_UVM2*xvec_UVM2*xvec_UVM2)
            Ctheory_UVM2 = - alog(1-(alpha*ima_UVM2sum*ft))/(alpha*ft)
            corrfact_um2_ext1[i,j] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum)
            coincorr_um2_ext1[i,j] = corrfact_um2_ext1[i,j]*ima_sk_um2[i,j]
The above snippet is where it is messing up, as I have a mixture of IDL and Python syntax. I'm just not sure how to convert certain aspects of IDL to Python. For example, I'm not quite sure how to handle ima_UVM2sum = total(ima_norm_um2[xpixmin:xpixmax,ypixmin:ypixmax]).
I'm also missing the part where it updates the correction-factor and coincidence-correction image files. If anyone has the patience to go over it with a fine-tooth comb and suggest the necessary changes, that would be excellent.
The original normalised image can be downloaded here; replace the ... in the above code with this file.
One very important thing about numpy is that it applies every mathematical and comparison function element-wise, so you probably don't need to loop through the arrays at all.
So maybe start where you convolve your image with a sum filter. For 2D images this can be done with astropy.convolution.convolve or scipy.ndimage.filters.uniform_filter.
I'm not sure exactly what you want, but I think you want a 9x9 sum filter. uniform_filter computes the local mean, so multiplying by the window area (9*9 = 81) gives the sum:
from scipy.ndimage.filters import uniform_filter
ima_UVM2sum = uniform_filter(ima_norm_um2_data, size=9) * 81
Since you want to discard any pixels at the borders (4 pixels), you can simply slice them away:
ima_UVM2sum_valid = ima_UVM2sum[4:-4,4:-4]
This ignores the first and last 4 rows and the first and last 4 columns (the negative stop value counts from the end).
Now you can calculate the corrections:
xvec_UVM2 = ft*ima_UVM2sum_valid
fxvec_UVM2 = 1 + (a1*xvec_UVM2) + (a2*xvec_UVM2**2) + (a3*xvec_UVM2**3) + (a4*xvec_UVM2**4)
Ctheory_UVM2 = -np.log(1 - (alpha*ima_UVM2sum_valid*ft))/(alpha*ft)
These are all arrays, so you still do not need to loop.
But then you want to fill your two images. Be careful: the correction array is smaller (we ignored the first and last 4 rows/columns), so you have to address the same region in the correction images:
corrfact_um2_ext1[4:-4,4:-4] = Ctheory_UVM2*(fxvec_UVM2/ima_UVM2sum_valid)
coincorr_um2_ext1[4:-4,4:-4] = corrfact_um2_ext1[4:-4,4:-4] * ima_sk_um2[4:-4,4:-4]
Still no loops, just numpy's mathematical functions. This means it is much faster (MUCH faster!) and does the same thing.
Maybe I have forgotten some slicing, which would yield a "not broadcastable" error; if so, please report back.
Just a note about your loops: Python's first axis is the second FITS axis and vice versa, so if you need to loop over the axes, bear that in mind so you don't end up with IndexErrors or unexpected results.
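For example (a hypothetical check, using the arrays from above):
# numpy shape is (rows, cols), i.e. (NAXIS2, NAXIS1) in FITS terms
print(ima_norm_um2_data.shape)   # (NAXIS2, NAXIS1)
pixel = ima_norm_um2_data[j, i]  # j runs along NAXIS2 (y), i along NAXIS1 (x)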

Using python binding for flycapture to retrieve color image

I am working with the CMLN-13S2C-CS CCD camera from PointGrey Systems. It uses the FlyCapture API to grab images. I would like to grab these images and do some stuff in OpenCV with them, using Python.
I am aware of the following Python binding: pyflycapture2. With this binding I am able to retrieve images; however, I cannot retrieve them in color, which the camera should be able to do.
The video mode and frame rate the camera can handle are VIDEOMODE_1280x960Y8 and FRAMERATE_15, respectively. I think it has something to do with the pixel_format, which I believe should be raw8.
Is anyone able to retrieve a color image using this or any existing Python binding for FlyCapture? Note that I am working on Linux.
You don't need to use the predefined modes. The Context class has the set_format7_configuration(mode, x_offset, y_offset, width, height, pixel_format) method with which you can use your custom settings. Using this you can at least change the resolution of the grabbed image.
Usage example:
c.set_format7_configuration(fc2.MODE_0, 320, 240, 1280, 720, fc2.PIXEL_FORMAT_MONO8)
As for the coloring issue: I've so far managed to get a colored image using PIXEL_FORMAT_RGB8, by modifying the Image class in flycapture2.pyx as follows:
def __array__(self):
    cdef np.ndarray r
    cdef np.npy_intp shape[3]  # From 2 to 3
    cdef np.dtype dtype
    numberofdimensions = 2  # New variable
    if self.img.format == PIXEL_FORMAT_MONO8:
        dtype = np.dtype("uint8")
    elif self.img.format == PIXEL_FORMAT_MONO16:
        dtype = np.dtype("uint16")
    elif self.img.format == PIXEL_FORMAT_RGB8:  # New condition
        dtype = np.dtype("uint8")
        numberofdimensions = 3
        shape[2] = 3
    else:
        dtype = np.dtype("uint8")
    Py_INCREF(dtype)
    shape[0] = self.img.rows
    shape[1] = self.img.cols
    # nd value (numberofdimensions) was always 2; stride set to NULL
    r = PyArray_NewFromDescr(np.ndarray, dtype,
                             numberofdimensions, shape, NULL,
                             self.img.pData, np.NPY_DEFAULT, None)
    r.base = <PyObject *>self
    Py_INCREF(self)
    return r
This code is most likely not flawless (e.g. I removed the stride handling), for the simple reason that I have pretty much zero experience with C and Cython, but this way I at least managed to get a colored frame (I'm now in the process of trying to get PIXEL_FORMAT_RAW8 working).
And just as a reminder: flycapture2.pyx is a Cython file, so you need to recompile it before you can use it (I just ran the pyflycap2 install script again).
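Once the patched binding returns a 3-channel array, handing a frame to OpenCV only needs a channel reorder, since OpenCV expects BGR; a sketch, assuming im is an image grabbed through the patched binding:
import cv2
import numpy as np

rgb = np.array(im)                          # (rows, cols, 3) uint8 via the patched __array__
bgr = cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)  # OpenCV's channel convention is BGR
cv2.imwrite('frame.png', bgr)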
I'm using the same camera with Matlab and also had issues with the "raw8" format. So I chose "rgb8", specifically "F7_RGB_644x482_Mode1", and everything started to work (not sure how this should look in Python).
P.S. At the moment I'm trying to get started with Python and pyflycapture2; let's see if I can find a workaround.
UPD: Okay, now I know what's going on. :)
The reason for your (and my) issue is buried inside pyflycapture2 itself, specifically the Image class definition. You can have a look here: https://github.com/jordens/pyflycapture2/blob/eec14acd761e89d8e63a0961174e7f5900180d54/src/flycapture2.pyx
if self.img.format == PIXEL_FORMAT_MONO8:
    dtype = np.dtype("uint8")
    stride[1] = 1
elif self.img.format == PIXEL_FORMAT_MONO16:
    dtype = np.dtype("uint16")
    stride[1] = 2
else:
    dtype = np.dtype("uint8")
    stride[1] = self.img.stride/self.img.cols
Any image will be converted to grayscale, even if it was RGB initially. So we need to update that file somehow.

Numba function slower than C++ and loop re-order further slows down x10

The following code simulates extracting binary words from different locations within a set of images.
The Numba-wrapped function, wordcalc in the code below, has 2 problems:
It is 3 times slower than a similar implementation in C++.
Most strangely, if you switch the order of the "ibase" and "ibit" for-loops, speed drops by a factor of 10 (!). This does not happen in the C++ implementation, which remains unaffected.
I'm using Numba 0.18.2 from WinPython 2.7
What could be causing this?
import numpy as np
import numba as nb

imDim = 80
numInsts = 10**4
numInstsSub = 10**4 / 4
bitsNum = 13

Xs = np.random.rand(numInsts, imDim**2)
iInstInds = np.array(range(numInsts)[::4])
baseInds = np.arange(imDim**2 - imDim*20 + 1)
ofst1 = np.random.randint(0, imDim*20, bitsNum)
ofst2 = np.random.randint(0, imDim*20, bitsNum)

@nb.jit(nopython=True)
def wordcalc(Xs, iInstInds, baseInds, ofst, bitsNum, newXz):
    count = 0
    for i in iInstInds:
        Xi = Xs[i]
        for ibit in range(bitsNum):
            for ibase in range(baseInds.shape[0]):
                u = Xi[baseInds[ibase] + ofst[0, ibit]] > Xi[baseInds[ibase] + ofst[1, ibit]]
                newXz[count, ibase] = newXz[count, ibase] | np.uint16(u * (2**ibit))
        count += 1
    return newXz

ret = wordcalc(Xs, iInstInds, baseInds, np.array([ofst1, ofst2]), bitsNum,
               np.zeros((iInstInds.size, baseInds.size), dtype=np.uint16))
I get a 4x speed-up by changing np.uint16(u * (2**ibit)) to np.uint16(u << ibit), i.e. replacing the power of 2 with a bit shift, which should be equivalent (for non-negative integers).
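Concretely, the inner statement becomes (u is 0 or 1, so the OR-ed value is identical):
newXz[count, ibase] = newXz[count, ibase] | np.uint16(u << ibit)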
It seems reasonably likely that your C++ compiler might be making this substitution itself.
Swapping the order of the two loops makes a small difference for me for both your original version (5%) and my optimized version (15%), so I don't think I can make a useful comment on that.
If you really want to compare the Numba and C++ output, you can look at the compiled Numba function by setting os.environ['NUMBA_DUMP_ASSEMBLY'] = '1' before you import Numba. (That's clearly quite involved, though.)
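For example:
import os
os.environ['NUMBA_DUMP_ASSEMBLY'] = '1'  # must be set before numba is imported
import numba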
For reference, I'm using Numba 0.19.1.
