I'm in the process of moving some of my code off of openzoom.py and onto libvips, but am not sure how to dictate the interpolation method, which is important. At the very least I need to be able to use bicubic/bilinear in one case and nearest neighbors in the other.
My old code is as follows:
creator = deepzoom.ImageCreator(tile_size=128, tile_overlap=2, tile_format="png",
                                image_quality=0.8, resize_filter="nearest")
creator.create(sourceFile, destFile)
Currently, using pyvips, I have the following:
image = pyvips.Image.new_from_file(sourceFile)
image.dzsave(destFile, tile_size=128, overlap=2,
             suffix='.png[Q=80]')
Any help would be greatly appreciated :)
By default, dzsave will average each 2x2 block of pixels, which is equivalent to bilinear.
Sometimes, for example with images where pixel values represent labels rather than intensity, you need a non-interpolatory downsize. For these cases, you can use the region_shrink parameter to select median or mode, which will both preserve label values.
I would use:
image = pyvips.Image.new_from_file(sourceFile, access='sequential')
image.dzsave(destFile,
             overlap=1,
             tile_size=126,
             region_shrink='mode',
             suffix='.png')
Don't forget to set the access hint. It'll give you a huge improvement in speed and memory behaviour for large images that don't support random access.
The PNG Q number sets quantization quality when outputting palettized images. Perhaps you mean compression? libvips defaults to compression level 6, the usual PNG default.
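If compression is what you're after, I believe you can pass it through the suffix options (a small, untested sketch; pngsave's compression levels run 0-9):

image = pyvips.Image.new_from_file(sourceFile, access='sequential')
image.dzsave(destFile, tile_size=126, overlap=1,
             suffix='.png[compression=6]')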
Are you sure you want overlap=2? The deepzoom standard is overlap 1. Overlap 1 means there is one pixel extra around the edge of every tile, so tiles in the centre of the image will share two pixels on every edge with their neighbours. Setting overlap=2 means you'll have four pixel overlaps, confusingly.
Likewise, tile_size=128 means most of your tiles will be 132x132 pixels. It doesn't matter for PNG, but JPG works best with multiples of 8 on an axis. I would set tile_size to (128 - 2 * overlap), as deepzoom does by default.
git master libvips adds max, min and nearest (always pick the top-left pixel) as well. A branch has lanczos3, but it was never merged for various reasons.
I am building a device that uses a motorized stage and camera to raster-scan a sample and store the resulting images, for downstream visualization or perception tasks.
I have attached an image for illustration. (In the image, the red and yellow squares are images that map the sample area; the red square maps a smaller area and the yellow square maps a larger one.)
The stage has its own coordinate system (black dots), which can be mapped to the image data, as each image will contain a variable number of black dots (>1) depending on its magnification (red or yellow squares).
I have been wondering how I could design a system for storing these images. My stage coordinate system extends from -50,000 to +50,000 with a step size of 0.1, so it wouldn't be practical to create a 500k x 500k reference array mapping each point to any pixels that might belong to it.
I'm trying to do this in python.
There are well-known techniques for addressing such problems.

Define an underlying spatial coordinate system in mm or some other convenient unit of measure. Then define three (invertible) functions (black, red, yellow) that convert back and forth between spatial coordinates and pixel coordinates; it sounds like linear functions will suffice. If there is noise in your measurements, you might find it convenient to discretize by setting the low-order bits to zero.

You will take multiple photos and store them in a filesystem. Base each filename on the spatial coordinate of the center pixel, composed in this way: let x_bits be the spatial X coordinate with MSB first, and similarly for y_bits. Let the filename be the alternating X and Y bits, so e.g. the coordinate (0, 3) at two-bit resolution becomes "0101". Turn groups of four bits into hex nybbles, and treat the early parts of the filename as directory names, as you find convenient. Now, at query time, nearby images of different resolutions all appear together.
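A minimal sketch of that interleaving scheme (the function names, the 20-bit width and the directory depth are my own choices; the coordinates are assumed to already be non-negative integers, e.g. after offsetting and dividing by the 0.1 step size):

def interleave_bits(x, y, nbits=20):
    # Interleave the bits of x and y, MSB first, into a single bit string.
    bits = ""
    for i in range(nbits - 1, -1, -1):
        bits += str((x >> i) & 1) + str((y >> i) & 1)
    return bits

def coord_to_name(x, y, nbits=20):
    # Group the interleaved bits into hex nybbles; use the leading nybbles
    # as directory names so nearby coordinates share a directory prefix.
    bits = interleave_bits(x, y, nbits)
    nybbles = [format(int(bits[i:i + 4], 2), "x") for i in range(0, len(bits), 4)]
    return "/".join(nybbles[:3]) + "/" + "".join(nybbles[3:]) + ".png"

print(interleave_bits(0, 3, nbits=2))  # "0101", as in the example above
print(coord_to_name(12345, 67890))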
Alternatively, use arbitrary filenames and store coordinate + filename in a Postgres PostGIS table. Then geometry queries like ST_Distance or ST_Within will efficiently retrieve the relevant images, using a technique similar to the "interleaved bits" quadtree approach I outlined above.
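As a rough, untested sketch of the PostGIS route (the table name, column names and connection string here are all hypothetical):

import psycopg2

# Hypothetical table: images(filename text, geom geometry(Point))
conn = psycopg2.connect("dbname=scans")
cur = conn.cursor()

x, y, radius = 1234.5, -678.9, 50.0   # query point and search radius, in stage units

# Find every stored image whose centre lies within `radius` of the query point.
cur.execute(
    "SELECT filename FROM images WHERE ST_DWithin(geom, ST_MakePoint(%s, %s), %s)",
    (x, y, radius),
)
filenames = [row[0] for row in cur.fetchall()]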
Here's the deal. I want to create a mask that visualizes all the changes between two images (GeoTIFFs which are converted to 2D numpy arrays).
For that I simply subtract the pixel values and normalize the absolute value of the subtraction:
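Roughly, that step looks like the following (a simplified sketch; array1 and array2 are placeholders standing in for the two GeoTIFFs read into 2D numpy arrays):

import numpy as np

array1 = np.random.rand(100, 100).astype(np.float32)  # placeholder for the first GeoTIFF
array2 = np.random.rand(100, 100).astype(np.float32)  # placeholder for the second GeoTIFF

# Absolute difference, normalized to the range 0..1.
difference = np.abs(array1 - array2)
normalized = difference / difference.max()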
Since the result will be covered in noise, I use a threshold and remove all pixels with a value below a certain limit.
def threshold(array, threshold_limit):
    print("Threshold...")
    # Keep only the pixels whose value exceeds the limit; everything else becomes 0.
    result = (array > threshold_limit) * array
    return result
This works without a problem. Now comes the issue: when applying the threshold, outliers remain, which is not intended:
What is a good way to remove those outliers?
Sometimes the outliers are small chunks of pixels, like 5-6 pixels together; how could those be removed?
Additionally, the images I use are about 10000x10000 pixels.
I would appreciate all advice!
EDIT:
Both images are Landsat satellite images covering the exact same area.
The difference here is that one image shows cloud coverage and the other one is free of clouds.
The bright snakey line in the top right is part of a river that has been covered by a cloud. Since water bodies like the ocean or rivers are depicted black in those images, the difference between the bright cloud and the dark river results in the river showing a high degree of change.
I hope the following images make this clear:
Source TIFFs:
Subtraction result:
I also tried to smooth the result of the thresholding by using a median filter, but the result was still covered in outliers:
import numpy as np
from scipy.ndimage import median_filter

def filter(array, limit):
    print("Median-Filter...")
    # median_filter already returns an ndarray; just cast it to float32.
    filteredImg = median_filter(array, size=limit).astype(np.float32)
    return filteredImg
I would suggest the following:
Before proceeding, please double-check that the two images are 100% registered. To check that, you should overlay them using e.g. different color channels. Even minimal registration errors can render your task impossible.
Smooth both input images slightly (before the subtraction). For that I would suggest you use standard implementations. Play around with the filter parameters to find an acceptable compromise between smoothness (or reduction of graininess of source image 1) and resolution.
Then try to match the image statistics by applying histogram normalization, using the histogram of image 2 as a target for the histogram of image 1. For this you can also use e.g. the OpenCV implementation.
Subtract the images.
If you then still observe obvious noise, look at the histogram of the subtraction result and see if you can relate the noise to intensity outliers. If you can clearly separate signal and noise based on intensity, apply again a thresholding (informed by your histogram). Alternatively (or additionally), if the noise is structurally different from your signal (e.g. clustered), you could look into morphological operations to remove it.
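A rough sketch of steps 2-4, here using scipy for the smoothing and scikit-image's match_histograms rather than OpenCV for the histogram matching (the sigma value and the random arrays are just placeholders to swap for your data):

import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.exposure import match_histograms

# img1, img2: the two co-registered images as 2D float arrays (placeholders here).
img1 = np.random.rand(512, 512).astype(np.float32)
img2 = np.random.rand(512, 512).astype(np.float32)

# Step 2: smooth both inputs slightly to reduce graininess.
img1_s = gaussian_filter(img1, sigma=1.0)
img2_s = gaussian_filter(img2, sigma=1.0)

# Step 3: match the histogram of image 1 to image 2.
img1_m = match_histograms(img1_s, img2_s)

# Step 4: subtract and take the absolute difference.
diff = np.abs(img1_m - img2_s)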
I have an image (or several hundred of them) that needs to be analyzed. The goal is to find all black spots close to each other.
For example, all black spots within a horizontal distance of 160 pixels and a vertical distance of 40 pixels.
For now I just look at each pixel, and if it is black I call a recursive method to find its neighbours (I can post the code too if you want).
It works, but it's very slow; at the moment the script runs for about 3-4 minutes depending on image size.
Is there some easy/fast way to accomplish this (ideally a scikit-image method)? I'm using Python.
Edit: I tried skimage.measure.find_contours; now I have an array of arrays containing the contours of the black spots. Now I only need to find the contours in the neighbourhood of these contours.
When you get the coordinates of the different black spots, rather than computing all distances between all pairs of black pixels, you can use a cKDTree (in scipy.spatial, http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.cKDTree.html#scipy.spatial.cKDTree). The exact method of cKDTree to use depends on your exact criterion (you can for example use cKDTree.query_ball_tree to know whether there exists a pair of points belonging to two different labels, with a maximal distance that you give).
KDTrees are a great method to reduce the complexity of problems based on neighboring points. If you want to use KDTrees, you'll need to rescale the coordinates so that you can use one of the classical norms to compute the distance between points.
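For example, a small sketch of that rescaling idea with query_ball_tree, using the 160-pixel horizontal / 40-pixel vertical criterion from the question (the coordinate arrays are made-up placeholders):

import numpy as np
from scipy.spatial import cKDTree

# Hypothetical inputs: (row, col) pixel coordinates of two different spots.
coords_a = np.array([[10, 20], [11, 21]])
coords_b = np.array([[35, 150], [400, 400]])

# Rescale so the anisotropic criterion (40 px vertically, 160 px horizontally)
# becomes a unit ball: divide rows by 40 and columns by 160.
scale = np.array([1 / 40, 1 / 160])
tree_a = cKDTree(coords_a * scale)
tree_b = cKDTree(coords_b * scale)

# With the Chebyshev norm (p=inf) and radius 1.0, two pixels match exactly when
# they are within 40 px vertically AND 160 px horizontally of each other.
pairs = tree_a.query_ball_tree(tree_b, r=1.0, p=np.inf)
close = any(len(hits) > 0 for hits in pairs)
print("spots are close:", close)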
Disclaimer: I'm not proficient with the scikit-image library at all, but I've tackled similar problems using MATLAB, so I've searched for the equivalent methods in scikit-image, and I hope my findings below help you.
First you can use skimage.measure.label, which returns label_image, i.e. an image where the pixels of each connected region are labelled with the same number. I believe you should call this function with background=255 because from your description it seems that the background in your images is the white region (hence the value 255).
This is essentially an image where the background pixels are assigned the value 0 and the pixels that make up each (connected) spot are assigned the value of an integer label, so all the pixels of one spot will be labelled with the value 1, the pixels of another spot will be labelled with the value 2, and so on. Below I'll refer to "spots" and "labelled regions" interchangeably.
You can then call skimage.measure.regionprops, that takes as input the label_image obtained in the previous step. This function returns a list of RegionProperties (one for each labelled region), which is a summary of properties of a labelled region.
Depending on your definition of "the goal is to find all black spots close to each other", there are different fields of the RegionProperties that you can use to help solve your problem:
bbox gives you the set of coordinates of the bounding box that contains that labelled region,
centroid gives you the coordinates of the centroid pixel of that labelled region,
local_centroid gives you the centroid relative to the bounding box bbox
(Note there are also area and bbox_area properties which you can use to decide whether to throw away very small spots that you might not be interested in, thus reducing computation time when it comes to comparing proximity of each pair of spots)
If you're looking for a coarse comparison, then comparing the centroid or local_centroid between each pair of labelled regions might be enough.
Otherwise you can use the bbox coordinates to measure the exact distance between the outer bounds of any two regions.
If you want to base the decision on the precise distance between the pixel(s) of each pair of regions that are closest to each other, then you'll likely have to use the coords property.
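A minimal sketch of the label/regionprops route with a coarse centroid comparison (the toy image and the 160/40-pixel criterion are stand-ins taken from the question):

import numpy as np
from skimage import measure

# Toy image: black (0) spots on a white (255) background.
img = np.full((200, 300), 255, dtype=np.uint8)
img[50:60, 50:60] = 0
img[55:65, 180:190] = 0

label_image = measure.label(img, background=255)   # label connected black regions
regions = measure.regionprops(label_image)

# Coarse proximity check: compare the centroids of every pair of regions.
for i in range(len(regions)):
    for j in range(i + 1, len(regions)):
        dy = abs(regions[i].centroid[0] - regions[j].centroid[0])
        dx = abs(regions[i].centroid[1] - regions[j].centroid[1])
        if dx <= 160 and dy <= 40:
            print("regions", regions[i].label, "and", regions[j].label, "are close")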
If your input image is binary, you could separate your regions of interest as follows:
"grow" all the regions by the expected distance (actually half of it, as you grow from "both sides of the gap") with binary_dilation, where the structure is a kernel (e.g. rectangular: http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.rectangle) of, let's say, 20x80pixels;
use the resulting mask as an input to skimage.measure.label to assign different values for different regions' pixels;
multiply your input image by the mask created above to zero dilated pixels.
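A small sketch of those three steps (the footprint size and the toy input are placeholders; half of a 40x160 criterion gives the 20x80 rectangle mentioned above):

import numpy as np
from skimage.measure import label
from skimage.morphology import binary_dilation, rectangle

# Toy binary input: True where the original image is black.
binary = np.zeros((100, 300), dtype=bool)
binary[40:45, 20:30] = True
binary[42:47, 120:130] = True

# Step 1: grow every region by roughly half the allowed gap (20x80 footprint).
grown = binary_dilation(binary, rectangle(20, 80))

# Step 2: label the merged regions.
labels = label(grown)

# Step 3: keep the labels only on the original (non-dilated) pixels.
result = labels * binary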
Here are the results of the proposed method on your image with kernel = rectangle(5, 5):
Dilated binary image (output of step 1):
Labeled version of the above (output of step 2):
Multiplication results (output of step 3):
I have a large stack of images showing a bar with some dark blobs, whose position changes with time (see Figure, b). To detect the blobs I am now using an intensity threshold (c in Figure, where all intensity values below the threshold are set to 1) and then I search for blobs in the binary image using the MATLAB code below. As you can see, the binary image is quite noisy, which complicates the blob-detection process. Do you have any suggestions on how to improve the shape detection, maybe including some machine learning algorithm? Thanks!
Code:
se = strel('disk',1);
se_1 = strel('disk',3);
pw2 = imclose(IM,se);          % close small gaps inside the blobs
pw3 = imopen(pw2,se_1);        % remove small noise specks
pw4 = imfill(pw3, 'holes');    % fill remaining holes inside the blobs
% Consider only the blobs with more than threshold pixels
[L,num] = bwlabel(pw4);
counts = sum(bsxfun(@eq,L(:),1:num));
number_valid_counts = length(find(counts>threshold));
This might help.
Extract texture features of the boundary of the blobs you want to extract. This can be done using local binary patterns (LBP). There are many other texture features; you can get a detailed survey here.
Then use them to train a binary classifier.
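As a hedged illustration of that idea, here is a minimal sketch using scikit-image's local_binary_pattern and an SVM from scikit-learn (the patch size, LBP parameters and random training patches are placeholders; real training data would come from annotated blob-boundary and background regions):

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

# Turn an image patch into a normalized LBP histogram feature vector.
def lbp_features(patch, p=8, r=1):
    lbp = local_binary_pattern(patch, p, r, method="uniform")
    hist, _ = np.histogram(lbp, bins=p + 2, range=(0, p + 2), density=True)
    return hist

# Placeholder training set: random patches standing in for annotated examples.
patches = [np.random.rand(32, 32) for _ in range(20)]
X = np.array([lbp_features(patch) for patch in patches])
y = np.array([0, 1] * 10)   # 0 = background, 1 = blob boundary

clf = SVC().fit(X, y)
prediction = clf.predict([lbp_features(np.random.rand(32, 32))])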
It seems that the data come as pulses in the lower part of the image. I suggest taking some images and slicing vertical lines of pixels perpendicular to the pulse direction. Each strip is one pixel wide and a little taller than the pulse, so that it includes some of the lighter values above and below it; you might, for example, take rows 420-490, saving 70 grey values each time. Those 70 values form the feature vector. Also take lines from the non-blob areas to use as class 2, and do this for several lines from each of several images.
That gives you your training data; you can use any machine learning algorithm to train a model to distinguish pulses from non-pulses.
In the test step, you scan the image, reading 70 pixels vertically each time, and test them against the trained model. Create a new, all-black output image; if a strip belongs to the "blob" class, draw a white vertical line starting a little below the tested pixel, otherwise draw nothing.
At the end of scanning the image, check for isolated white lines, which you can delete as false positives. If you find a dark line within a group of white lines, convert it to white, treating it as a false rejection.
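A small sketch of the strip-extraction part of this idea (the 420-490 row range, the column ranges and the placeholder image are all assumptions; any classifier can then be trained on X and y):

import numpy as np

# Extract one-pixel-wide vertical strips (rows 420-490, i.e. 70 grey values each).
def extract_strips(image, row_start=420, row_end=490, columns=None):
    if columns is None:
        columns = range(image.shape[1])
    return np.array([image[row_start:row_end, c] for c in columns], dtype=np.float32)

img = np.random.randint(0, 256, (600, 800)).astype(np.uint8)  # placeholder image
X_pulse = extract_strips(img, columns=range(100, 200))        # strips over a pulse (class 1)
X_background = extract_strips(img, columns=range(500, 600))   # strips over background (class 2)
X = np.vstack([X_pulse, X_background]) / 255.0   # scale to [0, 1] so no value dominates
y = np.array([1] * len(X_pulse) + [2] * len(X_background))
# X and y can now be fed to any classifier (k-NN, SVM, ...).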
You may use my classifier: https://www.researchgate.net/publication/265168466_Solving_the_Problem_of_the_K_Parameter_in_the_KNN_Classifier_Using_an_Ensemble_Learning_Approach
If you decide to, I will send you code to do it. The distance metric is a problem, because the values vary between 0 and 255, so the light values will dominate the distance. To solve this you may use the Hassanat distance metric at https://www.researchgate.net/publication/264995324_Dimensionality_Invariant_Similarity_Measure,
because it is invariant to scale in the data: each feature contributes a value between 0 and 1, no more, so the highest values will not dominate the final distance.
Good luck
OS X 10.7.5, Python 2.7, GE 7.1.2.2041
I have some .kml data that includes a moderately large number of polygons. Each polygon has an image associated with it. I want to use each image in a <GroundOverlay> mode with its associated polygon.
The raw images are all a bit larger than the polygons. I can easily resize the images with the Python Imaging Library (PIL), but the amount of mis-sizing is not consistent across the entire set: some are only ~5% larger, and some are up to ~20% larger.
What I would like to do is find (or calculate) the approximate size of each polygon in pixels, so that I can use that data to automate the resizing of its associated image.
Any suggestions?
You could use the width and height of the polygon/rectangle in longitude and latitude coordinates. Then use the ratio of that rectangle to the rectangle the image currently covers to scale the image, and it should fit.
Edit: I should note that depending on where your images are going to show up you might need some special math for the dateline (-180 to 180) or prime-meridian (0 to 360).
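A hedged sketch of that idea with PIL, assuming you know the geographic box (west, south, east, north, in degrees) of both the polygon and the raw image (the function name and the example boxes are hypothetical):

from PIL import Image

def resize_to_polygon(image_path, img_box, poly_box, out_path):
    # img_box / poly_box: (west, south, east, north) in degrees.
    img = Image.open(image_path)
    w_ratio = (poly_box[2] - poly_box[0]) / (img_box[2] - img_box[0])
    h_ratio = (poly_box[3] - poly_box[1]) / (img_box[3] - img_box[1])
    new_size = (int(round(img.size[0] * w_ratio)), int(round(img.size[1] * h_ratio)))
    img.resize(new_size, Image.ANTIALIAS).save(out_path)

# e.g. resize_to_polygon("overlay.png", (-105.2, 39.9, -105.0, 40.1),
#                        (-105.15, 39.95, -105.05, 40.05), "overlay_resized.png")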