Scanline Fill Algorithm in Python/Numpy

I have thousands of polygons given their 4 corner coordinates (quadrilaterals) and would like to convert them to a raster representation as a numpy 2d array.
A lot of gridding algorithms exist like the popular scanline fill in graphics. (see http://www.cs.rit.edu/~icss571/filling/how_to.html or http://cs.uvm.edu/~rsnapp/teaching/cs274/lectures/scanlinefill.pdf )
Octave implements this in the poly2mask function (e.g. http://octave.sourceforge.net/image/function/poly2mask.html).
Is there a similar function also in Numpy?
I still don't get how this algorithm works in detail, so I would be very grateful for some hints on how to implement it efficiently in Python/Numpy.
Or would it be better to code it in Cython (which I am not familiar with either) for speed reasons?

There are a few different functions for this in the scipy ecosystem (in no particular order):
1) The most widely-available option is to use matplotlib's points_inside_poly. However, it's very suboptimal for filling a regular grid (i.e. it's an explicit point-in-polygon test for every grid point, rather than a "scanline" approach).
2) mahotas implements a fill_polygon function that's quite efficient: http://mahotas.readthedocs.org/en/latest/polygon.html#drawing
3) skimage (scikits-image) implements a draw.polygon function that should be at least as efficient, if not more so: http://scikit-image.org/docs/dev/api/skimage.draw.html#skimage.draw.polygon
4) Finally, you can also use PIL for this and convert the image to a numpy array. Have a look at the ImageDraw module: http://effbot.org/imagingbook/imagedraw.htm
Overall, I'd recommend installing skimage and using it; it's a very useful library. However, if you can't install scikit-image for some reason, the other options should help.
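For instance, here's a minimal sketch of rasterizing one quadrilateral into a boolean numpy grid with skimage.draw.polygon (the grid size and corner coordinates are made up for illustration):
import numpy as np
from skimage.draw import polygon
# Empty 100x100 raster
grid = np.zeros((100, 100), dtype=bool)
# Corner coordinates of a hypothetical quadrilateral, in (row, col) order
r = np.array([10, 10, 60, 60])
c = np.array([20, 80, 80, 20])
# polygon returns the row/col indices of all pixels inside the polygon
rr, cc = polygon(r, c, shape=grid.shape)
grid[rr, cc] = True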

The OpenCV project also has a polygon fill function: cv2.fillPoly.
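For completeness, a sketch of the OpenCV route (note that cv2.fillPoly expects integer (x, y) vertices and fills the image in place; the mask size and vertices here are made up):
import numpy as np
import cv2
# 8-bit mask to draw into
mask = np.zeros((100, 100), dtype=np.uint8)
# One quadrilateral as an array of (x, y) vertices
quad = np.array([[[20, 10], [80, 10], [80, 60], [20, 60]]], dtype=np.int32)
cv2.fillPoly(mask, quad, 255)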

Related

Is there a Matplotlib equivalent to the Borland C command putpixel?

I am interested in porting some of my old fractal imaging programs over from Borland C to python. In Borland C, the putpixel command would place a specified color pixel within a rasterized graphical field. Is there a simple way to do this in matplotlib?
So the answer depends on what you're trying to do here. matplotlib has a lot of utilities for representing image data, and this might be a good starting point for getting familiar with matplotlib's workflow. You can directly edit the values of the numpy array that you're asking matplotlib to visualize, but matplotlib doesn't give you pixel-level access to the output it renders.
I imagine that you have already written some colormap and other rendering tools, but to get an idea of what matplotlib has built in, I recommend looking at this example. It's a simple escape-time Mandelbrot, but it makes use of nonlinear colormapping and shading.
In my experience, I've normally computed the fractal as a 2D numpy array and then let matplotlib handle the coloring and scaling of the final output image. Matplotlib doesn't offer the canvas-style workflow it sounds like you're used to. I'd recommend filling a numpy array with the desired pixel values as you compute them, and then sending that array off to matplotlib to be rendered.
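A minimal sketch of that putpixel-style workflow (the array size and colormap are arbitrary):
import numpy as np
import matplotlib.pyplot as plt
img = np.zeros((200, 200))
# "putpixel": set individual array cells, here a simple diagonal
for i in range(200):
    img[i, i] = 1.0
plt.imshow(img, cmap='hot', interpolation='nearest')
plt.show()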
After posting this I discovered that there is a putpixel command in PIL (Python Imaging Library), which has tools for dealing with pixel-oriented graphics. Matplotlib can also do the job, as suggested by the answer above.

Smoothing HEALPix maps with `healpy`: Why does the output map appear "patchy"?

I have a HEALPix all-sky map, from the AKARI Far Infrared Surveyor database (publicly released). I have tried to "smooth" the map using healpy, but the result looks very strange. Is there a better way? My question, however, relates to any all-sky HEALPix map (i.e. IRAS, Planck, WISE, WMAP).
My objective is to "smooth" the effective point-spread function of this AKARI map to an angular resolution of 1-degree (the original data has a PSF of about 1 arcminute). This is so that I can compare the far infrared AKARI map to lower resolution microwave maps (specifically, those of the anomalous microwave foreground).
In my example below, I'm using a degraded version of the map so that it'd be small enough to upload to GitHub. This means the pixels are about 3.42 arcminutes. Normally I wouldn't degrade the pixel scale this much before PSF smoothing, but this is just an example:
#Load the packages needed for visualization, and HEALPix processing
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import healpy as hp
import healpy.projector as pro
#Loads the HEALPix .FITS file into an array
map_in = hp.read_map("akari_WideL_1_1024.fits", nest = True)
#Visualizes the all-sky map, before any processing is done.
hp.mollview(map_in, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
#Smooths the map with a 1-degree FWHM Gaussian (fwhm given in radians).
map_out = hp.sphtfunc.smoothing(map_in, fwhm = 0.017, iter = 1)
#Visualizes the map after smoothing
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = True, norm = 'hist')
I have tried the healpy.sphtfunc.smoothing routine (https://healpy.readthedocs.org/en/latest/generated/healpy.sphtfunc.smoothing.html#healpy.sphtfunc.smoothing). As far as I understand, smoothing converts the map into spherical harmonics, convolves with the Gaussian, and then converts it back into a spatial map.
I've saved the ipython notebook as well as the low-res .FITS HEALpix map in a github repository, here:
https://github.com/aaroncnb/healpy_smoothing_test
(You'll need to have the healpy package installed)
By running the code in the notebook, you can easily visualize the trouble I'm having: after smoothing the map, there are some strange "artifacts", as if the pixels had been iteratively box-averaged rather than smoothed with a circular Gaussian profile. What I expect to see is just a blurrier version of the input map.
I think I'm missing something fundamental about the conversion to spherical harmonics, before the smoothing is done.
Has anyone tried to do this kind of all-sky smoothing before, on a HEALPix map?
I believe another option is to convert the map to a standard, rectangular array, and then conduct the smoothing. However I remain curious about solving the problem without leaving the HEALPix format.
It appears smoothing works on RING-ordered maps only (which kind of makes sense to me, since that seems a bit easier to handle mathematically). Thus, you'll need to convert your input map to RING ordering first:
map_ring = hp.pixelfunc.reorder(map_in, inp='NEST', out='RING')
map_out = hp.sphtfunc.smoothing(map_ring, fwhm = 0.017, iter = 1)
hp.mollview(map_out, title='AKARI All-Sky Map:', nest = False, norm = 'hist')
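For reference, the harmonic-space round trip that smoothing performs can be spelled out with healpy's lower-level functions. This is just a sketch of the same operation, assuming the RING-ordered map_ring from above and the imports from the question:
import numpy as np
import healpy as hp
nside = hp.get_nside(map_ring)
# Map -> spherical harmonic coefficients
alm = hp.map2alm(map_ring)
# Multiply the alm by a 1-degree-FWHM Gaussian beam window
alm = hp.smoothalm(alm, fwhm=np.radians(1.0))
# Transform back to a RING-ordered pixel map
map_smooth = hp.alm2map(alm, nside)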
This answer comes from a bit of trial and error, because I can't find anything definitive about it in the documentation, and I haven't dived into the source code (though, given the result above, it should be easy to verify this assumption by looking through the relevant source code).
Or, you may want to ask the healpix/healpy people directly.
(I'd suggest it is in fact a shortcoming in the documentation: the docs for healpy.sphtfunc.smoothing don't mention the required form for the input. I guess that's a healpy issue/PR for another day.)
Btw, bonus points for creating a SSCCE as a notebook file on Github! (Now if only StackOverflow also rendered notebooks.)

Differences in projecting satellite images with cartopy and pyresample

I have written a python script to project and overlay geostationary satellite images from the university of Dundee so the resulting image can be used for xplanet to render the surface of the earth. The source code of the tool can be found at https://github.com/jmozmoz/cloudmap/tree/cartopy (this is the branch with cartopy support)
The tool supports two different Python libraries for projecting the geostationary images onto a flat map: pyresample and cartopy.
I have found the following differences/problems:
pyresample is much faster than cartopy (depending on the size of the output image up to a factor of 10)
The output images differ: The results using pyresample show a stronger contrast.
For examples see the debug directory at https://github.com/jmozmoz/cloudmap/tree/cartopy/debug
If the multiprocessing library is used to do the projection in parallel, the cartopy version crashes with the following error message:
Fatal Python error: PyEval_RestoreThread: NULL tstate
So why is cartopy so much slower? Is pyresample doing the work in C code? Should cartopy support multiprocessing? And how to fix the problem with the contrast?
Thank you for your help
1. pyresample is much faster than cartopy (depending on the size of the output image up to a factor of 10)
The cartopy reprojection functionality hasn't been optimized in any way, and although it uses scipy's cKDTree functionality under the hood, the algorithm itself is written in Python. I seem to remember that a quick win was to use https://pypi.python.org/pypi/kdtree, which from memory gave quite a reasonable speedup with little work; cartopy.img_transform would be the place where changes would be needed.
Cartopy's re-projection functionality is probably also paying the cost of being very general - you can provide an image in any projection, and it will put it into any other projection, dealing with discontinuities and tears without a problem. It would be really cool to hook into pyresample's functionality though (and GDAL's for that matter) to give users the opportunity to speed up the reprojection in certain cases.
2. The output images differ: The results using pyresample show a stronger contrast.
Looks like you're creating a matplotlib figure to resample the image and using mpl's savefig functionality. It is possible that this process is causing the contrast to be lost. I'd advise just using cartopy's reprojection functionality without adding an image to a figure and saving the figure (example at the end).
3. If the multiprocessing library is used to do the projection in parallel, the cartopy version crashes with the following error message:
This really surprised me, as there is no C code in cartopy doing the reprojecting. Therefore you've either found a bug in scipy, or, more likely, you're hitting a problem with numpy/matplotlib (Google brings up a few results for your exception together with matplotlib and/or numpy, e.g. https://github.com/numpy/numpy/issues/1270).
Ok, so here is how I would do the reprojection without using matplotlib at all:
import cartopy.crs as ccrs
from cartopy.img_transform import warp_array
import numpy as np
import PIL.Image

# I've downloaded the file from https://github.com/jmozmoz/cloudmap/blob/78923d15ad906eaa6d1dcab168a6364643d3fc94/debug/2014_8_7_1800_GOES15_4_S1.jpeg
# and clipped the image.
fname = '2014_8_7_1800_GOES15_4_S1.jpeg'
img = PIL.Image.open(fname)

result_array, extent = warp_array(np.array(img),
                                  source_proj=ccrs.Geostationary(),
                                  target_proj=ccrs.PlateCarree(),
                                  target_res=(4000, 2000))

result = PIL.Image.fromarray(result_array)
result.save('reprojected.jpeg')
With the resulting image (eventually) looking something like the geostationary disc unwrapped onto a rectangular plate carrée map.
There are some real possibilities for optimisation with this functionality: quite a large amount of work is done creating the kdtree in the first place (which could potentially be cached), and another large chunk of the work is computing the indices from the original image (again, this caches very well), which would essentially reduce any repeat reprojections to a numpy indexing problem.
If you want to look into the performance possibilities or the contrast issue (which I'm uncertain whether my solution fixes or not) please feel free to open up an issue on the github repo and we can talk through some of the options.
Thanks for asking, and HTH!

Template Matching (Image Search) function in Python Imaging Library

I have a problem where I need to search for a pattern (present as a numpy ndarray) within another image (also present as a numpy ndarray) and compute a template match (the minimum-difference position in the image). My question is: is there any built-in function in the Python Imaging Library, Numpy, or anywhere else that can do this, without me manually writing one?
Thank you.
This is likely best done as an inverse convolution or correlation. Numpy/scipy has code to do both.
edit: including a little example.
Go here for the ipython notebook file: http://nbviewer.ipython.org/4020770/
I made a little Gaussian and then used scipy.signal.correlate2d with the original image and a small subset of it.
You can see that the highest values of the correlation are centered around where the subset of the image was taken. Note that for large kernels or images, this code can take a while (because correlation is expensive).
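An inline sketch of the same idea, with synthetic data so it runs standalone (zero-mean correlation is used here so the peak marks the match rather than just the brightest region):
import numpy as np
from scipy import signal
# Synthetic "image": a 2D Gaussian bump on a 100x100 grid
y, x = np.mgrid[0:100, 0:100]
image = np.exp(-((x - 60)**2 + (y - 40)**2) / 200.0)
# Template: a small patch cut out of the image, centered on (40, 60)
template = image[30:50, 50:70]
# Zero-mean cross-correlation; mode='same' keeps image coordinates
corr = signal.correlate2d(image - image.mean(),
                          template - template.mean(),
                          mode='same')
# The correlation peak marks the template's center in the image
peak = np.unravel_index(np.argmax(corr), corr.shape)
print(peak)  # expect roughly (40, 60)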

Rewriting 2D array of integers to bitmap in Python using PyQT

I want to convert an array of integers into some sort of 'picture' using PyQt (I've decided to do my app in Qt). I have an array like this:
Array = [
[0,0,1,0,0],
[0,1,0,1,0],
[1,0,0,0,1],
[0,1,0,1,0],
[0,0,1,0,0]]
Now I want to render it as a picture by replacing each integer with, for example, a 10x10 pixel square. I have an RGB definition for each value in the array. What's more, this is a kind of game of life, so it must refresh on each step and shouldn't be slow. Maybe something similar to OpenCV?
Thanks in advance!
Cheers,
Mateusz
You could easily do the above with QGraphicsScene and QGraphicsView. In order to get good performance, you'll want to call setViewport(QGLWidget()) on your QGraphicsView instance. Create a subclass of QGraphicsItem to represent an element in your array. You'll then even be able to animate the changes if you want.
If you do want animations or are demonstrating some progression such as in Conway's Game of Life you might also want to take a look at QTimeLine.
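A minimal sketch of the QGraphicsScene approach, written against PyQt5 for convenience (the original answer predates it; PyQt4 exposes the same classes under QtGui). A real game of life would subclass QGraphicsItem as suggested above; this just draws the static grid:
import sys
from PyQt5.QtWidgets import QApplication, QGraphicsScene, QGraphicsView
from PyQt5.QtGui import QBrush, QColor, QPen
from PyQt5.QtCore import Qt

array = [
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
]
# Hypothetical value-to-color mapping
colors = {0: QColor('black'), 1: QColor('white')}

app = QApplication(sys.argv)
scene = QGraphicsScene()
for row, line in enumerate(array):
    for col, value in enumerate(line):
        # One 10x10 pixel square per array element
        scene.addRect(col * 10, row * 10, 10, 10,
                      QPen(Qt.NoPen), QBrush(colors[value]))
view = QGraphicsView(scene)
view.show()
sys.exit(app.exec_())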
You can look up the equivalent python-based documentation on either the PyQt* or PySide websites. Both PyQt and PySide use a nearly identical API so for most everything you can use them interchangeably.
*Note: The PyQt website is inaccessible at the time of this writing
You should probably use Qt's graphics libraries for performance. Another, maybe simpler, way could be to use PIL (Python Imaging Library) or some Python bindings to the ImageMagick or MagickWand libraries (I haven't found a good, current one), using NumPy arrays for the calculations and manipulation, and drawing on a surface or canvas with PyGame, Qt, or some other GUI toolkit.
In PIL there is PIL.Image.fromarray(np_array, 'RGBA'), which reads suitable NumPy arrays – the datatype must usually be dtype=uint8 and the shape is (height, width, n_channels).
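For example, a minimal sketch of that route for the 5x5 array above (the palette is made up; np.repeat scales each cell to a 10x10 pixel block):
import numpy as np
from PIL import Image
grid = np.array([
    [0, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
], dtype=np.uint8)
# Hypothetical palette: value 0 -> black, value 1 -> white
palette = np.array([[0, 0, 0], [255, 255, 255]], dtype=np.uint8)
rgb = palette[grid]  # shape (5, 5, 3)
# Blow each cell up to a 10x10 block of pixels
rgb = np.repeat(np.repeat(rgb, 10, axis=0), 10, axis=1)
Image.fromarray(rgb, 'RGB').save('grid.png')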
For a very simple graphics format that uses ascii byte values, see NetPBM.
