I am implementing a small playground tool to understand polynomial interpolation, Bézier curves, B-splines...
I tried using tk, but the quality of the generated lines is pretty bad.
You can see there's a lot of aliasing and noise along the curve, and the width is not very consistent. It does its job, but I would like my lines to look more like this:
i.e. much less aliasing and a more consistent curve width.
Any suggestions?
EDIT:
For the people voting to close this question because it's "opinion based": line quality isn't opinion-based. Anti-aliasing is a well-defined family of techniques commonly used to improve line rendering. The screenshot shows that tk does not anti-alias its lines, so there are objective ways to improve the quality of the line. The question is whether there are other Python tools that make the effort to render lines properly.
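For reference, matplotlib's Agg backend anti-aliases curves by default; here is a minimal Bézier sketch, just to show the rendering quality I mean (not necessarily the right tool for an interactive app):

    import matplotlib.pyplot as plt
    import matplotlib.patches as patches
    from matplotlib.path import Path

    # One cubic Bezier segment: a MOVETO followed by three CURVE4 control points.
    verts = [(0.1, 0.1), (0.3, 0.9), (0.7, 0.9), (0.9, 0.1)]
    codes = [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4]

    fig, ax = plt.subplots()
    ax.add_patch(patches.PathPatch(Path(verts, codes),
                                   facecolor='none', linewidth=2))
    plt.show()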
I've used a Kirsch filter to try to obtain the blood vessels, but the result isn't the best, as shown below:
Although the vessels have been obtained, they aren't bright enough. How do I go about making them 'more visible'?
I worked on retina vessel detection for a while a few years ago, and there are different ways to do it:
If you don't need a top result but something fast, you can use oriented openings, see here and here.
There is also another version using mathematical morphology here.
For better results, here are some ideas:
Personally, I used a combination of Gabor filters, and the results were pretty good. See the segmentation result here on the first image of DRIVE.
And Gabor filters can be combined with learning for a good result (see here, or here).
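As a rough illustration of the Gabor approach (the kernel parameters here are illustrative starting points, not the tuned values from the papers linked above):

    import cv2
    import numpy as np

    def gabor_vessel_response(gray):
        # Max response over a small bank of oriented Gabor kernels.
        responses = []
        for theta in np.arange(0, np.pi, np.pi / 8):  # 8 orientations
            kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0,
                                        theta=theta, lambd=8.0,
                                        gamma=0.5, psi=0)
            responses.append(cv2.filter2D(gray, cv2.CV_32F, kernel))
        return np.max(responses, axis=0)  # strongest orientation wins per pixel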
A few years ago, they claimed to have the best algorithm, but I've never had the opportunity to test it. I was skeptical about the performance gap, and the way they thresholded the line-detector results was kind of obscure.
I know that nowadays many people try to tackle the problem using CNNs, but I haven't heard of significant improvements.
[EDIT] To answer your specific question: you can erase the bright ring and then apply histogram stretching. But I think the methods I introduced above will work better than the filter you are using.
It looks like the solution to your problem is histogram equalization (we had the same problem for a homework assignment):
http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0
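A minimal OpenCV sketch of that idea; CLAHE (the adaptive variant covered in the tutorial) tends to work better than plain equalizeHist on retina images, and the green channel usually carries the best vessel contrast (the file names are placeholders):

    import cv2

    img = cv2.imread('retina.png')
    green = img[:, :, 1]  # green channel: best vessel contrast in fundus images

    # Contrast Limited Adaptive Histogram Equalization
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)

    cv2.imwrite('vessels_enhanced.png', enhanced)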
I have loaded an OBJ file to render my OpenGL model using PyOpenGL and Pygame. The 3D model shows successfully.
Below is the 3D model I render from the OBJ file. Now I want to cut my model into ten pieces along the Y axis; my question is how to get the sectional drawing of each piece.
I'm really very new to OpenGL. Is there any way to do that?
There are two ways to do this and both use clipping to "slice" the object.
In older versions of OpenGL you can use user clip planes to "isolate" the slices you desire. You probably want to rotate the object before you clip it, but that's unclear from your question. You will need to call glClipPlane() and enable each plane using glEnable with the argument GL_CLIP_PLANE0, GL_CLIP_PLANE1, ...
If you don't understand what a plane equation is you will have to read up on that.
In theory you should check to see how many user clip planes exist on your GPU by calling glGetIntegerv with argument GL_MAX_CLIP_PLANES but all GPUs support at least 6.
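A rough, untested sketch of the legacy path with PyOpenGL (it assumes you already have a current GL context and your own draw_model callable):

    from OpenGL.GL import (glClipPlane, glEnable, glDisable,
                           GL_CLIP_PLANE0, GL_CLIP_PLANE1)

    def draw_slice(y_bottom, y_top, draw_model):
        # A plane (a, b, c, d) keeps points where a*x + b*y + c*z + d >= 0.
        glClipPlane(GL_CLIP_PLANE0, (0.0, 1.0, 0.0, -y_bottom))  # keep y >= y_bottom
        glClipPlane(GL_CLIP_PLANE1, (0.0, -1.0, 0.0, y_top))     # keep y <= y_top
        glEnable(GL_CLIP_PLANE0)
        glEnable(GL_CLIP_PLANE1)
        draw_model()
        glDisable(GL_CLIP_PLANE0)
        glDisable(GL_CLIP_PLANE1)

Calling draw_slice() once per slab, moving the camera between calls, gives you one render per piece.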
Since user clip planes are deprecated in modern Core OpenGL you will need to use a shader to get the same effect. See gl_ClipDistance[]
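A rough sketch of the modern equivalent, as a GLSL vertex shader embedded in Python (the uniform names are made up for illustration; enable GL_CLIP_DISTANCE0 on the Python side):

    VERTEX_SHADER = """
    #version 330 core
    layout(location = 0) in vec3 position;
    uniform mat4 u_mvp;
    uniform vec4 u_clip_plane;  // plane equation (a, b, c, d)

    void main() {
        gl_Position = u_mvp * vec4(position, 1.0);
        // Positive distance = kept, negative = clipped away.
        gl_ClipDistance[0] = dot(u_clip_plane, vec4(position, 1.0));
    }
    """
    # e.g. glEnable(GL_CLIP_DISTANCE0) after binding the shader program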
Searching around on Google should get you plenty of examples for either of these.
Beyond rough sketches like the ones above, I don't like to post code unless I am 100% sure it works, and I don't have the time right now to check it. However, I am 100% sure you can easily find some great examples on the internet.
Finally, if you can't make it work with clip planes and some hacks to make the cross sections visible then this may indeed be complicated because creating closed cross sections from an existing model is a hard problem.
You would need to split the object, and then rotate the pieces so that they are seen from the side. (Or move the camera. The two ideas are equivalent. But if you're coding this from scratch, you don't really have the abstraction of a 'camera'.) At that point, you can just render all the slices.
This is complicated to do in raw OpenGL and python, essentially because objects in OpenGL are not solid. I would highly recommend that you slice the object into pieces ahead of time in a modeling program. If you need to drive those operations with scripting, perhaps look into Blender's python scripting system.
Now, to explain why:
When you slice a real-life orange, you expect to get cross sections. You expect to be able to see the flesh of the fruit inside, with all those triangular pieces.
There is nothing inside a standard polygonal 3D model.
Additionally, as the rind of a real orange has thickness, it is possible to view the rind from the side. In contrast, one face of a 3D model is infinitely thin, so when you view it from the side, you will see nothing at all. So if you were to render the slices of this simple model, from the side, each render would be completely blank.
(Well, the bits at the end will have 'caps', like the ends of a loaf of bread, but the middle sections will be totally invisible.)
Without a programming library that has a conception of what a cut is, this will get very complicated, very fast. Simply making the cuts is not enough. You must seal up the holes created by slicing into the original shape, if you want to see the cross-sections. However, filling up the cross sections has to be done intelligently, otherwise you'll wind up with all sorts of weird shading artifacts (fyi: this is caused by n-gons, if you want to go discover more about those issues).
To return to the original statement:
Modeling programs are designed to address problems such as these, so I would suggest you leverage their power if possible. Or at least, you can examine how Blender implements this functionality, as it is open source.
In Blender, you could make these cuts with the knife tool*, and then fill up the holes with the 'make face' command (just hit F). Very simple, even for those who are not great at art. I encourage you to learn a little bit about 3D modeling before doing too much 3D programming. It personally helped me a lot.
*(The loop cut tool may do the job as well, but it's hard to tell without understanding the topology of your model. You probably don't want to get into understanding topology right now, so just use the knife)
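For completeness, a hypothetical bpy sketch of the same cut-and-fill idea on the active mesh (the plane values are placeholders):

    import bpy

    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.bisect(plane_co=(0.0, 1.0, 0.0),  # a point on the cutting plane
                        plane_no=(0.0, 1.0, 0.0),  # plane normal (cut across Y)
                        use_fill=True,             # seal the cross-section
                        clear_inner=True)          # discard one side of the cut
    bpy.ops.object.mode_set(mode='OBJECT')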
I made a heightmap generator which uses gradient/value noise to generate terrain. The problem is that the heightmap is too chaotic to look realistic.
Here's what I am talking about:
Here's the map without the colors:
I used a 257x257 grid of blocks with 17x17 gradients.
As you can see, there are too many islands, as well as some random small beach islands in the middle of the ocean.
Also, there are a lot of sharp edges, especially in the mountain terrain (dark gray).
What I would like is a smoother and less chaotic terrain, such as a large island, etcetera. How do I do that?
In games, the most common noise generator for textures and heightmaps is Perlin noise.
I can't tell from your question whether you actually want to create the noise generator yourself or just use one in your application.
If you are looking to create your own Perlin Noise Generator, this would be a good starting point.
I would however recommend using the noise (https://pypi.python.org/pypi/noise/) library available through pip using:
pip install noise
You can then use the noise.snoise2(x, y, octaves, persistence, lacunarity) function and fiddle with the different parameters.
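For example, a sketch that fills a grid like the 257x257 one above (the values are just starting points to fiddle with):

    import numpy as np
    from noise import snoise2

    size = 257
    freq = 1.0 / 64.0  # lower frequency = smoother, larger features

    heightmap = np.zeros((size, size))
    for y in range(size):
        for x in range(size):
            heightmap[y, x] = snoise2(x * freq, y * freq,
                                      octaves=4,
                                      persistence=0.5,
                                      lacunarity=2.0)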
I would recommend reading this article: http://simblob.blogspot.ch/2010/01/simple-map-generation.html if you want to learn more about terrain generation.
Look at this article where Amit walks through some map generation techniques. He even has sample code online.
In the article, he takes Perlin noise as a randomization parameter for his terrain generator, but doesn't use it as the whole generator. The result looks really good. (I'd post a picture of the result, but I don't know about copyright issues just yet.)
While you're at it: Amit has written and curated material on game programming for years and years. Here and here are a few more articles of his on the subject. I hope this doesn't become a time sink for you; I've certainly spent many hours on his blog. :)
(PS. I prefer simplex noise over Perlin noise. Same inventor, simpler implementation, and it looks better to me.)
From what I see, your sample may lack octaves and interpolation.
Depending on the implementation you are using, you may play with octave number, frequency, persistence / lacunarity, various interpolation techniques, etc...
Try playing / mixing with turbulence too (an easy way to add fancy features to your heightmaps).
Many simplex noise implementations (also Ken Perlin's invention, but it scales better and faster in higher dimensions) expose a fairly complete set of parameters to play with when generating your heightmaps.
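If your implementation doesn't expose those knobs, a hand-rolled fractional-Brownian-motion sketch shows where each one enters (here using the noise library's snoise2 as the base sample; the normalization is approximate):

    from noise import snoise2

    def fbm(x, y, octaves=6, persistence=0.5, lacunarity=2.0, turbulence=False):
        total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
        for _ in range(octaves):
            sample = snoise2(x * frequency, y * frequency)
            # Turbulence sums absolute values, giving ridged/billowy features.
            total += amplitude * (abs(sample) if turbulence else sample)
            norm += amplitude
            amplitude *= persistence  # each octave contributes less...
            frequency *= lacunarity   # ...but at a finer scale
        return total / norm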
I'm trying to locate a (possibly perspective-deformed) book in an image and extract it so that it is "straight" and "front-on" (i.e. perspective-corrected).
The particular book is unknown -- there is no query or reference image to check for matches against (i.e. by some sort of feature descriptor matching process). In other words, I'm trying to hunt through the image and find a bunch of pixels that look like they belong to the object class "book", not a particular book.
The book may be somewhat rotated or otherwise perspective-deformed. However, it is assumed the amount of deformation is within fairly reasonable bounds: the person taking the photo is working "with" me. This means as well that the book should feature prominently in the image -- perhaps 30-90% of total image area (and not as some random item amidst a bunch of other clutter).
Good resources exist for (superficially) similar problems online. For example, this well-written tutorial covers automatic perspective-correction of playing cards: https://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/.
Currently, the system follows a loosely similar process as this tutorial, with some additions. The general technique stack is:
Pre-processing
Find edges with Canny edge detection
Find edges that look like lines with Hough transform
Find intersection points between lines in the hope of finding book corners
Filter out implausible lines and intersection points based on simple geometric properties
Take convex hull of intersection points
Get polygon approximation to the convex hull and use this to get four corners
Apply perspective/homographic transform
The output points (used to calculate the perspective transform) are known because we assume a known aspect ratio (i.e. book dimensions).
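That final warp step looks roughly like this in OpenCV (the corner ordering and the A4-ish aspect ratio are assumptions for illustration):

    import cv2
    import numpy as np

    def straighten_book(image, corners, aspect=210.0 / 297.0):
        # corners: top-left, top-right, bottom-right, bottom-left (pixel coords)
        h = 800                  # output height in pixels (arbitrary)
        w = int(h * aspect)
        dst = np.float32([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]])
        H = cv2.getPerspectiveTransform(np.float32(corners), dst)
        return cv2.warpPerspective(image, H, (w, h))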
It works for some images where the book is against fairly homogeneous backgrounds (around 1/3 to 1/2 of "nicer" images). After experimenting with the fairly dumb convex hull approach as well as a more involved quadrilateral-enumeration approach, I've concluded that the problem may be impossible using just geometric/spatial information alone -- it would probably need augmenting with colour/texture information (well, this is obvious when you consider the case of 180 degrees rotation/upside-down books).
The obvious challenge is that there is an almost infinite variety of possible book covers, and an almost infinite variety of possible backgrounds. Therefore, solving for the general case would be impossible or at least intractably hard. I knew this when I began the task. But, I hoped it would be the sort of problem that may have a solution enough of the time.
Other approaches I've considered looking at include OCRing the titles/text to work out orientation or possibly general position. The other approach that might conceivably be fruitful is some sort of learning-based classifier.
A related subtask I'm working on is the same goal but in a webcam video stream. This is definitely easier since I can use temporal information (i.e. position across frames). I just started this one yesterday but, after some initial progress, plateaued. A human holding the book generates background movement noise which throws off trivial approaches like frame differencing / background subtraction. Compared with the static image problem, however, I feel this is far more doable.
Sorry if that was a little long-winded. I wanted to make sure I made a sincere effort to articulate the problem(s). What do people think? Anyone have any thoughts as to how these problems might best be tackled?
Does calculating the homography from 4 lines instead of 4 points help the problem? As you probably know, if points are related as p2 = H p1, then lines are related as l2 = H^-T l1. The lines on the book border should be quite prominent, especially if the deformation is not large. Is your main problem selecting the right lines (you did not actually say what your problem was)? Maybe some kind of Hough-rectangle detector can help to find the lines?
Anyway, selecting lines as homography input has an additional advantage: a RANSAC homography with a constraint on aspect ratio is likely to keep the right lines as inliers in the presence of numerous outliers from the background. And if outliers do sneak in, they probably look like another book.
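In OpenCV terms, once you have candidate correspondences (however you enumerate them), the RANSAC fit itself is short; this sketch assumes point correspondences between detected intersections and the ideal rectangle corners:

    import cv2
    import numpy as np

    def fit_book_homography(candidate_points, model_points):
        src = np.float32(candidate_points).reshape(-1, 1, 2)  # detections in the image
        dst = np.float32(model_points).reshape(-1, 1, 2)      # ideal rectangle corners
        # 3.0 px reprojection threshold separates inliers from background outliers
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, mask.ravel().astype(bool)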
I'm using Python in an attempt to analyse a large chunk of empirical measurements. In essence, I have two functions transforming the empirical data; each also takes 3 'count' parameters and returns a sequence of floats for each configuration. I'm expecting (hoping) to see some interesting patterns emerge when appropriate parameters are selected. I anticipate that the patterns might be relative between sequences returned for each function - and/or relate to patterns of some kind in the parameters. In case it's relevant, the 3 'count' parameters roughly correspond to:
A 'window size' on the underlying data over which summary statistics are calculated
A number of consecutive windows used to compute a single summary statistic (i.e. the trade-off between greater spatial or greater temporal accuracy)
A 'minimum age' - an offset into the history of the underlying data.
The summary statistics (which generate the resulting sequences of floats for each parameter configuration) are non-trivial but will be independently sensitive to all three parameters.
I'm interested in visualisation techniques - suited to RAD/ad-hoc enquiry that will help me experiment with this multi-dimensional data.
So far, I've tinkered with matplotlib, but I find that being restricted to generating 2- or 3-dimensional graphs in a batch-processing style makes investigation very tedious. Ideally, I'd find a tool that would allow me to visualise more than two dimensions... perhaps allowing me to switch in real time between dimensions in an interactive GUI.
I'd really appreciate hints from any visualisation gurus as to suitable tools I should investigate - ideally to integrate with my existing Python functions - or in other languages. I'd especially like to hear any anecdotes of success with similar visualisation problems.
EDIT to add: One possible approach I'm considering is to use animation on 2 or 3D plots (to capture another dimension... leaving 1 or 2 for manual selection)... though I've found no good tools to help me achieve this, yet.
RGL is a visualization device system for R, using OpenGL as the rendering backend. An rgl device at its core is a real-time 3D engine written in C++. It provides an interactive viewpoint navigation facility (mouse + wheel support) and an R programming interface.
GGobi is an open source visualization program for exploring high-dimensional data. It provides highly dynamic and interactive graphics such as tours, as well as familiar graphics such as the scatterplot, barchart and parallel coordinates plots. Plots are interactive and linked with brushing and identification.
There's a tutorial that covers both of the above systems here.
RPy is a very simple, yet robust, Python interface to the R Programming Language. It can manage all kinds of R objects and can execute arbitrary R functions (including the graphic functions). All errors from the R language are converted to Python exceptions. Any module installed for the R system can be used from within Python.
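RPy itself is quite dated now; with its successor rpy2, the round trip looks roughly like this (assuming the relevant packages, e.g. rgl, are installed on the R side):

    from rpy2 import robjects

    # Execute arbitrary R code from Python, graphics included.
    robjects.r('library(rgl)')
    robjects.r('plot3d(rnorm(100), rnorm(100), rnorm(100))')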
You might want to look at outputting SVG with animation, in which case this question might interest you. I suspect the animation aspects will require a lot of work on your part. Another option is maybe visualizing the data as a graph, although I don't know enough about your data to know whether this would be useful to you. If it is, Cytoscape is Python-scriptable.
If all you want is an animated surface, then gnuplot can do it. A quick intro on it can be found here, or from the gnuplot FAQ. More detail can obviously be found in the gnuplot docs.
You could try guiqwt. It's aimed at 2D graphs, but targets interactive plots more specifically (as opposed to matplotlib, although the latter can handle some degree of interaction too). From the guiqwt documentation:
Overview
Based on PyQwt (plotting widgets for PyQt4 graphical user interfaces)
and on the scientific modules NumPy and SciPy, guiqwt is a Python
library providing efficient 2D data-plotting features (curve/image
visualization and related tools) for interactive computing and
signal/image processing application development.
Performances
The most popular Python module for data plotting is currently
matplotlib, an open-source library providing a lot of plot types and
an API (the pylab interface) which is very close to MATLAB’s plotting
interface.
guiqwt plotting features are quite limited in terms of plot types
compared to matplotlib. However the currently implemented plot types
are much more efficient. For example, the guiqwt image showing
function (guiqwt.pyplot.imshow()) do not make any copy of the
displayed data, hence allowing to show images which are much larger
than with its matplotlib's counterpart. In other terms, when showing a
30-MB image (16-bits unsigned integers for example) with guiqwt, no
additional memory is wasted to display the image (except for the
offscreen image of course which depends on the window size) whereas
matplotlib takes more than 600-MB of additional memory (the original
array is duplicated four times using 64-bits float data types).
(I haven't tried it, so I can't comment on these claims.)
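Still, an untested sketch of the pylab-style interface the quote mentions (guiqwt.pyplot mirrors matplotlib's naming):

    import numpy as np
    from guiqwt import pyplot as plt

    data = np.random.rand(512, 512).astype(np.float32)
    plt.imshow(data)  # displayed without copying the array, per the docs above
    plt.show()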
Okay, now that I understand your data I can definitely suggest a method of visualisation. A coloured 3D surface density plot. Use a0, a1 and a2 as standard x,y,z axes, use a3 as the time axis, and plot different colours over a monochromatic range (or cold to hot). That way the only thing that needs an interactive slider is a3.
As far as tools to do this are concerned:
I don't know whether gnuplot can do colour density plots; if it can, this is your best bet. Generate a set of GIFs across the domain of a3, use ImageMagick to make a single animated GIF out of them, then use an animated GIF editor that allows you to move back and forth between frames.
Again, with matplotlib, I'm not certain whether it is possible to do colour density plots.
SVG can definitely do everything you need to do, including the animation aspects, but as I've said before, it is going to be a lot of hard work.
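With matplotlib specifically, a 3D scatter with colour as the fourth dimension is at least straightforward; this sketch fakes the data and leaves the a3 animation to a redraw loop:

    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # noqa: needed on older matplotlib

    a0, a1, a2 = np.random.rand(3, 500)  # placeholder data for the three axes
    a3 = np.random.rand(500)             # fourth dimension, mapped to colour

    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    sc = ax.scatter(a0, a1, a2, c=a3, cmap='coolwarm')
    fig.colorbar(sc, label='a3')
    plt.show()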
Sounds like Mayavi might fit your needs. It is written in Python, can be used interactively and supports 3D graphs and animations. You can have a look at this tutorial to see if it fits your needs.
I have done an interactive 3D visualization with animation in Python using the older version 1 of mayavi, see this page.
Edit
Unfortunately, most Mayavi examples show off too much advanced functionality. Here are two examples that demonstrate more basic applications. If these two do not fit your needs, then Mayavi may not be a good choice in your case. My understanding is that you have arrays of floats that you want to visualize.
Example 1
Here is a specific example from the older page on what you can do with a 3D array of floats: 3D data example. This example shows the use of isocontour surfaces, one solid cut plane through the data and another cut plane with isocontour lines. You can interactively move the cut planes around or choose different visualization tools. (In my case I had added another dimension and an animation that presented the data as 3D-cube slices through the hypercube.)
Example 2
Here is another example of what a more "conventional" plot with Mayavi could look like: Fourier transform example. This is quite similar to what the many other plotting libraries do.
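A minimal, untested Mayavi 2 sketch in the same spirit as those examples:

    import numpy as np
    from mayavi import mlab

    x, y = np.mgrid[-3:3:100j, -3:3:100j]
    z = np.sin(x * y)
    mlab.surf(x, y, z)  # interactive 3D surface; see also contour3d/volume
    mlab.show()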
Go download a free trial of Tableau (www.tableausoftware.com). It will encode your data on X, Y, size, color and shape, and you can create small multiples from any other dimensions you have -- i.e. you can look at lots of dimensions at once. You can try lots and lots of visualizations very rapidly. There is free training on the company website.
Disclaimer: I work for them.
The simplest visualization for 3+ dimensions is a bubble chart or motion chart. On top of the x and y axes, you can use bubble size and bubble color for the extra dimensions.
Google Visualization (http://code.google.com/apis/chart/interactive/docs/gallery/motionchart.html) and its Google Spreadsheets interactive mode give a simple interface for playing with which of the dimensions goes on which axis / size / color.
It is not aimed at handling too many data points, but you can use it to identify patterns on samples of the data with ease.