Generate terrain with less chaotic noise [closed] - python

I made a heightmap generator which uses gradient/value noise to generate terrain. The problem is that the height map is too chaotic to look realistic.
Here's what I am talking about:
Here's the map without the colors:
I used a 257x257 grid of blocks with 17x17 gradients.
As you can see, there are too many islands, including random small beach islands in the middle of the ocean.
There are also a lot of sharp edges, especially in the mountain terrain (dark gray).
What I would like is a smoother, less chaotic terrain, such as one large island. How do I do that?

In games, the most common noise generator for textures and heightmaps is Perlin noise.
It isn't clear from your question whether you want to write the noise generator yourself or use an existing one in your application.
If you are looking to create your own Perlin noise generator, this would be a good starting point.
I would, however, recommend the noise library (https://pypi.python.org/pypi/noise/), installable through pip:
pip install noise
You can then use the noise.snoise2(x, y, octaves, persistence, lacunarity) function and fiddle with the different parameters.
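For example, here is a minimal sketch of a full heightmap built that way; the grid size matches yours, but the scale, octave, persistence, and lacunarity values are only illustrative starting points to experiment with:

```python
# Minimal heightmap sketch using the noise library's 2D simplex noise.
# scale/octaves/persistence/lacunarity are illustrative starting values.
import noise

width, height = 257, 257
scale = 64.0        # bigger scale -> broader, smoother features
octaves = 4         # layers of detail
persistence = 0.5   # amplitude falloff per octave
lacunarity = 2.0    # frequency growth per octave

heightmap = [
    [noise.snoise2(x / scale, y / scale,
                   octaves=octaves,
                   persistence=persistence,
                   lacunarity=lacunarity)
     for x in range(width)]
    for y in range(height)
]
# Values fall roughly in [-1, 1]; threshold them into water/sand/grass/rock.
```

Raising the scale (and keeping only a few octaves) is what gives you a few large landmasses instead of many tiny chaotic islands.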
If you want to learn more about terrain generation, I recommend reading this article: http://simblob.blogspot.ch/2010/01/simple-map-generation.html

Look at this article where Amit walks through some map generation techniques. He even has sample code online.
In the article, he uses Perlin noise as a randomization parameter for his terrain generator rather than as the whole generator. The result looks really good. (I'd post a picture of the result, but I'm unsure about copyright.)
While you're at it: Amit has written and curated articles on game programming for years and years. Here and here are a few more of his articles on the subject. I hope this doesn't become a time sink for you; I've certainly spent many hours on his blog. :)
(PS. I prefer simplex noise over Perlin noise: same inventor, simpler implementation, and it looks better to me.)

From what I can see, your sample may lack octaves and interpolation.
Depending on the implementation you are using, you can play with the number of octaves, the frequency, persistence/lacunarity, various interpolation techniques, and so on.
Try adding turbulence too; it's an easy way to add interesting features to your height maps.
Many simplex noise implementations (simplex is by Ken Perlin too, but it scales better and faster in higher dimensions) expose a fairly complete set of parameters to play with when generating height maps.
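To make the octave idea concrete, here is a minimal fractional Brownian motion (octave-summing) sketch over an arbitrary 2D base noise function; the function name, parameter names, and defaults are all illustrative:

```python
# Minimal fBm sketch: sum several octaves of any 2D base noise function
# base_noise(x, y) -> float in [-1, 1]. Names are illustrative.
def fbm(base_noise, x, y, octaves=4, persistence=0.5, lacunarity=2.0):
    total, amplitude, frequency, max_amp = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * base_noise(x * frequency, y * frequency)
        max_amp += amplitude
        amplitude *= persistence  # each octave adds less height...
        frequency *= lacunarity   # ...at a finer level of detail
    return total / max_amp        # normalize back to roughly [-1, 1]
```

The low-frequency first octave sets the overall island shapes; the higher octaves only add detail, which is what removes the sharp, chaotic look.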

Related

Path detection and progress in a maze with a live stereo 3D image

I'm building a UGV (unmanned ground vehicle) prototype. The goal is to perform the desired actions on targets placed inside a maze. From what I've found online, maze navigation is usually done with a distance sensor, but I'd like to gather more ideas than that.
I want to navigate the maze by analyzing the images from a stereo 3D camera. Is there a resource or proven method you can suggest for this? As a secondary problem, the vehicle must start in front of the maze entrance, find the entrance and drive in, and then leave the maze after completing its tasks inside.
I would be glad if you could suggest a source for this problem. :)
The problem description is a bit vague, but I'll try to highlight some general ideas.
A useful assumption is that the labyrinth is a 2D environment you want to explore. At every moment you need to know which part of the map has been explored, which part still needs exploring, and which part is accessible at all (in other words, where the walls are).
An easy initial data structure for this is a simple matrix, where each cell represents a square in the real world. Each cell is labelled according to its state, starting as unexplored. Then you start moving and exploring; based on the distances reported by the camera, you can estimate the state of each cell. The exploration can be guided by something such as A* or Q-learning.
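A minimal sketch of such an occupancy grid; the cell states, grid size, and function names are illustrative, not prescriptive:

```python
# Minimal occupancy-grid sketch; states and grid size are illustrative.
from enum import Enum

class Cell(Enum):
    UNEXPLORED = 0
    FREE = 1
    WALL = 2

GRID_SIZE = 50  # assumed maze resolution, in cells
grid = [[Cell.UNEXPLORED] * GRID_SIZE for _ in range(GRID_SIZE)]

def observe(x, y, is_wall):
    """Update one cell from a stereo-depth observation."""
    grid[y][x] = Cell.WALL if is_wall else Cell.FREE
```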
Now, a rather subtle issue is that you will have to deal with uncertainty and noise. Sometimes you can ignore it, sometimes you can't; the finer the resolution you need, the bigger the issue becomes. A probabilistic framework is most likely the best solution.
There is an entire field of research around so-called SLAM algorithms. SLAM stands for simultaneous localization and mapping: these algorithms build a map from the input of various types of cameras or sensors, and while building the map they also solve the localization problem within it. They are usually designed for 3D environments and are more demanding than the simpler solution above, but you can find ready-to-use implementations. For exploration, something like Q-learning still has to be used on top.

Which python canvas API has the best line quality?

I am implementing a small toy tool to understand polynomial interpolation, Bézier curves, B-splines, and so on.
I tried using Tk, but the quality of the generated lines is pretty bad.
You can see there's a lot of aliasing and noise along the curve, and the width is not very consistent. It does its job, but I would like my lines to look more like this:
i.e. much less aliased, with a consistent curve width.
Any suggestions?
EDIT:
For the people voting to close this question as "opinion based": line quality isn't opinion based. Anti-aliasing is a well-defined family of techniques commonly used to improve line rendering, and the screenshot shows that Tk does not anti-alias its lines. So there are objective ways to improve line quality; the question is whether other Python tools make the effort to render lines properly.

Having extracted the blood vessels, how to brighten them?

I've used a Kirsch filter to try to extract the blood vessels, but the result isn't the best, as shown below:
Although the vessels have been extracted, they aren't bright enough. How do I go about making them 'more visible'?
I worked on retinal vessel detection a few years ago, and there are different ways to do it:
If you don't need a top result but something fast, you can use oriented openings; see here and here.
There is also a version based on mathematical morphology here.
For better results, here are some ideas:
Personally, I used a combination of Gabor filters, and the results were pretty good. See the segmentation result here on the first image of the DRIVE dataset.
Gabor filters can also be combined with learning for a good result, as here or here.
A few years ago, they claimed to have the best algorithm, but I never had the opportunity to test it. I was skeptical about the performance gap, and the way they thresholded the line-detector results was somewhat obscure.
But I know that nowadays many people try to tackle the problem with CNNs, though I haven't heard of significant improvements.
[EDIT] To answer your specific question: you can erase the bright ring and then apply histogram stretching. But I think the methods introduced above will work better than the filter you are using.
It looks like the solution to your problem is histogram equalization (we had the same problem for homework):
http://docs.opencv.org/3.1.0/d5/daf/tutorial_py_histogram_equalization.html#gsc.tab=0
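For illustration, a minimal OpenCV sketch of global histogram equalization, plus CLAHE (adaptive equalization), which often works better on retinal images; the filename and parameter values are placeholders:

```python
# Minimal contrast-enhancement sketch with OpenCV;
# filename and parameter values are placeholders.
import cv2

img = cv2.imread("vessels.png", cv2.IMREAD_GRAYSCALE)

equalized = cv2.equalizeHist(img)  # global histogram equalization

# CLAHE equalizes locally, tile by tile, which tends to suit
# retinal images with an uneven illumination better.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)
cv2.imwrite("vessels_enhanced.png", enhanced)
```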

Locating and extracting (unknown) book from an image in OpenCV [closed]

I'm trying to locate a (possibly perspective-deformed) book in an image and extract it so that it is "straight" and "front-on" (i.e. perspective-corrected).
The particular book is unknown -- there is no query or reference image to check for matches against (i.e. by some sort of feature descriptor matching process). In other words, I'm trying to hunt through the image and find a bunch of pixels that look like they belong to the object class "book", not a particular book.
The book may be somewhat rotated or otherwise perspective-deformed. However, it is assumed the amount of deformation is within fairly reasonable bounds: the person taking the photo is working "with" me. This means as well that the book should feature prominently in the image -- perhaps 30-90% of total image area (and not as some random item amidst a bunch of other clutter).
Good resources exist for (superficially) similar problems online. For example, this well-written tutorial covers automatic perspective-correction of playing cards: https://opencv-code.com/tutorials/automatic-perspective-correction-for-quadrilateral-objects/.
Currently, the system follows a loosely similar process to that tutorial, with some additions. The general technique stack is:
Pre-processing
Find edges with Canny edge detection
Find edges that look like lines with Hough transform
Find intersection points between lines in the hope of finding book corners
Filter out implausible lines and intersection points based on simple geometric properties
Take convex hull of intersection points
Get polygon approximation to the convex hull and use this to get four corners
Apply perspective/homographic transform
The output points (used to calculate the perspective transform) are known because we assume a known aspect ratio (i.e. book dimensions).
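For reference, a minimal OpenCV sketch of this pipeline; the thresholds, corner coordinates, and output size are illustrative placeholders, and the intersection-filtering and convex-hull steps are elided:

```python
# Minimal sketch of the pipeline above; thresholds, corners, and sizes
# are illustrative, and the corner-finding steps are elided.
import cv2
import numpy as np

img = cv2.imread("book.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # pre-processing

edges = cv2.Canny(blurred, 50, 150)                  # Canny edge detection
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,   # Hough line transform
                        minLineLength=100, maxLineGap=10)

# ...intersection filtering, convex hull, polygon approximation elided;
# assume we found four book corners (TL, TR, BR, BL):
corners = np.float32([[120, 80], [520, 95], [540, 700], [100, 690]])

w, h = 400, 600                                      # assumed book aspect ratio
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
H = cv2.getPerspectiveTransform(corners, dst)
book = cv2.warpPerspective(img, H, (w, h))           # "straight", "front-on"
```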
It works for some images where the book is against a fairly homogeneous background (around 1/3 to 1/2 of the "nicer" images). After experimenting with the fairly dumb convex hull approach as well as a more involved quadrilateral-enumeration approach, I've concluded that the problem may be impossible using geometric/spatial information alone; it would probably need augmenting with colour/texture information (this becomes obvious when you consider a book rotated 180 degrees, i.e. upside down).
The obvious challenge is that there is an almost infinite variety of possible book covers and an almost infinite variety of possible backgrounds, so solving the general case would be impossible, or at least intractably hard. I knew this when I began the task, but I hoped it would be the sort of problem that has a solution enough of the time.
Other approaches I've considered looking at include OCRing the titles/text to work out orientation or possibly general position. The other approach that might conceivably be fruitful is some sort of learning-based classifier.
A related subtask I'm working on is the same goal but in a webcam video stream. This is definitely easier since I can use temporal information (i.e. position across frames). I just started this one yesterday but, after some initial progress, plateaued. A human holding the book generates background movement noise which throws off trivial approaches like frame differencing / background subtraction. Compared with the static image problem, however, I feel this is far more doable.
Sorry if that was a little long-winded. I wanted to make sure I made a sincere effort to articulate the problem(s). What do people think? Anyone have any thoughts as to how these problems might best be tackled?
Does calculating the homography from 4 lines instead of 4 points help? As you probably know, if points are related as p2 = H·p1, then lines are related as l2 = H⁻ᵀ·l1. The lines on the book border should be quite prominent, especially if the deformation is not large. Is your main problem selecting the right lines (you did not actually say what your problem was)? Maybe some kind of Hough-rectangle detector could help find them.
In any case, selecting lines as homography input has an additional advantage: a RANSAC homography with a constraint on the aspect ratio is likely to keep the right lines as inliers even in the presence of numerous outliers from the background. And if some of those outliers do sneak in, they probably look like another book anyway.

Artificial intelligence that evolves in Python [closed]

So I've been thinking of creating a program similar to this in Python. The problem is that I know next to nothing about AI. Right now I'm reading up on genetic algorithms, neural networks and such, but there is a lot to absorb.
My biggest problem is that I don't know how the whole thing fits together. So if anyone can describe the general framework for a program like this, I'll be really grateful: for example, how the creatures can "see" things in their surroundings, what kind of "chromosomes" they have, and how the environment is created.
Also, can anyone suggest suitable libraries for AI, any engines, etc.? I've been thinking of using neurolab; would that be a good idea?
Thanks!
Interestingly, I've created what you're describing, in Python.
Here's a pic:
The green dots are food, and the red ones are organisms that need to evolve and learn how to eat the food blocks.
The best source of inspiration and guidance for something like this is definitely Polyworld: youtube.com/watch?v=_m97_kL4ox0.
Another great example, which helped me learn about backpropagation (and I suggest you learn it too), is here: arctrix.com/nas/python/bpnn.py.
Although this question is really too vague to answer precisely, the general tip is to first get a solid grounding in how ANNs work, and then build your program around this general algorithm:
Check for user input (should the camera move? should the program quit?)
For each organism:
Check if it is alive, etc.
Take input from the world
Evaluate its neural network
Apply the results to the physical body
Apply positive or negative feedback (as you'll see, it all comes down to this)
Update the world (you can use a physics engine)
Optionally, draw the world
For example, your input neurons could correspond to physical or visual stimulation: is there something in front of me? Then set neuron X to 1, otherwise to 0.
And your output neurons could map to forces: is neuron Y active? If so, apply an impulse to this organism's body in this world cycle.
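Put together, a minimal sketch of that loop; World, Organism, and every method name here are illustrative, not from any particular library:

```python
# Minimal simulation-loop sketch; all names are illustrative.
def run_simulation(world, organisms):
    while world.handle_input():              # camera movement, quit, etc.
        for org in organisms:
            if not org.alive:                # check if it is alive
                continue
            inputs = org.sense(world)        # take input from the world
            outputs = org.brain.evaluate(inputs)  # evaluate its neural net
            org.apply_forces(outputs)        # drive the physical body
            org.receive_feedback(world)      # positive/negative feedback
        world.step()                         # physics update
        world.draw()                         # optional rendering
```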
I also recommend that you don't use other people's libraries for the neural-network computation, because implementing it yourself is a great way to actually learn how it works. You could use Pygame for rendering and PyBox2D for physics, though.
Now, about "seeing" other organisms... such problems are best solved in isolation. You need a good software design that lets you divide the whole problem into many smaller subproblems, which are easier to solve.
In general, seeing can be done by raycasting. Create a normalized direction vector (one whose length is 1 unit, that is, sqrt(x*x + y*y) == 1), and then repeatedly scale it (x *= factor; y *= factor) so that you get a line stretching outwards. At each step, loop through all of the organisms in the world (you'll have to optimize this) and check whether the vector's tip is inside one of them. If it is, calculate the vector's length, and your organism has some information about its surroundings.
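A minimal sketch of that raycast; all names are illustrative:

```python
# Minimal raycasting sketch; names are illustrative.
import math

def cast_ray(ox, oy, angle, organisms, max_dist, step=1.0):
    dx, dy = math.cos(angle), math.sin(angle)   # normalized direction
    dist = step
    while dist <= max_dist:
        x, y = ox + dx * dist, oy + dy * dist   # tip of the scaled vector
        for org in organisms:                   # optimize: spatial hashing
            if org.contains(x, y):
                return dist, org                # how far, and what was seen
        dist += step
    return None, None                           # nothing in sight
```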
There are simpler ways, too. For example, divide the space around each organism into four quadrants representing four eyes (up, right, down, and left of the organism). Then calculate the distance from the organism to every other organism (optimize!), find the closest one, and determine which quadrant it is in. Voilà, an active eye!
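And a minimal sketch of that quadrant-eye idea, again with illustrative names:

```python
# Minimal quadrant-eye sketch; names are illustrative.
def active_eye(org, others):
    closest = min(others,
                  key=lambda o: (o.x - org.x) ** 2 + (o.y - org.y) ** 2)
    dx, dy = closest.x - org.x, closest.y - org.y
    if abs(dx) >= abs(dy):                  # horizontal quadrants win ties
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```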
I hope this was helpful, but you'll have to be more precise and come back when you have a concrete problem. Read the FAQ as well.
http://scikit-learn.org/ is a great resource (and framework) for learning about machine learning in Python.
If you want to build a neural network, http://pybrain.org/ is a good framework.
