So I have been working on some procedural generation, and I managed to create a circular monochrome gradient map which I use to generate other maps.
def create_circular_gradient(self, world):
    center_x, center_y = self.mapSize[1] // 2, self.mapSize[0] // 2
    circle_grad = np.zeros_like(world)
    for y in range(world.shape[0]):
        for x in range(world.shape[1]):
            distx = abs(x - center_x)
            disty = abs(y - center_y)
            dist = math.sqrt(distx * distx + disty * disty)
            circle_grad[y][x] = dist
    # get it between -1 and 1
    max_grad = np.max(circle_grad)
    circle_grad = circle_grad / max_grad
    circle_grad -= 0.5
    circle_grad *= 2.0
    circle_grad = -circle_grad
    # shrink gradient
    for y in range(world.shape[0]):
        for x in range(world.shape[1]):
            if circle_grad[y][x] > 0:
                circle_grad[y][x] *= 20
    # get it between 0 and 1
    max_grad = np.max(circle_grad)
    circle_grad = circle_grad / max_grad
    grad_world = self.apply_gradient_noise(world, circle_grad)
    return grad_world
Now my question is: how exactly would I need to modify this to create a swirly gradient in a random way? And maybe even a box or diverse linear gradients?
Note that world is a randomly generated 2D array (it can be any shape, like 100x100 or even 100x50, though of course I use much bigger ones), and I add Perlin noise to the circle gradient to randomize the generation. The code that randomizes it will work for any map, but I'm not sure where to start to create different shapes of gradient.
Edit
Ignore the magic numbers; that was the only way I found to create a circular monochrome gradient.
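For comparison, here is a minimal sketch (my own illustration, not part of the original post) of a linear and a box gradient built with the same normalize-to-[0, 1] idea; world is the same 2D array as above:

import numpy as np

def create_linear_gradient(world):
    # 0 at the left edge, 1 at the right edge, same shape as world
    height, width = world.shape
    return np.tile(np.linspace(0.0, 1.0, width), (height, 1))

def create_box_gradient(world):
    # Chebyshev-style distance to the nearest border, normalized so the
    # border is 0 and the innermost point is 1
    height, width = world.shape
    y, x = np.mgrid[0:height, 0:width]
    dist = np.minimum.reduce([x, y, width - 1 - x, height - 1 - y])
    return dist / np.max(dist)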
I need to draw slanted lines like this programmatically using opencv-python, and it has to be similar in terms of the slant angle and the distance between the lines:
If I use OpenCV's cv.line(), I need to supply the function with the line's start and end points.
Following this StackOverflow accepted answer, I think I will be able to find those two points, but first I need to calculate the line equation itself.
So what I did first was calculate the slant angle of the line using the measure tool in Illustrator (the actual image was given by the graphic designer as an .ai (Adobe Illustrator) file); I got 67°, and from that I solved for the gradient of the line. But the problem is that I don't know how to get the horizontal spacing/distance between the lines. I need that so I can supply the start.X. I used Illustrator and tried to measure the distance between the lines, but how do I map it to OpenCV coordinates?
Overall, is my idea feasible? Or is there a better way to achieve this?
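For what it's worth, once the slant angle is known, basic trigonometry relates the perpendicular gap measured in Illustrator to the horizontal step between start.X values; this is only a sketch, and the 8 px gap below is an assumed placeholder, not a measured value:

import math

angle_deg = 67   # slant angle measured in Illustrator
perp_gap = 8.0   # hypothetical perpendicular gap between neighbouring lines, in px
m = math.tan(math.radians(angle_deg))              # slope for y = m*x + c
dx = perp_gap / math.sin(math.radians(angle_deg))  # horizontal step between start.X values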
Update 1:
I managed to draw this experimental image:
And this is the code:
def show_image_scaled(window_name, image, height, width):
    cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
    cv2.resizeWindow(window_name, width, height)
    cv2.imshow(window_name, image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

def slanted_lines_background():
    canvas = np.ones((200, 300)) * 255
    start_y = 0
    end_x = 0
    m = 2.35
    for x in range(0, canvas.shape[1], 10):
        start_x = x
        end_y = start_y + compute_length(m, start_x, start_y, end_x)
        cv2.line(canvas, (start_x, start_y), (end_x, end_y), (0, 0, 0), 2)
    show_image_scaled("Slant", canvas, 200, 300)

def compute_length(m, start_x, start_y, end_x=0):
    c = start_y - (m * start_x)
    length_square = (end_x - start_x) ** 2 + ((m * end_x) + c - start_y) ** 2
    length = math.sqrt(length_square)
    return int(length)
Still working on filling the left part of the rectangle.
This code "shades" every pixel in a given image to produce your hatched pattern. Don't worry about the math. It's mostly correct. I've checked the edge cases for small and wide lines. The sampling isn't exactly correct but nobody's gonna notice anyway because the imperfection amounts to small fractions of a pixel. And I've used numba to make it fast.
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def hatch(im, angle=45, stride=10, dc=None):
    stride = float(stride)
    if dc is None:
        dc = stride * 0.5
    assert 0 <= dc <= stride
    stride2 = stride / 2
    dc2 = dc / 2
    angle = angle / 180 * np.pi
    c = np.cos(angle)
    s = np.sin(angle)
    (height, width) = im.shape[:2]
    for y in prange(height):
        for x in range(width):
            # distance to origin along normal
            dist_origin = c*x - s*y
            # distance to center of nearest line
            dist_center = stride2 - abs((dist_origin % stride) - stride2)
            # distance to edge of nearest line
            dist_edge = dist_center - dc2
            # shade pixel, with antialiasing:
            # use edge-0.5 to edge+0.5 as "gradient" <=> 1-sized pixel straddles edge
            # for thick/thin lines, needs hairline handling
            # thin line -> gradient hits far edge of line / pixel may span both edges of line
            # thick line -> gradient hits edge of adjacent line / pixel may span adjacent line
            if dist_edge > 0.5:  # background
                val = 0
            else:  # pixel starts covering line
                val = 0.5 - dist_edge
            if dc < 1:  # thin line, clipped to line width
                val = min(val, dc)
            elif stride - dc < 1:  # thick line, little background
                val = max(val, 1 - (stride - dc))
            im[y, x] = val
canvas = np.zeros((128, 512), 'f4')
hatch(canvas, angle=-23, stride=5, dc=2.5)
# mind the gamma mapping before imshow
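As a display sketch (my addition, assuming OpenCV for viewing): hatch writes coverage fractions, so invert and gamma-map before showing.

import cv2

display = 1.0 - np.clip(canvas, 0, 1)  # black lines on white background
display = display ** (1 / 2.2)         # rough gamma mapping for the screen
cv2.imshow("hatched", display)
cv2.waitKey(0)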
I have made a Mandelbrot fractal using PIL module in Python.
Now, I want to make a GIF of zooming into one point. I have looked at other code online but, needless to say, I didn't understand it, since the pattern I'm using is a bit different (I'm using classes).
I know that to zoom in I need to change scale... But I plainly don't know how to implement it.
from PIL import Image
import random

class Fractal:
    """Fractal class."""

    def __init__(self, size, scale, computation):
        """Constructor.

        Arguments:
        size -- the size of the image as a tuple (x, y)
        scale -- the scale of x and y as a list of 2-tuples
                 [(minimum_x, minimum_y), (maximum_x, maximum_y)]
        computation -- the function used for computing pixel values
        """
        self.size = size
        self.scale = scale
        self.computation = computation
        self.img = Image.new("RGB", (size[0], size[1]))

    def compute(self):
        """Create the fractal by computing every pixel value."""
        for y in range(self.size[1]):
            for x in range(self.size[0]):
                i = self.pixel_value((x, y))
                r = i % 8 * 32
                g = i % 16 * 16
                b = i % 32 * 8
                self.img.putpixel((x, y), (r, g, b))

    def pixel_value(self, pixel):
        """Return the number of iterations it took for the pixel to go out of bounds.

        Arguments:
        pixel -- the pixel coordinate (x, y)

        Returns:
        the number of iterations of computation it took to go out of bounds, as an integer.
        """
        x = (pixel[0] / self.size[0]) * (self.scale[1][0] - self.scale[0][0]) + self.scale[0][0]
        y = (pixel[1] / self.size[1]) * (self.scale[1][1] - self.scale[0][1]) + self.scale[0][1]
        return self.computation((x, y))

    def save_image(self, filename):
        """Save the image to the hard drive.

        Arguments:
        filename -- the file name to save the file to, as a string.
        """
        self.img.save(filename, "PNG")

if __name__ == "__main__":
    def mandelbrot_computation(pixel):
        """Return how many iterations it takes for the pixel to escape the Mandelbrot set."""
        c = complex(pixel[0], pixel[1])  # Complex number: A + Bi (A is the real part, B the imaginary part).
        z = 0  # We are assuming the starting z value for each point is 0.
        iterations = 0  # Counts how many iterations it takes for a pixel to escape the Mandelbrot set.
        for i in range(255):  # The more iterations, the more detailed the Mandelbrot set will be.
            if abs(z) >= 2.0:  # Checks if the pixel escapes the Mandelbrot set.
                break
            z = z**2 + c
            iterations += 1
        return iterations

    mandelbrot = Fractal((1000, 1000), [(-2, -2), (2, 2)], mandelbrot_computation)
    mandelbrot.compute()
    mandelbrot.save_image("mandelbrot.png")
This is a "simple" linear transformation, both scale (zoom) and translate (shift), as you learned in linear algebra. Do you recall a formula something like
s(y-k) = r(x-h) + c
The translation is (h, k); the scale in each direction is (r, s).
To implement this, you need to alter your loop increments. To zoom in by a factor of k in each direction, you need to reduce the range of your coordinates, likewise reducing the increment between pixel positions.
The most important thing here is to partially decouple your display coordinates from your mathematical values: you no longer display the value for 0.2 - 0.5i at the location labeled (0.2, -0.5); the new position computes from your new frame bounds.
Your old code isn't quite right for this:
for y in range(self.size[1]):
    for x in range(self.size[0]):
        i = self.pixel_value((x, y))
        ...
        self.img.putpixel((x, y), (r, g, b))
Instead, you'll need something like:
# Assume that the limits x_min, x_max, y_min, y_max
# are assigned by the zoom operation.
x_inc = (x_max - x_min) / self.size[0]
y_inc = (y_max - y_min) / self.size[1]
for y in range(self.size[1]):
    for x in range(self.size[0]):
        a = x * x_inc + x_min
        b = y * y_inc + y_min
        i = self.pixel_value((a, b))
        ...
        self.img.putpixel((x, y), (r, g, b))
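Tying this back to the GIF question, here is a rough sketch (mine, not part of the answer above) that shrinks the scale bounds toward a focus point each frame and assembles the frames with PIL, reusing the original Fractal class; the zoom target is an arbitrary assumed value:

frames = []
center = (-0.743, 0.131)  # assumed, arbitrary zoom target
span = 2.0
for frame in range(30):
    scale = [(center[0] - span, center[1] - span),
             (center[0] + span, center[1] + span)]
    f = Fractal((250, 250), scale, mandelbrot_computation)
    f.compute()
    frames.append(f.img.copy())
    span *= 0.8  # zoom in by 20% per frame
frames[0].save("zoom.gif", save_all=True, append_images=frames[1:],
               duration=100, loop=0)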
I'm trying to create a lookAt function in 2 dimensions using Python, so here's my code right now.
from math import *

def lookAt(segment, originPoint):
    segmentCenterPoint = getSegmentCenter(segment)
    for i in range(2):
        vtx = getVertex(segment, i)
        x, y = getVertexCoord(vtx)
        # Calculate the rotation angle already applied on the polygon
        offsetAngle = atan2(y - segmentCenterPoint.y, x - segmentCenterPoint.x)
        # Calculate the rotation angle to orient the polygon to an origin point
        orientAngle = atan2(segmentCenterPoint.y - originPoint.y, segmentCenterPoint.x - originPoint.x)
        # Get the final angle
        finalAngle = orientAngle - (pi / 2)
        if offsetAngle >= pi:
            offsetAngle -= pi
        elif offsetAngle < 0:
            offsetAngle += pi
        finalAngle += offsetAngle
        # Temporarily move the point so its rotation pivot is at (0, 0)
        tempX = x - segmentCenterPoint.x
        tempY = y - segmentCenterPoint.y
        # Calculate coords of the point with the rotation applied
        s = sin(finalAngle)
        c = cos(finalAngle)
        newX = tempX * c - tempY * s
        newY = tempX * s + tempY * c
        # Move the point back to the initial pivot
        x = newX + segmentCenterPoint.x
        y = newY + segmentCenterPoint.y
        # Apply new coords to the vertex
        setVertexCoord(vtx, x, y)
I tried some examples manually and they worked well, but when I applied the function to thousands of segments, it seems some segments are not oriented correctly.
I probably missed something, but I don't know what it is. Also, maybe there's a faster way to calculate it?
Thank you for your help.
EDIT
Here is a visualization to better understand the goal of the lookAt.
The goal is to find the coordinates of A' and B', assuming we already know those of O, A, and B. ([AB] is the segment we need to orient perpendicularly to the point O.)
To find the positions of A' and B', you don't need to rotate points (or deal with angles at all); a short sketch follows these steps:
Find the vector OC = segmentCenterPoint - originPoint
Make the normalized (unit) vector oc = OC / Length(OC)
Make the perpendicular vector P = (-oc.Y, oc.X)
Find the length lCB of CB
Find A' = C + lCB * P
B' = C - lCB * P
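A minimal sketch of those steps, assuming plain (x, y) tuples for the points (adapt it to your vertex API):

import math

def look_at(A, B, O):
    # C is the segment center
    C = ((A[0] + B[0]) / 2, (A[1] + B[1]) / 2)
    OC = (C[0] - O[0], C[1] - O[1])
    length = math.hypot(OC[0], OC[1])
    oc = (OC[0] / length, OC[1] / length)  # normalized OC
    P = (-oc[1], oc[0])                    # perpendicular unit vector
    lCB = math.hypot(B[0] - C[0], B[1] - C[1])
    A_new = (C[0] + lCB * P[0], C[1] + lCB * P[1])
    B_new = (C[0] - lCB * P[0], C[1] - lCB * P[1])
    return A_new, B_new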
My plan is to create a "simple" 2-player minigame (for example, sumo or a race).
My goal is to efficiently implement collisions (my current code can deal only with simple wall physics and movement of the objects) with square, circle (and triangle?) shaped objects that can be either part of the environment (e.g. "rocks" or immovable obstacles) or part of the user-controlled items (e.g. "cars" or pushable obstacles). It would also be nice to know how mass could be accounted for in the collisions.
There are two aspects I need help with:
Different types of dynamic collisions between two moving objects (that have a mass and a 2D velocity vector); the physics part, not the detection.
Making sure everything that needs to collide collides fast enough (so that my slow computer can still render more than 40-60 frames per second), and according to the specific rules (or, if possible, according to one rule?), so that it is not hard to manage the objects that need to collide (add, remove, modify, and so on).
Or should I just implement two types of collision, e.g. static + dynamic circle and dynamic + dynamic circle?
def checkcollisions(object1, object2):
    # x is the current x position
    # y is the current y position
    # angle is the current vector angle (calculated from x and y with Pythagoras)
    # speed is the length of the vector
    dx = object1.x - object2.x
    dy = object1.y - object2.y
    dist = hypot(dx, dy)
    if dist < object1.radius + object2.radius:
        angle = atan2(dy, dx) + 0.5 * pi
        total_mass = object1.mass + object2.mass
        # http://www.petercollingridge.co.uk/pygame-physics-simulation/mass
        if (0.79 <= object1.angle < 2.36 or 0.79 - 2*pi <= object1.angle < 2.36 - 2*pi) \
                or (3.93 <= object1.angle < 5.5 or 3.93 - 2*pi <= object1.angle < 5.5 - 2*pi) \
                and ((0.79 <= object2.angle < 2.36 or 0.79 - 2*pi <= object2.angle < 2.36 - 2*pi)
                     or (3.93 <= object2.angle < 5.5 or 3.93 - 2*pi <= object2.angle < 5.5 - 2*pi)):
            (object2angle, object2speed) = vectorsum(
                (object2.angle, object2.speed * (object2.mass - object1.mass) / total_mass),
                (angle + pi, 2 * object1.speed * object1.mass / total_mass))
            (object1angle, object1speed) = vectorsum(
                (object1.angle, object1.speed * (object1.mass - object2.mass) / total_mass),
                (angle, 2 * object2.speed * object2.mass / total_mass))
        else:
            # https://en.wikipedia.org/wiki/Elastic_collision
            CONTACT_ANGLE = angle
            x = (((object1.speed * cos(object1.angle - CONTACT_ANGLE) * (object1.mass - object2.mass)
                   + 2 * object2.mass * object2.speed * cos(object2.angle - CONTACT_ANGLE)) / total_mass)
                 * cos(CONTACT_ANGLE)) \
                + object1.speed * sin(object1.angle - CONTACT_ANGLE) * cos(CONTACT_ANGLE + 0.5 * pi)
            y = (((object1.speed * cos(object1.angle - CONTACT_ANGLE) * (object1.mass - object2.mass)
                   + 2 * object2.mass * object2.speed * cos(object2.angle - CONTACT_ANGLE)) / total_mass)
                 * cos(CONTACT_ANGLE)) \
                + object1.speed * sin(object1.angle - CONTACT_ANGLE) * sin(CONTACT_ANGLE + 0.5 * pi)
            object1angle = pi/2 - atan2(y, x)
            object1speed = hypot(x, y)
            x = (((object2.speed * cos(object2.angle - CONTACT_ANGLE) * (object2.mass - object1.mass)
                   + 2 * object1.mass * object1.speed * cos(object1.angle - CONTACT_ANGLE)) / total_mass)
                 * cos(CONTACT_ANGLE)) \
                + object2.speed * sin(object2.angle - CONTACT_ANGLE) * cos(CONTACT_ANGLE + 0.5 * pi)
            y = (((object2.speed * cos(object2.angle - CONTACT_ANGLE) * (object2.mass - object1.mass)
                   + 2 * object1.mass * object1.speed * cos(object1.angle - CONTACT_ANGLE)) / total_mass)
                 * cos(CONTACT_ANGLE)) \
                + object2.speed * sin(object2.angle - CONTACT_ANGLE) * sin(CONTACT_ANGLE + 0.5 * pi)
            object2angle = pi/2 - atan2(y, x)
            object2speed = hypot(x, y)
        (object2.angle, object2.speed) = (object2angle, object2speed)
        (object1.angle, object1.speed) = (object1angle, object1speed)
        object1.speed *= 0.999
        object2.speed *= 0.999
        overlap = 0.5 * (object1.radius + object2.radius - dist + 1)
        object1.x += sin(angle) * overlap
        object1.y -= cos(angle) * overlap
        object2.x -= sin(angle) * overlap
        object2.y += cos(angle) * overlap

# http://www.petercollingridge.co.uk/pygame-physics-simulation/mass
def vectorsum(vectorx, vectory):  # Each tuple's first element is the angle, the second the speed
    x = sin(vectory[0]) * vectory[1] + sin(vectorx[0]) * vectorx[1]
    y = cos(vectory[0]) * vectory[1] + cos(vectorx[0]) * vectorx[1]  # Components from angle and length
    angle = pi / 2 - atan2(y, x)  # Calculating the angle
    speed = hypot(x, y)  # Calculating the speed
    return angle, speed
(I'm just a beginner with Python (and English), so please keep that in mind.)
Collision detection is very easy in pygame. Take a look at pygame.sprite. It has several functions to detect collisions (spritecollide, groupcollide, etc.). If you have some complex collision interaction, you generally use the rect or circle to see whether two objects collide at all, then only do your complex calculations on those. For most games you do not need to pay the cost of perfect collision detection; close enough is good enough.
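For instance, a minimal sketch of that cheap first pass (Player and Rock are hypothetical Sprite subclasses that set their own .rect, not from the question):

import pygame

rocks = pygame.sprite.Group(Rock((100, 80)), Rock((220, 140)))
player = Player((40, 40))

hits = pygame.sprite.spritecollide(player, rocks, dokill=False)
for rock in hits:
    pass  # only now run the expensive, precise collision response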
As for what happens when objects collide, that is more physics than programming. Some concepts to keep in mind are: momentum conservation, elastic vs. inelastic collisions, and deflection angle. "How to build a 2D physics engine" is a bit too broad for an SO question. Maybe look at how-to-create-a-custom-2d-physics-engine-oriented-rigid-bodies.
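For reference (my summary of the standard result, not part of the original answer), a perfectly elastic head-on collision along the contact normal gives:

def elastic_1d(m1, v1, m2, v2):
    # post-collision velocities from conservation of momentum
    # and kinetic energy
    v1_new = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_new = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_new, v2_new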
I was reading the paper "Self-Invertible 2D Log-Gabor Wavelets", which defines the 2D log-Gabor filter as such:
The paper also states that the filter only covers one side of the frequency space, and shows that in this image:
In my attempt to implement the filter, I get results that do not match what is said in the paper. Let me start with my implementation, then I will state the problems.
Implementation:
I created a 2D array that contains the filter, and transformed each index so that the origin of the frequency domain is at the center of the array, with the positive x-axis going right and the positive y-axis going up.
number_scales = 5        # scale resolution
number_orientations = 9  # orientation resolution
N = constantDim          # image dimensions

def getLogGaborKernal(scale, angle, logfun=math.log2, norm=True):
    # set up the filter configuration
    center_scale = logfun(N) - scale
    center_angle = ((np.pi/number_orientations) * angle) if (scale % 2) \
        else ((np.pi/number_orientations) * (angle + 0.5))
    scale_bandwidth = 0.996 * math.sqrt(2/3)
    angle_bandwidth = 0.996 * (1/math.sqrt(2)) * (np.pi/number_orientations)
    # 2d array that will hold the filter
    kernel = np.zeros((N, N))
    # get the center of the 2d array so we can shift the origin
    middle = math.ceil((N/2) + 0.1) - 1
    # calculate the filter
    for x in range(0, constantDim):
        for y in range(0, constantDim):
            # get the transformed x and y where the origin is at the center
            # and the positive x-axis goes right while the positive y-axis goes up
            x_t, y_t = (x - middle), -(y - middle)
            # calculate the filter value at the given index
            kernel[y, x] = logGaborValue(x_t, y_t, center_scale, center_angle,
                                         scale_bandwidth, angle_bandwidth, logfun)
    # normalize the filter energy
    if norm:
        kernel = kernel / np.sum(kernel**2)
    return kernel
To calculate the filter value at each index, another transform is made to go to log-polar space:
def logGaborValue(x, y, center_scale, center_angle, scale_bandwidth,
                  angle_bandwidth, logfun):
    # transform to polar coordinates
    raw, theta = getPolar(x, y)
    # if we are at the center, return 0, as zero
    # is not defined in the log space
    if raw == 0:
        return 0
    # go to log-polar coordinates
    raw = logfun(raw)
    # calculate (theta - center_angle): we compute cos(theta - center_angle)
    # and sin(theta - center_angle), then use atan2 to get the required value;
    # this way we eliminate the angular-distance wrap-around problem
    costheta, sintheta = math.cos(theta), math.sin(theta)
    ds = sintheta * math.cos(center_angle) - costheta * math.sin(center_angle)
    dc = costheta * math.cos(center_angle) + sintheta * math.sin(center_angle)
    dtheta = math.atan2(ds, dc)
    # final value: multiply the radial component by the angular one
    return math.exp(-0.5 * ((raw - center_scale) / scale_bandwidth)**2) * \
           math.exp(-0.5 * (dtheta / angle_bandwidth)**2)
Problems:
The angle: the paper stated that indexing the angles from 1->8 would produce good coverage of the orientation, but in my implementation, angles from 1->n cover only half the orientations. Even the vertical orientation is not covered correctly. This can be seen in this figure, which contains sets of filters at scale 3 and orientations ranging from 1->8:
The coverage: from the filters above it is clear that the filter covers both sides of the space, which is not what the paper says. This can be made more explicit by using 9 orientations ranging from -4 -> 4. The following image contains all the filters in one image, showing how the bank covers both sides of the spectrum (this image is created by taking the maximum at each location over all filters):
Middle column (orientation $\pi / 2$): in the first figure, for orientations from 3 -> 8, it can be seen that the filter vanishes at orientation $\pi / 2$. Is this normal? This can also be seen when I combine all the filters (of all 5 scales and 9 orientations) in one image:
Update:
Adding the impulse response of the filter in the spatial domain; as you can see, there is an obvious distortion at the -4 and 4 orientations:
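(For reproduction purposes, a sketch of how such an impulse response can be obtained: the kernel is defined in the frequency domain with the origin at the array center, so un-shift it before the inverse FFT; the scale/orientation below are assumed example values.)

kernel = getLogGaborKernal(3, 4)
impulse = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.ifftshift(kernel))))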
After a lot of code analysis, I found that my implementation was correct but the getPolar function was messed up, so the code above should work just fine. Here is the new code, without the getPolar function, in case anyone is looking for it:
number_scales = 5        # scale resolution
number_orientations = 8  # orientation resolution
N = 128                  # image dimensions

def getFilter(f_0, theta_0):
    # filter configuration
    scale_bandwidth = 0.996 * math.sqrt(2/3)
    angle_bandwidth = 0.996 * (1/math.sqrt(2)) * (np.pi/number_orientations)

    # x, y grid
    extent = np.arange(-N/2, N/2 + N % 2)
    x, y = np.meshgrid(extent, extent)
    mid = int(N/2)

    ## orientation component ##
    theta = np.arctan2(y, x)
    center_angle = ((np.pi/number_orientations) * theta_0) if (f_0 % 2) \
        else ((np.pi/number_orientations) * (theta_0 + 0.5))
    # calculate (theta - center_angle): compute cos(theta - center_angle)
    # and sin(theta - center_angle), then use arctan2 to get the required value;
    # this way we eliminate the angular-distance wrap-around problem
    costheta = np.cos(theta)
    sintheta = np.sin(theta)
    ds = sintheta * math.cos(center_angle) - costheta * math.sin(center_angle)
    dc = costheta * math.cos(center_angle) + sintheta * math.sin(center_angle)
    dtheta = np.arctan2(ds, dc)
    orientation_component = np.exp(-0.5 * (dtheta / angle_bandwidth)**2)

    ## frequency component ##
    # go to polar space
    raw = np.sqrt(x**2 + y**2)
    # set the origin to 1, as zero is not defined in the log space
    raw[mid, mid] = 1
    # go to log space
    raw = np.log2(raw)
    center_scale = math.log2(N) - f_0
    draw = raw - center_scale
    frequency_component = np.exp(-0.5 * (draw / scale_bandwidth)**2)
    # reset the origin to zero (the value there is not meaningful)
    frequency_component[mid, mid] = 0

    return frequency_component * orientation_component
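A small usage sketch (my addition; it assumes math and numpy are imported and uses the 1-based scale/orientation indexing discussed above), reproducing the max-over-filters coverage image:

bank = [getFilter(f_0, theta_0)
        for f_0 in range(1, number_scales + 1)
        for theta_0 in range(1, number_orientations + 1)]
coverage = np.maximum.reduce(bank)  # maximum at each location over all filters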