Koch snowflake rendering time (and how to draw a snowflake using turtle) - python

I'm currently working through the online course material for the MIT 6.006 course for fun. I'm on problem set #2 (found here) and had a question about the calculations for the asymptotic rendering time for the koch snowflake problem (problem #1).
According to the solutions, when the CPU is responsible for the rendering and the calculation of coordinates, the asymptotic rendering time is faster than if the process is split up between the CPU and GPU. The math makes sense to me, but does anyone have an intuition about why this is true?
In my mind, the CPU still has to calculate the coordinates of the snowflake (Theta(4^n) time), and then the image has to be rendered. To me, these costs should be additive, not multiplicative.
However, the solutions treat them as multiplicative, and since each triangle/line segment gets shorter (for the last two subproblems in problem 1), the runtime is reduced to either Theta((4/3)^n) or Theta(1)!
I'm not a computer scientist--this stuff is just a fun hobby for me. I'd really appreciate an answer from one of you geniuses out there :)
Also, some fun I had while playing with the python turtle module. Here's some very imperfect code to draw a Koch snowflake in python:
import turtle

def snowflake(n, size=200):
    try:
        turtle.clear()
    except Exception:
        pass
    turtle.tracer(0, 0)      # disable animation so the drawing appears at once
    snowflake_edge(n, size)
    turtle.right(120)
    snowflake_edge(n, size)
    turtle.right(120)
    snowflake_edge(n, size)
    turtle.update()
    turtle.hideturtle()

def snowflake_edge(n, size=200):
    if n == 0:
        turtle.forward(size)
    else:
        # replace the middle third of the edge with two sides of a smaller triangle
        snowflake_edge(n - 1, size / 3.0)
        turtle.left(60)
        snowflake_edge(n - 1, size / 3.0)
        turtle.right(120)
        snowflake_edge(n - 1, size / 3.0)
        turtle.left(60)
        snowflake_edge(n - 1, size / 3.0)
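
To try it out, something like the following works (the depth and size are just example values):

snowflake(4, size=300)
turtle.done()  # keep the window open until it is closed manually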

As indicated by the comments on p. 5 of the problem set, the approaches taken by the CPU and the GPU are different.
The CPU-only (without hardware acceleration) approach is to first compute what needs to be drawn, and then send it to the GPU to draw. Since we are assuming that the cost to rasterize is bigger than the cost to gather the line segments, the amount of time to draw the image will be dominated by the GPU, whose cost will be proportional to the number of pixels it actually has to draw.
The GPU-CPU (with hardware acceleration) approach computes all the triangles, big and small, and then sends them to the GPU to draw the big triangle, delete the middle pixels, draw the smaller triangles, delete the unneeded pixels, and so on. Since drawing takes a long time, the time the GPU has to spend drawing and erasing will dominate the total amount of time spent; and since most (asymptotically, 100%) of the pixels that are drawn will in the end be erased, the total amount of time taken will be substantially more than just the number of pixels that actually have to be drawn.
In other words: the hardware-acceleration scenario dumps most of the work onto the GPU, which is much slower than the CPU. This is okay if the CPU has other things to work on while the GPU is doing its processing; however, that is not the case here, so the CPU is just wasting cycles while the GPU is doing its drawing.
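
One way to make the two growth rates concrete is to tabulate, per recursion depth, the number of segments the CPU has to produce (Theta(4^n)) against the total length of line actually drawn, which only grows as Theta((4/3)^n) because each segment also shrinks by a factor of 3 per level. A small illustrative sketch (not part of the problem set code):

def koch_costs(n, size=200):
    # the snowflake has 3 edges; each edge splits into 4^n segments of length size/3^n
    segments = 3 * 4 ** n
    drawn_length = 3 * size * (4.0 / 3.0) ** n
    return segments, drawn_length

for depth in range(6):
    segments, drawn_length = koch_costs(depth)
    print(depth, segments, round(drawn_length, 1))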

Related

pygame.draw.rect on image

I am writing a python 3.7 pygame pacman clone.
Right now, I hard-coded the level in a 2d array for collision detection, and every frame I do:
screen.fill((0,0,0))
for x in range(GRID_W):
    for y in range(GRID_H):
        num = tiles[x][y]
        if num is WALL or num is GHOST_HOUSE_BORDER:
            pygame.draw.rect(screen, (255,0,255), [x*TILE_W, y*TILE_H, TILE_W, TILE_H])
This is really slow for some reason. I think that pygame draws the rects pixel by pixel in a 2D for loop, which would be very inefficient.
Is there a way to do this rendering before the main loop, so I just blit an image to the screen? Or is there a better way to do it?
My computer is a Macbook Pro:
Processor 2.9 GHz Intel Core i7
Memory 16 GB 2133 MHz LPDDR3
Graphics Radeon Pro 560 4096 MB, Intel HD Graphics 630 1536 MB
It can run intense OpenGL and OpenCL applications just fine, so pygame should not be a stretch.
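To do the rendering before the main loop, as the question asks, one option is to draw all the static walls onto a separate Surface once and then blit that single Surface every frame; a minimal sketch, reusing the names from the code above:

# once, before the main loop: pre-render the static level geometry
background = pygame.Surface(screen.get_size())
background.fill((0, 0, 0))
for x in range(GRID_W):
    for y in range(GRID_H):
        num = tiles[x][y]
        if num is WALL or num is GHOST_HOUSE_BORDER:
            pygame.draw.rect(background, (255, 0, 255), [x*TILE_W, y*TILE_H, TILE_W, TILE_H])

# every frame: one blit replaces the per-tile drawing
screen.blit(background, (0, 0))
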
The slowing down has nothing to do with the drawing being slow. It's actually because your map gets bigger.
In your file, you have several classes that have a speed attribute (for example, on line 192 your player has a self.speed). If you increase the size of your map without increasing your sprites' speeds, they will look like they are moving more slowly. They are actually moving at the exact same speed, just not the same speed relative to the map.
If you want your game to be able to scale the size of the display screen, you also need to scale everything else by that same scaling factor. Otherwise, increasing/decreasing the size of your game will also affect all the interactions in your game (moving, jumping, etc., depending on the game).
I'd recommend putting a SCALE constant at the top of your file and multiplying all of your sizing and moving values by it. That way the game still feels the same no matter what size you want to play on.
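
A minimal sketch of that idea (TILE_W and TILE_H come from the question; PLAYER_SPEED and the numbers are just placeholders):

SCALE = 2                      # single knob controlling the overall size
TILE_W = 8 * SCALE             # tile dimensions scale with the display
TILE_H = 8 * SCALE
PLAYER_SPEED = 1.5 * SCALE     # movement speeds scale by the same factor

Doubling SCALE then doubles both the map and how fast everything moves across it, so the apparent speed stays the same.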

Gradient alpha polygon with pygame

I have a scene, and I need to be able to overlay it with translucent polygons (which can be done easily using pygame.gfxdraw.filled_polygon, which supports drawing with alpha). The catch is that the amount of translucency has to fade over a distance (so, for example, if the alpha value is 255 at one end of the polygon, it is 0 at the other end, and it blends from 255 to 0 across the polygon). I've implemented drawing shapes with gradients by drawing the gradient and then drawing a mask on top, but I've never come across a situation like this, so I have no clue what to do. I need a solution that can run in real time. Does anyone have any ideas?
It is possible that you have already thought of this and have decided against it, but it would obviously run far better in real time if the polygons were pre-drawn. Presuming there aren't very many different types of polygons, you could even resize them however you need and you would be saving CPU.
Also, assuming that all of the polygons are regular, you could just have several different equilateral triangles with gradients going in various directions on them to produce the necessary shapes.
Another thing you could do is define the polygon you are drawing, then draw an image of a gradient saved on your computer inside that shape.
The final thing you could do is to build your program (or certain CPU-intensive parts of it) in C or C++. Being compiled and automatically optimized during compilation, these languages are significantly faster than Python and better suited to what you are trying to do.
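One way to combine the pre-drawn-gradient and mask ideas above is to build the gradient on a per-pixel-alpha Surface and multiply it into a filled polygon mask; a rough sketch (the polygon points, colour and sizes are placeholder values, and for real-time use the result would be built once and cached):

import pygame

def gradient_polygon(points, color, size):
    w, h = size
    # polygon mask: opaque white inside the polygon, fully transparent outside
    poly = pygame.Surface((w, h), pygame.SRCALPHA)
    pygame.draw.polygon(poly, (255, 255, 255, 255), points)
    # vertical gradient carrying the colour, with alpha fading 255 -> 0 top to bottom
    grad = pygame.Surface((w, h), pygame.SRCALPHA)
    for y in range(h):
        alpha = 255 - int(255 * y / max(h - 1, 1))
        pygame.draw.line(grad, color + (alpha,), (0, y), (w, y))
    # multiplying zeroes everything outside the mask and keeps the gradient inside it
    poly.blit(grad, (0, 0), special_flags=pygame.BLEND_RGBA_MULT)
    return poly

pygame.init()
screen = pygame.display.set_mode((400, 300))
shape = gradient_polygon([(50, 20), (350, 80), (200, 280)], (0, 128, 255), (400, 300))
screen.fill((30, 30, 30))
screen.blit(shape, (0, 0))
pygame.display.flip()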

How can I track extremely slow objects

Using OpenCV and Python 2.7 I have written a script that detects and marks movement in a stream from a webcam. In order to detect movement in the image I use the RunningAvg function in OpenCV, like so:
cv.RunningAvg(img, running_avg, 0.500, None)
cv.AbsDiff(img, running_avg, difference)
The overall script works great, but I'm having a difficult time fine-tuning it to pick up subtle motions (breathing, for instance). I want to be able to target slow movements, breathing specifically, and I want to do this without knowing things like the color or size of the targets ahead of time. I'm wondering if there is another method that is better suited to picking up subtle movements.
I think you should probably change the running-average parameter way down, to something like 0.01, because 0.5 means the running average is half of the last frame.
This assumes that breathing is the only motion in the frame. If there are larger motions, or the camera is moving, you are going to need a more adaptive baseline.
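In terms of the calls from the question, that is just a change to the third argument of RunningAvg; with an exponential running average, an alpha of roughly 0.01 means the baseline effectively averages over the last ~100 frames, so slow changes like breathing stand out in the difference image:

cv.RunningAvg(img, running_avg, 0.010, None)
cv.AbsDiff(img, running_avg, difference)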

implement image segmentation with python generator

Following the last question: read big image file as an array in python
Due to the memory limitation of my laptop, I would like to implement an image segmentation algorithm with a python generator that reads one pixel at a time, rather than the whole image.
My laptop runs Windows 7 (64-bit OS) with 4 GB of RAM and an Intel(R) Core(TM) i7-2860QM CPU, and the images I am processing are over 2 GB. The algorithm I want to apply is watershed segmentation: http://scikits-image.org/docs/dev/auto_examples/plot_watershed.html
The only similar example I can find is http://vkedco.blogspot.com/2012/04/rgb-to-gray-level-to-binary-python.html, but what I need is not just converting one pixel value at a time; I need to consider the relations among neighbouring pixels. How can I do this?
Any ideas or hints? Thanks in advance!
Since the RGB to graylevel conversion operation is purely local, a streaming approach is trivial; the position of the pixels is irrelevant. Watershed is a global operation. One pixel can change the output dramatically. You have several options:
Write an implementation of Watershed that works on tiles and iterates on many passes through the image. This sounds difficult to me.
Use a local method to segment (e.g. thresholding); a strip-wise sketch of this option follows below.
Get a computer with more RAM. RAM is cheap and you can stick tons of it into a desktop system.
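
As a sketch of the thresholding option, the image can be processed in horizontal strips via numpy.memmap so that only a small slice is in memory at any time; the file names, dtype and shape below are placeholders for whatever the real raw image uses:

import numpy as np

# placeholder parameters: adapt to the actual file layout
SHAPE = (50000, 50000)      # full image dimensions (rows, cols)
ROWS_PER_STRIP = 1024       # rows held in memory at once

image = np.memmap('big_image.raw', dtype=np.uint8, mode='r', shape=SHAPE)
labels = np.memmap('labels.raw', dtype=np.uint8, mode='w+', shape=SHAPE)

threshold = 128
for start in range(0, SHAPE[0], ROWS_PER_STRIP):
    stop = min(start + ROWS_PER_STRIP, SHAPE[0])
    strip = image[start:stop]                       # only this strip is read from disk
    labels[start:stop] = (strip > threshold).astype(np.uint8)

labels.flush()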

Pygame: Tiled Map or Large Image

I am trying to decide whether it is better to use a pre-rendered large image for a scrolling map game or to render the tiles individually on screen each frame. I have tried to program the game both ways and don't see any obvious difference in speed, but that might be due to my lack of experience.
Besides memory, is there a speed reason not to use a pre-rendered map?
The only reasons I can think of for picking one over the other on modern hardware (anything as fast and with as much RAM as, say, an iPhone) would be technical ones that make the game code itself easier to follow. There's not much performance-wise to distinguish them.
One exception I can think of is if you are using a truly massive background and doing tile rendering on a GPU: tiles can be textures, and you'll get a modest speed bump since you don't need to push much data between CPU and GPU per frame, and it'll use very little video RAM.
Memory and speed are closely related. If your tile set fits in video memory, but the pre-rendered map doesn't, speed will suffer.
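For concreteness, the two approaches being compared look roughly like this in pygame (the names big_map, world, tile_images, tile_size and the camera position are placeholders, and the tile version skips bounds checking):

import pygame

def draw_prerendered(screen, big_map, camera_x, camera_y):
    # one blit of the visible window taken out of the large pre-rendered map
    view = pygame.Rect(camera_x, camera_y, screen.get_width(), screen.get_height())
    screen.blit(big_map, (0, 0), area=view)

def draw_tiled(screen, world, tile_images, tile_size, camera_x, camera_y):
    # blit only the tiles that intersect the visible window
    first_col = camera_x // tile_size
    first_row = camera_y // tile_size
    cols = screen.get_width() // tile_size + 2
    rows = screen.get_height() // tile_size + 2
    for row in range(first_row, first_row + rows):
        for col in range(first_col, first_col + cols):
            tile = tile_images[world[row][col]]
            screen.blit(tile, (col * tile_size - camera_x, row * tile_size - camera_y))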
Maybe it really depends on the map's size, but this shouldn't be a problem even on a low-end computer.
The problem with big images is that it takes a lot of time to redraw all the stuff on them, so you end up with an inflexible "map".
But a real advantage of an optimized image (use the convert() function and a 16-bit display) is fast blitting.
I work with big images as well, on a mid-range computer, and I get around 150 FPS blitting huge images that need only roughly 100 MB of RAM:
image = image.convert()  # the video system has to be initialized first
The following code creates a 5000x5000 image, draws something on it, then fills the screen, blits the image and flips 20 times, and at the end it reports how long one blit plus one flip took on average.
import pygame as p
import random as r

def draw(dr, image, count, radius, r):
    # scatter randomly coloured circles across the 5000x5000 surface
    for i in range(0, 5000, 5000 // count):
        for j in range(0, 5000, 5000 // count):
            dr.circle(image, (r.randint(0, 255), r.randint(0, 255), r.randint(0, 255)), [i, j], radius, 0)

def geschw_test(screen, image, p):
    # time one blit of the big image plus one display flip, in milliseconds
    t1 = p.time.get_ticks()
    screen.blit(image, (-100, -100))
    p.display.flip()
    return p.time.get_ticks() - t1

p.init()
image = p.Surface([5000, 5000])
image.fill((255, 255, 255))
image.set_colorkey((255, 255, 255))
screen = p.display.set_mode([1440, 900], p.SWSURFACE, 16)
image = image.convert()  # extremely efficient
screen.fill((70, 200, 70))
draw(p.draw, image, 65, 50, r)  # draw on the surface

zahler = 0
anz = 20
speed_arr = []
while zahler < anz:
    zahler += 1
    screen.fill((0, 0, 0))
    speed_arr.append(geschw_test(screen, image, p))
p.quit()

speed = 0
for i in speed_arr:
    speed += i
print(round(speed / anz, 1), "milliseconds per blit with flip")
It depends on the size of the map you want to make; however, with current technology it is very unlikely that rendering a tile map will take longer than expected. Tile-based games are almost extinct, but they are still good practice and a good starting point into the world of game programming.

Categories

Resources