I am making a snake game in pygame, and I need to make an array of pygame rects. When I was testing the code to see if the basic idea worked, it didn't. It was supposed to print
[[0,0],
[10,0],
[20,0],
and so on until it reached the biggest x value, and then add ten to the y value; instead it just prints the x values while the y value stays 0. I am new to pygame and Python, so any help would be appreciated.
My code:
class Grid:
    def __init__(self, gridSize):
        self.gridSize = gridSize
        self.numX = int(screenX / gridSize)
        self.numY = int(screenX / gridSize)
        self.xList = []
        for y in range(0, self.numY * 10, 10):
            for x in range(0, self.numX * 10, 10):
                self.xList.append((x, y))
            if y == 0:
                self.array = np.array(self.xList)
            else:
                np.append(self.array, self.xList)
            self.xList = []
        print(self.array)
Most (if not all) NumPy functions don't modify their arrays in place. They return a new array; the old array stays as it is.
Thus, under your else, you'll need
else:
    self.array = np.append(self.array, self.xList)
This will update self.array so that it holds the new, appended array.
It also explains why you're only seeing print-outs for y = 0 and not other values. (You could possibly arrive at this same conclusion by debugging and stepping through your code. Maybe next time? :-) )
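A minimal sketch of that behaviour (the values here are made up), including a gotcha worth knowing: without an axis argument, np.append flattens its inputs:

```python
import numpy as np

a = np.array([[0, 0], [10, 0]])
b = [(0, 10), (10, 10)]

np.append(a, b)             # returns a new array; `a` itself is unchanged
assert a.shape == (2, 2)

flat = np.append(a, b)      # rebinding keeps the result...
assert flat.shape == (8,)   # ...but without axis=, the result is flattened

rows = np.append(a, b, axis=0)  # axis=0 stacks the rows instead
assert rows.shape == (4, 2)
```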
For starters, you aren't iterating over the Y range:
self.numY = int(screenX / gridSize) needs to be self.numY = int(screenY / gridSize)
Hoping this is an easy problem and I just don't know the correct syntax.
I currently have a small 3D volume that is defined by a numpy array of 100,100,100.
For the problem I am testing I want to put this volume into a larger array (doesn't matter how big right now but I am testing on a 1000,1000,100 array).
Currently I am just making an empty numpy array using the following:
BigArray = np.zeros((1000,1000,100),np.float16)
Then I have my smaller array that for the purpose of this example can just be a randomly filled array.:
SmallArray = np.random.rand(100,100,100)
From here I want to loop through and fill the 1000,1000,100 array with the 100,100,100 array, placing each cube next to one another. The large array starts with 0 values, so it should be as simple as adding the small array at the correct coordinates of the larger array; however, I have no idea of the syntax to do this. Could someone help?
Thanks
This should do it -- just use a standard nested for loop and numpy array assignment syntax:
small = np.random.rand(100, 100, 100)
big = np.zeros((1000, 1000, 100), dtype=np.float16)  # float dtype; an int dtype would truncate the assigned values to 0

for i in range(0, 1000, 100):
    for j in range(0, 1000, 100):
        big[i:i+100, j:j+100, :] = small
For generic sized 3D arrays:
def inset_into(small, big):
    sx, sy, sz = small.shape
    bx, by, bz = big.shape
    # make sure the sizes divide evenly
    assert bx % sx == 0
    assert by % sy == 0
    assert bz == sz
    for i in range(0, bx, sx):
        for j in range(0, by, sy):
            big[i:i+sx, j:j+sy, :] = small
    return big
This should just be numpy slicing.
small = np.random.rand(100, 100, 100)
big = np.zeros((1000, 1000, 100), dtype=np.float16)  # float dtype, to match the random values
If you want to make big out of a bunch of smalls here is another way.
big = np.concatenate([small] * (big.shape[0] // small.shape[0]), axis=1)
big = np.concatenate([big] * (big.shape[1] // small.shape[1]), axis=0)
There is a speed difference. Looping is better.
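For what it's worth, np.tile expresses the same block layout in a single call; here is a scaled-down sketch (2x2x2 blocks into a 4x4x2 array) checking it against the nested-loop assignment:

```python
import numpy as np

small = np.arange(8, dtype=float).reshape(2, 2, 2)

# Nested-loop assignment, as in the accepted answer.
big_loop = np.zeros((4, 4, 2))
for i in range(0, 4, 2):
    for j in range(0, 4, 2):
        big_loop[i:i+2, j:j+2, :] = small

# np.tile repeats the array twice along x and y, once along z.
big_tile = np.tile(small, (2, 2, 1))
assert (big_loop == big_tile).all()
```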
I am working on an agent-based modelling project and have an 800x800 grid that represents a landscape. Each cell in this grid is assigned certain variables. One of these variables is 'vegetation' (i.e. which functional_types this cell possesses). I have a data frame that looks as follows:
Each cell is assigned a landscape_type before I access this data frame. I then loop through each cell in the 800x800 grid and assign more variables. For example, if cell 1 is landscape_type 4, I access the above data frame, generate a random number for each functional_type between the min and max_species_percent, and then assign all the variables (pollen_loading, succession_time, etc.) for that landscape_type to that cell. However, if the cumsum of the random numbers is <100, I grab functional_types from the next landscape_type (so in this example I would move down to landscape_type 3); this continues until I reach a cumsum closer to 100.
I have this process working as desired, however it is incredibly slow - as you can imagine, there are hundreds of thousands of assignments! So far I do this (self.model.veg_data is the above df):
def create_vegetation(self, landscape_type):
    if landscape_type == 4:
        veg_this_patch = self.model.veg_data[self.model.veg_data['landscape_type'] <= landscape_type].copy()
    else:
        veg_this_patch = self.model.veg_data[self.model.veg_data['landscape_type'] >= landscape_type].copy()
    veg_this_patch['veg_total'] = veg_this_patch.apply(lambda x: randint(x["min_species_percent"],
                                                                         x["max_species_percent"]), axis=1)
    veg_this_patch['cum_sum_veg'] = veg_this_patch.veg_total.cumsum()
    veg_this_patch = veg_this_patch[veg_this_patch['cum_sum_veg'] <= 100]
    self.vegetation = veg_this_patch
I am certain there is a more efficient way to do this. The process will be repeated constantly, and as the model progresses, landscape_types will change, i.e. 3 becomes 4. So it's essential this becomes as fast as possible! Thank you.
As per the comment: EDIT.
The loop that creates the landscape objects is given below:
for agent, x, y in self.grid.coord_iter():
    # check that patch is land
    if self.landscape.elevation[x, y] != -9999.0:
        elevation_xy = int(self.landscape.elevation[x, y])
        # calculate burn probabilities based on soil and temp
        burn_s_m_p = round(2 - (1 / (1 + (math.exp(-(self.landscape.soil_moisture[x, y] * 3)))) * 2), 4)
        burn_s_t_p = round(1 / (1 + (math.exp(-(self.landscape.soil_temp[x, y] * 1))) * 3), 4)
        # calculate succession probabilities based on soil and temp
        succ_s_m_p = round(2 - (1 / (1 + (math.exp(-(self.landscape.soil_moisture[x, y] * 0.5)))) * 2), 4)
        succ_s_t_p = round(1 / (1 + (math.exp(-(self.landscape.soil_temp[x, y] * 1))) * 0.5), 4)
        vegetation_typ_xy = self.landscape.vegetation[x, y]
        time_colonised_xy = self.landscape.time_colonised[x, y]
        is_patch_colonised_xy = self.landscape.colonised[x, y]
        # populate landscape patch with values
        patch = Landscape((x, y), self, elevation_xy, burn_s_m_p, burn_s_t_p, vegetation_typ_xy,
                          False, time_colonised_xy, is_patch_colonised_xy, succ_s_m_p, succ_s_t_p)
        self.grid.place_agent(patch, (x, y))
        self.schedule.add(patch)
Then, in the object itself I call the create_vegetation function to add the functional_types from the above df. Everything else in this loop comes from a different dataset so isn't relevant.
You need to extract as many calculations as you can into a vectorized preprocessing step. For example in your 800x800 loop you have:
burn_s_m_p = round(2-(1/(1 + (math.exp(- (self.landscape.soil_moisture[x, y] * 3)))) * 2),4)
Instead of executing this line 800x800 times, just do it once, during initialization:
burn_array = np.round(2-(1/(1 + (np.exp(- (self.landscape.soil_moisture * 3)))) * 2),4)
Now in your loop it is simply:
burn_s_m_p = burn_array[x, y]
Apply this technique to the rest of the similar lines.
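As a sketch of the idea (soil_moisture here is a random stand-in for self.landscape.soil_moisture), the whole 800x800 grid can be computed in one vectorized pass:

```python
import math
import numpy as np

soil_moisture = np.random.rand(800, 800)  # stand-in for self.landscape.soil_moisture

# One vectorized pass over the whole grid, done once at initialization.
burn_array = np.round(2 - (1 / (1 + np.exp(-(soil_moisture * 3)))) * 2, 4)

# Inside the per-cell loop, the value becomes a cheap lookup.
x, y = 5, 7
burn_s_m_p = burn_array[x, y]

# Matches the original per-cell scalar computation.
scalar = round(2 - (1 / (1 + math.exp(-(soil_moisture[x, y] * 3)))) * 2, 4)
assert abs(burn_s_m_p - scalar) <= 1e-4
```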
I'm trying to get a code to print small rectangles all over my screen in pygame with the help of for loops, but I'm having trouble. I have solved parts of it with this code, but it looks ugly and performs badly:
x = 0
y = 0
for y_row in range(60):
    y = y + 10
    pygame.draw.rect(screen, GREEN, [x, y, 5, 5], 0)
    for x_row in range(70):
        pygame.draw.rect(screen, GREEN, [x, y, 5, 5], 0)
        x = x + 10
    x = 0
To start off, I do not believe I'd have to assign values to x and y if I could just figure out how to use the values of y_row and x_row in their places instead; right now those increase by 1, but they'd need to increase by 10 before I could use them.
Another problem with the code is that it leaves a blank row at the top. This is because I had to add the y = y + 10 above the pygame draw call, otherwise it just printed one rectangle there, which made it more visible.
The template I'm using to get the code working you can find Here.
Drawing 4,200 rectangles to the screen every 60th of a second is probably a significant task for the CPU. I suspect that the pygame.draw.rect() function is fairly high-level and that calls are not batched by pygame, making it sub-optimal. There is a hint in the documentation (https://www.pygame.org/docs/ref/draw.html#pygame.draw.rect) that Surface.fill(color, rect=None, special_flags=0) can be hardware accelerated and may be a faster option if you're filling the rectangles.
Note: the code examples below are pseudo ... just means you need to fill in the gaps.
You only need 1 call to pygame.draw.rect per iteration of the loop not 2 as you have now, e.g.
for row in rows:
    y = ...
    for col in cols:
        x = ...
        ... draw rect ...
One easy win for performance is to not draw anything that's off-screen, so test your x, y coordinates before rendering, e.g:
screen_width = 800
screen_height = 600

for ...
    y += 10
    if y > screen_height:
        break
    for ...
        x += 10
        if x > screen_width:
            break
        ... draw block ...
The same approach could also be used (with a continue) to implement an offset (e.g a starting offset_x, offset_y value) where rectangles with negative x, y values are not rendered (the test is not x < 0 however, but x < -block_size).
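That visibility test can be isolated as a small helper (block_size and the screen dimensions here are hypothetical values, not from the original code); note the x < -block_size test, so partially visible blocks still draw:

```python
block_size = 10
screen_width, screen_height = 800, 600

def block_visible(x, y):
    # Fully off-screen to the left/top: more than one block beyond the edge.
    if x < -block_size or y < -block_size:
        return False
    # Off-screen to the right/bottom.
    if x > screen_width or y > screen_height:
        return False
    return True

assert block_visible(0, 0)
assert block_visible(-5, 0)       # partially on-screen, still drawn
assert not block_visible(-15, 0)  # fully off-screen to the left
assert not block_visible(0, 700)  # below the screen
```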
There's nothing wrong with calculating the x and y values from a loop index as you are doing, it's often useful to have an index (for example the index [row][col] might give you the location of data for a tile in a 2D matrix representing game tiles). I would calculate the x, y values myself from the indexes using a multiplier (this also solves the blank first row issue):
block_size = 10

for row in ...
    y = row * block_size
    if y > screen_height:
        break
    for col in ...
        x = col * block_size
        if x > screen_width:
            break
        ... draw block ...
If you're using Python 2 then you might consider using xrange to predefine the loop ranges to improve performance (though I imagine only by a small amount, and as always with optimization, measuring the difference is key). For example:
rows = xrange(60)
cols = xrange(70)

for row in rows:
    ...
    for col in cols:
        ... draw block ...
As #bshuster13 mentioned, you can use Python's range() function and pass optional start and step arguments to create a sequence containing an arithmetic progression.
numberOfRows = 60
numberOfColumns = 70
stepBetweenRects = 10

for y in range(0, numberOfRows * stepBetweenRects, stepBetweenRects):
    for x in range(0, numberOfColumns * stepBetweenRects, stepBetweenRects):
        pygame.draw.rect(screen, GREEN, (x, y, 5, 5), 0)
I'm working on a project and I have to create a method that generates an image with a background and vector flows. So I'm using the stream plot from matplotlib.
class ImageData(object):
    def __init__(self, width=400, height=400, range_min=-1, range_max=1):
        """
        The ImageData constructor
        """
        self.width = width
        self.height = height
        # The values range each pixel can assume
        self.range_min = range_min
        self.range_max = range_max
        # self.data = np.arange(width*height).reshape(height, width)
        self.data = []
        for i in range(width):
            self.data.append([0] * height)

    def generate_images_with_streamline(self, file_path, background):
        # Getting the vector flow
        x_vectors = []
        y_vectors = []
        for i in range(self.width):
            x_vectors.append([0.0] * self.height)
            y_vectors.append([0.0] * self.height)
        for x in range(1, self.width - 1):
            for y in range(1, self.height - 1):
                vector = self.data[x][y]
                x_vectors[x][y] = vector[0].item(0)
                y_vectors[x][y] = vector[1].item(0)
        u_coord = np.array(x_vectors)
        v_coord = np.array(y_vectors)
        # Static image size
        y, x = np.mgrid[-1:1:400j, -1:1:400j]
        # Background + vector flow
        mg = mpimg.imread(background)
        plt.figure()
        plt.imshow(mg, extent=[-1, 1, -1, 1])
        plt.streamplot(x, y, u_coord, v_coord, color='y', density=2, cmap=plt.cm.autumn)
        plt.savefig(file_path + 'Streamplot.png')
        plt.close()
The problem is that my np.mgrid should vary from -1 to 1 and have self.width and self.height points. But if I do:
y, x = np.mgrid[-1:1:self.width, -1:1:self.height]
it doesn't work. I also don't know what the j means, but it seems to be important, because if I take the j off (even with a static size), it doesn't work either. So I'm wondering how I could build this mgrid dynamically, following the instance's size.
Thank you in advance.
Short answer
j is for imaginary part of a complex number, and gives numpy.mgrid the number of values to generate. In your case, here is what you shall write:
y, x = np.mgrid[-1:1:self.width*1j, -1:1:self.height*1j]
Long answer
The step value in np.mgrid[start:stop:step] is interpreted as follows:
if step is real, it is used as the step size from start up to stop, not included.
if step is pure imaginary (e.g. 5j), it is used as the number of values to return, stop value included.
if step is complex (e.g. 1+5j), well, I must say I don't understand the result...
The j is for an imaginary part.
Examples:
>>> np.mgrid[-1:1:0.5] # values starting at -1, using 0.5 as step, up to 1 (not included)
array([-1. , -0.5, 0. , 0.5])
>>> np.mgrid[-1:1:4j] # values starting at -1 up to +1, 4 values requested
array([-1. , -0.33333333, 0.33333333, 1. ])
>>> np.mgrid[-1:1:1+4j] # ???
array([-1. , -0.3596118 , 0.28077641, 0.92116461])
I'm trying to use fancy indexing instead of looping to speed up a function in Numpy. To the best of my knowledge, I've implemented the fancy indexing version correctly. The problem is that the two functions (loop and fancy-indexed) do not return the same result. I'm not sure why. It's worth pointing out that the functions do return the same result if a smaller array is used (e.g., 20 x 20 x 20).
Below I've included everything necessary to reproduce the error. If the functions do return the same result, then the line find_maxdiff(data) - find_maxdiff_fancy(data) should return an array full of zeroes.
from numpy import *

def rms(data, axis=0):
    return sqrt(mean(data ** 2, axis))

def find_maxdiff(data):
    samples, channels, epochs = shape(data)
    window_size = 50
    maxdiff = zeros(epochs)
    for epoch in xrange(epochs):
        signal = rms(data[:, :, epoch], axis=1)
        for t in xrange(window_size, alen(signal) - window_size):
            amp_a = mean(signal[t - window_size:t], axis=0)
            amp_b = mean(signal[t:t + window_size], axis=0)
            the_diff = abs(amp_b - amp_a)
            if the_diff > maxdiff[epoch]:
                maxdiff[epoch] = the_diff
    return maxdiff

def find_maxdiff_fancy(data):
    samples, channels, epochs = shape(data)
    window_size = 50
    maxdiff = zeros(epochs)
    signal = rms(data, axis=1)
    for t in xrange(window_size, alen(signal) - window_size):
        amp_a = mean(signal[t - window_size:t], axis=0)
        amp_b = mean(signal[t:t + window_size], axis=0)
        the_diff = abs(amp_b - amp_a)
        maxdiff[the_diff > maxdiff] = the_diff
    return maxdiff

data = random.random((600, 20, 100))
find_maxdiff(data) - find_maxdiff_fancy(data)

data = random.random((20, 20, 20))
find_maxdiff(data) - find_maxdiff_fancy(data)
The problem is this line:
maxdiff[the_diff > maxdiff] = the_diff
The left side selects only some elements of maxdiff, but the right side contains all elements of the_diff. This should work instead:
replaceElements = the_diff > maxdiff
maxdiff[replaceElements] = the_diff[replaceElements]
or simply:
maxdiff = maximum(maxdiff, the_diff)
As for why the 20x20x20 size seems to work: your window size is too large for it, so the t loop never executes and both functions return all zeros.
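A tiny example (made-up values) showing that the masked assignment and np.maximum agree:

```python
import numpy as np

maxdiff = np.array([0.5, 0.2, 0.9])
the_diff = np.array([0.3, 0.7, 0.1])

# Masked assignment: index both sides with the same boolean mask.
fixed = maxdiff.copy()
mask = the_diff > fixed
fixed[mask] = the_diff[mask]

# Or simply take the elementwise maximum.
assert (fixed == np.maximum(maxdiff, the_diff)).all()
assert fixed.tolist() == [0.5, 0.7, 0.9]
```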
First, in the fancy version your signal is now 2D, if I understand correctly - so I think it would be clearer to index it explicitly (e.g. amp_a = mean(signal[t-window_size:t, :], axis=0)). Similarly with alen(signal) - this should just be samples in both cases, so I think it would be clearer to use that.
It is wrong whenever you are actually doing something in the t loop - when samples < window_length, as in the 20x20x20 example, that loop never gets executed. As soon as that loop is executed more than once (i.e. samples > 2 * window_length + 1), the errors come. Not sure why, though - they do look equivalent to me.