Get the value of an individual pixel - python

I'm making Tetris and I'm trying to make a line clear function. I have a way to do this that is a bit complicated, and I'm looking to simplify it.
My idea is to check some pixel RGB values and see if they're not black (the color of the background). Is there a quick way to get the RGB value of a pixel on the screen?

pygame.PixelArray should do the trick. You can use it on the screen surface.
But personally, I wouldn't recommend that you use the pixels as a reference.
Handling everything logically is a lot better.
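For reference, here is a minimal sketch of both ways to read a pixel, assuming screen is your display Surface and (x, y) is an on-screen coordinate (those names are placeholders):

import pygame

# Option 1: read a single pixel straight from the surface.
color = screen.get_at((x, y))            # returns a pygame.Color, e.g. (0, 0, 0, 255) for black

# Option 2: PixelArray for fast access to many pixels (values are mapped integers).
pixels = pygame.PixelArray(screen)
color = screen.unmap_rgb(pixels[x][y])   # convert the mapped int back to an RGB Color
pixels.close()                           # release the surface lock when done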

As Icfseth noted "Handling everything logically is a lot better."
Tetris is an array of blocks that occupy a logical space that has a granularity larger than a pixel. Your game model should operate in "block coordinates" and detect whether a space is open or filled based on that. The screen is just a representation or "view" of the block space.
Try first just moving a 16px square around a 320x320px screen in steps of 16px. You could keep track of every pixel, but it makes much more sense to have the block position ranging from x = [0..20], y = [0..20], and let the display code worry about how to show a block at position (2,3).
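A rough sketch of that idea (the board size and names are just placeholders): keep the playfield as a 2D list of block states and do the line clear entirely in block coordinates, never by reading the screen.

COLS, ROWS = 10, 20
board = [[0] * COLS for _ in range(ROWS)]    # 0 = empty, 1 = filled block

def clear_full_lines(board):
    # Keep only the rows that still have at least one empty cell...
    kept = [row for row in board if not all(row)]
    cleared = len(board) - len(kept)
    # ...and add fresh empty rows on top so the board keeps its height.
    return [[0] * COLS for _ in range(cleared)] + kept, cleared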

Related

How would I go about creating a boolean matrix from a 2D game image?

Here is a screenshot from a 2D game:
(Sorry, don't have the reputation to post inline images)
You can walk on the grass, the asphalt, sidewalk etc. but you cannot walk into buildings/walls/any structures. Here is a small gif to demonstrate this
What I want: my ideal goal is to create a boolean matrix, where 0 means you can't traverse through that pixel and 1 means you can. So, in this example, it would look something like this (I forgot to add the trees)
Where the "red zones" would be 0 in this array and the other tiles would be 1
What I've tried and what I had in mind:
All of my ideas lead to one thing, and I don't know if it's right: creating one universal list of "accepted" colors, and setting a pixel to 1 if its color is in that list. This of course has its downsides: one of them is that non-traversable territory could easily share a color with a traversable one (for example, if we're using green for grass, the same green could be swamp or bushes), and the other is the computation time needed to compare each pixel.
I've considered thresholding, but if I understand it correctly, that's only used for black-and-white comparisons.
I've also looked at image quantization using PIL, but the quantized image didn't change much for me. What would I do after the quantization?
If we look deeper into this game, a tile here is considered a 32x32 pixel area and it's drawn using a tileset (sort of a canvas): https://i.imgur.com/j9CNQdi.png and there is a list of "non-traversable" tiles (blocking), maybe it's somehow possible to use it?
Anyway, I'm lost as to what to look for. I've been researching but found nothing similar. I would appreciate any advice on where to look, thanks.
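For what it's worth, here is a minimal sketch of the "accepted colors" idea described above, using PIL; the file name and the colors in WALKABLE_COLORS are placeholders, not the game's real palette:

from PIL import Image

WALKABLE_COLORS = {(34, 177, 76), (127, 127, 127)}   # placeholder grass/asphalt colors

img = Image.open("screenshot.png").convert("RGB")
w, h = img.size
pixels = img.load()

# 1 = walkable, 0 = blocked
matrix = [[1 if pixels[x, y] in WALKABLE_COLORS else 0 for x in range(w)]
          for y in range(h)]

Matching per 32x32 tile against the tileset's blocking list would likely be more robust than per-pixel color matching, since the tile IDs already encode what is walkable.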

How can I use binary_fill_holes for a tiled image?

My question is simple.
I have a large binary image, and I would like to fill the holes in the objects. I've tried using binary_fill_holes, but since the image is so big, I have to tile it, so only a small part of the image is loaded into a numpy array at a time. Therefore holes may be divided over several tiles and are not filled by binary_fill_holes.
Here is an example of one of the tiles:
The white is the object, and all the black pixels are actually part of a hole that is also present in some of the neighbouring tiles. However, in this single tile it does not look like a hole, and is therefore not filled by the function.
I think maybe something with connected components could be used, but it runs into the same tiling issues, and I am hoping there may be some easy, pythonic way.
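To illustrate the effect described above, here is a tiny self-contained example (the arrays are made up): a hole that is fully enclosed in the complete image touches the tile border once the image is split, so binary_fill_holes leaves it alone.

import numpy as np
from scipy.ndimage import binary_fill_holes

# A ring-shaped object (True) enclosing a hole (False).
full = np.array([[1, 1, 1, 1, 1, 1],
                 [1, 0, 0, 0, 0, 1],
                 [1, 0, 0, 0, 0, 1],
                 [1, 1, 1, 1, 1, 1]], dtype=bool)

print(binary_fill_holes(full).all())     # True: the enclosed hole is filled

left, right = full[:, :3], full[:, 3:]
print(binary_fill_holes(left).all())     # False: the hole now touches the tile edge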

Pygame Large Surfaces

I'm drawing a map of a real world floor with dimensions roughly 100,000mm x 200,000mm.
My initial code contained a function that converted any millimeter based position to screen positioning using the window size of my pygame map, but after digging through some of the pygame functions, I realized that the pygame transformation functions are quite powerful.
Instead, I'd like to create a surface that is a 1:1 scale of the real world and then scale it right before I blit it to the screen.
Is this the right way to be doing this? I get an error that says Width or Height too large. Is this a limit of pygame?
I don't fully understand your question, but here is my attempt at an answer.
No, you should not draw everything at full size and then scale it. This is the wrong approach. You should tile very large surfaces and only draw the relevant tiles. If you need a very large view, you should use a pre-scaled, scaled-down image. The "Width or Height too large" error is most likely because the amount of memory required to hold an extremely large surface is prohibitive, and scaling it would be slow anyway.
Convert the coordinates to the tiled version using some sort of global matrix that scales everything to the size you expect. You should also filter out sprites that are not visible by testing their inclusion inside the bounding box of your view port. Keep track of your view port position; you will then be able to calculate where in the view port each sprite should be located based on its "world" coordinates.
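A rough sketch of that view-port filtering (all names here are placeholders): keep a camera rect in world coordinates, skip sprites that fall outside it, and subtract the camera offset when blitting.

import pygame

camera = pygame.Rect(0, 0, 800, 600)      # position/size of the view port in world space

def draw_world(screen, sprites, camera):
    for sprite in sprites:
        # sprite.rect is assumed to hold the sprite's *world* position.
        if camera.colliderect(sprite.rect):
            screen_pos = (sprite.rect.x - camera.x, sprite.rect.y - camera.y)
            screen.blit(sprite.image, screen_pos)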
If your map is not dynamic, I would suggest drawing the map outside the game and loading it in-game.
If you plan on converting the game environment into a map, it might be difficult for a large environment. 100,000mm x 200,000mm is a very large area when converted into pixels, so I would suggest scaling it down before loading.
As for scaling in-game, you can use pygame.transform.rotozoom or pygame.transform.smoothscale.
Also, as the first answer mentions, scaling can take significant memory and time for very large images, and scaling a very large image down to a very small one can make it incomprehensible.
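For example (map_surface and screen are placeholders for your own surfaces), scaling a pre-rendered map down before blitting might look like this:

import pygame

# Smooth, anti-aliased scale to an explicit size:
small_map = pygame.transform.smoothscale(map_surface, (800, 600))

# Or rotate and scale in one call (angle in degrees, scale as a float factor):
zoomed_map = pygame.transform.rotozoom(map_surface, 0, 0.25)

screen.blit(small_map, (0, 0))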

2D Game Engine - Implementing a Camera

So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from this, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. Now I want to create a camera, so that the map object can be larger than the display. To do this, I've figured I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and even read a few other similar posts, but I also need to know: will I have to restructure all of my game code to work this in? My first attempt was to make all objects move on the screen when the player moves, but I feel that there is a better way to do this, as this screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
Do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might also help to design the camera as an entity itself, so that you can update the camera's position just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
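As a minimal sketch of that split (all names here are placeholders), the camera just stores an offset and converts world coordinates to display coordinates at draw time:

SCREEN_W, SCREEN_H = 800, 640

class Camera:
    def __init__(self):
        self.x, self.y = 0, 0

    def follow(self, player):
        # Keep the player's world position at the centre of the screen.
        self.x = player.world_x - SCREEN_W // 2
        self.y = player.world_y - SCREEN_H // 2

    def world_to_screen(self, wx, wy):
        return wx - self.x, wy - self.y

# In the draw loop (gameplay code never touches these screen coordinates):
#   camera.follow(player)
#   sx, sy = camera.world_to_screen(entity.world_x, entity.world_y)
#   screen.blit(entity.image, (sx, sy))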
Since I can't have a look into your code, I can't assess how useful this answer will be for you.
My approach for side scrollers, movable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map, or at least a big chunk of it. This way I have to blit only one surface per frame, which is already prepared.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
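A small sketch of that approach (level_w, level_h, tiles, camera_x, camera_y and screen are placeholders): pre-render the whole level once, then copy only the part covered by the camera each frame via blit's area argument.

import pygame

# Pre-render the whole level once (done outside the main loop).
level_surface = pygame.Surface((level_w, level_h))
for tile in tiles:
    level_surface.blit(tile.image, (tile.x, tile.y))

# Each frame: copy just the visible window onto the screen.
view = pygame.Rect(camera_x, camera_y, 800, 640)
screen.blit(level_surface, (0, 0), area=view)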
Feel free to ask for more details, if you deem it useful :)

Python (Jython) Playing notes from pixels in picture

This is from a class assignment:
This program is about listening to colors. We will treat pictures as piano scores.
Write a function called listenToPicture that takes one picture as an argument. It first shows the picture. Next, it will loop through every 4th pixel in every 4th row and do the following. It will compute the total of the red, green and blue levels of the pixel, divide that by 9, then add the result to 24. That number will be the note number played by playNote.
That means that the darker the pixel, the lower the note; the lighter the pixel, the higher the note. It will play that note at full volume (127) for a tenth of a second (100 milliseconds). Every time it moves to a new row, it prints out the row number (y value) on the console.
Your main function will ask the user to select a file with a picture. It will print the number of notes to be played (which is the number of pixels in the picture divided by 16; why?). It will then call the listenToPicture function.
Ok I edited in what I have so far and the only thing I haven't figured out (I believe) is how to print the number of notes in the main function. By the way, thanks to everyone who helped. You guys are amazing. Is there a place to donate to this site?
def main():
    pic = makePicture(pickAFile())
    show(pic)
    listenToPicture(pic)

def listenToPicture(pic):
    w = getWidth(pic)
    h = getHeight(pic)
    for y in range(0, h, 4):            # every 4th row
        printNow(str(y))                # print the row number when moving to a new row
        for x in range(0, w, 4):        # every 4th pixel in that row
            px = getPixel(pic, x, y)
            r = getRed(px)
            g = getGreen(px)
            b = getBlue(px)
            tot = ((r + g + b) / 9) + 24    # darker pixel -> lower note
            playNote(tot, 100, 127)         # note number, 100 ms, full volume
Robbie is right for the width/height for loops.
The loop you are using to get the pixels and play the notes looks as if it is getting ALL the pixels and playing them all every time you get a unique x and y. What you should be doing is be getting the pixel at (x,y) then pulling out the rgb values and calling play note on that. You really shouldn't even need the 3rd for loop. You're not too far off. Try writing the problem out in logical steps in plain English. I find that helps a ton before I start coding.
Good Luck.
You asked about similar things before. Well, since you didn't put any code in about actually retrieving the pixel value, I'll assume that you still aren't able to do that. I know this is going way beyond your question, but last time you were pretty vague about your question and indicated that you needed more help than just what you had asked. If any of this is not necessary then just ignore it. I'm just trying to offer some advice and you can take it or leave it.
In case you haven't figured out how to read a pixel, I recommend using PIL. It has functions for opening images documented here. Then you can access a pixel in the image by its x and y value using getpixel which is documented on the same page.
For playing the note I would recommend looking into the PyAudio module and just making your own sinusoids of various frequencies (depending on the magnitude of the pixel) that you write to an open audio stream. There might be better packages for this part, but this is what I have used in my small adventures in Python audio.
For the audio stuff, I would try just outputting a sound at a fixed frequency before trying to actually emit a varying frequency.
Edit:
Your loops look better now so I took out my stuff about your loops.
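If it helps, here is a rough sketch of the PIL and PyAudio pieces mentioned above; the file name and the pixel-to-frequency mapping are made-up placeholders, and this is an alternative to the assignment's JES functions, not part of them.

import math, struct
import pyaudio
from PIL import Image

img = Image.open("picture.jpg").convert("RGB")
r, g, b = img.getpixel((0, 0))                 # read one pixel's RGB values
freq = 220 + (r + g + b)                       # arbitrary mapping: brighter pixel -> higher pitch

RATE = 44100
samples = [math.sin(2 * math.pi * freq * t / RATE) for t in range(RATE // 10)]  # 0.1 s tone
data = b"".join(struct.pack("<f", s) for s in samples)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
stream.write(data)
stream.stop_stream()
stream.close()
p.terminate()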
