I'm trying to make a simple game on an FPGA board and I was hoping to use sprites for background and player characters. To do this I was told to use either Python or Matlab to take an image and convert it to a sprite. I am trying to convert the images to have 16 bits per pixel since the memory on the board can hold 16 bits per memory location.
For the past few days I've been searching for a tutorial or some sort of example to help me figure this out, but I've had no real luck. So I'm just hoping someone can give me advice on how to accomplish this or just point me in the right direction.
I don't know if this helps, but I've discussed this with another person, and the advice they gave me was to load the image into a matrix and divide each channel by 8, so each 8-bit R, G and B value becomes a 5-bit value. Then I'll have 15 bits for color, 5 each for R, G and B, plus a 16th bit for transparency. I should then write this matrix to a hex file and put it into the board's memory. There is a program that edits the hex file into the correct format, so that part isn't something I have to worry about.
Does this approach seem right? Or is there a better way? Thanks in advance!
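In case it helps anyone judge the idea, here is roughly what I think that advice means in Python, using Pillow and NumPy (the file names and the exact bit order, transparency in the top bit followed by 5 bits each of R, G, B, are just my guesses and would need to match whatever the board-side tool expects). Dividing an 8-bit channel by 8 is the same as shifting it right by 3:

# Rough sketch: pack an image as 1-bit transparency + RGB555, one 16-bit word per pixel.
# "sprite.png" and "sprite.hex" are placeholder file names.
from PIL import Image
import numpy as np

img = Image.open("sprite.png").convert("RGBA")
data = np.array(img, dtype=np.uint16)          # shape: (height, width, 4)

r = data[:, :, 0] >> 3                         # 8-bit -> 5-bit (divide by 8)
g = data[:, :, 1] >> 3
b = data[:, :, 2] >> 3
a = (data[:, :, 3] > 0).astype(np.uint16)      # 1 = opaque, 0 = transparent

pixels = (a << 15) | (r << 10) | (g << 5) | b  # A RRRRR GGGGG BBBBB

with open("sprite.hex", "w") as f:
    for word in pixels.flatten():
        f.write("{:04X}\n".format(word))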
I'm very new to image processing in Python (and not massively adept at Python in general), so forgive me for how stupid this may sound. I'm working with an AI for object detection, and I need to submit 1000x1000 pixel images to it that have been divided up from larger images of varying widths and heights (not necessarily divisible by 1000, but I have a way of padding out images smaller than 1000x1000). For this to work, I need a 200 pixel overlap on each segment, or the AI may miss objects.
I've tried a host of methods. I've managed to divide the image up using the approaches suggested in Creating image tiles (m*n) of original image using Python and Numpy and how can I split a large image into small pieces in python (plus a few others that do effectively the same thing in different ways). I've also been able to make a grid and get the tile names from it, using How to determine coordinate of grid elements of an image, but I have not been able to get the overlap working in any of them, as they all just tile the image normally.
Basically what I'm saying is that I've found one way to cut the images up that works, and one way to get the tile coordinates, but I am utterly failing at putting it all together. Does anyone have any advice on what to do here?
So far I've not found a direct approach to my end goal online, and I've tried mucking around with different scripts (like the ones listed above), but I feel like I'm barking up totally the wrong tree.
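For what it's worth, this is the kind of thing I've been picturing, based on a step of 1000 - 200 = 800 px between tile origins ("large_image.png" is just a placeholder, and I believe Pillow's crop() fills anything past the edge with black, so the edge tiles still come out 1000x1000); I just can't tell whether this is a sensible way to combine the tiling and the overlap:

# Rough sketch: 1000x1000 tiles with 200 px overlap between neighbours.
from PIL import Image

TILE = 1000
OVERLAP = 200
STEP = TILE - OVERLAP                     # 800 px between tile origins

img = Image.open("large_image.png")       # placeholder file name
w, h = img.size

for top in range(0, h, STEP):
    for left in range(0, w, STEP):
        box = (left, top, left + TILE, top + TILE)
        tile = img.crop(box)              # out-of-bounds area is padded
        tile.save("tile_{}_{}.png".format(left, top))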
Recently I wanted to make a script that converts images into game levels (the game is Antiyoy online). The game can export levels in a special text form, which makes them easy to edit just by changing specific parts of the text.
I am rather a beginner at Python and don't know much yet. I haven't started writing anything because I can't figure out which module I should use for reading the image. I want it to read the color of a chosen pixel in the image (and, if possible, average the color of multiple pixels for scaling) and output it in hex or some other form that I can convert to the closest of the colors I've selected.
Doing some research, I'm pretty sure NumPy can do this, but I have no experience with it, and there are probably more specialized modules for the job.
If what I'm asking for is hard to understand, I'm open to questions. Thank you in advance.
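To make it a bit more concrete, this is roughly what I'd like the script to do, written with Pillow and NumPy since those keep coming up in my research (the file name and the palette are just example values):

# Rough sketch: average a small block of pixels and snap it to the nearest
# color in a hand-picked palette.
from PIL import Image
import numpy as np

PALETTE = [(0, 100, 0), (30, 80, 200), (220, 200, 80)]    # example colors

img = Image.open("level_source.png").convert("RGB")       # placeholder name
pixels = np.array(img, dtype=np.float64)                  # (height, width, 3)

def average_block(x, y, size=4):
    """Average color of a size x size block whose top-left corner is (x, y)."""
    block = pixels[y:y + size, x:x + size]
    return block.reshape(-1, 3).mean(axis=0)

def nearest_palette_color(rgb):
    """Return the palette entry closest to rgb (plain Euclidean distance)."""
    distances = [sum((a - b) ** 2 for a, b in zip(rgb, p)) for p in PALETTE]
    return PALETTE[distances.index(min(distances))]

avg = average_block(0, 0)
print("average: #{:02x}{:02x}{:02x}".format(*[int(c) for c in avg]))
print("nearest palette color:", nearest_palette_color(avg))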
First of all, sorry for my bad English :)
So what I'm trying to accomplish is to convert a specific image into coordinates, but unfortunately I can't find a good and easy way to do it, so that's why I'm asking here :)
For example, I want to convert this image (all of it or just specific parts) into coordinates:
I need this to move the mouse accurately, since there are many similar pixels in the image, unless of course anyone knows an easier way ;)
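In case my wording isn't clear, this is roughly the kind of thing I mean, using Pillow and NumPy (the file name and the target color are just examples); I don't know if it's a sensible approach, though:

# Rough sketch: collect the (x, y) coordinates of every pixel matching a color.
from PIL import Image
import numpy as np

img = Image.open("screenshot.png").convert("RGB")   # placeholder file name
pixels = np.array(img)                              # shape: (height, width, 3)

target = (255, 0, 0)                                # the color I'm looking for
mask = np.all(pixels == target, axis=-1)            # True where all channels match
ys, xs = np.nonzero(mask)                           # row (y) and column (x) indices

coordinates = list(zip(xs.tolist(), ys.tolist()))
print(coordinates[:10])                             # first few matching (x, y) pairs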
So I've been making a game using Python, specifically the PyGame module. Everything has been going fairly well (except Python's speed, am I right :P), and I've got a nice list of accomplishments from it, but I just ran into a... speedbump. Maybe a mountain. I'm not too sure yet. The problem is:
How do I go about implementing a Camera with my current engine?
That probably means nothing to you, though, so let me explain what my current engine is doing: I have a spritesheet that I use for all images. The map is made up of a double array of Tile objects, which fills up the display (800 x 640). The map also contains references to all Entities and Particles. Now I want to create a camera, so that the map can be larger than the display. To do this I've figured I'll need some kind of camera that follows the player (with the player at the center of the screen). I've seen this implemented before in games, and I've even read a few similar posts, but I also need to know: will I have to restructure all my game code to work this in? My first attempt was to make every object move on the screen when the player moves, but I feel there's a better way to do this, since that screws up collision detection and such.
So, if anyone knows any good references to problems like this, or a way to fix it, I'm all ears... er.. eyes.
Thanks
You may find this link to be of interest.
In essence, what you need to do is to distinguish between the "actual" coordinates, and the "display" coordinates of each object.
What you would do is do the bulk of the work using the actual coordinates of each entity in your game. If it helps, imagine that you have a gigantic screen that can show everything at once, and calculate everything as normal. It might help if you also designed the camera to be an entity, so that you can update the position of your camera just like any other object.
Once everything is updated, you go to the camera object, and determine what tiles, objects, particles, etc. are visible within the window, and convert their actual, world coordinates to the pixel coordinates you need to display them correctly.
If this is done correctly, you can also do things like scale and otherwise modify the image your camera is displaying without affecting gameplay.
In essence, you want to have a very clear distinction between gameplay and physics logic/code, and your rendering/display code, so your game can do whatever it wants, and you can render it however you want, with minimal crossover between the two.
So the good news is, you probably don't need to change anything about how your game itself works. The bad news is, you'll probably have to go in and rewrite your rendering/drawing code so that everything is drawn relative to the camera, not to the world.
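As a very rough sketch of what that rewrite might look like (assuming a Pygame setup where each entity keeps its world position in entity.rect and its image in entity.image, which are just my guesses at your structure):

# Minimal camera sketch: game logic stays in world coordinates; only drawing shifts.
SCREEN_W, SCREEN_H = 800, 640

def camera_offset(player_rect):
    # Offset that keeps the player centered on the screen.
    return (player_rect.centerx - SCREEN_W // 2,
            player_rect.centery - SCREEN_H // 2)

def draw(screen, entities, player_rect):
    offset_x, offset_y = camera_offset(player_rect)
    for entity in entities:
        screen.blit(entity.image, (entity.rect.x - offset_x,
                                   entity.rect.y - offset_y))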
Since I can't have a look at your code, I can't tell how useful this answer will be for you.
My approach for side scrollers, moveable maps, etc. is to blit all tiles onto a pygame.Surface spanning the dimensions of the whole level/map/etc., or at least a big chunk of it. This way I have to blit only one surface per frame, and it is already prepared.
For collision detection I keep the x/y values (not the entire rect) of the tiles involved in a separate list. Updating is then mainly shifting numbers around and not surfaces anymore.
Feel free to ask for more details, if you deem it useful :)
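Very roughly, and only as a sketch since I don't know your code (the tile size, level size and the tiles container are placeholders):

import pygame

TILE = 32                                    # placeholder tile size in pixels
LEVEL_W, LEVEL_H = 100, 100                  # placeholder level size in tiles

# One big surface for the whole level, filled once at load time.
level_surface = pygame.Surface((LEVEL_W * TILE, LEVEL_H * TILE))
# for (tx, ty), tile_image in tiles.items():          # however you store tiles
#     level_surface.blit(tile_image, (tx * TILE, ty * TILE))

def draw_visible_part(screen, camera_x, camera_y):
    # Each frame, blit only the 800x640 window of the pre-rendered level.
    screen.blit(level_surface, (0, 0),
                area=pygame.Rect(camera_x, camera_y, 800, 640))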
This is from a class assignment:
This program is about listening to colors. We will treat pictures as piano scores.
Write a function called listenToPicture that takes one picture as an argument. It first shows the picture. Next, it will loop through every 4th pixel in every 4th row and do the following. It will compute the total of the red, green and blue levels of the pixel, divide that by 9, then add the result to 24. That number will be the note number played by playNote.
That means that the darker the pixel, the lower the note; the lighter the pixel, the higher the note. It will play that note at full volume (127) for a tenth of a second (100 milliseconds). Every time it moves to a new row, it prints out the row number (y value) on the console.
Your main function will ask the user to select a file with a picture. It will print the number of notes to be played (which is the number of pixels in the picture divided by 16; why?). It will then call the listenToPicture function.
Ok I edited in what I have so far and the only thing I haven't figured out (I believe) is how to print the number of notes in the main function. By the way, thanks to everyone who helped. You guys are amazing. Is there a place to donate to this site?
def main():
    pic = makePicture(pickAFile())        # ask the user to pick a picture file
    show(pic)
    listenToPicture(pic)

def listenToPicture(pic):
    w = getWidth(pic)
    h = getHeight(pic)
    for y in range(0, h, 4):              # every 4th row
        printNow(str(y))                  # print the row number as required
        for x in range(0, w, 4):          # every 4th pixel in that row
            px = getPixel(pic, x, y)
            r = getRed(px)
            g = getGreen(px)
            b = getBlue(px)
            tot = ((r + g + b) / 9) + 24  # darker pixel -> lower note
            playNote(tot, 100, 127)       # play for 100 ms at full volume
Robbie is right about the width/height for loops.
The loop you are using to get the pixels and play the notes looks as if it is getting ALL the pixels and playing them all every time you get a unique x and y. What you should be doing is getting the pixel at (x, y), then pulling out its RGB values and calling playNote on that. You really shouldn't even need the third for loop. You're not too far off. Try writing the problem out in logical steps in plain English; I find that helps a ton before I start coding.
Good Luck.
You asked about similar things before. Well, since you didn't put any code in about actually retrieving the pixel value, I'll assume that you still aren't able to do that. I know this is going way beyond your question, but last time you were pretty vague about your question and indicated that you needed more help than just what you had asked. If any of this is not necessary then just ignore it. I'm just trying to offer some advice and you can take it or leave it.
In case you haven't figured out how to read a pixel, I recommend using PIL. It has functions for opening images documented here. Then you can access a pixel in the image by its x and y value using getpixel which is documented on the same page.
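For example (the file name is a placeholder):

from PIL import Image

img = Image.open("picture.jpg")                  # placeholder file name
width, height = img.size
r, g, b = img.convert("RGB").getpixel((0, 0))    # color of the top-left pixel
print(r, g, b)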
For playing the note I would recommend looking into the PyAudio module and just making your own sinusoids of various frequencies (depending on the magnitude of the pixel) that you write to an open audio stream. There might be better packages for this part, but this is what I have used in my small adventures in Python audio.
For the audio stuff, I would try just outputting a sound at a fixed frequency before trying to actually emit a varying frequency.
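Something along these lines should do for that first step (a rough sketch using PyAudio and NumPy; the frequency is fixed at 440 Hz here, and you would later derive it from the pixel value):

import numpy as np
import pyaudio

RATE = 44100              # samples per second
DURATION = 0.1            # seconds, matching the 100 ms notes in the assignment
FREQ = 440.0              # Hz, fixed for now

t = np.arange(int(RATE * DURATION)) / RATE
samples = (0.5 * np.sin(2 * np.pi * FREQ * t)).astype(np.float32)

p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
stream.write(samples.tobytes())
stream.stop_stream()
stream.close()
p.terminate()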
Edit:
Your loops look better now so I took out my stuff about your loops.