Scaling down a gif using pyglet - python

I am currently trying to make a simple image viewer that lets me view images or gifs. I've done most of the work and it's coming out how I'd like it, but I'd also like to be able to view gifs that are too big for my screen. I have a normal 1080p monitor, but the images are 1900x1300, which I understand is an odd size for gifs. The problem is that pyglet has no obvious way to scale down gifs when drawing them. I'm using sprites, but scaling the sprite down just raises an error, since the underlying image hasn't actually changed and is still 1900x1300.
What I need is a method to take a gif file, scale it down by 1/2, and render that gif using pyglet. I could probably use other libraries, but I'm trying to keep the project small.
Thanks!
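A minimal sketch of the sprite route, assuming a reasonably recent pyglet and an illustrative file name: load the gif as an Animation and scale the Sprite that draws it, rather than the image itself. If Sprite.scale still errors for animations in your version, the frames would have to be resized before the animation is built.

import pyglet

window = pyglet.window.Window(width=960, height=650)

animation = pyglet.image.load_animation("big.gif")   # the 1900x1300 source gif
sprite = pyglet.sprite.Sprite(img=animation)
sprite.scale = 0.5                                   # draw the animation at half size

@window.event
def on_draw():
    window.clear()
    sprite.draw()

pyglet.app.run()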

Related

Tkinter screwing with PIL image transparency?

I made a custom image viewing program in Python since I couldn't find one that worked how I wanted, and I know it at least USED to work with transparent gifs, but now it doesn't. The effects vary from gif to gif, but it is always an issue with the transparency. Doing some testing with .save, I found that the PIL image is completely fine: I can export it as a png with absolutely no issues and perfect transparency. That said, despite making the program I never really learned Tkinter, since I didn't need it beyond simple canvas clearing and updating, so I have no clue how to test beyond PIL. I believe the issue is in these lines:
image = ImageTk.PhotoImage(GifFrameSized) #GifFrameSized is the resized GIF
imagesprite = canvas.create_image(show.w/2,show.h/2,image=image) #w and h are the width and height of the monitor
root.update_idletasks()
root.update()
canvas.pack()
I genuinely have no idea how the issue could be coming from any of these, but I was able to use PIL to save a png of the frame on the line immediately above "image = ImageTk.PhotoImage(GifFrameSized)" and it looked fine, so I have to imagine it's somewhere in those lines.
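For reference, a self-contained sketch of the display path those lines are a fragment of (window size, file name, and resize target are made up here); it keeps the frame in RGBA so the alpha channel has a chance to survive the ImageTk conversion:

import tkinter as tk
from PIL import Image, ImageTk

root = tk.Tk()
canvas = tk.Canvas(root, width=800, height=600, highlightthickness=0)
canvas.pack()

frame = Image.open("animation.gif").convert("RGBA")       # keep the alpha channel
GifFrameSized = frame.resize((400, 300), Image.LANCZOS)

image = ImageTk.PhotoImage(GifFrameSized)                 # keep a reference, or Tk may drop it
imagesprite = canvas.create_image(400, 300, image=image)

root.update_idletasks()
root.update()
root.mainloop()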

Draw multiple PIL.Image in python

I have a Python function that draws a fractal to a PIL.Image, but I want to vary the parameters of the function in real time and plot it to the screen. How can I plot the image and keep updating the plotted image each time the parameters of the function change?
Use matplotlib, wxPython, PyQt, PyGame, Tk/TCL or some other lib to display the image.
Draw as many images as you need, whenever you need, using any library you like, and then display them on screen using one of the above or some other GUI library.
If you are working with plots and math functions, matplotlib will help you most. You might even use it exclusively, forgoing PIL completely.
If you want to stick with PIL only, you will have to write your own show() function that uses some external imaging software capable of seamlessly switching to another image when you send it a new one. Perhaps IrfanView would do.
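A rough sketch of the matplotlib route: fractal() below is a stand-in for the real drawing function, and the loop just swaps each newly rendered frame into an existing imshow artist while the parameter varies.

import numpy as np
import matplotlib.pyplot as plt
from PIL import Image

def fractal(param):
    # placeholder for the real fractal function: returns a PIL.Image
    data = (np.indices((200, 200)).sum(axis=0) * param) % 255
    return Image.fromarray(data.astype("uint8"))

plt.ion()                                         # interactive mode: draw without blocking
artist = plt.imshow(np.asarray(fractal(1.0)), cmap="gray")

for param in np.linspace(1.0, 5.0, 50):
    artist.set_data(np.asarray(fractal(param)))   # swap in the newly rendered frame
    plt.pause(0.05)                               # let the GUI event loop redraw

plt.ioff()
plt.show()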

Pygame, change resolution of my whole game

I have designed my whole pygame game to work at a 1920x1080 resolution.
However, I have to adapt it for smaller resolutions.
There are a lot of hardcoded values in the code.
Is there a simple way to change the resolution, like resizing the final image at the end of each loop, just before drawing it?
You can use pygame.transform.scale, or better (but less efficient) pygame.transform.smoothscale.
To do that, just change the surface you draw on from the display surface (screen) to a generic surface. Then resize it and blit it to the screen, as in the sketch below.
I can show you some code if you don't understand how it works. Just ask.
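A rough sketch of that idea (the window size here is arbitrary): the game keeps drawing onto a 1920x1080 surface with its existing hardcoded coordinates, and only the finished frame is scaled down to the real window.

import pygame

BASE_SIZE = (1920, 1080)            # resolution the game logic was written for
WINDOW_SIZE = (1280, 720)           # the actual, smaller window

pygame.init()
screen = pygame.display.set_mode(WINDOW_SIZE)
canvas = pygame.Surface(BASE_SIZE)  # draw here with the existing hardcoded values
clock = pygame.time.Clock()

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    canvas.fill((30, 30, 30))
    pygame.draw.circle(canvas, (200, 50, 50), (960, 540), 100)   # example drawing at 1080p coords

    # scale the finished frame to the window; smoothscale looks nicer but is slower
    screen.blit(pygame.transform.smoothscale(canvas, WINDOW_SIZE), (0, 0))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()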
I usually create a base resolution and then, whenever the screen is resized, I scale all the assets and surfaces by ratios (see the sketch after this answer).
This works well if you have high-resolution assets that you scale down, but smaller images would pixelate.
You can also create a separate asset file for each resolution, and whenever your resolution goes above one of the available asset resolutions you can switch images. You can think of it in the context of a CSS media query to understand it better.
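A rough sketch of the ratio idea: keep the original full-resolution surfaces around and produce scaled copies whenever the window size changes.

import pygame

BASE_W, BASE_H = 1920, 1080          # the base resolution the assets were made for

def rescale_assets(original_assets, window_size):
    # Return scaled copies of the full-resolution assets; call this on every resize.
    ratio_x = window_size[0] / BASE_W
    ratio_y = window_size[1] / BASE_H
    return {
        name: pygame.transform.smoothscale(
            surface,
            (int(surface.get_width() * ratio_x), int(surface.get_height() * ratio_y)),
        )
        for name, surface in original_assets.items()
    }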

Can you show a static image after some animation loops in a gif?

I have an animated gif that I want to loop n times (animated.gif). After the loop, I want to show a static image (static.gif). The animation will be displayed over the web, so the file size needs to be as small as possible.
I tried implementing it with ImageMagick by adding the static image with a zero delay ...
convert -loop 3 animated.gif -delay 0 static.gif newanim.gif
Although the static image is shown at the end, the problem is that after every iteration static.gif is shown for a split second.
Another thing I tried was to add the animation 3 times and the static image at the end. This works perfectly, but the file becomes too large, especially if the animation is long and looped many times. For instance, a 6.1 MB animation becomes ~18 MB.
convert -loop 1 animated.gif animated.gif animated.gif static.gif newanim.gif
I'm using Python in a Linux environment to implement this, so programmatic ways of doing this instead of ImageMagick would work as well.
EDIT: I failed to mention a constraint: it needs to work without any client-side programming (JavaScript, CSS, etc.). It needs to be a purely gif-based solution. This constraint makes it different from How to stop an animated gif from looping
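Since the question mentions preferring a programmatic Python route, here is a rough Pillow sketch of the second approach above (repeat the animation n times, then append the static frame); it shares the same file-size drawback, because the repeated frames are stored again.

from PIL import Image, ImageSequence

n = 3
anim = Image.open("animated.gif")
static = Image.open("static.gif").convert("RGBA")

frames, durations = [], []
for _ in range(n):
    for frame in ImageSequence.Iterator(anim):
        frames.append(frame.convert("RGBA"))
        durations.append(frame.info.get("duration", 100))

frames.append(static)
durations.append(5000)                 # hold the static frame for five seconds

frames[0].save(
    "newanim.gif",
    save_all=True,
    append_images=frames[1:],
    duration=durations,
    # no loop argument: without the Netscape loop extension most viewers play it once
)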
No, you can't. The animated GIF format doesn't provide that ability.
There are various ways to do what you want on a web page, but you'll need a separate file for the static image, and you'll need some code (e.g. JavaScript, or maybe CSS) to display the static image after the desired number of animation loops.
I'm pretty sure that your problem with the resulting gif size is in the tools you are using. I created two samples, one with the animation and another with the animation repeated 2 times, and got the same size for both. Check it yourself.
I used ScreenToGif; it is pretty buggy and only works on Windows, but it does its job and can open an existing gif or a list of images for editing.
If you need a solution for Linux, take a look at FFmpeg, but I haven't used it myself.

streaming video from camera using pyqt4

I am using a third-party library that uses a circular buffer for image data and video. It has a method for getting or popping the last image. I was wondering what the best way would be to implement video functionality in PyQt for this. Is there some video widget with a callback function that I could use? Or do I have to implement parallel processing on my own somehow? In parallel to this, suggestions on how this would be implemented in Qt, if you don't know how to implement it in PyQt, would also be very much appreciated.
Thanks in advance!
I would pop the last image (from the circular buffer) and load it into a QPixmap. This allows you to put the image into a form that a PyQt4 GUI will be able to display.
Depending on your library's image format (straightforward BMP? JPG? PNG? raw pixels?), you can load the data into the QPixmap in one of two ways.
First: do it pixel by pixel (set the width and height of the image and copy each pixel's value over one by one). This method is slow and I'd only resort to it if necessary.
Second: if the image is being stored in a common format (ones that are 'supported' are listed here), this becomes trivial.
Then after the image is loaded in the QPixmap, I would use QLabel.setPixmap() to display the image.
Do this from a QTimer slot at a fixed rate and you'll be able to display your images in a PyQt4 GUI, as in the sketch below.
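A rough sketch of that QTimer approach; get_last_image() below is a stand-in for whatever pop/get call the third-party buffer exposes, and it is assumed here to hand back raw RGB bytes plus a width and height.

import sys
from PyQt4 import QtCore, QtGui

def get_last_image():
    # placeholder for the third-party buffer call: returns (raw RGB bytes, width, height)
    w, h = 640, 480
    return b"\x00" * (w * h * 3), w, h

class VideoWidget(QtGui.QLabel):
    def __init__(self):
        super(VideoWidget, self).__init__()
        self.timer = QtCore.QTimer(self)
        self.timer.timeout.connect(self.update_frame)
        self.timer.start(33)                      # roughly 30 frames per second

    def update_frame(self):
        data, w, h = get_last_image()
        image = QtGui.QImage(data, w, h, 3 * w, QtGui.QImage.Format_RGB888)
        self.setPixmap(QtGui.QPixmap.fromImage(image))   # QPixmap copies the frame data

app = QtGui.QApplication(sys.argv)
widget = VideoWidget()
widget.show()
sys.exit(app.exec_())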
