I am rendering Mandelbrot fractal on a pygame surface from a numpy array.
When I generate a 10k px * 10k px image and save it with pylab as a 10 * 10 inch figure at 1000 dpi, I get a 10k-pixel image that renders quite well when Windows' built-in photo app displays it with zoom adjustment.
In pygame, the image looks quite ugly, although it is displayed at the same size:
I'm using this code:
pygame.init()
display = pygame.display.set_mode((1000, 1000))
surf = pygame.surfarray.make_surface(gimage)
surf = pygame.transform.rotate(surf, 90)
surf = pygame.transform.scale(surf, (1000, 1000))
How would one set the pygame image size and adjust the DPI?
scale() is a "fast scale operation" and doesn't use resampling.
There is also smoothscale(), which uses a different algorithm.
It may give you a better result.
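As a rough sketch of the difference (the 100-px source surface and the target size are made up for illustration; the dummy video driver just lets the snippet run headless):

```python
import os
os.environ.setdefault("SDL_VIDEODRIVER", "dummy")  # lets the sketch run headless
import pygame

src = pygame.Surface((100, 100), depth=32)
src.fill((200, 50, 50))

# Nearest-neighbour: fast, but blocky
fast = pygame.transform.scale(src, (1000, 1000))

# Filtered scaling: slower, but much smoother (needs a 24/32-bit surface)
smooth = pygame.transform.smoothscale(src, (1000, 1000))

print(fast.get_size(), smooth.get_size())
```

For your case (a 10k-pixel fractal shrunk to 1000 px), smoothscale's filtering is what the photo app is effectively doing for you.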
You can also use PIL/Pillow's resize() with different resampling methods.
You can also try cv2.resize() from OpenCV.
Yesterday there was a question about how to use CV2 with PyGame.
I am using Python 3.10 and the moviepy library to process videos. I need to scale up (zoom) a video without changing its resolution. There are a lot of examples of using moviepy's resize method, but it changes only the resolution.
Are there any options of scaling video with moviepy or maybe you can suggest some solutions with openCV?
To achieve a zoomed-in video, we may combine scaling and cropping.
Example:
from moviepy.editor import VideoFileClip
clip = VideoFileClip("input.mp4")
width, height = clip.size
resized_and_cropped_clip = clip.resize(2).crop(x1=width//2, y1=height//2, x2=width*3//2, y2=height*3//2)
resized_and_cropped_clip.write_videofile("output.mp4")
resize(2) - resize the video by a factor of 2 in each axis.
crop(x1=width//2, y1=height//2, x2=width*3//2, y2=height*3//2) - crop a rectangle with the original image size around the center of the resized video.
Alternatively, we may first crop and then resize.
This is more efficient, but may result in minor degradation at the frame's margins:
from moviepy.editor import VideoFileClip
clip = VideoFileClip("input.mp4")
width, height = clip.size
resized_and_cropped_clip = clip.crop(x1=width//4, y1=height//4, x2=width*3//4, y2=height*3//4).resize(2)
resized_and_cropped_clip.write_videofile("output.mp4")
The above examples zoom in by a factor of 2.
For zooming by another factor, we have to adjust the computation of the x1, y1, x2, y2 arguments.
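That arithmetic can be sketched as a small helper (pure Python, no moviepy needed; the function name is made up for illustration):

```python
def zoom_crop_box(width, height, factor):
    """Crop box (x1, y1, x2, y2), in the coordinates of the clip already
    resized by `factor`, that keeps a centred window of the original size."""
    new_w, new_h = width * factor, height * factor
    x1 = (new_w - width) // 2
    y1 = (new_h - height) // 2
    return x1, y1, x1 + width, y1 + height

# factor 2 on a 1920x1080 clip reproduces the x1=width//2 ... values above
print(zoom_crop_box(1920, 1080, 2))  # → (960, 540, 2880, 1620)
```

The returned tuple would then be passed to crop() after resize(factor).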
I'm trying to get my screen size using python. I keep getting the incorrect value because my code is taking into account the scale factor. For example: My screen resolution is set to: 2736 x 1824. My scale factor is 200% so when I execute my code I get 1368 x 912.
from win32api import GetSystemMetrics

width = GetSystemMetrics(0)
height = GetSystemMetrics(1)
print('Width:', width)
print('Height:', height)
Is there any way I can get the resolution as shown in my Windows settings, without the scale factor? I want to be able to read 2736 x 1824.
import ctypes
scaleFactor = ctypes.windll.shcore.GetScaleFactorForDevice(0) / 100
You can get the scale factor via shcore using ctypes.
Then from this you can calculate the real resolution.
e.g. a scaleFactor of 1.25 means 125%.
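The combination is then simple arithmetic (the helper name is made up; the 1368 x 912 figures are the ones from the question):

```python
def real_resolution(scaled_w, scaled_h, scale_factor):
    """Undo Windows' DPI virtualization: physical = reported * scale factor."""
    return round(scaled_w * scale_factor), round(scaled_h * scale_factor)

# GetSystemMetrics reports 1368 x 912 at a 200% scale factor
print(real_resolution(1368, 912, 2.0))  # → (2736, 1824)
```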
Your application is not DPI aware. Windows has to lie to the application about the dimensions and magnify the GUI to fit the scale.
A quick fix:
import ctypes
ctypes.windll.user32.SetProcessDPIAware()
For an executable it is recommended to set DPI awareness in the manifest file.
I have implemented pyqtgraph inside QGraphicsView in PyQt5. When I display the image the following way, it is stretched out and expands in the same aspect ratio as the screen. How do I fix this?
image = pg.ImageItem(asarray(Image.open('pic.png')) )
self.graphicsView.addItem(image)
image.rotate(270)
EDIT: found out how to rotate image, so I updated question with the solution. Now I am just trying to scale it properly.
You probably want something like:
import pyqtgraph as pg
from PIL import Image
from numpy import asarray
app = pg.mkQApp()
# Set up a window with ViewBox inside
gv = pg.GraphicsView()
vb = pg.ViewBox()
gv.setCentralItem(vb)
gv.show()
# configure view for images
vb.setAspectLocked()
vb.invertY()
# display image
img_data = asarray(Image.open('/home/luke/tmp/graph.png'))
image = pg.ImageItem(img_data, axisOrder='row-major')
vb.addItem(image)
The important pieces here that set the image scaling/orientation are:
using ImageItem(axisOrder='row-major') because image files are stored in row-major order
vb.invertY() because image files have the +y axis pointing downward
and vb.setAspectLocked() to keep the pixels square
I used np.rot90() instead; it's much faster and works with Cython.
image = pg.ImageItem(np.rot90(np.asarray(Image.open('pic.png'))))
Below is my code. It is a server program that receives a stream of bitmaps from the client, and I want to display the bitmaps in real time. However, "frame.set_data(im)" is the bottleneck of my code and I only get 5 FPS. Disabling that line, I get around 15 FPS for receiving images (the display is of course disabled without set_data()).
I looked at other answers, and I know I have to perform blitting to speed things up with matplotlib. However, I have no idea how to perform blitting with bitmaps. Could someone help me speed this up?
import base64
import io

import matplotlib
matplotlib.use('TKAgg')
import matplotlib.pyplot as plt

frame = None
while 1:
    # Decode and save the image (data arrives from the socket)
    imgdata = base64.b64decode(data)
    stream = io.BytesIO(imgdata)

    # Display realtime gameplay
    im = plt.imread(stream, "bmp")
    if frame is None:
        print("Start Rendering..")
        frame = plt.imshow(im)
        plt.show()
    else:
        frame.set_data(im)
    plt.pause(0.00000001)
Thanks @kazemakase.
I was able to achieve the desired speed with pygame.
Below is my code.
import base64
import io

import pygame

pygame.init()
screen = pygame.display.set_mode(size)
while 1:
    # Decode the image (data arrives from the socket)
    imgdata = base64.b64decode(data)
    stream = io.BytesIO(imgdata)

    pygame.event.get()
    img = pygame.image.load(stream, 'bmp')
    screen.blit(img, (0, 0))
    pygame.display.flip()  # update the display
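For reference, the matplotlib blitting approach the question asked about can be sketched like this (Agg backend and random frames just for illustration; with an interactive backend such as TkAgg you would blit into a live window instead):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, just for the sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
artist = ax.imshow(np.zeros((64, 64)), vmin=0, vmax=1)
fig.canvas.draw()                                 # one full draw
background = fig.canvas.copy_from_bbox(ax.bbox)   # cache everything static

for _ in range(5):                                # per-frame fast path
    fig.canvas.restore_region(background)
    artist.set_data(np.random.rand(64, 64))
    ax.draw_artist(artist)                        # redraw only the image
    fig.canvas.blit(ax.bbox)                      # push just that region
```

The speed-up comes from redrawing only the image artist each frame instead of the whole figure, which is what set_data() plus pause() was doing.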
I am trying to use overlays in pygame to display video. The trouble is that my frames are loaded as RGB Surface()s while Overlay().display() requires YUV format.
I saw that the pygame.camera module contains a colorspace() function that should be able to convert an RGB Surface() to a YUV one. Does anyone know how to do the trick, both the conversion and the displaying?
pygame.camera.colorspace() is not very well documented.
If this doesn't work, does anyone know how to do this by using PIL to convert to YUV?
I haven't had much time to play with .Overlay()
but the colorspace function seems to go as follows:
yuv_surface = pygame.camera.colorspace(rgb_surface,"YUV")
This example runs without error:
import pygame
import pygame.camera
pygame.init()
pygame.camera.init()
screen = pygame.display.set_mode((400,400))
rgb_surface = pygame.Surface((400,400))
yuv_surface = pygame.camera.colorspace(rgb_surface,"YUV")
screen.blit(yuv_surface,(0,0))
clock = pygame.time.Clock()
while True:
    pygame.display.flip()
    clock.tick(30)
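On the PIL side of the question, Pillow can do the colourspace conversion via its YCbCr mode (note that Overlay.display() wants separate Y/U/V planes, typically with subsampled U and V, so splitting the channels is only part of the job; the solid-colour test image is made up):

```python
from PIL import Image

rgb = Image.new("RGB", (400, 400), (200, 50, 50))
yuv = rgb.convert("YCbCr")   # Pillow's YUV-style colourspace
y, u, v = yuv.split()        # one full-resolution plane per channel
print(yuv.mode, y.size)
```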