I am currently working on a project where I need to take a 30x40 pixel screenshot of a specific area of my screen. This is not very hard to do, as there are plenty of methods that do that.
The issue I have is that I need to take about 10 to 15 screenshots per second at the size I mentioned. When I looked at some of these methods that capture the screen, I have seen that when you give them parameters for a smaller selection, there's cropping involved. So a full screenshot is taken, then the method crops it to the given size. That seems like a waste of resources if I'm only going to use a 30x40 image, especially considering I will take thousands of screenshots.
So my question is: Is there a method that ONLY captures a part of the screen, without capturing the whole screen and then cutting the desired section out of the big screenshot? I'm currently using this command:
im = pyautogui.screenshot(region=(0, 0, 30, 40))
The Python mss module (https://github.com/BoboTiG/python-mss, https://python-mss.readthedocs.io/examples.html) is what you are looking for: an ultra-fast, cross-platform multiple-screenshots module in pure Python using ctypes (MSS stands for Multiple Screen Shots). The screenshots are fast enough to capture frames from a video, and the smaller the part of the screen to grab, the faster the capture (so there is apparently no cropping involved). Check it out: mss.mss().grab() outperforms PIL.ImageGrab.grab() by far. Below is a code example showing how to get the raw pixel data of the screenshot (which allows you to detect changes):
import mss
from time import perf_counter as T

left = 0
right = 2
top = 0
btm = 2

with mss.mss() as sct:
    # the parameter for sct.grab() can be:
    monitor = sct.monitors[1]       # the entire screen
    bbox = (left, top, right, btm)  # the part of the screen to capture

    sT = T()
    sct_im = sct.grab(bbox)  # type: <class 'mss.screenshot.ScreenShot'>
    eT = T(); print(" >", eT - sT)  # > 0.0003100260073551908

    print(len(sct_im.raw), sct_im.raw)
    # 16 bytearray(b'-12\xff\x02DU\xff-12\xff"S_\xff')
    print(len(sct_im.rgb), sct_im.rgb)
    # 12 b'21-UD\x0221-_S"'
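For the 30x40 region from the question, a minimal capture loop might look like the sketch below; the bounding box values come from the question, while the burst length and the ~15 captures/second pacing are assumptions for illustration:
import time
import mss

with mss.mss() as sct:
    bbox = (0, 0, 30, 40)  # (left, top, right, bottom): a 30x40 region at the origin
    for _ in range(100):   # assumed burst of 100 captures
        im = sct.grab(bbox)  # grabs only this region, no full-screen shot
        # ... process im.rgb / im.raw here ...
        time.sleep(1 / 15)   # assumed pacing of ~15 captures/second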
I'm trying to render a cube (the default Blender scene) with a camera facing it. I have added a spotlight at the same location as the camera. The spotlight's direction also faces towards the cube.
When I render, location changes take effect for both the camera and the spotlight, but rotations don't. The scene context update is deprecated now. I have seen other update answers, but they don't seem to help.
I have done some workarounds and they seem to work, but this is not the correct way.
If I run the same set of commands twice (in a loop), I get the correct render.
If I run the script from Blender's Python console (only once), I get the correct render. But if the same code is run as a script inside Blender, the render is wrong again.
import pdb
import numpy as np
import bpy

def look_at(obj_camera, point):
    loc_camera = obj_camera.matrix_world.to_translation()
    direction = point - loc_camera
    rot_quat = direction.to_track_quat('-Z', 'Y')
    obj_camera.rotation_euler = rot_quat.to_euler()

data_path = 'some folder'
# Assume a big array of locations where the camera and spotlight need to be
# placed and then made to look towards the cube.
locs = np.array([0.00000000e+00, -1.00000000e+01, 3.00000011e-06])

obj_camera = bpy.data.objects["Camera"]
obj_other = bpy.data.objects["Cube"]
bpy.data.lights['Light'].type = 'SPOT'
obj_light = bpy.data.objects['Light']

loc = locs
i = 0

##### if I run the following lines two times, the correct render is obtained.
obj_camera.location = loc
obj_light.location = obj_camera.location
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())
bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)
You might need to call bpy.context.view_layer.update() (bpy.context.scene.update() in versions older than Blender 2.8) after changing the camera orientation with obj_camera.rotation_euler = rot_quat.to_euler(), and make sure that the layers that are going to be rendered are active when calling update() (see https://blender.stackexchange.com/questions/104958/object-locations-not-updating-before-render-python).
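A minimal sketch of where that call would go in the question's script (assuming Blender 2.8+; the surrounding lines are from the question):
# ... after positioning the camera and the light ...
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())

# Force the dependency graph to re-evaluate the new rotations before rendering.
bpy.context.view_layer.update()

bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)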
(A bit late ;-) but this was one of the rare questions I found for a related issue.)
I just got a Sick Tim 571 laser scanner. Since I'm new to ROS I wanted to test some stuff in an easy rospy implementation.
I thought that the code below would show me a live output of the laser measurements, like Rviz does (Rviz works fine for me), but in Python and with the possibility of using the measurements in my own code. Unfortunately, the output frame shows only one static measurement (from the time when the Python code was first started) over and over again.
If I run Rviz parallel to this Python code, I get a dynamically updated representation of the measured area.
I thought that the callback(data) function would be called with a new set of laser scanner data each time. But it seems it does not work the way I imagined. So the error is possibly located in laser_listener(), where the callback function is registered.
TL;DR
How can I use dynamically updated (live) laser scanner measurements in rospy?
import rospy
import cv2
import numpy as np
import math
from sensor_msgs.msg import LaserScan

def callback(data):
    frame = np.zeros((500, 500, 3), np.uint8)
    angle = data.angle_min
    for r in data.ranges:
        # change infinite values to 0
        if math.isinf(r):
            r = 0
        # convert angle and radius to Cartesian coordinates
        x = math.trunc((r * 30.0) * math.cos(angle + (-90.0 * 3.1416 / 180.0)))
        y = math.trunc((r * 30.0) * math.sin(angle + (-90.0 * 3.1416 / 180.0)))
        # set the borders (all values outside the defined area should be 0)
        if y > 0 or y < -35 or x < -40 or x > 40:
            x = 0
            y = 0
        cv2.line(frame, (250, 250), (x + 250, y + 250), (255, 0, 0), 2)
        angle = angle + data.angle_increment
        cv2.circle(frame, (250, 250), 2, (255, 255, 0))
        cv2.imshow('frame', frame)
        cv2.waitKey(1)

def laser_listener():
    rospy.init_node('laser_listener', anonymous=True)
    rospy.Subscriber("/scan", LaserScan, callback)
    rospy.spin()

if __name__ == '__main__':
    laser_listener()
[EDIT_1]:
When I add print(data.ranges[405]) to the callback function, I get the output below. It changes slightly, but it's wrong: I covered the whole sensor in the middle of the measurement, yet the values still only fit the time when the program was started.
1.47800004482
1.48000001907
1.48000001907
1.48000001907
1.48300004005
1.47899997234
1.48000001907
1.48099994659
1.47800004482
1.47899997234
1.48300004005
1.47800004482
1.48500001431
1.47599995136
1.47800004482
1.47800004482
1.47399997711
1.48199999332
1.48099994659
1.48000001907
1.48099994659
The same as above, but the other way around: I started with a covered sensor and lifted the cover during the measurement.
0.0649999976158
0.0509999990463
0.0529999993742
0.0540000014007
0.0560000017285
0.0579999983311
0.0540000014007
0.0579999983311
0.0560000017285
0.0560000017285
0.0560000017285
0.0570000000298
[EDIT_2]:
Oh... if I comment out the whole cv2 part, I get the real-time print output! So cv2 slows things down so much that the 15 Hz measurement is shown at a much slower rate.
So my question is now: Is there an alternative to cv2 that is capable of refreshing at a higher rate?
You can use librviz, but that's in C++ and I haven't seen a Python version of it.
You can use OpenGL (PyOpenGL), but I don't recommend it because it makes what you intend to do really complex, even though it's fast.
Why not use Rviz for visualization only and do the other things elsewhere?
I've even seen a whole framework placed in Rviz (check the MoveIt framework). Rviz is completely customizable: you can write your own plugins for it, and it will handle outputting whatever topic you want.
Just move cv2.circle, cv2.imshow, and cv2.waitKey out of the for loop, and the problem will be solved.
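A sketch of the corrected callback from the question (imports and constants as in the question's code); the drawing calls now run once per scan instead of once per range reading:
def callback(data):
    frame = np.zeros((500, 500, 3), np.uint8)
    angle = data.angle_min
    for r in data.ranges:
        if math.isinf(r):
            r = 0
        x = math.trunc((r * 30.0) * math.cos(angle + (-90.0 * 3.1416 / 180.0)))
        y = math.trunc((r * 30.0) * math.sin(angle + (-90.0 * 3.1416 / 180.0)))
        if y > 0 or y < -35 or x < -40 or x > 40:
            x = 0
            y = 0
        cv2.line(frame, (250, 250), (x + 250, y + 250), (255, 0, 0), 2)
        angle = angle + data.angle_increment
    # Moved out of the loop: draw the origin marker and show the frame once per scan.
    cv2.circle(frame, (250, 250), 2, (255, 255, 0))
    cv2.imshow('frame', frame)
    cv2.waitKey(1)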
I am new to Python and have been working with the turtle module as a way of learning the language.
Thanks to Stack Overflow, I researched and learned how to copy the image into an encapsulated PostScript file, and it works great. There is one problem, however. The turtle module allows a background color, which shows on the screen but does not show up in the .eps file. All other colors, i.e. pen color and turtle color, make it through, but not the background color.
As a matter of interest, I do not believe the import of Tkinter is necessary, since I do not believe I am using any of the Tkinter module here. I included it as part of trying to diagnose the problem. I had also tried bgcolor=Orange rather than s.bgcolor("orange").
No Joy.
I am including a simple code example:
# Python 2.7.3 on a Mac
import turtle
from Tkinter import *
s=turtle.Screen()
s.bgcolor("orange")
bob = turtle.Turtle()
bob.circle(250)
ts=bob.getscreen()
ts.getcanvas().postscript(file = "turtle.eps")
I tried to post images of the screen and the .eps file, but Stack Overflow will not allow me to do so as a new user (some sort of spam prevention). It is simple enough to visualize, though: the screen has an orange background and the .eps file is white.
I would appreciate any ideas.
PostScript was designed for making marks on some medium like paper or film, not for raster graphics. As such, it doesn't have a background color per se that can be set to a given color, because that would normally be the color of the paper or unexposed film being used.
In order to simulate this, you need to draw a rectangle the size of the canvas and fill it with the color you want as the background. I didn't see anything in the turtle module to query the dimensions of the canvas object returned by getcanvas(), and the only alternatives I can think of are to read the turtle.cfg file, if there is one, or to just hardcode the default 300x400 size. You might be able to look at the source and figure out where the dimensions of the current canvas are stored and access them directly.
Update:
I was just playing around in the Python console with the turtle module and discovered that what getcanvas() returns has a private attribute called _canvas, which is a <Tkinter.Canvas instance>. This object has winfo_width() and winfo_height() methods which seem to contain the dimensions of the current turtle graphics window. So I would try drawing a filled rectangle of that size and see if that gives you what you want.
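For example, in an interactive session (the reported values depend on your window size, so the numbers below are illustrative only):
>>> import turtle
>>> ts = turtle.Screen()
>>> ts.getcanvas()._canvas.winfo_width()
460
>>> ts.getcanvas()._canvas.winfo_height()
460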
Update 2:
Here's code showing how to do what I suggested. Note: the background must be drawn before any other graphics are, because otherwise the solid filled background rectangle will cover up everything else on the screen.
Also, the added draw_background() function makes an effort to save and later restore the graphics state to what it was. This may not be necessary depending on your exact usage case.
import turtle

def draw_background(a_turtle):
    """ Draw a background rectangle. """
    ts = a_turtle.getscreen()
    canvas = ts.getcanvas()
    height = ts.getcanvas()._canvas.winfo_height()
    width = ts.getcanvas()._canvas.winfo_width()

    turtleheading = a_turtle.heading()
    turtlespeed = a_turtle.speed()
    penposn = a_turtle.position()
    penstate = a_turtle.pen()

    a_turtle.penup()
    a_turtle.speed(0)  # fastest
    a_turtle.goto(-width/2-2, -height/2+3)
    a_turtle.fillcolor(turtle.Screen().bgcolor())
    a_turtle.begin_fill()
    a_turtle.setheading(0)
    a_turtle.forward(width)
    a_turtle.setheading(90)
    a_turtle.forward(height)
    a_turtle.setheading(180)
    a_turtle.forward(width)
    a_turtle.setheading(270)
    a_turtle.forward(height)
    a_turtle.end_fill()

    a_turtle.penup()
    a_turtle.setposition(*penposn)
    a_turtle.pen(penstate)
    a_turtle.setheading(turtleheading)
    a_turtle.speed(turtlespeed)

s = turtle.Screen()
s.bgcolor("orange")
bob = turtle.Turtle()
draw_background(bob)

ts = bob.getscreen()
canvas = ts.getcanvas()
bob.circle(250)
canvas.postscript(file="turtle.eps")

s.exitonclick()  # optional
And here's the actual output produced (rendered onscreen via Photoshop):
I haven't found a way to get the canvas background colour into the generated (Encapsulated) PostScript file (I suspect it isn't possible). You can, however, fill your circle with a colour and then use Canvas.postscript(colormode='color'), as suggested by @mgilson:
import turtle

bob = turtle.Turtle()
bob.fillcolor('orange')
bob.begin_fill()
bob.circle(250)
bob.end_fill()

ts = bob.getscreen()
ts.getcanvas().postscript(file='turtle.eps', colormode='color')
Improving #martineau's code after a decade
import turtle as t

Screen = t.Screen()
Canvas = Screen.getcanvas()
Width, Height = Canvas.winfo_width(), Canvas.winfo_height()
HalfWidth, HalfHeight = Width//2, Height//2

Background = t.Turtle()
Background.ht()
Background.speed(0)

def BackgroundColour(Colour: str = "white"):
    Background.clear()  # Prevents accumulation of layers
    Background.penup()
    Background.goto(-HalfWidth, -HalfHeight)
    Background.color(Colour)
    Background.begin_fill()
    Background.goto(HalfWidth, -HalfHeight)
    Background.goto(HalfWidth, HalfHeight)
    Background.goto(-HalfWidth, HalfHeight)
    Background.goto(-HalfWidth, -HalfHeight)
    Background.end_fill()
    Background.penup()
    Background.home()

BackgroundColour("orange")
Bob = t.Turtle()
Bob.circle(250)
Canvas.postscript(file="turtle.eps")
This depends on what a person is trying to accomplish, but generally, having the option to select which turtle draws your background seems unnecessary to me and can overcomplicate things, so instead one can have one specific turtle (which I named Background) that just updates the background when desired.
Plus, rather than directing the turtle by magnitude and direction with setheading() and forward(), it's cleaner (and maybe faster) to simply give the coordinates of where the turtle should go.
Also for any newcomers: Keeping all of the constants like Canvas, Width, and Height outside the BackgroundColour() function speeds up your code since your computer doesn't have to recalculate or refetch any values every time the function is called.
I'm trying to learn Pygame, and the tutorial that I am following has a section explaining how to animate sprites. It gives me a sprite sheet that has 8 images measuring 128x128 each, while the entire sheet measures 1024x128.
Then it presents the following code:
#!/usr/bin/env python
import pygame, sys
from pygame.locals import *

pygame.init()
ZONE = pygame.display.set_mode((400, 300))
pygame.display.set_caption("Game Zone")
RED = (255, 0, 0)
clock = pygame.time.Clock()
counter = 0
sprites = []

sheet = pygame.image.load("spritesheet.gif").convert_alpha()
width = sheet.get_width()
for i in range(int(width / 128)):
    sprites.append(sheet.subsurface(i * 128, 0, 128, 128))

while True:
    pygame.display.update()
    ZONE.fill(RED)
    ZONE.blit(sprites[counter], (10, 10))
    counter = (counter + 1) % 8
    clock.tick(16)
    for event in pygame.event.get():
        if event.type == QUIT:
            pygame.quit()
            sys.exit()
The tutorial is very vague about what those lines do, so I wonder:
What does sheet.subsurface() do? And what do those four parameters stand for? (I believe that the third and fourth are referring to the individual images' width and height.)
What does .convert_alpha() do? The tutorial says it "preserves transparency," but I found it strange since I already used images with transparent backgrounds before and none of those needed such conversion.
What does % do? I already know that / stands for division, but the tutorial never explained %.
subsurface gets you a surface that represents a rectangular section of a larger surface. In this case you have one big surface with lots of sprites on it, and subsurface is used to extract the pieces from that surface. You could also create new surfaces and use blit to copy the pixels, but it's a bit easier to use subsurface and it doesn't need to copy the pixel data.
https://www.pygame.org/docs/ref/surface.html#pygame.Surface.subsurface
Suggested search: pygame subsurface
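To answer the question about the four parameters directly: they describe the rectangle to extract, as (left, top, width, height) in pixels relative to the parent surface. So in the tutorial's loop:
# Extract the i-th 128x128 frame: x-offset i*128, y-offset 0, 128 wide, 128 tall.
sprites.append(sheet.subsurface(i * 128, 0, 128, 128))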
convert and convert_alpha are both used to convert surfaces to the same pixel format as used by the screen. This ensures that you won't lose performance because of conversions when you're blitting them to the screen. convert throws away any alpha channel, whereas convert_alpha keeps it. The comment that you see refers to the choice to use convert_alpha instead of convert, rather than a choice to use convert_alpha instead of nothing.
https://www.pygame.org/docs/ref/surface.html#pygame.Surface.convert
Suggested search: pygame convert_alpha
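As a quick illustration (hypothetical file names; assumes pygame.display.set_mode() has already been called, as in the tutorial code):
background = pygame.image.load("background.jpg").convert()  # any alpha channel is discarded
player = pygame.image.load("player.png").convert_alpha()    # the alpha channel is preserved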
The '%' operator isn't a Pygame feature, it's just Python's "modulo/remainder" operator. In this case it's used to make the counter variable loop repeatedly through the values 0 through 7 and back to 0 again.
https://docs.python.org/2/reference/expressions.html#binary-arithmetic-operations
Suggested search: python percent sign
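For instance, the wrap-around behaviour of the counter update can be seen in isolation:
counter = 0
for _ in range(10):
    counter = (counter + 1) % 8
    # counter takes the values 1, 2, 3, 4, 5, 6, 7, 0, 1, 2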
Let's talk about subsurface(). Assume you have 1,600 images you want to load into the program. There are two ways to do that. (Well, more than two, but I'm making a point here.) First, you could create 1,600 files, load each one into a surface in turn, and start the program. Alternately, you could place them in one file, load that one file into a single surface, and use subsurface(). In this case, spritesheet.gif is 128 pixels high, and contains a new image every 128 pixels.
The two ways basically do the same thing, but one may be more convenient than the other. In particular, opening and reading a file has a small performance cost, and if you need to do this 1,600 times in a row, that cost could be significant.
My understanding of a child surface is that it's basically a Pygame Surface, but defined in terms of a parent surface; if you changed the parent Surface, any child surfaces would be changed in the same way. However, in all other ways, it can be treated as a regular surface.
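A small sketch demonstrating that shared-pixel behaviour, using a dummy surface in place of a real sprite sheet:
import pygame

pygame.init()
sheet = pygame.Surface((256, 128))
child = sheet.subsurface((0, 0, 128, 128))

sheet.fill((255, 0, 0))      # modify the parent surface...
print(child.get_at((0, 0)))  # ...and the child sees it: (255, 0, 0, 255)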
I've been using pyglet for a while now and I really like it. There's one thing I'd like to do, however, that I've been unable to accomplish so far.
I'm working on a 2D roleplaying game, and I'd like the characters to be able to look different - that is to say, I wouldn't like to use completely prebuilt sprites; instead, I'd like there to be a range of, say, hairstyles and equipment visible on characters in the game.
So to get this working, I thought the most sensible approach would be to create a texture with pyglet.image.Texture.create() and blit the correct sprite source images onto that texture using Texture.blit_into. For example, I could blit a naked human image onto the texture, then blit a hair texture on that, etc.
human_base = pyglet.image.load('x/human_base.png').get_image_data()
hair_style = pyglet.image.load('x/human_hair1.png').get_image_data()
texture = pyglet.image.Texture.create(width=human_base.width, height=human_base.height)
texture.blit_into(human_base, x=0, y=0, z=0)
texture.blit_into(hair_style, x=0, y=0, z=1)
sprite = pyglet.sprite.Sprite(img=texture, x=0, y=0, batch=my_sprite_batch)
The problem is that blitting the second image into the texture "overwrites" what was already blitted in. Even though both of the images have an alpha channel, the image below (human_base) is not visible after hair_style is blitted on top of it.
Anyone reading this may wonder why I do it this way instead of, say, creating two different pyglet.sprite.Sprite objects, one for human_base and one for hair_style, and just moving them together. One reason is draw ordering: the game is tile-based and isometric, so sorting a visible object consisting of multiple sprites with differing layers (or ordered groups, as pyglet calls them) would be a major pain.
So my question is: is there a way to retain alpha when using blit_into with pyglet? If there is no way to do it, any suggestions for alternative approaches would be very much appreciated!
Setting the blend function correctly should fix this:
pyglet.gl.glBlendFunc(pyglet.gl.GL_SRC_ALPHA,pyglet.gl.GL_ONE_MINUS_SRC_ALPHA)
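A sketch of how this might fit into the question's code; the exact placement (once, before the blit_into calls) is an assumption:
import pyglet
from pyglet import gl

# Enable blending and set the blend function before layering the images.
gl.glEnable(gl.GL_BLEND)
gl.glBlendFunc(gl.GL_SRC_ALPHA, gl.GL_ONE_MINUS_SRC_ALPHA)

texture.blit_into(human_base, x=0, y=0, z=0)  # texture, human_base and hair_style
texture.blit_into(hair_style, x=0, y=0, z=1)  # as defined in the question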
I ran into the very same problem and couldn't find a proper solution. Apparently, blitting two overlapping RGBA images/textures together removes the image beneath. Another approach I came up with was using every 'clothing image' on every character as an independent sprite attached to batches and groups, but that was far from optimal and reduced the FPS dramatically.
I got my own solution using PIL:
import pyglet
from PIL import Image

class main(pyglet.window.Window):
    def __init__(self):
        TILESIZE = 32
        super(main, self).__init__(800, 600, fullscreen=False)
        img1 = Image.open('under.png')
        img2 = Image.open('over.png')
        # Paste img2 over img1, using img2's own alpha channel as the mask.
        img1.paste(img2, (0, 0), img2.convert('RGBA'))
        img = img1.transpose(Image.FLIP_TOP_BOTTOM)
        raw_image = img.tostring()  # note: tostring() was renamed tobytes() in newer Pillow versions
        self.image = pyglet.image.ImageData(TILESIZE, TILESIZE, 'RGBA', raw_image)

    def run(self):
        while not self.has_exit:
            self.dispatch_events()
            self.clear()
            self.image.blit(0, 0)
            self.flip()

x = main()
x.run()
This may well not be the optimal solution, but if you do it during scene loading, then it won't matter, and with the result you can do almost anything you want (as long as you don't blit it onto another texture, heh). If you want to get just one tile (or a column, a row, or a rectangular box) out of a tileset with PIL, you can use the crop function, as sketched below.
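For example, a sketch of pulling one tile out of a tileset with crop (the file name and tile position are hypothetical):
from PIL import Image

TILESIZE = 32
tileset = Image.open('tileset.png')  # hypothetical tileset file
# crop() takes a (left, upper, right, lower) box; this grabs the tile
# at column 2, row 1 of the sheet.
tile = tileset.crop((2 * TILESIZE, 1 * TILESIZE, 3 * TILESIZE, 2 * TILESIZE))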