Live 2D laser scanner data in rospy - python

I just got a Sick Tim 571 laser scanner. Since I'm new to ROS I wanted to test some stuff in an easy rospy implementation.
I thought the code below would show me a live output of the laser measurements, like Rviz does (Rviz works fine for me), but in Python and with the possibility to use the measurements in my own code. Unfortunately, the output frame shows only one static measurement (from the time when the Python code was first started) over and over again.
If I run Rviz parallel to this Python code, I get a dynamically updated representation of the measured area.
I thought the callback(data) function would be called with a new set of laser scanner data each time. But it seems it does not work as I imagined, so the error is possibly located in laser_listener(), where the callback function is registered.
TL;DR
How can I use dynamically updated (live) laser scanner measurements in rospy?
import rospy
import cv2
import numpy as np
import math
from sensor_msgs.msg import LaserScan
def callback(data):
    frame = np.zeros((500, 500, 3), np.uint8)
    angle = data.angle_min
    for r in data.ranges:
        # change infinite values to 0
        if math.isinf(r):
            r = 0
        # convert angle and radius to cartesian coordinates
        x = math.trunc((r * 30.0) * math.cos(angle + (-90.0 * 3.1416 / 180.0)))
        y = math.trunc((r * 30.0) * math.sin(angle + (-90.0 * 3.1416 / 180.0)))
        # set the borders (all values outside the defined area should be 0)
        if y > 0 or y < -35 or x < -40 or x > 40:
            x = 0
            y = 0
        cv2.line(frame, (250, 250), (x + 250, y + 250), (255, 0, 0), 2)
        angle = angle + data.angle_increment
        cv2.circle(frame, (250, 250), 2, (255, 255, 0))
        cv2.imshow('frame', frame)
        cv2.waitKey(1)

def laser_listener():
    rospy.init_node('laser_listener', anonymous=True)
    rospy.Subscriber("/scan", LaserScan, callback)
    rospy.spin()

if __name__ == '__main__':
    laser_listener()
[EDIT_1]:
When I add print(data.ranges[405]) to the callback function, I get the output below. It changes slightly, but it's wrong: I covered the whole sensor in the middle of the measurement, yet the values still only fit the time when the program was started.
1.47800004482
1.48000001907
1.48000001907
1.48000001907
1.48300004005
1.47899997234
1.48000001907
1.48099994659
1.47800004482
1.47899997234
1.48300004005
1.47800004482
1.48500001431
1.47599995136
1.47800004482
1.47800004482
1.47399997711
1.48199999332
1.48099994659
1.48000001907
1.48099994659
The same as above but the other way around. I started with a covered sensor and lifted the cover during the measurement.
0.0649999976158
0.0509999990463
0.0529999993742
0.0540000014007
0.0560000017285
0.0579999983311
0.0540000014007
0.0579999983311
0.0560000017285
0.0560000017285
0.0560000017285
0.0570000000298
[EDIT_2]:
Oh... if I comment out the whole cv2 part, I get the real-time print output! So cv2 slows everything down so much that the 15 Hz measurement is shown at a much slower rate.
So my question is now: is there an alternative to cv2 that is capable of refreshing at a higher rate?

You can use librviz, but that's in C++ and I haven't seen a Python version of it.
You can use OpenGL (PyOpenGL), but I don't recommend it because it makes what you intend to do really complex, even though it's fast.
Why not use Rviz for visualization only and do other things elsewhere?
I've even seen a whole framework placed in Rviz (check the MoveIt framework). Rviz is completely customizable: you can write your own plugins for it, and it will handle displaying whatever topic you want.

Just move cv2.circle, cv2.imshow, and cv2.waitKey out of the for loop, and the problem will be solved.
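A minimal sketch of that fix applied to the callback from the question (only the placement of the three cv2 calls changes, so the window is refreshed once per scan instead of once per range value):

import math
import numpy as np
import cv2

def callback(data):
    frame = np.zeros((500, 500, 3), np.uint8)
    angle = data.angle_min
    for r in data.ranges:
        if math.isinf(r):
            r = 0
        x = math.trunc((r * 30.0) * math.cos(angle + (-90.0 * 3.1416 / 180.0)))
        y = math.trunc((r * 30.0) * math.sin(angle + (-90.0 * 3.1416 / 180.0)))
        if y > 0 or y < -35 or x < -40 or x > 40:
            x = 0
            y = 0
        cv2.line(frame, (250, 250), (x + 250, y + 250), (255, 0, 0), 2)
        angle = angle + data.angle_increment
    # drawing the marker and refreshing the window once per scan keeps up with the 15 Hz scan rate
    cv2.circle(frame, (250, 250), 2, (255, 255, 0))
    cv2.imshow('frame', frame)
    cv2.waitKey(1)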

Related

Why doesn't OpenCV cv2.moveWindow move always to the same XY-position?

The question Why is opencv's moveWindow command inconsistent? was asked 3 years, 1 month ago and was related to OpenCV (version 4.1.0) in Python (version 3.7.3). The question still remains unanswered. The statement
OpenCV's GUI functionality is not great, and is mostly available for debugging purposes. I would not rely on it. [...] – alkasm
provided as a comment there doesn't answer the question about a possible reason for such behavior. But knowing the reason is a necessary step toward a fix or toward debugging the OpenCV code.
Today I am using OpenCV 4.5.5 in Python 3.9 and experience the same problem positioning three windows: one displaying a test image and two others demonstrating contour finding with OpenCV. The following code makes it possible for you to reproduce the issue:
import time
import cv2 as cv
cv.namedWindow("Tetris Blocks") # flags=cv.CV_WINDOW_AUTOSIZE # to image size
time.sleep(2.0) # required to place the window correctly
cv.moveWindow(winname="Tetris Blocks", x= 1150, y= 10)
cv.namedWindow("Tetris Blocks Gray") # flags=cv.CV_WINDOW_AUTOSIZE # to image size
time.sleep(2.0) # required to place the window correctly
cv.moveWindow(winname="Tetris Blocks Gray", x= 1150, y= 380)
cv.namedWindow("Tetris Blocks Contours") # flags=cv.CV_WINDOW_AUTOSIZE # to image size
time.sleep(2.0) # required to place the window correctly
cv.moveWindow(winname="Tetris Blocks Contours", x= 1150, y= 730)
cv_img = cv.imread("tetris_blocks.png")
cv.imshow("Tetris Blocks", cv_img)
# cv.waitKey(0)
cv_img_gray = cv.cvtColor(cv_img, cv.COLOR_BGR2GRAY)
cv.imshow("Tetris Blocks Gray", cv_img_gray)
# cv.waitKey(0)
thresh = cv.threshold(cv_img_gray, 225, 255, cv.THRESH_BINARY_INV)[1]
(cnts, _) = cv.findContours(thresh.copy(), cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
cv.drawContours(cv_img, cnts, -1, (0, 0, 0), 10)
cv.imshow("Tetris Blocks Contours", cv_img)
cv.waitKey(0)
placing the windows in one column and not in one row as in the mentioned old unanswered question.
Running the above code in the same way multiple times gives different results: often the x-position is the same while the y-position is not, but the displacement also occurs in both directions.
Looking at the provided code, you can see an attempt to fix the problem, as I have experienced much larger differences in the positions without sleeping between positioning the windows. Inserting time.sleep() reduces the problem, but does not reliably solve it.
I suppose there is an easy-to-find-and-fix bug in the OpenCV code behind this behavior, and I wonder how the problem has persisted on a timescale of years.
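Not an explanation, but a possible alternative to the time.sleep() workaround, assuming the displacement comes from moving a window before HighGUI's event loop has actually realized it. The helper below is hypothetical; it pumps events with cv.waitKey(1) between creating and moving each window:

import cv2 as cv

def place_window(name, x, y, pump_iterations=5):
    # create the window, let HighGUI process pending events, then move it
    cv.namedWindow(name)
    for _ in range(pump_iterations):
        cv.waitKey(1)  # each call runs one short iteration of the GUI event loop
    cv.moveWindow(name, x, y)

place_window("Tetris Blocks", 1150, 10)
place_window("Tetris Blocks Gray", 1150, 380)
place_window("Tetris Blocks Contours", 1150, 730)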

Fast screenshot of a small part of the screen in Python

I am currently working on a project where I need to take a 30x40 pixels screenshot from a specific area of my screen. This is not very hard to do as there are plenty of methods that do that.
The issue I have is that I need to take about 10 to 15 screenshots/second of the size I mentioned. When I looked at some of the methods that capture the screen, I saw that when you give them parameters for a smaller selection, there's cropping involved. So a full screenshot is taken, and then the method crops it to the given size. That seems like a waste of resources if I'm only going to use a 30x40 image, especially considering I will take thousands of screenshots.
So my question is: is there a method that ONLY captures a part of the screen, without capturing the whole screen and cutting the desired section out of the big screenshot? I'm currently using this command:
im = pyautogui.screenshot(region=(0, 0, 30, 40))
The Python mss module (https://github.com/BoboTiG/python-mss, https://python-mss.readthedocs.io/examples.html), an ultra-fast, cross-platform, multiple-screenshots module in pure Python using ctypes (MSS stands for Multiple Screen Shots), is what you are looking for. The screenshots are fast enough to capture frames from a video, and the smaller the part of the screen to grab, the faster the capture (so there is apparently no cropping involved). Check it out. mss.mss().grab() outperforms PIL.ImageGrab.grab() by far. Below is a code example showing how to get the data of the screenshot pixels (this allows you to detect changes):
import mss
from time import perf_counter as T
left = 0
right = 2
top = 0
btm = 2
with mss.mss() as sct:
    # parameter for sct.grab() can be:
    monitor = sct.monitors[1]  # entire screen
    bbox = (left, top, right, btm)  # screen part to capture
    sT = T()
    sct_im = sct.grab(bbox)  # type: <class 'mss.screenshot.ScreenShot'>
    eT = T(); print(" >", eT - sT)  # > 0.0003100260073551908
    print(len(sct_im.raw), sct_im.raw)
    # 16 bytearray(b'-12\xff\x02DU\xff-12\xff"S_\xff')
    print(len(sct_im.rgb), sct_im.rgb)
    # 12 b'21-UD\x0221-_S"'
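Building on that, a minimal sketch of the repeated capture described in the question (the 30x40 region and the 10 to 15 captures/second come from the question; the loop itself and the fixed iteration count are assumptions):

import time
import mss

with mss.mss() as sct:
    bbox = (0, 0, 30, 40)  # the 30x40 region from the question
    for _ in range(100):
        im = sct.grab(bbox)  # mss.screenshot.ScreenShot instance, no full-screen grab
        pixels = im.rgb      # raw RGB bytes, usable e.g. for change detection
        time.sleep(1 / 15)   # throttle to roughly 15 captures per second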

Rendering in Blender doesn't take rotation updates of camera and light directions

I'm trying to render a cube (default Blender scene) with a camera facing it. I have added a spotlight at the same location as the camera. The spotlight direction also faces towards the cube.
When I render, location changes take effect for both the camera and the spotlight, but rotations don't. The scene context update is deprecated now. I have seen other update answers, but they don't seem to help.
I have found some workarounds and they seem to work, but this is not the correct way:
If I render the same set of commands twice (in a loop), I get the correct render.
If I run the script from Blender's Python console (only once), I get the correct render. But if the same code is run as a script inside Blender, the render is again wrong.
import pdb
import numpy as np
import bpy

def look_at(obj_camera, point):
    loc_camera = obj_camera.matrix_world.to_translation()
    direction = point - loc_camera
    rot_quat = direction.to_track_quat('-Z', 'Y')
    obj_camera.rotation_euler = rot_quat.to_euler()

data_path = 'some folder'
locs = np.array([0.00000000e+00, -1.00000000e+01, 3.00000011e-06])  # Assume this; I have a big array of locations where the camera and spotlight need to be placed and then made to look towards the cube
obj_camera = bpy.data.objects["Camera"]
obj_other = bpy.data.objects["Cube"]
bpy.data.lights['Light'].type = 'SPOT'
obj_light = bpy.data.objects['Light']
loc = locs
i = 0

##### if I run the following lines two times, a correct render is obtained.
obj_camera.location = loc
obj_light.location = obj_camera.location
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())
bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)
You might need to call bpy.context.view_layer.update() (bpy.context.scene.update() with versions older than Blender 2.8) after changing the camera orientation with obj_camera.rotation_euler = rot_quat.to_euler(), and make sure that the layers that are going to be rendered are active when calling update() (see https://blender.stackexchange.com/questions/104958/object-locations-not-updating-before-render-python).
(A bit late ;-) but this was one of the rare questions I found for a related issue.)
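A minimal sketch of where that call would go in the question's script (Blender 2.8+ API; the variables are the ones defined in the question's code):

obj_camera.location = loc
obj_light.location = obj_camera.location
look_at(obj_light, obj_other.matrix_world.to_translation())
look_at(obj_camera, obj_other.matrix_world.to_translation())
bpy.context.view_layer.update()  # force the dependency graph update before rendering
bpy.context.scene.render.filepath = data_path + 'image_{}.png'.format(i)
bpy.ops.render.render(write_still=True)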

plotting / wxPython - Why is this call to MemoryDC.SelectObject() slow?

I'm attempting to do real-time plotting of data in matplotlib, in a "production" application that uses wxPython. I had been using Chaco for this purpose, but I'm trying to avoid Chaco in the future for many reasons, one of which is that since it's not well-documented I often must spend a long time reading the Chaco source code when I want to add even the smallest feature to one of my plots. One aspect where Chaco wins out over matplotlib is in speed, so I'm exploring ways to get acceptable performance from matplotlib.
One technique I've seen widely used for fast plots in matplotlib is to set animated to True for elements of the plot which you wish to update often, then draw the background (axes, tick marks, etc.) only once, and use the canvas.copy_from_bbox() method to save the background. Then, when drawing a new foreground (the plot trace, etc.), you use canvas.restore_region() to simply copy the pre-rendered background to the screen, then draw the new foreground with axis.draw_artist() and canvas.blit() it to the screen.
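For reference, a minimal, self-contained sketch of that technique with a plain Agg-backed figure (the sine-wave data, frame count, and names are placeholders, not the code from basic_fastplot.py):

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 200)
(line,) = ax.plot(x, np.sin(x), animated=True)  # animated artists are skipped by normal draws
plt.show(block=False)
plt.pause(0.1)  # force an initial full draw so a background exists

background = fig.canvas.copy_from_bbox(ax.bbox)  # save the static background once

for i in range(200):
    line.set_ydata(np.sin(x + i / 10.0))
    fig.canvas.restore_region(background)  # copy the pre-rendered background back
    ax.draw_artist(line)                   # redraw only the animated artist
    fig.canvas.blit(ax.bbox)               # push the updated region to the screen
    fig.canvas.flush_events()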
I wrote up a fairly simple example that embeds a FigureCanvasWxAgg in a wxPython Frame and tries to display a single trace of random data at 45 FPS. When the program is running with the Frame at the default size (hard-coded in my source), it achieves ~13-14 frames per second on my machine. When I maximize the window, the refresh drops to around 5.5 FPS. I don't think this will be fast enough for my application, especially once I start adding additional elements to be rendered in real-time.
My code is posted here: basic_fastplot.py
I wondered if this could be made faster, so I profiled the code and found that by far the largest consumers of processing time are the calls to canvas.blit() at lines 99 and 109. I dug a little further, instrumenting the matplotlib code itself to find that most of this time is spent in a specific call to MemoryDC.SelectObject(). There are several calls to SelectObject in surrounding code, but only the one marked below takes any appreciable amount of time.
From the matplotlib source, backend_wxagg.py:
class FigureCanvasWxAgg(FigureCanvasAgg, FigureCanvasWx):
    # ...
    def blit(self, bbox=None):
        """
        Transfer the region of the agg buffer defined by bbox to the display.
        If bbox is None, the entire buffer is transferred.
        """
        if bbox is None:
            self.bitmap = _convert_agg_to_wx_bitmap(self.get_renderer(), None)
            self.gui_repaint()
            return
        l, b, w, h = bbox.bounds
        r = l + w
        t = b + h
        x = int(l)
        y = int(self.bitmap.GetHeight() - t)
        srcBmp = _convert_agg_to_wx_bitmap(self.get_renderer(), None)
        srcDC = wx.MemoryDC()
        srcDC.SelectObject(srcBmp)  # <<<< Most time is spent here, 30 milliseconds or more!
        destDC = wx.MemoryDC()
        destDC.SelectObject(self.bitmap)
        destDC.BeginDrawing()
        destDC.Blit(x, y, int(w), int(h), srcDC, x, y)
        destDC.EndDrawing()
        destDC.SelectObject(wx.NullBitmap)
        srcDC.SelectObject(wx.NullBitmap)
        self.gui_repaint()
My questions:
What could SelectObject() be doing that is taking so long? I had sort of assumed it would just be setting up pointers, etc., not doing much copying or computation.
Is there any way I might be able to speed this up (to get maybe 10 FPS at full-screen)?

Digital Image cropping in Python

Got this question from a professor, a physicist.
I am a beginner in Python programming. I am not a computer professional I am a physicist. I was trying to write a code in python for my own research which involves a little image processing.
All I need to do is to display an image and then select a region of interest using my mouse and finally crop out the selected region. I can do this in Matlab using the ginput() function.
I tried using PIL, but I find that after I issue the command Image.show(), the image is displayed but the program halts there unless I exit from the image window. Is there any way to implement what I was planning? Do I need to download any other module? Please advise.
While I agree with David that you should probably just use GIMP or some other image manipulation program, here is a script using pygame that does what you want (I took it as an exercise for the reader). You will need to install pygame as well as PIL; usage would be:
scriptname.py <input_path> <output_path>
Actual script:
import pygame, sys
from PIL import Image
pygame.init()

def displayImage(screen, px, topleft):
    screen.blit(px, px.get_rect())
    if topleft:
        pygame.draw.rect(screen, (128, 128, 128),
                         pygame.Rect(topleft[0], topleft[1],
                                     pygame.mouse.get_pos()[0] - topleft[0],
                                     pygame.mouse.get_pos()[1] - topleft[1]))
    pygame.display.flip()

def setup(path):
    px = pygame.image.load(path)
    screen = pygame.display.set_mode(px.get_rect()[2:])
    screen.blit(px, px.get_rect())
    pygame.display.flip()
    return screen, px

def mainLoop(screen, px):
    topleft = None
    bottomright = None
    runProgram = True
    while runProgram:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                runProgram = False
            elif event.type == pygame.MOUSEBUTTONUP:
                if not topleft:
                    topleft = event.pos
                else:
                    bottomright = event.pos
                    runProgram = False
        displayImage(screen, px, topleft)
    return (topleft + bottomright)

if __name__ == "__main__":
    screen, px = setup(sys.argv[1])
    left, upper, right, lower = mainLoop(screen, px)
    im = Image.open(sys.argv[1])
    im = im.crop((left, upper, right, lower))
    im.save(sys.argv[2])
Hope this helps :)
For what it's worth (coming from another physicist), I would just do this in an image processing program like the GIMP. The main benefit of doing this task in Python (or any language) would be to save time by automating the process, but unless you - well, the professor - can somehow develop an algorithm to automatically figure out what part of the image to crop, there doesn't seem to be much time to be saved by automation.
If I remember correctly, GIMP is actually scriptable, possibly with Python, so it might be possible to write a time-saving GIMP script to do what your professor friend wants.
Image.show() just calls whatever simple picture viewer it can find on the current platform, one that may or may not have a crop-and-save facility.
If you are on a Windows box and you just need to make it work on your machine, set the ‘Open with...’ association to make it so running an image loads it into an editor of your choice. On OS X and *nix you'd want to hack the _showxv() method at the bottom of Image.py to change the command used to open the image.
If you do actually need to provide a portable solution, you'll need to use a UI framework to power your cropping application. The choices boil down to Tkinter (ImageTk.py gives you a wrapper for displaying PIL images in Tk), PyQT4 (ImageQt in PIL 1.1.6 gives you a wrapper for displaying images in QT4) or wxPython (a higher-level application authoring toolkit using wxWidgets). It'll be quite a bit of work to get the hang of a full UI kit, but you'll be able to completely customise how your application's interface will work.
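As an illustration of the Tkinter route, a minimal sketch that displays a PIL image via the ImageTk wrapper (the filename is a placeholder; the selection and cropping UI would still have to be built on top of this):

import tkinter as tk
from PIL import Image, ImageTk

root = tk.Tk()
im = Image.open("input.png")    # placeholder filename
photo = ImageTk.PhotoImage(im)  # wrap the PIL image for display in Tk
label = tk.Label(root, image=photo)
label.pack()
root.mainloop()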
Is there a script in Python (like a library) to auto-crop images? See this related question: Automatically crop image
What you are looking for is the matplotlib module; it emulates Matlab. See the ginput() function, which allows you to find the bounding box; then you can use crop from PIL.
http://matplotlib.sourceforge.net/api/figure_api.html
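A minimal sketch of that matplotlib + PIL approach (filenames are placeholders; ginput(2) collects two corner clicks on the displayed image):

import matplotlib.pyplot as plt
from PIL import Image

im = Image.open("input.png")
plt.imshow(im)
pts = plt.ginput(2)  # click the two opposite corners of the region of interest
plt.close()

# normalize the clicks into a (left, upper, right, lower) crop box
(x1, y1), (x2, y2) = pts
box = (int(min(x1, x2)), int(min(y1, y2)), int(max(x1, x2)), int(max(y1, y2)))
im.crop(box).save("cropped.png")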
