How to place one image on top of another, and update? - python

I don't know where to start:
from PIL import Image
im1 = Image.open('C:/background.png') # size = 1065x460px
I need to load an image that will be the "background", and on this image I will place a series of circles when a condition is met.
When another condition is met, I want to delete or update only the previous circle (the background must always remain) and place the new one.
I just edited the post:
Now I have an image that will be the background (which I will never erase),
and on the other hand I have the circles, which will be generated by graph.DrawCircle. My question: is there a way to update and delete these circles, i.e. when I place the 2nd circle, the 1st is deleted?
layout = [sg.Graph(canvas_size=(1065, 460), graph_bottom_left=(0, 0),
                   graph_top_right=(1065, 460), key="-GRAPH-")]
graph = window.Element("-GRAPH-")

def circle_position(x, y, r):
    graph.DrawCircle((x, y), r, line_color='red')
Before drawing a new circle I have to erase all existing circles (but not the background):
elif (event == "Display Error Code") or (event == 'Submit'):
    # erasing all the circles ------ before drawing the new circle
    circle_position(324, 257, 16)
    if values[0] == "1002":  # 2nd circle
        # erasing all the circles ------ before drawing the new circle
        circle_position(342, 303, 16)
Thanks again
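Not knowing the rest of the program, one possible way to get that behavior with PySimpleGUI's Graph is to keep the id returned by draw_circle and pass it to delete_figure before drawing the next circle, while the background image is drawn once and never touched. A minimal sketch (the window layout, input key and file path are placeholders, not the asker's actual code):
import PySimpleGUI as sg

layout = [[sg.Graph(canvas_size=(1065, 460), graph_bottom_left=(0, 0),
                    graph_top_right=(1065, 460), key="-GRAPH-")],
          [sg.Input(key="-CODE-"), sg.Button("Submit")]]
window = sg.Window("Circles", layout, finalize=True)
graph = window["-GRAPH-"]

graph.draw_image(filename='C:/background.png', location=(0, 460))  # background, drawn once
current_circle = None  # id of the last circle drawn

while True:
    event, values = window.read()
    if event == sg.WIN_CLOSED:
        break
    if event == "Submit":
        if current_circle is not None:
            graph.delete_figure(current_circle)      # remove only the old circle
        if values["-CODE-"] == "1002":
            current_circle = graph.draw_circle((342, 303), 16, line_color='red')
        else:
            current_circle = graph.draw_circle((324, 257), 16, line_color='red')

window.close()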

Related

Pygame splitscreen is not displaying correctly [duplicate]

I have a project where I have to create a 4-way split screen using pygame. On this screen I have to draw the same image in each section, just with a different view of the image. I just cannot figure out how to create this 4-way split screen using pygame.
I need my screen to be divided like above so I can draw my points onto each section.
I have been looking around and I cannot find anything like this, so any help would be great.
Thanks
In addition to the surface you have that gets rendered to the display, likely called something like screen, you should create another surface which all of the "action" gets drawn to. You can then use a Rect object for each quadrant of the screen which will represent the "camera" (assuming each quadrant doesn't necessarily need to show exactly the same image). When you draw back to screen, you use each camera Rect object to select a portion of the game space to draw to a specific quadrant.
# canvas will be a surface that captures the entirety of the "action"
canvas = pygame.Surface((800, 600))
# the following are your "camera" objects
# right now they are taking up discrete and even portions of the canvas,
# but the idea is that they can move and possibly cover overlapping sections
# of the canvas
p1_camera = pygame.Rect(0,0,400,300)
p2_camera = pygame.Rect(400,0,400,300)
p3_camera = pygame.Rect(0,300,400,300)
p4_camera = pygame.Rect(400,300,400,300)
On each update, you would then use these "camera" objects to blit various portions of the canvas back to the screen surface.
# draw player 1's view to the top left corner
screen.blit(canvas, (0,0), p1_camera)
# player 2's view is in the top right corner
screen.blit(canvas, (400, 0), p2_camera)
# player 3's view is in the bottom left corner
screen.blit(canvas, (0, 300), p3_camera)
# player 4's view is in the bottom right corner
screen.blit(canvas, (400, 300), p4_camera)
# then you update the display
# this can be done with either pygame.display.flip() or pygame.display.update();
# the differences between them are beyond this question
pygame.display.flip()
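For context, here is one minimal way those pieces might fit together in a running program (the window size, colors and the circle are placeholders):
import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
canvas = pygame.Surface((800, 600))

p1_camera = pygame.Rect(0, 0, 400, 300)
p2_camera = pygame.Rect(400, 0, 400, 300)
p3_camera = pygame.Rect(0, 300, 400, 300)
p4_camera = pygame.Rect(400, 300, 400, 300)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # draw the "action" once onto the shared canvas
    canvas.fill((30, 30, 30))
    pygame.draw.circle(canvas, (255, 0, 0), (400, 300), 50)

    # blit each camera's view into its quadrant of the window
    screen.blit(canvas, (0, 0), p1_camera)
    screen.blit(canvas, (400, 0), p2_camera)
    screen.blit(canvas, (0, 300), p3_camera)
    screen.blit(canvas, (400, 300), p4_camera)

    pygame.display.flip()

pygame.quit()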
There is no built-in function to split the screen. But you can draw the 4 views directly on the screen, or draw on 4 surfaces (pygame.Surface) and then blit those surfaces onto the screen.
Since you were looking for a way to split the screen into 4 sections and draw some points onto them, I'd suggest creating 4 subsurfaces of the original "canvas" image for convenience.
These surfaces would act as your per-player (split-screen) canvases, which can easily be modified.
This also enables the use of normalized coordinates for player-specific drawing.
Assuming you have a screen surface set up
# Image (Surface) which will be referenced
canvas = pygame.Surface((800, 600))
# Camera rectangles for sections of the canvas
p1_camera = pygame.Rect(0,0,400,300)
p2_camera = pygame.Rect(400,0,400,300)
p3_camera = pygame.Rect(0,300,400,300)
p4_camera = pygame.Rect(400,300,400,300)
# subsurfaces of canvas
# Note that subx needs refreshing when px_camera changes.
sub1 = canvas.subsurface(p1_camera)
sub2 = canvas.subsurface(p2_camera)
sub3 = canvas.subsurface(p3_camera)
sub4 = canvas.subsurface(p4_camera)
Now you can draw on any of the subsurfaces with these normalized coordinates:
# Drawing a line on each split "screen"
pygame.draw.line(sub2, (255,255,255), (0,0), (0,300), 10)
pygame.draw.line(sub4, (255,255,255), (0,0), (0,300), 10)
pygame.draw.line(sub3, (255,255,255), (0,0), (400,0), 10)
pygame.draw.line(sub4, (255,255,255), (0,0), (400,0), 10)
# draw player 1's view to the top left corner
screen.blit(sub1, (0,0))
# player 2's view is in the top right corner
screen.blit(sub2, (400, 0))
# player 3's view is in the bottom left corner
screen.blit(sub3, (0, 300))
# player 4's view is in the bottom right corner
screen.blit(sub4, (400, 300))
# Update the screen
pygame.display.update()
Note that modifications to the subsurface pixels will affect the canvas as well. I'd recommend reading the full documentation on subsurfaces.
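A tiny standalone check of that sharing behavior (not part of the answer's code):
import pygame

pygame.init()
canvas = pygame.Surface((800, 600))
sub = canvas.subsurface(pygame.Rect(0, 0, 400, 300))
sub.fill((255, 0, 0))              # draw on the subsurface only
print(canvas.get_at((10, 10)))     # (255, 0, 0, 255): the parent canvas changed too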

Editing image in Python via OpenCV and displaying it in PyQt5 ImageView?

I am taking a live image from a camera in Python and displaying it in an ImageView in my PyQt5 GUI, presenting it as a live feed.
It is displaying fine, but I would like to draw a red crosshair on the center of the image to help the user see where the object of focus has moved relative to the center of the frame.
I tried drawing on it using cv2.line(...), but I do not see the lines. This is strange to me because in C++, when you draw on an image, it changes that Mat for the rest of the code. How can I display this on the UI window without having to make a separate call to cv2.imshow()?
This is the signal from the worker thread that changes the image, it emits an ndarray and a bool:
def pipeline_camera_acquire(self):
    while True:
        self.mutex.lock()
        # get data and pass them from camera to img object
        self.ximeaCam.get_image(self.img)
        # get data from camera as numpy array
        data_pic = self.img.get_image_data_numpy()
        # Edits
        cv2.line(data_pic, (-10, 0), (10, 0), (0, 0, 255), 1)
        cv2.line(data_pic, (0, 10), (0, -10), (0, 0, 255), 1)
        self.editedIMG = np.rot90(data_pic, 3)
        self.mutex.unlock()
        # send signal to update GUI
        self.imageChanged.emit(self.editedIMG, False)
I don't think it isn't drawing the line; I think it is just drawing it outside the visible area. (0,0) is the upper left corner of the image, so a line from (0,10) to (0,-10) would be a thin line right at the edge of the image.
If you are trying to draw in the center then you should calculate it from the center of the numpy array.
For example:
h, w = data_pic.shape[:2]     # shape is (rows, cols) = (height, width)
x, y = w // 2, h // 2         # cv2 drawing functions take (x, y) = (column, row)
cv2.line(data_pic, (x-10, y), (x+10, y), (0, 0, 255), 1)
cv2.line(data_pic, (x, y-10), (x, y+10), (0, 0, 255), 1)
That should draw the crosshair at the center of the image.
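A self-contained version of the same idea, using a dummy array in place of the camera frame so it can be tested without the hardware (the 480x640 size is arbitrary):
import numpy as np
import cv2

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for data_pic

h, w = frame.shape[:2]            # (rows, cols) = (height, width)
cx, cy = w // 2, h // 2           # cv2 expects (x, y) = (column, row)
cv2.line(frame, (cx - 10, cy), (cx + 10, cy), (0, 0, 255), 1)   # red in BGR
cv2.line(frame, (cx, cy - 10), (cx, cy + 10), (0, 0, 255), 1)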

Windows: Draw rectangle on camera preview

Similar questions have been asked a lot for Android, but so far I haven't been able to find resources related to Windows OS. So basically, as the topic suggests, I would like to draw a rectangle on my camera preview. Some work has been done, but there's still a problem in my program. Due to some constraints, I would like to avoid using OpenCV as much as possible. Here is my approach:
Open Windows' built-in camera app
Run Python code that draws a rectangle on screen, pixel by pixel (see below)
Click on the screen with the mouse to move the rectangle by its upper-left corner
As you can see in the code, I'm not actually drawing on the camera preview but rather drawing on my screen, where the camera preview runs one layer lower.
Here's the Python code:
import win32gui, win32ui, win32api, win32con
from win32api import GetSystemMetrics

dc = win32gui.GetDC(0)
dcObj = win32ui.CreateDCFromHandle(dc)
hwnd = win32gui.WindowFromPoint((0,0))
monitor = (0, 0, GetSystemMetrics(0), GetSystemMetrics(1))
red = win32api.RGB(255, 0, 0)  # Red
past_coordinates = monitor
rec_x = 200  # width of rectangle
rec_y = 100  # height of rectangle
m = (100, 100)  # initialize start coordinate

def is_mouse_down():
    key_code = win32con.VK_LBUTTON
    state = win32api.GetAsyncKeyState(key_code)
    return state != 0

while True:
    if is_mouse_down():
        m = win32gui.GetCursorPos()
    for x in range(rec_x):
        win32gui.SetPixel(dc, m[0]+x, m[1], red)
        win32gui.SetPixel(dc, m[0]+x, m[1]+rec_y, red)
    for y in range(rec_y):
        win32gui.SetPixel(dc, m[0], m[1]+y, red)
        win32gui.SetPixel(dc, m[0]+rec_x, m[1]+y, red)
As a result, I'm able to draw a red rectangle. However, because the screen is constantly being refreshed, the two horizontal lines of my rectangle (see gif below) are shown as running dots that go from left to right. I can't find or think of a way to improve this while keeping the ability to move the rectangle around per click.
PS. Ignore the white rectangle. It's a built-in thing of the camera app that appears when you click anywhere on the preview.
Here are the references I used to get to this step:
How to draw an empty rectangle on screen with Python
https://python-forum.io/Thread-How-to-trigger-a-function-by-clicking-the-left-mouse-click
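One direction that might reduce the running-dot effect is to draw each edge with GDI line calls instead of per-pixel SetPixel, so a whole edge is issued in one call per frame. This is an untested sketch using pywin32's GDI wrappers, not a verified fix:
import win32gui, win32api, win32con

dc = win32gui.GetDC(0)
red = win32api.RGB(255, 0, 0)

# select a solid red pen into the screen DC once, outside the drawing loop
pen = win32gui.CreatePen(win32con.PS_SOLID, 2, red)
old_pen = win32gui.SelectObject(dc, pen)

def draw_rect_outline(left, top, width, height):
    # trace the four edges of the rectangle with line primitives
    win32gui.MoveToEx(dc, left, top)
    win32gui.LineTo(dc, left + width, top)
    win32gui.LineTo(dc, left + width, top + height)
    win32gui.LineTo(dc, left, top + height)
    win32gui.LineTo(dc, left, top)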

cv2.putText text stays in image after next cv2.read

I am working on some code for my master's thesis. There are two threads: one does cv2.VideoCapture() and sets the ready Event(); the second thread's while loop waits for the event to be set and then does some image processing. A typical producer/consumer problem.
The problem is that I am trying to create a "model" of the scene, where I update pixels only in areas without the red circle, which I am using as a marker to detect the position of an object. I need to remember the part of the image behind the red marker and then update the area everywhere else.
To do that, I need to mark the center of the red circle. I use cv2.circle(). The black dot somehow gets into the model, even though I never put it there. The cv2.putText() function has the same problem. The code looks like this.
if __name__ == "__main__":
    ready = Event()
    vs = VideoStream(0, ready).start()
    v = Vision(vs.read())
    cv2.imshow('firework', v.getImg())
    while cv2.getWindowProperty('firework', cv2.WND_PROP_VISIBLE) >= 1:
        # data acquisition
        while not ready.isSet():
            pass
        img = vs.read()
        ready.clear()
        # image processing
        v.updateImg(img)
        v.detectRedMarker()  # set cX, cY
        mask = v.getROIMask(radius=100)
        v.updateModel(mask)  # circle mask, center cX, cY
        # extras
        img = v.putMarkerToImg(img)
        cv2.imshow('firework', img)
        if cv2.waitKey(1) == ord(' '):
            break
    vs.stop()
Methods in the Vision class...
def updateImg(self, img):
    self.img = img
    self.imgGray = cv2.cvtColor(self.img, cv2.COLOR_BGR2GRAY)
    return

def getROIMask(self, radius):
    mask = np.zeros((self.imgHeight, self.imgWidth), np.uint8)
    cv2.circle(mask, (self.cX, self.cY), radius, 1, thickness=-1)
    return mask

def updateModel(self, mask):
    modelUpdate = cv2.bitwise_and(self.imgGray, cv2.bitwise_not(mask*255))
    modelOld = cv2.bitwise_and(self.model, (mask*255))
    self.model = cv2.add(modelOld, modelUpdate)
    return
As you can see, I put the dot in the center of the red marker after I update the model, and then read a new image in the next iteration. Somehow the black dot still gets into the model. Can anyone please suggest how this happens?
Black dot marking the center of red circle somehow gets into the model
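Without seeing VideoStream.read(), one common way an annotation leaks into a later stage is buffer aliasing: two names referring to the same ndarray, so drawing on one modifies the other. A minimal standalone illustration of that effect (not the asker's code, and not necessarily the cause here):
import numpy as np
import cv2

frame = np.full((100, 100, 3), 255, dtype=np.uint8)   # pretend camera frame (white)
img = frame                                           # same buffer, not a copy
cv2.circle(img, (50, 50), 3, (0, 0, 0), -1)           # black dot appears in frame too
print((frame[50, 50] == 0).all())                     # True: the source image changed

img = frame.copy()                                    # a copy keeps later drawing separate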

8 Queens (pyglet-python)

I'm trying to make the 8 queens game in pyglet. I have successfully drawn board.png on the window. Now when I paste the queen.png image on it, I want it to show only the queen, not the white part. I removed the white part using Photoshop, but when I draw it on board.png in pyglet it again shows that white part. Please help.
import pyglet
from pyglet.window import Window, mouse, gl

# Display an image in the application window
image = pyglet.image.Texture.create(800, 800)
board = pyglet.image.load('resources/Board.png')
queen = pyglet.image.load('resources/QUEEN.png')
image.blit_into(board, 0, 0, 0)
image.blit_into(queen, 128, 0, 0)

# creating a window
width = board.width
height = board.height
mygame = Window(width, height,
                resizable=False,
                caption="8 Queens",
                config=pyglet.gl.Config(double_buffer=True),
                vsync=False)

# Making list of tiles
print("Height: ", board.height, "\nWidth: ", board.width)

@mygame.event
def on_draw():
    mygame.clear()
    image.blit(0, 0)

def updated(dt):
    on_draw()

pyglet.clock.schedule_interval(updated, 1 / 60)

# Launch the application
pyglet.app.run()
These are the images:
queen.png
board.png
Your image is a rectangle. So necessarily, you will have a white space around your queen whatever you do.
I would recommend a bit of hacking (it's not very beautiful) and create two queen versions: queen_yellow and queen_black. Whenever the queen is standing on a yellow tile, display queen_yellow, and otherwise display queen_black.
To find out whether a tile is a yellow tile (using a matrix with x and y coordinates, where the top value for y is 0 and the very left value for x is 0):
if tile_y % 2 == 0:        # is it an even row?
    if tile_x % 2 == 0:    # is it an even column?
        queentype = queen_yellow
    else:
        queentype = queen_black
else:                      # it is an uneven row
    if tile_x % 2 != 0:    # is it an uneven column?
        queentype = queen_yellow
    else:
        queentype = queen_black
Hope that helped,
Narusan
First of all, please verify that there is no background (you can use GIMP for that). Once that is done, go ahead with this:
Since it is a PNG image, you can't just put it on the window as-is, or it will lose its transparency. You need to import the PNGImageDecoder from pyglet like
from pyglet.image.codecs.png import PNGImageDecoder
then use it for loading the PNG image like
kitten = pyglet.image.load('kitten.png', decoder=PNGImageDecoder())
and finally draw it on the window with kitten.blit(x, y), passing the x and y coordinates where you would like to have it.
The documentation for the above can be found here.
Hope this helps!
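As a rough sketch of that suggestion, applied to the asker's file names: load both PNGs and blit them separately in on_draw, so the queen's alpha channel is blended over the board instead of being copied into a single texture (untested, and the coordinates are placeholders):
import pyglet
from pyglet.image.codecs.png import PNGImageDecoder

window = pyglet.window.Window(800, 800, caption="8 Queens")
board = pyglet.image.load('resources/Board.png', decoder=PNGImageDecoder())
queen = pyglet.image.load('resources/QUEEN.png', decoder=PNGImageDecoder())

@window.event
def on_draw():
    window.clear()
    board.blit(0, 0)      # background first
    queen.blit(128, 0)    # queen on top; transparent pixels let the board show through

pyglet.app.run()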
