Why is the ellipse position not the same as my mouse position? - python

I have code here that adds an ellipse and a line when the mouse is clicked.
class Viewer(QtWidgets.QGraphicsView):
    def __init__(self, parent):
        super(Viewer, self).__init__(parent)
        self._zoom = 0
        self._empty = True
        self._scene = QtWidgets.QGraphicsScene(self)
        self.setGeometry(QtCore.QRect(20, 90, 451, 421))
        self.setSceneRect(20, 90, 451, 421)
I have a mouseReleaseEvent:
def mouseReleaseEvent(self, event):
    pos = self.mapToScene(event.pos())
    point = self._scene.addEllipse(self._size/2, self._size/2, 10, 10, QPen(Qt.black), QBrush(Qt.green))
    point.setPos(QPointF(pos.x(), pos.y()))
    self._scene.addLine(pos.x(), pos.y(), self.posprev.x(), self.posprev.y(), QPen(Qt.green))
When I click, the line ends exactly at the mouse position, but the ellipse is offset from it by a small gap. The center of the ellipse should be at the endpoint of the line, i.e. at the mouse position.
See image here:
Can someone explain what is wrong, and why the ellipse is not added at the exact mouse position?

As the documentation of addEllipse() explains:
Note that the item's geometry is provided in item coordinates, and its position is initialized to (0, 0).
This is actually valid for all QGraphicsScene functions that add basic shapes, and the initialized position is always (0, 0) for all QGraphicsItems in general.
Consider the following:
point = scene.addEllipse(5, 5, 10, 10)
The above will create an ellipse enclosed in a rectangle that starts at (5, 5) relative to its position. Since we've not moved it yet, that position is the origin point of the scene.
The ellipse as it appears as soon as it's created, with the rectangle shown as a reference for its boundaries.
Then, we set its position (assuming the mouse is at 20, 20 of the scene):
point.setPos(QPointF(20, 20))
The result will be an ellipse enclosed in a rectangle that has its top left corner at (25, 25), which is the rectangle position relative to the item position: (5, 5) + (20, 20).
Note that the above shows both the ellipse in the original position and the result of setPos().
If you want an ellipse that will be centered on its position, you must create one with negative x and y coordinates that are half of the width and height of its rectangle.
Considering the case above, the following will properly show the ellipse centered at (20, 20):
point = scene.addEllipse(-5, -5, 10, 10)
point.setPos(QPointF(20, 20))
Notes:
as the documentation shows, mapToScene() already returns a QPointF, there's no point in doing setPos(QPointF(pos.x(), pos.y())): just do setPos(pos);
remember what was said above: all items have a starting position at (0, 0); this is also valid for the line you're creating after that point, which will be drawn between pos and self.posprev, but will still be at (0, 0) in scene coordinates;
the view and the scene might need mouse events, especially if you're going to add movable items; you should always call the base implementation (in your case, super().mouseReleaseEvent(event)) when you override functions, unless you really know what you're doing;
as already suggested to you, it is of utmost importance that you read and understand the whole graphics view documentation, especially how its coordinate system works; the graphics view framework is as powerful as it is complex, and cannot be learned just by trial and error: using it effectively requires patience and a careful study of the documentation of each of its classes and every function you are going to use;
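Putting the above together, here is a minimal sketch of how the override could look, assuming self._size is the intended ellipse diameter (the original hard-codes 10) and self.posprev holds the previous click position in scene coordinates:
def mouseReleaseEvent(self, event):
    super().mouseReleaseEvent(event)          # let the base class handle the event too
    pos = self.mapToScene(event.pos())        # already a QPointF
    radius = self._size / 2
    # negative x/y offsets so the ellipse is centered on the item position
    point = self._scene.addEllipse(-radius, -radius, self._size, self._size,
                                   QPen(Qt.black), QBrush(Qt.green))
    point.setPos(pos)
    self._scene.addLine(pos.x(), pos.y(),
                        self.posprev.x(), self.posprev.y(), QPen(Qt.green))
    self.posprev = pos                        # assumed bookkeeping for the next click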

Related

How to convert screen x,y (cartesian coordinates) to 3D world space crosshair movement angles (screenToWorld)?

Recently I've been playing around with computer vision and neural networks.
And came across experimental object detection within a 3D application.
But, surprisingly to me, I've faced the issue of converting one coordinate system to another (AFAIK cartesian to polar/spherical).
Let me explain.
For example, we have a screenshot of a 3D application window (some 3D game):
Now, using Open-CV or neural network I'm able to detect the round spheres (in-game targets).
As well as their X, Y coordinates within the game window (x, y offsets).
And if I programmatically move the mouse cursor to the given X, Y coordinates in order to aim at one of the targets, it works only while I'm in the desktop environment (moving the cursor on the desktop).
But when I switch to the 3D game, so that my mouse cursor is now inside the 3D game world, it does not work and does not aim at the target.
So, I did a decent research on the topic.
And what I came across, is that the mouse cursor is locked inside 3D game.
Because of this, we cannot move the cursor using MOUSEEVENTF_MOVE (0x0001) + MOUSEEVENTF_ABSOLUTE (0x8000) flags within the mouse_event win32 call.
We are only able to move the mouse programmatically using relative movement.
And, theoretically, in order to get these relative mouse movement offsets, we can calculate the offset of the detections from the middle of the 3D game window.
In such case, relative movement vector would be something like (x=-100, y=0) if the target point is 100px left from the middle of the screen.
The thing is, that the crosshair inside a 3D game will not move 100px to the left as expected.
And will not aim the given target.
But it will move a bit in a given direction.
After that, I've made more research on the topic.
And as I understand, the crosshair inside a 3D game is moving using angles in 3D space.
Specifically, there are only two of them: horizontal movement angles and vertical movement angles.
So the game engine takes our mouse movement and converts it to the movement angles within a given 3D world space.
And that's how the crosshair movement is done inside a 3D game.
But we don't have access to that, all we can is move the mouse with win32 calls externally.
Then I decided to somehow calculate pixels per degree (the amount of pixels we need to pass to the win32 relative mouse movement in order to move the crosshair by 1 degree inside the game).
In order to do this, I wrote down a simple calculation algorithm.
Here it is:
As you can see, we need to move our mouse relatively with win32 by 16400 pixels horizontally, in order to move the crosshair inside our game by 360 degrees.
And indeed, it works.
16400/2 will move the crosshair by 180 degrees respectively.
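In code form, that measurement boils down to a single constant (using the figure quoted above):
FULL_TURN_COUNTS = 16400                      # relative mouse counts for a full 360-degree turn (measured)
pixels_per_degree = FULL_TURN_COUNTS / 360    # roughly 45.6 counts per degree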
What I did next, is I tried to convert our screen X, Y target offset coordinates to percentages (from the middle of the screen).
And then convert them to degrees.
The overall formula looked like (example for horizontal movement only):
w = 100 # screen width
x_offset = 10 # target x offset
hor_fov = 106.26
degs = (hor_fov/2) * (x_offset /w) # 5.313 degrees
And indeed, it worked!
But not quite as expected.
The overall aiming precision was different, depending on how far the target is from the middle of the screen.
I'm not that great with trigonometry, but as I can say - there's something to do with polar/sphere coordinates.
Because we can see only some part of the game world both horizontally & vertically.
It's also called the FOV (Field of view).
Because of this, in the given 3D game we are only able to view 106.26 degrees horizontally.
And 73.74 degrees vertically.
My guess is that I'm trying to convert coordinates from a linear system to something non-linear.
As a result, the overall accuracy is not good enough.
I've also tried to use math.atan in Python.
And it works, but still - not accurate.
Here is the code:
from math import atan, degrees

def point_get_difference(source_point, dest_point):
    # source_point = (960, 540)
    # dest_point = (833, 645)
    # result = (-127, 105)
    x = dest_point[0]-source_point[0]
    y = dest_point[1]-source_point[1]
    return x, y
def get_move_angle__new(aim_target, gwr, pixels_per_degree, fov):
    game_window_rect__center = (gwr[2]/2, gwr[3]/2)
    rel_diff = list(point_get_difference(game_window_rect__center, aim_target))
    x_degs = degrees(atan(rel_diff[0]/game_window_rect__center[0])) * ((fov[0]/2)/45)
    y_degs = degrees(atan(rel_diff[1]/game_window_rect__center[0])) * ((fov[1]/2)/45)
    rel_diff[0] = pixels_per_degree * x_degs
    rel_diff[1] = pixels_per_degree * y_degs
    return rel_diff, (x_degs+y_degs)

get_move_angle__new((900, 540), (0, 0, 1920, 1080), 16364/360, (106.26, 73.74))
# Output will be: ([-191.93420990140876, 0.0], -4.222458785413539)
# But it's not accurate, overall x_degs must be more or less than -4.22...
Is there a way to precisely convert 2D screen X, Y coordinates into 3D game crosshair movement degrees?
There must be a way, I just can't figure it out ...
The half-way point between the center and the edge of the screen is not equal to the field of view divided by four. As you noticed, the relationship is nonlinear.
The angle between a fractional position on the screen (0-1) and the middle of the screen can be calculated as follows. This is for the horizontal rotation (i.e, around the vertical axis), so we're only considering the X position on the screen.
# angle is the angle in radians that the camera needs to
# rotate to aim at the point
# x is the point's x position on the screen, normalised by
# the resolution (so 0.0 for the left-most pixel, 0.5 for
# the centre and 1.0 for the right-most)
# FOV is the field of view in the x dimension in radians
angle = math.atan((x-0.5)*2*math.tan(FOV/2))
For a field of view of 100 degrees and an x of zero, that gives us -50 degrees of rotation (exactly half the field of view). For an x of 0.25 (half-way between the edge and middle), we get a rotation of around -31 degrees.
Note that the 2*math.tan(FOV/2) part is constant for any given field of view, so you can calculate it in advance and store it. Then it just becomes (assuming we named it z):
angle = math.atan((x-0.5)*z)
Just do that for both x and y and it should work.
Edit / update:
Here is a complete function. I've tested it, and it seems to work.
import math

def get_angles(aim_target, window_size, fov):
    """
    Get (x, y) angles from center of image to aim_target.

    Args:
        aim_target: pair of numbers (x, y) where to aim
        window_size: size of area (x, y)
        fov: field of view in degrees, (horizontal, vertical)

    Returns:
        Pair of floating point angles (x, y) in degrees
    """
    fov = (math.radians(fov[0]), math.radians(fov[1]))
    x_pos = aim_target[0]/(window_size[0]-1)
    y_pos = aim_target[1]/(window_size[1]-1)
    x_angle = math.atan((x_pos-0.5)*2*math.tan(fov[0]/2))
    y_angle = math.atan((y_pos-0.5)*2*math.tan(fov[1]/2))
    return (math.degrees(x_angle), math.degrees(y_angle))
print(get_angles((0, 0), (1920, 1080), (100, 67.67)), "should be around -50, -33.835")
print(get_angles((1919, 1079), (1920, 1080), (100, 67.67)), "should be around 50, 33.835")
print(get_angles((959.5, 539.5), (1920, 1080), (100, 67.67)), "should be around 0, 0")
print(get_angles((479.75, 269.75), (1920, 1080), (100, 67.67)), "should be around -30.79, -18.53")

PyQt5 QGraphicsSimpleTextItem Y position

Here is the simple code that draws letters using PyQt5.
setPos is (0, 0), but the letters are not at the top of the window.
Horizontally, the letters are not at the window edge either.
What is wrong?
Thank you
from PyQt5 import QtWidgets, QtGui, Qt
from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5.QtGui import QBrush, QColor
import sys

class MainWindow(QMainWindow):
    def __init__(self):
        super(MainWindow, self).__init__()
        self.initWindow()

    def initWindow(self):
        self.setGeometry(200, 200, 1000, 400)
        self.show()
        self.scene = QtWidgets.QGraphicsScene()
        self.graphics = QtWidgets.QGraphicsView(self.scene, self)
        self.graphics.setGeometry(0, 0, 1000, 400)
        self.scene.setSceneRect(0, 0, 1000, 400)
        self.graphics.setHorizontalScrollBarPolicy(1)
        self.graphics.setVerticalScrollBarPolicy(1)
        self.graphics.setFrameStyle(0)
        text = QtWidgets.QGraphicsSimpleTextItem()
        _font = QtGui.QFont()
        _font.setPixelSize(200)
        _font.setBold(False)
        text.setFont(_font)
        text.setText('DDD')
        text.setBrush(QBrush(QColor(0, 0, 0)))
        text.setPos(0, 0)
        self.scene.addItem(text)
        self.graphics.show()

app = QApplication(sys.argv)
win = MainWindow()
sys.exit(app.exec())
As the QGraphicsSimpleTextItem documentation explains, the positioning of the text is based on the font "bounding rectangle":
Each character symbol of a font uses a "baseline" for reference, an "ascent" (how far the font extends from the baseline to the top) and a "descent" (how far it extends below, for letters like lowercase "g" or "q"). Also, most characters do not start exactly at the left 0-point: there is usually some horizontal margin as well.
If you want to position it exactly at the top-left, you'll need to use QFontMetrics, which provides specific metrics information about a font; also, since we're dealing with a QGraphicsScene, QFontMetricsF is more appropriate, as it returns floating point precision. The most important function for this is tightBoundingRect().
If I add the following to your code, just after adding the text item:
outRect = self.scene.addRect(text.boundingRect())
outRect.setPen(QtCore.Qt.red)
fm = QtGui.QFontMetricsF(_font)
boundingRect = fm.tightBoundingRect(text.text())
self.scene.addRect(boundingRect.translated(0, text.boundingRect().bottom() - fm.descent()))
The result is clear (I used a different string to better show the differences):
The red rectangle indicates the actual bounding rectangle of the item (which has the same size as what QFontMetrics(_font).boundingRect(QtCore.QRect(), QtCore.Qt.AlignCenter, text.text()) would return).
The black rectangle shows the "tight" bounding rectangle, which is the smallest possible rectangle for that string.
The tight rectangle (as the one provided by QFontMetrics.boundingRect) uses the baseline as the 0-point for the coordinates, so it will always have a negative y position and probably (but it depends on the font and the style) an x greater than 0.
Finally, to get your text item placed with the characters aligned on the top left corner of the scene, you'll need to compute its position based on that tight rectangle:
text.setPos(-boundingRect.left(), -(fm.ascent() + boundingRect.top()))
The left has to be negative to compensate for the horizontal text positioning, while the negative vertical position is computed by adding the ascent to the boundingRect top (which is negative in turn).
Keep in mind, though, that the bounding rectangle will still be the bigger red rectangle shown before (obviously translated to the new position):
So, while the text appears aligned at the top left, the item's bounding rect top-left corner is actually outside the scene rectangle, and its bottom-left corner exceeds its visual representation.
This is important, as it has to do with item collision detection and mouse button interaction.
To avoid that, you'd need to subclass QGraphicsSimpleTextItem and override boundingRect(), but keep in mind that positioning (especially vertical) should always be based on the baseline (the ascent), otherwise you'll probably get unexpected or odd behavior if the text changes.
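As an illustration of that last point, here is a sketch (untested) of such a subclass, reusing the same tight-rectangle translation shown above; the class name is made up:
class TightSimpleTextItem(QtWidgets.QGraphicsSimpleTextItem):
    # Reports the "tight" rectangle as the item's bounding rect, so collision
    # detection and mouse interaction follow the visible glyphs.
    def boundingRect(self):
        fm = QtGui.QFontMetricsF(self.font())
        tight = fm.tightBoundingRect(self.text())
        return tight.translated(0, super().boundingRect().bottom() - fm.descent())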

The pygame drawing functions leave pixel-wide gaps. Why?

After converting a piece of code (that animates a pattern of rectangles) from Java to Python, I noticed that the animation that the code produced seemed quite glitchy. I managed to reproduce the problem with a minimal example as follows:
import pygame
SIZE = 200
pygame.init()
DISPLAYSURF = pygame.display.set_mode((SIZE, SIZE))
D = 70.9
xT = 0.3
yT = 0
#pygame.draw.rect(DISPLAYSURF, (255,0,0), (0, 0, SIZE, SIZE))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (xT, yT, D, D))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (xT+D, yT+D, D, D))
pygame.draw.rect(DISPLAYSURF, (0,0,0), (xT, yT+D, D, D))
pygame.draw.rect(DISPLAYSURF, (0,0,0), (xT+D, yT, D, D))
pygame.display.update()
This code generates the following image:
Notice that the squares don't line up perfectly in the middle. Uncommenting the commented line in the code above results in the following image, which serves to illuminate the problem further:
It seems that there are pixel-wide gaps in the black and white pattern, even though it can be seen in the code (by the data that is passed in the calls to pygame.draw.rect()) that this shouldn't be the case. What is the reason for this behaviour, and how can I fix it?
(This didn't happen in Java, here is a piece of Java code corresponding to the Python code above).
Looking at the rendered picture in an image editor, the pixel distances can be confirmed as such:
Expanding the function calls (i.e. performing the additions manually), one can see that the input arguments to draw the white rectangles are of the form
pygame.draw.rect(DISPLAYSURF, (255,255,255), ( 0.3, 0, 70.9, 70.9))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (71.2, 70.9, 70.9, 70.9))
Since fractions of pixels do not make sense screen-wise, the input must be discretized in some way. Pygame (or SDL, as mentioned in the comments to the question) seems to choose truncating, which in practice transforms the drawing commands to:
pygame.draw.rect(DISPLAYSURF, (255,255,255), ( 0, 0, 70, 70))
pygame.draw.rect(DISPLAYSURF, (255,255,255), (71, 70, 70, 70))
which corresponds to the dimensions in the rendered image. If AWT draws it differently, my guess is that it uses rounding (of some sort) instead of truncating. This could be investigated by trying different rendering inputs, or by digging in the documentation.
If one wants pixel perfect rendering, using floating points as input is not well defined. If one keeps to the integers, the result should be independent of renderer, though.
EDIT: I'll expand a bit in case anyone else finds this, since I couldn't find much info on this behavior apart from the source code.
The function call in question takes the following input arguments (documentation):
pygame.draw.rect(Surface, color, Rect, width=0)
where Rect is a specific object defined by a top-left coordinate, a width and a height. By design it only handles integer attributes, since it is meant as a low-level "this is what you see on the screen" data type. The data type handles floats by truncating:
>>> import pygame
>>> r = pygame.Rect((1, 1, 8, 12))
>>> r.bottomright
(9, 13)
>>> r.bottomright = (9.9, 13.5)
>>> r.bottomright
(9, 13)
>>> r.bottomright = (11.9, 13.5)
>>> r.bottomright
(11, 13)
i.e., a regular (int) cast is done.
The Rect object is not meant as a "store the coordinates for my sprite" object, but as a "this is what the screen will represent" object. Floating points are certainly useful for the former purpose, and the designer would probably want to keep an internal list of floats to store this information. Otherwise, incrementing a screen position by e.g. r.left += 0.8 (where r is the Rect object) would never move r at all.
The problem in the question comes from (quite reasonably) assuming that the right x coordinate of the rectangle will at least be calculated as something like x₂ = int(x₁ + width), but since the function call implicitly transforms the input tuple to a Rect object before proceeding, and since Rect will truncate its input arguments, it will instead calculate it as x₂ = int(x₁) + int(width), which is not always the same for float input.
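A quick illustration with the numbers from the question:
x1, width = 0.3, 70.9
print(int(x1 + width))         # 71 -- the right edge one might expect
print(int(x1) + int(width))    # 70 -- what the truncating Rect actually produces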
To create a Rect using rounding rules, one could e.g. define a wrapper like:
def rect_round(x1, y1, w, h):
    """Returns a pygame.Rect object after applying sane rounding rules.

    Args:
        x1, y1, w, h:
            (x1, y1) is the top-left coordinate of the rectangle,
            w is width,
            h is height.

    Returns:
        pygame.Rect object.
    """
    r_x1 = round(x1)
    r_y1 = round(y1)
    r_w = round(x1 - r_x1 + w)
    r_h = round(y1 - r_y1 + h)
    return pygame.Rect(tuple(map(int, (r_x1, r_y1, r_w, r_h))))
(or modified for other rounding rules) and then call the draw function as e.g.
pygame.draw.rect(DISPLAYSURF, (255,255,255), rect_round(71.2, 70.9, 70.9, 70.9))
One will never bypass the fact that the pixel by definition is the smallest addressable unit on the screen, though, so this solution might also have its quirks.
Related thread on the Pygame mailing list from 2005: Suggestion: make Rect use float coordinates

Python/Pygame: Drawing graph origin mathematically?

I am working on a graphing program that I am calling PyGraph.
It allows you to create a graph of any size and draw on it, and later in development I will provide coordinates and things, but for now I have one question: how can I draw intersecting lines through the center to represent the origin?
Here is what I have so far:
# pygraph
import pygame
from pygame.locals import *

pygame.init()
screen = pygame.display.set_mode((640, 480))
x = 0
y = 0
size = 16
screen.fill((255, 255, 255))
pygame.draw.line(screen, (0, 0, 0), (screen.get_width()/2, 0), (screen.get_width()/2, screen.get_height()), 5)
pygame.draw.line(screen, (0, 0, 0), (0, screen.get_height()/2), (screen.get_width(), screen.get_height()/2), 5)

while True:
    while y < 480:
        pygame.draw.rect(screen, (0, 0, 0), (x, y, size, size), 1)
        if x > 640:
            x = 0
            y += size
            pygame.draw.rect(screen, (0, 0, 0), (x, y, size, size), 1)
        x += size
    for e in pygame.event.get():
        if e.type == QUIT:
            exit()
        if e.type == KEYUP:
            if e.key == K_SPACE:
                x = 0
                y = 0
                screen.fill((255, 255, 255))
                pygame.draw.line(screen, (0, 0, 0), (screen.get_width()/2, 0), (screen.get_width()/2, screen.get_height()), 5)
                pygame.draw.line(screen, (0, 0, 0), (0, screen.get_height()/2), (screen.get_width(), screen.get_height()/2), 5)
                size = input('Enter size: ')
    pygame.display.flip()
The lines go through the center, but it doesn't work for every graph size. I'm not the best at math, but I hope this isn't obvious... any advice?
The problem is that you draw the grid using the top left corner as your anchor. That is, all your grid rectangles have one corner in the top left. This becomes a problem when the distance between the center line and the screen edge is not divisible by the size - you can't divide a line of 640 units into even divisions of 15, for example.
A far better solution would be to use the center as the anchor. So basically, all the grid rectangles have one corner in the center of the graph, which means you will never get any "remainder" on the center line, and the "remainder" will instead be on the border of the graph, which looks much nicer.
Here is code for anchoring your rectangles at the center (should replace your original while y<480 loop):
while y <= 480/2 + size:
    pygame.draw.rect(screen, (0, 0, 0), (640/2 + x, 480/2 + y, size, size), 1)
    pygame.draw.rect(screen, (0, 0, 0), (640/2 - x, 480/2 + y, size, size), 1)
    pygame.draw.rect(screen, (0, 0, 0), (640/2 + x, 480/2 - y, size, size), 1)
    pygame.draw.rect(screen, (0, 0, 0), (640/2 - x, 480/2 - y, size, size), 1)
    x += size
    if x >= 640/2 + size:
        x = 0
        y += size
Brief explanation:
I change the anchor of the rectangle (the point you pass into pygame.draw.rect) to the center of the graph, and instead of drawing one rectangle, I draw four - one in each quadrant of the graph.
I also fixed the code a bit to not need to call pygame.draw.rect() in the if statement.
A minor style tip:
Replace 640 and 480 with screen.get_width() and screen.get_height(), so you can adjust the width and height later without problems.
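For example, the center-anchored loop above could be parameterized along these lines (a sketch, not tested):
w, h = screen.get_width(), screen.get_height()
cx, cy = w // 2, h // 2
x = y = 0
while y <= cy + size:
    # draw one rectangle per quadrant, all anchored at the center
    for dx, dy in ((x, y), (-x, y), (x, -y), (-x, -y)):
        pygame.draw.rect(screen, (0, 0, 0), (cx + dx, cy + dy, size, size), 1)
    x += size
    if x >= cx + size:
        x = 0
        y += size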

PyQt QGraphicsEllipseItem rotate offset

New to PyQt and I'm having an issue rotating a QGraphicsEllipseItem. I want the ellipse to rotate around its center instead of the corner of the QRectF used to define it. My code looks like this (sorry, the computer I'm coding on doesn't have internet access, so I'm copying the relevant parts here by hand):
self.scene = QtGui.QGraphicsScene()
self.ui.graphicsView.setScene(self.scene)
pen = QtGui.QPen(QColor(Qt.yellow))
# Draw first Ellipse
# This code correctly places a yellow ellipse centered at the scene 500,500 point
ellipse1 = QtGui.QGraphicsEllipseItem(0,0,100,10)
ellipse1.setPen(pen)
self.scene.addItem(ellipse1)
ellipse1.setPos(500, 500)
ellipse1.translate(-50, -5)
# Now, try to draw a rotated ellipse
# This code rotates the ellipse about the 0,0 location of the rectangle
# which is the scene 450, 495 point, not the center of the ellipse
ellipse2 = QtGui.QGraphicsEllipseItem(0,0,100,10)
ellipse2.setPen(pen)
self.scene.addItem(ellipse2)
ellipse2.setPos(500, 500)
ellipse2.translate(-50, -5)
ellipse2.rotate(45.0)
OK, that is basically what I expected. Since QGraphicsEllipseItem is derived from QGraphicsItem, I tried to set the transform origin point for ellipse2 before the rotation:
ellipse2 = QtGui.QGraphicsEllipseItem(0,0,100,10)
ellipse2.setPen(pen)
self.scene.addItem(ellipse2)
ellipse2.setPos(500, 500)
ellipse2.translate(-50, -5)
ellipse2.setTransformOriginPoint(450, 495)
ellipse2.rotate(45.0)
This results in the error "AttributeError: 'QGraphicsEllipseItem' object has no attribute 'setTransformOriginPoint'".
Obviously, I'm doing something wrong or making an incorrect assumption about QGraphicsEllipseItem. Some sites hint that I may need to use a bounding rectangle in order to do the rotation, but I don't understand how to do that.
If someone could show me the correct way to rotate an ellipse about its center in pyqt, I would greatly appreciate it!!!
Ok, so after many weeks I was able to find my own answer, although I don't really understand why it works. My standard method of programming by braille. Anyway, the code should look like this:
transform = QtGui.QTransform()
ellipse = QtGui.QGraphicsEllipseItem(0, 0, 100, 10)
ellipse.setPen(pen)
ellipse.setPos(500, 500)
transform.rotate(-45.0)  # rotate by the negative of the angle desired
transform.translate(-50, -5)  # center of the ellipse
ellipse.setTransform(transform)
self.scene.addItem(ellipse)
So this successfully places the center of the rotated ellipse at the point (500, 500). I'm not sure why you have to take the negative of the angle you want to rotate by, but it seems to work. If anyone can explain why it works this way, I would appreciate it.
I got the same problem and spent two whole days solving it. This is my solution:
First of all you should define the coordinates (x, y) of the point around which the ellipse should rotate, this way:
ellipse.setTransformOriginPoint(QPointF(?, ?))
Then you can use setRotation() to rotate the ellipse.
the whole code can be seen below:
__author__ = 'shahryar_slg'

from PyQt4.QtGui import *
from PyQt4.QtCore import *

class MainWindow(QDialog):
    def __init__(self):
        super(QDialog, self).__init__()
        self.view = QGraphicsView()
        self.scene = QGraphicsScene()
        self.layout = QGridLayout()
        self.layout.addWidget(self.view, 0, 0)
        self.view.setScene(self.scene)
        self.setLayout(self.layout)

        self.ellipse = QGraphicsEllipseItem(10, 20, 100, 60)
        self.ellipse.setTransformOriginPoint(QPointF(100/2 + 10, 60/2 + 20))
        self.ellipse.setRotation(-60)
        self.scene.addItem(self.ellipse)

        # I created another ellipse in the same position to show you
        # that the first one is rotated around its center:
        self.ellipse2 = QGraphicsEllipseItem(20, 20, 100, 40)
        self.scene.addItem(self.ellipse2)

        self.update()

if __name__ == "__main__":
    import sys
    app = QApplication(sys.argv)
    window = MainWindow()
    window.show()
    app.exec_()
Pay attention to the way I've calculated the center of the ellipse.
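A slightly more generic way to get that origin point is to ask the item for it (a small sketch):
ellipse = QGraphicsEllipseItem(10, 20, 100, 60)
# rect() is the rectangle passed to the constructor, so its center is the
# ellipse center in item coordinates.
ellipse.setTransformOriginPoint(ellipse.rect().center())
ellipse.setRotation(-60)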
