I'm developing a GUI with Qt Designer for some image processing tasks. I have two graphics views stacked vertically in a grid layout. Both should display an image, and later on I will add overlays.
I create my pixmap img_pixmap and add it to my scene. This scene is then set on my graphics view. Since my image is much larger than my screen, I apply fitInView(). In code:
self.img_pixmap_p = self.img_scene.addPixmap(img_pixmap)
self.img_view.setScene(self.img_scene)
self.img_scene.setSceneRect(QtCore.QRectF())
self.img_view.fitInView(self.img_scene.sceneRect(), QtCore.Qt.KeepAspectRatio)
So far, so good, but how do I get rid of the white space around my image view? Ideally, I want the pixmap to use the full width of the graphics view and, to keep the aspect ratio, the graphics view should adjust its height accordingly. Any ideas on how to achieve that in a straightforward fashion?
Here is an image to give a better idea of what I get:
As you can see, there are white borders, which I want to avoid.
Okay, I did as suggested by Pavel:
img_aspect_ratio = float(pixmap.size().width()) / pixmap.size().height()
width = img_view.size().width()
img_view.setFixedHeight(int(width / img_aspect_ratio))  # setFixedHeight expects an int
img_view.fitInView(img_scene.sceneRect(), QtCore.Qt.KeepAspectRatio)
It works fine when you call this in each resizeEvent().
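Factored out, the height calculation above is simple arithmetic; here is the same logic as a plain helper (the function name is my own):

```python
def view_height_for_width(pixmap_width, pixmap_height, view_width):
    """Height that lets a pixmap fill view_width while keeping its aspect ratio."""
    aspect = pixmap_width / pixmap_height
    return int(view_width / aspect)
```

Calling img_view.setFixedHeight(view_height_for_width(...)) from resizeEvent() reproduces the snippet above while keeping the math testable on its own.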
I've been trying to figure this out for a while now, without success. I am developing a program with a UI that I made in Qt Designer, where I will be analyzing images using cv2. Part of this analysis involves having the user click on the image to store coordinates for later use. In Qt Designer, the image object is of the class QtWidgets.QGraphicsView, but I cannot figure out how to show a cv2 image (which is a numpy array) on a QtWidgets.QGraphicsView. I am thinking I should be using a QtGui.QPixmap, but I'm still not sure. The documentation seems sparse, so I'm having trouble deciding how to represent these images, which I plan to interact with and analyze.
I did end up using QGraphicsPixmapItem as suggested by ekhumoro in the comments, together with QImage. Here is an example of how I'm doing this:
import cv2
from PyQt5 import QtGui, QtWidgets

image_window = QtWidgets.QGraphicsView(centralwidget)
raw_image = cv2.imread('image.jpg')
height, width, channel = raw_image.shape
bytes_per_line = width * 3
# cv2 loads images in BGR order; rgbSwapped() converts to the RGB that Format_RGB888 expects
image = QtGui.QImage(raw_image.data, width, height, bytes_per_line, QtGui.QImage.Format_RGB888).rgbSwapped()
scene = QtWidgets.QGraphicsScene()
scene.addPixmap(QtGui.QPixmap.fromImage(image))
image_window.setScene(scene)
And this worked great for me. There may still be some optimizations I can do, but it gives me the freedom to keep everything separated so I can easily manipulate the raw_image pixel matrix. I also have everything abstracted with classes for the scene and the raw image.
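One detail worth noting with this approach: cv2.imread() returns channels in BGR order, while Format_RGB888 expects RGB, so a channel swap is needed somewhere (QImage's rgbSwapped() or a numpy slice both work). As a numpy sketch (the function name is mine):

```python
import numpy as np

def bgr_to_rgb(arr):
    """Reverse the channel axis of an H x W x 3 array.
    copy() keeps the buffer C-contiguous, which QImage requires."""
    return arr[:, :, ::-1].copy()
```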
I'm embedding matplotlib in my PyQt4 application and using it to display images. The matplotlib control is displayed in a QDockWidget, and I'm setting some of its parameters as follows:
imagePlot = self.axes.imshow(myNumpyArray, interpolation = "nearest")
imagePlot.axes.set_axis_off()
imagePlot.axes.get_xaxis().set_visible(False)
imagePlot.axes.get_yaxis().set_visible(False)
self.fig.subplots_adjust(bottom = 0, top = 1, left = 0, right = 1)
This has the desired effect i.e. the image is displayed filling as much space as possible in the QDockWidget while maintaining the aspect ratio. I can resize the dock widget by shrinking it horizontally and then expanding it and the image display is correct (filling as much space as possible while maintaining the aspect ratio). Now, if I add the following line:
imagePlot.axes.set(adjustable = "datalim")
I get unexpected behavior. Initially the image is displayed in the same manner as before, but after shrinking and expanding horizontally matplotlib seems to introduce a border around the data. For example, here's the image as it is displayed initially (and correctly):
And here's what matplotlib displays after I drag the side of the containing dock widget and shrink, then expand back to original size. Note the appearance of the border around the image.
What is causing this and how can I prevent it? Thank you.
I would like to display two images in my custom window in PyQt4: on the left, a source image; on the right, a transformed image (the transformation will be pixel-wise). The images will be in RGBA format, loaded with PIL(low) and converted to numpy arrays (scipy.ndimage.imread).
Both images have the same size (since the transformed image will be generated from the source image), and both displays will have the same size (independent of the image size):
If the image width is smaller than the display's, the image will be left-centered in the display. If it is greater, I need a scrollbar for the source display.
An analogous criterion applies for height: top-centered, with vertical scrollbars.
If I scroll the source display, the transformed display will scroll as well, but the transformed display will not have scrollbars of its own.
Question: What component could I use to display an image with inner scrolling capabilities? (I will be happy with the component name, and some guidelines; code would be appreciated but the main concept is the important thing for the answer).
There are several solutions. If you just want to create the numpy "image" once and then display it, and it's not too big (or you don't care about memory), you can simply convert it to a QPixmap and display that in a viewport: Convert numpy array to PySide QPixmap
self.imageLabel = QtGui.QLabel()
self.imageLabel.setBackgroundRole(QtGui.QPalette.Base)
self.imageLabel.setSizePolicy(QtGui.QSizePolicy.Ignored,
                              QtGui.QSizePolicy.Ignored)  # not sure about this one.
self.imageLabel.setPixmap(...pixmap here...)
self.scrollArea = QtGui.QScrollArea()
self.scrollArea.setBackgroundRole(QtGui.QPalette.Dark)
self.scrollArea.setWidget(self.imageLabel)
(taken from http://ftp.ics.uci.edu/pub/centos0/ics-custom-build/BUILD/PyQt-x11-gpl-4.7.2/examples/widgets/imageviewer.py)
If the image is too big, then you can add a paint hook. In the hook, get the visible part of the QScrollArea (the viewport), turn just this into a pixmap and render it.
The same is true when the numpy array changes all the time. You need to trigger a refresh when the array has been updated. Qt will then call the paint hook and the new content will become visible.
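For the paint-hook idea, the slicing step itself is plain numpy: given the viewport rectangle, take only that sub-array and convert just it to a pixmap. A minimal sketch of the clipped slice (names are mine):

```python
import numpy as np

def viewport_slice(arr, x, y, w, h):
    """Return the part of an image array inside the viewport rectangle,
    clipped to the array bounds."""
    H, W = arr.shape[:2]
    x0, y0 = max(0, x), max(0, y)
    x1, y1 = min(W, x + w), min(H, y + h)
    return arr[y0:y1, x0:x1]
```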
I am trying to add icons to my menu options in wxPython. I have followed the code here, but my image displays really large (not the nice icon size that Mike shows). Here is my code -- is there a way to make the icon fit the menu size (resize it)? Thanks!
self.HelpMenu = wx.Menu()
self.HelpAboutItem2 = wx.MenuItem(self.HelpMenu, 202, "&Visit Us", "Go to our website", wx.ITEM_NORMAL)
img = wx.Image('My_Image.jpg', wx.BITMAP_TYPE_ANY)
self.HelpAboutItem2.SetBitmap(wx.BitmapFromImage(img))
self.HelpMenu.AppendItem(self.HelpAboutItem2)
self.SetMenuBar(self.MainMenu)
I think I saved my image at a specific size for that particular example. I'm not finding anything in wx.Menu that lets you specify a size. However, wx.Image has Scale and Rescale methods that you might be able to use for scaling the image on the fly. I used that approach in my image viewer tutorial to keep images from becoming too big for my screen.
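As a sketch of that approach: compute a target size that keeps the aspect ratio, then pass it to wx.Image.Scale. The 16-pixel target height is my assumption (a typical menu icon height), and the helper name is mine:

```python
def menu_icon_size(width, height, target_height=16):
    """Scaled (w, h) that fits an assumed menu-icon height of 16 px,
    keeping the image's aspect ratio."""
    scale = target_height / height
    return (max(1, round(width * scale)), target_height)

# With wx this would be used roughly as:
#   img = wx.Image('My_Image.jpg', wx.BITMAP_TYPE_ANY)
#   img = img.Scale(*menu_icon_size(img.GetWidth(), img.GetHeight()))
#   self.HelpAboutItem2.SetBitmap(wx.BitmapFromImage(img))
```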
I have a problem with saving an image. I have the following piece of code:
self.canvas.postscript(file = filename, colormode = "color")
It works well, but when I set a background color in the canvas constructor (e.g. bg='red'),
the resulting image doesn't have this background color. It is still white.
Could anybody help me?
Sounds like you're using Tkinter: is that right?
I believe the problem is that the bg argument is a general property shared by all widgets. It's really a part of how the widget is drawn on the screen and not a part of the image you're constructing in your canvas. I think the easiest thing for you to do is to draw a red box in your canvas for your background - that will then be included as part of the image saved in your postscript file.
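A minimal sketch of that workaround (the helper name is mine; tag_lower pushes the rectangle behind any items already drawn on the canvas):

```python
import tkinter as tk

def add_background(canvas, color):
    """Draw a filled rectangle covering the canvas so the background
    colour is part of the drawing and survives canvas.postscript()."""
    w, h = int(canvas["width"]), int(canvas["height"])
    rect = canvas.create_rectangle(0, 0, w, h, fill=color, outline=color)
    canvas.tag_lower(rect)  # keep it underneath existing items
    return rect
```

Call it right after creating the canvas (or just before saving); canvas.postscript(file=filename, colormode="color") will then include the red background.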