I want to adjust the image size in a QTableWidget.
But the image comes out small (the part labeled "이미지", i.e. "image").
Below is the original image.
Please help me by referring to the code below.
Thanks.
from PyQt5.QtWidgets import *
from PyQt5.QtGui import *
from PyQt5.QtCore import *
imgPath = r"D:\Image\Image1.png"  # raw string so the backslashes are not treated as escapes
self.tableWidget1 = QTableWidget(self.tab1)
self.tableWidget1.setRowCount(30)
self.tableWidget1.setColumnCount(3)
self.tableWidget1.setHorizontalHeaderLabels(setNameList)
self.tableWidget1.resize(1824, 760)
self.tableWidget1.move(0, 0)
self.tableWidget1.setItem(0, 0, QTableWidgetItem(QIcon(QPixmap(imgPath).scaled(QSize(1280, 640), Qt.KeepAspectRatio)), ''))
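One likely cause (not from the original post) is that QTableWidget draws item icons at the view's default iconSize, so the scaled pixmap gets shrunk again. A minimal sketch of one way to enlarge it, assuming the same tableWidget1 and imgPath as above:
# Sketch only: let the view draw icons at a larger size and make the cell big enough to show it
self.tableWidget1.setIconSize(QSize(600, 300))  # icons are drawn at most at this size
self.tableWidget1.setColumnWidth(0, 600)
self.tableWidget1.setRowHeight(0, 300)
self.tableWidget1.setItem(0, 0, QTableWidgetItem(QIcon(QPixmap(imgPath)), ''))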
import os
from PIL import *
import numpy as np
import pandas as pd
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
# background png stored in list
os.chdir('D:\\My Image Processing\\Image Design\\IMAGE PROCESSING\\back\\BLACK FONT BG')
back = os.listdir()
for b in range(len(back)):
    # background
    b1 = Image.open(f"D:\\My Image Processing\\Image Design\\IMAGE PROCESSING\\back\\BLACK FONT BG\\{back[b]}")
    # text
    draw = ImageDraw.Draw(b1)
    # Font
    font33 = ImageFont.truetype("D:\\My Image Processing\\Image Design\\IMAGE PROCESSING\\fonts\\Poppins-Medium.ttf", 80)
    x = "हार पत्करण माझ ध्येय नाही कारण मी बनलोय जिंकण्यासाठी."
    draw.text((310, 220), x, font=font33, align='center', stroke_width=1, stroke_fill=(0, 0, 0))
    b1.save(f'D:\\My Image Processing\\Image Design\\IMAGE PROCESSING\\output\\BLACK OUTPUT MARATHI\\img.png')
output:
I am trying to put text on an image. I was able to put English text on the image, but when I try text in other languages such as Hindi and Marathi, the output comes out wrong, with what looks like wrong spelling and wrong syntax for that language. I tried various fonts and sizes but was unable to overcome this problem, so I am asking for your help.
Hey, I used pyvips to write Hindi/Devanagari text on an image using Python.
First of all I installed pyvips, but when you import it you will get an error; to overcome that issue you have to download the libvips files, put their location in your environment variable path, and then you can put the text on the image.
I actually wrote an article which contains a step-by-step solution... I am sure this will help you.
https://www.infinitycodex.in/how-to-put-hindi-text-on-images-using
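As a rough sketch of that "environment variable path" step on Windows (the folder below is hypothetical; it should point at wherever you extracted the libvips binaries):
import os
# Hypothetical install location -- adjust to your extracted libvips "bin" folder
vipsbin = r"C:\vips-dev\bin"
os.environ["PATH"] = vipsbin + os.pathsep + os.environ["PATH"]
# On Python 3.8+ on Windows, DLL directories also have to be added explicitly
if hasattr(os, "add_dll_directory"):
    os.add_dll_directory(vipsbin)
import pyvips  # should now import without a missing-DLL error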
# IMPORTS
import os
import pyvips
import textwrap
from PIL import *
import pandas as pd
from PIL import Image
from PIL import ImageFont
from PIL import ImageDraw
# MAKE A FUNCTION
def bgoutput(filename, text):
    rendered_text, feedback = pyvips.Image.text(
        text,
        font='Mangal', fontfile='Mangal Regular.ttf',
        width=900, height=900,
        autofit_dpi=True)
    rendered_text = rendered_text.gravity('centre', 1500, 1500)
    image = rendered_text.new_from_image([0, 0, 0]).bandjoin(rendered_text)
    image.write_to_file(f'{filename}.png')
# GENERATE OUTPUT 1
text = "परिस्थितियां विपरीत हो तो कुछ लोग टूट जाते हैं, और कुछ लोग रिकॉर्ड तोड़ देते हैं..!!"
bgoutput('new', text)
# COMBINE OUTPUT 1 WITH BACKGROUND IMAGE
img = Image.open('new.png')
b1 = Image.open('bg_img.png')
img = img.resize((1000,1000))
b1.paste(img,(50,50), mask=img)
# GENERATE FINAL OUTPUT
b1.save("final.png")
I'm trying to make a tool for my lab for manual image registration--where the user can select some points on two different images to align them. I made this in matplotlib, but zooming in/out was way too slow (I think because the images we're aligning are pretty high res). Is there a good way to do that in pyqtgraph? I just need to be able to select points on two image plots side by side and display where the point selections were.
Currently I have the images in ImageViews and I tried doing it with imv.scene.sigMouseClicked.connect(mouse_click), but in mouse_click(evt), evt.pos(), evt.scenePos(), and evt.screenPos() all gave coordinates that weren't in the image's coordinates. I also played around with doing the point selection with ROI free handles (since I could get the correct coordinates from those), but it doesn't seem like you can color the handles; that isn't a total deal-breaker, but I was wondering if there was a better option. Is there a better way to do this?
Edit:
The answer was great; I used it to make this pile of spaghetti:
https://github.com/xkstein/ManualAlign
Figured I'd link it in case someone was looking for something similar and didn't want the hassle of coding a new one from scratch.
Your question is unclear about how you want the program to match the points, so here I provide a simple solution that lets you (1) show an image and (2) add points to the image.
The basic idea is to use a pg.GraphicsLayoutWidget, then add a pg.ImageItem and a pg.ScatterPlotItem, and each mouse click adds a point to the ScatterPlotItem. Code:
import sys
from PyQt5 import QtCore
from PyQt5.QtCore import Qt
from PyQt5.QtWidgets import QApplication, QMainWindow, QWidget, QHBoxLayout
import pyqtgraph as pg
import cv2
pg.setConfigOption('background', 'w')
pg.setConfigOption('foreground', 'k')
class ImagePlot(pg.GraphicsLayoutWidget):
    def __init__(self):
        super(ImagePlot, self).__init__()
        self.p1 = pg.PlotItem()
        self.addItem(self.p1)
        self.p1.vb.invertY(True)  # Images need an inverted Y axis
        # Use a ScatterPlotItem to draw points
        self.scatterItem = pg.ScatterPlotItem(
            size=10,
            pen=pg.mkPen(None),
            brush=pg.mkBrush(255, 0, 0),
            hoverable=True,
            hoverBrush=pg.mkBrush(0, 255, 255)
        )
        self.scatterItem.setZValue(2)  # Ensure the scatter item is always on top
        self.points = []  # Record points
        self.p1.addItem(self.scatterItem)

    def setImage(self, image_path, size):
        self.p1.clear()
        self.p1.addItem(self.scatterItem)
        # pg.ImageItem takes an image array as input.
        # I use OpenCV to load the image; you can replace it with another package.
        image = cv2.imread(image_path, 1)
        # Resize the image to a fixed size
        image = cv2.resize(image, size)
        self.image_item = pg.ImageItem(image)
        self.image_item.setOpts(axisOrder='row-major')
        self.p1.addItem(self.image_item)

    def mousePressEvent(self, event):
        point = self.p1.vb.mapSceneToView(event.pos())  # map the click to view coordinates
        # Get the pixel position of the mouse click
        x, y = int(point.x()), int(point.y())
        self.points.append([x, y])
        self.scatterItem.setPoints(pos=self.points)
        super().mousePressEvent(event)
if __name__ == "__main__":
    QApplication.setAttribute(Qt.AA_EnableHighDpiScaling)
    app = QApplication([])
    win = QMainWindow()
    central_win = QWidget()
    layout = QHBoxLayout()
    central_win.setLayout(layout)
    win.setCentralWidget(central_win)
    image_plot1 = ImagePlot()
    image_plot2 = ImagePlot()
    layout.addWidget(image_plot1)
    layout.addWidget(image_plot2)
    image_plot1.setImage('/home/think/image1.png', (310, 200))
    image_plot2.setImage('/home/think/image2.jpeg', (310, 200))
    # You can access the selected points via image_plot1.points
    win.show()
    if (sys.flags.interactive != 1) or not hasattr(QtCore, "PYQT_VERSION"):
        QApplication.instance().exec_()
The result looks like:
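If you then need the corresponding selections for registration, here is a small sketch of how the recorded clicks could be paired (not part of the original answer; it assumes the user clicked the same number of points, in the same order, on both images):
# Pair up the clicks recorded on the two plots
pairs = list(zip(image_plot1.points, image_plot2.points))
for (x1, y1), (x2, y2) in pairs:
    print(f"image1 ({x1}, {y1})  <->  image2 ({x2}, {y2})")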
I read a byte array of size: height*width*3 (3=RGB) that represents an image. This is raw data that I receive from a USB camera.
I was able to display and save it using PIL on this thread. Now I'm trying to display it on a PyQt5 window.
I have tried using QLabel.setPixmap() but it seems I can not create a valid pixel map.
Failed attempt reading the byte array:
from PyQt5.QtGui import QPixmap
from PyQt5.QtCore import QByteArray
from PyQt5.QtWidgets import QLabel
self.imgLabel = QLabel()
pixmap = QPixmap()
loaded = pixmap.loadFromData(QByteArray(img)) # img is a byte array of size: h*w*3
self.imgLabel.setPixmap(pixmap)
In this example loaded returns False, so I know imgLabel.setPixmap will not work, but I don't know how to debug further to find out why the loading failed.
A second failed attempt trying to use PIL library:
import PIL.Image
import PIL.ImageQt
pImage = PIL.Image.fromarray(RGB) # RGB is a numpy array of the data in img
qtImage = PIL.ImageQt.ImageQt(pImage)
pixmap = QPixmap.fromImage(qtImage)
self.imgLabel.setPixmap(pixmap)
In this example the application crashes when I'm running: self.imgLabel.setPixmap(pixmap), so again, I'm not sure how to debug further.
Any help will be appreciated!
To get a QPixmap from the numpy array you could create a QImage first and use that to create the QPixmap. For example:
from PyQt5 import QtCore, QtWidgets, QtGui
import numpy as np
# generate np array of (r, g, b) triplets with dtype uint8
height = width = 255
RGBarray = np.array([[r % 256, c % 256, -c % 256] for r in range(height) for c in range(width)], dtype=np.uint8)
app = QtWidgets.QApplication([])
label = QtWidgets.QLabel()
# create QImage from numpy array
image = QtGui.QImage(bytes(RGBarray), width, height, 3*width, QtGui.QImage.Format_RGB888)
pixmap = QtGui.QPixmap.fromImage(image)
label.setPixmap(pixmap)
label.show()
app.exec()
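The same idea can be applied to the raw camera buffer from the question (a sketch, assuming img holds the h*w*3 RGB bytes and w, h are the frame dimensions). QPixmap.loadFromData fails there because it expects data in an encoded format such as PNG or JPEG, not raw pixel triplets:
# Sketch: wrap the raw RGB bytes directly in a QImage (no PNG/JPEG decoding involved)
qimage = QtGui.QImage(bytes(img), w, h, 3 * w, QtGui.QImage.Format_RGB888)
# QPixmap.fromImage copies the data, so the temporary buffer need not be kept alive
self.imgLabel.setPixmap(QtGui.QPixmap.fromImage(qimage))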
I have an image that contains tiles. I want to make a QPixmap of each tile within the larger QPixmap. I thought QPixmap's copy(x, y, width, height) method would do this for me, but it must be copying the entire image, not just the rectangle defined by the arguments, and is thus consuming WAY too much memory.
An example below illustrates the problem. My png happens to be 3400x3078 pixels; dividing by the 57 rows and 50 columns, the minipixmaps should each be 68x54 pixels, but they are obviously using much more memory than that.
What am I missing?
from PyQt5.QtGui import QPixmap
from PyQt5.QtWidgets import QApplication
import os, sys
app = QApplication(sys.argv)
file = r"/path/to/image/grid-of-tiles.png"
# image I'm using has these many rows and columns of tiles
r = 57
c = 50
pixmap = QPixmap(file)
# the tiles are known to divide evenly
width = pixmap.width() // c
height = pixmap.height() // r
# after executing the below, mem use climbs north of 9GB!
# I was expecting on the order of twice the original image's size
minipixmaps = [pixmap.copy(col*width, row*height, width, height) for row in range(r) for col in range(c)]
Thanks to eyllanesc's comment. QImage does seem better behaved:
from PyQt5.QtGui import QPixmap, QImage
from PyQt5.QtWidgets import QApplication
import os, sys
app = QApplication(sys.argv)
file = r"/path/to/image/grid-of-tiles.png"
# image I'm using has these many rows and columns of tiles
r = 57
c = 50
mosaic = QImage(file) # QImage instead of QPixmap
# the tiles are known to divide evenly
width = mosaic.width() // c
height = mosaic.height() // r
# much less memory!
minipixmaps = [QPixmap.fromImage(mosaic.copy(col*width, row*height, width, height)) for row in range(r) for col in range(c)]
Despite not being a proficient GUI programmer, I figured out how to use the pyqtgraph module's ImageView widget to display an image that I can pan/zoom and click on to get precise pixel coordinates. The complete code is given below. The only problem is that ImageView can apparently only display a single-channel (monochrome) image.
My question: How do I do EXACTLY the same thing as this program (ignoring histogram, norm, and ROI features, which I don't really need), but with the option to display a true color image (e.g., the original JPEG photo)?
import numpy as np
from pyqtgraph.Qt import QtCore, QtGui
import pyqtgraph as pg
import matplotlib.image as mpimg
# Load image from disk and reorient it for viewing
fname = 'R0000187.JPG' # This can be any photo image file
photo=np.array(mpimg.imread(fname))
photo = photo.transpose()
# select for red color and extract as monochrome image
img = photo[0,:,:] # WHAT IF I WANT TO DISPLAY THE ORIGINAL RGB IMAGE?
# Create app
app = QtGui.QApplication([])
## Create window with ImageView widget
win = QtGui.QMainWindow()
win.resize(1200,800)
imv = pg.ImageView()
win.setCentralWidget(imv)
win.show()
win.setWindowTitle(fname)
## Display the data
imv.setImage(img)
def click(event):
    event.accept()
    pos = event.pos()
    print(int(pos.x()), int(pos.y()))
imv.getImageItem().mouseClickEvent = click
## Start Qt event loop unless running in interactive mode.
if __name__ == '__main__':
    import sys
    if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
        QtGui.QApplication.instance().exec_()
pyqtgraph.ImageView does support RGB/RGBA images. For example:
import numpy as np
import pyqtgraph as pg
data = np.random.randint(255, size=(100, 100, 3))
pg.image(data)
..and if you want to display the exact image data without automatic level adjustment:
pg.image(data, levels=(0, 255))
As pointed out by Luke, ImageView() does display RGB, provided the correct array shape is passed. In my sample program, I should have used photo.transpose([1,0,2]) to keep the RGB in the last dimension rather than just photo.transpose(). When ImageView is confronted with an array of dimension (3, W, H), it treats the array as a video consisting of 3 monochrome images, with a slider at the bottom to select the frame.
(Corrected to incorporate followup comment by Luke, below)
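For reference, a minimal sketch of that correction applied to the script above (only the changed lines; fname, imv, and the imports are as defined earlier):
# Keep RGB in the last dimension: (W, H, 3) instead of (3, W, H)
photo = np.array(mpimg.imread(fname))
imv.setImage(photo.transpose([1, 0, 2]), levels=(0, 255))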