So I'm trying to decode a QR code image using code from this S.O. answer. Here's the adapted code:
import cv2

# Name of the QR Code Image file
filename = r"C:\temp\2021-12-14_162414.png"
# read the QRCODE image
image = cv2.imread(filename)
# initialize the cv2 QRCode detector
detector = cv2.QRCodeDetector()
# detect and decode
data, vertices_array, binary_qrcode = detector.detectAndDecode(image)
# if there is a QR code
# print the data
if vertices_array is not None:
    print("QRCode data:")
    print(data)
else:
    print("There was some error")
(This is the whole program; I was still experimenting.)
The PNG file itself is really small, just 43 KB, with a resolution of 290x290 (24 bpp), containing just the QR code.
However, I keep getting the error:
Traceback (most recent call last):
File "C:/Repos/tesqr/decod-cv2.py", line 10, in <module>
data, vertices_array, binary_qrcode = detector.detectAndDecode(image)
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\core\src\alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 54056250000 bytes in function 'cv::OutOfMemoryError'
Why is alloc.cpp asking for 54 GB of RAM ???
I'm new to OpenCV, so please help me troubleshoot what went wrong.
The library I'm using is:
$ pip3 freeze | grep opencv
opencv-contrib-python-headless==4.5.4.60
the input image:
Short Answer:
Try WeChatQRCode
Long Answer:
There are several open memory issues about decoding with QRCodeDetector. I hope it will be fixed in future versions. Meanwhile, you can try WeChatQRCode, which is also available in cv2.
WeChatQRCode includes two CNN-based models: an object detection model and a super-resolution model. The object detection model is applied to detect the QR code and its bounding box; the super-resolution model is applied to zoom in on the QR code when it is small.
Your code modified:
import cv2

# Name of the QR Code Image file
filename = "2021-12-14_162414.png"
# read the QRCODE image
image = cv2.imread(filename)
# initialize the cv2 QRCode detector
detector = cv2.wechat_qrcode_WeChatQRCode(
    detector_prototxt_path="detect.prototxt",
    detector_caffe_model_path="detect.caffemodel",
    super_resolution_prototxt_path="sr.prototxt",
    super_resolution_caffe_model_path="sr.caffemodel")
# detect and decode
data, vertices_array = detector.detectAndDecode(image)
# if there is a QR code
# print the data
if vertices_array is not None:
    print("QRCode data:")
    print(data)
else:
    print("There was some error")
Output:
QRCode data:
('PK\x03\x04\n',)
As you can see, it needs prototxt and caffemodel files. You can find them here.
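A hedged follow-up on the output: detectAndDecode here returns a tuple of decoded strings together with the corresponding corner points, so if an image contains more than one QR code you can iterate over the results along these lines:

texts, points_list = detector.detectAndDecode(image)
for text, points in zip(texts, points_list):
    print("decoded:", text)
    print("corners:", points)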
I'm trying to convert a HEIC file to JPEG, also importing all the metadata (like GPS info and other tags). Unfortunately, with the code below the conversion works, but no metadata is stored in the JPEG file that gets created.
Can anyone tell me what I need to add to the conversion method?
import pyheif
from PIL import Image

heif_file = pyheif.read("/transito/126APPLE_IMG_6272.HEIC")
image = Image.frombytes(
    heif_file.mode,
    heif_file.size,
    heif_file.data,
    "raw",
    heif_file.mode,
    heif_file.stride,
)
image.save("/transito/126APPLE_IMG_6272.JPEG", "JPEG")
Thanks, I found a solution; I hope it can help others:
import pyheif
import piexif
from PIL import Image

# Open the file
heif_file = pyheif.read(file_path_heic)

# Creation of the image
image = Image.frombytes(
    heif_file.mode,
    heif_file.size,
    heif_file.data,
    "raw",
    heif_file.mode,
    heif_file.stride,
)

# Retrieve the metadata
for metadata in heif_file.metadata or []:
    if metadata['type'] == 'Exif':
        exif_dict = piexif.load(metadata['data'])

# PIL rotates the image according to the EXIF info, so it's necessary to remove the
# orientation tag, otherwise the image will be rotated again (first by PIL, then by the viewer).
exif_dict['0th'][274] = 0
exif_bytes = piexif.dump(exif_dict)
image.save(file_path_jpeg, "JPEG", exif=exif_bytes)
HEIF to JPEG:
from PIL import Image
import pillow_heif

if __name__ == "__main__":
    pillow_heif.register_heif_opener()
    img = Image.open("any_image.heic")
    img.save("output.jpeg")
JPEG to HEIF:
from PIL import Image
import pillow_heif

if __name__ == "__main__":
    pillow_heif.register_heif_opener()
    img = Image.open("any_image.jpg")
    img.save("output.heic")
Rotation (EXIF or XMP) will be removed automatically when needed.
The call to register_heif_opener can be replaced by importing pillow_heif.HeifImagePlugin instead of pillow_heif.
Metadata can be edited in Pillow's "info" dictionary and will be saved when saving to HEIF.
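For illustration, a minimal sketch of inspecting and carrying over that "info" dictionary (the filename is illustrative and assumes pillow_heif is installed):

from PIL import Image
import pillow_heif

pillow_heif.register_heif_opener()

img = Image.open("photo.heic")
print(img.info.keys())   # typically includes 'exif' and/or 'xmp' when present
img.save("copy.heic")    # metadata carried in img.info is written back to the new HEIF file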
Here is another approach to convert iPhone HEIC images to JPG while preserving the EXIF data.
Python 3.9 (I'm on a Raspberry Pi 4, 64-bit)
Install pillow_heif (0.8.0)
Then run the following code and you'll find the EXIF data in the new JPEG image.
The trick is to get the dictionary information. No additional conversion is required.
This is sample code; build your own wrapper around it.
from PIL import Image
import pillow_heif

# open the image file
heif_file = pillow_heif.read_heif("/mnt/pictures/test/IMG_0001.HEIC")

# create the new image
image = Image.frombytes(
    heif_file.mode,
    heif_file.size,
    heif_file.data,
    "raw",
    heif_file.mode,
    heif_file.stride,
)

print(heif_file.info.keys())
dictionary = heif_file.info
exif_dict = dictionary['exif']
# debug
print(exif_dict)
image.save('/tmp/test000.JPG', "JPEG", exif=exif_dict)
I'm trying to write a script that gives me the dimensions of the cover art in an MP3 file. The furthest I've gotten is via Mutagen, doing:
import mutagen

audiofile = mutagen.File(wavefile, easy=False)
print(audiofile.tags)
but from that raw output, how can I extract the dimensions, like 400x400?
You can use stagger and PIL, e.g.:
import stagger, io, traceback
from PIL import Image

try:
    mp3 = stagger.read_tag('Menuetto.mp3')
    im = Image.open(io.BytesIO(mp3[stagger.id3.APIC][0].data))
    print(im.size)
    # (300, 300)
    # im.save("cover.jpg")  # save cover to file
except Exception:
    print(traceback.format_exc())
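Since the question mentions Mutagen, here is a minimal sketch of the equivalent using Mutagen's ID3/APIC frames (the filename is illustrative and assumes the MP3 actually carries an embedded cover):

import io
from mutagen.id3 import ID3
from PIL import Image

tags = ID3("song.mp3")
apic_frames = tags.getall("APIC")   # all attached-picture (cover art) frames
if apic_frames:
    im = Image.open(io.BytesIO(apic_frames[0].data))
    print(im.size)                  # e.g. (400, 400)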
I want to set up an image stream from my Raspberry Pi to my server.
So I would like to set up a network stream as described in http://picamera.readthedocs.io/en/release-1.12/recipes1.html#streaming-capture.
This worked well, but now I want to save the captured image.
(I modified the server script:)
import io
import socket
import struct
from PIL import Image

# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream, open it as an image with PIL and do some
        # processing on it
        image_stream.seek(0)
        image = Image.open(image_stream)
        print('Image is %dx%d' % image.size)
        image.verify()
        print('Image is verified')
        im = Image.new("RGB", (640, 480), "black")  # the saving part
        im = image.copy()
        im.save("./img/test.jpg", "JPEG")
finally:
    connection.close()
    server_socket.close()
But it returns the following error:
Traceback (most recent call last):
File "stream.py", line 33, in <module>
im = image.copy()
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 781, in copy
self.load()
File "/usr/lib/python2.7/dist-packages/PIL/ImageFile.py", line 172, in load
read = self.fp.read
AttributeError: 'NoneType' object has no attribute 'read'
How can I fix this?
I don't have a Raspberry Pi, but I decided to see if I could reproduce the problem anyway. For input I just created an image file on disk, to eliminate all the socket stuff. Sure enough, I got exactly the same error as you encountered. (Note: IMO you should have done this simplification yourself and posted an MCVE illustrating the problem; see How to create a Minimal, Complete, and Verifiable example in the SO Help Center.)
To get the problem to go away I added a call to the image.load() method immediately after the Image.open() statement and things started working. Not only was the error gone, but the output file seemed fine, too.
Here's my simple test code with the fix indicated:
import io
import os
from PIL import Image
image_filename = 'pillow_test.jpg'
image_len = os.stat(image_filename).st_size
image_stream = io.BytesIO()
with open(image_filename, 'rb') as image_file:
image_stream.write(image_file.read(image_len))
image_stream.seek(0)
image = Image.open(image_stream)
image.load() # <======================== ADDED LINE
print('Image is %dx%d' % image.size)
image.verify()
print('Image is verified')
im = Image.new("RGB", (640,480), "black") #the saving part
im = image.copy()
im.save("pillow_test_out.jpg","JPEG")
print('image written')
The clue was this passage from the pillow documentation for the PIL.Image.open() function:
This is a lazy operation; this function identifies the file, but the file
remains open and the actual image data is not read from the file until you try
to process the data (or call the load() method).
Emphasis mine. You would think the image.verify() would make this unnecessary because it seems like verifying the "file" would require loading the image data in order to check its contents (according to that method's own documentation, which claims it "verifies the contents of a file"). My guess is this is likely a bug and you should report it.
I am using the Pillow fork of PIL and keep receiving the error
OSError: cannot identify image file <_io.BytesIO object at 0x103a47468>
when trying to open an image. I am using virtualenv with python 3.4 and no installation of PIL.
I have tried to find a solution to this based on others encountering the same problem, however, those solutions did not work for me. Here is my code:
from PIL import Image
import io
# This portion is part of my test code
byteImg = Image.open("some/location/to/a/file/in/my/directories.png").tobytes()
# Non test code
dataBytesIO = io.BytesIO(byteImg)
Image.open(dataBytesIO) # <- Error here
The image exists in the initial opening of the file and it gets converted to bytes. This appears to work for almost everyone else but I can't figure out why it fails for me.
EDIT:
dataBytesIO.seek(0)
does not work as a solution (tried it) since I'm not saving the image via a stream, I'm just instantiating the BytesIO with data, therefore (if I'm thinking of this correctly) seek should already be at 0.
(This solution is from the author himself. I have just moved it here.)
SOLUTION:
# This portion is part of my test code
byteImgIO = io.BytesIO()
byteImg = Image.open("some/location/to/a/file/in/my/directories.png")
byteImg.save(byteImgIO, "PNG")
byteImgIO.seek(0)
byteImg = byteImgIO.read()
# Non test code
dataBytesIO = io.BytesIO(byteImg)
Image.open(dataBytesIO)
The problem was with the way that Image.tobytes() was returning the byte object. It appeared to be invalid data, and the 'encoding' couldn't be anything other than raw, which still appeared to output wrong data, since almost every byte appeared in the form \xff. However, saving the image into a BytesIO and using the .read() function to read the entire stream gave correct bytes that could actually be used later when needed.
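For illustration, a minimal sketch of the difference (the filename input.png is illustrative):

import io
from PIL import Image

img = Image.open("input.png")

# Image.tobytes() returns raw decoded pixels, not a PNG/JPEG file,
# so PIL cannot identify it when it is read back:
raw_bytes = img.tobytes()
try:
    Image.open(io.BytesIO(raw_bytes))
except OSError as e:
    print("raw pixel bytes are not an image file:", e)

# Saving into a BytesIO produces a real, self-describing PNG stream:
buf = io.BytesIO()
img.save(buf, "PNG")
buf.seek(0)
print(Image.open(buf).size)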
image = Image.open(io.BytesIO(decoded))
# File "C:\Users\14088\anaconda3\envs\tensorflow\lib\site-packages\PIL\Image.py", line 2968, in open
# "cannot identify image file %r" % (filename if filename else fp)
# PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x000002B733BB11C8>
===
Here is how I fixed it so that it works:
message = request.get_json(force=True)
encoded = message['image']
# https://stackoverflow.com/questions/26070547/decoding-base64-from-post-to-use-in-pil
#image_data = re.sub('^data:image/.+;base64,', '', message['image'])
image_data = re.sub('^data:image/.+;base64,', '', encoded)
# Removing the extra "data:image/...;base64," prefix is very important.
# If "data:image/...;base64," is not removed, the following line generates an error message:
# File "C:\Work\SVU\950_SVU_DL_TF\sec07_TF_Flask06_09\32_KerasFlask06_VisualD3\32_predict_app.py", line 69, in predict
# image = Image.open(io.BytesIO(decoded))
# File "C:\Users\14088\anaconda3\envs\tensorflow\lib\site-packages\PIL\Image.py", line 2968, in open
# "cannot identify image file %r" % (filename if filename else fp)
# PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x000002B733BB11C8>
# image = Image.open(BytesIO(base64.b64decode(image_data)))
decoded = base64.b64decode(image_data)
image = Image.open(io.BytesIO(decoded))
# return json.dumps({'result': 'success'}), 200, {'ContentType': 'application/json'}
#print('#app.route => image:')
#print()
processed_image = preprocess_image(image, target_size=(224, 224))
prediction = model.predict(processed_image).tolist()
#print('prediction:', prediction)
response = {
    'prediction': {
        'dog': prediction[0][0],
        'cat': prediction[0][1]
    }
}
print('response:', response)
return jsonify(response)
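For illustration, a self-contained sketch of the prefix-stripping step on its own (the data URI here is generated on the fly so the snippet runs as-is):

import base64, io, re
from PIL import Image

# Build a data URI from a tiny generated image so the example is self-contained
buf = io.BytesIO()
Image.new("RGB", (8, 8), "red").save(buf, "PNG")
data_uri = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

# Strip the "data:image/...;base64," prefix before decoding, as in the answer above
b64_payload = re.sub('^data:image/.+;base64,', '', data_uri)
image = Image.open(io.BytesIO(base64.b64decode(b64_payload)))
print(image.size)  # (8, 8)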
In some cases the same error happens when you are dealing with a raw image file such as CR2. Example: http://www.rawsamples.ch/raws/canon/g10/RAW_CANON_G10.CR2
When you try to run:
byteImg = Image.open("RAW_CANON_G10.CR2")
You will get this error:
OSError: cannot identify image file 'RAW_CANON_G10.CR2'
So you need to convert the image using rawkit first; here is an example of how to do it:
from io import BytesIO
from PIL import Image, ImageFile
import numpy
from rawkit import raw

def convert_cr2_to_jpg(raw_image):
    raw_image_process = raw.Raw(raw_image)
    buffered_image = numpy.array(raw_image_process.to_buffer())

    if raw_image_process.metadata.orientation == 0:
        jpg_image_height = raw_image_process.metadata.height
        jpg_image_width = raw_image_process.metadata.width
    else:
        jpg_image_height = raw_image_process.metadata.width
        jpg_image_width = raw_image_process.metadata.height

    jpg_image = Image.frombytes('RGB', (jpg_image_width, jpg_image_height), buffered_image)
    return jpg_image

byteImg = convert_cr2_to_jpg("RAW_CANON_G10.CR2")
Code credit goes to mateusz-michalik on GitHub (https://github.com/mateusz-michalik/cr2-to-jpg/blob/master/cr2-to-jpg.py).
When reading DICOM files, the problem might be caused by DICOM compression.
Make sure both gdcm and pydicom are installed.
GDCM is usually the one that's more difficult to install. The easiest way to install it is:
conda install -c conda-forge gdcm
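Once both packages are installed, a minimal sketch of reading a compressed DICOM (the filename is illustrative):

import pydicom

ds = pydicom.dcmread("scan.dcm")
arr = ds.pixel_array  # decompressing compressed transfer syntaxes requires a handler such as GDCM
print(arr.shape, arr.dtype)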
When dealing with URLs, this error can arise from a wrong extension on the downloaded file or simply from a corrupted file.
To avoid that, use a try/except block so your app doesn't crash and can continue its job.
In the except part, you can retrieve the file in question for analysis.
A snippet:
from contextlib import closing
import urllib.request

for url in urls:
    with closing(urllib.request.urlopen(url)) as f:
        try:
            # Image and mm here presumably come from ReportLab (platypus Image and lib.units mm)
            img = Image(f, 30*mm, 30*mm)
            d_img.append(img)
        except Exception as e:
            print(url)  # here you get the file causing the exception
            print(e)
Here is a related answer.
The image file itself might be corrupted. So if you process a considerable number of image files, simply enclose the line that processes each image file in a try/except statement.
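A minimal sketch of that pattern (the folder name is illustrative):

import os
from PIL import Image

for name in os.listdir("images"):
    path = os.path.join("images", name)
    try:
        with Image.open(path) as im:
            im.load()  # force a full decode so corrupt files fail here
    except OSError as e:
        print("skipping corrupt/unreadable file:", path, e)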
I want to convert a jpg file to png, but when I run this code:
from opencv import _cv
from opencv.highgui import cvSaveImage, cvLoadImage

cvSaveImage("bet.jpg", cvLoadImage("bet.jpg"))

if __name__ == '__main__':
    pass
It gives this error, which I don't understand:
Traceback (most recent call last):
File "convert.py", line 6, in <module>
cvSaveImage("bet.jpg",cvLoadImage("bet.jpg"))
File "/usr/lib/pymodules/python2.6/opencv/highgui.py", line 183, in cvSaveImage
return _highgui.cvSaveImage(*args)
RuntimeError: openCV Error:
Status=Null pointer
function name=cvGetMat
error message=NULL array pointer is passed
file_name=cxarray.cpp
line=2780
My picture is in the same folder as the source code, and the name of the image is bet.jpg.
Any ideas?
The best choice is pyopencv:
import pyopencv as cv
img = cv.imread('01.png')
cv.imshow('img-windows',img)
cv.waitKey(0)
cv.imwrite('01.png',img)
From the Python OpenCV documentation, the cv2 method for writing an image (and thus converting a JPEG to PNG) is:
Python: cv2.imwrite(filename, img[, params]) → retval
For my example:
import cv2
filename = 'pic.jpeg'
cam = cv2.VideoCapture(filename)
s, img = cam.read()
picName = 'pic.png'
cv2.imwrite(picName, img)
VideoCapture is nice and general, and works with videos, webcams and image files.
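For a plain file-to-file conversion you can also read and write the image directly; a minimal sketch (filenames are illustrative):

import cv2

img = cv2.imread("pic.jpeg")       # returns None if the file can't be read
if img is not None:
    cv2.imwrite("pic.png", img)    # the output format is chosen from the file extension
else:
    print("could not read pic.jpeg")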
I solved the problem: the image I took randomly from Google Images doesn't load. Maybe it's encrypted or something, I don't know. I tried it with other images, and it worked very well. So watch out when copying images :)