I want to set up an image stream from my Raspberry Pi to my server.
So I set up the network stream described at http://picamera.readthedocs.io/en/release-1.12/recipes1.html#streaming-capture.
This worked well, but now I want to save the captured image, so I modified the server script:
import io
import socket
import struct
from PIL import Image

# Start a socket listening for connections on 0.0.0.0:8000 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 8000))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Rewind the stream, open it as an image with PIL and do some
        # processing on it
        image_stream.seek(0)
        image = Image.open(image_stream)
        print('Image is %dx%d' % image.size)
        image.verify()
        print('Image is verified')
        im = Image.new("RGB", (640, 480), "black")  # the saving part
        im = image.copy()
        im.save("./img/test.jpg", "JPEG")
finally:
    connection.close()
    server_socket.close()
But it returns the following error:
Traceback (most recent call last):
  File "stream.py", line 33, in <module>
    im = image.copy()
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 781, in copy
    self.load()
  File "/usr/lib/python2.7/dist-packages/PIL/ImageFile.py", line 172, in load
    read = self.fp.read
AttributeError: 'NoneType' object has no attribute 'read'
How can I fix this?
I don't have a Raspberry Pi, but I decided to see if I could reproduce the problem anyway. For input I just used an image file on disk, to eliminate all the socket stuff, and sure enough I got exactly the same error you encountered. (Note: IMO you should have done this simplification yourself and posted an MCVE illustrating the problem; see How to create a Minimal, Complete, and Verifiable example in the SO Help Center.)
To get the problem to go away I added a call to the image.load() method immediately after the Image.open() statement and things started working. Not only was the error gone, but the output file seemed fine, too.
Here's my simple test code with the fix indicated:
import io
import os
from PIL import Image

image_filename = 'pillow_test.jpg'
image_len = os.stat(image_filename).st_size

image_stream = io.BytesIO()
with open(image_filename, 'rb') as image_file:
    image_stream.write(image_file.read(image_len))

image_stream.seek(0)
image = Image.open(image_stream)
image.load()  # <======================== ADDED LINE
print('Image is %dx%d' % image.size)
image.verify()
print('Image is verified')
im = Image.new("RGB", (640, 480), "black")  # the saving part
im = image.copy()
im.save("pillow_test_out.jpg", "JPEG")
print('image written')
The clue was this passage from the pillow documentation for the PIL.Image.open() function:
This is a lazy operation; this function identifies the file, but the file
remains open and the actual image data is not read from the file until you try
to process the data (or call the load() method).
Emphasis mine. You would think the image.verify() would make this unnecessary because it seems like verifying the "file" would require loading the image data in order to check its contents (according to that method's own documentation, which claims it "verifies the contents of a file"). My guess is this is likely a bug and you should report it.
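For completeness, the verify() documentation also says that if you need to load the image after using that method, you must reopen the image file. So an alternative workflow, sketched here by reusing the image_stream from the code above, is to verify first, then rewind and reopen before doing the copy/save:

image_stream.seek(0)
image = Image.open(image_stream)
image.verify()                 # raises an exception if the data is broken
print('Image is verified')

image_stream.seek(0)           # rewind and reopen, as the docs require after verify()
image = Image.open(image_stream)
print('Image is %dx%d' % image.size)
image.save("pillow_test_out.jpg", "JPEG")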
I am using image compression to reduce the image size. When submitting the POST request, I am not getting any error, but I can't figure out why the images do not get saved. Here is my code:
#app.post("/post_ads")
async def create_upload_files(title: str = Form(),body: str = Form(),
db: Session = Depends(get_db), files: list[UploadFile] = File(description="Multiple files as UploadFile")):
for file in files:
im = Image.open(file.file)
im = im.convert("RGB")
im_io = BytesIO()
im = im.save(im_io, 'JPEG', quality=50)
PIL.Image.open() takes as its fp argument the following:
fp – A filename (string), pathlib.Path object or a file object. The
file object must implement file.read(), file.seek(), and
file.tell() methods, and be opened in binary mode.
Using a BytesIO stream, you would need something like the below (as shown on the client side of this answer):
Image.open(io.BytesIO(file.file.read()))
However, you don't really have to use an in-memory bytes buffer, as you can get the actual file object using the .file attribute of UploadFile. As per the documentation:
file: A SpooledTemporaryFile (a file-like object).
This is the actual Python file that you can pass directly to other
functions or libraries that expect a "file-like" object.
Example - Saving image to disk:
# ...
from fastapi import HTTPException
from PIL import Image

@app.post("/upload")
def upload(file: UploadFile = File()):
    try:
        im = Image.open(file.file)
        if im.mode in ("RGBA", "P"):
            im = im.convert("RGB")
        im.save('out.jpg', 'JPEG', quality=50)
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        file.file.close()
        im.close()
Example - Saving image to an in-memory bytes buffer (see this answer):
# ...
from fastapi import HTTPException
from PIL import Image

@app.post("/upload")
def upload(file: UploadFile = File()):
    try:
        im = Image.open(file.file)
        if im.mode in ("RGBA", "P"):
            im = im.convert("RGB")
        buf = io.BytesIO()
        im.save(buf, 'JPEG', quality=50)
        # to get the entire bytes of the buffer use:
        contents = buf.getvalue()
        # or, to read from `buf` (which is a file-like object), call this first:
        buf.seek(0)  # to rewind the cursor to the start of the buffer
    except Exception:
        raise HTTPException(status_code=500, detail='Something went wrong')
    finally:
        file.file.close()
        buf.close()
        im.close()
For more details and code examples on how to upload files/images using FastAPI, please have a look at this answer and this answer. Also, please have a look at this answer for more information on defining your endpoint with def or async def.
I assume you are writing to a BytesIO to get an "in memory" JPEG without slowing yourself down by writing to disk and cluttering your filesystem.
If so, you want:
from PIL import Image
from io import BytesIO

im = Image.open(file.file)
im = im.convert("RGB")
im_io = BytesIO()
# create in-memory JPEG in RAM (not disk)
im.save(im_io, 'JPEG', quality=50)
# get the JPEG image bytes in a variable called JPEG
JPEG = im_io.getvalue()
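If the goal is to send the compressed JPEG straight back to the client rather than keep it in a variable, one option looks roughly like this (a sketch only; the "/compress" route name and the use of fastapi.Response are my additions, not from the question):

from io import BytesIO

from fastapi import FastAPI, File, Response, UploadFile
from PIL import Image

app = FastAPI()

@app.post("/compress")  # hypothetical route name
def compress(file: UploadFile = File()):
    im = Image.open(file.file).convert("RGB")
    im_io = BytesIO()
    im.save(im_io, 'JPEG', quality=50)
    # return the in-memory JPEG bytes directly as the response body
    return Response(content=im_io.getvalue(), media_type="image/jpeg")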
So I'm trying to decode a QR code image using code from this S.O. answer. Here's the adapted code:
import cv2

# Name of the QR Code Image file
filename = r"C:\temp\2021-12-14_162414.png"
# read the QRCODE image
image = cv2.imread(filename)
# initialize the cv2 QRCode detector
detector = cv2.QRCodeDetector()
# detect and decode
data, vertices_array, binary_qrcode = detector.detectAndDecode(image)
# if there is a QR code
# print the data
if vertices_array is not None:
    print("QRCode data:")
    print(data)
else:
    print("There was some error")
(This is the whole program; I was still experimenting.)
The PNG file itself is really small, just 43 KB, with a resolution of 290x290 (24 bpp), containing just the QR code.
However, I keep getting the error:
Traceback (most recent call last):
  File "C:/Repos/tesqr/decod-cv2.py", line 10, in <module>
    data, vertices_array, binary_qrcode = detector.detectAndDecode(image)
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\core\src\alloc.cpp:73: error: (-4:Insufficient memory) Failed to allocate 54056250000 bytes in function 'cv::OutOfMemoryError'
Why is alloc.cpp asking for 54 GB of RAM?
I'm new to OpenCV, so please help me troubleshoot what went wrong.
The library I'm using is:
$ pip3 freeze | grep opencv
opencv-contrib-python-headless==4.5.4.60
The input image (the 290x290 PNG described above) is attached to the original post.
Short Answer:
Try WeChatQRCode
Long Answer:
There are several open memory issues about decoding with QRCodeDetector; I hope it will be fixed in a future version. In the meantime you can try WeChatQRCode, which is also available in cv2.
WeChatQRCode includes two CNN-based models: an object detection model and a super-resolution model. The object detection model is applied to detect the QRCode with a bounding box, and the super-resolution model is applied to zoom in on the QRCode when it is small.
Your code modified:
import cv2

# Name of the QR Code Image file
filename = "2021-12-14_162414.png"
# read the QRCODE image
image = cv2.imread(filename)
# initialize the cv2 WeChat QRCode detector
detector = cv2.wechat_qrcode_WeChatQRCode(
    detector_prototxt_path="detect.prototxt",
    detector_caffe_model_path="detect.caffemodel",
    super_resolution_prototxt_path="sr.prototxt",
    super_resolution_caffe_model_path="sr.caffemodel")
# detect and decode
data, vertices_array = detector.detectAndDecode(image)
# if there is a QR code
# print the data
if vertices_array is not None:
    print("QRCode data:")
    print(data)
else:
    print("There was some error")
Output:
QRCode data:
('PK\x03\x04\n',)
As you can see, it needs prototxt and caffemodel files. You can find them here.
Context
I have made a simple web app for uploading content to a blog. The front sends AJAX requests (using FormData) to the backend which is Bottle running on Python 3.7. Text content is saved to a MySQL database and images are saved to a folder on the server. Everything works fine.
Image processing and PIL/Pillow
Now, I want to enable processing of uploaded images to standardise them (I need them all resized and/or cropped to 700x400px).
I was hoping to use Pillow for this. My problem is creating a PIL Image object from the file object in Bottle. I cannot initialise a valid Image object.
Code
# AJAX sends request to this route
@post('/update')
def update():
    # Form data
    title = request.forms.get("title")
    body = request.forms.get("body")
    image = request.forms.get("image")
    author = request.forms.get("author")

    # Image upload
    file = request.files.get("file")
    if file:
        extension = file.filename.split(".")[-1]
        if extension not in ('png', 'jpg', 'jpeg'):
            return {"result": 0, "message": "File Format Error"}
        save_path = "my/save/path"
        file.save(save_path)
The problem
This all works as expected, but I cannot create a valid Image object with Pillow for processing. I even tried reloading the saved image using the save path, but this did not work either.
Other attempts
The code below did not work. It caused an internal server error, though I am having trouble setting up more detailed Python debugging.
path = save_path + "/" + file.filename
image_data = open(path, "rb")
image = Image.open(image_data)
When logged manually, the path is a valid relative URL ("../domain-folder/images") and I have checked that I am definitely importing PIL (Pillow) correctly using PIL.PILLOW_VERSION.
I tried adapting this answer:
image = Image.frombytes('RGBA', (128,128), image_data, 'raw')
However, I won’t know the size until I have created the Image object. I also tried using io:
image = Image.open(io.BytesIO(image_data))
This did not work either. In each case, it is only the line trying to initialise the Image object that causes problems.
Summary
The Bottle documentation says the uploaded file is a file-like object, but I am not having much success in creating an Image object that I can process.
How should I go about this? I do not have a preference about processing before or after saving. I am comfortable with the processing, it is initialising the Image object that is causing the problem.
Edit - Solution
I got this to work by adapting the answer from eatmeimadanish. I had to use a io.BytesIO object to save the file from Bottle, then load it with Pillow from there. After processing, it could be saved in the usual way.
obj = io.BytesIO()
file.save(obj) # This saves the file retrieved by Bottle to the BytesIO object
path = save_path + "/" + file.filename
# Image processing
im = Image.open(obj) # Reopen the object with PIL
im = im.resize((700,400))
im.save(path, optimize=True)
I found this from the Pillow documentation about a different function that may also be of use.
PIL.Image.frombuffer(mode, size, data, decoder_name='raw', *args)
Note that this function decodes pixel data only, not entire images.
If you have an entire image file in a string, wrap it in a BytesIO object, and use open() to load it.
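In other words, the whole-file case from that note looks something like this (a minimal sketch; the file name is just a placeholder):

import io
from PIL import Image

with open("some_image.png", "rb") as f:       # placeholder path
    file_bytes = f.read()                     # the entire image file as bytes

image = Image.open(io.BytesIO(file_bytes))    # wrap the bytes and open as usual
print(image.size)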
Use StringIO instead.
import base64

from PIL import Image
try:
    import cStringIO as StringIO
except ImportError:
    import StringIO

s = StringIO.StringIO()
# save your in-memory file to this instead of a regular file
file = request.files.get("file")
if file:
    extension = file.filename.split(".")[-1]
    if extension not in ('png', 'jpg', 'jpeg'):
        return {"result": 0, "message": "File Format Error"}
    file.save(s)
    im = Image.open(s)
    im = im.resize((700, 400))
    im.save(s, 'png', optimize=True)
    s64 = base64.b64encode(s.getvalue())
From what I understand, you're trying to resize the image after it has been saved locally (note that you could also do the resize before saving). If this is what you want to achieve, you can open the image directly with Pillow; it does the job for you, and you do not have to call open(path, "rb") yourself:
image = Image.open(path)
image.resize((700,400)).save(path)
I am using the Pillow fork of PIL and keep receiving the error
OSError: cannot identify image file <_io.BytesIO object at 0x103a47468>
when trying to open an image. I am using virtualenv with python 3.4 and no installation of PIL.
I have tried to find a solution to this based on others encountering the same problem, however, those solutions did not work for me. Here is my code:
from PIL import Image
import io
# This portion is part of my test code
byteImg = Image.open("some/location/to/a/file/in/my/directories.png").tobytes()
# Non test code
dataBytesIO = io.BytesIO(byteImg)
Image.open(dataBytesIO) # <- Error here
The image exists in the initial opening of the file and it gets converted to bytes. This appears to work for almost everyone else but I can't figure out why it fails for me.
EDIT:
dataBytesIO.seek(0)
does not work as a solution (I tried it), since I'm not saving the image via a stream; I'm just instantiating the BytesIO with the data, so (if I'm thinking of this correctly) the position should already be at 0.
(This solution is from the author himself. I have just moved it here.)
SOLUTION:
# This portion is part of my test code
byteImgIO = io.BytesIO()
byteImg = Image.open("some/location/to/a/file/in/my/directories.png")
byteImg.save(byteImgIO, "PNG")
byteImgIO.seek(0)
byteImg = byteImgIO.read()
# Non test code
dataBytesIO = io.BytesIO(byteImg)
Image.open(dataBytesIO)
The problem was with the way that Image.tobytes() was returning the byte object. It appeared to be invalid data, and the 'encoding' couldn't be anything other than raw, which still appeared to output the wrong data, since almost every byte appeared in the format \xff\. However, saving the image into a BytesIO and using the .read() function to read the entire image gave the correct bytes, which could actually be used later when needed.
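To put that difference into code (a sketch, reusing the path from the question): tobytes() returns raw, headerless pixel data, while save() writes a complete encoded file that Image.open() can identify again.

import io
from PIL import Image

im = Image.open("some/location/to/a/file/in/my/directories.png")

raw_pixels = im.tobytes()        # raw pixel data, no PNG header -- Image.open() cannot identify this

encoded = io.BytesIO()
im.save(encoded, "PNG")          # a complete PNG file (header + compressed data)
reopened = Image.open(io.BytesIO(encoded.getvalue()))   # works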
image = Image.open(io.BytesIO(decoded))
# File "C:\Users\14088\anaconda3\envs\tensorflow\lib\site-packages\PIL\Image.py", line 2968, in open
# "cannot identify image file %r" % (filename if filename else fp)
# PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x000002B733BB11C8>
===
Here is how I fixed it:
message = request.get_json(force=True)
encoded = message['image']

# https://stackoverflow.com/questions/26070547/decoding-base64-from-post-to-use-in-pil
# Removing the extra "data:image/...;base64," prefix is very important.
# If it is not removed, the following Image.open() call fails with:
#   File "C:\Work\SVU\950_SVU_DL_TF\sec07_TF_Flask06_09\32_KerasFlask06_VisualD3\32_predict_app.py", line 69, in predict
#     image = Image.open(io.BytesIO(decoded))
#   File "C:\Users\14088\anaconda3\envs\tensorflow\lib\site-packages\PIL\Image.py", line 2968, in open
#     "cannot identify image file %r" % (filename if filename else fp)
#   PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x000002B733BB11C8>
image_data = re.sub('^data:image/.+;base64,', '', encoded)
decoded = base64.b64decode(image_data)
image = Image.open(io.BytesIO(decoded))
# equivalent one-liner:
# image = Image.open(BytesIO(base64.b64decode(image_data)))

processed_image = preprocess_image(image, target_size=(224, 224))
prediction = model.predict(processed_image).tolist()

response = {
    'prediction': {
        'dog': prediction[0][0],
        'cat': prediction[0][1]
    }
}
print('response:', response)
return jsonify(response)
In some cases the same error happens when you are dealing with a raw image file such as a CR2. Example: http://www.rawsamples.ch/raws/canon/g10/RAW_CANON_G10.CR2
When you try to run:
byteImg = Image.open("RAW_CANON_G10.CR2")
you will get this error:
OSError: cannot identify image file 'RAW_CANON_G10.CR2'
So you need to convert the image using rawkit first. Here is an example of how to do it:
from io import BytesIO
from PIL import Image, ImageFile
import numpy
from rawkit import raw

def convert_cr2_to_jpg(raw_image):
    raw_image_process = raw.Raw(raw_image)
    buffered_image = numpy.array(raw_image_process.to_buffer())
    if raw_image_process.metadata.orientation == 0:
        jpg_image_height = raw_image_process.metadata.height
        jpg_image_width = raw_image_process.metadata.width
    else:
        jpg_image_height = raw_image_process.metadata.width
        jpg_image_width = raw_image_process.metadata.height
    jpg_image = Image.frombytes('RGB', (jpg_image_width, jpg_image_height), buffered_image)
    return jpg_image

byteImg = convert_cr2_to_jpg("RAW_CANON_G10.CR2")
Code credit goes to mateusz-michalik on GitHub (https://github.com/mateusz-michalik/cr2-to-jpg/blob/master/cr2-to-jpg.py).
When reading DICOM files, this problem might be caused by DICOM compression.
Make sure both gdcm and pydicom are installed.
GDCM is usually the one that's more difficult to install. The easiest way to install it is:
conda install -c conda-forge gdcm
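Once both are installed, reading a compressed DICOM and handing the pixels to Pillow can look roughly like this (a sketch; "scan.dcm" is a placeholder file name):

import pydicom
from PIL import Image

ds = pydicom.dcmread("scan.dcm")   # placeholder path to a (possibly compressed) DICOM file
arr = ds.pixel_array               # decompression happens here, using gdcm if needed
image = Image.fromarray(arr)       # convert the pixel array to a PIL image
image.show()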
When dealing with URLs, this error can arise from a wrong extension on the downloaded file or simply from a corrupted file.
To avoid that, use a try/except block so your app doesn't crash and can continue its job.
In the except part, you can retrieve the file in question for analysis.
A snippet:
for url in urls:
    with closing(urllib.request.urlopen(url)) as f:
        try:
            img = Image(f, 30*mm, 30*mm)
            d_img.append(img)
        except Exception as e:
            print(url)  # here you get the file causing the exception
            print(e)
Here is a related answer.
The image file itself might be corrupted. So if you are processing a considerable number of image files, simply enclose the line that processes each image file in a try/except statement, as sketched below.
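A minimal sketch of that pattern (image_paths is a hypothetical list of file paths, not from the original answer):

from PIL import Image, UnidentifiedImageError

for path in image_paths:               # hypothetical list of image file paths
    try:
        with Image.open(path) as im:
            im.verify()                # raises if the file is broken
    except (UnidentifiedImageError, OSError) as e:
        print('Skipping corrupted file %s: %s' % (path, e))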
from PIL import Image
image = Image.open("image.jpg")
file_path = io.BytesIO();
image.save(file_path,'JPEG');
image2 = Image.open(file_path.getvalue());
I get the error TypeError: embedded NUL character on the last statement (the Image.open call) when running the program.
What is the correct way to open a file from streams?
http://effbot.org/imagingbook/introduction.htm#more-on-reading-images
from PIL import Image
import StringIO
buffer = StringIO.StringIO()
buffer.write(open('image.jpeg', 'rb').read())
buffer.seek(0)
image = Image.open(buffer)
print image
# <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=800x600 at 0x7FE2EEE2B098>
# if we try open again
image = Image.open(buffer)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 2028, in open
    raise IOError("cannot identify image file")
IOError: cannot identify image file
Make sure you call buffer.seek(0) before reading any StringIO objects. Otherwise you'll be reading from the end of the buffer, which will look like an empty file and is likely causing the error you're seeing.
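A minimal illustration of the cursor behaviour, using plain bytes instead of an image:

from io import BytesIO

buf = BytesIO()
buf.write(b"not an image, just a demo")
print(buf.read())   # b'' -- the cursor sits at the end after writing
buf.seek(0)         # rewind first
print(buf.read())   # b'not an image, just a demo'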
Using BytesIO is much simpler; it took me a while to figure out. This allows you to read and write to zip files, for example (see the sketch after the code below).
from PIL import Image
from io import BytesIO
# bytes of a simple 2x2 gif file
gif_bytes = b'\x47\x49\x46\x38\x39\x61\x02\x00\x02\x00\x80\x00\x00\x00\xFF\xFF\xFF\x21\xF9\x04\x00\x00\x00\x00\x00\x2C\x00\x00\x00\x00\x02\x00\x02\x00\x00\x02\x03\x44\x02\x05\x00\x3B'
gif_bytes_io = BytesIO() # or io.BytesIO()
# store the gif bytes to the IO and open as image
gif_bytes_io.write(gif_bytes)
image = Image.open(gif_bytes_io)
# optional proof of concept:
# image.show()
# save as png through a stream
png_bytes_io = BytesIO() # or io.BytesIO()
image.save(png_bytes_io, format='PNG')
print(png_bytes_io.getvalue()) # outputs the byte stream of the png
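For example, the zip-file use case mentioned above could look roughly like this (a sketch reusing png_bytes_io and Image from the block above; the archive member name is made up):

import zipfile

zip_bytes_io = BytesIO()
with zipfile.ZipFile(zip_bytes_io, 'w') as zf:
    # store the in-memory PNG inside an in-memory zip archive
    zf.writestr('image.png', png_bytes_io.getvalue())

# read it back out again without touching the disk
with zipfile.ZipFile(BytesIO(zip_bytes_io.getvalue())) as zf:
    image_again = Image.open(BytesIO(zf.read('image.png')))
    print(image_again.size)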