Convert bytes to image - Python

I receive bytes converted from an image sent by an Android client. How can I convert them back to an image in Python?
I use this code:
img = Image.open(BytesIO(data))
img.show()
img.save('D:\\1.jpg')
but I can only get an incomplete image like:
The image on the right is what I'm trying to send to Python, and the left is what I get. How can I solve this?
PS: the data is complete, because the image saves correctly when I check it from Eclipse.

I have already solved the problem. I made a stupid mistake: the buffer size I set for socket.recv was too short.
I set a longer buffer size, like 100000000, and then saving the image is easy:
f = open('D:\\2.jpg', 'wb')
f.write(data)
f.close()
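As a side note, a more robust pattern than guessing a huge buffer size is to loop over recv until the expected number of bytes has arrived. This sketch assumes the image length is known in advance (e.g. the sender transmits it first); the socket pair and 10 kB payload are made up for the demo:

```python
import socket

def recv_all(sock, length):
    """Receive exactly `length` bytes, looping until done."""
    chunks = []
    remaining = length
    while remaining > 0:
        chunk = sock.recv(min(remaining, 4096))
        if not chunk:  # peer closed before sending everything
            raise ConnectionError("socket closed before all data arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Quick demo over a local socket pair
a, b = socket.socketpair()
b.sendall(b"\xff\xd8" + b"\x00" * 9998)  # 10 kB stand-in for JPEG data
data = recv_all(a, 10000)
print(len(data))  # 10000
```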

Related

Convert mp3 song image from png to jpg

I have a set of many songs, some of which have png images in metadata, and I need to convert these to jpg.
I know how to convert png images to jpg in general, but I am currently accessing metadata using eyed3, which returns ImageFrame objects, and I don't know how to manipulate these. I can, for instance, access the image type with
print(img.mime_type)
which returns
image/png
but I don't know how to progress from here. Very naively I tried loading the image with OpenCV, but it is either not a compatible format or I didn't do it properly. And anyway I wouldn't know how to update the old image with the new one either!
Note: While I am currently working with eyed3, it is perfectly fine if I can solve this any other way.
I was finally able to solve this, although in a not very elegant way.
The first step is to load the image. For some reason I could not make this work with eyed3, but TinyTag does the job:
import io
from PIL import Image
from tinytag import TinyTag
tag = TinyTag.get(mp3_path, image=True)
image_data = tag.get_image()
img_bytes = io.BytesIO(image_data)
photo = Image.open(img_bytes)
Then I manipulate it. For example we may resize it and save it as jpg. Because we are using Pillow (PIL) for these operations, we actually need to save the image and finally load it back to get the binary data (this detail is probably what should be improved in the process).
photo = photo.resize((500, 500)) # suppose we want 500 x 500 pixels
rgb_photo = photo.convert("RGB")
rgb_photo.save(temp_file_path, format="JPEG")
The last step is thus to load the image and set it as metadata (you have more details about this step in this answer):
audio_file = eyed3.load(mp3_path)  # this has been loaded before
audio_file.tag.images.set(
    3, open(temp_file_path, "rb").read(), "image/jpeg"
)
audio_file.tag.save()
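On the temp-file detail the answer flags as inelegant: Pillow can save into an in-memory BytesIO buffer, and the resulting bytes can be handed to eyed3 directly. A sketch, where the blue stand-in image and the commented eyed3 call are illustrative rather than taken from the original code:

```python
import io
from PIL import Image

photo = Image.new("RGB", (600, 600), "blue")  # stand-in for the extracted cover art
photo = photo.resize((500, 500))

buf = io.BytesIO()
photo.convert("RGB").save(buf, format="JPEG")  # write JPEG into memory
jpeg_bytes = buf.getvalue()

# These bytes can go straight into eyed3, no temp file needed:
# audio_file.tag.images.set(3, jpeg_bytes, "image/jpeg")
print(jpeg_bytes[:2])  # b'\xff\xd8' (JPEG magic number)
```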

Error when trying to convert base64 string to image in Python

I am trying create a Python dashboard in which a user uploads an image and then the image is analyzed. The uploaded image is received as a base64 string and it needs to be converted to an image. I have tried
decoded = BytesIO(base64.b64decode(base64_string))
image = Image.open(decoded)
but I received this error:
cannot identify image file <_io.BytesIO object at 0x00000268954E9888>
Image.open needs a file-like object, the sort of thing returned by open(). The easiest way to manage this is with a with statement:
decoded = base64.b64decode(base64_string)
with BytesIO(decoded) as fh:
    image = Image.open(fh)
    # do stuff with image here: when the with block ends
    # it's very likely the image will no longer be usable
See if that works better for you. :)
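As a self-contained check of that decode path, here is a sketch that builds a tiny PNG in memory, base64-encodes it, and runs it back through the same steps (the 4x4 red image is made up for illustration):

```python
import base64
from io import BytesIO
from PIL import Image

# Build a tiny PNG in memory to stand in for the uploaded file
original = Image.new("RGB", (4, 4), "red")
buf = BytesIO()
original.save(buf, format="PNG")
base64_string = base64.b64encode(buf.getvalue())

# The decode path from the answer above
decoded = base64.b64decode(base64_string)
with BytesIO(decoded) as fh:
    image = Image.open(fh)
    print(image.size)  # (4, 4)
```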

Using Pillow and img2pdf to convert images to pdf

I have a task that requires me to get data from an image upload (jpg or png), resize it based on the requirement, and then transform it into pdf and then store in s3.
The file comes in as BytesIO
I have Pillow available so I can resize the image with it
Now the file type is class 'PIL.Image.Image' and I don't know how to proceed.
I found img2pdf library on https://gitlab.mister-muffin.de/josch/img2pdf, but I don't know how to use it when I have PIL format (use tobytes()?)
The s3 upload also looks like a file-like object, so I don't want to save it into a temp file before loading it again. Do I even need img2pdf in this case then?
How do I achieve this goal?
EDIT: I tried using tobytes() and upload to s3 directly. Upload was successful. However, when downloading to see the content, it shows an empty page. It seems like the file data is not written into the pdf file
EDIT 2: Actually went on the s3 and check the file stored. When I download it and open it, it shows cannot be opened
EDIT 3: I don't really have working code as I'm still experimenting what could work, but here's a gist
data = request.FILES['file'].file  # where the data is
im = Image.open(data)
(width, height) = (im.width // 2, im.height // 2)  # example action I wanna take with Pillow
im_resized = im.resize((width, height))
data = im_resized.tobytes()
# potential step for using img2pdf here but I don't know how
# img2pdf.convert(data) # this fails because "ImageOpenError: cannot read input image (not jpeg2000). PIL: error reading image: cannot identify image file <_io.BytesIO..."
# img2pdf.convert(im_resized) # this also fails because "TypeError: Neither implements read() nor is str or bytes"
upload_to_s3(data)  # some function that utilizes boto3 to upload to s3
The problem is that you pass a PIL Image.Image object instead of encoded image bytes (JPEG, PNG or similar).
Try this:
bytes_io = io.BytesIO()
image.save(bytes_io, 'PNG')
with open(output_pdf, "wb") as f:
    f.write(img2pdf.convert(bytes_io.getvalue()))
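If writing a temp file to disk is undesirable (as in the s3 question above), one alternative worth noting is that Pillow itself can write single-page PDFs, so img2pdf can be skipped entirely and the bytes uploaded straight from memory. A sketch with a made-up stand-in image:

```python
from io import BytesIO
from PIL import Image

img = Image.new("RGB", (100, 80), "white")  # stand-in for the uploaded image
img = img.resize((50, 40))

pdf_buf = BytesIO()
img.convert("RGB").save(pdf_buf, format="PDF")  # Pillow writes a one-page PDF
pdf_bytes = pdf_buf.getvalue()  # ready to hand to a boto3 upload call
print(pdf_bytes[:5])  # b'%PDF-'
```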

PIL with BytesIO: cannot identify image file

I am trying to send an image over socket connection for video chat, but the reconstruction of the image from bytes format is incorrect. Here is my conversion of the image to bytes to send:
pil_im = Image.fromarray(img)
b = io.BytesIO()
pil_im.save(b, 'jpeg')
im_bytes = b.getvalue()
return im_bytes
This sends fine, however, I cannot resolve the reformatting of these bytes into an image file. Here is my code to reformat into image for display:
pil_bytes = io.BytesIO(im_bytes)
pil_image = Image.open(pil_bytes)
cv_image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)
return cv_image
The second line there raises the following exception:
cannot identify image file <_io.BytesIO object at 0x0388EF60>
I have looked at some other threads (this one and this one) but no solution has been helpful for me. I am also using this as reference to try to correct myself, but what seems to work fine for them just doesn't for me.
Thank you for any assistance you can provide and please excuse any errors, I am still learning python.
First of all, thank you! The code in your question helped me solve the first part of the problem I had. The second part was already solved for me using this simple code (don't convert to an array):
dataBytesIO = io.BytesIO(im_bytes)
image = Image.open(dataBytesIO)
Hope this helps
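Putting both halves of this thread together, the round trip can be checked end to end with a small made-up array (no cv2 needed for the check itself):

```python
import io
import numpy as np
from PIL import Image

# Encode: array -> JPEG bytes (as in the question)
arr = np.zeros((8, 8, 3), dtype=np.uint8)
pil_im = Image.fromarray(arr)
b = io.BytesIO()
pil_im.save(b, "jpeg")
im_bytes = b.getvalue()

# Decode: JPEG bytes -> image (as in the answer)
image = Image.open(io.BytesIO(im_bytes))
print(image.size)  # (8, 8)
```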

From JPG to b64encode to cv2.imread()

For a program I am writing, I am transferring an image from one computer - using base64.b64encode(f.read(image)) - and trying to read it in the receiving script without saving it to hard drive (in an effort to minimize process time). I'm having a hard time figuring out how to read the image into OpenCV without saving it locally.
Here is what my code for sending the image looks like:
f = open('image.jpg', 'rb')
sendthis = f.read()
f.close()
databeingsent = base64.b64encode(sendthis)
client.publish('/image', databeingsent, 0)
# this is an MQTT publish, details for SO shouldn't be relevant
Meanwhile, here is the code receiving it. (This is in an on_message function, since I'm using MQTT for the transfer.)
def on_message(client, userdata, msg):  # msg.payload is incoming data
    img = base64.b64decode(msg.payload)
    source = cv2.imread(img)
    cv2.imshow("image", source)
After the message decodes, I have the error:
"TypeError: Your input type is not a numpy array".
I've done some searching, and I can't seem to find a relevant solution - some exist regarding converting from text files to numpy using b64, but none really relate to using an image and immediately reading that decoded data into OpenCV without the intermediary step of saving it to the harddrive (using the inverse process used to read the file in the "send" script).
I'm still pretty new to Python and OpenCV, so if there's a better encoding method to send the image - whatever solves the problem. How the image is sent is irrelevant, so long as I can read it in on the receiving end without saving it as a .jpg to disk.
Thanks!
You can get a numpy array from your decoded data using:
import numpy as np
...
img = base64.b64decode(msg.payload)
npimg = np.frombuffer(img, dtype=np.uint8)
Then you need imdecode to read the image from a buffer in memory. imread is meant to load an image from a file.
So:
import numpy as np
...
def on_message(client, userdata, msg):  # msg.payload is incoming data
    img = base64.b64decode(msg.payload)
    npimg = np.frombuffer(img, dtype=np.uint8)
    source = cv2.imdecode(npimg, 1)
From the OpenCV documentation we can see that:
imread : Loads an image from a file.
imdecode : Reads an image from a buffer in memory.
This seems like a better way to do what you want.
