How to read raw png from an array in python opencv? - python

I'm streaming a PNG image from my iPhone to my MacBook over TCP. The MacBook code is from http://docs.python.org/library/socketserver.html#requesthandler-objects. How can the image be converted for use with OpenCV? PNG was selected because it is efficient, but other formats could be used.
I wrote a test program that reads the rawImage from a file, but not sure how to convert it:
# Read rawImage from a file, but in reality will have it from TCPServer
f = open('frame.png', "rb")
rawImage = f.read()
f.close()
# Not sure how to convert rawImage
npImage = np.array(rawImage)
matImage = cv2.imdecode(rawImage, 1)
#show it
cv.NamedWindow('display')
cv.MoveWindow('display', 10, 10)
cv.ShowImage('display', matImage)
cv.WaitKey(0)

Andy Rosenblum's answer works, and it might be the best solution if using the outdated cv python API (vs. cv2).
However, because this question is equally interesting for users of the latest versions, I suggest the following solution. The sample code below may be better than the accepted solution because:
It is compatible with newer OpenCV python API (cv2 vs. cv). This solution is tested under opencv 3.0 and python 3.0. I believe only trivial modifications would be required for opencv 2.x and/or python 2.7.x.
Fewer imports. This can all be done with numpy and opencv directly, no need for StringIO and PIL.
Here is how I create an opencv image decoded directly from a file object, or from a byte buffer read from a file object.
import cv2
import numpy as np
#read the data from the file
with open(somefile, 'rb') as infile:
    buf = infile.read()
#use numpy to construct an array from the bytes
x = np.fromstring(buf, dtype='uint8')
#decode the array into an image
img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)
#show it
cv2.imshow("some window", img)
cv2.waitKey(0)
Note that in opencv 3.0, the naming convention for the various constants/flags changed, so if using opencv 2.x, you will need to change the flag cv2.IMREAD_UNCHANGED. This code sample also assumes you are loading in a standard 8-bit image, but if not, you can play with the dtype='...' flag in np.fromstring.
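Since the original question is about a PNG arriving over TCP rather than from a file, here is a minimal sketch of the same decode step applied to a received byte buffer. The listening socket and the read-until-close framing are assumptions; adapt them to however your TCPServer delivers the bytes.
import socket
import numpy as np
import cv2

# Hypothetical server socket; reading until the peer closes is just one
# possible framing -- adapt it to how your stream delimits images.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(('0.0.0.0', 9999))
srv.listen(1)
conn, _ = srv.accept()

chunks = []
while True:
    chunk = conn.recv(4096)
    if not chunk:          # peer closed the connection -> payload complete
        break
    chunks.append(chunk)
rawImage = b''.join(chunks)

# Same decode step as above: bytes -> uint8 array -> decoded image
x = np.frombuffer(rawImage, dtype='uint8')
img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)
cv2.imshow('received frame', img)
cv2.waitKey(0)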

Another way,
also in the case of reading an actual file, this works for a Unicode path (tested on Windows):
with open(image_full_path, 'rb') as img_stream:
    file_bytes = numpy.asarray(bytearray(img_stream.read()), dtype=numpy.uint8)
    img_data_ndarray = cv2.imdecode(file_bytes, cv2.CV_LOAD_IMAGE_UNCHANGED)
    img_data_cvmat = cv.fromarray(img_data_ndarray)  # convert to old cvmat if needed

I figured it out:
from PIL import Image
from StringIO import StringIO  # Python 2; on Python 3 use io.BytesIO instead
import numpy as np
import cv

# Read rawImage from a file, but in reality will have it from TCPServer
f = open('frame.png', "rb")
rawImage = f.read()
f.close()
# Convert rawImage to Mat
pilImage = Image.open(StringIO(rawImage))
npImage = np.array(pilImage)
matImage = cv.fromarray(npImage)
#show it
cv.NamedWindow('display')
cv.MoveWindow('display', 10, 10)
cv.ShowImage('display', matImage)
cv.WaitKey(0)

This works for me (these days):
import cv2
import numpy as np
data = open('016e263c726a.raw', 'rb').read()
x = np.frombuffer(data, dtype='uint8').reshape(2048,2448)
cv2.imshow('x',x); cv2.waitKey(); cv2.destroyAllWindows()
But it reads a RAW image saved without any specific format.

(Your question seems to be tagged objective-c but you ask for Python and so is your example, so I'll use that.)
My first post on Stack Overflow!
The cv.LoadImageM method seems to be what you are looking for.
http://opencv.willowgarage.com/documentation/python/reading_and_writing_images_and_video.html
Example use:
http://opencv.willowgarage.com/wiki/PythonInterface/
LoadImage(filename, iscolor=CV_LOAD_IMAGE_COLOR) → None
Loads an image from a file as an IplImage.
Parameters:
filename (str) – Name of file to be loaded.
iscolor (int) –
Specific color type of the loaded image:
CV_LOAD_IMAGE_COLOR the loaded image is forced to be a 3-channel color image
CV_LOAD_IMAGE_GRAYSCALE the loaded image is forced to be grayscale
CV_LOAD_IMAGE_UNCHANGED the loaded image will be loaded as is.
The function cvLoadImage loads an image from the specified file and
returns the pointer to the loaded image. Currently the following file
formats are supported:
Windows bitmaps - BMP, DIB
JPEG files - JPEG, JPG, JPE
Portable Network Graphics - PNG
Portable image format - PBM, PGM, PPM
Sun rasters - SR, RAS
TIFF files - TIFF, TIF
Note that in the current implementation the alpha channel, if any, is
stripped from the output image, e.g. 4-channel RGBA image will be
loaded as RGB.
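For completeness, a minimal usage sketch under the legacy cv API (assuming OpenCV 2.x, where the old cv module and the CV_LOAD_IMAGE_* constants still exist):
import cv

# Load the PNG from disk as a 3-channel color image using the legacy API
img = cv.LoadImageM('frame.png', cv.CV_LOAD_IMAGE_COLOR)

cv.NamedWindow('display')
cv.ShowImage('display', img)
cv.WaitKey(0)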

When you have to load from file, this simple solution does the job (tested with opencv-python-3.2.0.6):
import cv2
img = cv2.imread(somefile)

Related

Python: how to convert an image in memory?

I have an image in memory (downloaded from an online source) and I want to convert it to a different format before sending it on to a different online location.
The conversion is .webp to .jpg but that's not really relevant.
With Pillow I can easily convert local images and save them back to disc, but I can't get it to work with an image in memory.
I don't necessarily need to use Pillow. Any way to convert the image without having to save anything to disc is fine.
I am new to BytesIO with PIL, so please just check my code attempt; it works with my test image, let me know how it goes.
from PIL import Image
from io import BytesIO
img = Image.open('test.webp')
print('image : ', img.format)
img.show()
# Write PIL Image to in-memory PNG
membuf = BytesIO()
img.save(membuf, format="png")
img = Image.open(membuf)
print('image : ', img.format)
img.show()
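Since the question specifically mentions .webp to .jpg, here is a minimal sketch of that case (an assumption on my part: the source may carry an alpha channel, which JPEG cannot store, hence the RGB conversion):
from PIL import Image
from io import BytesIO

img = Image.open('test.webp')

# Write the JPEG into an in-memory buffer instead of to disk
membuf = BytesIO()
img.convert('RGB').save(membuf, format='JPEG')

# membuf.getvalue() now holds the .jpg bytes, ready to send on
jpg_bytes = membuf.getvalue()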

converting bytes to image using tinytag and PIL

I am using the tinytag module in Python to get the cover art of an mp3 file and want to display or store it. The returned value is of type bytes. I have tried fumbling around with PIL using frombytes, but to no avail. Is there any method to convert the bytes to an image?
from tinytag import TinyTag
tag = TinyTag.get("03. Me, Myself & I.mp3", image=True)
img = tag.get_image()
I actually got a PNG image when I called tag.get_image() but I guess you might get a JPEG. Either way, you can wrap it in a BytesIO and open it with PIL/Pillow or display it. Carrying on from your code:
from PIL import Image
import io
...
im = tag.get_image()
# Make a PIL Image
pi = Image.open(io.BytesIO(im))
# Save as PNG, or JPEG
pi.save('cover.png')
# Display
pi.show()
Note that you don't have to use PIL/Pillow. You could look at the first few bytes and, if they are a PNG signature (\x89PNG), save the data as binary with a PNG extension. If the signature is JPEG (\xff\xd8), save the data as binary with a JPEG extension.
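A minimal sketch of that signature check (the output filenames are just illustrative):
from tinytag import TinyTag

tag = TinyTag.get("03. Me, Myself & I.mp3", image=True)
data = tag.get_image()

# Inspect the magic bytes and pick an extension accordingly
if data.startswith(b'\x89PNG'):
    ext = 'png'
elif data.startswith(b'\xff\xd8'):
    ext = 'jpg'
else:
    ext = 'bin'   # unknown format, keep the raw bytes

with open('cover.' + ext, 'wb') as f:
    f.write(data)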

How to create high res JPEG with Wand from binary string

I'm trying to convert some PDFs to high-res JPEGs using ImageMagick. I'm working on Windows 10 (64-bit) with Python 3.6.2 (64-bit) and Wand 0.4.4. At the command line I have:
$ /e/ImageMagick-6.9.9-Q16-HDRI/convert.exe -density 400 myfile.pdf -scale 2000x1000 test3.jpg
which is working well for me.
In python:
from wand.image import Image
import os

file_path = os.path.dirname(os.path.abspath(__file__)) + os.sep + "myfile.pdf"
with Image(filename=file_path, resolution=400) as image:
    image.save()
    image_jpeg = image.convert('jpeg')
This is giving me low-res JPEGs. How do I translate the command line above into my Wand code to do the same thing?
edit:
I realized that the problem is that the input pdf has to be read into the Image object as a binary string, so based on http://docs.wand-py.org/en/0.4.4/guide/read.html#read-blob I tried:
with open(file_path, 'rb') as f:
    image_binary = f.read()

with Image(blob=image_binary, resolution=400) as img:
    img.transform('2000x1000', '100%')
    img.make_blob('jpeg')
    img.save(filename='out.jpg')
This reads the file in ok, but the output is split into 10 files. Why? I need to get this into 1 high res jpeg.
EDIT:
I need to send the jpeg to an OCR api, so I was wondering if I could write the output to a file like object. Looking at https://www.imagemagick.org/api/magick-image.php#MagickWriteImageFile, I tried :
emptyFile = Image(width=1500, height=2000)
with Image(filename=file_path, resolution=400) as image:
    library.MagickResetIterator(image.wand)
    # Call C-API Append method.
    resource_pointer = library.MagickAppendImages(image.wand,
                                                  True)
    library.MagickWriteImagesFile(resource_pointer, emptyFile)
This gives:
File "E:/ENVS/r3/pdfminer.six/ocr_space.py", line 113, in <module>
test_file = ocr_stream(filename='test4.jpg')
File "E:/ENVS/r3/pdfminer.six/ocr_space.py", line 96, in ocr_stream
library.MagickWriteImagesFile(resource_pointer,emptyFile)
ctypes.ArgumentError: argument 2: <class 'TypeError'>: wrong type
How can I get this working?
Why? I need to get this into 1 high res jpeg.
The PDF contains pages that ImageMagick considers individual images in a "stack". The wand library provides wand.image.Image.sequence to work with each page.
However, to append all pages into a single JPEG, you can either iterate over each page and stitch them together, or call the C-API method MagickAppendImages.
from wand.image import Image
from wand.api import library
import ctypes

# Map C-API not provided by wand library.
library.MagickAppendImages.argtypes = [ctypes.c_void_p, ctypes.c_int]
library.MagickAppendImages.restype = ctypes.c_void_p

with Image(filename="path_to_document.pdf", resolution=400) as image:
    # Do all your preprocessing first.
    # Either work directly on the wand instance, or iterate over each page.
    # ...
    # To write all "pages" into a single image,
    # reset the stack iterator.
    library.MagickResetIterator(image.wand)
    # Call C-API Append method.
    resource_pointer = library.MagickAppendImages(image.wand,
                                                  True)
    # Write C resource directly to disk.
    library.MagickWriteImages(resource_pointer,
                              "output.jpeg".encode("ASCII"),
                              False)
Update:
I need to send the jpeg to an OCR api ...
Assuming you're using OpenCV's python API, you'll only need to iterate over each page, and pass the image-file data to the OCR via numpy buffers.
from wand.image import Image
import numpy
import cv2

def ocr_process(file_data_buffer):
    """ Replace with whatever your OCR-API calls for """
    mat_instance = cv2.imdecode(file_data_buffer, cv2.IMREAD_UNCHANGED)
    # ... work ...

source_image = "path_to_document.pdf"
with Image(filename=source_image, resolution=400) as img:
    for page in img.sequence:
        file_buffer = numpy.asarray(bytearray(page.make_blob("JPEG")),
                                    dtype=numpy.uint8)
        ocr_process(file_buffer)
so I was wondering if I could write the output to a file like object
Don't assume that python "image" objects (or the underlying C structures) from different libraries are compatible with each other.
Without knowing the OCR api, I can't help you past the wand part, but I can suggest one of the following...
Use temporary intermediate files. (slower I/O, but easier to learn/develop/debug)
with Image(filename=INPUT_PATH) as img:
    # work
    img.save(filename=OUTPUT_PATH)
# OCR work on OUTPUT_PATH
Use file descriptors if the OCR API supports it. (Same as above)
with open(INPUT_PATH, 'rb') as fd:
    with Image(file=fd) as img:
        pass  # work
# OCR work ???
Use blobs. (faster I/O but need a lot more memory)
buffer = None
with Image(filename=INPUT_PATH) as img:
    # work
    buffer = img.make_blob(FORMAT)
if buffer:
    pass  # OCR work ???
Even More Updates
Wrapping all the comments together, a solution might be...
from wand.image import Image
from wand.api import library
import ctypes
import requests

# Map C-API not provided by wand library.
library.MagickAppendImages.argtypes = [ctypes.c_void_p, ctypes.c_int]
library.MagickAppendImages.restype = ctypes.c_void_p

with Image(filename='path_to_document.pdf', resolution=400) as image:
    # ... Do pre-processing ...
    # Reset the stack iterator.
    library.MagickResetIterator(image.wand)
    # Call C-API Append method.
    resource_pointer = library.MagickAppendImages(image.wand, True)
    # Convert to JPEG.
    library.MagickSetImageFormat(resource_pointer, b'JPEG')
    # Create size sentinel.
    length = ctypes.c_size_t()
    # Write image blob to memory.
    image_data_pointer = library.MagickGetImagesBlob(resource_pointer,
                                                     ctypes.byref(length))
    # Ensure success
    if image_data_pointer and length.value:
        # Create buffer from memory address
        payload = ctypes.string_at(image_data_pointer, length.value)
        # Define local filename.
        payload_filename = 'my_hires_image.jpg'
        # Post payload as multipart encoded image file with filename.
        requests.post(THE_URL, files={'file': (payload_filename, payload)})
What about something like:
ok = Image(filename=file_path, resolution=400)
with ok.transform('2000x1000', '100%') as image:
    image.compression_quality = 100
    image.save()
or:
ok.resize(2000, 1000)
related:
https://github.com/dahlia/wand/blob/13c4f544bd271fe298ac8dde44fbf178b349361a/docs/guide/resizecrop.rst
Python 3 Wand How to make an unanimated gif from multiple PDF pages

Python PIL reading PNG from STDIN

I am having a problem reading PNG images from STDIN using PIL. When the image is written by PIL it is all scrambled, but if I write the file using a simple file open, write and close, the file is saved perfectly.
I have a program that dumps PNG files to stdout in a sequence, with no compression, and I read that stream using a Python script which is supposed to read the data and do some routines on almost every PNG. The program that dumps the data writes a certain string to delimit the PNG files; the string is "{fim:FILE_NAME.png}".
The script is something like:
import sys
import re
from PIL import Image

png = None
for linha in sys.stdin:
    if re.search('{fim:', linha):
        fname = linha.replace('{fim:', '')[:-2]
        # writes data directly to file, works fine
        #f = open("/tmp/%s" % fname, 'w')
        #f.write(png)
        #f.close()
        # create a PIL Image from data and writes to disk, fails
        im = Image.frombuffer("RGB", (640, 480), png, "raw", "RGB", 0, 1)
        #im = Image.fromstring("RGB",(640,480),png)
        im.save("/tmp/%s" % fname)
        png = None
    else:
        if png is None:
            png = linha
        else:
            png += linha
imagemagick identify from a wrong image:
/tmp/1349194042-24.png PNG 640x480 640x480+0+0 8-bit DirectClass 361KiB 0.010u 0:00.019
imagemagick identify from a working image:
/tmp/1349194586-01.png PNG 640x480 640x480+0+0 8-bit DirectClass 903KiB 0.010u 0:00.010
Does anyone have an idea of what is happening? Is it something about little/big endianness? I have tried Image.frombuffer, Image.fromstring, and different modes, but nothing. It seems that there is more information in the buffer than PIL expects.
Thanks,
If the png variable contains the binary data from a PNG file, you can't read it using frombuffer; that's used for reading raw pixel data. Instead, wrap the bytes in an in-memory file with io.BytesIO and use Image.open, i.e.:
import io
from PIL import Image
img = Image.open(io.BytesIO(png))
png variable is uninitialized on the first call to Image.frombuffer(). You need to initialize it to something from stdin.
I'm not sure about your use of for linha in sys.stdin:. That gives you line-buffered input. You probably want to use block buffered input of size N, like sys.stdin.read(N). This will read a specific number of bytes and then you can parse the data, like cutting your filename delimiter out and filling the input buffer for Image.frombuffer().
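A minimal sketch of that idea, reading the whole stream as binary via sys.stdin.buffer (assumptions: the stream fits in memory and the {fim:...} marker never occurs inside the PNG data):
import sys
import re
import io
from PIL import Image

# Read everything from stdin as raw bytes (no text/newline translation).
data = sys.stdin.buffer.read()

# Each PNG payload is followed by a marker line like b"{fim:FILENAME.png}\n".
for match in re.finditer(rb'(.*?)\{fim:(.+?)\}\r?\n', data, re.DOTALL):
    png_bytes = match.group(1)
    fname = match.group(2).decode()
    im = Image.open(io.BytesIO(png_bytes))
    im.save("/tmp/%s" % fname)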

Resize image in Python without losing EXIF data

I need to resize jpg images with Python without losing the original image's EXIF data (metadata about date taken, camera model etc.). All Google searches about Python and images point to the PIL library, which I'm currently using, but it doesn't seem to be able to retain the metadata. The code I have so far (using PIL) is this:
from PIL import Image

img = Image.open('foo.jpg')
width, height = 800, 600
if img.size[0] < img.size[1]:
    width, height = height, width
resized_img = img.resize((width, height), Image.ANTIALIAS)  # best down-sizing filter
resized_img.save('foo-resized.jpg')
Any ideas? Or other libraries that I could be using?
There is actually a really simple way of copying EXIF data from one picture to another with only PIL, though it doesn't let you modify the EXIF tags.
image = Image.open('test.jpg')
exif = image.info['exif']
# Your picture process here
image = image.rotate(90)
image.save('test_rotated.jpg', 'JPEG', exif=exif)
As you can see, the save function can take an exif argument, which lets you copy the raw EXIF data into the new image when saving. You don't actually need any other lib if that's all you want to do. I can't seem to find any documentation on the save options and I don't even know if that's specific to Pillow or works with PIL too. (If someone has some kind of link, I would appreciate it if they posted it in the comments.)
import jpeg
jpeg.setExif(jpeg.getExif('foo.jpg'), 'foo-resized.jpg')
http://www.emilas.com/jpeg/
You can use pyexiv2 to copy EXIF data from source image. In the following example image is resized using PIL library, EXIF data copied with pyexiv2 and image size EXIF fields are set with new size.
from PIL import Image
import pyexiv2

def resize_image(source_path, dest_path, size):
    # resize image
    image = Image.open(source_path)
    image.thumbnail(size, Image.ANTIALIAS)
    image.save(dest_path, "JPEG")

    # copy EXIF data
    source_image = pyexiv2.Image(source_path)
    source_image.readMetadata()
    dest_image = pyexiv2.Image(dest_path)
    dest_image.readMetadata()
    source_image.copyMetadataTo(dest_image)

    # set EXIF image size info to resized size
    dest_image["Exif.Photo.PixelXDimension"] = image.size[0]
    dest_image["Exif.Photo.PixelYDimension"] = image.size[1]
    dest_image.writeMetadata()

# resizing local file
resize_image("41965749.jpg", "resized.jpg", (600, 400))
Why not use ImageMagick?
It is quite a standard tool (for instance, it is the standard tool used by Gallery 2); I have never used it, however it has a Python interface as well (or you can simply spawn the command) and, most of all, it should maintain EXIF information across all transformations.
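A minimal sketch of the spawn-the-command approach (assuming the ImageMagick convert binary is on the PATH; by default it carries the EXIF profile through to the output unless you pass -strip):
import subprocess

# Resize with ImageMagick; EXIF metadata is kept in the output file.
subprocess.run(
    ['convert', 'foo.jpg', '-resize', '800x600', 'foo-resized.jpg'],
    check=True,
)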
Here's an updated answer as of 2018. piexif is a pure python library that for me installed easily via pip (pip install piexif) and worked beautifully (thank you, maintainers!). https://pypi.org/project/piexif/
The usage is very simple, a single line will replicate the accepted answer and copy all EXIF tags from the original image to the resized image:
import piexif
piexif.transplant("foo.jpg", "foo-resized.jpg")
I haven't tried it yet, but it looks like you could also perform modifications easily by using the load, dump, and insert functions as described in the linked documentation.
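A minimal sketch of that load/dump/insert route, based on piexif's documented API (the tag edit shown, updating the pixel dimensions to the resized size, is just an illustration):
import piexif

# Read the EXIF from the original, tweak a couple of tags, and
# write the result into the already-resized file.
exif_dict = piexif.load("foo.jpg")
exif_dict["Exif"][piexif.ExifIFD.PixelXDimension] = 800
exif_dict["Exif"][piexif.ExifIFD.PixelYDimension] = 600
exif_bytes = piexif.dump(exif_dict)
piexif.insert(exif_bytes, "foo-resized.jpg")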
For pyexiv2 v0.3.2, the API documentation refers to the copy method to carry over EXIF data from one image to another. In this case it would be the EXIF data of the original image over to the resized image.
Going off Maksym Kozlenko's answer, the updated code for copying EXIF data is:
source_image = pyexiv2.ImageMetadata(source_path)
source_image.read()
dest_image = pyexiv2.ImageMetadata(dest_path)
dest_image.read()
source_image.copy(dest_image,exif=True)
dest_image.write()
You can use pyexiv2 to modify the file after saving it.
from PIL import Image
img_path = "/tmp/img.jpg"
img = Image.open(img_path)
exif = img.info['exif']
img.save("output_"+img_path, exif=exif)
Tested in Pillow 2.5.3
It seems Depado's solution does not work for me; in my scenario the image does not even contain an EXIF segment.
pyexiv2 is hard to install on my Mac, so instead I use the pexif module (https://github.com/bennoleslie/pexif/blob/master/pexif.py). To add an EXIF segment to an image that does not contain EXIF info, I copied in the EXIF info contained in an image that does have an EXIF segment:
from pexif import JpegFile
#get exif segment from an image
jpeg = JpegFile.fromFile(path_with_exif)
jpeg_exif = jpeg.get_exif()
#import the exif segment above to the image file which does not contain exif segment
jpeg = JpegFile.fromFile(path_without_exif)
exif = jpeg.import_exif(jpeg_exif)
jpeg.writeFile(path_without_exif)
Updated version of Maksym Kozlenko
Python3 and py3exiv2 v0.7
# Resize image and update Exif data
from PIL import Image
import pyexiv2

def resize_image(source_path, dest_path, size):
    # resize image
    image = Image.open(source_path)
    # Using thumbnail, 'size' is the MAX width or height,
    # so it will retain the aspect ratio
    image.thumbnail(size, Image.ANTIALIAS)
    image.save(dest_path, "JPEG")

    # copy EXIF data
    source_exif = pyexiv2.ImageMetadata(source_path)
    source_exif.read()
    dest_exif = pyexiv2.ImageMetadata(dest_path)
    dest_exif.read()
    source_exif.copy(dest_exif, exif=True)

    # set EXIF image size info to resized size
    dest_exif["Exif.Photo.PixelXDimension"] = image.size[0]
    dest_exif["Exif.Photo.PixelYDimension"] = image.size[1]
    dest_exif.write()
PIL handles EXIF data, doesn't it? Look in PIL.ExifTags.
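PIL can at least read the tags; a minimal sketch of inspecting them via PIL.ExifTags and the de-facto _getexif() helper (this only reads metadata from a JPEG, it does not by itself carry the tags through a resize):
from PIL import Image, ExifTags

img = Image.open('foo.jpg')
raw = img._getexif() or {}

# Map numeric tag ids to human-readable names
exif = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in raw.items()}
print(exif.get('Model'), exif.get('DateTimeOriginal'))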
