The title already says what I expect.
I want the screenshot to be saved into a variable instead of to disk, because I apply JPEG compression through the save parameters and I do this repeatedly for streaming.
Writing the file and then opening and re-reading it every time would waste disk I/O.
Here is what I was doing:
import pyautogui as pag

im = pag.screenshot()
im.save('hoho.jpg', optimize=True, quality=10)  # I'm using the compression parameters.
I expected the save call to return the binary data of the saved image, something like this:
import pyautogui as pag

im = pag.screenshot()
binary_image = im.save('hoho.jpg', optimize=True, quality=10)  # I'm using the compression parameters.
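A minimal sketch of the in-memory approach the question is asking for, assuming Pillow's support for saving into a file-like object (the jpeg_buffer name is just illustrative):

import io

import pyautogui as pag

im = pag.screenshot()

# Save into an in-memory buffer instead of a file on disk.
jpeg_buffer = io.BytesIO()
im.save(jpeg_buffer, format='JPEG', optimize=True, quality=10)

# The compressed JPEG bytes, ready to be reused or streamed.
binary_image = jpeg_buffer.getvalue()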
I have the following code to generate GIFs from images. The code works fine and the GIF is saved locally. Rather than saving the GIF locally, I want, for example, a data URI that I could return from my project in a request. How can I generate the GIF and return it without saving it?
My code to generate the GIF:
import imageio as iio

png_dir = './'
images = []
for file_name in url:  # url is a list of image paths defined elsewhere
    images.append(iio.imread(file_name))
iio.mimsave('movie.gif', images, format='gif')
I found I can get the GIF as bytes with the following code:
gif_encoded = iio.mimsave("<bytes>", images, format='gif')
This returns the GIF as bytes, which you can then encode:

import base64

encoded_string = base64.b64encode(gif_encoded)
encoded_string = b'data:image/gif;base64,' + encoded_string
decoded_string = encoded_string.decode()
For more examples, check this out:
https://imageio.readthedocs.io/en/stable/examples.html#read-from-fancy-sources
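As a quick sanity check, a sketch (assuming the frames from above are still in images) that builds the data URI and then reads the GIF back from memory without touching disk:

import base64
import io

import imageio as iio

gif_bytes = iio.mimsave("<bytes>", images, format='gif')
data_uri = 'data:image/gif;base64,' + base64.b64encode(gif_bytes).decode()

# Strip the prefix, decode the base64 payload and re-read the frames.
payload = data_uri.split(',', 1)[1]
frames = iio.mimread(io.BytesIO(base64.b64decode(payload)), format='gif')
print(len(frames))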
I have a set of many songs, some of which have png images in metadata, and I need to convert these to jpg.
I know how to convert png images to jpg in general, but I am currently accessing metadata using eyed3, which returns ImageFrame objects, and I don't know how to manipulate these. I can, for instance, access the image type with
print(img.mime_type)
which returns
image/png
but I don't know how to progress from here. Very naively I tried loading the image with OpenCV, but it is either not a compatible format or I didn't do it properly. And anyway I wouldn't know how to update the old image with the new one either!
Note: While I am currently working with eyed3, it is perfectly fine if I can solve this any other way.
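For reference, eyed3's ImageFrame exposes the raw picture bytes via its image_data attribute, so in principle a sketch along these lines should hand the picture to Pillow (untested here; mp3_path is assumed to be defined):

import io

import eyed3
from PIL import Image

audio_file = eyed3.load(mp3_path)
for img in audio_file.tag.images:
    print(img.mime_type)                        # e.g. image/png
    photo = Image.open(io.BytesIO(img.image_data))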
I was finally able to solve this, although in a not very elegant way.
The first step is to load the image. For some reason I could not make this work with eyed3, but TinyTag does the job:
import io

from PIL import Image
from tinytag import TinyTag

tag = TinyTag.get(mp3_path, image=True)
image_data = tag.get_image()

img_bytes = io.BytesIO(image_data)
photo = Image.open(img_bytes)
Then I manipulate it. For example, we may resize it and save it as JPEG. Because we are using Pillow (PIL) for these operations, we actually need to save the image to a file and then load it back to get the binary data (this detail is probably what should be improved in the process).
photo = photo.resize((500, 500)) # suppose we want 500 x 500 pixels
rgb_photo = photo.convert("RGB")
rgb_photo.save(temp_file_path, format="JPEG")
The last step is thus to load the image back and set it as metadata. There are more details about this step in this answer:

import eyed3

audio_file = eyed3.load(mp3_path)  # this has been loaded before
audio_file.tag.images.set(
    3, open(temp_file_path, "rb").read(), "image/jpeg"
)
audio_file.tag.save()
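As noted above, the detour through a temporary file is the part that could probably be improved. A minimal sketch of an in-memory alternative, assuming the same photo object from the steps above (jpeg_buffer is just an illustrative name):

import io

import eyed3

# Encode the resized image into memory instead of a temporary file.
jpeg_buffer = io.BytesIO()
rgb_photo = photo.resize((500, 500)).convert("RGB")
rgb_photo.save(jpeg_buffer, format="JPEG")

audio_file = eyed3.load(mp3_path)
audio_file.tag.images.set(3, jpeg_buffer.getvalue(), "image/jpeg")
audio_file.tag.save()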
I am working with web services, using requests to get an image based on the parameters passed. The first response I get is an XML schema with a file reference URL.
<?xml version="1.0"?>
<Coverages schemaLocation="http://localhost/server/schemas/wcs/1.1.1/wcsCoverages.xsd">
<Coverage>
<Title>Filename</Title>
<Abstract/>
<Identifier>Filename</Identifier>
<Reference href="http://localhost/server/temp/filename.tif"/>
</Coverage>
</Coverages>
Next, using xml.etree.ElementTree, I extracted the URL. What I need next is to display that TIFF image (or any other image) in the Jupyter Notebook without downloading it (as one image can sometimes be more than 50 or 100 MB).
Currently I am downloading the file, reading and converting the file data into an array (as pyplot plots image arrays/matrices), and plotting it.
import requests as req # request wcs urls
import xml.etree.ElementTree as ET # used for xml parsing
import matplotlib.pyplot as plt # display image
import gdal
# Download the File to local directory using Chunks
chunk_size=1024
local_filename = url.split('/')[-1] # Filename from url
r = req.get(url, stream=True)
with open(local_filename, 'wb') as f:
    for chunk in r.iter_content(chunk_size):
        if chunk:
            f.write(chunk)
# Read File Raster Data as Array using Gdal
gtif = gdal.Open(local_filename)
georaster = gtif.ReadAsArray()
# Plot image using matplotlib
plt.imshow(georaster)
plt.title(local_filename)
plt.show()
So, is there any way to convert the raw response from the requests API for the file directly into an image array (in chunks or as a whole) and display it in the notebook, without downloading it and taking up space in a local directory?
The raw response from the GET request for the TIFF file is below:
resp2 = req.get('tiffFileUrl')
rawdata = resp2.content
rawdata[0:10]
Output: b'MM\x00*\x00\x00\x00\x08\x00\x10'
I tried searching for this question but did not find any good answer on it, so if there is any related question or duplicate, please provide me the link.
You can try plotting TIFF images using the ipyplot package:
import ipyplot

ipyplot.plot_images(
    ['https://file-examples.com/wp-content/uploads/2017/10/file_example_TIFF_1MB.tiff',
     'https://file-examples.com/wp-content/uploads/2017/10/file_example_TIFF_1MB.tiff',
     'https://file-examples.com/wp-content/uploads/2017/10/file_example_TIFF_1MB.tiff'],
    img_width=250)
After doing so much research and trying different solutions, it seems to me that for now the only way to do the above, i.e. display the TIFF file, is to download it, read the data using gdal, convert it into an array and display it using matplotlib.
The solution mentioned in the following link only accepts "PNG" files:
How to plot remote image (from http url)
which comes to the conclusion that we need the PIL library, which I also tried and which fails:
from PIL import Image

resp2 = req.get('tiffFileUrl', stream=True)
resp2.raw.decode_content = True
im = Image.open(resp2.raw)
im
Gives Output:
<PIL.TiffImagePlugin.TiffImageFile image mode=I;16BS size=4800x4800 at 0x11CB7C50>
and converting the PIL object to a numpy array, or even getting the data or pixels from the PIL object, gives an "unrecognized mode" error.
im.getdata()
im.getpixel((0, 0))
numpy.array(im)
All give the same error:
257 if not self.im or\
258 self.im.mode != self.mode or self.im.size != self.size:
--> 259 self.im = Image.core.new(self.mode, self.size)
260 # create palette (optional)
261 if self.mode == "P":
ValueError: unrecognized mode
It turns out that PIL does not even support the 16-bit signed integer pixel mode (I;16BS) that appears in the TIFF object above:
https://pillow.readthedocs.io/en/4.0.x/handbook/concepts.html#concept-modes
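If the goal is mainly to avoid writing the file to a local directory (the bytes still have to be downloaded into memory), one possible route is GDAL's in-memory /vsimem filesystem. A minimal sketch, assuming the response bytes from above are in rawdata:

import gdal
import matplotlib.pyplot as plt

# Expose the in-memory bytes to GDAL under a virtual filename.
vsi_path = '/vsimem/temp.tif'
gdal.FileFromMemBuffer(vsi_path, rawdata)

gtif = gdal.Open(vsi_path)
georaster = gtif.ReadAsArray()

plt.imshow(georaster)
plt.show()

# Free the virtual file when done.
gdal.Unlink(vsi_path)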
I'm streaming a PNG image from my iPhone to my MacBook over TCP. The MacBook code is from http://docs.python.org/library/socketserver.html#requesthandler-objects. How can the image be converted for use with OpenCV? PNG was selected because it is efficient, but other formats could be used.
I wrote a test program that reads the rawImage from a file, but I'm not sure how to convert it:
import numpy as np
import cv   # legacy OpenCV bindings (cv2.cv in some installs)
import cv2

# Read rawImage from a file, but in reality it will come from the TCPServer
f = open('frame.png', "rb")
rawImage = f.read()
f.close()

# Not sure how to convert rawImage
npImage = np.array(rawImage)
matImage = cv2.imdecode(rawImage, 1)

# show it
cv.NamedWindow('display')
cv.MoveWindow('display', 10, 10)
cv.ShowImage('display', matImage)
cv.WaitKey(0)
Andy Rosenblum's answer works, and it might be the best solution if you are using the outdated cv Python API (vs. cv2).
However, because this question is equally interesting for users of the latest versions, I suggest the following solution. The sample code below may be better than the accepted solution because:
It is compatible with the newer OpenCV Python API (cv2 vs. cv). This solution is tested under OpenCV 3.0 and Python 3. I believe only trivial modifications would be required for OpenCV 2.x and/or Python 2.7.x.
Fewer imports. This can all be done with numpy and OpenCV directly; there is no need for StringIO and PIL.
Here is how I create an OpenCV image decoded directly from a file object, or from a byte buffer read from a file object.
import cv2
import numpy as np

# read the data from the file
with open(somefile, 'rb') as infile:
    buf = infile.read()

# use numpy to construct an array from the bytes
x = np.frombuffer(buf, dtype='uint8')  # np.fromstring also works but is deprecated

# decode the array into an image
img = cv2.imdecode(x, cv2.IMREAD_UNCHANGED)

# show it
cv2.imshow("some window", img)
cv2.waitKey(0)
Note that in OpenCV 3.0 the naming convention for the various constants/flags changed, so if you are using OpenCV 2.x you will need to change the flag cv2.IMREAD_UNCHANGED accordingly. This code sample also assumes you are loading a standard 8-bit image; if not, you can play with the dtype='...' argument in np.frombuffer.
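Applied to the original question, where the PNG bytes arrive over TCP and are already in memory rather than in a file, a minimal sketch along the same lines, assuming the bytes are in rawImage:

import cv2
import numpy as np

# rawImage holds the PNG bytes received from the socket.
x = np.frombuffer(rawImage, dtype=np.uint8)
matImage = cv2.imdecode(x, cv2.IMREAD_COLOR)

cv2.imshow('display', matImage)
cv2.waitKey(0)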
Another way: also, in the case of reading an actual file, this will work for a Unicode path (tested on Windows):

import numpy
import cv2

with open(image_full_path, 'rb') as img_stream:
    file_bytes = numpy.asarray(bytearray(img_stream.read()), dtype=numpy.uint8)
img_data_ndarray = cv2.imdecode(file_bytes, cv2.CV_LOAD_IMAGE_UNCHANGED)  # cv2.IMREAD_UNCHANGED in OpenCV 3+
img_data_cvmat = cv.fromarray(img_data_ndarray)  # convert to the old cvmat type if needed (requires the legacy cv module)
I figured it out:
import cv   # legacy OpenCV bindings (cv2.cv in some installs)
import numpy as np
from PIL import Image
from StringIO import StringIO  # Python 2; use io.BytesIO on Python 3

# Read rawImage from a file, but in reality it will come from the TCPServer
f = open('frame.png', "rb")
rawImage = f.read()
f.close()

# Convert rawImage to Mat
pilImage = Image.open(StringIO(rawImage))
npImage = np.array(pilImage)
matImage = cv.fromarray(npImage)

# show it
cv.NamedWindow('display')
cv.MoveWindow('display', 10, 10)
cv.ShowImage('display', matImage)
cv.WaitKey(0)
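For reference, a rough Python 3 equivalent of the same approach, a sketch that swaps StringIO for io.BytesIO and the legacy cv module for cv2:

import io

import cv2
import numpy as np
from PIL import Image

with open('frame.png', 'rb') as f:
    rawImage = f.read()

# Convert rawImage to a numpy array via PIL, then to BGR order for OpenCV.
pilImage = Image.open(io.BytesIO(rawImage))
npImage = np.array(pilImage)
matImage = cv2.cvtColor(npImage, cv2.COLOR_RGB2BGR)

# show it
cv2.imshow('display', matImage)
cv2.waitKey(0)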
This works for me (these days):
import cv2
import numpy as np

data = open('016e263c726a.raw', 'rb').read()
x = np.frombuffer(data, dtype='uint8').reshape(2048, 2448)
cv2.imshow('x', x); cv2.waitKey(); cv2.destroyAllWindows()

But this reads a raw image saved without any specific format, which is why the dimensions have to be supplied to reshape.
(Your question seems to be tagged objective-c, but you ask about Python and your example is Python, so I'll use that.)
My first post on Stack Overflow!
The cv.LoadImageM method seems to be what you are looking for.
http://opencv.willowgarage.com/documentation/python/reading_and_writing_images_and_video.html
Example use:
http://opencv.willowgarage.com/wiki/PythonInterface/
LoadImage(filename, iscolor=CV_LOAD_IMAGE_COLOR) → None
Loads an image from a file as an IplImage.
Parameters:
filename (str) – Name of file to be loaded.
iscolor (int) –
Specific color type of the loaded image:
CV_LOAD_IMAGE_COLOR the loaded image is forced to be a 3-channel color image
CV_LOAD_IMAGE_GRAYSCALE the loaded image is forced to be grayscale
CV_LOAD_IMAGE_UNCHANGED the loaded image will be loaded as is.
The function cvLoadImage loads an image from the specified file and
returns the pointer to the loaded image. Currently the following file
formats are supported:
Windows bitmaps - BMP, DIB
JPEG files - JPEG, JPG, JPE
Portable Network Graphics - PNG
Portable image format - PBM, PGM, PPM
Sun rasters - SR, RAS
TIFF files - TIFF, TIF
Note that in the current implementation the alpha channel, if any, is
stripped from the output image, e.g. 4-channel RGBA image will be
loaded as RGB.
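A minimal usage sketch of the legacy cv API described above (the filename is just illustrative; note that, as the docs say, this loads from a file on disk, so it does not by itself cover the in-memory case):

import cv  # legacy OpenCV Python bindings (cv2.cv in some installs)

# Load the image from disk as a cvMat.
matImage = cv.LoadImageM('frame.png', cv.CV_LOAD_IMAGE_COLOR)

cv.NamedWindow('display')
cv.ShowImage('display', matImage)
cv.WaitKey(0)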
When you have to load from a file, this simple solution does the job (tested with opencv-python 3.2.0.6):
import cv2
img = cv2.imread(somefile)