I sent an image, encoded to Base64 in Java, over to Python, and this is the data I received on the Python side:
'iVBORw0KGgoAAAANSUhEUgAAAMMAAAEECAIAAAAahxdFAAAAA3NCSVQICAjb4U/gAAAgAElEQVR4\nnEy86Zokx5EkKCKq5h6ZlVUFEAfBAyR7e4+emd33f5jdb6enDzZJNIBCHZkRbqYq+8OjOFv5p77I\nyPBwdzNVudT59PqtRRlgrwaAPbKqig0gmbbby4wIotBacJK0LSmpgolM0WKXIwVTEhSku2GS5NAo\ngfSgGgAgyRFBqfhqzxv66H4IbtuF7kA8090dhAm1RrCIRmVeCgX0oKoBIJEB2j64HKmi5fLakCZa\nJpmNEVlVB1udy2vLdHODXl4+bU+vUwFWmwGaC+Jaa1MytOhwuTPUAdySGzjn5Mgwt2279W3BMnZx\ncRR6QOg2lVRm3tZxGfuiVREah4ok6S3yyg73o/eIKHelDJiKLUhqQhlKpmKSzBzLEVHBQbfiIca1\nnZnoYqTgLS43eWNQ3qVFWxwaCw7mPmLLUTX3HIBIjlSj9xgmbG8KAA0omRFs577d1hS4jWEIouAS\nhkaKjRqxxX55cLdFmiFVNwxJBKlYaGaQsEGKhqFMVXVEDMVEg5TQtIIRoQhkZAy6tWXkuIwLhKf9\nAdK+XbaRQ7HleNovr/fLY2jfwxLJ3LTlWG0EDjvEEPdIODNjYi40SAKCGZq1QnRXKpa60ZHZbQ0Y\nvYmRPFYTkjoZXTXljWECdqGINjCNMQbICK4+JAUH6RxjoocZ1NZw2oAFQaYjo23JMK+er7ClsguR\ng+gIJzWrLq/GS63LPkBameShSqsJiTQywqAFB4vMzAiOMdZtXRL7uExXaLixKzcxIj+xLxagBgqU\nNEIBltrJkEcTUUy1QGBEZGhwF93VewjdETF7DSXQoWii3SCGIhXFtp0UQrYzQhLQCCR13nECBKhw\ndWzbXnAylwvtEQnAdhMkKdo2qAgDBkhWLaVEwR6RCu4xmnjICzMQytid2jI1MuxLjKfHi+VLjJAJ\nxBiwPaJRL25TDiG'
How do I convert this to a bitmap or an image in Python? If code snippets are needed, please tell me.
I tried using base64.b64decode and it raises an "Incorrect padding" error.
Based on one of the comments, I tried base64.b64decode(base64_str + '===') and got this:
b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR\x00\x00\x00\xc3\x00\x00\x01\x04\x08\x02\x00\x00\x00\x1a\x87\x17E\x00\x00\x00\x03sBIT\x08\x08\x08\xdb\xe1O\xe0\x00\x00
\x00IDATx\x9c\x94\xbc\xddrdI\x92\xa4\xa7j\xe6~"\x90Y5US5\xdd=?|\x02^S\xf6\xfd\x1feeE\xb8\xec\xed\xee\xea\xaa\xcaLD\xc4q7S\xe5\xc5A\xf3\x86\xe4\x92\x03\x88\x00\x02\x08
q"\xdc\x8e\xfd\xa8~\x16\xfc_\xff\xcb\xff\x06gD0\x14\x11\x89\x1b3\x8a\xfd9'\xc9\xceAL\x04oG\x94\xf6m\xde\xe7q\xd8\xbc\xe7-#t?\x9e\xd4\xcfyS\xceO\xe3\xf0\xc1\xdb\xedv\x8c\xf1\xc3\xdbQo\xf3\x9e7\xc13\x8f\x18\xbe3s\x1c\x9f\x8e\x999\x01\x1co\xc7\x18\xe36\xe6\x88\x9c\xb7q\xbf\xddn\xb7\xe3\xed\xed\xed\xc7\x1f\xbe\xff\x97\x1f~\xf8tL\xfc'?L\xfc\xf2\xfb\xb7\xbf\xfc\xed\xef\x7f\xf9\xdb\xdf\xaa\xca\x80\xa4\x86\x01\x10S\x92\x89\xcclX6\x1d\x0f\xed\xbd\x95\x99\x92\x8e1\x11\xb2a3\xc2\x00nd[\x81\xdcP\x18\xc8\x904\x83[\xb6\r
a\x07mJ\n\x8b$\x00\xb7\x00\xb4E\xa8\xbb'!
\x80\xdd\x1d#\x80I\xad2\xc9\xe1n\x17I\xb7(7:\x15\xee\x06\xa4\x8e\xd5u\x9e\xfbQ\xfb\xf1<\xdf_\xef_\x1e\xef:\xd9\xddR-\x00r\x9a,\xe5\xc0\xb3;\xd4;\x80\xd6\x00_\xab\xdf\x8e\xe3\xac\x1d\x8dR\x87\xad\xa6Lj-\xb4\xdb\x98\xf9~.\x90\xec\x8f\xab\x1a\xe6\xc3\xeb\xd3\x1e\x95\xb5\na=\xa5\x99}\x9f\xb8a\xbe\xdd\x8f\xc8\xfe\xf4\xe9~\x8c\x81\xc1\xa0\xbb\xeb<\xcfok\xd7\xf2\xde\xcfD\xde\xd5\x8b\xdd}\xe6\xcf\xff\xf6\xc7c\xde\x97\xbb\r2"A\x12\xa4\x18\x87\x91\xe1\x88\xc1\xe1[\x8c\x1c\xa9\x08\x8e19\xd6\xc0\xfd>\xb1\x15s\x98\x18\x11i\xcc7\x98\x9d\xc7q\xe41#\x96FF\x98\x91#\x02\x02\xaa\xeb~\x1c\xb2g\xa4$\'\x03\x90\r0#Z\xde\xea\x91c\xce\xc1\xffL$\x11\x84\xd4\xaf\xd7z\x9e\xbb\xbbl\xcb&\t\x87)\xc0\x80i\x1b\xd7y7\x0c\x19\x80d\xc12\x1c\xb4\xa4
a%\xc3\x06\xc9\x88\x80\x1b\x82\xd4\xa0\x80\xb8B'\x05\xb8\x08\x02\x08\xda\xeav;#\xd4p\x83\x0e\x90\xb2\x01\xd8\x90\xd1\x92[-\x03\x81V\xcb\xf2u\x91av5\xdce\xb5\xb4\xbb\xbf<\x9f\xe7y>\xcegm\xac\xf3l\xe1\xb1_\x13\xa4]\xb5C\xa8\xae\xb3\xfbN\x08ZB\x16\xc5\xe8\xee\xf7\xbdb\x8c\xd1\xf1\xdc[\xaa\x01\xb7\xfb\x86h\xc0f\xef}\xb6\x07\x83\xec\xb6\x06\x93\x11\xd9H\xb9\xdc\n\xd0
x'?}\xbe\xff\xe9\x87\x1f\x7f\xfa\xe7\xcf?~\xf7\xf9~\x9b\x99\x98\x19I\xbe\xd6\xd9\xdd\xab\xb1\xd6\xbe\x8d\x94\\xd1\x141F\xfe\xe9\x7f\xf9\xf7fX\xce\x08\xdb#SIY\x86\xcc\x18s\x9e\xbd\xe7\x18s\x1eB\xde8+|\x8b\xe0`r\xb4\xd6\xdb\xed\xd3#\x1c\x11\xe3\x00\x88\xc1\xf8<\xde\xc0\xb0tv!'
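Decoding that dump works once the whitespace and padding are handled. A minimal sketch, assuming base64_str is the received string (the leading b'\x89PNG' in the decoded bytes confirms it's a PNG):

import base64
from io import BytesIO
from PIL import Image

# The Java encoder inserted line breaks; strip all whitespace first.
clean = ''.join(base64_str.split())
# Pad to a multiple of 4 so b64decode doesn't raise "Incorrect padding".
clean += '=' * (-len(clean) % 4)

img = Image.open(BytesIO(base64.b64decode(clean)))
img.save('decoded.png')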
I receive bytes converted from an image on an Android client. How can I convert them back to an image in Python?
I use this code:
from io import BytesIO
from PIL import Image

img = Image.open(BytesIO(data))
img.show()
img.save('D:\\1.jpg')
but I can only get an incomplete image like:
The image on the right is what I'm trying to send to Python, and the left is what I get. How can I solve this?
PS: the data is complete, because I can save the image in full using Eclipse.
I have already solved the problem. I made a silly mistake: the buffer size I set for socket.recv was too short.
I set a longer buffer size, like 100000000, and then saving the image is as easy as:
f = open('D:\\2.jpg', 'wb')
f.write(data)
f.close()
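That said, a single recv is not guaranteed to return the whole payload no matter how large the buffer is; TCP can deliver the data in chunks. A more robust sketch, assuming conn is the connected socket and the sender closes the connection after the image:

chunks = []
while True:
    chunk = conn.recv(4096)   # read whatever has arrived so far
    if not chunk:             # empty result means the sender closed the socket
        break
    chunks.append(chunk)
data = b''.join(chunks)       # the complete image bytes

with open('D:\\2.jpg', 'wb') as f:
    f.write(data)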
As the title suggests, I'm having some trouble with UIImage color space conversions. The TL;DR version is that I need a way to convert a UIImage in BGR format to RGB.
Here's the flow of events in my app:
1. App: get image
2. App: convert to base64 and send to server
3. Server: convert base64 to an image to use
4. Server: convert image back to a base64 string
5. Server: send base64 string to app
6. App: convert base64 string to UIImage
[Image: RGB version of the Test-Image on the server]
[Image: BGR version of the Test-Image client-side]
It's at this point that the UIImage displayed is in BGR format. My best guess is that something goes wrong at step 4, because up until then the image is in RGB format (I've written it to a file and checked). I've added the code to step 4 below just for reference. I'm actively looking to change the color space of the UIImage client-side, but I'm not opposed to fixing the issue server-side. Either solution would work.
Step 2: Convert UIImage to base64 string
let imageData: Data = UIImageJPEGRepresentation(map.image,0.95)!
let base64EnCodedStr: String = imageData.base64EncodedString()
Step 3: Convert base64 String to a PIL Image
import io
import cv2
import base64
import numpy as np
from PIL import Image
# Take in base64 string and return PIL image
def stringToImage(base64_string):
    imgdata = base64.b64decode(base64_string)
    return Image.open(io.BytesIO(imgdata))
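One thing worth keeping in mind here: PIL decodes to RGB channel order, while OpenCV conventionally treats arrays as BGR. A small sketch of the pitfall, using the helper above:

img = stringToImage(base64_string)  # PIL image, RGB channel order
arr = np.array(img)                 # shape (height, width, 3): R, G, B
# Any cv2 call that assumes BGR (cv2.imwrite, cv2.imshow, ...) will show
# red and blue swapped unless you convert first:
arr_bgr = cv2.cvtColor(arr, cv2.COLOR_RGB2BGR)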
Step 4: Convert Image (numpy array) back to a base64 string
# Convert a numpy array into a base64 string in JPEG format
def imageToString(npArray):
    # convert array to PIL Image
    newImage = Image.fromarray(npArray.astype('uint8'), 'RGB')
    # convert to JPEG format
    file = io.BytesIO()
    newImage.save(file, format="JPEG")
    # reset file pointer to start
    file.seek(0)
    img_bytes = file.read()
    # encode data
    encodedData = base64.b64encode(img_bytes)
    return encodedData.decode('ascii')
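For orientation, a minimal round-trip sketch tying steps 3 and 4 together (base64_from_client is a hypothetical name for the payload arriving from step 2):

pil_img = stringToImage(base64_from_client)  # step 3: base64 -> PIL image
npArray = np.array(pil_img)                  # PIL hands back RGB order
# ... server-side processing on npArray goes here ...
response = imageToString(npArray)            # step 4: array -> base64 JPEG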
EDIT:
As was mentioned earlier, there were two locations where I could do the conversion: server-side or client-side. Thanks to the responses to this question I was able to find solutions for both scenarios.
Solution 1: Server-side
Referring to the code in step 4, change the first line in that function to the following:
# convert array to PIL Image
newImage = Image.fromarray( npArray[...,[2,1,0]] ) # swap color channels which converts BGR -> RGB
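Equivalently, since cv2 is already imported on the server, OpenCV's converter performs the same channel reordering (a sketch, not the poster's original code):

# cvtColor with COLOR_BGR2RGB reorders channels exactly like npArray[..., [2, 1, 0]]
newImage = Image.fromarray(cv2.cvtColor(npArray.astype('uint8'), cv2.COLOR_BGR2RGB))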
Solution 2: Client-side
Refer to @dfd's solution. It's well written and works wonderfully. Here's the slightly adapted version I've tested in my application (which uses Swift 4).
let data = NSData(base64Encoded: base64String, options: .ignoreUnknownCharacters)
let uiInput = UIImage(data: data! as Data)
let ciInput = CIImage(image: uiInput!)
let ctx = CIContext(options: nil)
let swapKernel = CIColorKernel(source:
"kernel vec4 swapRedAndGreenAmount(__sample s) {" +
"return s.bgra;" +
"}"
)
let ciOutput = swapKernel?.apply(extent: (ciInput?.extent)!, arguments: [ciInput as Any])
let cgImage = ctx.createCGImage(ciOutput!, from: (ciInput?.extent)!)
let rgbOutput = UIImage(cgImage: cgImage!)
Here's a very simple CIKernel to swap things:
kernel vec4 swapRedAndGreenAmount(__sample s) {
return s.bgra;
}
Here's the Swift code to use it:
let uiInput = UIImage(named: "myImage")
let ciInput = CIImage(image: uiInput!)
let ctx = CIContext(options: nil)
let swapKernel = CIColorKernel( source:
"kernel vec4 swapRedAndGreenAmount(__sample s) {" +
"return s.bgra;" +
"}"
)
let ciOutput = swapKernel?.apply(extent: (ciInput?.extent)!, arguments: [ciInput as Any])
let cgImage = ctx.createCGImage(ciOutput!, from: (ciInput?.extent)!)
let uiOutput = UIImage(cgImage: cgImage!)
Be aware of a few things:
- This will work on devices running iOS 9 or later.
- Almost as important, it uses CoreImage and the GPU, so testing on the simulator may take seconds to render, while on a device it will take microseconds.
- I tend to use a CIContext to create a CGImage before ending up with a UIImage. You may be able to remove this step and go straight from a CIImage to a UIImage.
- Excuse the wrapping/unwrapping; it's converted from old code. You can probably do a better job.
Explanation:
Using CoreImage "Kernel" code, which until iOS 11 could only be a subset of GLSL code, I wrote a simple CIColorKernel that takes a pixel's RGB value and returns the pixel color as BGR.
A CIColorKernel is optimized to work on a single pixel at a time, with no access to the pixels surrounding it. A CIWarpKernel, by contrast, is optimized to "warp" a pixel based on the pixels around it. Both are (more or less) optimized subclasses of CIKernel, which - until iOS 11 and Metal Performance Shaders - is about the closest you get to using OpenGL inside of CoreImage.
Final edit:
What this solution does is swap each pixel's color channels using CoreImage. It's fast because it uses the GPU - deceptively so, because the simulator gives you nothing close to a device's real-time performance - and simple, because it just swaps BGR to RGB.
The actual code to do this is straightforward. Hopefully it works as a start for those who want to do much larger "under the hood" things using CoreImage.
EDIT (25 February 2021):
As of WWDC 2019, Apple deprecated OpenGL - specifically GLKit - in favor of MetalKit. For a color kernel like this, it's rather trivial to convert the code. Warp kernels are slightly trickier, though.
When Apple will "kill" OpenGL is hard to say. We all know that someday UIKit will also be deprecated, but (showing my age now) it may not be in my lifetime. YMMV.
I don't think there's a way to do it using CoreImage or CoreGraphics since iOS does not give you much leeway with regards to creating custom colorspaces. However, I found something that may help using OpenCV from this article: https://sriraghu.com/2017/06/04/computer-vision-in-ios-swiftopencv/. It requires a bit of Objective-C but with a bridging header, the code will be hidden away once it's written.
Add a new file -> ‘Cocoa Touch Class’, name it ‘OpenCVWrapper’ and set the language to Objective-C. Click Next and select Create. When prompted to create a bridging header, click the ‘Create Bridging Header’ button. Now you can observe that there are 3 files created with the names: OpenCVWrapper.h, OpenCVWrapper.m, and -Bridging-Header.h.
Open ‘-Bridging-Header.h’ and add the following line: #import "OpenCVWrapper.h"
Go to the ‘OpenCVWrapper.h’ file and add the following lines of code:
#import <Foundation/Foundation.h>
#import <UIKit/UIKit.h>

@interface OpenCVWrapper : NSObject

+ (UIImage *) rgbImageFromBGRImage: (UIImage *) image;

@end
Rename OpenCVWrapper.m to “OpenCVWrapper.mm” for C++ support and add the following code:
#import "OpenCVWrapper.h"
// import necessary headers
#import <opencv2/core.hpp>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/imgproc/imgproc.hpp>
using namespace cv;
@implementation OpenCVWrapper
+ (UIImage *) rgbImageFromBGRImage: (UIImage *) image {
// Convert UIImage to cv::Mat
Mat inputImage; UIImageToMat(image, inputImage);
// If input image has only one channel, then return image.
if (inputImage.channels() == 1) return image;
// Convert the default OpenCV's BGR format to RGB.
Mat outputImage; cvtColor(inputImage, outputImage, CV_BGR2RGB);
// Convert the BGR OpenCV Mat to UIImage and return it.
return MatToUIImage(outputImage);
}
@end
The minor difference from the linked article is they are converting BGR to grayscale but we are converting BGR to RGB (good thing OpenCV has tons of conversions!).
Finally...
Now that there is a bridging header to this Objective-C class you can use OpenCVWrapper in Swift:
// assume bgrImage is your image from the server
let rgbImage = OpenCVWrapper.rgbImage(fromBGR: bgrImage)
// double check the syntax on this ^ I'm not 100% sure how the bridging header will convert it
You can use the underlying CGImage to create a CIImage in the format you desire.
func changeToRGBA8(image: UIImage) -> UIImage? {
    guard let cgImage = image.cgImage,
        let data = cgImage.dataProvider?.data else { return nil }
    let flipped = CIImage(bitmapData: data as Data,
                          bytesPerRow: cgImage.bytesPerRow,
                          size: CGSize(width: cgImage.width, height: cgImage.height),
                          format: kCIFormatRGBA8,
                          colorSpace: cgImage.colorSpace)
    return UIImage(ciImage: flipped)
}
The only catch is that this works only if the UIImage was created with a CGImage in the first place! You can also convert it to a CIImage and then a CGImage, but the same restriction applies: it only works if the UIImage was created from a CIImage.
There are ways around this limitation that I'll explore and post here if I have a better answer.
I am trying to send an image over a socket connection for video chat, but the reconstruction of the image from its bytes is incorrect. Here is my conversion of the image to bytes for sending:
pil_im = Image.fromarray(img)
b = io.BytesIO()
pil_im.save(b, 'jpeg')
im_bytes = b.getvalue()
return im_bytes
This sends fine; however, I cannot work out how to turn these bytes back into an image. Here is my code to reformat them into an image for display:
pil_bytes = io.BytesIO(im_bytes)
pil_image = Image.open(pil_bytes)
cv_image = cv2.cvtColor(np.array(pil_image), cv2.COLOR_RGB2BGR)
return cv_image
The second line there raises the following exception:
cannot identify image file <_io.BytesIO object at 0x0388EF60>
I have looked at some other threads (this one and this one) but no solution has been helpful for me. I am also using this as reference to try to correct myself, but what seems to work fine for them just doesn't for me.
Thank you for any assistance you can provide and please excuse any errors, I am still learning python.
First of all, thank you! The code in your question helped me solve the first part of my problem. The second part was already solved for me using this simple code (don't convert to an array):
dataBytesIO = io.BytesIO(im_bytes)
image = Image.open(dataBytesIO)
Hope this helps
I'm trying to convert this string in base64 (http://pastebin.com/uz4ta0RL) to something usable by OpenCV.
Currently I am using this code (img_temp is the string) to be able to use the OpenCV functions: it converts the string to a temp image (imagetosave.jpg) and then loads that temp image with cv2.imread.
fh = open('imagetosave.jpg','wb')
fh.write(img_temp.decode('base64'))
fh.close()
img = cv2.imread('imagetosave.jpg',-1)
Though this method works, I would like to be able to use the OpenCV functions without using a temp image.
Help would be much appreciated, thanks!
To convert a string from base64 back to plain text you can use
import base64
base64.b64decode("TGlrZSB0aGF0Cg==")
but it isn't clear whether that is what you need.
You can convert your decoded string to a NumPy array of bytes, which you can then pass to the imdecode function:

import numpy as np

img_array = np.frombuffer(img_temp.decode('base64'), dtype=np.uint8)
img = cv2.imdecode(img_array, -1)
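As a side note, str.decode('base64') is Python 2 only; on Python 3 the same in-memory pipeline would look like this sketch (with img_temp still being the base64 string):

import base64
import cv2
import numpy as np

img_bytes = base64.b64decode(img_temp)                # raw encoded image bytes
img_array = np.frombuffer(img_bytes, dtype=np.uint8)  # 1-D uint8 buffer
img = cv2.imdecode(img_array, cv2.IMREAD_UNCHANGED)   # IMREAD_UNCHANGED == -1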