I've been trying to send multiple images from my Flask server to Flutter. I've tried everything: I either get a "bytes cannot be JSON serialised" error on the server, or Flutter throws an error while parsing the image. I've been using Image.memory() to render the response.
The weird part is that if I send over a single image in bytes format, it works as intended.
Any help is greatly appreciated.
@app.route('/image', methods=['POST'])
def hola():
    with open("1.jpg", "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
    return encoded_string
This server-side code works as intended. The following is the code I used in Flutter:
Future<String> uploadImage(filename, url) async {
  // List<String> images;
  var request = http.MultipartRequest('POST', Uri.parse(url));
  request.files.add(
    await http.MultipartFile.fromPath('picture', filename),
  );
  request.headers.putIfAbsent('Connection', () => "Keep-Alive, keep-alive");
  request.headers.putIfAbsent('max-age', () => '100000');
  print(request.headers.entries);
  http.Response response =
      await http.Response.fromStream(await request.send());
  print("Result: ${response.statusCode}");
  // print(y);
  return response.body;
  // return res;
}
Then I call this function from a button's onPressed event, like this:
var res = await uploadImage(file.path, url);
setState(() {
  images = res;
});
Container(
  child: state == ""
      ? Text('No Image Selected')
      : Image.memory(base64.decode(images)),
),
The above is the working example; it renders the image I send. The following is where I face the problem:
Server Side:
@app.route('/send', methods=['GET'])
def send():
    with open("1.jpg", "rb") as image_file:
        encoded_string = base64.b64encode(image_file.read())
    with open("2.jpg", "rb") as image_file:
        encoded_string2 = base64.b64encode(image_file.read())
    x = [str(encoded_string2), str(encoded_string)]
    return jsonify({'images': x})
To handle the above, here is my Flutter code:
var request = http.MultipartRequest('POST', Uri.parse(url));
request.files.add(
  await http.MultipartFile.fromPath('picture', filename),
);
request.headers.putIfAbsent('Connection', () => "Keep-Alive, keep-alive");
request.headers.putIfAbsent('max-age', () => '100000');
print(request.headers.entries);
http.Response response =
    await http.Response.fromStream(await request.send());
print("Result: ${response.statusCode}");
var x = jsonDecode(response.body);
var y = x['images'];
var z = y[0];
images = z;
To render the image, the Container code remains the same. I get this error:
The following _Exception was thrown resolving an image codec:
Exception: Invalid image data
or I get:
Unexpected character at _
I tried parsing it in different ways, for example:
var x = jsonDecode(response.body);
var y = x['images'];
var z = utf8.encode(y[0]);
images = base64Encode(x[0]);
or this:
var x = jsonDecode(response.body);
var y = x['images'];
var z = base64Decode(y[0]);
images = z;
but nothing works.
If you are trying to return several image binaries in one response, I assume it looks something like
{ "image1": "content of image one bytes", "image2": "content of image two bytes" }
and, as you have found, a problem arises because binary content cannot be encoded naively into JSON.
What you would typically do is convert it to base64 (and decode the result to a str so it is JSON-serialisable):
{"image1": base64.b64encode(open("my.png", "rb").read()).decode("utf-8"), "image2": ...}
Most things can render base64 (I'm not entirely sure specifically about Flutter Image widgets, but it certainly works for img tags in HTML):
<img src="data:image/png;base64,<BASE64 encoded data>" />
If not, you can always get the bytes back with a base64 decode (which Flutter almost certainly has in one of its libraries).
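Putting those pieces together, a minimal, untested Flask sketch of that idea could look like the following (file names taken from the question; the .decode("utf-8") is what turns the base64 bytes into a JSON-serialisable string):

import base64

from flask import Flask, jsonify

app = Flask(__name__)

def encode_image(path):
    # Read the file and return its base64 content as a plain str,
    # which jsonify can serialise without complaining about bytes.
    with open(path, "rb") as image_file:
        return base64.b64encode(image_file.read()).decode("utf-8")

@app.route('/send', methods=['GET'])
def send():
    images = [encode_image("1.jpg"), encode_image("2.jpg")]
    return jsonify({'images': images})

On the Flutter side, each element of images should then decode cleanly with base64Decode(y[i]) and render via Image.memory.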
I'm trying to capture a single image from an H.264 video stream on my Raspberry Pi. The stream uses raspivid with a websocket, but I cannot show a correct image in imshow(). I also tried to set .reshape(), but got ValueError: cannot reshape array of size 3607 into shape (480,640,3).
On the client side, I successfully connect to the video stream and get the incoming bytes. The server uses raspivid-broadcaster for video streaming. I guessed the first byte could be decoded to an image, so I wrote the following code.
import asyncio
import json

import cv2
import numpy
import websockets

async def get_image_from_h264_streaming():
    uri = "ws://127.0.0.1:8080"
    async with websockets.connect(uri) as websocket:
        frame = json.loads(await websocket.recv())
        print(frame)
        width, height = frame["width"], frame["height"]

        response = await websocket.recv()
        print(response)

        # transform the bytes read into a numpy array
        in_frame = (
            numpy
            .frombuffer(response, numpy.uint8)
            # .reshape([height, width, 3])
        )

        # Display the frame
        cv2.imshow('in_frame', in_frame)
        cv2.waitKey(0)

asyncio.get_event_loop().run_until_complete(get_image_from_h264_streaming())
print(frame) shows
{'action': 'init', 'width': 640, 'height': 480}
print(response) shows
b"\x00\x00\x00\x01'B\x80(\x95\xa0(\x0fh\x0..............xfc\x9f\xff\xf9?\xff\xf2\x7f\xff\xe4\x80"
Any suggestions?
---------------------------------- EDIT ----------------------------------
Thanks for this suggestion. Here is my updated code.
import asyncio

import av
import cv2
import websockets

def decode(raw_bytes: bytes):
    code_ctx = av.CodecContext.create("h264", "r")
    packets = code_ctx.parse(raw_bytes)
    for i, packet in enumerate(packets):
        frames = code_ctx.decode(packet)
        if frames:
            return frames[0].to_ndarray()

async def save_img():
    async with websockets.connect("ws://127.0.0.1:8080") as websocket:
        image_init = await websocket.recv()
        count = 0
        combined = b''
        while count < 3:
            response = await websocket.recv()
            combined += response
            count += 1
        frame = decode(combined)
        print(frame)
        cv2.imwrite('test.jpg', frame)

asyncio.get_event_loop().run_until_complete(save_img())
print(frame) shows
[[109 109 109 ... 115 97 236]
[109 109 109 ... 115 97 236]
[108 108 108 ... 115 97 236]
...
[111 111 111 ... 101 103 107]
[110 110 110 ... 101 103 107]
[112 112 112 ... 104 106 110]]
Below is the saved image I get. It has the wrong size, 740 (height) x 640 (width); the correct one is 480 (height) x 640 (width). And I'm not sure why the image is grayscale instead of a color one.
---------------------------------- EDIT 2 ----------------------------------
Below is the main method to send data in raspivid.
raspivid - index.js
const {port, ...raspividOptions} = {...options, profile: 'baseline', timeout: 0};
videoStream = raspivid(raspividOptions)
    .pipe(new Splitter(NALSeparator))
    .pipe(new stream.Transform({
        transform: function (chunk, _encoding, callback){
            ...
            callback();
        }
    }));

videoStream.on('data', (data) => {
    wsServer.clients.forEach((socket) => {
        socket.send(data, {binary: true});
    });
});
stream-split - index.js (a line of code shows the max buffer size is 1 MB)
class Splitter extends Transform {

    constructor(separator, options) {
        ...
        this.bufferSize = options.bufferSize || 1024 * 1024 * 1; //1Mb
        ...
    }

    _transform(chunk, encoding, next) {
        if (this.offset + chunk.length > this.bufferSize - this.bufferFlush) {
            var minimalLength = this.bufferSize - this.bodyOffset + chunk.length;
            if(this.bufferSize < minimalLength) {
                //console.warn("Increasing buffer size to ", minimalLength);
                this.bufferSize = minimalLength;
            }

            var tmp = new Buffer(this.bufferSize);
            this.buffer.copy(tmp, 0, this.bodyOffset);
            this.buffer = tmp;
            this.offset = this.offset - this.bodyOffset;
            this.bodyOffset = 0;
        }
        ...
    }
};
----------Completed Answer (Thanks Ann and Christoph for the direction)----------
Please see in answer section.
One question: how is the frame/stream transmitted through the websocket? The byte sequence looks like a NAL unit; it could be PPS or SPS etc. How do you know it's an I-frame, for example? I don't know if cv2.imshow supports raw H.264. Look into PyAV: there you can open raw H.264 bytes and then try to extract one frame out of it :) Let me know if you need help with PyAV. Look at this post, there is an example of how you can do it.
Update
Based on your comment, you need a way to parse and decode a raw H.264 stream. Below is a function that gives you an idea of how to do that; you need to pass the bytes received from the websocket to this function. Be aware that there needs to be enough data to extract one frame.
pip install av
PyAV docs
import av

# Feed in your raw bytes from the socket
def decode(raw_bytes: bytes):
    code_ctx = av.CodecContext.create("h264", "r")
    packets = code_ctx.parse(raw_bytes)
    for i, packet in enumerate(packets):
        frames = code_ctx.decode(packet)
        if frames:
            return frames[0].to_ndarray()
You could also try to read the stream directly with PyAV using av.open("tcp://127.0.0.1:").
Update 2
Could you please test this? The issues you describe in your edit are weird. You don't need a websocket layer; I think you can read directly from raspivid:
raspivid -a 12 -t 0 -w 1280 -h 720 -vf -ih -fps 30 -l -o tcp://0.0.0.0:5000
import av
import cv2

def get_first_frame(path):
    stream = av.open(path, 'r')
    for packet in stream.demux():
        frames = packet.decode()
        if frames:
            return frames[0].to_ndarray(format='bgr24')

ff = get_first_frame("tcp://0.0.0.0:5000")
cv2.imshow("Video", ff)
cv2.waitKey(0)
The PyAV and Pillow packages are required; there is no need to use cv2 anymore. So, add the packages:
pip3 install av
pip3 install Pillow
Code:
import asyncio
import websockets
import av
import PIL

def decode_image(raw_bytes: bytes):
    code_ctx = av.CodecContext.create("h264", "r")
    packets = code_ctx.parse(raw_bytes)
    for i, packet in enumerate(packets):
        frames = code_ctx.decode(packet)
        if frames:
            return frames[0].to_image()

async def save_img_from_streaming():
    uri = "ws://127.0.0.1:8080"
    async with websockets.connect(uri) as websocket:
        image_init = await websocket.recv()
        count = 0
        combined = b''
        while count < 2:
            response = await websocket.recv()
            combined += response
            count += 1
        img = decode_image(combined)
        img.save("img1.png", "PNG")

asyncio.get_event_loop().run_until_complete(save_img_from_streaming())
In Christoph's answer, to_ndarray is suggested, but I found that it somehow results in a grayscale image. This is caused by the returned numpy array having the form [[...], [...], [...], ...], whereas a colored image should be an array like [[[...], [...], [...], ...], ...]. Looking at the PyAV docs, there is another method called to_image, which returns an RGB PIL.Image of the frame, so just using that function gets what I need.
Note that the response from await websocket.recv() may be different; it depends on how the server sends the data.
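If you would rather keep the numpy/cv2 path instead of Pillow, asking PyAV for an explicit packed pixel format should avoid the grayscale/plane issue. A hypothetical variant of decode_image, along the lines of the Update 2 snippet above:

import av
import cv2

def decode_frame_bgr(raw_bytes: bytes):
    # Same parsing as decode_image, but request packed BGR pixels so the
    # returned array has shape (height, width, 3) and cv2 can write it.
    code_ctx = av.CodecContext.create("h264", "r")
    for packet in code_ctx.parse(raw_bytes):
        frames = code_ctx.decode(packet)
        if frames:
            return frames[0].to_ndarray(format="bgr24")

# frame = decode_frame_bgr(combined)
# cv2.imwrite("test.jpg", frame)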
This is a problem I once had when attempting to send numpy images (converted to bytes) through sockets. The problem was that the byte string was too long.
So instead of sending the entire image at once, I sliced the image so that I had to send, say, 10 slices of the image. Once the other end receives the 10 slices, it simply stacks them back together (see the sketch below).
Keep in mind that, depending on the size of your images, you may need to slice them into more or fewer pieces to achieve the best results (efficiency, no errors).
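A rough Python sketch of that idea (the shape, dtype and slice count here are made up for illustration):

import numpy as np

def slice_image(img, n_slices=10):
    # Split the image row-wise into roughly equal chunks; each chunk can be
    # converted to bytes and sent separately.
    return np.array_split(img, n_slices, axis=0)

def reassemble(slices):
    # Stack the received chunks back into the original image.
    return np.vstack(slices)

img = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder image
chunks = slice_image(img, 10)
assert np.array_equal(reassemble(chunks), img)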
If I send the second request by hand, it works fine, but if I try to do it from Python, it fails. There are no errors in the Apache log. The file remains the same and is not overwritten.
Code:
def fen_actions(mode, fen):
    global url
    if mode: # 1-> Read 2-> Write
        params = {'mode':'readFen'}
    else:
        params = {'mode':'putFen','fen':fen}
    r = requests.get(url, params)
    return r.text
fen_actions(2,chess.STARTING_BOARD_FEN) #This is not working. Starting fen is just a string
temp_readed_fen = fen_actions(1,0) # This works
PHP code:
<?php
$f = fopen("tempfen.txt","r+") or die("Unable to open a file");
if (strval($_GET["mode"]) === strval("putFen"))
{
    if(strval($_GET['fen']) != strval(fread($f,filesize("tempfen.txt"))))
    {
        file_put_contents("tempfen.txt","");
        fwrite($f,$_GET['fen']);
    }
}
else if ($_GET["mode"] === strval("readFen"))
{
    echo(fread($f,filesize("tempfen.txt")));
}
ini_set('display_errors',1);
?>
If I understand what you're trying to do, I think your if statements aren't actually checking what you want. According to your comment, it should be:
if mode == 1: # 1-> Read 2-> Write
    params = {'mode':'readFen'}
elif mode == 2:
    params = {'mode':'putFen','fen':fen}
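For completeness, here is a minimal sketch of the corrected function together with the two calls from the question (the url placeholder is mine; the real endpoint is not shown in the post):

import requests
import chess

url = "http://example.com/fen.php"   # hypothetical placeholder

def fen_actions(mode, fen):
    if mode == 1:                                  # 1 -> Read
        params = {'mode': 'readFen'}
    elif mode == 2:                                # 2 -> Write
        params = {'mode': 'putFen', 'fen': fen}
    r = requests.get(url, params=params)
    return r.text

fen_actions(2, chess.STARTING_BOARD_FEN)   # write the starting FEN
temp_readed_fen = fen_actions(1, 0)        # read it back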
I'm trying to decompress the response from a web request using Python requests and zlib, but I'm not able to decompress the response content properly. Here's my code:
import requests
import zlib

URL = "http://" # omitted real url
r = requests.get(URL)
print r.content
data = zlib.decompress(r.content, zlib.MAX_WBITS)
print data
However, I keep getting various errors when changing the wbits parameter:
zlib.error: Error -3 while decompressing data: incorrect header check
zlib.error: Error -3 while decompressing data: invalid stored block lengths
I tried the wbits parameters for deflate, zlib and gzip as noted here: zlib.error: Error -3 while decompressing: incorrect header check
But I still can't get past these errors. I'm trying to do this in Python; I was given this piece of code that does it in Objective-C, but I don't know Objective-C:
#import "GTMNSData+zlib.h"
+ (NSData*) uncompress: (NSData*) data
{
Byte *bytes= (Byte*)[data bytes];
NSInteger length=[data length];
NSMutableData* retdata=[[NSMutableData alloc] initWithCapacity:length*3.5];
NSInteger bSize=0;
NSInteger offSet=0;
while (true) {
offSet+=bSize;
if (offSet>=length) {
break;
}
bSize=bytes[offSet];
bSize+=(bytes[offSet+1]<<8);
bSize+=(bytes[offSet+2]<<16);
bSize+=(bytes[offSet+3]<<24);
offSet+=4;
if ((bSize==0)||(bSize+offSet>length)) {
LogError(#"Invalid");
return data;
}
[retdata appendData:[NSData gtm_dataByInflatingBytes: bytes+offSet length:bSize]];
}
return retdata;
}
According to the Python requests documentation at
http://docs.python-requests.org/en/master/user/quickstart/#binary-response-content
it says:
You can also access the response body as bytes, for non-text requests:
>>> r.content
b'[{"repository":{"open_issues":0,"url":"https://github.com/...
The gzip and deflate transfer-encodings are automatically decoded for you.
If requests understands the encoding, the content should therefore already be uncompressed.
Use r.raw if you need access to the original data in order to handle a different decompression mechanism:
http://docs.python-requests.org/en/master/user/quickstart/#raw-response-content
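A small sketch of both access paths (the URL is omitted in the question, so a placeholder is used here):

import requests

URL = "http://example.com/compressed"   # placeholder for the omitted URL

# Option 1: let requests handle gzip/deflate transparently.
r = requests.get(URL)
body = r.content          # already decompressed if the server used gzip/deflate

# Option 2: read the undecoded wire bytes and decompress them yourself.
r = requests.get(URL, stream=True)
raw = r.raw.read()        # the original bytes, untouched by requests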
The following is an untested translation of the Objective-C code:
import zlib
import struct

def uncompress(data):
    length = len(data)
    ret = []
    bSize = 0
    offSet = 0
    while True:
        offSet += bSize
        if offSet >= length:
            break
        bSize = struct.unpack("<i", data[offSet:offSet+4])[0]
        offSet += 4
        if bSize == 0 or bSize + offSet > length:
            print "Invalid"
            return ''.join(ret)
        ret.append(zlib.decompress(data[offSet:offSet+bSize]))
    return ''.join(ret)
A bit of a weird question perhaps, but I'm trying to replicate a Python example where they create an HMAC SHA256 hash from a series of parameters.
I've run into a problem where I'm supposed to translate an API key from hex to ASCII and use it as the secret, but I just can't get the output to be the same as Python's.
>>> import hmac
>>> import hashlib
>>> apiKey = "76b02c0c4543a85e45552466694cf677937833c9cce87f0a628248af2d2c495b"
>>> apiKey.decode('hex')
'v\xb0,\x0cEC\xa8^EU$fiL\xf6w\x93x3\xc9\xcc\xe8\x7f\nb\x82H\xaf-,I['
If I've understood the material online correctly, this is supposed to represent the hex string as ASCII characters.
Now to the PowerShell script:
$apikey = '76b02c0c4543a85e45552466694cf677937833c9cce87f0a628248af2d2c495b';
$hexstring = ""
for($i=0; $i -lt $apikey.Length;$i=$i+2){
    $hexelement = [string]$apikey[$i] + [string]$apikey[$i+1]
    $hexstring += [CHAR][BYTE][CONVERT]::toint16($hexelement,16)
}
That outputs the following:
v°,♀EC¨^EU$fiLöw?x3ÉÌè⌂
b?H¯-,I[
They are almost the same, but not quite, and using them as the secret in the HMAC generates different results. Any ideas?
Stating the obvious: the key in this example is not the real key.
Update:
They look more or less the same, but the encoding of the output is different. I also verified the hex-to-ASCII conversion with multiple online tools, and the PowerShell version seems right.
Does anyone have an idea how to compare the two different outputs?
Update 2:
I converted each character to an integer and both Python and PowerShell generate the same numbers, i.e. the content should be the same.
Attaching the scripts.
PowerShell:
Function generateToken {
    Param($apikey, $url, $httpMethod, $queryparameters=$false, $postData=$false)
    #$timestamp = [int]((Get-Date -UFormat %s).Replace(",", "."))
    $timestamp = "1446128942"
    $datastring = $httpMethod + $url
    if($queryparameters){ $datastring += $queryparameters }
    $datastring += $timestamp
    if($postData){ $datastring += $postData }
    $hmacsha = New-Object System.Security.Cryptography.HMACSHA256
    $apiAscii = HexToString -hexstring $apiKey
    $hmacsha.key = [Text.Encoding]::ASCII.GetBytes($apiAscii)
    $signature = $hmacsha.ComputeHash([Text.Encoding]::ASCII.GetBytes($datastring))
    $signature
}

Function HexToString {
    Param($hexstring)
    $asciistring = ""
    for($i=0; $i -lt $hexstring.Length;$i=$i+2){
        $hexelement = [string]$hexstring[$i] + [string]$hexstring[$i+1]
        $asciistring += [CHAR][BYTE][CONVERT]::toint16($hexelement,16)
    }
    $asciistring
}

Function TokenToHex {
    Param([array]$Token)
    $hexhash = ""
    Foreach($element in $Token){
        $hexhash += '{0:x}' -f $element
    }
    $hexhash
}

$apiEndpoint = "http://test.control.llnw.com/traffic-reporting-api/v1"
#what you see in Control on Edit My Profile page#
$apikey = '76b02c0c4543a85e45552466694cf677937833c9cce87f0a628248af2d2c495b';
$queryParameters = "shortname=bulkget&service=http&reportDuration=day&startDate=2012-01-01"
$postData = "{param1: 123, param2: 456}"
$token = generateToken -uri $apiEndpoint -httpMethod "GET" -queryparameters $queryParameters, postData=postData, -apiKey $apiKey
TokenToHex -Token $token
Python:
import hashlib
import hmac
import time
try: import simplejson as json
except ImportError: import json

class HMACSample:

    def generateSecurityToken(self, url, httpMethod, apiKey, queryParameters=None, postData=None):
        #timestamp = str(int(round(time.time()*1000)))
        timestamp = "1446128942"
        datastring = httpMethod + url
        if queryParameters != None : datastring += queryParameters
        datastring += timestamp
        if postData != None : datastring += postData
        token = hmac.new(apiKey.decode('hex'), msg=datastring, digestmod=hashlib.sha256).hexdigest()
        return token

if __name__ == '__main__':
    apiEndpoint = "http://test.control.llnw.com/traffic-reporting-api/v1"
    #what you see in Control on Edit My Profile page#
    apiKey = "76b02c0c4543a85e45552466694cf677937833c9cce87f0a628248af2d2c495b";
    queryParameters = "shortname=bulkget&service=http&reportDuration=day&startDate=2012-01-01"
    postData = "{param1: 123, param2: 456}"

    tool = HMACSample()
    hmac = tool.generateSecurityToken(url=apiEndpoint, httpMethod="GET", queryParameters=queryParameters, postData=postData, apiKey=apiKey)
    print json.dumps(hmac, indent=4)
apiKey with "test" instead of the converted hex to ASCII string outputs the same value which made me suspect that the conversion was the problem. Now I'm not sure what to believe anymore.
/Patrik
ASCII encoding supports characters in the code point range 0-127. Any character outside this range is encoded as byte 63, which corresponds to ?, if you decode the byte array back to a string. So, with your code, you ruin your key by applying ASCII encoding to it. But if what you want is a byte array, why do you go Hex String -> ASCII String -> Byte Array instead of just Hex String -> Byte Array?
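To make that concrete, here is a tiny illustration (Python 3 syntax, not part of the original answer) of what the ASCII round trip does to the key bytes:

api_key_hex = "76b02c0c4543a85e45552466694cf677937833c9cce87f0a628248af2d2c495b"

key_bytes = bytes.fromhex(api_key_hex)        # Hex String -> Byte Array, directly
as_text = key_bytes.decode("latin-1")         # treat the bytes as characters
ruined = as_text.encode("ascii", "replace")   # what an ASCII encoder does to them

print(key_bytes[:4])   # b'v\xb0,\x0c'
print(ruined[:4])      # b'v?,\x0c'  -- 0xb0 has no ASCII representation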
Here is PowerShell code which generates the same results as your Python code:
function GenerateToken {
    param($apikey, $url, $httpMethod, $queryparameters, $postData)

    $datastring = -join @(
        $httpMethod
        $url
        $queryparameters
        #[DateTimeOffset]::Now.ToUnixTimeSeconds()
        1446128942
        $postData
    )

    $hmacsha = New-Object System.Security.Cryptography.HMACSHA256
    $hmacsha.Key = @($apikey -split '(?<=\G..)(?=.)' | ForEach-Object {[byte]::Parse($_,'HexNumber')})
    [BitConverter]::ToString($hmacsha.ComputeHash([Text.Encoding]::UTF8.GetBytes($datastring))).Replace('-','').ToLower()
}
$apiEndpoint = "http://test.control.llnw.com/traffic-reporting-api/v1"
#what you see in Control on Edit My Profile page#
$apikey = '76b02c0c4543a85e45552466694cf677937833c9cce87f0a628248af2d2c495b';
$queryParameters = "shortname=bulkget&service=http&reportDuration=day&startDate=2012-01-01"
$postData = "{param1: 123, param2: 456}"
GenerateToken -url $apiEndpoint -httpMethod "GET" -queryparameters $queryParameters -postData $postData -apiKey $apiKey
I also fixed some other errors in your PowerShell code, in particular the arguments to the GenerateToken function call. Also, I changed ASCII to UTF8 for the $datastring encoding. UTF8 yields exactly the same bytes if all characters are in the ASCII range, so it does not matter in your case. But if you want to use characters outside the ASCII range in $datastring, then you should choose the same encoding as you use in Python, or you will not get the same results.
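As a quick sanity check of that last point (again just an illustration, not from the original answer):

# For ASCII-only input the two encodings agree byte for byte...
datastring = "GET" + "http://test.control.llnw.com/traffic-reporting-api/v1" + "1446128942"
assert datastring.encode("ascii") == datastring.encode("utf-8")

# ...but they diverge as soon as a non-ASCII character appears.
print("é".encode("utf-8"))              # b'\xc3\xa9'
print("é".encode("ascii", "replace"))   # b'?'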
I have the following named pipe created in Windows PowerShell:
# .NET 3.5 is required to use the System.IO.Pipes namespace
[reflection.Assembly]::LoadWithPartialName("system.core") | Out-Null
$pipeName = "pipename"
$pipeDir = [System.IO.Pipes.PipeDirection]::InOut
$pipe = New-Object system.IO.Pipes.NamedPipeServerStream( $pipeName, $pipeDir )
Now, what I need is a Python code snippet to read from the named pipe created above. Can Python do that?
Thanks in advance!
Courtesy: http://jonathonreinhart.blogspot.com/2012/12/named-pipes-between-c-and-python.html
Here's the C# code:
using System;
using System.IO;
using System.IO.Pipes;
using System.Text;

class PipeServer
{
    static void Main()
    {
        var server = new NamedPipeServerStream("NPtest");

        Console.WriteLine("Waiting for connection...");
        server.WaitForConnection();

        Console.WriteLine("Connected.");
        var br = new BinaryReader(server);
        var bw = new BinaryWriter(server);

        while (true)
        {
            try
            {
                var len = (int)br.ReadUInt32();            // Read string length
                var str = new string(br.ReadChars(len));   // Read string

                Console.WriteLine("Read: \"{0}\"", str);

                //str = new string(str.Reverse().ToArray()); // Aravind's edit: since Reverse() is not working, might require some import. Felt it as irrelevant

                var buf = Encoding.ASCII.GetBytes(str);    // Get ASCII byte array
                bw.Write((uint)buf.Length);                // Write string length
                bw.Write(buf);                             // Write string
                Console.WriteLine("Wrote: \"{0}\"", str);
            }
            catch (EndOfStreamException)
            {
                break;                                     // When client disconnects
            }
        }
    }
}
And here's the Python code:
import time
import struct

f = open(r'\\.\pipe\NPtest', 'r+b', 0)
i = 1

while True:
    s = 'Message[{0}]'.format(i)
    i += 1

    f.write(struct.pack('I', len(s)) + s)   # Write str length and str
    f.seek(0)                               # EDIT: This is also necessary
    print 'Wrote:', s

    n = struct.unpack('I', f.read(4))[0]    # Read str length
    s = f.read(n)                           # Read str
    f.seek(0)                               # Important!!!
    print 'Read:', s

    time.sleep(2)
Convert the C# code into a .ps1 file.