How to convert binary data into an image file in Python

Transmitting an image file in small segments (packets)
I am using Python to transmit an image file over UDP sockets, as required.
The image file has to be broken into small pieces in binary form, not as strings; the payload of each packet ('content') is 1024 B, and I build the packets myself. The sender is also required to add a checksum, a sequence number, or deliberately corrupted data to each packet.
Displaying the binary packet data as an image
In the receiver I extract the data from each packet to get the 1024 B 'content'. Is it possible to display that packet data as an image? The data is transmitted in binary format, and I want to show the transmission in real time, e.g. if some data is corrupted, the affected part of the image (if it can be displayed at all) will look 'blurred'.
Question
How can I display each packet's data, or the cumulative data received so far, as an image?
Can anyone help? I have searched a lot but have not found a working solution.
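One way this could be approached, as a minimal receiver-side sketch rather than a full answer: assume the image is a JPEG and each packet yields a sequence number plus its 1024 B 'content'; Pillow can render a stream that is still incomplete once ImageFile.LOAD_TRUNCATED_IMAGES is enabled, so the missing tail simply shows up as grey.
from io import BytesIO
from PIL import Image, ImageFile

# Let Pillow decode streams that are not yet complete
ImageFile.LOAD_TRUNCATED_IMAGES = True

received = {}   # sequence number -> 1024 B 'content' extracted from each packet

def show_partial(received):
    # Reassemble whatever has arrived so far, in sequence order
    data = b''.join(received[seq] for seq in sorted(received))
    try:
        img = Image.open(BytesIO(data))
        img.load()    # decodes the partial stream; missing parts come out grey
        img.show()    # or push the frame into a GUI for a real-time view
    except OSError:
        pass          # not enough data yet to even identify the image
Calling show_partial(received) after every packet (or every few packets) gives the cumulative, progressively sharper image; corrupted packets will typically show up as damaged blocks, or decoding of that frame may fail.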

Related

Get the amplitude data from an mp3 audio file using Python

I have an mp3 file and I want to plot the amplitude spectrum of that audio sample.
I know this is very easy to do with a wav file; there are a lot of Python packages for handling the wav format. However, I do not want to convert the file to wav, store it, and then use it.
What I am trying to achieve is to get the amplitude of an mp3 file directly, and even if I have to convert it to wav, the script should do it on the fly at runtime, without actually storing the file in the database.
I know we can convert the file like follows:
from pydub import AudioSegment
sound = AudioSegment.from_mp3("test.mp3")
sound.export("temp.wav", format="wav")
and it creates temp.wav as it is supposed to, but can we just use the content without storing the actual file?
MP3 is encoded waveform data (plus tags and other metadata). All you need to do is decode it with an MP3 decoder; the decoder will give you all the audio data you need for further processing.
How to decode MP3? I was surprised how few tools are available for Python, but I found a good one in this question. It's called pydub, and I hope I can use a sample snippet from its author (I updated it with more info from the wiki):
from pydub import AudioSegment
sound = AudioSegment.from_mp3("test.mp3")
# get raw audio data as a bytestring
raw_data = sound.raw_data
# get the frame rate
sample_rate = sound.frame_rate
# get amount of bytes contained in one sample
sample_size = sound.sample_width
# get channels
channels = sound.channels
Note that raw_data is 'on air' (in memory, never written to disk) at this point ;). Now it's up to you how you want to use the gathered data, but this module seems to give you everything you need.
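For example, continuing from the snippet above, a minimal follow-up sketch for getting at the amplitude, assuming 16-bit samples (i.e. sample_size == 2):
import numpy as np

# Interpret the byte string as signed 16-bit samples
samples = np.frombuffer(raw_data, dtype=np.int16)
if channels == 2:
    samples = samples.reshape(-1, 2)              # one column per channel

peak = np.abs(samples.astype(np.int32)).max()     # simple amplitude measure; plot or FFT from here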

Best practice for transferring images between microservices

I am trying to pass an image from one microservice to another as a parameter, so that the image can be processed there.
What is the best way to send this image across the services? The image is in JPG format.
What I have tried is:
import binascii

with open("imageName.jpg", "rb") as f1:
    sendToServiceTwo(binascii.hexlify(f1.read()))
The size of the image is around 28 kB, but in the converted hexadecimal format it increases to 48 kB. I don't want to apply any compression while transferring the data.
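For a rough sense of the overhead (a hedged sketch; sendToServiceTwo is the placeholder from the question): hexlify emits two characters per byte, base64 roughly four output bytes for every three input bytes, and a binary-capable transport can carry the raw bytes with no growth at all.
import base64

with open("imageName.jpg", "rb") as f:
    raw = f.read()                      # ~28 kB of JPEG bytes

hex_payload = raw.hex()                 # 2 characters per byte -> roughly double
b64_payload = base64.b64encode(raw)     # 4 bytes per 3         -> roughly +33%

# If the transport accepts binary (e.g. an HTTP body or a gRPC bytes field),
# the raw bytes can go through untouched:
# sendToServiceTwo(raw)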

How to save jpeg data that is identical to the original jpeg file using OpenCV

I'm using OpenCV and Python. I have loaded a jpeg image into a numpy array. Now I want to save it back in jpeg format, but since the image was not modified, I don't want to compress it again. Is it possible to create a jpeg from the numpy array that is identical to the jpeg it was loaded from?
I know this workflow (decode-encode without doing anything) sounds a bit stupid, but keeping the original jpeg data is not an option. I'm interested if it is possible to recreate the original jpeg just using the data at hand.
The question is different from Reading a .JPG Image and Saving it without file size change, as I don't modify anything in the picture. I really want to restore the original jpeg file based on the data at hand. I assume one could bypass the compression steps (the compression artifacts are already in the data) and just write the file in jpeg format. The question is, if this is possible with OpenCV.
Clarified answer, following the comment below:
What you say makes no sense at all: you say that you have the raw, unmodified RGB data. No, you don't. You have the uncompressed data that has been reconstructed from the compressed jpeg file.
The JPEG standards specify how to uncompress an image / video. There is nothing in the standard about how to actually do the compression, so your original image data could have been compressed in any one of a zillion different ways. You have no way of knowing which encoding steps were used to create your file, so you cannot reproduce them.
Imagine this:
"I have a number, 44, please tell me how I can get the original
numbers that this came from"
This is, essentially, what you are asking.
The only way you can do what you want (other than just copying the original file) is to read the raw file bytes into an array before handing anything to OpenCV. Then, if you want to save it, just write that raw array back to a file, something like this:
import numpy as np
import cv2

fi = 'C:\\Path\\to\\Image.jpg'
fo = 'C:\\Path\\to\\Copy_Image.jpg'

# Read the raw JPEG bytes into a 1-D uint8 array
with open(fi, 'rb') as myfile:
    im_array = np.frombuffer(myfile.read(), dtype=np.uint8)
# Do stuff here
image = cv2.imdecode(im_array, cv2.IMREAD_COLOR)
# Do more stuff here

# Write the untouched JPEG bytes back out, byte-for-byte identical to the original
with open(fo, 'wb') as myfile:
    myfile.write(im_array.tobytes())
Of course, it means you will have the data stored twice, effectively, in memory, but this seems to me to be your only option.
Sometimes, no matter how hard you want to do something, you have to accept that it just cannot be done.

Why do the frames of a VideoClip change when it is written to a video file?

I wrote the following code:
from moviepy.editor import *
from PIL import Image
clip= VideoFileClip("video.mp4")
video= CompositeVideoClip([clip])
video.write_videofile("video_new.mp4",fps=clip.fps)
Then, to check whether the frames have changed and, if so, which function changed them, I retrieved the first frame of 'clip', 'video' and 'video_new.mp4' and compared them:
clip1= VideoFileClip("video_new.mp4")
img1= clip.get_frame(0)
img2= video.get_frame(0)
img3= clip1.get_frame(0)
a=img1[0,0,0]
b=img2[0,0,0]
c=img3[0,0,0]
I found that a=24 and b=24, but c=26. In fact, on running an array-compare loop I found that 'img1' and 'img2' were identical but 'img3' was different.
I suspect that video.write_videofile is responsible for the change in the array, but I don't know why. Can anybody explain this to me and also suggest a way to write clips without changing their frames?
PS: I read the docs of 'VideoFileClip', 'FFMPEG_VideoWriter' and 'FFMPEG_VideoReader' but could not find anything useful. In the code I'm working on I need to read back exactly the frame that was written, so please suggest a way.
Like JPEG, MPEG-4 uses lossy compression, so it's not surprising that the frames read from "video_new.mp4" are not perfectly identical to those in "video.mp4". And as well as the variations caused purely by the lossy compression there are also variations that arise due to the wide variety of encoding options that can be used by programs that write MPEG data.
If you really need to be able to read back the exact same frame data that you write then you will have to use a different file format, but be warned: your files will be huge!
The choice of video format partly depends on what the image data is like and on what you want to do with it. If the data uses 256 colours or less, and you don't intend to perform transformations on it that will modify the colours, a simple GIF anim is a good choice. But bear in mind that even something like non-integer scaling modifies colours.
If you want to analyze the image data and transform it in various ways, it makes sense to use a format with better colour support than GIF, eg a stream of PNG images, which I assume is what Zulko mentions in his answer. FWIW, there's an anim format related to PNG called MNG, but it is not well supported or widely known.
Another option is to use a stream of PPM images, or maybe even a stream of YUV data, which is useful for certain kinds of analysis and convenient if you do intend to encode as MPEG for final consumption. The PPM format is very simple and easy to work with; YUV is slightly messy since it's a raw format with no header data, so you have to keep track of the image size and resolution data yourself.
The file size of PPM or YUV streams is large, since they incorporate no compression at all, but of course they can be compressed using standard compression techniques, if you want to save a little space when saving them to disk. OTOH, typical video processing workflows that use such streams often don't bother writing them to disk: they are sent in pipelines (perhaps using named pipes), so the file size is (mostly) irrelevant.
Although such formats take up a lot of space compared to MPEG-based files, they are far superior for use as intermediate formats while performing image data analysis and transformation, since every time you write & read back MPEG you are losing a little bit of quality.
I assume that you intend to do your image data analysis and transformations using PIL/Pillow. But you can also work with PPM & YUV streams using the ffmpeg / avconv command line programs; and the ffmpeg family happily work with sets of individual image files and GIF anims, too.
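As a hedged illustration of the image-sequence route (this assumes moviepy's write_images_sequence and ImageSequenceClip helpers; the frames directory and filename pattern are arbitrary):
import os
from moviepy.editor import VideoFileClip, ImageSequenceClip

clip = VideoFileClip("video.mp4")

# Dump every frame as a lossless PNG file
os.makedirs("frames", exist_ok=True)
clip.write_images_sequence("frames/frame%04d.png", fps=clip.fps)

# Reading the sequence back yields exactly the pixel data that was written
clip2 = ImageSequenceClip("frames", fps=clip.fps)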
You can have lossless compression with the 'png' codec:
clip.write_videofile('clip_new.avi', codec='png')
EDIT @PM 2Ring: when you write the line above, it makes a video that is compressed using the PNG algorithm (I'm not sure whether each frame is a PNG or if it's more subtle).
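A hedged way to check that round trip, assuming the PNG codec really is lossless for RGB frames (file names are illustrative):
from moviepy.editor import VideoFileClip

clip = VideoFileClip("video.mp4")
clip.write_videofile("clip_new.avi", codec='png', fps=clip.fps)

check = VideoFileClip("clip_new.avi")
print((clip.get_frame(0) == check.get_frame(0)).all())   # True if the first frame survived unchanged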

How to create a BMP image from binary data in Python?

Suppose I have already read binary data from a binary file; how can I create a BMP image from that data?
You can find a definition of the bitmap file format on Wikipedia among other places. Use the struct module to create the necessary headers. Because the format is uncompressed it is very easy to write out. The color information must come in BGR order, bottom line to top line, and each line must be padded with zeros to a multiple of 4 bytes.
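A hedged sketch of that struct route for a 24-bit uncompressed BMP (write_bmp and its arguments are illustrative names, not a standard API):
import struct

def write_bmp(path, width, height, bgr_rows):
    # bgr_rows: rows from top to bottom, each a bytes object of width * 3 BGR bytes
    row_pad = (4 - (width * 3) % 4) % 4
    image_size = (width * 3 + row_pad) * height
    file_size = 14 + 40 + image_size
    with open(path, 'wb') as f:
        # BITMAPFILEHEADER: signature, file size, reserved fields, offset to pixel data
        f.write(struct.pack('<2sIHHI', b'BM', file_size, 0, 0, 54))
        # BITMAPINFOHEADER: 24 bits per pixel, uncompressed (BI_RGB), no palette
        f.write(struct.pack('<IiiHHIIiiII', 40, width, height, 1, 24, 0,
                            image_size, 2835, 2835, 0, 0))
        # Pixel data: bottom row first, each row padded to a multiple of 4 bytes
        for row in reversed(bgr_rows):
            f.write(row + b'\x00' * row_pad)

red = b'\x00\x00\xff'                             # one red pixel, in BGR order
write_bmp('out.bmp', 2, 2, [red * 2, red * 2])    # a 2x2 all-red test image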
Or if you'd rather do it the easy way, PIL knows how to read and write BMP.
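With PIL/Pillow, assuming the binary data is raw RGB pixels of a known size, it can be as short as:
from PIL import Image

width, height = 2, 2
data = b'\xff\x00\x00' * (width * height)   # raw RGB bytes: a 2x2 red image

Image.frombytes('RGB', (width, height), data).save('out.bmp')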
