Unable to decode a large number of barcodes using pylibdmtx - python

Using pylibdmtx, I am not able to decode a large number of barcodes: out of 140, it decodes only 111. Any suggestions to increase decoding accuracy?
Please suggest something to detect the barcodes more reliably.

What does encoding='latin-1' do when reading a file [duplicate]

I am using a YouTube channel to learn machine learning algorithms. Somewhere in this video, I encountered an argument passed to the pd.read_csv method, encoding='latin-1'. What is the function of this argument?
Here is the underlying reason for the encoding parameter.
English speakers live in an easy world where the number of characters necessary to write any kind of text or computer code is small enough to be stored in an 8-bit byte (even in 7 bits, btw, but that's not the point). Therefore, 1 character = 1 byte, and everybody agrees on the meaning of each of the 256 possible 8-bit values.
Many other languages, even those that use the same Latin alphabet, need all kinds of accented letters and specialties that do not exist in English. Moreover, the special characters of all those languages together don't fit into 256 different byte values. Historically, every language community decided on a specific encoding for the byte values above 127. latin-1, aka iso-8859-1, is one of those encodings, but as you may guess, not the only one. This doesn't scale well, of course, and it won't work at all for languages that don't use the Latin alphabet and need far more than 256 different values.
In all modern languages, a character and a byte are two different things.
(read this sentence twice or more, and commit it to permanent brain memory)
The computer can in no way "guess" the encoding of a byte stream (like a csv file) that you feed it for processing as text (= strings of characters). Therefore, a function that reads files (I didn't watch the video, but the name of the function is explicit enough to understand its purpose) has to convert bytes (on the disk) into characters (in memory, using whatever internal representation the language happens to use). Conversely, when you have to write something to the disk or the network, which can only accept 8-bit bytes, you have to convert your characters back into bytes.
Those conversions are performed using the particular encoding your file/byte stream/network protocol is using.
As a side note, you should consider getting rid of the 8859-* encodings and using Unicode, with the utf-8 encoding, as much as possible in new developments.
https://en.wikipedia.org/wiki/ISO/IEC_8859-1
Latin-1 is the same as 8859-1. Every character is encoded as a single byte. There are 191 characters total.
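
As a quick illustration of both answers (a minimal sketch; the accented text is made up for the example):

import struct  # not needed here, just showing this is plain stdlib Python

# The same character can map to different bytes in different encodings.
text = "café"
print(text.encode("latin-1"))  # b'caf\xe9'      -> 1 byte per character
print(text.encode("utf-8"))    # b'caf\xc3\xa9'  -> 'é' takes 2 bytes

# Decoding with the wrong codec fails (or silently produces mojibake).
raw = b"caf\xe9"               # latin-1 bytes as stored on disk
print(raw.decode("latin-1"))   # 'café'
# raw.decode("utf-8")          # would raise UnicodeDecodeError

# Hence the parameter: it tells pandas which bytes-to-characters mapping
# the file was written with (the file name here is hypothetical):
# import pandas as pd
# df = pd.read_csv("data.csv", encoding="latin-1")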

Get the duration of an MP3 on a microcontroller

Good day
I have been asked to do a project which consists of an STM32 and a VS1003: a FAT32 USB host MP3 player.
All the parts are done, but now I need to get the duration of a song.
Unfortunately the TLEN tag is not available in all the songs, so I can't count on it.
My understanding is that an MP3 is made of frames, and each frame lasts 0.026 seconds. Each frame starts with 0xFF 0xFX (X can be anything), so I need to search for 0xFFFx across 2 consecutive bytes and count the matches, then multiply by 0.026 to get the duration.
Since the microcontroller has limited SRAM, the file needs to be read from USB 2048 bytes at a time. I decided to test this theory on a computer in Python first, then port it to C on the microcontroller (for ease of testing the algorithm), but the numbers I'm getting are a lot higher than expected.
For example, one MP3 gives me 25300 occurrences of 0xFF 0xFX, which translates to 657.5 seconds, but I know it is in fact 187 seconds.
It seems that 0xFF 0xFx also occurs in the middle of the audio data.
Is there any reliable way to count the headers? Or is there any other way to get the length without counting them?
Any notes or basic code (in Python, C, or JS) are appreciated in advance.
The frame sync marker is not 0xFFFx where x is any four bits; it's 0xFFFx or 0xFFEx. Because the same patterns can appear in the audio data, a brute-force search for the pattern won't work -- you'll have to find the first instance of the sync marker, calculate the byte length of that frame from the bitrate in the frame header, and jump directly to the next frame. There's a post on that calculation already, here:
Formula from mp3 Frame Length
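
Here is a minimal Python sketch of that frame-walking approach for the common MPEG-1 Layer III case (assumptions: it reads the whole file at once for the PC-side test, and it does not skip an ID3v2 tag, which a real implementation must handle first):

import sys

BITRATES = [0, 32, 40, 48, 56, 64, 80, 96, 112,
            128, 160, 192, 224, 256, 320]        # kbps, MPEG-1 Layer III
SAMPLE_RATES = [44100, 48000, 32000]

def mp3_duration(path):
    data = open(path, "rb").read()
    i, seconds = 0, 0.0
    while i + 4 <= len(data):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        # 11-bit sync, then MPEG-1 (0b11) and Layer III (0b01)
        if b0 == 0xFF and (b1 & 0xFE) == 0xFA:
            br_idx = b2 >> 4
            sr_idx = (b2 >> 2) & 0x03
            if br_idx in (0, 15) or sr_idx == 3:
                i += 1                   # invalid header: a false sync
                continue
            sr = SAMPLE_RATES[sr_idx]
            padding = (b2 >> 1) & 0x01
            frame_len = 144 * BITRATES[br_idx] * 1000 // sr + padding
            seconds += 1152.0 / sr       # 1152 samples per frame
            i += frame_len               # jump over the audio data
        else:
            i += 1
    return seconds

print(mp3_duration(sys.argv[1]))

Jumping from header to header is what keeps the false 0xFFFx matches inside the audio data from being counted.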

Compress data into smallest amount of text?

I have data (mostly a series of numpy arrays) that I want to convert into text that can be copied/pasted/emailed, etc. I created the following function, which does this.
import base64 as b64
import pickle
import zlib

def convert_to_ascii85(x):
    p = pickle.dumps(x)       # serialize the object to bytes
    p = zlib.compress(p)      # compress the pickled bytes
    return b64.b85encode(p)   # encode the result as Base85 text
My issue is that the string it produces is longer than it needs to be, because it only uses a subset of letters, numbers, and symbols. If I were able to encode using Unicode, I feel it could produce a shorter string, because it would have access to more characters. Is there a way to do this?
Edit to clarify:
My goal is NOT the smallest amount of data/information/bytes. My goal is the smallest number of characters. The reason is that the channel I'm sending the data through is capped by characters (100k to be precise) instead of bytes (strange, I know). I've already tested that I can send 100k unicode characters, I just don't know how to convert my bytes into unicode.
UPDATE: I just saw that you changed your question to clarify that you care about character length rather than byte length. This is a really strange constraint. I've never heard of it before. I don't know quite what to make of it. But if that's your need, and you want predictable blocking behavior, then I'm thinking that your problem is pretty simple. Just pick the compatible character encoding that can represent the most possible unique characters, and then map fixed-size blocks of your binary across that character set, making each block as long as possible while its number of possible values still does not exceed the number of representable characters. Each such block then becomes a single character. Since this constraint is kinda strange, I don't know if there are libraries out there that do this.
UPDATE2: Being curious about the above myself, I just Googled and found this: https://qntm.org/unicodings. If your tools and communication channels can deal with UTF-16 or UTF-32, then you might be onto something in seeking to use that. If so, I hope this article opens up the solution you're looking for. I think this article is still optimizing for byte length rather than character length, so maybe it won't provide the optimal solution, but it can only help (32 potential bits per character rather than 7 or 8). I couldn't find anything seeking to optimize on character count alone, but maybe a UTF-32 scheme like Base65536 is your answer. Check out https://github.com/qntm/base65536 .
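To make the block-mapping idea concrete, here is a minimal sketch (my own illustration, not a library: the 15-bit block size and the 0x1000 code-point offset are arbitrary choices that keep every result below the surrogate range U+D800-U+DFFF, and the byte length must be sent alongside the text):

def bytes_to_chars(data: bytes) -> str:
    # Pack the binary into 15-bit blocks; one block -> one code point.
    n = int.from_bytes(data, "big")
    nbits = len(data) * 8
    return "".join(chr(0x1000 + ((n >> shift) & 0x7FFF))
                   for shift in range(0, nbits, 15))

def chars_to_bytes(s: str, nbytes: int) -> bytes:
    # Reverse the mapping; nbytes must be transmitted out of band.
    n = 0
    for i, c in enumerate(s):
        n |= (ord(c) - 0x1000) << (15 * i)
    return n.to_bytes(nbytes, "big")

payload = b"example payload bytes"
text = bytes_to_chars(payload)
assert chars_to_bytes(text, len(payload)) == payload

At roughly 15 bits per character, this is already more than twice as dense, in characters, as Base85's 32 bits per 5 characters; schemes like Base65536 push this to 16 bits per character with a carefully chosen alphabet.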
If it is byte length that you care about, and you want to stick to using what is usually meant by "printable characters" or "plain printable text", then here's my original answer...
There are options for getting better "readable text" encoding space efficiency from an encoding other than Base85. There's also a case to be made for giving up more space efficiency and going with Base64. Here I'll make the case for using both Base85 and Base64. If you can use Base85, you only take a 25% hit on the inflation of your binary, and you save a whole lot of headaches in doing so.
Base85 is pretty close to the best you're going to do if you seek to encode arbitrary binary to "plain text", and it is the BEST you can do if you want a "plain text" encoding that you can logically break into meaningful, predictable chunks. You can in theory use a character set that includes printable characters in the high-ASCII range, but experience has shown that many tools and communication channels don't deal well with high-ASCII if they can't handle straight binary. You don't get much in additional space savings from trying to use the extra 5 bits per 4 binary bytes or so that could potentially be gained by using a 256-character high-ASCII set vs the 128-character ASCII set.
For any BaseXX encoding, the algorithm takes incoming binary bits and encodes them as tightly as it can using the XX printable characters it has at its disposal. Base85 will be more compact than Base64 because it uses more of the printable characters (85) than Base64 does (64 characters).
There are 95 printable characters in standard ASCII. So there is a Base95 that is the most compact encoding possible using all the printable characters. But trying to use all 95 characters is messy, because it leads to uneven blockings of the incoming bits: each 4 binary bytes maps to some fractional number of characters just under 5.
It turns out that 85 characters is what you need to encode 4 bytes as exactly 5 printable characters. Many will accept a little extra length in exchange for the guarantee that every 4 binary bytes encodes to exactly 5 ASCII characters. This is only a 25% inflation in the size of the binary. That's not bad at all for all of the headaches it saves. Hence, the motivation behind Base85.
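You can see the 4-to-5 blocking directly in Python's standard library:

import base64

raw = b"\x00\x01\x02\x03"       # 4 binary bytes
print(base64.b85encode(raw))    # 5 characters
print(base64.b64encode(raw))    # 8 characters (6 bits per char, plus padding)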
Base64 is used to produce longer, but even less problematic encodings. Characters that cause trouble for various text documents, like HTML, XML, JSON, etc., are not used. In this way, Base64 is useful in almost any context without any escaping. You have to be more careful with Base85, as it doesn't throw out any of these problematic characters. For encoding/decoding efficiency, it uses the range 33 ('!') through 117 ('u'), starting at 33 rather than 32 just to avoid the often problematic space character. The characters above 'u' that it doesn't use are nothing special.
So that's pretty much the story on the binary -> ASCII encoding side. The other question is what you can do to reduce the size of what you're representing prior to the stage of encoding its binary representation to ASCII. You're choosing to use pickle.dumps() and zlib.compress(). Whether those are your best choices is left for another discussion...

Reading raw audio values from an ADC chip on a Raspberry Pi

I wired up the MCP3008 ADC chip to an electret microphone and to my Pi. I'm reading the input using bit-banging in Python, and I'm getting an integer from 0-1023.
I followed this tutorial to do the bit-banging: https://learn.adafruit.com/reading-a-analog-in-and-controlling-audio-volume-with-the-raspberry-pi/connecting-the-cobbler-to-a-mcp3008
My question is how do I take this integer and convert it to something meaningful? Can I somehow write these bytes to a file in Python to get raw audio data that Audacity can play? Right now when I try to write the values, they just show up as integers instead of binary. I'm really new to Python, and I've found this link for converting the raw data, but I'm having trouble generating the raw data in the first place: Python open raw audio data file
I'm not even sure what these values represent. Are they PCM data that I have to do time-related math on?
What you are doing here is sampling a time-varying analogue signal. So yes, the values you obtain are PCM - but with a huge caveat (see below). If you write them as a WAV file (possibly using this to help you), you will be able to open them in Audacity. You could either convert the values to unsigned 8-bit (by truncation) or to 16-bit signed with a shift and a subtraction.
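For example, a minimal sketch of the 16-bit route (the sample rate and file name here are placeholder assumptions; use whatever rate you actually sampled at):

import struct
import wave

def write_wav(samples, path="capture.wav", rate=8000):
    # samples: iterable of 10-bit ADC readings in the range 0-1023
    with wave.open(path, "wb") as w:
        w.setnchannels(1)     # mono
        w.setsampwidth(2)     # 16-bit signed PCM
        w.setframerate(rate)
        frames = b"".join(
            # centre 0-1023 on zero, then shift up into 16 bits
            struct.pack("<h", (s - 512) << 6) for s in samples
        )
        w.writeframes(frames)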
The caveat is that PCM is the modulation of a sample clock with the signal. The clock signal in your case is the frequency with which you bit-bang the ADC.
Practically, it is very difficult to arrange for this to be regular in software - particularly when bit-banging the device from a high-level language such as Python. You need to sample at at least twice the bandwidth of the signal (the Nyquist criterion) - so realistically, 8kHz for telephone speech quality.
An irregular sample clock will also result in significant artefacts - which you will hear as distortion.

.wav questions and Python wave

The module "wave" of python gives me a list of hexadecimal bytes, that I can read like numbers. Let's say the frequency of my sample is 11025. Is there a 'header' in those bytes that specify this? I know I can use the wave method to get the frequency, but I wanna talk about the .wav file structure. It has a header? If I get those bytes, how do I know wich ones are the music and the ones that are information? If I could play these numbers in a speaker 11025 times per second with the intensity from 0 to 255, could I play the sound just like it is in the file?
Thanks!
.wav files are actually RIFF files under the hood. The WAVE section contains both the format information and the waveform data. Reading the codec, sample rate, sample size, and sample polarity from the format information will allow you to play the waveform data, assuming you support the codec used.
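As a sketch of what that header looks like, here is how you could pull the format fields out of a canonical PCM WAV file by hand (an assumption for simplicity: the "fmt " chunk comes right after the RIFF header, which is the common case but not guaranteed; the wave module handles the general case for you):

import struct

def read_wav_header(path):
    with open(path, "rb") as f:
        riff, _size, wave_id = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave_id == b"WAVE"
        chunk_id, _chunk_size = struct.unpack("<4sI", f.read(8))
        assert chunk_id == b"fmt "
        (audio_fmt, channels, sample_rate,
         _byte_rate, _block_align, bits) = struct.unpack("<HHIIHH", f.read(16))
        return {"codec": audio_fmt,          # 1 means uncompressed PCM
                "channels": channels,
                "sample_rate": sample_rate,  # e.g. 11025
                "bits_per_sample": bits}

Everything after the "data" chunk header is the waveform itself; for an 8-bit mono PCM file, that is exactly the 0-255 intensities, sample_rate of them per second.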
