Emulating Matlab fixed point number behavior in Python

I have a Matlab script that is old and decrepit, and I am trying to rewrite parts of it in Python. Unfortunately, not only am I unfamiliar with Matlab, but the script was written some 5-6 years ago, so no one fully understands what it does or how it does it. For now, the line I am interested in is this:
% Matlab
[rawData,count,errorMsg] = fscanf(serialStream, '%f')
Now, I incorrectly tried to do that as:
# Python
import struct
import serial

ser = serial.Serial('/dev/ttyUSB0')  # port name is illustrative; not shown in my script

rawData = []
count = 0
while True:
    rawData.append(struct.unpack('f', ser.read(4))[0])
    count += 1
However, this prints out complete garbage. Upon further research, I learned that, in Matlab, %f does not mean float like it does in any sensible language, but fixed-point number. As such, it makes sense that my data looked like garbage.
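(One common cause of garbage when unpacking raw bytes is byte order, which is worth ruling out first; the example bytes below are illustrative, not from my device:)

# Endianness sanity check: the same 4 bytes decode very differently
# depending on byte order.
import struct

chunk = b'\xdb\x0f\x49\x40'
print(struct.unpack('<f', chunk)[0])  # 3.1415927... (little-endian)
print(struct.unpack('>f', chunk)[0])  # a huge garbage-looking value (big-endian)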
Through trial and error, I have determined that I should be getting blocks of 156 bytes from a serial port. However, I am unsure how many values that translates to, as I can't find documentation that explains how large fixed-point numbers are in Matlab (this says they can be up to 128 bits, but that's not very helpful). I have also found the Python library decimal, and it seems like I would want to form the numbers from their constituent parts (i.e. provide sign, digits and exponent), but I'm not sure how those are stored in the stream of data I'm getting.
Is there a good way of making a fixed point number from a binary stream in Python? Or do I have to look up the implementation in Matlab and recreate it? Perhaps there's a better way of doing what I want to do?
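One thing worth checking before reimplementing anything: MATLAB's fscanf performs formatted *text* reads (like C's fscanf), so if the device actually sends ASCII-formatted numbers (an assumption worth testing), parsing text may be closer to the original behavior than unpacking raw bytes. A minimal sketch, with the port name, baud rate, and line-based framing all assumed:

import serial

ser = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)  # port/baud assumed

rawData = []
while True:
    line = ser.readline()            # assumes one ASCII value per line
    if not line:
        break                        # timeout: no more data
    try:
        rawData.append(float(line))  # the conversion '%f' would do on text
    except ValueError:
        break                        # stop at anything that isn't a number
count = len(rawData)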

Related

Reading N bits from a GPIO Pin Based on Baud Rate PiGPIO

Pretty much as I described in the title, I am attempting to read the next N bits appearing at a GPIO pin following a call. For context, I am setting a chip select (CS) enable, and after two clock cycles I expect a 10-bit sequence. I'm not very worried at this moment about ensuring I get exactly 10 bits; I'd rather be sure I'm at least getting the 10 I need, even if I also get some stuff around the edges. I've been using PiGPIO and trying to use the bb_serial_read functionality, but I've been having trouble fully understanding this tool, since the documentation isn't too specific and I don't have much experience with bit banging.
I was approaching it this way:
import pigpio

pi = pigpio.pi()                              # connect to the local pigpio daemon
pi.hardware_clock(CLOCK_PIN, CLOCK_FREQ)      # drive the clock line
pi.bb_serial_read_open(DATA_PIN, CLOCK_FREQ)  # bit-banged read at the clock rate
enable_CS()                                   # my helper: assert chip select
(count, val) = pi.bb_serial_read(DATA_PIN)    # returns (byte count, bytearray)
disable_CS()                                  # my helper: release chip select
After that I would attempt to convert the returned value to a binary string. The primary obstacle I've run into surrounds the arguments to bb_serial_read_open, as bb_serial_read returns a bytearray which tends to look like '\x00\x00\x01\x00' or something similar; the values tend to vary. If I want to ensure I am getting some number of consecutive bits, what is the appropriate way to convert this to raw binary? Also, is there a way to flush the cyclic buffer that the bit-bang protocol is storing its values in? I'm pretty sure this is a simple problem, but I've been back and forth between the documentation and example code with no luck for a while now.
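For the conversion part, a minimal sketch (assuming data is the bytearray returned by bb_serial_read, with one 8-bit frame per byte):

# Convert the bytearray from bb_serial_read into a bit string.
# Note: format(b, '08b') renders each byte most-significant-bit first;
# bit-banged serial arrives LSB-first, so you may need to reverse per byte.
(count, data) = pi.bb_serial_read(DATA_PIN)
bits = ''.join(format(b, '08b') for b in data)
print(bits)  # e.g. '00000000000000000000000100000000'

# There is no dedicated flush call that I know of (assumption); draining
# the buffer by reading until it is empty before asserting CS has the
# same effect.
while pi.bb_serial_read(DATA_PIN)[0] > 0:
    pass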

How do arithmetic operators work in Python?

I am wondering how the "+" operator works in Python, and indeed how any of the basic arithmetic operators work. My knowledge is very limited on this topic, so I hope this isn't a repeat of a question already here.
More specifically, I would like to know how this code:
a = 5
b = 2
c = a + b
print(c)
produces the result c = 7 when run. How does the computer perform this operation? I found a thread on Reddit explaining how the computer performs the calculation in binary (https://www.reddit.com/r/askscience/comments/1oqxfr/how_do_computers_do_math/), which I can understand. What I fail to comprehend, however, is how the computer knows how to convert the values 5 and 2 into binary and then perform the calculation. Is there a set formula for doing this for all integers or base-10 numbers? Or is there something else happening at a deeper hardware level here?
Again, I'm sorry if this is a repeat or if the question seems completely silly; I just can't seem to understand how Python can take any two numbers and then add, divide, or multiply them. Cheers.
The numbers are always in binary. The computer just isn't capable of keeping them in a different numeral system (well, there are ternary computers, but those are a rare exception). The decimal system is just used for a "human representation", so that it is easier to read, but all the symbols (including the symbol "5" in the file; it's just a character) are mapped to numbers through some encoding (e.g. ASCII). These numbers are, of course, in binary; the computer just knows (through the specification of the encoding) that if there is a 1000001 in the context of some string of characters, it has to display the symbol A (in the case of ASCII). That's it: the computer doesn't know the number 58; to it, these are just two symbols, kept in memory as ones and zeros.
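A quick illustration of the difference between the character and the number (standard Python, nothing assumed):

# The character '5' and the number 5 are different things: the character
# is stored as its encoding value; int() turns it into an actual number.
print(ord('5'))  # 53 -- the ASCII/Unicode code of the character '5'
print(int('5'))  # 5  -- the numeric value the character represents
print(bin(5))    # '0b101' -- that value written out in binary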
Now, memory. This is where it gets interesting. All the instructions and the data are kept in one place as a large buffer of ones and zeros. These are passed to the CPU, which (using its instruction set) knows what the first chunk of ones and zeros (what we call a "word") means. The first word is an instruction, then the argument(s) follow. Depending on the instruction, different things happen. OK, so what happens if the instruction means "add these two numbers and store the result here"?
Well, now it's a hardware job. Adding binary numbers isn't that complicated; it's explained in the link you provided. But how does the CPU know the algorithm and how to execute it? Well, it uses a bunch of "full adders". What is a "full adder"? It is a hardware circuit that, given two input bits (plus a carry bit from the previous position), "adds" them and outputs a sum bit and a carry bit. But how does the full adder work? Well, it is constructed (physically) from half adders, which are constructed from standard AND and XOR gates. If you're familiar with the corresponding operators (& and ^ in Python), you probably know how they work. These gates are designed to work as expected using the physical properties of the elements (the most important of them being silicon) used in the electronic components. And I think this is where I'll stop.
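To make the circuit concrete, here is the same full-adder logic written with Python's & and ^ operators (a sketch of the logic, not anything the CPU literally runs):

# Full adder built from two half adders, using & (AND), ^ (XOR), | (OR).
# Inputs a, b and carry_in are single bits (0 or 1).
def full_adder(a, b, carry_in):
    s1 = a ^ b              # half adder 1: partial sum
    c1 = a & b              # half adder 1: carry
    total = s1 ^ carry_in   # half adder 2: final sum bit
    c2 = s1 & carry_in      # half adder 2: carry
    return total, c1 | c2   # sum bit, carry-out bit

print(full_adder(1, 1, 1))  # (1, 1): one plus one plus one is binary 11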

Automatically generate data for unit testing in Python

I have a module to test; the module includes a series of functions and simple classes.
I'm wondering if there are any attempts (i.e. a package) to automatically:
1) Generate Python code from an initial Python file containing the function definitions.
2) Have this generated code call the functions with random/parametric data as parameters.
It seems technically feasible using inspect and Python metaclasses, though probably limited to functions over numerical types (numpy arrays), because strings (e.g. URL inputs) would be impossible to generate meaningfully (only parametrized...).
EDIT: By random, I obviously mean "parametric random".
Suppose we have
def f(x1, x2, x3)
For each parameter xi of f:
    if type(xi) is a 1-D array ->
        run these tests: empty array, zeros array, random negative array,
        random positive array, high values, low values, integer array,
        real-number array, ordered array, equally spaced array, ...
    if type(xi) is int ->
        test 0, 1, 2, 3, 4, random values, negatives
Do people think such a project is possible using inspect and metaclasses (limited to numpy/numerical items)?
Suppose you have a very large library; things could be done in the background.
You might be thinking of fuzz testing, where a bunch of garbage data is submitted to a function to see if anything makes it behave badly. It sounds like the Hypothesis library will let you generate different test cases based on some parameters.
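For example, a minimal property-based test with Hypothesis might look like this (clip_to_unit is a made-up function standing in for the one under test):

from hypothesis import given
from hypothesis import strategies as st

def clip_to_unit(xs):
    # Hypothetical function under test: clamp every value into [0, 1].
    return [min(max(x, 0.0), 1.0) for x in xs]

# Hypothesis generates the edge cases automatically: empty lists,
# zeros, huge values, negatives, infinities, etc.
@given(st.lists(st.floats(allow_nan=False)))
def test_clip_stays_in_unit_interval(xs):
    for y in clip_to_unit(xs):
        assert 0.0 <= y <= 1.0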
I spent some time searching, and it seems this kind of project does not really exist (to my knowledge).
Technically, it would be a mix of packages (and issues):
Hypothesis: data generation for the inputs, plus running the code to detect crashes/errors
(without the invariant part of Hypothesis).
Jedi: static analysis of the code / inference of the types.
Type inference is a difficult issue in Python in general
(implementing type inference):
If the type is a number or an array of numbers:
boundaries exist and typical usage is clearly defined.
If the type is a string: inference is pretty difficult without human guessing.
Same for other types; context guessing is important.

How can I use Python to calculate very large numbers?

Like, really, really large numbers..
I'm trying out a variation of the Fibonacci series (the most significant variation being that it squares each term before feeding it in again, although there are a few other modifications as well), and I need to obtain a particular term whose value is too large for Python to handle. I'm talking well over a thousand digits, probably more. The program just starts and does nothing at all.
Is there any way I can use python to print such massive numbers, or can it be done with JavaScript (preferred) or any other language?
Program in question:
g = [0 for y in range(31)]
g[0] = 0
g[1] = 1
for x in range(2, 31):
    g[x] = pow(g[x-1] + g[x-2], 2)
print(g[30])
Your program does nothing because it has probably consumed all the memory. As for Python itself, it can handle very large numbers natively; check this link:
https://www.python.org/dev/peps/pep-0237/
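To see the growth rate without waiting, here is the same recurrence with a smaller cut-off:

# Python ints are arbitrary precision, so the squaring recurrence works
# directly; the digit count roughly doubles at every step.
g = [0, 1]
for x in range(2, 12):
    g.append((g[-1] + g[-2]) ** 2)
print(g[6])             # 749956
print(len(str(g[11])))  # digit count of the 12th term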

Python MemoryError when using long lists not occurring on Linux

I've come to work with a rather big simulation code which needs to store up to 189383040 floating-point numbers. I know this is a lot, but there isn't much that can be done to overcome it, like only looking at a portion of the numbers or processing them one by one.
I've written a short script, which reproduces the error so I could quickly test it in different environments:
noSnapshots = 1830
noObjects = 14784
objectsDict = {}
for obj in range(0, noObjects):
    objectsDict[obj] = [[], [], []]
    for snapshot in range(0, noSnapshots):
        objectsDict[obj][0].append([1.232143454, 1.232143454, 1.232143454])
        objectsDict[obj][1].append([1.232143454, 1.232143454, 1.232143454])
        objectsDict[obj][2].append(1.232143454)
It represents the structure of the actual code where some parameters (2 lists of length 3 each and 1 float) have to be stored for each of the 14784 objects at 1830 distinct locations. Obviously the numbers would be different each time for a different object, but in my code I just went for some randomly-typed number.
The thing, which I find not very surprising, is that it fails on Windows 7 Enterprise and Home Premium with a MemoryError. Even if I run the code on a machine with 16 GB of RAM it still fails, even though there's still plenty of memory left on the machine. So the first question would be: why does this happen? I'd like to think that the more RAM I've got, the more things I can store in memory.
I ran the same code on an Ubuntu 12.04 machine of my colleague's (again with 16 GB of RAM) and it finished with no problem. So another thing I'd like to know is: is there anything I can do to make Windows happy with this code, i.e. give my Python process more heap and stack memory?
Finally: Does anyone have any suggestions as to how to store plenty of data in memory in a manner similar to the one in the example code?
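(One thing worth checking, purely as an assumption: if the Windows machine runs a 32-bit Python build, the process is capped at roughly 2 GB regardless of installed RAM, while the Linux machine may be running a 64-bit build.)

# Quick check of the interpreter's bitness -- run on both machines.
import struct
import sys

print(struct.calcsize('P') * 8)  # pointer size in bits: 32 or 64
print(sys.maxsize > 2**32)       # True only on a 64-bit build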
EDIT
After the answer I changed the code to:
import numpy

noSnapshots = 1830
noObjects = int(14784*1.3)
objectsDict = {}
for obj in range(0, noObjects):
    objectsDict[obj] = [[], [], []]
    objectsDict[obj][0].append(numpy.random.rand(noSnapshots, 3))
    objectsDict[obj][1].append(numpy.random.rand(noSnapshots, 3))
    objectsDict[obj][2].append(numpy.random.rand(noSnapshots, 1))
and it works despite the larger amount of data, which has to be stored.
In Python, every float is an object on the heap, with its own reference count, etc. For storing this many floats, you really ought to use a dense representation of lists of floats, such as numpy's ndarray.
Also, because you are reusing the same float object, you are not estimating the memory use correctly: you have lists of references to one single float object. In a real case (where the floats are different) your memory use would be much higher. You really ought to use an ndarray.
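A rough illustration of the size difference (assuming a 64-bit CPython; the numbers are approximate):

import sys
import numpy as np

n = 1_000_000
as_list = [0.1] * n         # a list holds n pointers to float objects
as_array = np.full(n, 0.1)  # an ndarray holds n raw 8-byte doubles

print(sys.getsizeof(as_list))  # ~8 MB of pointers alone (one shared float)
print(as_array.nbytes)         # exactly 8 MB of actual data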
