I want to convert floating point sin values to fixed point values.
import numpy as np
Fs = 8000
f = 5
sample = 8000
x = np.arange(sample)
y = np.sin(2 * np.pi * f * x / Fs)
How can I easily convert these floating-point samples in y to fixed-point samples?
Each element should be 16 bits wide, with a 1-bit integer part and a 15-bit fractional part (Q1.15), so that I can pass these samples to a DAC chip.
To convert the samples from float to Q1.15, multiply the samples by 2 ** 15. However, as mentioned in the comments, you can't represent 1.0 in Q1.15, since the MSB is the sign bit. Therefore you should clamp your values to the range [-1.0, MAX_Q1_15], where MAX_Q1_15 = 1.0 - 2 ** -15. This can be done with a few helpful NumPy functions.
y_clamped = np.clip(y, -1.0, float.fromhex("0x0.fffe"))  # 0x0.fffe == 1.0 - 2**-15
y_fixed = np.multiply(y_clamped, 32768).astype(np.int16)  # scale by 2**15, truncate to int16
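As a quick sanity check (my own addition, not part of the original answer), you can undo the scaling and confirm the round-trip error stays at the LSB level:
y_restored = y_fixed / 32768.0       # back to float; one LSB is 2**-15
print(np.abs(y - y_restored).max())  # on the order of 2**-15 (truncation plus the clamp at +1.0)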
Although you may worry that this representation cannot exactly represent the value 1.0, it is close enough to do computation with. For example, if you were to square 1.0:
fmul_16x16 = lambda x, y: x * y >> 15
fmul_16x16(32767, 32767) # Result --> 32766
This is very close, with only 1 bit of error.
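In float terms the squared product converts back as follows:
print(32766 / 2 ** 15)  # 0.99993896484375, one LSB below the clamped maximum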
Hopefully this helps.
You can use fxpmath to convert float values to fractional fixed point. It supports NumPy arrays as inputs, so:
from fxpmath import Fxp
# your example code here
y_fxp = Fxp(y, signed=True, n_word=16, n_frac=15)
# plotting code here
15 fractional bits give a very fine amplitude resolution, so the quantization would be invisible in a plot; I plot Q5.4 instead to show the conversion in an exaggerated way.
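The exaggerated staircase can also be sketched with plain NumPy, without relying on the fxpmath API (my own illustration, assuming Q5.4 means 4 fractional bits):
import numpy as np
import matplotlib.pyplot as plt

Fs, f, sample = 8000, 5, 8000
x = np.arange(sample)
y = np.sin(2 * np.pi * f * x / Fs)

step = 2.0 ** -4                   # one LSB with 4 fractional bits (Q5.4)
y_q54 = np.round(y / step) * step  # snap each sample to the Q5.4 grid
plt.plot(x[:1600], y[:1600], label="float")     # one full period
plt.step(x[:1600], y_q54[:1600], label="Q5.4")
plt.legend()
plt.show()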
A function computes an integer y from a given integer x and float s as follows:
floor(x * s)
If x and y are known, how can I calculate s so that floor(x * s) is guaranteed to be exactly equal to y?
If I simply compute s = y / x, is there any chance that floor(x * s) won't equal y due to floating-point error?
"If I simply compute s = y / x, is there any chance that floor(x * s) won't equal y due to floating-point error?"
Yes, there is a chance it won't be equal. @Eric Postpischil offered a simple counterexample: y = 1 and x = 49.
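You can verify the counterexample directly (1/49 rounds just enough that the product falls below 1):
import math

x, y = 49, 1
s = y / x
print(x * s)              # 0.9999999999999999, just below 1
print(math.floor(x * s))  # 0, not the expected y = 1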
(For discussion, let us limit x,y > 0.)
To find a scale factor s for a given x, y that often works, we need to invert y = floor(x * s) mathematically, accounting for the multiplication's rounding error (see ULP) and the floor truncation.
# Pseudo code
e = ULP(x*s)
y < (x*s + 0.5*e) + 1
y >= (x*s - 0.5*e)
# Estimate e
est = ULP((float)y)
s_lower = ((float)y - 1 - 0.5*est)/(float)x
s_upper = ((float)y + 0.5*est)/(float)x
A candidate s will lie in the range s_lower < s <= s_upper.
Perform the above with higher-precision routines; then I recommend using the float closest to the midpoint of s_lower and s_upper.
Alternatively, an initial stab at s could use:
s_first_attempt = ((float)y - 0.5)/(float)x
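Putting the pieces together as runnable Python (a sketch of my own based on the bounds above; math.nextafter needs Python 3.9+): start from the simple first attempt and nudge s one representable float at a time until the floor matches.
import math

def find_scale(x: int, y: int) -> float:
    # Assumes x, y > 0 and that a valid s exists, which holds whenever
    # the spacing of floats around x*s is well below 1.
    s = y / x
    while math.floor(x * s) > y:          # overshot: step down one float
        s = math.nextafter(s, -math.inf)
    while math.floor(x * s) < y:          # undershot: step up one float
        s = math.nextafter(s, math.inf)
    return s

s = find_scale(49, 1)
print(s, math.floor(49 * s))  # the floor is exactly 1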
If we rephrase your question, you are asking whether y == floor(x * (y/x)) holds for integers x and y, where y/x is evaluated in Python as a 64-bit floating-point value and the subsequent multiplication also produces a 64-bit float.
Python's 64-bit floats follow the IEEE-754 standard, which gives them 15-17 significant decimal digits of precision. To perform the division and multiplication, both x and y are converted to floats, and each of these operations may lose up to another bit of precision in the worst case, but will certainly not gain any. As such, you can only expect 15-17 significant digits from this computation, which means y values above about 10^15 may show rounding errors.
More practically, one example of this can be (and you can reuse this code for other examples):
import numpy as np
print("{:f}".format(np.floor(1.3 * (1.1e24 / 1.3))))
#> 1100000000000000008388608.000000
I am trying to plot a set of extreme floating-point values that require high precision. It seems to me there are precision limits in matplotlib: it cannot go beyond a scale of about 1e28.
This is my code for displaying a graph.
import matplotlib.pyplot as plt
import numpy as np
x = np.array([1737100, 38380894.5188064386003616016502, 378029000.0], dtype=np.longdouble)
y = np.array([-76188946654889063420743355676.5, -76188946654889063419450832178.0, -76188946654889063450098993033.0], dtype=np.longdouble)
plt.scatter(x, y)
#coefficients = np.polyfit(x, y, 2)
#poly = np.poly1d(coefficients)
#new_x = np.linspace(x[0], x[-1])
#new_y = poly(new_x)
#plt.plot(new_x, new_y)
plt.xlim([x[0], x[-1]])
plt.title('U vs. r')
plt.xlabel('Distance r')
plt.ylabel('Total gravitational potential energy U(r)')
plt.show()
I am expecting the middle point to be located higher than the other two points, which requires very high precision. How can I configure this?
Your current issue is likely not with matplotlib but with np.longdouble. To check whether this is the case, run np.finfo(np.longdouble). The output is machine-dependent, but on my machine it reports that I'm using a float128 with the following description:
Machine parameters for float128
---------------------------------------------------------------
precision = 18 resolution = 1.0000000000000000715e-18
machep = -63 eps = 1.084202172485504434e-19
negep = -64 epsneg = 5.42101086242752217e-20
minexp = -16382 tiny = 3.3621031431120935063e-4932
maxexp = 16384 max = 1.189731495357231765e+4932
nexp = 15 min = -max
---------------------------------------------------------------
The precision is just an estimate (due to binary vs. decimal representation), but 18 digits is the float128 limit, and your specific numbers only start to differ beyond that.
An easy test is to print y[1]-y[0] and see if you get something other than 0.0.
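With the numbers from the question that test looks like this; the true difference (about 1.3e9) is below the spacing of representable values near 7.6e28, so it likely vanishes:
print(y[1] - y[0])  # likely prints 0.0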
An easy solution is to use Python ints, since Python has arbitrary-precision ints: they capture most of the difference (or all of it, using int(10*y) to keep the .5 endings). So something like this:
x = np.array([1737100, 38380894.5188064386003616016502, 378029000.0], dtype=np.longdouble)
y = [-76188946654889063420743355676, -76188946654889063419450832178, -76188946654889063450098993033]  # exact Python ints (the .5 dropped)
plt.scatter(x, [z - y[0] for z in y])  # plot exact integer differences from the first point
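To keep the .5 endings as well, the "int of 10*y" idea works like this (my own sketch): scale to exact integers, subtract the offset with exact int arithmetic, then scale back down.
y10 = [-761889466548890634207433556765,  # exact Python ints equal to 10*y
       -761889466548890634194508321780,
       -761889466548890634500989930330]
residual = [(v - y10[0]) / 10 for v in y10]  # [0.0, 1292523498.5, -29355637356.5]
plt.scatter(x, residual)  # the middle point is now clearly the highest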
Another solution is to represent the numbers from the start so that they require a more accessible precision (ie, with most of the offset removed). And another is to use a high precision float library. It depends on which way you want to go.
It's also worth noting that, at least for my system, which I think is typical, the default NumPy float is float64. For float64 the floating-point mantissa is 52 bits, whereas for float128 it's only 63 bits; in decimal, that's roughly 15 versus 18 digits. So there's not a great precision increase in going from np.float to np.float128. (Here's a discussion of why np.longdouble (or np.float128) sounds like it's going to add a lot of precision, but doesn't.)
(Finally, because this may cause confusion for some: if np.longdouble or np.float128 were useful for this problem, it's worth noting that the line in the question that sets the initial array wouldn't give the intended precision of np.longdouble. That is, y = np.array([-76188946654889063420743355676.5], dtype=np.longdouble) first creates a list of Python floats and then builds the NumPy array from it, so the precision is already lost in those Python floats. If longdouble were the solution, a different approach to initializing the array would be needed.)
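For reference, if longdouble precision were the right tool, the values would need to be created from strings so they never pass through 64-bit Python floats, e.g.:
y_ld = np.array([np.longdouble("-76188946654889063420743355676.5"),
                 np.longdouble("-76188946654889063419450832178.0"),
                 np.longdouble("-76188946654889063450098993033.0")])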
I want to add 2D Gaussian noise to each (x,y) point of a list that I have.
That is why I want to create a noise vector with a uniformly random direction over [0, 2*pi) and a Gaussian-distributed magnitude drawn from N(0, sigma^2).
How can I generate a vector in Python only specifying the direction and its magnitude?
Well, this is not hard to do:
import numpy as np

n = 100
sigma = 1.0
phi = 2.0 * np.pi * np.random.random(n)  # uniform direction in [0, 2*pi)
r = np.random.normal(loc=0.0, scale=sigma, size=n)  # Gaussian magnitude
x = r * np.cos(phi)
y = r * np.sin(phi)
You can generate two vectors, one for the magnitude and another for the phase. Then you use both to get what you want.
import numpy as np
import math
sigma_squared = 0.01  # Change to whatever value you want
num_elements = 10  # Size of the vector you want
magnitude = math.sqrt(sigma_squared) * np.random.randn(num_elements)
phase = 2 * np.pi * np.random.random_sample(num_elements)
# This gives a vector with a Gaussian magnitude and a random phase between 0 and 2*pi
noise = magnitude * np.exp(1j * phase)
I find it easier to work with a single vector of complex numbers, but since you have individual x and y values, you can get a noise_x and a noise_y vector with
noise_x = noise.real
noise_y = noise.imag
Note: I'm assuming you can use the numpy library, which makes things much easier. If not, you will need a loop to generate each element. To generate a single magnitude sample you can use random.gauss(0, sigma), while 2*math.pi*random.random() generates a phase sample. Then proceed as before to build a complex number, from which you can take the real and imaginary parts.
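A minimal standard-library version of that loop (my own sketch) could look like:
import cmath
import math
import random

sigma = math.sqrt(0.01)
noise = []
for _ in range(10):
    r = random.gauss(0, sigma)           # Gaussian magnitude sample
    phi = 2 * math.pi * random.random()  # uniform phase in [0, 2*pi)
    z = r * cmath.exp(1j * phi)
    noise.append((z.real, z.imag))       # (noise_x, noise_y) pairs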
I fit a polynomial to points from list data using the polyfit() function:
import numpy as np
data = [1,4,5,7,8,11,14,15,16,19]
x = np.arange(0,len(data))
y = np.array(data)
z = np.polyfit(x,y,2)
print (z)
print ("{0}x^2 + {1}x + {2}".format(*z))
Output:
[0.00378788 1.90530303 1.31818182]
0.003787878787878751x^2 + 1.9053030303030298x + 1.3181818181818175
How can I get the fit with the coefficients rounded, for example, to three decimal places? That is, to get:
[0.004 1.905 1.318]
0.004x^2 + 1.905x + 1.318
There is no option in the polyfit method for rounding. IIUC, you can apply round after calling polyfit:
import numpy as np
data = [1,4,5,7,8,11,14,15,16,19]
x = np.arange(0,len(data))
y = np.array(data)
z = np.polyfit(x, y, 2).round(decimals=3)
print(repr(z))
# array([0.004, 1.905, 1.318])
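To reproduce the formatted string from the question with the rounded coefficients:
print("{0}x^2 + {1}x + {2}".format(*z))
# 0.004x^2 + 1.905x + 1.318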
For values exactly halfway between rounded decimal values, NumPy rounds to the nearest even value: thus 1.5 and 2.5 round to 2.0, and -0.5 and 0.5 round to 0.0. Results may also be surprising due to the inexact representation of decimal fractions in the IEEE floating-point standard and errors introduced when scaling by powers of ten. -- cited from the numpy.around documentation
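A quick demonstration of that round-half-to-even behaviour:
print(np.round([0.5, 1.5, 2.5, 3.5]))
# [0. 2. 2. 4.]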
Is there any method to generate non-repeating random floats in a range, with a given size and standard deviation?
I generate e.g. 1000 random floats between a min and max value:
randSize = 1000
randValues = np.random.uniform(low=myMinVal, high=myMaxVal, size=(randSize,))
But I want to generate only numbers that have a standard deviation of less than 0.2 in that range.
Your question is a bit unclear. My understanding is that you want to draw 1000 floats from a normal distribution with mean 0.0 and sigma 0.2; to me the easiest way is:
import numpy as np
mu, sigma = 0, 0.2  # mean and standard deviation
s = np.random.normal(mu, sigma, 1000)
Now, as you know, the probability of obtaining the same float twice is very low; but if uniqueness is a requirement, an easy way to tackle it is:
dim = 1000
# Draw twice as many samples as needed, deduplicate via a set, then trim to size
original_list = list(set(np.random.normal(mu, sigma, 2 * dim).tolist()))[:dim]
Explanation: I create an array of floats double the required size, convert it to a list and then to a set. By definition a set contains unique values, so potential duplicates are deleted (note the ordering is lost in the process). Then I go back to a list and cut it to the size you want: dim.
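A quick check of the result (my addition; with float64 samples, duplicates among 2000 draws are essentially impossible, so the first assertion holds in practice):
assert len(original_list) == dim                      # enough unique samples survived
assert len(set(original_list)) == len(original_list)  # all values are distinct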