I came across one line of code that I want to convert from numpy syntax to regular Python 2.7 syntax:
SunAz=SunAz + (SunAz < 0) * 360
source:
https://github.com/Sandia-Labs/PVLIB_Python/blob/master/pvlib/pvl_ephemeris.py#L147
If we pretend that the numpy array is one dimensional, can it be translated to regular Python 2.7 syntax like so:
newSunAz = []
for item in SunAz:
    if item < 0:
        newItem = item + item*360
        newSunAz.append(newItem)
    else:
        newSunAz.append(item)
??
Thank you for the help.
I'm not sure that this would be the translation. In the line
SunAz=SunAz + (SunAz < 0) * 360
the expression (SunAz < 0) creates a boolean array, True where the angle is negative and False otherwise. Multiplying False by a constant gives 0, and True is interpreted as 1. So the line says, "shift the angle by 360 degrees if it is negative, otherwise leave it be".
So a more literal translation would be the following:
SunAz = [angle + 360 if angle < 0 else angle for angle in SunAz]
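For example (the sample values here are just for illustration):
>>> import numpy as np
>>> SunAz = np.array([-90.0, 45.0, -10.0])
>>> SunAz + (SunAz < 0) * 360
array([270.,  45., 350.])
>>> [angle + 360 if angle < 0 else angle for angle in [-90.0, 45.0, -10.0]]
[270.0, 45.0, 350.0]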
Try this:
new_sun_az = [i+360 if i < 0 else i for i in sun_az]
The main difference is that with plain Python lists most operators apply to the list object as a whole and return a single result, while with numpy arrays they return a new array in which each item is the result of applying the operation to the corresponding item of the original array.
>>> import numpy as np
>>> plainlist = range(5)
>>> plainlist
[0, 1, 2, 3, 4]
>>> plainlist > 5 # single result
True
>>> nparray = np.array(plainlist)
>>> nparray
array([0, 1, 2, 3, 4])
>>> nparray > 5 # array of results
array([False, False, False, False, False], dtype=bool)
>>>
[update]
Mike's answer is right. My original answer was:
new_sun_az = [i+i*360 if i > 0 else i for i in sun_az]
Related
I am evaluating a long summation by computing each term separately in a for-loop (Python 3.5 + NumPy 1.15.4). However, I got a surprising result when comparing manual term-by-term evaluation with the for-loop. See the MWE below.
S = sum(c_i * x^i) for i = 0..n
Main questions:
Where does the difference in the outputs y1 and y2 originate from?
How could I alter the code such that the for-loop yields the expected result (y1==y2)?
Comparing dy1 and dy2:
dy1:
[-1.76004137e-02 3.50290845e+01 1.50326037e+01 -7.25045852e+01
2.08908445e+02 -3.31104542e+02 2.98005855e+02 -1.53154111e+02
4.18203833e+01 -4.68961704e+00 0.00000000e+00]
dy2:
[-1.76004137e-02 3.50290845e+01 1.50326037e+01 -7.25045852e+01
-3.27960559e-01 -4.01636743e-04 2.26525295e-07 4.80637463e-10
1.93967535e-13 -1.93976497e-17 -0.00000000e+00]
dy1==dy2:
[ True True True True False False False False False False True]
Thanks!
MWE:
import numpy as np
coeff = np.array([
[ 0.000000000000E+00, -0.176004136860E-01],
[ 0.394501280250E-01, 0.389212049750E-01],
[ 0.236223735980E-04, 0.185587700320E-04],
[-0.328589067840E-06, -0.994575928740E-07],
[-0.499048287770E-08, 0.318409457190E-09],
[-0.675090591730E-10, -0.560728448890E-12],
[-0.574103274280E-12, 0.560750590590E-15],
[-0.310888728940E-14, -0.320207200030E-18],
[-0.104516093650E-16, 0.971511471520E-22],
[-0.198892668780E-19, -0.121047212750E-25],
[-0.163226974860E-22, 0.000000000000E+00]
]).T
c = coeff[1] # select appropriate coeffs
x = 900 # input
# manual calculation
y = c[0]*x**0 + c[1]*x**1 + c[2]*x**2 + c[3]*x**3 + c[4]*x**4 + \
c[5]*x**5 + c[6]*x**6 + c[7]*x**7 + c[8]*x**8 + c[9]*x**9 + c[10]*x**10
print('y:',y)
# calc terms individually
dy1 = np.zeros(c.size)
dy1[0] = c[0]*x**0
dy1[1] = c[1]*x**1
dy1[2] = c[2]*x**2
dy1[3] = c[3]*x**3
dy1[4] = c[4]*x**4
dy1[5] = c[5]*x**5
dy1[6] = c[6]*x**6
dy1[7] = c[7]*x**7
dy1[8] = c[8]*x**8
dy1[9] = c[9]*x**9
dy1[10] = c[10]*x**10
# calc terms in for loop
dy2 = np.zeros(len(c))
for i in np.arange(len(c)):
    dy2[i] = c[i]*x**i
# summation and print
y1 = np.sum(dy1)
print('y1:',y1)
y2 = np.sum(dy2)
print('y2:',y2)
Output:
y: 37.325915370853856
y1: 37.32591537085385
y2: -22.788859384118823
It seems that raising a Python int to the power of a numpy integer (of a specific size) converts the result to a numpy integer of the same size.
Example:
type(900**np.int32(10))
returns numpy.int32 and
type(900**np.int64(10))
returns numpy.int64
From this Stack Overflow question it seems that while Python ints are variable-sized, numpy integers are not (the size is specified by the type, for example np.int32 or np.int64). So while Python's range function returns integers of variable size (Python's int type), np.arange returns integers of a specific type (if not specified, the type is inferred).
Comparing Python integer math vs. numpy integer math:
900**10 returns 348678440100000000000000000000
while 900**np.int32(10) returns -871366656
It looks like you get integer overflow via the np.arange function because the numpy integer dtype (in this case inferred as np.int32) is too small to store the resulting value.
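You can see this directly by checking the type of the index each construct yields (the exact numpy type depends on the platform, so this is just illustrative):
>>> import numpy as np
>>> type(np.arange(11)[10])   # fixed-size numpy integer
<class 'numpy.int64'>
>>> type(range(11)[10])       # arbitrary-precision Python int
<class 'int'>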
Edit:
In this specific case, using np.arange(len(c), dtype = np.uint64) seems to output the right values:
dy2 = np.zeros(len(c))
for i in np.arange(len(c), dtype=np.uint64):
    dy2[i] = c[i]*x**i
dy1 == dy2
Outputs:
array([ True, True, True, True, True, True, True, True, True,
True, True])
Note: the accuracy might suffer using numpy in this case (int(900**np.uint64(10)) returns 348678440099999970966892445696, which is less than 900**10), so if that is of importance, I'd still opt to use the Python built-in range function.
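For reference, a minimal version of the loop using the built-in range, which keeps x**i as exact Python integer math:
dy2 = np.zeros(len(c))
for i in range(len(c)):
    dy2[i] = c[i]*x**i  # i is a plain Python int, so x**i never overflows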
I am using Python 3.x, and I am trying to do mutation on a binary string: flip one bit of the elements from 0 to 1 or from 1 to 0 at random. I tried some methods, but they didn't work and I don't know where the problem is:
x=[0, 0, 0, 0, 0]
def mutation(x, muta):
    for i in range(len(x)):
        if random.random() < muta:
            x[i] = type(x[i])(not x[i])
    return x,

print(x)
The output for example should be x=[0, 0, 0, 1, 0] or x=[1, 0, 0, 0, 0] and so on....
Also, I tried this one:
MUTATION_RATE = 0.5
CHROMO_LEN = 6

def mutate(x):
    x = ""
    for i in range(CHROMO_LEN):
        if random.random() < MUTATION_RATE:
            if x[i] == 1:
                x += 0
            else:
                x += 1
        else:
            x += x[i]
    return x

print(x)
Any suggestion or advice would be appreciated.
Are you sure you're calling the function before printing x:
def mutation(x):
    # your code without the trailing comma

mutation(x)
print(x)
In Python, creating a new list is usually preferable to mutating an old one. I would write your first function like this (I converted the integers to booleans because you're just flipping them):
x = [False, False, False, False]
def mutation(x, muta):
    return [not e if random.random() < muta else e
            for e in x]
Change x by assigning to it again:
x = mutation(x, .5)
Your original function works if you remove the comma after the return:
def mutation(x, muta):
    for i in range(len(x)):
        if random.random() < muta:
            x[i] = type(x[i])(not x[i])
    return x
x = [False, False, False, False]
mutation(x, .5)
Out[8]: [False, False, True, False]
mutation(x, .5)
Out[9]: [True, True, True, False]
You could also use Python's XOR operator to flip bits, which toggles between 1 and 0:
x[1] = x[1] ^ 1
See also: Python XOR preference: bitwise operator vs. boolean operators
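A minimal sketch combining that with the loop from the question (the name mutate_xor is just for illustration):
import random

def mutate_xor(x, muta):
    for i in range(len(x)):
        if random.random() < muta:
            x[i] = x[i] ^ 1  # XOR with 1 flips 0 -> 1 and 1 -> 0
    return x

print(mutate_xor([0, 0, 0, 0, 0], 0.5))  # e.g. [0, 1, 0, 0, 1]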
I occasionally come across this Python construction: number + array.
I wonder what the return value is: a number or an array? What does it do?
Example, where I met it is this:
def __init__(self, n):
    self.wins = np.zeros(n)
    self.trials = np.zeros(n)

def sample(self, n=1):
    for k in range(n):
        choice = np.argmax(rbeta(1 + self.wins, 1 + self.trials - self.wins))
        choices[k] = choice
    return
Note: I know almost nothing about Python
Your question is not about syntax per se (addition is nothing special syntax-wise), but about the addition methods of numpy arrays. For numpy array objects, addition of a scalar is implemented so that the result is an array in which the scalar has been added to every element.
In [1]: import numpy as np
In [2]: a = np.arange(0, 5)
In [3]: a
Out[3]: array([0, 1, 2, 3, 4])
In [4]: 1+a
Out[4]: array([1, 2, 3, 4, 5])
Suggested reading:
Python Data Model, specifically the part about object.__add__ and object.__radd__;
Tentative NumPy tutorial.
This isn't number + array, it is scalar + numpy array.
It adds the scalar to each element of the numpy array.
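For instance, the 1 + self.wins expression in the question broadcasts like this (values assumed here just for illustration):
>>> import numpy as np
>>> wins = np.zeros(3)
>>> 1 + wins    # the scalar 1 is added to every element
array([1., 1., 1.])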
How do I write conditions in a function (k_over_iq)?
dt_for_all_days_np is a numpy array of numbers.
def k_over_iq(dt):
    if dt != 0:
        return 0.7*(1-e**(-0.01*dt**2.4))
    else:
        return 1

k_over_iq_i = k_over_iq(dt_for_all_days_np)
I get the following error:
if dt !=0: ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
if dt != 0: won't work if dt is a numpy array. The if tries to get a single boolean out of the array, and as the error message warns, that's ambiguous: is array([True, False]) True or False?
To get around that in a vectorized fashion, two common ways are to use where or to use boolean indexing to patch.
Method #1, np.where
>>> dt = np.array([0,1,2,3])
>>> np.where(dt != 0, 0.7*(1-np.exp(-0.01*dt**2.4)), 1)
array([ 1. , 0.00696512, 0.03598813, 0.09124601])
This uses the function whenever dt != 0:
>>> dt != 0
array([False, True, True, True], dtype=bool)
and 1 otherwise.
Method #2: boolean indexing to patch
Compute the function everywhere and then fix the values that are wrong.
>>> b = 0.7*(1-np.exp(-0.01*dt**2.4))
>>> b
array([ 0. , 0.00696512, 0.03598813, 0.09124601])
>>> b[dt == 0] = 1
>>> b
array([ 1. , 0.00696512, 0.03598813, 0.09124601])
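Putting Method #1 to work on the function from the question, a minimal sketch (assuming dt_for_all_days_np from the question and non-negative dt values):
import numpy as np

def k_over_iq(dt):
    dt = np.asarray(dt, dtype=float)
    # evaluate the formula for every element, then substitute 1 wherever dt == 0
    return np.where(dt != 0, 0.7*(1 - np.exp(-0.01*dt**2.4)), 1.0)

k_over_iq_i = k_over_iq(dt_for_all_days_np)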
Indentation is the problem in your function.
I recommend reading this: Python: Myths about Indentation
def k_over_iq(dt):
    if dt != 0:
        return 0.7*(1-e**(-0.01*dt**2.4))
    else:
        return 1

k_over_iq_i = k_over_iq(dt_for_all_days_np)
I have a big data set of floating point numbers. I iterate through them and evaluate np.log(x) for each of them.
I get
RuntimeWarning: divide by zero encountered in log
I would like to get around this and return 0 if this error occurs.
I am thinking of defining a new function:
def safe_ln(x):
    # returns: ln(x) but replaces -inf with 0
    l = np.log(x)
    # if l == -inf:
    l = 0
    return l
Basically, I need a way of testing whether the output is -inf, but I don't know how to proceed.
Thank you for your help!
You are using an np function, so I can safely guess that you are working on a numpy array. In that case, the most efficient way to do this is to use the where function instead of a for loop:
myarray = np.random.randint(10, size=10)
result = np.where(myarray > 0, np.log(myarray), 0)
otherwise you can simply use the log function and then patch the hole:
myarray = np.random.randint(10, size=10)
result = np.log(myarray)
result[result == -np.inf] = 0
The np.log function correctly returns -inf when applied to a value of 0, so are you sure you want to return 0 there? If somewhere you have to revert to the original value, you are going to run into problems, since you'd be changing zeros into ones...
Since the log of x=0 is minus infinity, I'd simply check whether the input value is zero and return whatever you want there:
def safe_ln(x):
    if x <= 0:
        return 0
    return math.log(x)
EDIT: small edit: you should check for all values smaller than or equal to 0.
EDIT 2: np.log is of course meant to operate on a numpy array; for single values you should use math.log. This is how the above function looks with numpy:
def safe_ln(x, minval=0.0000000001):
    return np.log(x.clip(min=minval))
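For example, with the default minval the zero entries are clipped to 1e-10 before taking the log (sample values chosen just for illustration):
>>> safe_ln(np.array([0.0, 1.0, 10.0]))
array([-23.02585093,   0.        ,   2.30258509])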
You can do this:
def safe_ln(x):
    # np.log(0) does not raise an exception by default; it warns and returns -inf,
    # so turn the divide-by-zero warning into an error and catch that instead
    try:
        with np.errstate(divide='raise'):
            l = np.log(x)
    except FloatingPointError:
        l = 0
    return l
I like to use sys.float_info.min as follows:
>>> import numpy as np
>>> import sys
>>> arr = np.linspace(0.0, 1.0, 3)
>>> print(arr)
[0. 0.5 1. ]
>>> arr[arr < sys.float_info.min] = sys.float_info.min
>>> print(arr)
[2.22507386e-308 5.00000000e-001 1.00000000e+000]
>>> np.log10(arr)
array([-3.07652656e+02, -3.01029996e-01, 0.00000000e+00])
Other answers have also introduced small positive values, but I prefer to use the smallest possible positive value to make the approximation more accurate.
The answer given by Enrico is nice, but both solutions result in a warning:
RuntimeWarning: divide by zero encountered in log
As an alternative, we can still use the where function but only execute the main computation where it is appropriate:
# set up the same test data as in Enrico's answer
myarray = np.random.randint(10, size=10)

# alternative implementation -- a bit more typing but avoids warnings.
loc = np.where(myarray > 0)
result2 = np.zeros_like(myarray, dtype=float)
result2[loc] = np.log(myarray[loc])

# answer from Enrico...
result = np.where(myarray > 0, np.log(myarray), 0)

# check it is giving the right solution:
print(np.allclose(result, result2))
My use case was for division, but the principle is clearly the same:
x = np.random.randint(10, size=10)
divisor = np.ones(10,)
divisor[3] = 0 # make one divisor invalid
y = np.zeros_like(divisor, dtype=float)
loc = np.where(divisor>0) # (or !=0 if your data could have -ve values)
y[loc] = x[loc] / divisor[loc]
use exception handling:
In [27]: def safe_ln(x):
try:
return math.log(x)
except ValueError: # np.log(x) might raise some other error though
return float("-inf")
....:
In [28]: safe_ln(0)
Out[28]: -inf
In [29]: safe_ln(1)
Out[29]: 0.0
In [30]: safe_ln(-100)
Out[30]: -inf
You could do:
import warnings

def safe_ln(x):
    # returns: ln(x) but replaces -inf with 0
    # np.log emits a RuntimeWarning instead of raising, so turn that warning
    # into an exception before trying to catch it
    try:
        with warnings.catch_warnings():
            warnings.simplefilter("error", RuntimeWarning)
            l = np.log(x)
    except RuntimeWarning:
        l = 0
    return l
For those looking for an np.log solution that takes an np.ndarray and nudges up only the zero values:
import numpy as np
def smarter_nextafter(x: np.ndarray) -> np.ndarray:
    safe_x = np.where(x != 0, x, np.nextafter(x, 1))
    return np.log(safe_x)

def clip_usage(x: np.ndarray, safe_min: float | None = None) -> np.ndarray:
    # Inspiration: https://stackoverflow.com/a/13497931/
    # np.finfo(...).tiny is the smallest positive normal float, the numpy
    # counterpart of sys.float_info.min
    clipped_x = x.clip(min=safe_min or np.finfo(x.dtype).tiny)
    return np.log(clipped_x)

def inplace_usage(x: np.ndarray, safe_min: float | None = None) -> np.ndarray:
    # Inspiration: https://stackoverflow.com/a/62292638/
    x[x == 0] = safe_min or np.finfo(x.dtype).tiny
    return np.log(x)
Or if you don't mind nudging all values and like bad big-O runtimes:
def brute_nextafter(x: np.ndarray) -> np.ndarray:
    # Just for reference, don't use this
    while not x.all():
        x = np.nextafter(x, 1)
    return np.log(x)
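A quick usage check of the helpers above on an array containing a zero (hypothetical example; the exact numbers depend on the dtype):
x = np.array([0.0, 0.5, 1.0])
print(smarter_nextafter(x))     # the zero entry becomes the log of the smallest positive float
print(clip_usage(x.copy()))     # the zero entry is first clipped up to np.finfo(float).tiny
print(inplace_usage(x.copy()))  # same idea, but modifies the input array in place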