I have a 1D NumPy array X with both positive and negative elements. I'm isolating their signs as follows:
idxN, idxP = X<0, X>=0
Now I want to create an array whose values depend on the sign of X. I tried to compute it as follows, but I get the syntax error shown below.
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
y(idxP) = X(idxP)+[math.log(np.exp(-x)+1) for x in X(idxP)];
Is the LHS assignment the culprit?
Thanks.
[Edit] The full code is as follows:
y = np.zeros(X.shape)
idxN, idxP = X<0, X>=0
y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
y(idxP) = X(idxP)+[math.log(np.exp(-x)+1) for x in X(idxP)];
return y
The traceback is:
  File "<ipython-input-63-9a4488f04660>", line 1
    y(idxN) = [math.log(1+np.exp(x)) for x in X(idxN)]
            ^
SyntaxError: can't assign to function call
In some programming languages, like Matlab, indexes are referenced with parentheses. In Python, indexes are written with square brackets.
If I have a list, mylist = [1,2,3,4], I reference elements like this:
> mylist[1]
2
When you say y(idxN), Python thinks you are trying to call a function named y, passing idxN as an argument.
I got it to work like this:
y = np.zeros(X.shape)
idxN, idxP = X<0, X>=0
# Index with square brackets, and assign back into y directly so the results
# are actually stored (rebinding temporary names like yn would not update y)
y[idxN] = [math.log(1 + np.exp(x)) for x in X[idxN]]
y[idxP] = X[idxP] + [math.log(np.exp(-x) + 1) for x in X[idxP]]
If there is a better way, please let me know. Thanks.
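One simplification worth noting: both branches compute the same quantity, log(1 + exp(x)), merely rearranged for numerical stability depending on the sign of x. NumPy's logaddexp does exactly that in one vectorized call, so a minimal sketch (assuming that stable softplus is indeed the goal) is:

import numpy as np

def softplus(X):
    # log(exp(0) + exp(X)) == log(1 + exp(X)), computed stably for either sign
    return np.logaddexp(0, X)

y = softplus(X)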
I'm new to programming and currently learning Python. The code I wrote is:
x = 0
for x in "Helper":
    x = x + 1
print(x)
But I get an error message saying "TypeError: cannot concatenate 'str' and 'int' objects" on line 82.
Can anyone explain what I did wrong?
As the comments say, don't use "x" for BOTH variables.
Do something like this:
x = 0
for c in "Helper":
    x = x + 1
print(x)
Result:
6
which is the number of characters in the string "Helper".
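(If all you need is the character count, the built-in len gives it directly:)

print(len("Helper"))  # 6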
x is initialized as an int, but you are trying to add two different types (int and str); the types on both sides of + must match.
So try this:
y = ""
for x in "Helper":
    y = y + x
print(y)
Whenever I run my code I keep getting the error "cannot unpack non-iterable float object". I'm confused about where the error is coming from. Do I have to use the iterating variable in some way?
def DEADBEEF(n):
    count = 0
    for i in range(n):
        x, y = np.random.uniform(0, 1)
        if np.sqrt(x**2 + y**2) <= 1:
            count = count + 1
    answer = count / 100
    return answer
holder = DEADBEEF(100)
np.random.uniform returns a single float as long as you don't pass the size parameter.
If you want to use x, y = ..., you must supply exactly two values on the right side of the assignment.
If you want to assign a float to both x and y using np.random.uniform, try using the size parameter:
x, y = np.random.uniform(0, 1, size=2)
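Putting that into the whole function, a minimal sketch (with count/100 changed to count/n, which appears to be the intent, so the estimate scales with the sample size):

import numpy as np

def DEADBEEF(n):
    count = 0
    for _ in range(n):
        # size=2 returns an array of two floats, which unpacks into x and y
        x, y = np.random.uniform(0, 1, size=2)
        if np.sqrt(x**2 + y**2) <= 1:
            count = count + 1
    return count / n  # fraction of sampled points inside the quarter circle

holder = DEADBEEF(100)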
I am a complete beginner in coding and was trying problems from Project Euler in Python. Could anyone please explain what is wrong with my code?
Problem: Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 terms will be:
1, 2, 3, 5, 8, 13, 21, 34, 55, 89, ...
By considering the terms in the Fibonacci sequence whose values do not exceed four million, find the sum of the even-valued terms.
a = []
x = 1
y = 2
for y in range(1, 400000):
    if x % 2:
        a.append(x)
    x = y, y = x + y
print(sum(a))
I received the error message "cannot unpack non-iterable int object". What does this message mean and how to avoid it?
You get that error because the line x = y, y = x + y is parsed as a chained assignment whose second target is the tuple y, y, so Python tries to unpack the int x + y into it. You need a new line between x = y and y = x + y.
However, since x would then be assigned the value of y before y is updated, the better option is to change the line to x, y = y, x + y, as JackTheCrab suggested in a comment.
The following example also illustrates the error:
price, amount = ["10$", 15] #iterable that can be unpacked
price, amount = 15 # this is just an int and this row will throw error "Cannot unpack non-iterable int object"
x = y, y = x + y
When I take away the addition from the end and the assignment from the beginning, the error is still there:
y, y = x
Output: an error about a non-iterable int, because you can only unpack an iterable.
iterable = [0, 1, 2]
zero = iterable[0]
one = iterable[1]
two = iterable[2]
This is tedious to use, because you have to type iterable[n] again and again. That's why you can unpack such iterables like this:
iterable = [0, 1, 2]
zero, one, two = iterable
This will produce the same result as the above long version.
Back to your code,
x = y, y = x + y
the Python interpreter treats it as a chained assignment with two targets: x and the tuple y, y. It evaluates x + y, assigns it to x, and then tries to unpack that same int into y, y, which of course is not possible.
To put two statements on one line, you have to separate them with a semicolon ;, not a comma. But if you do just that, x will take the value of y first, and y will then become double its old value, which surely is not what you wanted. Maybe you should do it like this?
a = []
x = 1
y = 2
for _ in range(1, 400000):  # reusing y as the loop variable would overwrite it on every pass
    if x % 2:
        a.append(x)
    x, y = y, x + y
    # the line above is equivalent to:
    # temp = y
    # y = x + y
    # x = temp
print(sum(a))
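As a side note, the problem statement asks for the even-valued terms not exceeding four million, whereas this loop runs a fixed 399999 times and appends x when x % 2 is truthy, i.e. when x is odd. A sketch closer to the statement would be:

a = []
x, y = 1, 2
while x <= 4000000:   # terms whose values do not exceed four million
    if x % 2 == 0:    # even-valued terms
        a.append(x)
    x, y = y, x + y
print(sum(a))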
I'm writing a program that evaluates the power series \sum_{m=0}^{\infty} a[m] x^m, where a[m] is recursively defined: a[m] = f(a[m-1]). I am generating symbols as follows:
a = list(sympy.symbols(' '.join([('a%d' % i) for i in range(10)])))
for i in range(1, LIMIT):
    a[i] = f_recur(a[i-1], i-1)
This lets me refer to the symbols a0,a1,...,a9 using a[0],a[1],...,a[9], and a[m] is a function of a[m-1] given by f_recur.
Now I hope to code up the summation as follows:
m, x, y = sympy.symbols('m x y')
y = sympy.Sum(a[m]*x**m, (m, 0, 10))
But m is not an integer, so a[m] raises an exception.
In this situation, where symbols are stored in a list, how would you code the summation? Thanks for any help!
SymPy's Sum is designed as a sum with a symbolic index. You want a sum with a concrete index running through 0, ..., 9. This could be Python's sum:
y = sum([a[m]*x**m for m in range(10)])
or, which is preferable from the performance point of view (relevant issue)
y = sympy.Add(*[a[m]*x**m for m in range(10)])
In either case, m is not a symbol but an integer.
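For instance, combined with the question's symbol list (written here with the 'a0:10' range shorthand, which creates the same ten symbols in one call):

import sympy

x = sympy.symbols('x')
a = list(sympy.symbols('a0:10'))  # a0, a1, ..., a9
y = sympy.Add(*[a[m] * x**m for m in range(10)])
# y is a0 + a1*x + a2*x**2 + ... + a9*x**9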
I have a work-around that does not use sympy.Sum:
x = sympy.symbols('x')
y = a[0]*x**0
for i in range(1, LIMIT):
    y += a[i]*x**i
This does the job, but sympy.Sum is not used.
Use IndexedBase instead of Symbol:
>>> a = IndexedBase('a')
>>> Sum(x**m*a[m],(m,1,3))
Sum(a[m]*x**m, (m, 1, 3))
>>> _.doit()
a[1]*x + a[2]*x**2 + a[3]*x**3
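A sketch of how that could replace the symbol list in the original setup (m is declared with integer=True, as is conventional for a summation index; concrete values, or recursively built expressions, can then be substituted for the a[i]):

from sympy import IndexedBase, Sum, symbols

x = symbols('x')
m = symbols('m', integer=True)
a = IndexedBase('a')

y = Sum(a[m] * x**m, (m, 0, 9)).doit()  # a[0] + a[1]*x + ... + a[9]*x**9
y0 = y.subs({a[i]: 2**i for i in range(10)})  # 2**i as hypothetical example values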
Here's what I'm doing in a SymPy session:
from sympy import *
xi1,xi2,xi3 = symbols('xi_1,xi_2,xi_3')
N1 = 1-xi1-xi2-xi3
N2 = xi3
N3 = xi1
N4 = xi2
x1,x2,x3,x4 = symbols('x_1, x_2, x_3, x_4')
x = N1*x1+N2*x2+N3*x3+N2*x4
subdict = {x1:Matrix([0.025,1.0,0.0]), x2 : Matrix([0,1,0]), x3:Matrix([0, 0.975, 0]), x4:Matrix([0,0.975,0.025])}
test = x.subs(subdict)
test.subs({xi1:1, xi2:0,xi3:0})
To me, we are simply multiplying some scalars with some vectors and then adding them up. SymPy, however, disagrees and throws a ginormous error whose last line is:
TypeError: cannot add <class 'sympy.matrices.immutable.ImmutableDenseMatrix'> and <class 'sympy.core.numbers.Zero'>
Why is this a problem? Is there a workaround for what I am trying to do?
I suspect that what is happening is that a 0 is substituted before the matrix, which turns 0*matrix_symbol into the scalar Zero instead of a matrix of zeros. Terms that end up being matrices cannot be added to that Zero, hence the error. My attempts at using the simultaneous flag, or xreplace instead of subs, give the same result (on sympy.live.org). I then tried doing the substitutions in reverse order, passing them as a list with the matrices first; that didn't work either. It looks like subs assumes that 0*foo is 0. An issue should be raised on the SymPy issue tracker if there isn't one already.
The workaround is to do the scalar substitutions first, allowing the zero terms to disappear. Then do a subs with the matrices. So this will require 2 calls to subs.
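Concretely, for the session above, that would look something like this (a sketch; the scalar substitution collapses the zero coefficients while everything is still symbolic, and only then are the matrices brought in):

test = x.subs({xi1: 1, xi2: 0, xi3: 0})  # scalars first: terms like 0*x1 vanish
result = test.subs(subdict)              # then substitute the Matrix values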
A true, hackish workaround for doing substitution with 0 is this:
from sympy import Mul, randMatrix, symbols

x, y = symbols('x y')

def remul(m):
    # multiply the args pairwise so 0*Matrix yields a zero Matrix, not the scalar Zero
    rv = 1
    for i in m.args:
        rv *= i
    return rv
expr = x*y
mat = expr.subs(x, randMatrix(2)) # replace x with matrix
expr = mat.replace( # replace y with scalar 0
    lambda m: m.is_Mul,
    lambda m: remul(Mul(*[i.subs(y, 0) for i in m.args], evaluate=False)))