Problems storing a matrix in Python

So I have written a program in Python that takes a matrix from the user's input and modifies it. From this modification I am able to pull out the inverse of the original matrix. To check whether the inverse is correct, I would like to do something like this:
matrix = input_from_the_user
matrix_untouched = input_from_the_user
...  # act on matrix and turn it into its inverse, so that matrix is no longer
     # the original: it is now equal to its inverse
test = product_of_matrices(matrix, matrix_untouched)
print(test)
I would expect the output to be the identity matrix. It is not: it looks like the variable matrix_untouched turns into matrix even though I never touch it after defining it. I have tried printing matrix_untouched at different steps of the program. From the first modification I make to matrix, matrix_untouched is modified in the same way. How can I keep its value unaltered?
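The usual cause is that both names refer to the same list object, so modifying one "modifies" both. A minimal sketch of one fix, using copy.deepcopy on a hypothetical 2x2 matrix (the variable names here just mirror the question):

```python
import copy

matrix = [[1.0, 2.0], [3.0, 4.0]]
# deepcopy creates an independent copy, not a second name for the same list
matrix_untouched = copy.deepcopy(matrix)

matrix[0][0] = 99.0           # modify the working copy in place
print(matrix_untouched[0][0])  # still 1.0: the untouched copy is unaffected
```

A plain assignment (matrix_untouched = matrix) would only bind a second name to the same object, which is why the "untouched" matrix appeared to change.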

Related

I set 3 arrays to the same thing, changing a single entry in one of them also changes the other two arrays. How can I make the three arrays separate?

I am making a puzzle game in a command terminal. I have three arrays for the level, originlevel, which is the unaltered level that the game will return to if you restart the level. Emptylevel is the level without the player. Level is just the level. I need all 3, because I will be changing the space around the player.
def Player(matrix, width):
    originlevel = matrix
    emptylevel = matrix
    emptylevel[PlayerPositionFind(matrix)] = "#"
    level = matrix
The expected result is that it would set one entry to "#" in the emptylevel array, but it actually sets all 3 arrays to the same thing! My theory is that the arrays are somehow linked because they are originally set to the same thing, but this ruins my code entirely! How can I make the arrays separate, so changing one would not change the others?
I should note that matrix is an array, not an actual matrix.
I tried a function which would take the array matrix, and then just return it, thinking that this layer would unlink the arrays. It did not. (I called the function IHATEPYTHON).
I've also read that setting them to the same array is supposed to do this, but I didn't actually find an answer for how to make them NOT do that. Do I make a function which is just something like
newarray = []
for i in range(0, len(array)):
    newarray.append(array[i])
return newarray
I feel like that would solve the issue but that's so stupid, can I not do it in another way?
This issue is caused by the way variables work in Python: assignment never copies data, it just binds another name to the same object. If you want more background on why this is happening, you should look up 'pass by value versus pass by reference'.
In order for each of these arrays to be independent, you need to create a copy each time you assign it. The easiest way to do that is to use an array slice, which gives you a new (shallow) copy of the array each time.
def Player(matrix, width):
    originlevel = matrix[:]
    emptylevel = matrix[:]
    emptylevel[PlayerPositionFind(matrix)] = "#"
    level = matrix[:]
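A small self-contained demonstration of the slice-copy behaviour (the sample level here is made up):

```python
level = ["#", ".", "@", "."]   # hypothetical level row, "@" marks the player
emptylevel = level[:]          # slice makes a new list with the same elements

emptylevel[2] = "#"            # overwrite the player cell in the copy only
print(level)        # ['#', '.', '@', '.'] -- the original is unchanged
print(emptylevel)   # ['#', '.', '#', '.']
```

One caveat: a slice is a shallow copy. If the level were a list of lists (rows), the inner rows would still be shared, and copy.deepcopy would be needed instead.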

Store values of integration from array, then use that new array

I'm new to Python, so I'm really struggling with this. I want to define a function, apply a certain calculation to it for an array of different values, store those newly calculated values in a new array, and then use those new values in another calculation. My attempt is this:
import numpy as np
from scipy.integrate import quad
radii = np.arange(10)  # array of radius values

def rho(r):
    return (r**2)

for i in range(len(radii)):
    def M[r]:  # new array by integrating over values from 0 to radii
        scipy.integrate.quad(rho(r), 0, radii[i])

def P(r):
    return (5*M[r])  # make new array using values from M[r] calculated above
Alright, this script is a bit of a mess, so let's unpack it. I had never used scipy.integrate.quad, but after looking it up and testing it: quad expects a callable plus the two integration limits, so you pass the function rho itself, not the evaluated value rho(r). There are more efficient ways to do this, but in the interest of preserving the overall structure of your script, I'll just fix the bugs and errors. So, as I understand it, you want to write this:
import numpy as np
from scipy.integrate import quad
# Here's where we start to make changes. First, we define the function, taking two parameters: the integrand r and the array radii.
# We don't need to specify data types, because Python is dynamically typed.
# It is good practice to define your functions before the start of the program.
def M(r, radii):
    output = np.zeros(len(radii))
    # The loop goes _inside_ the function; otherwise we're just redefining M(r) over and over again to a slightly different thing!
    for i in range(len(radii)):
        # Also note: since we imported quad from scipy.integrate, we only need to reference quad;
        # referencing scipy.integrate.quad would raise a NameError, because scipy itself was never imported.
        # quad returns a (value, error_estimate) pair, so we keep only the value.
        output[i] = quad(r, 0, radii[i])[0]
        # We can also multiply by 5 in this function, so we really only need one. In fact, we don't _need_ a function at all,
        # unless you're planning to reference it multiple times in other parts of a larger program.
        output[i] *= 5
    return output
# You have a choice between doing the maths _inside_ the main function or in a lambda like this, which is a bit more pythonic than a one-line normal function. Use like so:
rho = lambda r: r**2
# Beginning of program (this is my example of calling the function with an array called radii)
radii = np.arange(10)
new_array = M(rho, radii)
If this solution is correct, please mark it as accepted.
I hope this helps!
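A quick way to sanity-check the corrected function: with rho(r) = r**2, the integral from 0 to R has the closed form R**3 / 3, so each entry of the result should equal 5 * R**3 / 3. A sketch, reusing the names from the answer above:

```python
import numpy as np
from scipy.integrate import quad

rho = lambda r: r**2
radii = np.arange(10)

# Same computation as M(rho, radii), written as a one-line comprehension:
# quad returns (value, error_estimate), so index [0] keeps the value.
output = np.array([5 * quad(rho, 0, R)[0] for R in radii])

# Compare against the analytic result 5 * R**3 / 3
print(np.allclose(output, 5 * radii**3 / 3))  # True
```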

Straightforward way to print a numpy array as python hard-code

The premise is this: I'm computing some coefficients from a polynomial fit done with numpy.polynomial.polynomial.
I'd like to have them printed as Python code to be reused in another script, e.g. something like:
my_coeff = np.array([0.1,0.2,...,-0.3])
Is there a built-in and straightforward function for doing this?

How to change the value of a torch tensor with requires_grad=True so that backpropagation can start again?

a=torch.tensor([1,2],dtype=dtype,requires_grad=True)
b=a[0]*a[1]
b.backward()
print(a.grad)
but when I use a += 1 to give a another value and do another round of backpropagation, it says that "a leaf Variable that requires grad is being used in an in-place operation".
However, a = a + 1 seems to work. What's the difference between a = a + 1 and a += 1?
(solved)
My confusion about torch actually lies in the following small example:
a=torch.tensor([1,2],dtype=dtype,requires_grad=True)
b=a[0]*a[1]
b.backward()
print(a.grad)
print(a.requires_grad)
c=2*a[0]*a[1]
c.backward()
print(a.grad)
When I do the first round of backpropagation, I get a.grad=[2,1].
Then I want to make a fresh start and calculate the differential of c with respect to a; however, the gradient seems to accumulate. How can I eliminate this effect?
The += operator performs an in-place operation, while a = a + 1 is not in-place (that is, a refers to a different tensor after this operation).
But in your example you don't seem to be using either of these, so it is hard to say what you want to achieve.
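A sketch addressing both sub-questions, assuming a float tensor (gradients require a floating-point dtype): PyTorch accumulates gradients across backward() calls, so the usual reset is a.grad.zero_(), and in-place updates of a leaf tensor are allowed inside a torch.no_grad() block:

```python
import torch

a = torch.tensor([1.0, 2.0], requires_grad=True)
b = a[0] * a[1]
b.backward()
print(a.grad)      # tensor([2., 1.])

a.grad.zero_()     # clear the accumulated gradient before the next backward pass
c = 2 * a[0] * a[1]
c.backward()
print(a.grad)      # tensor([4., 2.]) -- no leftover [2., 1.] added in

with torch.no_grad():
    a += 1         # in-place update of a leaf tensor is fine under no_grad()
```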

How to cache the function that is returned by scipy interpolation

Trying to speed up a potential flow aerodynamic solver. Instead of calculating velocity at an arbitrary point using a relatively expensive formula I tried to precalculate a velocity field so that I could interpolate the values and (hopefully) speed up the code. Result was a slow-down due (I think) to the scipy.interpolate.RegularGridInterpolator method running on every call. How can I cache the function that is the result of this call? Everything I tried gets me hashing errors.
I have a method that implements the interpolator and a second 'factory' method to reduce the argument list so that it can be used in an ODE solver.
x_panels and y_panels are 1D arrays/tuples, vels is a 2D array/tuple, x and y are floats.
def _vol_vel_factory(x_panels, y_panels, vels):
    # Function factory method
    def _vol_vel(x, y, t=0):
        return _volume_velocity(x, y, x_panels, y_panels, vels)
    return _vol_vel

def _volume_velocity(x, y, x_panels, y_panels, vels):
    velfunc = sp_int.RegularGridInterpolator(
        (x_panels, y_panels), vels
    )
    return velfunc(np.array([x, y])).reshape(2)
By passing tuples instead of arrays as inputs I was able to get a bit further but converting the method output to a tuple did not make a difference; I still got the hashing error.
In any case, caching the result of the _volume_velocity method is not really what I want to do, I really want to somehow cache the result of _vol_vel_factory, whose result is a function. I am not sure if this is even a valid concept.
scipy.interpolate.RegularGridInterpolator takes and returns numpy arrays, and ndarrays are unhashable (they don't implement __hash__), which is why every caching attempt raises hashing errors.
You can store another, hashable representation of the numpy array (for example a tuple of tuples or a bytes string), cache on that, and convert it back to an ndarray inside the function. For details on how to do that, look at the following.
How to hash a large object (dataset) in Python?
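Alternatively, the hashing problem can be sidestepped entirely: build the RegularGridInterpolator once inside the factory, so the returned closure captures it and reuses it on every call, with no repeated construction and nothing to cache. A sketch, reusing the names from the question (the sample velocity field is made up, with u = x and v = y at each grid node):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def _vol_vel_factory(x_panels, y_panels, vels):
    # Build the interpolator once, at factory time; every call to the
    # returned function reuses it via the closure -- no hashing needed.
    velfunc = RegularGridInterpolator((x_panels, y_panels), vels)
    def _vol_vel(x, y, t=0):
        return velfunc(np.array([x, y])).reshape(2)
    return _vol_vel

# Hypothetical grid and (u, v) samples, shape (nx, ny, 2)
x_panels = np.linspace(0.0, 1.0, 5)
y_panels = np.linspace(0.0, 1.0, 5)
vels = np.stack(np.meshgrid(x_panels, y_panels, indexing="ij"), axis=-1)

vol_vel = _vol_vel_factory(x_panels, y_panels, vels)
print(vol_vel(0.25, 0.75))  # [0.25 0.75] for this linear field
```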
