I want to create a matrix with 1 column and n rows, to use in a calculation for a PageRank algorithm. If I make it like this, and then use the matrix in a calculation, it gives this result:
A = [[0.0375, 0.4625, 0.0375, 0.32083333],
     [0.0375, 0.0375, 0.0375, 0.32083333],
     [0.8875, 0.0375, 0.0375, 0.32083333],
     [0.0375, 0.4625, 0.8875, 0.0375]]
my_dp = 1/4
r = np.matrix([my_dp, my_dp, my_dp, my_dp])
r = np.transpose(r)
print(r)
for i in range(1, 50):
    r = A*r
print("Final:", print(r))
[[0.25]
[0.25]
[0.25]
[0.25]]
[[0.3570795 ]
[0.19760835]
[0.30663962]
[0.13867253]]
Final: None
But if I create it automatically, using np.empty and .fill, I get this result:
r = np.empty(n)
r.fill(my_dp)
r = r[None].T
print(r)
for i in range(1, 50):
    r = A*r
print("Final:", print(r))
[[0.25]
[0.25]
[0.25]
[0.25]]
[[3.35329783e-71 3.35329783e-71 7.21422583e-04 9.73677480e-18]
[1.60559016e-25 3.35329783e-71 3.35329783e-71 9.73677480e-18]
[1.60559016e-25 7.21422583e-04 3.35329783e-71 3.35329783e-71]
[1.60559016e-25 3.35329783e-71 3.35329783e-71 3.35329783e-71]]
Final: None
A is an n×n adjacency matrix.
Why is this? As you can see, if I print the matrices, they look identical, and they should be.
I tried creating a new matrix and filling it with .fill, and I tried creating a matrix with np.full, but everything resulted in the second outcome. The only time it works properly is when I create the matrix manually, which is not practical, since I will eventually need hundreds of elements in the matrix.
It isn't clearly visible, but there IS a difference between the two examples.
The difference is that the first r is an np.matrix, while the second r is a plain np.ndarray. One of the few differences between the two is the multiplication operator: using * on a matrix does a matrix multiply, whereas using * on an array does an element-wise multiply, in which r gets broadcast to fit the shape of A.
If you want a matrix multiply, use the @ operator:
r = A @ r
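To see the difference concretely, here is a minimal sketch with made-up 2x2 data (not from the question):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
r = np.full((2, 1), 0.5)   # a column vector as a plain ndarray

print(A * r)   # element-wise: r is broadcast across A, result is 2x2
print(A @ r)   # true matrix product, result is a 2x1 column vector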
See also: What are the differences between numpy arrays and matrices? Which one should I use?
I want to write a function for centering an input data matrix by multiplying it with the centering matrix. The function shall subtract the row-wise mean from the input.
My code:
import numpy as np

def centering(data):
    n = data.shape[0]
    centeringMatrix = np.identity(n) - 1/n * (np.ones(n) @ np.ones(n).T)
    data = centeringMatrix @ data

data = np.array([[1, 2, 3], [3, 4, 5]])
centering(data)
But I get a wrong result; the matrix is not centered.
Thanks!
The centering matrix is
np.eye(n) - np.ones((n, n)) / n
Here is a list of issues in your original formulation:
np.ones(n).T is the same as np.ones(n). The transpose of a 1D array is a no-op in numpy. If you want to turn a row vector into a column vector, add the dimension explicitly:
np.ones((n, 1))
OR
np.ones(n)[:, None]
The normal definition is to subtract the column-wise mean, not the row-wise, so you will have to transpose and right-multiply the input to get row-wise operation:
n = data.shape[1]
...
data = (centeringMatrix @ data.T).T
Your function creates a new array for the output but does not currently return anything. You can either return the result, or perform the assignment in-place:
return (centeringMatrix @ data.T).T
OR
data[:] = (centeringMatrix @ data.T).T
OR
np.matmul(centeringMatrix, data.T, out=data.T)
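Putting the fixes above together, a minimal corrected version might look like this (sticking to the question's naming):

import numpy as np

def centering(data):
    n = data.shape[1]  # number of columns, since we subtract the row-wise mean
    centeringMatrix = np.eye(n) - np.ones((n, n)) / n
    # transpose, left-multiply, transpose back to operate along rows
    return (centeringMatrix @ data.T).T

data = np.array([[1, 2, 3], [3, 4, 5]], dtype=float)
print(centering(data))  # each row now has mean 0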
I have a matrix called inverseJ, which is a 2x2 matrix ([[0.07908312, 0.03071918], [-0.12699082, -0.0296126]]), and a one-dimensional vector deltaT of length two ([-31.44630082, -16.9922145]). In NumPy, multiplying these should yield a one-dimensional vector again, as in this example. However, when I multiply them using inverseJ.dot(deltaT), I get a two-dimensional array ([[-3.00885838, 4.49657509]]) whose only element is the vector I am actually looking for. Does anyone know why I am not simply getting a vector? Any help is greatly appreciated!
Whole script for reference
from __future__ import division
import sys
import io
import os
from math import *
import numpy as np
if __name__ == "__main__":
    # Fingertip position
    x = float(sys.argv[1])
    y = float(sys.argv[2])
    # Initial guesses
    q = np.array([0., 0.])
    q[0] = float(sys.argv[3])
    q[1] = float(sys.argv[4])
    error = 0.01
    while error > 0.001:
        # Configuration matrix
        T = np.array([17.3*cos(q[0] + (5/3)*q[1]) + 25.7*cos(q[0] + q[1]) + 41.4*cos(q[0]),
                      17.3*sin(q[0] + (5/3)*q[1]) + 25.7*sin(q[0] + q[1]) + 41.4*sin(q[0])])
        # Deviation
        deltaT = np.subtract(np.array([x, y]), T)
        error = deltaT[0]**2 + deltaT[1]**2
        # Jacobian
        J = np.matrix([[-25.7*sin(q[0]+q[1]) - 17.3*sin(q[0]+(5/3)*q[1]) - 41.4*sin(q[0]), -25.7*sin(q[0]+q[1]) - 28.8333*sin(q[0]+(5/3)*q[1])],
                       [25.7*cos(q[0]+q[1]) + 17.3*cos(q[0]+(5/3)*q[1]) + 41.4*cos(q[0]), 25.7*cos(q[0]+q[1]) + 28.8333*cos(q[0]+(5/3)*q[1])]])
        # Inverse of the Jacobian
        det = J.item((0, 0))*J.item((1, 1)) - J.item((0, 1))*J.item((1, 0))
        inverseJ = 1/det * np.matrix([[J.item((1, 1)), -J.item((0, 1))],
                                      [-J.item((1, 0)), J.item((0, 0))]])
        ### THE PROBLEMATIC MATRIX-VECTOR MULTIPLICATION IN QUESTION
        q = q + inverseJ.dot(deltaT)
When a matrix is involved in an operation, the output is another matrix. matrix objects are matrices in the strict linear algebra sense: they are always 2D, even if they have only one element.
By contrast, the example you mention uses arrays, not matrices. Arrays are more "loosely behaved". One of the differences is that "useless" dimensions are removed, yielding a 1D vector in this example.
This simply seems to be the way numpy.dot() functions. It does a simple array multiplication which, since one of the parameters is two-dimensional, returns a two-dimensional array. dot() is not a smart method; it just does what it's told, without sanity checks, from what I can gather in the documentation here. Note that this is not an error in your code, but you will have to extract the inner list yourself.
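For example, two hedged ways to get back a 1D vector, using the numbers from the question:

import numpy as np

inverseJ = np.matrix([[0.07908312, 0.03071918],
                      [-0.12699082, -0.0296126]])
deltaT = np.array([-31.44630082, -16.9922145])

# Option 1: flatten the 1x2 matrix result into a 1D array
q_step = np.asarray(inverseJ.dot(deltaT)).ravel()

# Option 2: avoid np.matrix entirely and use plain arrays with @
q_step2 = np.asarray(inverseJ) @ deltaT

print(q_step.shape, q_step2.shape)  # both (2,)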
I have a number of indices and values that make up a scipy.coo_matrix. The indices/values are generated from different subroutines and are concatenated together before being handed over to the matrix constructor:
import numpy
from scipy import sparse
n = 100000
I0 = range(n)
J0 = range(n)
V0 = numpy.random.rand(n)
I1 = range(n)
J1 = range(n)
V1 = numpy.random.rand(n)
# [...]
I = numpy.concatenate([I0, I1])
J = numpy.concatenate([J0, J1])
V = numpy.concatenate([V0, V1])
matrix = sparse.coo_matrix((V, (I, J)), shape=(n, n))
Now, the components of (I, J, V) can be so large that the concatenate operations become significant. (In the above example they take over 20% of the runtime on my machine.) I'm reading that it's not possible to concatenate without a copy.
Is there a way to hand over the indices and values without first copying the input data around?
If you look at the code for coo_matrix.__init__ you'll see that it's pretty simple. In fact, if the (V, (I, J)) inputs are right, it will simply assign those 3 arrays to its .data, .row, and .col attributes. You can even check that after creation by comparing those attributes with your variables.
If they aren't 1d arrays of the right dtype, it will massage them - make the arrays, etc. So, without getting into details, the processing that you do beforehand might save time in the coo call.
self.row = np.array(row, copy=copy, dtype=idx_dtype)
self.col = np.array(col, copy=copy, dtype=idx_dtype)
self.data = np.array(obj, copy=copy)
One way or another, each of those attributes will have to be a single array, not a loose list of arrays or a list of lists.
sparse.bmat makes a coo matrix from other sparse matrices. It collects their coo attributes, joins them in the allocate-an-empty-array-and-fill style, and calls coo_matrix. Look at its code.
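For instance, a tiny sketch (with made-up matrices) of sparse.bmat stitching two coo matrices into a block-diagonal result:

from scipy import sparse

A1 = sparse.coo_matrix(([1.0, 2.0], ([0, 1], [0, 1])), shape=(2, 2))
A2 = sparse.coo_matrix(([3.0], ([0], [0])), shape=(1, 1))

B = sparse.bmat([[A1, None], [None, A2]])   # returns a coo matrix by default
print(B.toarray())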
Almost all numpy operations that return a new array do so by allocating an empty array and filling it. Letting numpy do that in compiled code (with np.concatenate) should be a little faster, but details like the size and number of inputs will make a difference.
A non-canonical coo matrix is just the start. Many operations require a conversion to one of the other formats.
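As a rough sketch of the massage-it-beforehand idea: build the arrays with their final dtype up front, then check whether the constructor kept references or copied (whether a copy happens is scipy-version and dtype dependent):

import numpy as np
from scipy import sparse

n = 100000
I = np.concatenate([np.arange(n), np.arange(n)])
J = np.concatenate([np.arange(n), np.arange(n)])
V = np.concatenate([np.random.rand(n), np.random.rand(n)])

matrix = sparse.coo_matrix((V, (I, J)), shape=(n, n))
print(matrix.data is V, matrix.row is I)   # True means no copy was made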
See also: Efficiently construct FEM/FVM matrix, which is about sparse matrix construction where many duplicate points need to be summed, and about using the csr format for calculations.
You can try pre-allocating the arrays. It'll spare you the copy at least. I didn't see any speedup for the example, but you might see a change.
import numpy as np
from scipy import sparse

n = 100000
I = np.empty(2*n, dtype=int)   # index arrays should have an integer dtype
J = np.empty_like(I)
V = np.empty(2*n)              # values stay floating point

I[:n] = range(n)
J[:n] = range(n)
V[:n] = np.random.rand(n)
I[n:] = range(n)
J[n:] = range(n)
V[n:] = np.random.rand(n)

matrix = sparse.coo_matrix((V, (I, J)), shape=(n, n))
In the following code we calculate magnitudes of vectors between all pairs of given points. To speed up this operation in NumPy we can use broadcasting
import numpy as np
points = np.random.rand(10,3)
pair_vectors = points[:,np.newaxis,:] - points[np.newaxis,:,:]
pair_dists = np.linalg.norm(pair_vectors, axis=2)
or outer product iteration
it = np.nditer([points, points, None], flags=['external_loop'], op_axes=[[0, -1, 1], [-1, 0, 1], None])
for a, b, c in it:
    c[...] = b - a
pair_vectors = it.operands[2]
pair_dists = np.linalg.norm(pair_vectors, axis=2)
My question is how one could use broadcasting or outer product iteration to create an array of shape 10x10x6, where the last axis contains the coordinates of both points in a pair (extension). And, in a related way, is it possible to calculate pair distances using broadcasting or outer product iteration directly, i.e. to produce a 10x10 matrix without first calculating the difference vectors (reduction)?
To clarify, the following code creates the desired matrices using slow looping.
pair_coords = np.zeros((10, 10, 6))
pair_dists = np.zeros((10, 10))
for i in range(10):
    for j in range(10):
        pair_coords[i, j, 0:3] = points[i, :]
        pair_coords[i, j, 3:6] = points[j, :]
        pair_dists[i, j] = np.linalg.norm(points[i, :] - points[j, :])
This is a failed attempt to calculate the distances (or to apply any other function that takes the 6 coordinates of both points in a pair and produces a scalar) using outer product iteration.
res = np.zeros((10,10))
it = np.nditer([points,points,res], flags=['reduce_ok','external_loop'], op_axes=[[0,-1,1],[-1,0,1],None])
for a,b,c in it: c[...] = np.linalg.norm(b-a)
pair_dists = it.operands[2]
Here's an approach to produce those arrays in a vectorized way:
import numpy as np
from itertools import product
from scipy.spatial.distance import pdist, squareform

N = points.shape[0]

# Get indices for selecting rows off the points array and stacking them
idx = np.array(list(product(range(N), repeat=2)))
p_coords = np.column_stack((points[idx[:, 0]], points[idx[:, 1]])).reshape(N, N, 6)
# Get the distances for upper triangular elements.
# Then create a symmetric one for the final dists array.
p_dists = squareform(pdist(points))
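As a quick sanity check, the pdist result can be compared against a direct broadcasting computation (random data assumed):

import numpy as np
from scipy.spatial.distance import pdist, squareform

points = np.random.rand(10, 3)
p_dists = squareform(pdist(points))
brute = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
print(np.allclose(p_dists, brute))  # True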
A few other vectorized approaches are discussed in this post, so have a look there too!
I am trying to solve an LP problem represented using sparse matrices in Gurobi / python.
max c′ x, subject to A x = b, L ≤ x ≤ U
where A is a SciPy linked-list sparse matrix of size ~1000x1000. Using the code
model = gurobipy.Model()
rows, cols = len(b), len(c)
for j in range(cols):
    model.addVar(lb=L[j], ub=U[j], obj=c[j])
model.update()
vars = model.getVars()

S = scipy.sparse.coo_matrix(A)
expr, used = [], []
for i in range(rows):
    expr.append(gurobipy.LinExpr())
    used.append(False)
for i, j, s in zip(S.row, S.col, S.data):
    expr[i] += s*vars[j]
    used[i] = True
for i in range(rows):
    if used[i]:
        model.addConstr(lhs=expr[i], sense=gurobipy.GRB.EQUAL, rhs=b[i])
model.update()
model.ModelSense = -1
model.optimize()
the problem is built and solved in ~1 s, which is ~10-100 times slower than the same task in Gurobi / MATLAB. Do you have any suggestions for improving the efficiency of the problem definition, or for avoiding the translation to sparse coordinate format?
MATLAB will always be more efficient than scipy when dealing with sparse matrices. However, there are a few things you might try to speed things up.
Gurobi's Python interface takes individual sparse constraints. This means that you want to access your matrix in compressed sparse row format (rather than coordinate format).
Try doing:
S = S.tocsr()
or directly constructing your matrix in compressed sparse row format.
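For instance (assuming A is the original SciPy sparse matrix):

S = scipy.sparse.csr_matrix(A)   # construct directly in CSR format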
This page indicates that you can access the raw data, indices, and row pointers from a scipy sparse matrix in CSR format. So you should then be able to iterate over these as follows:
model = gurobipy.Model()
rows, cols = len(b), len(c)
x = []
for j in range(cols):
    x.append(model.addVar(lb=L[j], ub=U[j], obj=c[j]))
model.update()

# iterate over the rows of S, adding each row into the model
for i in range(rows):
    start = S.indptr[i]
    end = S.indptr[i+1]
    variables = [x[j] for j in S.indices[start:end]]
    coeff = S.data[start:end]
    expr = gurobipy.LinExpr(coeff, variables)
    model.addConstr(lhs=expr, sense=gurobipy.GRB.EQUAL, rhs=b[i])
model.update()
model.ModelSense = -1
model.optimize()
Note that I added all the terms at once into the expression using the LinExpr() constructor.