I'm looking for a function for linear interpolation in TensorFlow, similar to np.interp(...).
I'm aware that TensorFlow can wrap arbitrary NumPy functions and apply them to tensors, but
np.interp only operates on a single array and, as far as I checked, can't be broadcast.
So is there an efficient way to do this in TensorFlow?
Thank you
I know this is a late answer, but Google brought me here, so my answer might be useful for others.
You can use interp_regular_1d_grid from TensorFlow Probability.
It works in a similar fashion to numpy.interp(), but consult the documentation for the exact functionality.
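A minimal sketch of how it might be used (the grid range and values here are just illustrative; argument names follow the tfp.math documentation):

    import tensorflow as tf
    import tensorflow_probability as tfp

    # Reference values sampled on a regular grid over [0, 10].
    x_ref = tf.linspace(0.0, 10.0, 50)
    y_ref = tf.sin(x_ref)

    # Query points; this works on whole tensors, unlike a wrapped np.interp call.
    x = tf.constant([0.5, 2.3, 7.7])

    y = tfp.math.interp_regular_1d_grid(
        x=x, x_ref_min=0.0, x_ref_max=10.0, y_ref=y_ref)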
Does anyone know how to feed an initial solution, or a matrix of initial solutions, into the differential evolution function from the SciPy library?
The documentation doesn't make it clear whether this is possible, but I know that supplying an initial solution is not unusual, and SciPy is so widely used that I would expect it to have that kind of functionality.
OK, after review and testing I believe I now understand it.
scipy.optimize.differential_evolution(...) accepts a set of parameters, one of which is the init parameter, which lets you pass in an array of candidate solutions. Personally, I was looking at a set of coordinates, so I enumerated them into an array, generated 99 other variations of it (100 different solutions in total), and fed this matrix into the init parameter. I believe it needs to contain more than 4 solutions or you are going to get a tuple error.
I probably didn't need to ask/answer the question, though it may help others who were equally confused.
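A rough sketch of how such an init matrix might be passed (the objective, bounds, and population size here are illustrative, not from the original post):

    import numpy as np
    from scipy.optimize import differential_evolution

    def objective(x):
        # Simple sphere function as a stand-in objective.
        return np.sum(x ** 2)

    bounds = [(-5, 5), (-5, 5)]      # 2 parameters
    base = np.array([1.0, -2.0])     # an initial guess

    # Build a population of candidate solutions around the initial guess.
    # The init array has shape (S, N), with S greater than 4.
    init_pop = base + 0.1 * np.random.randn(10, 2)

    result = differential_evolution(objective, bounds, init=init_pop)
    print(result.x, result.fun)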
I have looked it up and there are some posts, for example this one, that suggest using torch.gesv, but I can't seem to find it in the PyTorch documentation.
PyTorch provides the function torch.solve, which behaves like numpy.linalg.solve. It outputs the solution of the linear system and the LU factorization that was used to compute it.
More information and example code here.
PyTorch now has a torch.linalg namespace very similar to NumPy's, so you can use torch.linalg.solve and expect the same behaviour as NumPy.
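A minimal sketch with the newer torch.linalg.solve (the tensor values are just illustrative):

    import torch

    # Solve A x = b for x.
    A = torch.tensor([[3.0, 1.0],
                      [1.0, 2.0]])
    b = torch.tensor([9.0, 8.0])

    x = torch.linalg.solve(A, b)
    print(x)  # should be close to [2., 3.]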
Does anyone know of a Python library that has online PCA estimation (something similar to what is described in this paper on online PCA)?
Does it make sense to use the sklearn.decomposition.IncrementalPCA method with batch_size=1?
You can check this out:
https://github.com/flatironinstitute/online_psp
It is not exactly PCA, since the components might not be orthogonal (you can easily orthogonalize them if needed; there is also an object method to do so).
Cheers
DISCLAIMER: I am one of the developers of this project.
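As for the sklearn option mentioned in the question, IncrementalPCA can also be fed data in small chunks via partial_fit instead of calling fit with batch_size=1; a minimal sketch (the data and chunk sizes are illustrative, and each chunk must contain at least n_components samples):

    import numpy as np
    from sklearn.decomposition import IncrementalPCA

    ipca = IncrementalPCA(n_components=2)

    # Stream the data in small chunks and update the estimate incrementally.
    for chunk in np.array_split(np.random.randn(1000, 5), 100):
        ipca.partial_fit(chunk)

    print(ipca.components_)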
I'm using TensorFlow for machine learning on Ogg and MIDI data, but a lot of the preprocessing is done in NumPy (fed in through feed_dict), and I'd like to migrate as much of it as possible into the computational graph in order to simplify production deployment (Google Cloud ML, or maybe self-hosted TensorFlow Serving). How would I go about this? Are there ways of converting NumPy code to TensorFlow operations automatically?
Most NumPy functions have a TensorFlow equivalent documented in array_ops.
For the more mathematical operations, have a look at math_ops.
Finally, if you have more specific queries or are unable to convert some NumPy code to TensorFlow, you should always try searching the StackOverflow Q&A or asking a question here (have a look at this for a good example of such a question).
Unrelated: if you face difficulties carrying out some matrix manipulation, try looking at the existing NumPy Q&A on StackOverflow. The answers can usually be translated to TensorFlow using the APIs above.
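To illustrate, a few typical one-to-one translations (the specific ops chosen here are just examples of the array_ops/math_ops equivalents mentioned above):

    import numpy as np
    import tensorflow as tf

    # NumPy preprocessing step...
    a_np = np.arange(6).reshape(2, 3)
    s_np = np.sum(a_np, axis=1)

    # ...and the same step expressed as graph operations.
    a_tf = tf.reshape(tf.range(6), [2, 3])
    s_tf = tf.reduce_sum(a_tf, axis=1)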
Are there functions in Python that will fill in missing values in a matrix for you by using collaborative filtering (e.g. an alternating-minimization algorithm)? Or does one need to implement such functions from scratch?
[EDIT]: Although this isn't a matrix-completion example, just to illustrate a similar situation: I know there is an svd() function in MATLAB that takes a matrix as input and automatically outputs its singular value decomposition (SVD). I'm looking for something like that in Python, hopefully a built-in function, but even a good library would be great.
Check out NumPy's linalg module for a Python SVD implementation.
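For example (plain NumPy; this addresses the SVD part of the question, not the matrix completion itself):

    import numpy as np

    A = np.random.randn(4, 3)

    # Thin SVD: A = U @ diag(s) @ Vt
    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    # Reconstruct A to check the factorization.
    A_rec = U @ np.diag(s) @ Vt
    print(np.allclose(A, A_rec))  # True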
There is a library called fancyimpute. Also, sklearn has NMF (sklearn.decomposition.NMF).
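A rough sketch of how fancyimpute might be used for matrix completion (SoftImpute is one of the imputers the package provides; the exact API should be checked against the fancyimpute README, and the matrix below is only illustrative):

    import numpy as np
    from fancyimpute import SoftImpute

    # A small matrix with missing entries marked as NaN.
    X = np.array([[1.0, 2.0, np.nan],
                  [4.0, np.nan, 6.0],
                  [7.0, 8.0, 9.0]])

    # SoftImpute fills in the NaNs via iterative soft-thresholded SVD.
    X_filled = SoftImpute().fit_transform(X)
    print(X_filled)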