This is my first question, so I will do my best to be as clear as possible.
If I can provide UMAP with a distance function that also outputs a gradient or some other relevant information, can I apply UMAP to non-traditional looking data? (I.e., a data set with points of inconsistent dimension, data points that are non-uniformly sized matrices, etc.) The closest I have gotten to finding something that looks vaguely close to my question is in the documentation here (https://umap-learn.readthedocs.io/en/latest/embedding_space.html), but this seems to be sort of the opposite process, and as far as I can tell still supposes you are starting with tuple-based data of uniform dimension.
I'm aware that one way around this is just to calculate a full pairwise distance matrix ahead of time and give that to UMAP, but from what I understand of the way UMAP is coded, it only performs a subset of all possible distance calculations, and is thus much faster for the same amount of data than if I were to take the full pre-calculation route.
I am working in python3, but if there is an implementation of UMAP dimension reduction in some other environment that permits this, I would be willing to make a detour in my workflow to obtain this greater flexibility with incoming data types.
Thank you.
Algorithmically this is quite possible, but in practice most implementations do not support anything other than fixed-dimension vectors. If computing all pairwise distances is not tractable, another option is to find a way to featurize or vectorize the data so that distance computations become easy. This is, of course, not always possible. The final option is to implement things yourself, but this requires handling the nearest neighbour search, which is likely a non-trivial coding project in and of itself.
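If the data set is small enough that the all-pairs route is acceptable, a minimal sketch of it looks like this (my_distance is a hypothetical stand-in for whatever distance you define over your heterogeneous objects):

import numpy as np
import umap

def my_distance(a, b):
    # hypothetical distance between two arbitrary objects; here: compare lengths
    return abs(len(a) - len(b))

# Objects of inconsistent dimension, as in the question.
objects = [np.random.rand(np.random.randint(3, 12)) for _ in range(300)]
n = len(objects)

# Full pairwise distance matrix: O(n^2) time and memory.
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = my_distance(objects[i], objects[j])

embedding = umap.UMAP(metric="precomputed").fit_transform(D)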
Related
I am looking to linearly combine features to be used by UMAP. Some of them are GCS coordinates and require a haversine treatment while others can be compared using their euclidean distance.
distance(v1, v2) = alpha * euclidean(f1_eucl, f2_eucl) + beta * haversine(f1_hav, f2_hav)
So far, I have tried:
Creating a custom distance matrix. The square matrix takes 70GB using float64, 35GB with float32. Using fastdist, I get a computation time of 7min, which is quite slow compared to UMAP's 2-3min -- all included. This approach falls apart as soon as I try adding the euclidean and haversine matrices together (140GB, which is massive compared to UMAP's 5GB). I also tried chunking the computation using dask. The result is memory-friendly, but my session kept crashing so I couldn't even tell how long it would have taken.
Using a custom callable to be ingested by UMAP. Thanks to jit compilation with numba, I get the results quite quickly and have no memory problem. The major problem here is that UMAP appears to ignore my callable once the dataset reaches 4096 points in size. If I set the callable to return 0, UMAP still shows the patterns of the original dataset in the graphs. If somebody could explain to me what causes this, that'd be great.
In summary, how do you go about, practically speaking, implementing a custom metric in UMAP? And bonus question, do you think this is a sound approach? Thanks.
The custom metric in numba should work for more than 4096 points. That's a relevant number because it is the stage at which UMAP cuts over to approximate nearest neighbour search (which it passes off to pynndescent). Now pynndescent does support custom metrics compiled with numba, so if something is going astray it is because the metric is not getting passed to pynndescent correctly. Still, I would have expected an error, not a silent fallback to euclidean distance.
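For what it's worth, a sketch of the kind of numba-compiled combined metric that UMAP (and pynndescent) should accept looks like the following; the column layout, K, ALPHA, and BETA are assumptions made purely for illustration:

import numpy as np
import numba
import umap

K = 8                    # number of Euclidean feature columns (assumed layout)
ALPHA, BETA = 1.0, 0.5   # weights from the question; values here are arbitrary

@numba.njit(fastmath=True)
def combined_metric(x, y):
    # Euclidean part over the first K columns
    d_eucl = 0.0
    for i in range(K):
        diff = x[i] - y[i]
        d_eucl += diff * diff
    d_eucl = np.sqrt(d_eucl)
    # Haversine part over the last two columns (lat, lon in radians)
    lat1, lon1, lat2, lon2 = x[K], x[K + 1], y[K], y[K + 1]
    a = np.sin((lat2 - lat1) / 2.0) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2.0) ** 2
    d_hav = 2.0 * np.arcsin(np.sqrt(a))
    return ALPHA * d_eucl + BETA * d_hav

X = np.random.rand(5000, K + 2)   # more than 4096 rows, so pynndescent is exercised
emb = umap.UMAP(metric=combined_metric).fit_transform(X)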
I have a data set that looks like this:
I used the scipy.signal.find_peaks function to determine the peaks of the data set, and it works well enough, but since this function finds the local maxima of the data, it does not ignore the noise in the data, which causes overshoot. Therefore what I'm determining isn't actually the location of the most likely maxima, but rather the location of an 'outlier'.
Is there another, more exact way to approximate the local maxima?
I'm not sure that you can consider those points to be outliers so easily, as they look to be close to where I would expect them to be. But if you don't think they are a valid approximation, here are three other approaches you can use.
First option
I would construct a physical model of these peaks (a mathematical formula) and do a fitting analysis around the peaks. You can, for instance, suppose that the shape of the plot is the sum of some background model (maybe constant, maybe more complicated) plus some Gaussian (or Lorentzian) peaks.
This is what we usually do in physics. Of course it will be more accurate if you bring in knowledge of the underlying processes, which I don't have.
With a good model, this approach is definitely better than taking the maximum values, since even if those points are not outliers, they still carry errors that you want to reduce.
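A minimal sketch of such a fit, assuming a single Gaussian peak on a constant background (the data here is synthetic, purely for illustration):

import numpy as np
from scipy.optimize import curve_fit

def model(x, background, amplitude, center, width):
    return background + amplitude * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# Synthetic stand-in for the data around one peak.
x = np.linspace(0, 10, 500)
y = 1.0 + 5.0 * np.exp(-((x - 4.2) ** 2) / (2 * 0.3 ** 2)) + 0.2 * np.random.randn(x.size)

p0 = [y.min(), y.max() - y.min(), x[np.argmax(y)], 0.5]   # rough initial guess
popt, pcov = curve_fit(model, x, y, p0=p0)
peak_position = popt[2]
peak_position_err = np.sqrt(np.diag(pcov))[2]   # fitted uncertainty on the location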
Second option
If you want an easier way and just a rough estimate, and you have already found the location of the three peaks programmatically, you can average a few points around each maximum. For this kind of thing, np.where or np.argwhere tend to be useful.
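A rough sketch of that averaging, using the indices returned by find_peaks and an arbitrary window size:

import numpy as np
from scipy.signal import find_peaks

x = np.linspace(0, 10, 500)                                  # synthetic stand-in
y = np.exp(-((x - 4.2) ** 2) / 0.2) + 0.05 * np.random.randn(x.size)

peaks, _ = find_peaks(y, height=0.5)
half_window = 5                                              # arbitrary window size
for p in peaks:
    lo, hi = max(0, p - half_window), min(y.size, p + half_window + 1)
    refined = np.average(x[lo:hi], weights=y[lo:hi])         # signal-weighted centroid
    print(f"raw peak at x={x[p]:.3f}, refined estimate x={refined:.3f}")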
Third option
The easiest option is reading the value off by hand. That may sound unacceptable for academic purposes, and it probably is. Even worse, it is not a programmatic approach, and this is SO. But in the end, it depends on why and for what you need those values, and on the confidence interval you need for your measurement.
I am currently simulating a structural optimization problem in which the gradients of the responses are extracted from Nastran and provided to the SLSQP optimizer in OpenMDAO. The number of constraints changes between iterations because the design variables include both shape and sizing variables, so a new mesh is generated every time. A constraint component is defined in OpenMDAO, and it reads the response data exported from Nastran. The issue is in defining the shape of its output variable "f_const". The shape of this output needs to adjust to the shape of the available response array, since outputs['f_const'] = np.loadtxt("nastran_const.dat"). Here, nastran_const.dat is the file containing the response data extracted from Nastran. The shape of this data is not known at the beginning of a design iteration and keeps changing during subsequent iterations. So, if some shape of f_const is defined at the start, it does not change later and gives an error because of the mismatch in shapes.
In the doc of openmdao, I found https://openmdao.org/newdocs/versions/latest/features/experimental/dyn_shapes.html?highlight=varying%20shape
It explains that the shape of an input/output variable can be made dynamic by linking it to a connected or local variable whose shape is already known. This is different from my case because the shape of the stress array is not known before the computation starts. The shape of f_const has to be defined in setup, and I cannot figure out how to change it later. Please guide me in this regard.
You can't have arrays that change shape like that. The "dynamic" shape you found in the docs refers to setup-time variation. Once setup has finished, sizes are fixed. So we need a way for your arrays to be of a fixed size.
If you really must re-mesh every time (which I don't recommend) then there are two possible solutions I can think of:
Over-allocation
Constraint Aggregation
Option 1 -- Over Allocation
This topic is covered in detail in this related question, but briefly what you could do is allocate an array big enough that you always have enough space. Then you can use one entry of the array to record how many active entries are in it. Any non-active entries would be set to a default value that won't violate your constraints.
You'll have to be very careful with the way you define the derivatives. For active array entries, the derivatives come from NASTRAN. For inactive ones, you could set them to 0, but note that you are creating a discrete discontinuity when an entry switches to active. This may very well give the optimizer fits when it's trying to converge and the derivatives of active constraints keep flipping between 0 and nonzero values.
I really don't recommend this approach, but if you absolutely must have "variable size" arrays then over-allocation is your best bet.
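A sketch of what over-allocation could look like as an OpenMDAO component, assuming a fixed upper bound MAX_CON on the number of responses Nastran can return (the placeholder input and the fd partials are illustrative only; in practice you'd wire in the Nastran sensitivities as described above):

import numpy as np
import openmdao.api as om

MAX_CON = 500        # assumed upper bound on how many responses Nastran can return
FILL = -1.0e3        # default value that can never violate a g <= 0 constraint

class NastranConstraints(om.ExplicitComponent):
    def setup(self):
        self.add_input('x_design', shape=10)                        # placeholder design vector
        self.add_output('f_const', val=FILL * np.ones(MAX_CON))     # fixed, over-allocated size
        self.declare_partials('f_const', 'x_design', method='fd')   # placeholder; use Nastran sensitivities in practice

    def compute(self, inputs, outputs):
        # run_nastran(inputs['x_design'])  # external call, omitted here
        g = np.atleast_1d(np.loadtxt('nastran_const.dat'))          # variable-length responses
        outputs['f_const'][:] = FILL                                # reset inactive slots to the safe default
        outputs['f_const'][:g.size] = g                             # fill the active slots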
Option 2 -- Constraint Aggregation
The key idea here is to use an aggregation function to collapse all the stress constraints into a single value. For structural problems this is most often done with a KS function. OpenMDAO has a KS component (KSComp) in its standard library that you can use.
The key is that this component requires a constant-sized input, so again, over-allocation would be used here. In this case, you shouldn't need to track the number of active values in the array, because you are passing the whole thing to the aggregation function. KS functions are like smooth max functions, so if you have a bunch of 0's they shouldn't affect the result.
Your problem still has a discontinuous operation going on with the re-meshing and the noisy constraint array. The KS function should smooth some of that, but not all of it. I still think you'll have trouble converging, but it should work better than raw over-allocation.
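For reference, the aggregation itself is just a smooth maximum; a small numpy sketch of the KS function (OpenMDAO's KS component implements this, with derivatives, for you):

import numpy as np

def ks_aggregate(g, rho=50.0):
    # Smooth, slightly conservative approximation of max(g) over a constraint array.
    g_max = np.max(g)
    return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

g = np.array([-0.8, -0.2, -0.05, -1.0])   # e.g. stress margins, feasible when g <= 0
print(ks_aggregate(g))                    # close to, and slightly above, max(g)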
Option 3 -- The "right" answer
Find a way to fix your grid so it never changes. I know this is hard if you're using VSP to generate your discretizations, and letting NASTRAN re-grid things from there ... but it's not impossible at all.
OpenVSP has a set of geometry-query functions that can be used to back-fit fixed meshes into the parametric space of the geometry. If you do that, then you can regenerate the geometry in VSP and use the parametric space to move your fixed grids with it. This is how the pyGeo tool from the University of Michigan MDO Lab does it, and it works very well.
It's a modest amount of work (though a lot less if you use pyGeo directly), but I think it's well worth it. You'll get faster components and a much more stable optimization.
t-SNE can supposedly scale to millions of observations (see here), but I'm curious how that can be true, at least in the Sklearn implementation.
I'm trying it on a dataset with ~100k items, each with ~190 features. Now, I'm aware that I can do a first pass of dimensionality reduction with, e.g. PCA, but the problem seems more fundamental.
t-SNE computes and stores the full, dense similarity matrix calculated for the input observations (I've confirmed this by looking at the source). In my case, this is a 10 billion element dense matrix, which by itself requires 80+ GB of memory. Extrapolate this to just one million observations, and you're looking at 8 terabytes of RAM just to store the distance matrix (let alone computation time...).
So, how can we possibly scale t-SNE to millions of datapoints in the sklearn implementation? Am I missing something? The sklearn docs at least imply that it's possible:
By default the gradient calculation algorithm uses Barnes-Hut approximation running in O(NlogN) time. method=’exact’ will run on the slower, but exact, algorithm in O(N^2) time. The exact algorithm should be used when nearest-neighbor errors need to be better than 3%. However, the exact method cannot scale to millions of examples.
That's my emphasis, but I would certainly read that as implying the Barnes-Hut method can scale to millions of examples. However, I'll reiterate that the code requires calculating the full distance matrix well before we even get to any of the actual t-SNE transformations (with or without Barnes-Hut).
So am I missing something? Is it possible to scale this up to millions of datapoints?
Barnes-Hut does NOT require you to compute and store the full, dense similarity matrix for the input observations.
Also, take a look at the references mentioned in the documentation. In particular, this one. Quoting that page:
The technique can be implemented via Barnes-Hut approximations, allowing it to be applied on large real-world datasets. We applied it on data sets with up to 30 million examples.
That page also links to this talk about how the approximation works: Visualizing Data Using t-SNE.
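For completeness, this is roughly how the Barnes-Hut variant is invoked in scikit-learn; whether the full distance matrix gets built internally may depend on the sklearn version:

import numpy as np
from sklearn.manifold import TSNE

X = np.random.rand(5000, 190).astype(np.float32)   # small stand-in for the ~100k x 190 data
emb = TSNE(n_components=2, method='barnes_hut', angle=0.5,
           init='pca', random_state=0).fit_transform(X)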
I recommend using another algorithm called UMAP. It has been shown to perform at least as well as t-SNE, and in most cases it performs better. Most importantly, it scales significantly better. Their approaches to the problem are similar, so they generate similar results, but UMAP is a lot faster (look at the last graph here: https://umap-learn.readthedocs.io/en/latest/benchmarking.html). You can look at the original paper and the following links for details.
https://www.nature.com/articles/nbt.4314.pdf
https://towardsdatascience.com/how-exactly-umap-works-13e3040e1668#:~:text=tSNE%20is%20Dead.&text=Despite%20tSNE%20made%20a%20dramatic,be%20fixed%20sooner%20or%20later.
OpenVisuMap (at github) implements t-SNE without resorting to approximation. It uses the GPU to calculate the distance matrix on the fly. It still has O(N^2) computational complexity, but only O(N) memory complexity.
I am studying some machine-learning and I have come across, in several places, that Latent Semantic Indexing may be used for feature selection. Can someone please provide a brief, simplified explanation of how this is done? Ideally both theoretically and in commented code. How does it differ from Principal Component Analysis?
What language it is written in doesn't really worry me, just that I can understand both code and theory.
LSA is conceptually similar to PCA, but is used in different settings.
The goal of PCA is to transform data into a new, possibly lower-dimensional space. For example, if you wanted to recognize faces and used 640x480 pixel images (i.e. vectors in 307200-dimensional space), you would probably try to reduce this space to something more reasonable, both to make it computationally simpler and to make the data less noisy. PCA does exactly this: it "rotates" the axes of your high-dimensional space and assigns a "weight" to each of the new axes, so that you can throw away the least important ones.
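A minimal sketch of that reduction with scikit-learn's PCA (the array sizes are just a small stand-in for the face example):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 64 * 48)          # 200 flattened images (small stand-in for 640x480)
pca = PCA(n_components=100)
X_reduced = pca.fit_transform(X)          # shape (200, 100): the "rotated", truncated axes
print(pca.explained_variance_ratio_[:5])  # relative "weight" of the most important axes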
LSA, on the other hand, is used to analyze the semantic similarity of words. It can't handle images, or bank data, or some other custom dataset. It is designed specifically for text processing, and works specifically with term-document matrices. Such matrices, however, are often considered too large, so they are reduced to lower-rank matrices in a way very similar to PCA (both of them use SVD). Feature selection, though, is not performed here. Instead, what you get is a feature-vector transformation. SVD provides you with a transformation matrix (let's call it S) which, when multiplied by an input vector x, gives a new vector x' in a smaller space with a more important basis.
This new basis is made up of your new features. They are not selected, though, but rather obtained by transforming the old, larger basis.
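A short sketch of that pipeline, using a TF-IDF term-document matrix and truncated SVD in scikit-learn (the tiny corpus is only illustrative):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply",
        "investors sold shares today"]

X = TfidfVectorizer().fit_transform(docs)   # document-term matrix
lsa = TruncatedSVD(n_components=2)          # low-rank approximation via SVD
X_topics = lsa.fit_transform(X)             # each row: a document in the new, smaller basis
print(X_topics.shape)                       # (4, 2)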
For more details on LSA, as well as implementation tips, see this article.