Specific tensor decomposition - python

I want to decompose a 3-dimensional tensor using SVD.
I am not quite sure if, and how, the following decomposition can be achieved.
I already know how I can split the tensor horizontally from this tutorial: tensors.org Figure 2.2b
import numpy as np
from numpy import linalg as LA

d = 10; A = np.random.rand(d,d,d)
Am = A.reshape(d**2,d)
Um,Sm,Vh = LA.svd(Am,full_matrices=False)
U = Um.reshape(d,d,d); S = np.diag(Sm)

Matrix methods can be naturally extended to higher orders. SVD, for instance, can be generalized to tensors, e.g. with the Tucker decomposition, sometimes called a higher-order SVD.
We maintain a Python library for tensor methods, TensorLy, which lets you do this easily. In this case you want a partial Tucker as you want to leave one of the modes uncompressed.
Let's import the necessary parts:
import tensorly as tl
from tensorly import random
from tensorly.decomposition import partial_tucker
For testing, let's create a 3rd order tensor of size (10, 10, 10):
size = 10
order = 3
shape = (size, )*order
tensor = random.random_tensor(shape)
You can now decompose the tensor using a partial Tucker decomposition. In your case, you want to leave one of the dimensions untouched, so you'll only have two factors (your U and V) and a core tensor (your S):
core, factors = partial_tucker(tensor, rank=size, modes=[0, 2])
You can reconstruct the original tensor from your approximation using a series of n-mode products to contract the core with the factors:
from tensorly import tenalg
rec = tenalg.multi_mode_dot(core, factors, modes=[0, 2])
rec_error = tl.norm(rec - tensor)/tl.norm(tensor)
print(f'Relative reconstruction error: {rec_error}')
In my case, I get
Relative reconstruction error: 9.66027176805661e-16
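Note that rank=size gives you an exact re-factoring. If you want actual compression, you can truncate the rank; a minimal sketch reusing the objects from above (the value 5 is arbitrary):
# a sketch: truncating the rank trades reconstruction accuracy for compression
core_small, factors_small = partial_tucker(tensor, rank=5, modes=[0, 2])
rec_small = tenalg.multi_mode_dot(core_small, factors_small, modes=[0, 2])
print(f'Relative error at rank 5: {tl.norm(rec_small - tensor)/tl.norm(tensor)}')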

You can also use "tensorlearn" package in python for example using tensor-train (TT) SVD algorithm.
https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition
import numpy as np
import tensorlearn as tl

# let's generate an arbitrary array
tensor = np.arange(0,1000)
# reshape it into a higher (3-) dimensional tensor
tensor = np.reshape(tensor,(10,20,5))
epsilon = 0.05
# decompose the tensor into its factors
tt_factors = tl.auto_rank_tt(tensor, epsilon)  # epsilon is the error bound
# tt_factors is a list of three arrays, which are the TT-cores
# rebuild (estimate) the tensor from the factors as tensor_hat
tensor_hat = tl.tt_to_tensor(tt_factors)
# let's see the error
error_tensor = tensor - tensor_hat
error = tl.tensor_frobenius_norm(error_tensor)/tl.tensor_frobenius_norm(tensor)
print('error (%) = ', error*100)  # which is less than epsilon
# one use of tensor decomposition is data compression,
# so let's calculate the compression ratio
data_compression_ratio = tl.tt_compression_ratio(tt_factors)
# data saving
data_saving = 1 - (1/data_compression_ratio)
print('data_saving (%): ', data_saving*100)

Related

Finite difference using xarray rolling

My goal is to compute a derivative of a moving window of a multidimensional dataset along a given dimension, where the dataset is stored as Xarray DataArray or DataSet.
In the simplest case, given a 2D array I would like to compute a moving difference across multiple entries in one dimension, e.g.:
import numpy as np

data = np.kron(np.linspace(0,1,10), np.linspace(1,4,6)).reshape(10,6)
T = 3
reducedArray = np.zeros_like(data)
for i in range(data.shape[1]):
    if i < T:
        reducedArray[:,i] = data[:,i] - data[:,0]
    else:
        reducedArray[:,i] = data[:,i] - data[:,i-T]
where the if i < T condition ensures that input and output contain proper values (i.e., no NaNs) and are of identical shape.
Xarray's diff aims to perform a finite-difference approximation of a given derivative order using nearest-neighbours, so it is not suitable here, hence the question:
Is it possible to perform this operation using Xarray functions only?
The rolling weighted average example appears to be something similar, but still too distinct due to the usage of NumPy routines. I've been thinking that something along the lines of the following should work:
import xarray as xr

xr2DDataArray = xr.DataArray(
    data,
    dims=('x','y'),
    coords={'x': np.linspace(0,1,10), 'y': np.linspace(1,4,6)}
)
r = xr2DDataArray.rolling(x=T, min_periods=2)
r.reduce(redFn)
I am struggling with the definition of redFn here, though.
Caveat: The actual dataset to which the operation is to be applied will have a size of ~10 GiB, so a solution that does not blow up the memory requirements would be highly appreciated!
Update/Solution
Using Xarray rolling
After sleeping on it and a bit more fiddling, the post linked above actually contains a solution. To obtain a finite difference we just have to define the weights to be $\pm 1$ at the ends and $0$ elsewhere:
def fdMovingWindow(data, **kwargs):
    T = kwargs.pop('T')
    weights = np.zeros(T)
    weights[0] = -1
    weights[-1] = 1
    axis = kwargs['axis']
    if data.shape[axis] == T:
        return np.sum(data * weights, **kwargs)
    else:
        return 0

r.reduce(fdMovingWindow, T=4)
Alternatively, using construct and a dot product:
weights = np.zeros(T)
weights[0] = -1
weights[-1] = 1
xrWeights = xr.DataArray(weights, dims=['window'])
xr2DDataArray.rolling(y=T,min_periods=1).construct('window').dot(xrWeights)
This carries a massive caveat: the procedure essentially creates a list of arrays representing the moving window. This is fine for a modest 2D/3D array, but for a 4D array that takes up ~10 GiB in memory this will lead to an OOM death!
Simplistic - memory efficient
A less memory-intensive way is to copy the array and operate on it in a NumPy-like fashion:
xrDiffArray = xr2DDataArray.copy()
dy = xr2DDataArray.y.values[1] - xr2DDataArray.y.values[0]  # equidistant sampling
for src in xr2DDataArray:
    if src.y.values < xr2DDataArray.y.values[0] + T*dy:
        xrDiffArray.loc[dict(y=src.y.values)] = src.values - xr2DDataArray.values[0]
    else:
        xrDiffArray.loc[dict(y=src.y.values)] = src.values - xr2DDataArray.sel(y=src.y.values - dy*T).values
This will produce the intended result without dimensional errors, but it requires a copy of the dataset.
I was hoping to utilise Xarray to prevent a copy and instead just chain operations that are then evaluated if and when values are actually requested.
A suggestion as to how to accomplish this will still be welcomed!
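One direction that might avoid the copy (untested on the full dataset, and it requires dask): chunk the array first, so that construct and the dot product stay lazy and are only evaluated on compute():
import numpy as np
import xarray as xr

# hypothetical 4D array; chunking keeps rolling/construct/dot lazy via dask
da = xr.DataArray(np.random.rand(20, 20, 20, 20),
                  dims=('t', 'x', 'y', 'z')).chunk({'t': 5})
T = 4
w = np.zeros(T); w[0] = -1; w[-1] = 1
weights = xr.DataArray(w, dims=['window'])
diff = da.rolling(z=T, min_periods=1).construct('window').dot(weights)
result = diff.compute()  # nothing is materialized before this point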
I have never used xarray, so maybe I am mistaken, but I think you can get the result you want while avoiding loops and conditionals. This is at least twice as fast as your example for numpy arrays:
data = np.kron(np.linspace(0,1,10), np.linspace(1,4,6)).reshape(10,6)
reducedArray = np.empty_like(data)
reducedArray[:, T:] = data[:, T:] - data[:, :-T]
reducedArray[:, :T] = data[:, :T] - data[:, 0, np.newaxis]
I imagine the improvement will be higher when using DataArrays.
It does not use xarray functions, nor does it depend on numpy functions. I am confident that translating this to xarray should be straightforward. I know that it works if there are no coords, but once you include them you get an error because of the coords mismatch (the coords of data[:, T:] and of data[:, :-T] are different). Sadly, I can't do better right now.
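That said, xarray's shift() might sidestep the coords mismatch, since it moves values along a dimension while keeping the labels aligned; a sketch for the 2D example (untested at scale):
import numpy as np
import xarray as xr

T = 3
data = np.kron(np.linspace(0, 1, 10), np.linspace(1, 4, 6)).reshape(10, 6)
da = xr.DataArray(data, dims=('x', 'y'),
                  coords={'x': np.linspace(0, 1, 10), 'y': np.linspace(1, 4, 6)})

shifted = da.shift(y=T)                            # data[:, i-T] lands at position i; NaN at the leading edge
shifted = shifted.fillna(da.isel(y=0, drop=True))  # mirrors the i < T branch of the loop
result = da - shifted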

Gaussian filter in PyTorch

I am looking for a way to apply a Gaussian filter to an image (tensor) only using PyTorch functions. Using numpy, the equivalent code is
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

# Define 2D Gaussian kernel
def gkern(kernlen=256, std=128):
    """Returns a 2D Gaussian kernel array."""
    gkern1d = signal.gaussian(kernlen, std=std).reshape(kernlen, 1)
    gkern2d = np.outer(gkern1d, gkern1d)
    return gkern2d

# Generate random matrix and multiply the kernel by it
A = np.random.rand(256*256).reshape([256,256])

# Test plot
plt.figure()
plt.imshow(A*gkern(256, std=32))
plt.show()
The closest suggestion I found is based on this post:
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=264, bias=False)
with torch.no_grad():
    conv.weight = gaussian_weights
But it gives me the error NameError: name 'gaussian_weights' is not defined. How can I make it work?
Yup, I also had the same idea. So now the question becomes: is there a way to define a Gaussian kernel (or a 2D Gaussian) without using NumPy and/or without explicitly specifying the weights?
Yes, it is pretty easy. Just have a look at the documentation of signal.gaussian. There is a link to the source code there. So what the method is doing is the following:
def gaussian(M, std, sym=True):
    if M < 1:
        return np.array([])
    if M == 1:
        return np.ones(1, 'd')
    odd = M % 2
    if not sym and not odd:
        M = M + 1
    n = np.arange(0, M) - (M - 1.0) / 2.0
    sig2 = 2 * std * std
    w = np.exp(-n ** 2 / sig2)
    if not sym and not odd:
        w = w[:-1]
    return w
And you are lucky because it is straightforward to convert to PyTorch: (almost) just replace np by torch and you are done!
Also, note that the np.outer equivalent in torch is ger (torch.outer in recent versions).
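For illustration, a minimal sketch of that port (dropping the sym handling; torch.outer is assumed to be available, use ger on older versions):
import torch

def torch_gaussian(M, std):
    # direct port of the scipy code above, with np replaced by torch
    n = torch.arange(0, M, dtype=torch.float32) - (M - 1.0) / 2.0
    return torch.exp(-n ** 2 / (2 * std * std))

g1d = torch_gaussian(256, std=128)
gaussian_weights = torch.outer(g1d, g1d)  # 2D kernel; torch equivalent of np.outer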
There is a PyTorch class to apply Gaussian blur to your image:
torchvision.transforms.GaussianBlur(kernel_size, sigma=(0.1, 2.0))
Check the documentation for more info.
Assuming that the question actually asks for a convolution with a Gaussian (i.e. a Gaussian blur, which is what the title and the accepted answer imply to me) and not for a multiplication (i.e. a vignetting effect, which is what the question's demo code produces), here is a pure PyTorch version that does not need torchvision to be installed (otherwise torchvision.transforms.GaussianBlur() can be used instead, as has been proposed by Mushfirat Mohaimin's answer):
from math import ceil

import torch
from torch.nn.functional import conv2d
from torch.distributions import Normal

def gaussian_kernel_1d(sigma: float, num_sigmas: float = 3.) -> torch.Tensor:
    radius = ceil(num_sigmas * sigma)
    support = torch.arange(-radius, radius + 1, dtype=torch.float)
    kernel = Normal(loc=0, scale=sigma).log_prob(support).exp_()
    # Ensure kernel weights sum to 1, so that image brightness is not altered
    return kernel.mul_(1 / kernel.sum())

def gaussian_filter_2d(img: torch.Tensor, sigma: float) -> torch.Tensor:
    kernel_1d = gaussian_kernel_1d(sigma)  # Create 1D Gaussian kernel
    padding = len(kernel_1d) // 2  # Ensure that image size does not change
    img = img.unsqueeze(0).unsqueeze_(0)  # Need 4D data for ``conv2d()``
    # Convolve along columns and rows
    img = conv2d(img, weight=kernel_1d.view(1, 1, -1, 1), padding=(padding, 0))
    img = conv2d(img, weight=kernel_1d.view(1, 1, 1, -1), padding=(0, padding))
    return img.squeeze_(0).squeeze_(0)  # Make 2D again

if __name__ == "__main__":
    import matplotlib.pyplot as plt

    img = torch.rand(size=(100, 100))
    img_filtered = gaussian_filter_2d(img, sigma=1.5)
    plt.subplot(121)
    plt.imshow(img)
    plt.subplot(122)
    plt.imshow(img_filtered)
    plt.show()
The code uses the basic idea of a separable filter that Andrei Bârsan implied in a comment to this answer. This means that convolution with a 2D Gaussian kernel can be replaced by convolving twice with a 1D Gaussian kernel – once along the image's columns, once along its rows. This is more efficient in general, as it uses 2N rather than N² multiplications per pixel for a kernel of side length N.
So in the provided code, we first create a 1D Gaussian kernel with gaussian_kernel_1d(), which we then apply twice in gaussian_filter_2d().
Some more notes on the code:
The parameter num_sigmas controls how many standard deviations, and thus how much of the bulge of the Gaussian function, we actually sample for producing the convolution kernel. As the Gaussian function theoretically has infinite support (meaning it is never zero), this presents a trade-off between accuracy and kernel size (which affects speed and memory use). A radius of 3 * sigma for each half of the support is usually sufficient, given that it covers about 99.7% of the area under the corresponding Gaussian function.
Rather than using Normal().log_prob().exp_() for producing the kernel, we could explicitly write the function of the normal distribution here, which might be a bit more efficient. In fact, we could write kernel = support.square_().mul_(-.5 / (sigma ** 2)).exp_(), thus (1) altering the values of support in place (as we won't need them any longer) and (2) even omitting the normalization constant of the normal distribution (as we must normalize the kernel by its sum before returning it anyway).
Although we use conv2d() rather than conv1d(), effectively we still have two 1D convolutions, as we apply a N×1 and 1×N kernel in conv2d(). We could have used conv1d() instead, but the code is much simpler with conv2d().
In more recent PyTorch versions, we can use conv2d(…, padding="same"), rather than calculating the padding amount ourselves. In either case, using conv2d()'s padding parameter implies padding with zeros. If we wanted more padding options, we could manually pad the image with torch.nn.functional.pad() before the convolution instead.
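For instance, a small sketch of the manual-padding variant (the reflection mode and the box kernel are just illustrative choices):
import torch
from torch.nn.functional import pad, conv2d

img = torch.rand(1, 1, 100, 100)                     # 4D data, as conv2d() expects
kernel = torch.ones(1, 1, 5, 5) / 25.                # stand-in for a Gaussian kernel
img_padded = pad(img, (2, 2, 2, 2), mode='reflect')  # (left, right, top, bottom)
out = conv2d(img_padded, kernel)                     # output is 100x100 again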
Used all the code from above, updated with the PyTorch replacement for np.outer (torch.outer):
import numpy as np
import torch
import matplotlib.pyplot as plt

def gaussian_fn(M, std):
    n = torch.arange(0, M) - (M - 1.0) / 2.0
    sig2 = 2 * std * std
    w = torch.exp(-n ** 2 / sig2)
    return w

def gkern(kernlen=256, std=128):
    """Returns a 2D Gaussian kernel array."""
    gkern1d = gaussian_fn(kernlen, std=std)
    gkern2d = torch.outer(gkern1d, gkern1d)
    return gkern2d

# Generate random matrix and multiply the kernel by it
A = np.random.rand(256*256).reshape([256,256])
A = torch.from_numpy(A)
gaussian_filter = gkern(256, std=32)

ax = []
f = plt.figure(figsize=(12,5))
ax.append(f.add_subplot(131))
ax.append(f.add_subplot(132))
ax.append(f.add_subplot(133))
ax[0].imshow(A, cmap='gray')
ax[1].imshow(gaussian_filter, cmap='gray')
ax[2].imshow(A*gaussian_filter, cmap='gray')
plt.show()

Re-compose a Tensor after tensor factorization

I am trying to decompose a 3D tensor using the python library scikit-tensor. I managed to decompose my tensor (with dimensions 100x50x5) into three matrices. My question is: how can I compose the initial tensor again using the matrices produced by the factorization? I want to check if the decomposition has any meaning. My code is the following:
import logging
from scipy.io.matlab import loadmat
from sktensor import dtensor, cp_als
import numpy as np

# Set logging to DEBUG to see CP-ALS information
logging.basicConfig(level=logging.DEBUG)
T = np.ones((400, 50))
T = dtensor(T)
P, fit, itr, exectimes = cp_als(T, 10, init='random')
# how can I re-compose the matrix T? TA = np.dot(P.U[0], P.U[1].T)
I am using the canonical (CP) decomposition as provided by the scikit-tensor library function cp_als. Also, what is the expected dimensionality of the decomposed matrices?
The CP product of, for example, 4 matrices can be expressed in Einstein notation as $T_{abcd} = \sum_z A_{az} B_{bz} C_{cz} D_{dz}$, or in numpy as
numpy.einsum('az,bz,cz,dz -> abcd', A, B, C, D)
so in your case you would use
numpy.einsum('az,bz->ab', P.U[0], P.U[1])
or, in your 3-matrix case
numpy.einsum('az,bz,cz->abc', P.U[0], P.U[1], P.U[2])
sktensor.ktensor.ktensor also has a method totensor() that does exactly this:
np.allclose(np.einsum('az,bz->ab', P.U[0], P.U[1]), P.totensor())
>>> True
See an explanation of CP here. You may also use the tensorlearn package to rebuild the tensor.

Computing cross-correlation function?

In R, I am using ccf or acf to compute the pair-wise cross-correlation function so that I can find out which shift gives me the maximum value. From the looks of it, R gives me a normalized sequence of values. Is there something similar in Python's scipy or am I supposed to do it using the fft module? Currently, I am doing it as follows:
import numpy
from numpy.fft import rfft, irfft

xcorr = lambda x,y : irfft(rfft(x)*rfft(y[::-1]))
x = numpy.array([0,0,1,1])
y = numpy.array([1,1,0,0])
print(xcorr(x,y))
To cross-correlate 1d arrays use numpy.correlate.
For 2d arrays, use scipy.signal.correlate2d.
There was also scipy.stsci.convolve.correlate2d in older SciPy versions.
There is also matplotlib.pyplot.xcorr which is based on numpy.correlate.
See this post on the SciPy mailing list for some links to different implementations.
Edit: #user333700 added a link to the SciPy ticket for this issue in a comment.
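If you just need something quick, a minimal sketch of a normalized cross-correlation built on numpy.correlate (the scaling roughly mimics R's ccf; double-check the lag sign convention against R):
import numpy as np

def ccf(x, y):
    # normalize both series, then take the full cross-correlation
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    return np.correlate(x, y, mode='full')

x = np.array([0, 0, 1, 1], dtype=float)
y = np.array([1, 1, 0, 0], dtype=float)
corr = ccf(x, y)
lags = np.arange(-(len(y) - 1), len(x))
print(lags[np.argmax(corr)])  # shift that maximizes the correlation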
If you are looking for a rapid, normalized cross-correlation in either one or two dimensions, I would recommend the OpenCV library (see http://opencv.willowgarage.com/wiki/ or http://opencv.org/). The cross-correlation code maintained by this group is the fastest you will find, and it will be normalized (results between -1 and 1).
While this is a C++ library, the code is maintained with CMake and has python bindings, so that access to the cross-correlation functions is convenient. OpenCV also plays nicely with numpy. If I wanted to compute a 2-D cross-correlation starting from numpy arrays, I could do it as follows.
import numpy
import cv

#Create a random template and place it in a larger image
templateNp = numpy.random.random( (100,100) )
image = numpy.random.random( (400,400) )
image[:100, :100] = templateNp

#create a numpy array for storing result
resultNp = numpy.zeros( (301, 301) )

#convert from numpy format to openCV format
templateCv = cv.fromarray(numpy.float32(templateNp))
imageCv = cv.fromarray(numpy.float32(image))
resultCv = cv.fromarray(numpy.float32(resultNp))

#perform cross correlation
cv.MatchTemplate(templateCv, imageCv, resultCv, cv.CV_TM_CCORR_NORMED)

#convert result back to numpy array
resultNp = numpy.asarray(resultCv)
For just a 1-D cross-correlation, create a 2-D array with shape equal to (N, 1). Though there is some extra code involved to convert to an openCV format, the speed-up over scipy is quite impressive.
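For reference, the old cv module has since been removed; a rough equivalent with the modern cv2 bindings might look like this (a sketch, not the original answer's code):
import numpy as np
import cv2

template = np.float32(np.random.random((100, 100)))
image = np.float32(np.random.random((400, 400)))
image[:100, :100] = template

# normalized cross-correlation; the result has shape (301, 301)
result = cv2.matchTemplate(image, template, cv2.TM_CCORR_NORMED)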
I just finished writing my own optimised implementation of normalized cross-correlation for N-dimensional arrays. You can get it from here.
It will calculate cross-correlation either directly, using scipy.ndimage.correlate, or in the frequency domain, using scipy.fftpack.fftn/ifftn depending on whichever will be quickest.
For 1D arrays, numpy.correlate is faster than scipy.signal.correlate; across different sizes, I see a consistent 5x performance gain using numpy.correlate. When the two arrays are of similar size (the bright line connecting the diagonal), the performance difference is even more outstanding (50x+).
# a simple benchmark
from datetime import datetime
import numpy as np
import scipy.signal
import matplotlib.pyplot as plt

res = []
for x in range(1, 1000):
    list_x = []
    for y in range(1, 1000):
        # generate different sizes of series to compare
        l1 = np.random.choice(range(1, 100), size=x)
        l2 = np.random.choice(range(1, 100), size=y)
        time_start = datetime.now()
        np.correlate(a=l1, v=l2)
        t_np = datetime.now() - time_start
        time_start = datetime.now()
        scipy.signal.correlate(in1=l1, in2=l2)
        t_scipy = datetime.now() - time_start
        list_x.append(t_scipy / t_np)
    res.append(list_x)
plt.imshow(np.matrix(res))
By default, scipy.signal.correlate calculates a few extra numbers by padding, and that might explain the performance difference.
>> l1 = [1,2,3,2,1,2,3]
>> l2 = [1,2,3]
>> print(numpy.correlate(a=l1, v=l2))
>> print(scipy.signal.correlate(in1=l1, in2=l2))
[14 14 10 10 14]
[ 3 8 14 14 10 10 14 8 3] # the first 3 is [0,0,1]dot[1,2,3]
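If you want scipy to reproduce numpy's output, request the unpadded part explicitly: numpy.correlate defaults to mode='valid', while scipy.signal.correlate defaults to mode='full':
import numpy as np
from scipy import signal

l1 = np.array([1, 2, 3, 2, 1, 2, 3])
l2 = np.array([1, 2, 3])
print(np.correlate(l1, l2))                    # [14 14 10 10 14]
print(signal.correlate(l1, l2, mode='valid'))  # [14 14 10 10 14]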

How to extend pyWavelets to work with N-dimensional data?

This may be a question for a different forum, if so please let me know. I noticed that only 14 people follow the wavelet tag.
I have here an elegant way of extending the wavelet decomposition in pywt (the PyWavelets package) to multiple dimensions. This should run out of the box if pywt is installed. Test 1 shows the decomposition and recomposition of a 3D array. All one has to do is increase the number of dimensions, and the code will work in decomposing/recomposing with 4, 6 or even 18 dimensions of data.
I've replaced the pywt.wavedec and pywt.waverec functions here. Also, in fn_dec, I show how the new wavedec function works just like the old one.
There is one catch though: it represents the wavelet coefficients as an array of the same shape as the data. As a consequence, with my limited knowledge of wavelets, I have only been able to use it for Haar wavelets. Others, like DB4 for example, bleed coefficients over the edges of these strict bounds (not a problem with the usual representation of coefficients as a list of arrays [cA, cDn, ..., cD1]). Another catch is that I have only tested this on data cuboids with 2^N-length edges.
Theoretically, I think it should be possible to make sure that the "bleeding" does not occur. An algorithm for this sort of wavelet decomposition and recomposition is discussed in "Numerical Recipes in C" by William H. Press, Saul A. Teukolsky, William T. Vetterling and Brian P. Flannery (Second Edition). Though this algorithm assumes reflection at the edges rather than the other forms of edge extension (like zpd), the method is general enough to work for other forms of extension.
Any suggestion on how to extend this work to other wavelets?
NOTE: This query is also posted on http://groups.google.com/group/pywavelets
Thanks,
Ajo
import pywt
import sys
import numpy as np

def waveFn(wavelet):
    if not isinstance(wavelet, pywt.Wavelet):
        return pywt.Wavelet(wavelet)
    else:
        return wavelet

# given a single dimensional array ... returns the coefficients.
def wavedec(data, wavelet, mode='sym'):
    wavelet = waveFn(wavelet)
    dLen = len(data)
    coeffs = np.zeros_like(data)
    level = pywt.dwt_max_level(dLen, wavelet.dec_len)
    a = data
    end_idx = dLen
    for idx in xrange(level):
        a, d = pywt.dwt(a, wavelet, mode)
        begin_idx = end_idx/2
        coeffs[begin_idx:end_idx] = d
        end_idx = begin_idx
    coeffs[:end_idx] = a
    return coeffs

def waverec(data, wavelet, mode='sym'):
    wavelet = waveFn(wavelet)
    dLen = len(data)
    level = pywt.dwt_max_level(dLen, wavelet.dec_len)
    end_idx = 1
    a = data[:end_idx]  # approximation ... also the original data
    d = data[end_idx:end_idx*2]
    for idx in xrange(level):
        a = pywt.idwt(a, d, wavelet, mode)
        end_idx *= 2
        d = data[end_idx:end_idx*2]
    return a

def fn_dec(arr):
    return np.array(map(lambda row: reduce(lambda x,y : np.hstack((x,y)), pywt.wavedec(row, 'haar', 'zpd')), arr))
    # return np.array(map(lambda row: row*2, arr))

if __name__ == '__main__':
    test = 1
    np.random.seed(10)
    wavelet = waveFn('haar')
    if test==0:
        # single dimensional test.
        a = np.random.randn(1,8)
        print "original values A"
        print a
        print "decomposition of A by method in pywt"
        print fn_dec(a)
        print "decomposition of A by my method"
        coeffs = wavedec(a[0], 'haar', 'zpd')
        print coeffs
        print "recomposition of A by my method"
        print waverec(coeffs, 'haar', 'zpd')
        sys.exit()
    if test==1:
        a = np.random.randn(4,4,4)
        # 3D test
        print "original value of A"
        print a
        # decompose the signal into wavelet coefficients.
        dimensions = a.shape
        for dim in dimensions:
            a = np.rollaxis(a, 0, a.ndim)
            ndim = a.shape
            # a = fn_dec(a.reshape(-1, dim))
            a = np.array(map(lambda row: wavedec(row, wavelet), a.reshape(-1, dim)))
            a = a.reshape(ndim)
        print "decomposition of signal into coefficients"
        print a
        # re-composition of the coefficients into original signal
        for dim in dimensions:
            a = np.rollaxis(a, 0, a.ndim)
            ndim = a.shape
            a = np.array(map(lambda row: waverec(row, wavelet), a.reshape(-1, dim)))
            a = a.reshape(ndim)
        print "recomposition of coefficients to signal"
        print a
First of all, I would like to point you to the function that already implements the single-level multi-dimensional transform (source). It returns a dictionary of n-dimensional coefficient arrays. Coefficients are addressed by keys that describe the type of transform (approximation/details) applied to each of the dimensions.
For example for a 2D case the result is a dictionary with approximation and details coefficients arrays:
>>> pywt.dwtn([[1,2,3,4],[3,4,5,6],[5,6,7,8],[7,8,9,10]], 'db1')
{'aa': [[5.0, 9.0], [13.0, 17.0]],
'ad': [[-1.0, -1.0], [-1.0, -1.0]],
'da': [[-2.0, -2.0], [-2.0, -2.0]],
'dd': [[0.0, 0.0], [0.0, -0.0]]}
Where aa is the coefficients array with approximation transform applied to both dimensions (LL) and da is the coefficients array with details transform applied to the first dimension and approximation transform applied to the second one (HL) (compare with dwt2 output).
Based on that it should be fairly easy to extend it to the multi-level case.
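For instance, a minimal sketch of such a multi-level extension, which repeatedly re-decomposes the all-approximation band (an illustration only, not battle-tested):
import numpy as np
import pywt

def multilevel_dwtn(data, wavelet, level):
    data = np.asarray(data, dtype=float)
    approx_key = 'a' * data.ndim  # e.g. 'aa' in 2D, 'aaa' in 3D
    details = []
    for _ in range(level):
        coeffs = pywt.dwtn(data, wavelet)
        data = coeffs.pop(approx_key)  # only the approximation band is re-decomposed
        details.append(coeffs)
    return data, details  # final approximation plus one detail dict per level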
Here's my take on the decomposition part: https://gist.github.com/934166.
I would also like to address one issue you mention in your question:
There is one catch though: It represents the wavelet coefficients as an array of the same shape as the data.
The approach of representing the results as an array of the same shape/size as the input data is, in my opinion, harmful. It makes the whole thing unnecessarily complex to understand and work with, because you have to make assumptions or maintain a secondary data structure with indexes just to access coefficients in the output array and perform an inverse transform (see Matlab's documentation for wavedec/waverec).
Also, even though it works great on paper, it does not always fit real-world applications, because of the problems you have mentioned: most of the time the input data size is not 2^n, and the decimated result of convolving the signal with the wavelet filter is larger than the "storage space", which in turn can lead to data loss and non-perfect reconstruction.
To avoid these problems I would recommend using more natural data structures to represent the result data hierarchy, like Python's lists, dictionaries and tuples (where available).
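For what it's worth, current PyWavelets versions ship exactly such a structure: pywt.wavedecn returns a list [cAn, {details_n}, ..., {details_1}], with pywt.waverecn as its inverse:
import numpy as np
import pywt

coeffs = pywt.wavedecn(np.random.randn(8, 8, 8), 'haar', level=2)
rec = pywt.waverecn(coeffs, 'haar')  # reconstructs the original 8x8x8 array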
