I want to create a vector of size 10^15 with NumPy and fill it with random numbers, but I get the following error:
Maximum allowed dimension exceeded.
Would it help if I used MPI?
Thank you
The Message Passing Interface (MPI) is mainly used for parallel computation across multiple machines (nodes). Large arrays can be split into smaller pieces and stored on different machines. However, while it is of course possible to distribute the data across nodes, think carefully about whether your particular task actually requires it. Also, if you are able to split your array, you could just as well do that on a single machine. If performance is not an issue, avoid parallel computing.
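For illustration, here is a minimal mpi4py sketch of that idea (assuming mpi4py is installed and the script is launched with something like mpirun -n 4 python script.py; the vector length is illustrative, not 10^15): each rank creates and processes only its own slice of the vector, and partial results are combined with a reduction.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

total_length = 10**8                 # illustrative size, not 10**15
local_length = total_length // size

# Each rank generates only its own chunk of random numbers.
local_chunk = np.random.random(local_length)

# Work on local_chunk here; partial results can be combined with reductions.
local_sum = local_chunk.sum()
global_sum = comm.allreduce(local_sum, op=MPI.SUM)
if rank == 0:
    print("global mean:", global_sum / (local_length * size))

Even split across many nodes, 10^15 float64 values amount to roughly 8 PB, so the first question should be whether the full vector really needs to exist at all.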
My problem is to perform three matrix multiplications on a 3D numpy array A that is too large to fit in a single processor's memory. In tensorial form, I want A_ijk B_km C_jn D_ip (B, C, and D all fit in memory individually).
I believe the best approach is to split the operation into the individual multiplications and make sure each one is local. This link has a really useful diagram that summarises what I mean: http://www.2decomp.org/1d_mode.html.
In more detail: First, to do A_ijk B_km, I should distribute A over the first two axes, and perform the matrix multiplication over each pencil locally (the first step in the diagram).
Then, I need to transpose the array, making the j axis local to each processor (and splitting over the k (now m) axis), to then perform the next multiplication. (So going from the first to the second step in the diagram). This is where I wonder if dask could help.
I'm aware that this can be done in principle using mpi4py, but the steps are pretty non-trivial, whereas dask arrays have helpful rechunk and transpose methods, which feel relevant to this application.
Does this seem like something well-suited to dask?
If not, is anyone aware of any Python libraries that can perform these steps? I know that FFTW has routines for doing exactly this, but I don't know how to write the necessary C code or how to make it interface with Python and numpy.
Thanks for any help.
For anyone else in the future: mpi4py does provide the operation needed for this kind of distributed transpose, called Alltoall/Alltoallv. It isn't explained in the mpi4py documentation or tutorial; I found out about it from another tutorial: https://info.gwdg.de/wiki/doku.php?id=wiki:hpc:mpi4py.
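For reference, the basic pattern looks like this (a minimal sketch, assuming mpi4py is installed and the script is launched with mpirun; turning it into a full distributed transpose still requires some index bookkeeping on top): block j of each rank's send buffer is delivered to rank j.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank prepares `size` equal blocks; block j is destined for rank j.
block = 3
send = np.full((size, block), rank, dtype='i')
recv = np.empty_like(send)

comm.Alltoall(send, recv)
# Now recv[j] holds the block that rank j sent to this rank.
print(rank, recv.ravel())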
Dask implements einsum, which may be what you are after, and there is, of course, matmul if you want to write out the operation. So long as your large array A is a Dask array with reasonable chunk sizes, Dask will parcel out the work without running out of memory.
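For example, a contraction like the one above could look like this (a minimal sketch; the sizes and chunk shapes are illustrative, and the small factors B, C, D are kept as plain NumPy arrays):

import numpy as np
import dask.array as da

I, J, K, M, N, P = 1000, 1000, 1000, 50, 50, 50

A = da.random.random((I, J, K), chunks=(200, 200, K))   # large, chunked
B = np.random.random((K, M))                             # small, fits in memory
C = np.random.random((J, N))
D = np.random.random((I, P))

# A_ijk B_km C_jn D_ip, summed over i, j, k -> result indexed by m, n, p
result = da.einsum('ijk,km,jn,ip->mnp', A, B, C, D)
out = result.compute()    # shape (M, N, P)

The graph is built lazily, so the whole of A never has to be in memory at once during compute().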
I am trying to learn ML using Kaggle datasets. In one of the problems (using logistic regression), the input and parameter matrices are of size (1110001, 8) and (2122640, 8) respectively.
I am getting a memory error while doing it in Python. I guess it would be the same in any language since the data is just too big. My question is: how do real-life ML implementations multiply matrices this large?
Things bugging me:
Some people on SO have suggested calculating the dot product in parts and then combining them. But even then the matrix would still be too big for RAM (9.42 TB? in this case).
And if I write it to a file, wouldn't it be too slow for the optimization algorithm to read from the file while minimizing the function?
Even if I do write it to a file, how would fmin_bfgs (or any optimization function) read from it?
Also, the Kaggle notebook shows only 1 GB of storage available. I don't think anyone would allow TBs of storage space.
In my input matrix, many rows have similar values in some columns. Can I use that to my advantage to save space (the way a sparse matrix does for zeros)?
Can anyone point me to a real-life sample implementation of such a case? Thanks!
I have tried many things. I will mention them here in case anyone needs them in the future:
I had already cleaned up the data, e.g. removing duplicates and irrelevant records, depending on the given problem.
I stored the large matrices, which hold mostly 0s, as sparse matrices.
I implemented gradient descent using the mini-batch method instead of the plain old batch method (theta.T dot X).
Now everything is working fine.
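In case it helps anyone, here is a minimal sketch of the mini-batch approach on a scipy.sparse design matrix (the sizes, learning rate, and epoch count are made-up illustrations): only one small batch of rows is ever turned into a dense product at a time.

import numpy as np
from scipy import sparse

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def minibatch_logreg(X, y, lr=0.1, epochs=5, batch_size=1024):
    n_samples, n_features = X.shape
    theta = np.zeros(n_features)
    for _ in range(epochs):
        order = np.random.permutation(n_samples)
        for start in range(0, n_samples, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]                  # small slice of the data
            preds = sigmoid(Xb.dot(theta))           # batch predictions
            grad = Xb.T.dot(preds - yb) / len(idx)   # batch gradient
            theta -= lr * grad
    return theta

# Usage with a sparse matrix (CSR supports fast row slicing):
X = sparse.random(100000, 8, density=0.3, format='csr')
y = np.random.randint(0, 2, 100000)
theta = minibatch_logreg(X, y)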
I have a program which creates an array:
List1 = numpy.zeros((x, y), dtype=numpy.complex128)
Currently I am using x = 500 and y = 1000000.
I will initialize the first column of the array with some formula. The subsequent columns will then calculate their values based on the preceding column.
After the array is completely filled, I will display this multidimensional array using imshow().
The size of each value (item) in the array is 24 bytes.
A sample value from the code is: 4.63829355451e-32
When I run the code with y = 10000000, it takes up too much RAM and the system stops the run. How do I solve this problem? Is there a way to save RAM while still being able to process the array and display it with imshow()? Also, how large an array can imshow() display?
There's no way to solve this problem (in any general way).
Computers (as commonly understood) have a limited amount of RAM, and they require elements to be in RAM in order to operate on them.
A complex128 array of shape 10000000 x 500 requires around 74.5 GiB to store (10,000,000 × 500 × 16 bytes). You'll need to somehow reduce the amount of data you're processing if you hope to use a regular computer to do it (as opposed to a supercomputer).
A common technique is partitioning your data and processing each partition individually (possibly on multiple computers). Depending on the problem you're trying to solve, there may be special data structures that you can use to reduce the amount of memory needed to represent the data - a good example is a sparse matrix.
It's very unusual to need this much memory, so make sure to carefully consider whether it's actually needed before you delve into the extremely complex workarounds.
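If the computation really does need all the columns, one common workaround is to keep the array on disk with np.memmap and only pull a downsampled slice into RAM for plotting. A minimal sketch (the file name and the update rule are placeholders; the long axis is stored first so each step writes a contiguous row):

import numpy as np
import matplotlib.pyplot as plt

x, y = 500, 10_000_000
# Row j of this on-disk array plays the role of "column j" in the question.
data = np.memmap('state.dat', dtype=np.complex128, mode='w+', shape=(y, x))

data[0] = np.linspace(0, 1, x)            # initialise the first column
for j in range(1, y):
    data[j] = data[j - 1] * 0.999         # placeholder recurrence

# imshow cannot usefully show 10^7 columns anyway, so downsample first.
step = y // 2000
plt.imshow(np.abs(data[::step]).T, aspect='auto')
plt.show()

The file is still about 80 GB, so this trades RAM for disk space and speed; it does not make the data any smaller.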
Hi there, I am writing data acquisition and analysis software for a physical measurement setup in Python. In the process I gather massive amounts of data points (easily on the order of 1,000,000 or more) which I subsequently analyze. So far I am using arrays of float numbers, which in principle do the job.
However, I am getting strange effects on the acquired data as I use more and more data points per measurement, which makes me wonder whether the handling of the arrays is so inefficient that writing into them introduces a significant time delay in the data acquisition loop.
Is that a possibility? Do you have any suggestions for how to improve the handling time of the writing process (it is a matter of microseconds), or is that not a plausible influence and I need to look somewhere else?
Thanks in advance!
Do you mean lists? You can use NumPy to handle numerical arrays efficiently.
From the NumPy website:
First of all, they are great for performing calculation relying
heavily on mathematical and numerical operations. They can work
natively with matrices and arrays, perform operations on them, find
eigenvectors, compute integrals, solve differential equations.
NumPy’s array class (which is used to implement the matrix class) is
implemented with speed in mind, so accessing NumPy arrays is faster
than accessing Python lists. Further, NumPy implements an array
language, so that most loops are not needed.
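If the acquisition loop is currently growing a Python list (or calling numpy.append repeatedly, which copies the whole array each time), preallocating the array once is usually the first thing to try. A minimal sketch; read_sensor() is just a stand-in for whatever your acquisition call is:

import numpy as np

def read_sensor():
    # Placeholder for the real data acquisition call.
    return np.random.random()

n_points = 1_000_000
samples = np.empty(n_points, dtype=np.float64)   # allocated once, up front

for i in range(n_points):
    samples[i] = read_sensor()                   # constant-time write per sample

mean = samples.mean()                            # vectorised analysis afterwards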
I need to create about 2 million vectors with 1000 slots each (each slot just holds an integer).
What would be the best data structure for working with this amount of data? It could be that I'm over-estimating the amount of processing/memory involved.
I need to iterate over a collection of files (about 34.5 GB in total) and update the vectors each time one of the 2 million items (each corresponding to a vector) is encountered on a line.
I could easily write code for this, but I know it wouldn't be optimal enough to handle the volume of the data, which is why I'm asking you experts. :)
Best,
Georgina
You might be memory bound on your machine. On my machine, without closing other running programs,
a = numpy.zeros((1000000,1000),dtype=int)
wouldn't fit into memory. But in general, if you can break the problem up so that you don't need the entire array in memory at once, or if you can use a sparse representation, I would go with numpy (scipy for the sparse representation).
Also, you could think about storing the data on disk in HDF5 with h5py or PyTables, or in netCDF4 with netcdf4-python, and then accessing only the portions you need.
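A minimal h5py sketch of that idea (the file and dataset names are made up): the full 2,000,000 x 1000 table lives in the HDF5 file, and only the rows being updated are ever loaded into RAM.

import numpy as np
import h5py

with h5py.File('vectors.h5', 'w') as f:
    vecs = f.create_dataset('vectors', shape=(2_000_000, 1000),
                            dtype='int32', chunks=(1000, 1000))

    # Update a single slot of one item's vector in place.
    item_id, slot = 123_456, 42
    vecs[item_id, slot] += 1

    # Or pull a block of rows, update it in NumPy, and write it back.
    block = vecs[0:1000, :]
    block += 1
    vecs[0:1000, :] = block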
Use a sparse matrix assuming most entries are 0.
If you need to work in RAM, try the scipy.sparse matrix variants; the module includes algorithms to manipulate sparse matrices efficiently.
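A minimal scipy.sparse sketch (the indices are arbitrary): build the matrix in a format that is cheap to update (LIL or DOK) while scanning the files, then convert to CSR for fast slicing and arithmetic.

import numpy as np
from scipy import sparse

counts = sparse.lil_matrix((2_000_000, 1000), dtype=np.int32)

# Increment slots as items are encountered in the files.
counts[123_456, 42] += 1
counts[7, 0] += 5

csr = counts.tocsr()            # efficient row slicing and matrix products
row = csr[123_456].toarray()    # dense copy of a single item's vector
print(row.shape, csr.nnz)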