Numpy: random number generation - breaking loop into chunks - python

A question regarding random number generation in NumPy.
I have code that does the following:
import numpy as np

for i in range(very_big_number):
    np.random.randn(5)
    # other stuff that uses the generated random numbers
Since very_big_number can unfortunately be a very large number, I wanted to break this loop into chunks, e.g. by running the same subloop ten times:
for i in range(very_big_number // 10):
    np.random.randn(5)
    # other stuff that uses the generated random numbers
and then collate all the output together. However, I want to make sure that this division into blocks preserves the randomness of my generated numbers.
My question is: reading the NumPy documentation, or equivalently this question on Stack Overflow, I would be tempted to think that it is enough to just divide the loop and run the subloops on e.g. ten different cores at the same time. However, I would like to know whether that is correct, or whether I should set some random number seed, and if so, how.

If you simply divide the loop, the randomness of the combined output is questionable.
Instead, go for parallel processing with explicit control of the generators.
Try the Joblib library linked below, or any other parallel processing library you know:
https://pythonhosted.org/joblib/parallel.html
Joblib provides a simple helper class to write parallel for loops
using multiprocessing
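A minimal sketch of that approach (my illustration, not from the Joblib docs): give each chunk its own seeded generator, so the combined output is reproducible and does not depend on scheduling. very_big_number is the placeholder from the question, and using the chunk index as the seed is just one simple choice.

import numpy as np
from joblib import Parallel, delayed

def process_chunk(seed, chunk_size):
    rng = np.random.RandomState(seed)  # private generator for this chunk
    for _ in range(chunk_size):
        sample = rng.randn(5)
        # other stuff that uses the generated random numbers

n_chunks = 10
Parallel(n_jobs=n_chunks)(
    delayed(process_chunk)(seed, very_big_number // n_chunks)
    for seed in range(n_chunks)
)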

Related

Random seeds and multithreading in numba

I wondered whether there is any way to reproducibly draw random numbers when using parallel=True with jitted functions in Numba. I know that for single-threaded code you can set the random seed for NumPy or the standard random module within a jitted function, but that does not seem to work for multithreaded code. Maybe there is some sort of workaround one could use?
In parallel code, each worker needs its own seed, as a random number generator cannot be both efficient and thread-safe at the same time. If you want the number of threads not to have an impact on the result, then you need to split the computation into chunks and set a seed for each chunk (each computed by one thread). The seed chosen for a given chunk can be, for example, the chunk ID.
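A minimal sketch of that suggestion, assuming Numba's documented support for np.random.seed inside jitted code (each thread keeps its own generator state, so seeding per chunk makes the result independent of the thread count):

import numpy as np
from numba import njit, prange

@njit(parallel=True)
def draw_reproducibly(n_chunks, chunk_size):
    out = np.empty((n_chunks, chunk_size))
    for i in prange(n_chunks):
        np.random.seed(i)  # the chunk ID as the seed, as suggested above
        for j in range(chunk_size):
            out[i, j] = np.random.random()
    return out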

python random number why isn't it same on all computers

One friend that I don't have any communication with anymore once told me the following:
Using that library (Python's random) you can select a seed. If you give it a seed, it means the randomly generated number will always be the same, no matter what computer you run it on.
So I tried to test this, because this is exactly what I need: the numbers must be the same on all computers and every time someone calls this (this is important, as I am working on a blockchain NFT and trust is important here).
So I found this: https://machinelearningmastery.com/how-to-generate-random-numbers-in-python/
That page includes this example:
from random import seed
from random import random
# seed random number generator
seed(1)
# generate some random numbers
print(random(), random(), random())
# reset the seed
seed(1)
# generate some random numbers
print(random(), random(), random())
Running the above in a Python playground, I get:
(0.417022004703, 0.720324493442, 0.000114374817345)
(0.417022004703, 0.720324493442, 0.000114374817345)
But as you can see, on that website, the creator of that post got the following:
0.13436424411240122 0.8474337369372327 0.763774618976614
0.13436424411240122 0.8474337369372327 0.763774618976614
Why aren't they the same on all computers then? I am using the same seed. And how can I ensure that they will be the same?
The ANSWER is that, even though you have tagged Python-3.x, you are actually using Python 2, and the random number algorithm changed between 2 and 3.
I can tell that because your print statement printed the values as a tuple, with parentheses. That wouldn't happen with Python 3's print function.
It's odd to rely on a particular implementation of a random number algorithm for financial purposes. If you really need reproducibility, then you should embed your own algorithm. There are several RNGs that are not difficult to code. But if the algorithm needs to be predictable, why not just use incrementing numbers? If you don't need randomness, then don't use randomness.
I get the post author's values both in Python 2.3.0 (over 18 years old) and in Python 3.10 (the newest, just a month old).
I get your values if I use numpy.random instead of Python's random.
So I suspect you're not telling the truth about your code, or that the "playground of python" you're using is weird.
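A quick way to check which generator produced which values (the constants in the comments are the well-known first draws for seed 1):

import random
import numpy as np

random.seed(1)
print(random.random())     # 0.13436424411240122 - Python's random, 2.x and 3.x alike

np.random.seed(1)
print(np.random.random())  # 0.417022004702574 - NumPy's legacy generator, matching the question's output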

Difference between Numpy's random module and Python? [duplicate]

I have a big script in Python. I took inspiration from other people's code, so I ended up using the numpy.random module for some things (for example, creating an array of random numbers drawn from a binomial distribution) and the built-in random module in other places.
Can someone please tell me the major differences between the two?
Looking at the doc webpage for each of the two it seems to me that numpy.random just has more methods, but I am unclear about how the generation of the random numbers is different.
The reason I am asking is that I need to seed my main program for debugging purposes, but it doesn't work unless I use the same random number generator in all the modules I am importing. Is this correct?
Also, I read here, in another post, a discussion about NOT using numpy.random.seed(), but I didn't really understand why this was such a bad idea. I would really appreciate it if someone explained why this is the case.
You have made many correct observations already!
Unless you'd like to seed both of the random generators, it's probably simpler in the long run to choose one generator or the other. But if you do need to use both, then yes, you'll also need to seed them both, because they generate random numbers independently of each other.
For numpy.random.seed(), the main difficulty is that it is not thread-safe - that is, it's not safe to use if you have many different threads of execution, because it's not guaranteed to work if two different threads are executing the function at the same time. If you're not using threads, and if you can reasonably expect that you won't need to rewrite your program this way in the future, numpy.random.seed() should be fine. If there's any reason to suspect that you may need threads in the future, it's much safer in the long run to do as suggested and make a local instance of the numpy.random.RandomState class, as sketched below. As far as I can tell, random.seed() is thread-safe (or at least, I haven't found any evidence to the contrary).
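For illustration, a local instance along those lines (a rough sketch; np.random.default_rng needs NumPy >= 1.17, while RandomState also works on older versions):

import numpy as np

rng = np.random.RandomState(42)   # private state, unaffected by np.random.seed elsewhere
print(rng.binomial(n=10, p=0.5, size=5))

rng2 = np.random.default_rng(42)  # the modern Generator equivalent
print(rng2.binomial(n=10, p=0.5, size=5))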
The numpy.random library contains a few extra probability distributions commonly used in scientific research, as well as a couple of convenience functions for generating arrays of random data. The built-in random library is a little more lightweight and should be fine if you're not doing scientific research or other kinds of work in statistics.
Otherwise, they both use the Mersenne Twister sequence to generate their random numbers, and they're both completely deterministic - that is, if you know a few key bits of information, it's possible to predict with absolute certainty what number will come next. For this reason, neither numpy.random nor the built-in random is suitable for any serious cryptographic use. But because the sequence is so very, very long, both are fine for generating random numbers in cases where you aren't worried about people trying to reverse-engineer your data. This is also the reason for the necessity to seed the random value - if you start in the same place each time, you'll always get the same sequence of random numbers!
As a side note, if you do need cryptographic level randomness, you should use the secrets module, or something like Crypto.Random if you're using a Python version earlier than Python 3.6.
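For example, a few standard secrets calls (shown for completeness; they draw from the operating system's entropy source rather than a seedable generator):

import secrets

token = secrets.token_hex(16)        # 32 hex characters of cryptographic randomness
winner = secrets.choice(["a", "b"])  # uniform choice from a sequence
n = secrets.randbelow(10**6)         # uniform integer in [0, 10**6)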
From Python for Data Analysis: the numpy.random module supplements the built-in Python random with functions for efficiently generating whole arrays of sample values from many kinds of probability distributions.
By contrast, Python's built-in random module only samples one value at a time, while numpy.random can generate a very large sample much faster. Using the IPython magic function %timeit, one can see which module performs faster:
In [1]: import numpy as np
In [2]: from random import normalvariate
In [3]: N = 1000000
In [4]: %timeit samples = [normalvariate(0, 1) for _ in range(N)]
1 loop, best of 3: 963 ms per loop
In [5]: %timeit np.random.normal(size=N)
10 loops, best of 3: 38.5 ms per loop
The source of the seed and the distribution profile used are going to affect the outputs - if you are looking for cryptographic randomness, seeding from os.urandom() will get nearly true random bytes from device chatter (i.e. Ethernet or disk activity; e.g. /dev/random on BSD).
This avoids giving a seed yourself and so generating deterministic random numbers. The random calls then allow you to fit the numbers to a distribution (what I call scientific randomness - eventually all you want is a bell-curve distribution of random numbers, and numpy is best at delivering this).
So yes, stick with one generator, but decide what kind of random you want: random but definitely from a distribution curve, or as random as you can get without a quantum device.
It surprised me that the randint(a, b) method exists in both numpy.random and random, but with different behavior for the upper bound.
random.randint(a, b) returns a random integer N such that a <= N <= b; it is an alias for randrange(a, b+1), so b is inclusive (random documentation).
However, if you call numpy.random.randint(a, b), it returns values from low (inclusive) to high (exclusive) (NumPy documentation).
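A small demonstration of the off-by-one difference (the exact values drawn vary; only the possible ranges matter):

import random
import numpy as np

print(random.randint(1, 3))     # may return 1, 2 or 3: upper bound inclusive
print(np.random.randint(1, 3))  # may return only 1 or 2: upper bound exclusive
print(np.random.randint(1, 4))  # the NumPy equivalent of random.randint(1, 3)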

Bulletproof seeding of random generators to ensure computational reproducibility in Python

My intention is to create a guideline on how to do reproducible computations in Python (if possible, regardless of the environment, the operating system, etc.). However, the issue of generating random numbers keeps coming back to my mind, and I am struggling to find a bulletproof way (if there is one).
The standard way to make the output of random generators reproducible is to use
import random
random.seed()
As far as I know, the automatic choice of the seed is system-dependent. (See random.seed's documentation in Python.)
A better way is therefore to use a specific number to seed the generator:
import random
random.seed(0)
However, there are libraries that do not use the built-in random but rather numpy.random. Therefore you also need to seed NumPy's generator:
import numpy
numpy.random.seed(0)
The built-in random module works as a singleton, and I suppose numpy.random works the same way: you set the seed once and it is then used everywhere.
I would like to create a code snippet which you could use at the beginning of your code and which would ensure the computational reproducibility in terms of random generators.
Is there any better way than combining both generators and setting both seeds, ideally while keeping reproducibility across operating systems?
And are you familiar with any widely used pseudo-random generators that should be seeded alongside the built-in random and NumPy generators in order to make the snippet as general as possible?
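Absent a bulletproof answer, one starting point is a small helper covering just the two generators discussed above (a hypothetical snippet, not a complete solution; any library with its own generator state, e.g. torch, would need its own seed call added):

import random
import numpy as np

def seed_everything(seed=0):
    # Seed both generators discussed above; extend for other libraries as needed.
    random.seed(seed)
    np.random.seed(seed)

seed_everything(0)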

Numpy random numbers - all in one go or call repeatedly?

I need to compute something like 10^8 uniformly distributed numbers in [0,1) for a Monte Carlo simulation. I can see two approaches to get these - compute all random numbers I need in one go, e.g. by using
numpy.random.random_sample(however_many_I_need)
or repeatedly call
numpy.random.random_sample()
Is there any difference in speed or quality of the random numbers between the two approaches?
Why not time them and see?
import timeit

timeit.timeit("np.random.random_sample()",
              setup="import numpy as np",
              number=int(1E8))
14.27977508635054

timeit.timeit("np.random.random_sample(int(1E8))",
              setup="import numpy as np",
              number=1)
1.4685100695671025
As to quality, one result will be just as pseudo-random as the other - they come from the same generator. If you need something more secure, it might be worth looking elsewhere, but if this is for a simple Monte Carlo problem I don't think you really need to.
PS timeit is great
10^8 is a large number. As with all things numpy, this will be much faster if you pre-generate the numbers in one go, since you avoid the Python function-call overhead. The same applies to the other operations you may want to do: additions, subtractions, multiplications, divisions, exponentiation, filtering, and lots of others.
On the other hand, it doesn't help much if you pre-generate the numbers and then proceed to access each one individually from Python. Make sure you can complete the simulation using matrix/vector operations, as in the sketch below.
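As a toy illustration of staying vectorized (a sketch estimating pi, with n reduced from 10^8 so the intermediate arrays fit comfortably in memory):

import numpy as np

n = 10**7
xy = np.random.random_sample((n, 2))   # all samples generated in one go
inside = (xy * xy).sum(axis=1) <= 1.0  # vectorized test, no per-sample Python loop
print(4 * inside.mean())               # approaches pi as n grows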
As for the quality, there's no difference between the two methods you mention. If you need cryptographically secure random numbers, you should check #MayurPatel's answer; that is only needed if the random sequence must be difficult for an attacker to guess. For a Monte Carlo simulation you're probably more interested in statistical soundness, and numpy's random is enough.
Would you consider using os.urandom()? I believe this is the highest quality you will get natively in python; but it may not be as fast as some other methods.
