How can I generate random integers between 0 and 9 (inclusive) in Python?
For example, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Try random.randrange:
from random import randrange
print(randrange(10))
Try random.randint:
import random
print(random.randint(0, 9))
Docs state:
random.randint(a, b)
Return a random integer N such that a <= N <= b.
Try this:
from random import randrange, uniform
# randrange gives you an integral value
irand = randrange(0, 10)
# uniform gives you a floating-point value
frand = uniform(0, 10)
from random import randint
x = [randint(0, 9) for p in range(0, 10)]
This generates 10 pseudorandom integers in the range 0 to 9 inclusive.
The secrets module was added in Python 3.6. It is better suited than the random module for cryptography or security uses.
To print a random integer in the inclusive range 0-9:
from secrets import randbelow
print(randbelow(10))
For details, see PEP 506.
Note that the right choice really depends on the use case: with the random module you can set a random seed, which is useful for pseudorandom but reproducible results, and this is not possible with the secrets module.
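For example, seeding the random module makes the sequence repeatable (a quick illustration; the seed value 42 is arbitrary):
import random

random.seed(42)                                    # fix the generator state
first = [random.randint(0, 9) for _ in range(5)]
random.seed(42)                                    # reset to the same state
second = [random.randint(0, 9) for _ in range(5)]
print(first == second)                             # True: the two runs are identical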
The random module is also faster (tested on Python 3.9):
>>> timeit.timeit("random.randrange(10)", setup="import random")
0.4920286529999771
>>> timeit.timeit("secrets.randbelow(10)", setup="import secrets")
2.0670733770000425
I would try one of the following:
1.> numpy.random.randint
import numpy as np
X1 = np.random.randint(low=0, high=10, size=(15,))
print(X1)
>>> array([3, 0, 9, 0, 5, 7, 6, 9, 6, 7, 9, 6, 6, 9, 8])
2.> numpy.random.uniform
import numpy as np
X2 = np.random.uniform(low=0, high=10, size=(15,)).astype(int)
print(X2)
>>> array([8, 3, 6, 9, 1, 0, 3, 6, 3, 3, 1, 2, 4, 0, 4])
3.> numpy.random.choice
import numpy as np
X3 = np.random.choice(a=10, size=15)
print(X3)
>>> array([1, 4, 0, 2, 5, 2, 7, 5, 0, 0, 8, 4, 4, 0, 9])
4.> random.randrange
from random import randrange
X4 = [randrange(10) for i in range(15)]
print(X4)
>>> [2, 1, 4, 1, 2, 8, 8, 6, 4, 1, 0, 5, 8, 3, 5]
5.> random.randint
from random import randint
X5 = [randint(0, 9) for i in range(0, 15)]
print(X5)
>>> [6, 2, 6, 9, 5, 3, 2, 3, 3, 4, 4, 7, 4, 9, 6]
Speed:
► np.random.uniform and np.random.randint are much faster (~10 times faster) than np.random.choice, random.randrange, and random.randint.
%timeit np.random.randint(low=0, high=10, size=(15,))
>> 1.64 µs ± 7.83 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit np.random.uniform(low=0, high=10, size=(15,)).astype(int)
>> 2.15 µs ± 38.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit np.random.choice(a=10, size=15 )
>> 21 µs ± 629 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%timeit [randrange(10) for i in range(15)]
>> 12.9 µs ± 60.4 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%timeit [randint(0, 9) for i in range(0, 15)]
>> 20 µs ± 386 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Notes:
1.> np.random.randint generates random integers over the half-open interval [low, high).
2.> np.random.uniform generates uniformly distributed numbers over the half-open interval [low, high).
3.> np.random.choice generates a random sample as if the argument a were np.arange(a), i.e. from the half-open interval [0, a).
4.> random.randrange(stop) generates a random number from range(start, stop, step).
5.> random.randint(a, b) returns a random integer N such that a <= N <= b.
6.> astype(int) casts the numpy array to int data type.
7.> I have chosen size = (15,). This will give you a numpy array of length = 15.
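To make the inclusive vs. half-open endpoints in notes 1 and 5 concrete, here is a small check (the maximum over a large sample shows which endpoint is actually reachable; with 100,000 draws the values in the comments appear with overwhelming probability):
import numpy as np
from random import randint

print(max(randint(0, 10) for _ in range(100000)))            # 10: randint includes b = 10
print(np.random.randint(low=0, high=10, size=100000).max())  # 9: high = 10 is excluded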
Choose the size of the array (in this example, I have chosen the size to be 20). And then, use the following:
import numpy as np
np.random.randint(10, size=(1, 20))
The output will have the following form (different random integers are returned on each run, so the values will differ from the example below).
array([[1, 6, 1, 2, 8, 6, 3, 3, 2, 5, 6, 5, 0, 9, 5, 6, 4, 5, 9, 3]])
While many posts demonstrate how to get one random integer, the original question asks how to generate random integers (plural):
How can I generate random integers between 0 and 9 (inclusive) in Python?
For clarity, here we demonstrate how to get multiple random integers.
Given
>>> import random
lo = 0
hi = 10
size = 5
Code
Multiple, Random Integers
# A
>>> [lo + int(random.random() * (hi - lo)) for _ in range(size)]
[5, 6, 1, 3, 0]
# B
>>> [random.randint(lo, hi) for _ in range(size)]
[9, 7, 0, 7, 3]
# C
>>> [random.randrange(lo, hi) for _ in range(size)]
[8, 3, 6, 8, 7]
# D
>>> lst = list(range(lo, hi))
>>> random.shuffle(lst)
>>> [lst[i] for i in range(size)]
[6, 8, 2, 5, 1]
# E
>>> [random.choice(range(lo, hi)) for _ in range(size)]
[2, 1, 6, 9, 5]
Sample of Random Integers
# F
>>> random.choices(range(lo, hi), k=size)
[3, 2, 0, 8, 2]
# G
>>> random.sample(range(lo, hi), k=size)
[4, 5, 1, 2, 3]
Details
Some posts demonstrate how to natively generate multiple random integers.[1] Here are some options that address the implied question:
A: random.random returns a random float in the range [0.0, 1.0)
B: random.randint returns a random integer N such that a <= N <= b
C: random.randrange returns a random integer from range(start, stop[, step]); randint(a, b) is an alias for randrange(a, b+1)
D: random.shuffle shuffles a sequence in place
E: random.choice returns a random element from the non-empty sequence
F: random.choices returns k selections from a population (with replacement, Python 3.6+)
G: random.sample returns k unique selections from a population (without replacement)[2]
See also R. Hettinger's talk on Chunking and Aliasing using examples from the random module.
Here is a comparison of some random functions in the Standard Library and Numpy:
| | random | numpy.random |
|-|-----------------------|----------------------------------|
|A| random() | random() |
|B| randint(low, high) | randint(low, high) |
|C| randrange(low, high) | randint(low, high) |
|D| shuffle(seq) | shuffle(seq) |
|E| choice(seq) | choice(seq) |
|F| choices(seq, k) | choice(seq, size) |
|G| sample(seq, k) | choice(seq, size, replace=False) |
You can also quickly convert one of many distributions in NumPy to a sample of random integers.[3]
Examples
>>> np.random.normal(loc=5, scale=10, size=size).astype(int)
array([17, 10, 3, 1, 16])
>>> np.random.poisson(lam=1, size=size).astype(int)
array([1, 3, 0, 2, 0])
>>> np.random.lognormal(mean=0.0, sigma=1.0, size=size).astype(int)
array([1, 3, 1, 5, 1])
[1] Namely @John Lawrence Aspden, @S T Mohammed, @SiddTheKid, @user14372, @zangw, et al.
[2] @prashanth mentions this module showing one integer.
[3] Demonstrated by @Siddharth Satpathy
You need the random Python module, which is part of the standard library.
Use the code...
from random import randint
num1 = randint(0, 9)
This will set the variable num1 to a random number between 0 and 9 inclusive.
Try this through random.shuffle (note that in Python 3, range() does not return a list, so convert it first):
>>> import random
>>> nums = list(range(10))
>>> random.shuffle(nums)
>>> nums
[6, 3, 5, 4, 0, 1, 2, 9, 8, 7]
For a contiguous range of numbers, randint or randrange are probably the best choices, but if you have several distinct values in a sequence (i.e. a list) you could also use choice:
>>> import random
>>> values = list(range(10))
>>> random.choice(values)
5
choice also works for picking one item from a non-contiguous sample:
>>> values = [1, 2, 3, 5, 7, 10]
>>> random.choice(values)
7
If you need it "cryptographically strong", there's also secrets.choice in Python 3.6 and newer:
>>> import secrets
>>> values = list(range(10))
>>> secrets.choice(values)
2
If you want to use NumPy, then use the following:
import numpy as np
print(np.random.randint(0,10))
>>> import random
>>> random.randrange(10)
3
>>> random.randrange(10)
1
To get a list of ten samples:
>>> [random.randrange(10) for x in range(10)]
[9, 0, 4, 0, 5, 7, 4, 3, 6, 8]
You can import the random module and have it choose one of the ten numbers. It's really basic.
import random
numbers = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
You can store the chosen value in a variable if you're going to use it later; otherwise the print function works as shown:
choice = random.choice(numbers)
print(choice)
Generating random integers between 0 and 9.
import numpy
X = numpy.random.randint(0, 10, size=10)
print(X)
Output:
[4 8 0 4 9 6 9 9 0 7]
One way is to use the random module's sample function (note that this returns each of the ten digits exactly once, in random order):
import random
print(random.sample(range(10), 10))
or, without any library import:
n = {}
for i in range(10):
    n[i] = i
for p in range(10):
    print(n.popitem()[1])
Here popitem() removes and returns an item from the dictionary n. Note, though, that since Python 3.7 dictionaries preserve insertion order and popitem() pops in LIFO order, so this prints the digits in a fixed reversed order rather than a random one.
random.sample is another function that can be used:
import random
n = 1 # specify the no. of numbers
num = random.sample(range(10), n)
num[0] # is the required number
This is more of a mathematical approach, but it works every time:
Let's say you want to use the random.random() function to generate a number between a and b. To achieve this, just do the following:
num = (b - a) * random.random() + a
Note that this produces a float in the half-open interval [a, b); wrap it in int() if you need an integer. Of course, you can generate more numbers the same way.
From the documentation page for the random module:
Warning: The pseudo-random generators of this module should not be
used for security purposes. Use os.urandom() or SystemRandom if you
require a cryptographically secure pseudo-random number generator.
random.SystemRandom, which was introduced in Python 2.4, is considered cryptographically secure. It is still available in Python 3.7.1 which is current at time of writing.
>>> import string
>>> string.digits
'0123456789'
>>> import random
>>> random.SystemRandom().choice(string.digits)
'8'
>>> random.SystemRandom().choice(string.digits)
'1'
>>> random.SystemRandom().choice(string.digits)
'8'
>>> random.SystemRandom().choice(string.digits)
'5'
Instead of string.digits, range could be used per some of the other answers along perhaps with a comprehension. Mix and match according to your needs.
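For instance, a rough sketch of that mix-and-match (same idea, using randrange on the system RNG via a comprehension):
import random

rng = random.SystemRandom()
digits = [rng.randrange(10) for _ in range(5)]  # five OS-entropy-backed digits in 0-9
print(digits)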
I thought I'd add to these answers with quantumrand, which uses ANU's quantum random number generator. Unfortunately this requires an internet connection, but if you're concerned with "how random" the numbers are, then this could be useful.
https://pypi.org/project/quantumrand/
Example
import quantumrand
number = quantumrand.randint(0, 9)
print(number)
Output: 4
The docs have a lot of different examples including dice rolls and a list picker.
I had better luck with this for Python 3.6:
import random

str_RandomKey = ""
for int_I in range(128):
    str_Key = random.choice('0123456789')
    str_RandomKey = str_RandomKey + str_Key
Just add characters like 'ABCD', 'abcd', or '^!~=-><' to alter the character pool to pull from, and change the range to alter the number of characters generated.
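For instance, a purely illustrative variant (the pool string and the key length here are arbitrary choices, not anything required by the approach):
import random

pool = '0123456789ABCDEFabcdef^!~=-><'                  # illustrative character pool
key = ''.join(random.choice(pool) for _ in range(32))   # 32 characters instead of 128
print(key)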
OpenTURNS allows you not only to simulate the random integers but also to define the associated distribution with the UserDefined class.
The following simulates 12 outcomes of the distribution.
import openturns as ot
points = [[i] for i in range(10)]
distribution = ot.UserDefined(points) # By default, with equal weights.
for i in range(12):
    x = distribution.getRealization()
    print(i, x)
This prints:
0 [8]
1 [7]
2 [4]
3 [7]
4 [3]
5 [3]
6 [2]
7 [9]
8 [0]
9 [5]
10 [9]
11 [6]
The brackets are there because x is a 1-dimensional Point.
It would be easier to generate the 12 outcomes in a single call to getSample:
sample = distribution.getSample(12)
would produce:
>>> print(sample)
[ v0 ]
0 : [ 3 ]
1 : [ 9 ]
2 : [ 6 ]
3 : [ 3 ]
4 : [ 2 ]
5 : [ 6 ]
6 : [ 9 ]
7 : [ 5 ]
8 : [ 9 ]
9 : [ 5 ]
10 : [ 3 ]
11 : [ 2 ]
More details on this topic are here: http://openturns.github.io/openturns/master/user_manual/_generated/openturns.UserDefined.html
Does the range function allow concatenation? For example, I want to make a range(30) and concatenate it with range(2000, 5002), so my concatenated range would be 0, 1, 2, ... 29, 2000, 2001, ... 5001.
Code like this does not work on my latest Python (version 3.3.0):
range(30) + range(2000, 5002)
You can use itertools.chain for this:
from itertools import chain
concatenated = chain(range(30), range(2000, 5002))
for i in concatenated:
    ...
It works for arbitrary iterables. Note that there's a difference in the behavior of range() between Python 2 and 3 that you should know about: in Python 2 range returns a list, while in Python 3 it returns a lazy range object, which is memory-efficient but not always what you want.
Lists can be concatenated with +; range objects and iterators cannot.
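If you do need a concrete list, simply wrap the chain in list() (a small illustration):
from itertools import chain

concatenated_list = list(chain(range(30), range(2000, 5002)))
print(len(concatenated_list))                         # 3032
print(concatenated_list[:3], concatenated_list[-3:])  # [0, 1, 2] [4999, 5000, 5001]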
I like the simplest solutions possible (also for efficiency), though it is not always clear which solution that is. Anyway, range() in Python 3 returns a lazy range object rather than a list. You can wrap it in any construct that does iteration; list() is capable of constructing a list from any iterable, and the + operator concatenates lists. I am using smaller values in the example:
>>> list(range(5))
[0, 1, 2, 3, 4]
>>> list(range(10, 20))
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
>>> list(range(5)) + list(range(10,20))
[0, 1, 2, 3, 4, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
This is exactly what range(5) + range(10, 20) did in Python 2.5 -- because range() returned a list.
In Python 3, it is only useful if you really want to construct the list. Otherwise, I recommend Lev Levitsky's solution with itertools.chain. The documentation also shows a very straightforward implementation:
def chain(*iterables):
    # chain('ABC', 'DEF') --> A B C D E F
    for it in iterables:
        for element in it:
            yield element
The solution by Inbar Rose is fine and functionally equivalent. Anyway, my +1 goes to Lev Levitsky and to his argument about using the standard libraries. From The Zen of Python...
In the face of ambiguity, refuse the temptation to guess.
#!python3
import timeit
number = 10000
t = timeit.timeit('''\
for i in itertools.chain(range(30), range(2000, 5002)):
    pass
''',
'import itertools', number=number)
print('itertools:', t/number * 1000000, 'microsec/one execution')

t = timeit.timeit('''\
for x in (i for j in (range(30), range(2000, 5002)) for i in j):
    pass
''', number=number)
print('generator expression:', t/number * 1000000, 'microsec/one execution')
In my opinion, the itertools.chain is more readable. But what really is important...
itertools: 264.4522138986938 microsec/one execution
generator expression: 785.3081048010291 microsec/one execution
... it is about 3 times faster.
Python >= 3.5
You can use iterable unpacking in lists (see PEP 448: Additional Unpacking Generalizations).
If you need a list,
[*range(2, 5), *range(3, 7)]
# [2, 3, 4, 3, 4, 5, 6]
This preserves order and does not remove duplicates. Or, you might want a tuple,
(*range(2, 5), *range(3, 7))
# (2, 3, 4, 3, 4, 5, 6)
... or a set,
# note that this drops duplicates
{*range(2, 5), *range(3, 7)}
# {2, 3, 4, 5, 6}
It also happens to be faster than calling itertools.chain.
from itertools import chain
%timeit list(chain(range(10000), range(5000, 20000)))
%timeit [*range(10000), *range(5000, 20000)]
738 µs ± 10.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
665 µs ± 13.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
The benefit of chain, however, is that you can pass an arbitrary list of ranges.
ranges = [range(2, 5), range(3, 7), ...]
flat = list(chain.from_iterable(ranges))
OTOH, unpacking generalisations haven't been "generalised" to arbitrary sequences, so you will still need to unpack the individual ranges yourself.
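If you would rather avoid chain for that arbitrary-list case, a nested comprehension (a small sketch) flattens a list of ranges as well:
ranges = [range(2, 5), range(3, 7)]
flat = [x for r in ranges for x in r]
# [2, 3, 4, 3, 4, 5, 6]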
This can be done using a list comprehension.
>>> [i for j in (range(10), range(15, 20)) for i in j]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 15, 16, 17, 18, 19]
This works for your request as well; the resulting list is long, so it is not posted here.
Note: it can be made into a generator for increased performance:
for x in (i for j in (range(30), range(2000, 5002)) for i in j):
    # code
or even into a generator variable:
gen = (i for j in (range(30), range(2000, 5002)) for i in j)
for x in gen:
    # code
With the help of the extend method, we can concatenate two lists.
>>> a = list(range(1,10))
>>> a.extend(range(100,105))
>>> a
[1, 2, 3, 4, 5, 6, 7, 8, 9, 100, 101, 102, 103, 104]
range() in Python 2.x returns a list:
>>> range(10)
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
xrange() in Python 2.x returns a lazy xrange object:
>>> xrange(10)
xrange(10)
And in Python 3, range() returns a lazy range object, from which you can get an iterator:
>>> r = range(10)
>>> iterator = r.__iter__()
>>> iterator.__next__()
0
>>> iterator.__next__()
1
>>> iterator.__next__()
2
So it is clear that you cannot concatenate range objects or iterators with +; you have to use chain(), as the other answer pointed out.
You can also wrap each range() call in list() to make real lists and concatenate those, like this:
list(range(3, 7)) + list(range(2, 9))
I came to this question because I was trying to concatenate an unknown number of ranges that might overlap, and I didn't want repeated values in the final result. My solution was to use set and the union operator, like so (note that sets are unordered, so the iteration order is not guaranteed):
range1 = range(1, 4)
range2 = range(2, 6)
concatenated = set.union(set(range1), set(range2))
for i in concatenated:
    print(i)
I am trying to get m values while stepping through every n elements of an array. For example, for m = 2 and n = 5, and given
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
I want to retrieve
b = [1, 2, 6, 7]
Is there a way to do this using slicing? I can do this using a nested list comprehension, but I was wondering if there was a way to do this using the indices only. For reference, the list comprehension way is:
b = [k for j in [a[i:i+2] for i in range(0,len(a),5)] for k in j]
I agree with wim that you can't do it with just slicing. But you can do it with just one list comprehension:
>>> [x for i,x in enumerate(a) if i%n < m]
[1, 2, 6, 7]
No, that is not possible with slicing. Slicing only supports start, stop, and step - there is no way to represent stepping with "groups" of size larger than 1.
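For illustration, a slice object carries only those three fields, so there is nowhere to encode a "take m, then skip n - m" pattern:
s = slice(0, None, 5)
print(s.start, s.stop, s.step)  # 0 None 5 -- start, stop and step are all a slice can express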
In short, no, you cannot. But you can use itertools to remove the need for intermediary lists:
from itertools import chain, islice
res = list(chain.from_iterable(islice(a, i, i+2) for i in range(0, len(a), 5)))
print(res)
[1, 2, 6, 7]
Borrowing @Kevin's logic, if you want a vectorised solution to avoid a for loop, you can use the third-party library NumPy:
import numpy as np
m, n = 2, 5
a = np.array(a) # convert to numpy array
res = a[np.where(np.arange(a.shape[0]) % n < m)]
There are other ways to do it, which all have advantages for some cases, but none are "just slicing".
The most general solution is probably to group your input, slice the groups, then flatten the slices back out. One advantage of this solution is that you can do it lazily, without building big intermediate lists, and you can do it to any iterable, including a lazy iterator, not just a list.
# from itertools recipes in the docs
import itertools

def grouper(iterable, n, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return itertools.zip_longest(*args, fillvalue=fillvalue)
groups = grouper(a, 5)
truncated = (group[:2] for group in groups)
b = [elem for group in truncated for elem in group]
And you can convert that into a pretty simple one-liner, although you still need the grouper function:
b = [elem for group in grouper(a, 5) for elem in group[:2]]
Another option is to build a list of indices, and use itemgetter to grab all the values. This might be more readable for a more complicated function than just "the first 2 of every 5", but it's probably less readable for something as simple as your use:
import operator

indices = [i for i in range(len(a)) if i % 5 < 2]
b = operator.itemgetter(*indices)(a)
… which can be turned into a one-liner:
b = operator.itemgetter(*[i for i in range(len(a)) if i%5 < 2])(a)
And you can combine the advantages of the two approaches by writing your own version of itemgetter that takes a lazy index iterator—which I won't show, because you can go even better by writing one that takes an index filter function instead:
def indexfilter(pred, a):
    return [elem for i, elem in enumerate(a) if pred(i)]

b = indexfilter((lambda i: i % 5 < 2), a)
(To make indexfilter lazy, just replace the brackets with parens.)
… or, as a one-liner:
b = [elem for i, elem in enumerate(a) if i%5<2]
I think this last one might be the most readable. And it works with any iterable rather than just lists, and it can be made lazy (again, just replace the brackets with parens). But I still don't think it's simpler than your original comprehension, and it's not just slicing.
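As a rough sketch of that lazy variant (the name indexfilter_lazy is just for illustration):
def indexfilter_lazy(pred, a):
    # same filter, but yields matching elements on demand instead of building a list
    return (elem for i, elem in enumerate(a) if pred(i))

b = list(indexfilter_lazy(lambda i: i % 5 < 2, a))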
The question says array, and if we are talking about NumPy arrays, we can surely use a few obvious NumPy tricks and a few not-so-obvious ones. We can surely use slicing to get a 2D view into the input under certain conditions.
Now, based on the array length (call it l) and the values of m and n, we have three scenarios:
Scenario #1: l is divisible by n
We can use slicing and reshaping to get a view into the input array and hence get constant runtime.
Verify the view concept:
In [108]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [109]: m = 2; n = 5
In [110]: a.reshape(-1,n)[:,:m]
Out[110]:
array([[1, 2],
[6, 7]])
In [111]: np.shares_memory(a, a.reshape(-1,n)[:,:m])
Out[111]: True
Check timings on a very large array to verify the constant-runtime claim:
In [118]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [119]: %timeit a.reshape(-1,n)[:,:m]
1000000 loops, best of 3: 563 ns per loop
In [120]: a = np.arange(10000000)
In [121]: %timeit a.reshape(-1,n)[:,:m]
1000000 loops, best of 3: 564 ns per loop
To get a flattened version:
If we have to get a flattened array as output, we just need to use a flattening operation with .ravel(), like so -
In [127]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
In [128]: m = 2; n = 5
In [129]: a.reshape(-1,n)[:,:m].ravel()
Out[129]: array([1, 2, 6, 7])
Timings show that it's not too bad when compared with the other looping and vectorized numpy.where versions from other posts -
In [143]: a = np.arange(10000000)
# @Kevin's soln
In [145]: %timeit [x for i,x in enumerate(a) if i%n < m]
1 loop, best of 3: 1.23 s per loop
# @jpp's soln
In [147]: %timeit a[np.where(np.arange(a.shape[0]) % n < m)]
10 loops, best of 3: 145 ms per loop
In [144]: %timeit a.reshape(-1,n)[:,:m].ravel()
100 loops, best of 3: 16.4 ms per loop
Scenario #2: l is not divisible by n, but the groups end with a complete one
We turn to the non-obvious NumPy method np.lib.stride_tricks.as_strided, which allows us to go beyond the memory block bounds (hence we need to be careful here not to write into those) to facilitate a solution using slicing. The implementation would look something like this -
def select_groups(a, m, n):
    a = np.asarray(a)
    strided = np.lib.stride_tricks.as_strided
    # Get params defining the lengths for slicing and output array shape
    nrows = len(a)//n
    add0 = len(a)%n
    s = a.strides[0]
    out_shape = nrows+int(add0!=0), m
    # Finally stride into the input to get the 2D view
    return strided(a, shape=out_shape, strides=(s*n, s))
A sample run to verify that the output is a view -
In [151]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13])
In [152]: m = 2; n = 5
In [153]: select_groups(a, m, n)
Out[153]:
array([[ 1, 2],
[ 6, 7],
[11, 12]])
In [154]: np.shares_memory(a, select_groups(a, m, n))
Out[154]: True
To get flattened version, append with .ravel().
Let's get some timings comparison -
In [158]: a = np.arange(10000003)
In [159]: m = 2; n = 5
# @Kevin's soln
In [161]: %timeit [x for i,x in enumerate(a) if i%n < m]
1 loop, best of 3: 1.24 s per loop
# @jpp's soln
In [162]: %timeit a[np.where(np.arange(a.shape[0]) % n < m)]
10 loops, best of 3: 148 ms per loop
In [160]: %timeit select_groups(a, m=m, n=n)
100000 loops, best of 3: 5.8 µs per loop
If we need a flattened version, it's still not too bad -
In [163]: %timeit select_groups(a, m=m, n=n).ravel()
100 loops, best of 3: 16.5 ms per loop
Scenario #3: l is not divisible by n, and the groups end with an incomplete one
For this case, we would need an extra slicing at the end on top of what we had in the previous method, like so -
def select_groups_generic(a, m, n):
    a = np.asarray(a)
    strided = np.lib.stride_tricks.as_strided
    # Get params defining the lengths for slicing and output array shape
    nrows = len(a)//n
    add0 = len(a)%n
    lim = m*(nrows) + add0
    s = a.strides[0]
    out_shape = nrows+int(add0!=0), m
    # Finally stride, flatten with reshape and slice
    return strided(a, shape=out_shape, strides=(s*n, s)).reshape(-1)[:lim]
Sample run -
In [166]: a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11])
In [167]: m = 2; n = 5
In [168]: select_groups_generic(a, m, n)
Out[168]: array([ 1, 2, 6, 7, 11])
Timings -
In [170]: a = np.arange(10000001)
In [171]: m = 2; n = 5
# @Kevin's soln
In [172]: %timeit [x for i,x in enumerate(a) if i%n < m]
1 loop, best of 3: 1.23 s per loop
# @jpp's soln
In [173]: %timeit a[np.where(np.arange(a.shape[0]) % n < m)]
10 loops, best of 3: 145 ms per loop
In [174]: %timeit select_groups_generic(a, m, n)
100 loops, best of 3: 12.2 ms per loop
I realize that recursion isn't popular, but would something like this work? Also, uncertain if adding recursion to the mix counts as just using slices.
def get_elements(A, m, n):
    if len(A) < m:
        return A
    else:
        return A[:m] + get_elements(A[n:], m, n)
A is the array, and m and n are defined as in the question. The if statement covers the base case, where you have an array with length less than the number of elements you're trying to retrieve, and the else branch is the recursive case. I'm somewhat new to Python, so please forgive my poor understanding of the language if this doesn't work properly, though I tested it and it seems to work fine.
With itertools you could get an iterator with:
from itertools import compress, cycle
a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
n = 5
m = 2
# selector mask: m ones followed by (n - m) zeros, i.e. [1, 1, 0, 0, 0]
it = compress(a, cycle([1] * m + [0] * (n - m)))
res = list(it)
time_interval = [4, 6, 12]
I want to sum up the numbers like [4, 4+6, 4+6+12] in order to get the list t = [4, 10, 22].
I tried the following:
t1 = time_interval[0]
t2 = time_interval[1] + t1
t3 = time_interval[2] + t2
print(t1, t2, t3) # -> 4 10 22
If you're doing much numerical work with arrays like this, I'd suggest numpy, which comes with a cumulative sum function cumsum:
import numpy as np
a = [4,6,12]
np.cumsum(a)
#array([4, 10, 22])
NumPy is often faster than pure Python for this kind of thing; see the comparison to @Ashwini's accumu:
In [136]: timeit list(accumu(range(1000)))
10000 loops, best of 3: 161 us per loop
In [137]: timeit list(accumu(xrange(1000)))
10000 loops, best of 3: 147 us per loop
In [138]: timeit np.cumsum(np.arange(1000))
100000 loops, best of 3: 10.1 us per loop
But of course if it's the only place you'll use numpy, it might not be worth having a dependence on it.
In Python 2 you can define your own generator function like this (it works in Python 3 as well):
def accumu(lis):
    total = 0
    for x in lis:
        total += x
        yield total
In [4]: list(accumu([4,6,12]))
Out[4]: [4, 10, 22]
And in Python 3.2+ you can use itertools.accumulate():
In [1]: lis = [4,6,12]
In [2]: from itertools import accumulate
In [3]: list(accumulate(lis))
Out[3]: [4, 10, 22]
I did a bench-mark of the top two answers with Python 3.4 and I found itertools.accumulate is faster than numpy.cumsum under many circumstances, often much faster. However, as you can see from the comments, this may not always be the case, and it's difficult to exhaustively explore all options. (Feel free to add a comment or edit this post if you have further benchmark results of interest.)
Some timings...
For short lists accumulate is about 4 times faster:
from timeit import timeit
def sum1(l):
from itertools import accumulate
return list(accumulate(l))
def sum2(l):
from numpy import cumsum
return list(cumsum(l))
l = [1, 2, 3, 4, 5]
timeit(lambda: sum1(l), number=100000)
# 0.4243644131347537
timeit(lambda: sum2(l), number=100000)
# 1.7077815784141421
For longer lists accumulate is about 3 times faster:
l = [1, 2, 3, 4, 5]*1000
timeit(lambda: sum1(l), number=100000)
# 19.174508565105498
timeit(lambda: sum2(l), number=100000)
# 61.871223849244416
If the numpy array is not cast to list, accumulate is still about 2 times faster:
from timeit import timeit
def sum1(l):
from itertools import accumulate
return list(accumulate(l))
def sum2(l):
from numpy import cumsum
return cumsum(l)
l = [1, 2, 3, 4, 5]*1000
print(timeit(lambda: sum1(l), number=100000))
# 19.18597290944308
print(timeit(lambda: sum2(l), number=100000))
# 37.759664884768426
If you put the imports outside of the two functions and still return a numpy array, accumulate is still nearly 2 times faster:
from timeit import timeit
from itertools import accumulate
from numpy import cumsum
def sum1(l):
return list(accumulate(l))
def sum2(l):
return cumsum(l)
l = [1, 2, 3, 4, 5]*1000
timeit(lambda: sum1(l), number=100000)
# 19.042188624851406
timeit(lambda: sum2(l), number=100000)
# 35.17324400227517
Try the
itertools.accumulate() function.
import itertools
list(itertools.accumulate([1,2,3,4,5]))
# [1, 3, 6, 10, 15]
Behold:
from functools import reduce  # reduce is a builtin in Python 2 but must be imported in Python 3

a = [4, 6, 12]
reduce(lambda c, x: c + [c[-1] + x], a, [0])[1:]
Will output (as expected):
[4, 10, 22]
Assignment expressions from PEP 572 (new in Python 3.8) offer yet another way to solve this:
time_interval = [4, 6, 12]
total_time = 0
cum_time = [total_time := total_time + t for t in time_interval]
You can calculate the cumulative sum list in linear time with a simple for loop:
def csum(lst):
    s = lst.copy()
    for i in range(1, len(s)):
        s[i] += s[i-1]
    return s
time_interval = [4, 6, 12]
print(csum(time_interval)) # [4, 10, 22]
The standard library's itertools.accumulate may be a faster alternative (since it's implemented in C):
from itertools import accumulate
time_interval = [4, 6, 12]
print(list(accumulate(time_interval))) # [4, 10, 22]
Since Python 3.8 it's possible to use assignment expressions, so things like this become easier to implement:
nums = list(range(1, 10))
print(f'array: {nums}')
v = 0
cumsum = [v := v + n for n in nums]
print(f'cumsum: {cumsum}')
produces
array: [1, 2, 3, 4, 5, 6, 7, 8, 9]
cumsum: [1, 3, 6, 10, 15, 21, 28, 36, 45]
The same technique can be applied to find the cumulative product, mean, etc.
p = 1
cumprod = [p := p * n for n in nums]
print(f'cumprod: {cumprod}')
s = 0
c = 0
cumavg = [(s := s + n) / (c := c + 1) for n in nums]
print(f'cumavg: {cumavg}')
results in
cumprod: [1, 2, 6, 24, 120, 720, 5040, 40320, 362880]
cumavg: [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
First, you want a running list of subsequences:
subseqs = (seq[:i] for i in range(1, len(seq)+1))
Then you just call sum on each subsequence:
sums = [sum(subseq) for subseq in subseqs]
(This isn't the most efficient way to do it, because you're adding all of the prefixes repeatedly. But that probably won't matter for most use cases, and it's easier to understand if you don't have to think of the running totals.)
If you're using Python 3.2 or newer, you can use itertools.accumulate to do it for you:
sums = itertools.accumulate(seq)
And if you're using 3.1 or earlier, you can just copy the "equivalent to" source straight out of the docs (except for changing next(it) to it.next() for 2.5 and earlier).
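For reference, a simplified sketch of that pure-Python equivalent (sum-only; the version in the docs also accepts an optional function argument):
def accumulate(iterable):
    # simplified itertools.accumulate for sums only
    it = iter(iterable)
    try:
        total = next(it)
    except StopIteration:
        return
    yield total
    for element in it:
        total = total + element
        yield total

print(list(accumulate([4, 6, 12])))  # [4, 10, 22]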
If you want a Pythonic way that works in 2.7 without NumPy, this would be my way of doing it:
l = [1, 2, 3, 4]
_d = {-1: 0}
cumsum = [_d.setdefault(idx, _d[idx-1] + item) for idx, item in enumerate(l)]
Now let's try it and test it against all the other implementations:
import timeit, sys
import functools

L = list(range(10000))
if sys.version_info >= (3, 0):
    reduce = functools.reduce
    xrange = range
def sum1(l):
    cumsum = []
    total = 0
    for v in l:
        total += v
        cumsum.append(total)
    return cumsum

def sum2(l):
    import numpy as np
    return list(np.cumsum(l))

def sum3(l):
    return [sum(l[:i+1]) for i in xrange(len(l))]

def sum4(l):
    return reduce(lambda c, x: c + [c[-1] + x], l, [0])[1:]

def this_implementation(l):
    _d = {-1: 0}
    return [_d.setdefault(idx, _d[idx-1] + item) for idx, item in enumerate(l)]
# sanity check
sum1(L)==sum2(L)==sum3(L)==sum4(L)==this_implementation(L)
>>> True
# PERFORMANCE TEST
timeit.timeit('sum1(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.001018061637878418
timeit.timeit('sum2(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.000829620361328125
timeit.timeit('sum3(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.4606760001182556
timeit.timeit('sum4(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.18932826995849608
timeit.timeit('this_implementation(L)','from __main__ import sum1,sum2,sum3,sum4,this_implementation,L', number=100)/100.
>>> 0.002348129749298096
There could be many answers for this depending on the length of the list and the performance required. One very simple way, without worrying about performance, is this:
a = [1, 2, 3, 4]
a = [sum(a[0:x]) for x in range(1, len(a)+1)]
print(a)
[1, 3, 6, 10]
This uses a list comprehension and may work fairly well; it's just that it sums over the subarray many times, so you could improve on this and make it simpler!
Cheers to your endeavor!
values = [4, 6, 12]
total = 0
sums = []
for v in values:
    total = total + v
    sums.append(total)

print('Values:', values)
print('Sums:', sums)
Running this code gives
Values: [4, 6, 12]
Sums: [4, 10, 22]
Try this:
result = []
acc = 0
for i in time_interval:
    acc += i
    result.append(acc)
l = [1, -1, 3]
cum_list = l

def sum_list(input_list):
    # cum_list refers to the same list object as input_list here, so each
    # newly written cumulative value is picked up by the next iteration
    index = 1
    for i in input_list[1:]:
        cum_list[index] = i + input_list[index-1]
        index = index + 1
    return cum_list

print(sum_list(l))
In Python 3, to find the cumulative sum of a list where the ith element
is the sum of the first i+1 elements from the original list, you may do:
a = [4, 6, 12]
b = []
for i in range(0, len(a)):
    b.append(sum(a[:i+1]))
print(b)
Or you may use a list comprehension:
b = [sum(a[:x+1]) for x in range(0, len(a))]
Output
[4, 10, 22]
lst = [4, 6, 12]
[sum(lst[:i+1]) for i in xrange(len(lst))]
If you are looking for a more efficient solution (bigger lists?) a generator could be a good call (or just use numpy if you really care about performance).
def gen(lst):
    acu = 0
    for num in lst:
        yield num + acu
        acu += num

print(list(gen([4, 6, 12])))
In [42]: a = [4, 6, 12]
In [43]: [sum(a[:i+1]) for i in xrange(len(a))]
Out[43]: [4, 10, 22]
This is slightly faster than the generator method above by @Ashwini for small lists:
In [48]: %timeit list(accumu([4,6,12]))
100000 loops, best of 3: 2.63 us per loop
In [49]: %timeit [sum(a[:i+1]) for i in xrange(len(a))]
100000 loops, best of 3: 2.46 us per loop
For larger lists, the generator is the way to go for sure...
In [50]: a = range(1000)
In [51]: %timeit [sum(a[:i+1]) for i in xrange(len(a))]
100 loops, best of 3: 6.04 ms per loop
In [52]: %timeit list(accumu(a))
10000 loops, best of 3: 162 us per loop
Somewhat hacky, but seems to work:
def cumulative_sum(l):
    y = [0]
    def inc(n):
        y[0] += n
        return y[0]
    return [inc(x) for x in l]
I did think that the inner function would be able to modify the y declared in the outer lexical scope, but that didn't work, so we play some nasty hacks with structure modification instead. It is probably more elegant to use a generator.
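For what it's worth, in Python 3 the nonlocal statement lets the inner function rebind the outer variable directly, which avoids the single-element-list trick (a small sketch of that variant):
def cumulative_sum(l):
    total = 0
    def inc(n):
        nonlocal total   # rebind the enclosing variable (Python 3 only)
        total += n
        return total
    return [inc(x) for x in l]

print(cumulative_sum([4, 6, 12]))  # [4, 10, 22]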
Without having to use NumPy, you can loop directly over the list and accumulate the sum along the way. For example:
a = list(range(10))
i = 1
while 0 < i < 10:
    a[i] = a[i-1] + a[i]
    i = i + 1
print(a)
Results in:
[0, 1, 3, 6, 10, 15, 21, 28, 36, 45]
A pure python oneliner for cumulative sum:
cumsum = lambda X: X[:1] + cumsum([X[0]+X[1]] + X[2:]) if X[1:] else X
This is a recursive version inspired by recursive cumulative sums. Some explanations:
The first term, X[:1], is a list containing the first element and is almost the same as [X[0]] (which would complain for empty lists).
The recursive cumsum call in the second term processes the current element X[1] and the remaining list, whose length is reduced by one.
if X[1:] is shorthand for if len(X) > 1.
Test:
cumsum([4,6,12])
#[4, 10, 22]
cumsum([])
#[]
And similarly for the cumulative product:
cumprod = lambda X: X[:1] + cumprod([X[0]*X[1]] + X[2:]) if X[1:] else X
Test:
cumprod([4,6,12])
#[4, 24, 288]
Here's another fun solution. This takes advantage of the locals() dict of a comprehension, i.e. local variables generated inside the list comprehension scope:
>>> [locals().setdefault(i, (elem + locals().get(i-1, 0))) for i, elem
in enumerate(time_interval)]
[4, 10, 22]
Here's what locals() looks like on each iteration:
>>> [[locals().setdefault(i, (elem + locals().get(i-1, 0))), locals().copy()][1]
for i, elem in enumerate(time_interval)]
[{'.0': <enumerate at 0x21f21f7fc80>, 'i': 0, 'elem': 4, 0: 4},
{'.0': <enumerate at 0x21f21f7fc80>, 'i': 1, 'elem': 6, 0: 4, 1: 10},
{'.0': <enumerate at 0x21f21f7fc80>, 'i': 2, 'elem': 12, 0: 4, 1: 10, 2: 22}]
Performance is not terrible for small lists:
>>> %timeit list(accumulate([4, 6, 12]))
387 ns ± 7.53 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
>>> %timeit np.cumsum([4, 6, 12])
5.31 µs ± 67.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit [locals().setdefault(i, (e + locals().get(i-1,0))) for i,e in enumerate(time_interval)]
1.57 µs ± 12 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
And obviously falls flat for larger lists.
>>> l = list(range(1_000_000))
>>> %timeit list(accumulate(l))
95.1 ms ± 5.22 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit np.cumsum(l)
79.3 ms ± 1.07 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit np.cumsum(l).tolist()
120 ms ± 1.23 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
>>> %timeit [locals().setdefault(i, (e + locals().get(i-1, 0))) for i, e in enumerate(l)]
660 ms ± 5.14 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Even though the method is ugly and not practical, it sure is fun.
I think the below code is the easiest:
a = [1, 1, 2, 1, 2]
b = [a[0]] + [sum(a[0:i]) for i in range(2, len(a)+1)]
def cumulative_sum(lst):
    l = []
    for i in range(len(lst)):
        new_l = sum(lst[:i+1])
        l.append(new_l)
    return l

time_interval = [4, 6, 12]
print(cumulative_sum(time_interval))
Maybe a more beginner-friendly solution.
You need to make a list of cumulative sums; you can do it with a for loop and the .append() method:
time_interval = [4, 6, 12]
cumulative_sum = []
new_sum = 0
for i in time_interval:
    new_sum += i
    cumulative_sum.append(new_sum)
print(cumulative_sum)
or, using numpy module
import numpy
time_interval = [4, 6, 12]
c_sum = numpy.cumsum(time_interval)
print(c_sum.tolist())
This would be Haskell-style:
def wrand(vtlg):
    def helpf(lalt, lneu):
        if not lalt == []:
            return helpf(lalt[1::], [lalt[0] + lneu[0]] + lneu)
        else:
            lneu.reverse()
            return lneu[1:]
    return helpf(vtlg, [0])
Suppose I want the first element, the 3rd through 200th elements, and the 201st element through the last element by step-size 3, from a list in Python.
One way to do it is with distinct indexing and concatenation:
new_list = old_list[0:1] + old_list[3:201] + old_list[201::3]
Is there a way to do this with just one index on old_list? I would like something like the following (I know this doesn't syntactically work since list indices cannot be lists and since Python unfortunately doesn't have slice literals; I'm just looking for something close):
new_list = old_list[[0, 3:201, 201::3]]
I can achieve some of this by switching to NumPy arrays, but I'm more interested in how to do it for native Python lists. I could also create a slice maker or something like that, and possibly strong arm that into giving me an equivalent slice object to represent the composition of all my desired slices.
But I'm looking for something that doesn't involve creating a new class to manage the slices. I want to just sort of concatenate the slice syntax and feed that to my list and have the list understand that it means to separately get the slices and concatenate their respective results in the end.
A slice maker object (e.g. SliceMaker from your other question, or np.s_) can accept multiple comma-separated slices; they are received as a tuple of slices or other objects:
from numpy import s_
s_[0, 3:5, 6::3]
Out[1]: (0, slice(3, 5, None), slice(6, None, 3))
NumPy uses this for multidimensional arrays, but you can use it for slice concatenation:
def xslice(arr, slices):
    if isinstance(slices, tuple):
        return sum((arr[s] if isinstance(s, slice) else [arr[s]] for s in slices), [])
    elif isinstance(slices, slice):
        return arr[slices]
    else:
        return [arr[slices]]
xslice(list(range(10)), s_[0, 3:5, 6::3])
Out[1]: [0, 3, 4, 6, 9]
xslice(list(range(10)), s_[1])
Out[2]: [1]
xslice(list(range(10)), s_[:])
Out[3]: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
import numpy as np
a = list(range(15, 50, 3))
# %%timeit -n 10000 -> 41.1 µs ± 1.71 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
[a[index] for index in np.r_[1:3, 5:7, 9:11]]
---
[18, 21, 30, 33, 42, 45]
import numpy as np
a = np.arange(15, 50, 3).astype(np.int32)
# %%timeit -n 10000 -> 31.9 µs ± 5.68 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
a[np.r_[1:3, 5:7, 9:11]]
---
array([18, 21, 30, 33, 42, 45], dtype=int32)
import numpy as np
a = np.arange(15, 50, 3).astype(np.int32)
# %%timeit -n 10000 -> 7.17 µs ± 1.17 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
slices = np.s_[1:3, 5:7, 9:11]
np.concatenate([a[_slice] for _slice in slices])
---
array([18, 21, 30, 33, 42, 45], dtype=int32)
It seems using NumPy is a faster way.
Adding a NumPy branch to the answer from ecatmur:
import numpy as np
def xslice(x, slices):
    """Extract slices from array-like

    Args:
        x: array-like
        slices: slice or tuple of slice objects
    """
    if isinstance(slices, tuple):
        if isinstance(x, np.ndarray):
            return np.concatenate([x[_slice] for _slice in slices])
        else:
            return sum((x[s] if isinstance(s, slice) else [x[s]] for s in slices), [])
    elif isinstance(slices, slice):
        return x[slices]
    else:
        return [x[slices]]
You're probably better off writing your own sequence type.
>>> import operator
>>> L = range(20)
>>> L
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
>>> operator.itemgetter(*(range(1, 5) + range(10, 18, 3)))(L)
(1, 2, 3, 4, 10, 13, 16)
(This is a Python 2 session; in Python 3, wrap each range() in list() before concatenating.)
And to get you started on that:
>>> operator.itemgetter(*(range(*slice(1, 5).indices(len(L))) + range(*slice(10, 18, 3).indices(len(L)))))(L)
(1, 2, 3, 4, 10, 13, 16)
Not sure if this is "better", but it works so why not...
[y for x in [old_list[slice(*a)] for a in ((0,1),(3,201),(201,None,3))] for y in x]
It's probably slow (especially compared to chain), but it's basic Python (3.5.2 used for testing).
Why don't you create a custom slicing helper for your purpose?
>>> from itertools import chain, islice
>>> it = range(50)
>>> def cslice(iterable, *selectors):
...     return chain(*(islice(iterable, *s) for s in selectors))
...
>>> list(cslice(it, (1, 5), (10, 15), (25, None, 3)))
[1, 2, 3, 4, 10, 11, 12, 13, 14, 25, 28, 31, 34, 37, 40, 43, 46, 49]
You could extend list to allow multiple slices and indices:
class MultindexList(list):
    def __getitem__(self, key):
        if type(key) is tuple or type(key) is list:
            r = []
            for index in key:
                item = super().__getitem__(index)
                if type(index) is slice:
                    r += item
                else:
                    r.append(item)
            return r
        else:
            return super().__getitem__(key)
a = MultindexList(range(10))
print(a[1:3]) # [1, 2]
print(a[[1, 2]]) # [1, 2]
print(a[1, 1:3, 4:6]) # [1, 1, 2, 4, 5]