Say I have a list:
l = [1, 2, 3, 4]
And I want to cycle through it. Normally, cycling through it would produce something like this:
1, 2, 3, 4, 1, 2, 3, 4, 1, 2...
I want to be able to start at a certain point in the cycle, not necessarily at an index, but perhaps by matching an element. Say I wanted to start at whatever element in the list == 4; then the output would be:
4, 1, 2, 3, 4, 1, 2, 3, 4, 1...
How can I accomplish this?
Look at the itertools module. It provides all the necessary functionality.
from itertools import cycle, islice, dropwhile
L = [1, 2, 3, 4]
cycled = cycle(L)  # cycle through the list 'L'
skipped = dropwhile(lambda x: x != 4, cycled) # drop the values until x==4
sliced = islice(skipped, None, 10) # take the first 10 values
result = list(sliced) # create a list from iterator
print(result)
Output:
[4, 1, 2, 3, 4, 1, 2, 3, 4, 1]
Use the arithmetic mod operator. If you're starting from position k, then k should be updated like this:
k = (k + 1) % len(l)
If you want to start from a certain element rather than an index, you can always look it up with k = l.index(x), where x is the desired item.
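For illustration, here is a minimal sketch of that approach (the loop and variable names are my own, not from the answer):

l = [1, 2, 3, 4]
k = l.index(4)           # start at the element equal to 4
for _ in range(10):      # take the first 10 values of the cycle
    print(l[k], end=' ')
    k = (k + 1) % len(l)
# prints: 4 1 2 3 4 1 2 3 4 1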
I'm not such a big fan of importing modules when you can do things on your own in a couple of lines. Here's my solution without imports:
def cycle(my_list, start_at=None):
    start_at = 0 if start_at is None else my_list.index(start_at)
    while True:
        yield my_list[start_at]
        start_at = (start_at + 1) % len(my_list)
This returns an (infinite) iterator that loops over your list. To get the next element in the cycle you call the built-in next():
>>> it1 = cycle([101,102,103,104])
>>> next(it1), next(it1), next(it1), next(it1), next(it1)
(101, 102, 103, 104, 101) # and so on ...
>>> it1 = cycle([101,102,103,104], start_at=103)
>>> next(it1), next(it1), next(it1), next(it1), next(it1)
(103, 104, 101, 102, 103) # and so on ...
import itertools as it
l = [1, 2, 3, 4]
list(it.islice(it.dropwhile(lambda x: x != 4, it.cycle(l)), 10))
# returns: [4, 1, 2, 3, 4, 1, 2, 3, 4, 1]
so the iterator you want is:
it.dropwhile(lambda x: x != 4, it.cycle(l))
Hm, http://docs.python.org/library/itertools.html#itertools.cycle doesn't take such a start element.
Maybe you just start the cycle anyway and drop the first elements that you don't want.
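For what it's worth, a small sketch of that idea (my own code, not from the answer): advance the cycle by hand until the desired element appears, then keep pulling from the same iterator.

from itertools import cycle

l = [1, 2, 3, 4]
c = cycle(l)
first = next(c)
while first != 4:   # drop elements until we hit 4
    first = next(c)
print([first] + [next(c) for _ in range(9)])
# [4, 1, 2, 3, 4, 1, 2, 3, 4, 1]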
Another, admittedly weird, option is to cycle through the list backwards. For instance:
# Run this once
myList = ['foo', 'bar', 'baz', 'boom']
myItem = 'baz'
# Run this repeatedly to cycle through the list
if myItem in myList:
    myItem = myList[myList.index(myItem) - 1]
    print(myItem)
You can use something like this:
def my_cycle(data, start=None):
    k = 0 if not start else start
    while True:
        yield data[k]
        k = (k + 1) % len(data)
Then run:
for val in my_cycle([0,1,2,3], 2):
    print(val)
Essentially the same as one of the previous answers. My bad.
Related
Hey this is my first question so I hope I'm doing it right.
I'm trying to write a function that, given a list of integers and N as the maximum occurrence, returns a list with any occurrence of an integer beyond the maximum deleted. For example, if I input:
[20,37,20,21] #list of integers and 1 #maximum occurrence.
Then as output I would get:
[20,37,21], because the number 20 appears twice and the maximum occurrence is 1, so its extra occurrence is deleted from the list. Here's another example:
Input: [1,1,3,3,7,2,2,2,2], 3
Output: [1,1,3,3,7,2,2,2]
Here's what I wrote so far; how would I be able to optimize it? I keep getting a timeout error. Thank you very much in advance.
from collections import Counter

def delete_nth(order, n):
    order = Counter(order)
    for i in order:
        if order[i] > n:
            while order[i] > n:
                order[i] - 1
    return order

print(delete_nth([20,37,20,21], 1))
You can avoid building the Counter at the beginning and just keep a temporary dictionary as a counter:
def delete_nth(order, n):
    out, counter = [], {}
    for v in order:
        counter.setdefault(v, 0)
        if counter[v] < n:
            out.append(v)
            counter[v] += 1
    return out

print(delete_nth([20,37,20,21], 1))
Prints:
[20, 37, 21]
You wrote:
while order[i] > n:
    order[i] - 1
That second line should presumably be order[i] -= 1, or any code that enters the loop will never leave it.
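For reference, here is the posted function with only that change applied (a sketch, not a complete fix): it now terminates, but it still returns a Counter rather than an ordered list, so the approaches in the other answers are still needed to preserve the original order.

from collections import Counter

def delete_nth(order, n):
    order = Counter(order)
    for i in order:
        if order[i] > n:
            while order[i] > n:
                order[i] -= 1   # actually decrement the stored count so the loop ends
    return order                # note: a Counter, not a list

print(delete_nth([20, 37, 20, 21], 1))
# Counter({20: 1, 37: 1, 21: 1})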
You could use a predicate whose default argument is a collections.defaultdict, to retain state as your list of numbers is being filtered.
def delete_nth(numbers, n):
    from collections import defaultdict

    def predicate(number, seen=defaultdict(int)):
        seen[number] += 1
        return seen[number] <= n

    return list(filter(predicate, numbers))
print(delete_nth([1, 1, 3, 3, 7, 2, 2, 2, 2], 3))
Output:
[1, 1, 3, 3, 7, 2, 2, 2]
I've renamed variables to something that had more meaning for me:
This version, though very short and fairly efficient, will output identical values adjacently:
from collections import Counter
def delete_nth(order, n):
    counters = Counter(order)
    output = []
    for value in counters:
        cnt = min(counters[value], n)
        output.extend([value] * cnt)
    return output
print(delete_nth([1,1,2,3,3,2,7,2,2,2,2], 3))
print(delete_nth([20,37,20,21], 1))
Prints:
[1, 1, 2, 2, 2, 3, 3, 7]
[20, 37, 21]
This version will maintain original order, but run a bit more slowly:
from collections import Counter
def delete_nth(order, n):
    counters = Counter(order)
    for value in counters:
        counters[value] = min(counters[value], n)
    output = []
    for value in order:
        if counters[value]:
            output.append(value)
            counters[value] -= 1
    return output
print(delete_nth([1,1,2,3,3,2,7,2,2,2,2], 3))
print(delete_nth([20,37,20,21], 1))
Prints:
[1, 1, 2, 3, 3, 2, 7, 2]
[20, 37, 21]
How can I fix my code to pass the test case for Delete occurrences of an element if it occurs more than n times?
My current code passes one test case, and I'm sure that the problem is caused by order.remove(check_list[i]).
However, I can't delete a specific element with pop(), because pop() takes an index rather than the element itself.
Test case
Test.assert_equals(delete_nth([20,37,20,21], 1), [20,37,21])
Test.assert_equals(delete_nth([1,1,3,3,7,2,2,2,2], 3), [1, 1, 3, 3, 7, 2, 2, 2])
Program
def delete_nth(order, max_e):
    # code here
    check_list = [x for x in dict.fromkeys(order) if order.count(x) > 1]
    print(check_list)
    print(order)
    for i in range(len(check_list)):
        while order.count(check_list[i]) > max_e:
            order.remove(check_list[i])
            # order.pop(index)
    return order
Your assertions fail because the order is not preserved. Here is a simple example of how this could be done without redundant internal loops to count the occurrences of each number:
def delete_nth(order, max_e):
    # Get a new list that we will return
    result = []
    # Get a dictionary to count the occurrences
    occurrences = {}
    # Loop through all provided numbers
    for n in order:
        # Get the count of the current number, or assign it to 0
        count = occurrences.setdefault(n, 0)
        # If we reached the max occurrence for that number, skip it
        if count >= max_e:
            continue
        # Add the current number to the list
        result.append(n)
        # Increase the occurrence count for this number
        occurrences[n] += 1
    # We are done, return the list
    return result
assert delete_nth([20,37,20,21], 1) == [20, 37, 21]
assert delete_nth([1, 1, 1, 1], 2) == [1, 1]
assert delete_nth([1, 1, 3, 3, 7, 2, 2, 2, 2], 3) == [1, 1, 3, 3, 7, 2, 2, 2]
assert delete_nth([1, 1, 2, 2], 1) == [1, 2]
A version which maintains the order:
from collections import defaultdict
def delete_nth(order, max_e):
    count = defaultdict(int)
    delet = []
    for i, v in enumerate(order):
        count[v] += 1
        if count[v] > max_e:
            delet.append(i)
    for i in reversed(delet):  # start deleting from the end
        order.pop(i)
    return order
print(delete_nth([1,1,2,2], 1))
print(delete_nth([20,37,20,21], 1))
print(delete_nth([1,1,3,3,7,2,2,2,2], 3))
This should do the trick:
from itertools import groupby
import numpy as np
def delete_nth(order, max_e):
    if len(order) <= max_e:
        return order
    elif max_e <= 0:
        return []
    return np.array(
        sorted(
            np.concatenate(
                [list(v)[:max_e]
                 for k, v in groupby(
                     sorted(
                         zip(order, list(range(len(order)))),
                         key=lambda k: k[0]),
                     key=lambda k: k[0])
                 ]
            ),
            key=lambda k: k[1])
    )[:, 0].tolist()
Outputs:
print(delete_nth([2,3,4,5,3,2,3,2,1], 2))
[2, 3, 4, 5, 3, 2, 1]
print(delete_nth([2,3,4,5,5,3,2,3,2,1], 1))
[2, 3, 4, 5, 1]
print(delete_nth([2,3,4,5,3,2,3,2,1], 3))
[2, 3, 4, 5, 3, 2, 3, 2, 1]
print(delete_nth([2,2,1,1], 1))
[2, 1]
Originally my answer only worked for one test case; this is quick (not the prettiest) but works for both:
def delete_nth(x, e):
    x = x[::-1]
    for i in x:
        while x.count(i) > e:
            x.remove(i)
    return x[::-1]
How can I make the following code more compact and efficient?
Here, the code is meant to find the position of each numerical value within the (flattened) list.
For example, given this set of numbers:
ListNo = [[100,2,5], [50,10], 4, 1, [6,6,500]]
The values 100, 50 and 500 are at positions 0, 3 and 9, respectively.
The testing code was as follows
ListNo = [[100,2,5], [50,10], 4, 1, [6,6,500]]
NumberedList = ListNo
Const = 0
items = 0
for i, item in enumerate(ListNo):
    MaxRange = len(item) if isinstance(item, list) else 1
    for x in range(0, MaxRange):
        if MaxRange > 1:
            NumberedList[i][x] = Const
        else:
            NumberedList[i] = Const
        Const = Const + 1
print(NumberedList)
[[0, 1, 2], [3, 4], 5, 6, [7, 8, 9]]
My question is whether there is another way to make this code more compact and efficient.
You can use itertools.count:
from itertools import count
i = count()
print([[next(i) for _ in range(len(l))] if isinstance(l, list) else next(i) for l in ListNo])
This outputs:
[[0, 1, 2], [3, 4], 5, 6, [7, 8, 9]]
A recursive solution would be more elegant and handle more cases:
import itertools

def nested_list_ordinal_recurse(l, it):
    if isinstance(l, list):
        return [nested_list_ordinal_recurse(item, it) for item in l]
    else:
        return next(it)

def nested_list_ordinal(l, _it=None):
    return nested_list_ordinal_recurse(l, itertools.count())
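For example (my own quick test, assuming the functions above), the recursive version also handles deeper nesting:

print(nested_list_ordinal([[100, 2, 5], [50, 10], 4, 1, [6, [6, 500]]]))
# [[0, 1, 2], [3, 4], 5, 6, [7, [8, 9]]]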
ListNo = [[100,2,5], [50,10], 4, 1, [6,6,500]]
count = -1

def counter(l=[]):
    global count
    if l:
        return [counter() for i in l]
    else:
        count += 1
        return count

print([counter(item) if isinstance(item, list) else counter() for item in ListNo])
Without itertools.
Given a list of data, I'm trying to create a new list in which the value at position i is the length of the longest run starting from position i in the original list. For instance, given
x_list = [1, 1, 2, 3, 3, 3]
Should return:
run_list = [2, 1, 1, 3, 2, 1]
My solution:
freq_list = []
current = x_list[0]
count = 0
for num in x_list:
    if num == current:
        count += 1
    else:
        freq_list.append((current, count))
        current = num
        count = 1
freq_list.append((current, count))

run_list = []
for i in freq_list:
    z = i[1]
    while z > 0:
        run_list.append(z)
        z -= 1
First I create a list freq_list of tuples, where each tuple's first element is the element from x_list and the second element is the length of that element's run.
In this case:
freq_list = [(1, 2), (2, 1), (3, 3)]
Having this, I create a new list and append appropriate values.
However, I was wondering if there is a shorter way/another way to do this?
Here's a simple solution that iterates over the list backwards and increments a counter each time a number is repeated:
last_num = None
result = []

for num in reversed(x_list):
    if num != last_num:
        # if the number changed, reset the counter to 1
        counter = 1
        last_num = num
    else:
        # if the number is the same, increment the counter
        counter += 1
    result.append(counter)

# reverse the result
result = list(reversed(result))
Result:
[2, 1, 1, 3, 2, 1]
This is possible using itertools:
from itertools import groupby, chain
x_list = [1, 1, 2, 3, 3, 3]
gen = (range(len(list(j)), 0, -1) for _, j in groupby(x_list))
res = list(chain.from_iterable(gen))
Result
[2, 1, 1, 3, 2, 1]
Explanation
First use itertools.groupby to group identical items in your list.
For each group, create a range object which counts backwards from the number of consecutive items to 1.
Turn this all into a generator to avoid building a list of lists.
Use itertools.chain to chain the ranges from the generator.
Performance note
Performance will be inferior to Aran-Fey's solution. Although itertools.groupby is O(n), it makes heavy use of expensive __next__ calls. These do not scale as well as iteration in simple for loops. See the itertools docs for groupby pseudo-code.
If performance is your main concern, stick with the for loop.
You are performing a reverse cumulative count on contiguous groups. We can create a NumPy cumulative count function with
import numpy as np
def cumcount(a):
    a = np.asarray(a)
    b = np.append(False, a[:-1] != a[1:])
    c = b.cumsum()
    r = np.arange(len(a))
    return r - np.append(0, np.flatnonzero(b))[c] + 1
and then generate our result with
a = np.array(x_list)
cumcount(a[::-1])[::-1]
array([2, 1, 1, 3, 2, 1])
I would use a generator for this kind of task because it avoids building the result list up front and can be consumed lazily if desired:
def gen(iterable):  # you have to think about a better name :-)
    iterable = iter(iterable)
    # Get the first element, in case that fails
    # we can stop right now.
    try:
        last_seen = next(iterable)
    except StopIteration:
        return
    count = 1
    # Go through the remaining items
    for item in iterable:
        if item == last_seen:
            count += 1
        else:
            # The consecutive run finished, return the
            # desired values for the run and then reset
            # counter and the new item for the next run.
            yield from range(count, 0, -1)
            count = 1
            last_seen = item
    # Return the result for the last run
    yield from range(count, 0, -1)
This will also work if the input cannot be reversed (certain generators/iterators cannot be reversed):
>>> x_list = (i for i in range(10)) # it's a generator despite the variable name :-)
>>> ... Aran-Fey's solution ...
TypeError: 'generator' object is not reversible
>>> list(gen((i for i in range(10))))
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
And it works for your input:
>>> x_list = [1, 1, 2, 3, 3, 3]
>>> list(gen(x_list))
[2, 1, 1, 3, 2, 1]
This can actually be made simpler by using itertools.groupby:
import itertools
def gen(iterable):
    for _, group in itertools.groupby(iterable):
        length = sum(1 for _ in group)  # or len(list(group))
        yield from range(length, 0, -1)
>>> x_list = [1, 1, 2, 3, 3, 3]
>>> list(gen(x_list))
[2, 1, 1, 3, 2, 1]
I also did some benchmarks, and according to these, Aran-Fey's solution is the fastest except for long lists, where piRSquared's solution wins.
This was my benchmarking setup if you want to confirm the results:
from itertools import groupby, chain
import numpy as np
def gen1(iterable):
    iterable = iter(iterable)
    try:
        last_seen = next(iterable)
    except StopIteration:
        return
    count = 1
    for item in iterable:
        if item == last_seen:
            count += 1
        else:
            yield from range(count, 0, -1)
            count = 1
            last_seen = item
    yield from range(count, 0, -1)

def gen2(iterable):
    for _, group in groupby(iterable):
        length = sum(1 for _ in group)
        yield from range(length, 0, -1)

def mseifert1(iterable):
    return list(gen1(iterable))

def mseifert2(iterable):
    return list(gen2(iterable))

def aran(x_list):
    last_num = None
    result = []
    for num in reversed(x_list):
        if num != last_num:
            counter = 1
            last_num = num
        else:
            counter += 1
        result.append(counter)
    return list(reversed(result))

def jpp(x_list):
    gen = (range(len(list(j)), 0, -1) for _, j in groupby(x_list))
    res = list(chain.from_iterable(gen))
    return res

def cumcount(a):
    a = np.asarray(a)
    b = np.append(False, a[:-1] != a[1:])
    c = b.cumsum()
    r = np.arange(len(a))
    return r - np.append(0, np.flatnonzero(b))[c] + 1

def pirsquared(x_list):
    a = np.array(x_list)
    return cumcount(a[::-1])[::-1]
from simple_benchmark import benchmark
import random
funcs = [mseifert1, mseifert2, aran, jpp, pirsquared]
args = {2**i: [random.randint(0, 5) for _ in range(2**i)] for i in range(1, 20)}
bench = benchmark(funcs, args, "list size")
%matplotlib notebook
bench.plot()
Python 3.6.5, NumPy 1.14
Here's a simple iterative approach to achieve it using collections.Counter:
from collections import Counter
x_list = [1, 1, 2, 3, 3, 3]
x_counter, run_list = Counter(x_list), []
for x in x_list:
    run_list.append(x_counter[x])
    x_counter[x] -= 1
which leaves run_list as:
[2, 1, 1, 3, 2, 1]
As an alternative, here's a one-liner using a list comprehension with enumerate, but it is not efficient because it repeatedly calls list.count(..) on slices:
>>> [x_list[i:].count(x) for i, x in enumerate(x_list)]
[2, 1, 1, 3, 2, 1]
You can count the consecutive equal items and then add a countdown from count-of-items to 1 to the result:
def runs(p):
    old = p[0]
    n = 0
    q = []
    for x in p:
        if x == old:
            n += 1
        else:
            q.extend(range(n, 0, -1))
            n = 1
            old = x
    q.extend(range(n, 0, -1))
    return q
(A couple of minutes later) Oh, that's the same as MSeifert's code but without the iterable aspect. This version seems to be almost as fast as the method shown by Aran-Fey.
I am fairly new to Python and am trying to figure out how to duplicate items within a list. I have tried several different things and searched for the answer extensively, but I always come up with answers about how to remove duplicate items, and I feel like I am missing something that should be fairly apparent.
I want to duplicate each item in a list, so that, for example, [1, 4, 7, 10] becomes [1, 1, 4, 4, 7, 7, 10, 10].
I know that
list = range(5)
for i in range(len(list)):
    list.insert(i+i, i)
print list
will return [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] but this does not work if the items are not in order.
To provide more context I am working with audio as a list, attempting to make the audio slower.
I am working with:
def slower():
    left = Audio.getLeft()
    right = Audio.getRight()
    for i in range(len(left)):
        left.insert(????)
        right.insert(????)
Where "left" returns a list of items that are the "sounds" in the left headphone and "right" is a list of items that are sounds in the right headphone. Any help would be appreciated. Thanks.
Here is a simple way:
def slower(audio):
    return [audio[i//2] for i in range(0, len(audio)*2)]
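For instance (my own usage example, not from the answer), applying it to the question's list:

print(slower([1, 4, 7, 10]))
# [1, 1, 4, 4, 7, 7, 10, 10]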
Something like this works:
>>> list = [1, 32, -45, 12]
>>> for i in range(len(list)):
...     list.insert(2*i+1, list[2*i])
...
>>> list
[1, 1, 32, 32, -45, -45, 12, 12]
A few notes:
Don't use list as a variable name.
It's probably cleaner to flatten the list zipped with itself.
e.g.
>>> zip(list,list)
[(1, 1), (-1, -1), (32, 32), (42, 42)]
>>> [x for y in zip(list, list) for x in y]
[1, 1, -1, -1, 32, 32, 42, 42]
Or, you can do this whole thing lazily with itertools:
from itertools import izip, chain
for item in chain.from_iterable(izip(list, list)):
    print item
I actually like this method best of all. When I look at the code, it's the one where I immediately know what it's doing (although others may have different opinions on that).
I suppose while I'm at it, I'll just point out that we can do the same thing as above with a generator function:
def multiply_elements(iterable, ntimes=2):
    for item in iterable:
        for _ in xrange(ntimes):
            yield item
And let's face it: generators are just a lot of fun. :-)
listOld = [1,4,7,10]
listNew = []
for element in listOld:
    listNew.extend([element, element])
This might not be the fastest way, but it is pretty compact:

import operator
from functools import reduce  # reduce is a builtin in Python 2

a = range(5)
list(reduce(operator.add, zip(a, a)))

which evaluates to:
[0, 0, 1, 1, 2, 2, 3, 3, 4, 4]
a = [0,1,2,3]
list(reduce(lambda x,y: x + y, zip(a,a))) #=> [0,0,1,1,2,2,3,3]