This is the function the unittest is written for:
def swap_k(L, k):
    """ (list, int) -> NoneType
    Precondition: 0 <= k <= len(L) // 2
    Swap the first k items of L with the last k items of L.
    >>> nums = [1, 2, 3, 4, 5, 6]
    >>> swap_k(nums, 2)
    >>> nums
    [5, 6, 3, 4, 1, 2]
    """
This is the unittest code:
def test_swap_k_list_length_6_swap_2(self):
    """Test swap_k with list of length 6 and number of items to swap 2.
    Also allow for the fact that there are potentially four alternate
    valid outcomes.
    """
    list_original = [1, 2, 3, 4, 5, 6]
    list_outcome_1 = [5, 6, 3, 4, 1, 2]
    list_outcome_2 = [5, 6, 3, 4, 2, 1]
    list_outcome_3 = [6, 5, 3, 4, 1, 2]
    list_outcome_4 = [6, 5, 3, 4, 2, 1]
    valid_outcomes = [list_outcome_1, list_outcome_2, list_outcome_3, list_outcome_4]
    k = 2
    a1.swap_k(list_original, k)
    self.assertIn(list_original, valid_outcomes)
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
The unittest passes and I don't understand why, since I think the only valid outcome would be list_outcome_1, judging by the docstring of swap_k...
First of all, the test can pass even if valid_outcomes contains more than what is actually valid (in your opinion, only list_outcome_1). It just means the test sometimes won't fail when it should.
Second, I think the test is correct: the doc doesn't say that the first "k" items will be placed last in their original order, nor does it guarantee the same for the last "k" items. So any order of [1,2] could appear at the end of the list, and any order of [5,6] could appear at the beginning.
In general, if something is not guaranteed then I prefer not to assume it, even if it seems logical (a list is ordered, after all, so it's almost natural to assume that).
"Fixing" the unittest would also mean fixing the doc to guarantee order.
self.assertEqual(list_original, list_outcome_1)
and
self.assertIn(list_original, valid_outcomes)
both satisfy the test. Here you are testing whether the actual outcome is in the list of acceptable outcomes, which it is, so the test is valid.
However, as per the docstring,
self.assertEqual(list_original, list_outcome_1)
would have been better, as it checks for exact equality with the documented result.
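A stricter version of the test, pinning the outcome down to exactly what the docstring shows, would look like this (a sketch assuming the same a1 module as in the original test):
def test_swap_k_list_length_6_swap_2_strict(self):
    """Only accept the exact outcome shown in the docstring."""
    list_original = [1, 2, 3, 4, 5, 6]
    a1.swap_k(list_original, 2)
    self.assertEqual(list_original, [5, 6, 3, 4, 1, 2])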
Related
There is this question, 189. Rotate Array, on LeetCode. Its statement is: "Given an array, rotate the array to the right by k steps, where k is non-negative."
To understand it better, here's an example: rotating [1, 2, 3, 4, 5, 6, 7] to the right by k = 3 gives [5, 6, 7, 1, 2, 3, 4].
So, my code for this is:
for _ in range(k):
    j = nums[-1]
    nums.remove(nums[-1])
    nums.insert(0, j)
It fails some of the test cases.
In the discussion panel, I found code that was reportedly submitted successfully and went like this:
for _ in range(k):
    nums.insert(0, nums.pop(-1))
I would like to know, what is the difference between these two and why my code isn't able to pass some of the test cases.
If you evaluate [].remove.__doc__ in a Python shell, you'll see that the purpose of list.remove is to:
Remove first occurrence of value. Raises ValueError if the value is
not present.
In your code, nums.remove(nums[-1]) does not remove the last item, but the first occurrence of the value of your last item.
For example, if you have a list nums = [2, 4, 8, 3, 4] and you do nums.remove(nums[-1]), the content of nums becomes [2, 8, 3, 4], not the [2, 4, 8, 3] you're expecting.
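A quick interactive check shows the difference between remove and pop here:
>>> nums = [2, 4, 8, 3, 4]
>>> nums.remove(nums[-1])   # removes the *first* 4
>>> nums
[2, 8, 3, 4]
>>> nums = [2, 4, 8, 3, 4]
>>> nums.pop(-1)            # removes and returns the *last* element
4
>>> nums
[2, 4, 8, 3]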
Just use slicing:
>>> def rotate(l, n):
... return l[-n:] + l[:-n]
...
>>> lst = [1, 2, 3, 4, 5, 6, 7]
>>> rotate(lst, 1)
[7, 1, 2, 3, 4, 5, 6]
>>> rotate(lst, 2)
[6, 7, 1, 2, 3, 4, 5]
>>> rotate(lst, 3)
[5, 6, 7, 1, 2, 3, 4]
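If the rotation has to happen in place (the original attempt mutates nums directly), the same slicing idea works via slice assignment; this is a sketch using a hypothetical rotate_in_place helper, not a tested LeetCode submission:
def rotate_in_place(nums, k):
    k %= len(nums)                        # guard against k >= len(nums)
    if k:
        nums[:] = nums[-k:] + nums[:-k]   # replace the contents without rebinding the name

nums = [1, 2, 3, 4, 5, 6, 7]
rotate_in_place(nums, 3)
print(nums)   # [5, 6, 7, 1, 2, 3, 4]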
In your code, j = nums[-1] only copies the value of the last item, and nums.remove(nums[-1]) then removes the first occurrence of that value, not the last element itself. In
for _ in range(k):
    nums.insert(0, nums.pop(-1))
nums.pop(-1) removes and returns the last element itself, which is then inserted at index 0.
The answer given by cesarv is good and simple, but on large arrays numpy will definitely perform better. Consider:
np.roll(lst, n)
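For example (note that np.roll returns a new numpy array rather than modifying the list in place):
>>> import numpy as np
>>> np.roll([1, 2, 3, 4, 5, 6, 7], 2)
array([6, 7, 1, 2, 3, 4, 5])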
The remove() method takes a single element as an argument and removes its first occurrence from the list.
The pop() method takes a single, optional argument (an index). If it is not passed, the default index -1 is used (the index of the last item).
If the list contains the same value somewhere before the last index, remove() deletes that earlier occurrence, so the test case will fail.
Hence, to correct your code, replace remove with pop:
for _ in range(k):
    poppedElement = nums.pop()
    nums.insert(0, poppedElement)
or, to make it even more concise:
for _ in range(k):
    nums.insert(0, nums.pop())
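As a side note, every insert(0, ...) shifts the whole list, so the loop above is O(n·k). If that ever matters, collections.deque from the standard library rotates in O(k); a small sketch:
from collections import deque

d = deque([1, 2, 3, 4, 5, 6, 7])
d.rotate(2)            # positive values rotate to the right
print(list(d))         # [6, 7, 1, 2, 3, 4, 5]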
I'm trying my hand at converting the following loop to a comprehension.
The problem: given input_list = [1, 2, 3, 4, 5],
return a list in which each element is the product of all elements up to that index, going from left to right.
Hence the returned list would be [1, 2, 6, 24, 120].
The normal loop I have (and it's working):
l2r = list()
for i in range(lst_len):
    if i == 0:
        l2r.append(lst_num[i])
    else:
        l2r.append(lst_num[i] * l2r[i-1])
Python 3.8+ solution:
:= Assignment Expressions
lst = [1, 2, 3, 4, 5]
curr = 1
out = [(curr:=curr*v) for v in lst]
print(out)
Prints:
[1, 2, 6, 24, 120]
Other solution (with itertools.accumulate):
from itertools import accumulate
out = [*accumulate(lst, lambda a, b: a*b)]
print(out)
Well, you could do it like this(a):
import math
orig = [1, 2, 3, 4, 5]
print([math.prod(orig[:pos]) for pos in range(1, len(orig) + 1)])
This generates what you wanted:
[1, 2, 6, 24, 120]
and basically works by running a counter from 1 to the size of the list, at each point working out the product of all terms before that position:
pos   values       prod
===   =========    ====
 1    1               1
 2    1,2             2
 3    1,2,3           6
 4    1,2,3,4        24
 5    1,2,3,4,5     120
(a) Just keep in mind that this is less efficient at runtime, since it calculates the full product for every single element rather than caching the most recently obtained product. You can avoid that, while still keeping your code compact (often the reason for using list comprehensions), with something like:
def listToListOfProds(orig):
    curr = 1
    newList = []
    for item in orig:
        curr *= item
        newList.append(curr)
    return newList

print(listToListOfProds([1, 2, 3, 4, 5]))
That's obviously not a list comprehension, but it still has the advantage that it doesn't clutter up the code at the point where you need the calculation.
People seem to often discount the function solution in Python, simply because the language is so expressive and allows things like list comprehensions to do a lot of work in minimal source code.
But, other than the function itself, this solution has the same advantage as a one-line list comprehension in that it, well, takes up one line :-)
In addition, you're free to change the function whenever you want (if you find a better way in a later Python version, for example), without having to change all the different places in the code that call it.
This should not be made into a list comprehension if one iteration depends on the state of an earlier one!
If the goal is a one-liner, then there are lots of solutions, with @AndrejKesely's itertools.accumulate() being an excellent one (+1). Here's mine, which abuses functools.reduce():
from functools import reduce
lst = [1, 2, 3, 4, 5]
print(reduce(lambda x, y: x + [x[-1] * y], lst, [lst.pop(0)]))
But as far as list comprehensions go, @AndrejKesely's assignment-expression-based solution is the wrong thing to do (-1). Here's a more self-contained comprehension that doesn't leak into the surrounding scope:
lst = [1, 2, 3, 4, 5]
seq = [a.append(a[-1] * b) or a.pop(0) for a in [[lst.pop(0)]] for b in [*lst, 1]]
print(seq)
But it's still the wrong thing to do! This is based on a similar problem that also got upvoted for the wrong reasons.
A recursive function could help.
input_list = [1, 2, 3, 4, 5]

def cumprod(ls, i=None):
    i = len(ls) - 1 if i is None else i
    if i == 0:
        return 1   # note: the base case returns 1, which matches ls[0] only because this input starts with 1
    return ls[i] * cumprod(ls, i-1)

output_list = [cumprod(input_list, i) for i in range(len(input_list))]
output_list has value [1, 2, 6, 24, 120]
This method can be compressed in Python 3.8+ using the walrus operator:
input_list = [1, 2, 3, 4, 5]

def cumprod_inline(ls, i=None):
    return 1 if (i := len(ls) - 1 if i is None else i) == 0 else ls[i] * cumprod_inline(ls, i-1)

output_list = [cumprod_inline(input_list, i) for i in range(len(input_list))]
output_list has value [1, 2, 6, 24, 120]
Because you plan to use this in a list comprehension, there's no need to provide a default for the i argument. This removes the need to check whether i is None.
input_list = [1, 2, 3, 4, 5]

def cumprod_inline_nodefault(ls, i):
    return 1 if i == 0 else ls[i] * cumprod_inline_nodefault(ls, i-1)

output_list = [cumprod_inline_nodefault(input_list, i) for i in range(len(input_list))]
output_list has value [1, 2, 6, 24, 120]
Finally, if you really wanted to keep it to a single, self-contained list comprehension line, you can use recursive lambda calls:
input_list = [1, 2, 3, 4, 5]
output_list = [(lambda func, x, y: func(func, x, y))(lambda func, ls, i: 1 if i == 0 else ls[i] * func(func, ls, i-1), input_list, i) for i in range(len(input_list))]
output_list has value [1, 2, 6, 24, 120]
It's entirely over-engineered and barely legible, but hey, it works, and it's just for fun.
For your list, it might not be intentional that the numbers are consecutive, starting from 1. But for cases where that pattern is intentional, you can use the built-in function factorial():
from math import factorial
input_list = [1, 2, 3, 4, 5]
l2r = [factorial(i) for i in input_list]
print(l2r)
Output:
[1, 2, 6, 24, 120]
The numpy package has fast, vectorized implementations of many such operations built in. To obtain, for example, a cumulative product:
>>> import numpy as np
>>> np.cumprod([1, 2, 3, 4, 5])
array([ 1, 2, 6, 24, 120])
The above returns a numpy array. If you are not familiar with numpy, you may prefer to obtain just a normal python list:
>>> list(np.cumprod([1, 2, 3, 4, 5]))
[1, 2, 6, 24, 120]
Using itertools and the operator module:
from itertools import accumulate
import operator as op
ip_lst = [1,2,3,4,5]
print(list(accumulate(ip_lst, func=op.mul)))
Why is this not working? The actual result is [] for any input.
def non_unique(ints):
    """
    Return a list consisting of only the non-unique elements from the list lst.
    You are given a non-empty list of integers (ints). You should return a
    list consisting of only the non-unique elements in this list. To do so
    you will need to remove all unique elements (elements which are
    contained in a given list only once). When solving this task, do not
    change the order of the list.
    >>> non_unique([1, 2, 3, 1, 3])
    [1, 3, 1, 3]
    >>> non_unique([1, 2, 3, 4, 5])
    []
    >>> non_unique([5, 5, 5, 5, 5])
    [5, 5, 5, 5, 5]
    >>> non_unique([10, 9, 10, 10, 9, 8])
    [10, 9, 10, 10, 9]
    """
    new_list = []
    for x in ints:
        for a in ints:
            if ints.index(x) != ints.index(a):
                if x == a:
                    new_list.append(a)
    return new_list
Working code (not from me):
result = []
for c in ints:
    if ints.count(c) > 1:
        result.append(c)
return result
list.index returns the first index that contains the input value, so if x == a is true, then ints.index(x) will always equal ints.index(a). If you want to keep your same code structure, I'd recommend keeping track of the indices within the loops using enumerate, as in:
for x_ind, x in enumerate(ints):
    for a_ind, a in enumerate(ints):
        if x_ind != a_ind:
            if x == a:
                new_list.append(a)
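Wrapped into a complete function (a sketch that keeps the asker's structure; the added break is needed so each element is appended only once when a value occurs more than twice):
def non_unique(ints):
    new_list = []
    for x_ind, x in enumerate(ints):
        for a_ind, a in enumerate(ints):
            if x_ind != a_ind and x == a:
                new_list.append(a)
                break   # stop after the first duplicate found for this element
    return new_list

print(non_unique([10, 9, 10, 10, 9, 8]))  # [10, 9, 10, 10, 9]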
Although, for what it's worth, I think your example of working code is a better way of accomplishing the same task.
Although the example of working code is correct, it suffers from quadratic complexity, which makes it slow for larger lists. I'd prefer something like this:
from nltk.probability import FreqDist

def non_unique(ints):
    fd = FreqDist(ints)
    return [x for x in ints if fd[x] > 1]
It precomputes a frequency distribution in the first step, and then selects all non-unique elements. Both steps have an O(n) performance characteristic.
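If nltk isn't available, the same two-step idea works with the standard library's collections.Counter (a sketch of the same approach, not the original answer's code):
from collections import Counter

def non_unique(ints):
    counts = Counter(ints)                     # O(n): build the frequency table
    return [x for x in ints if counts[x] > 1]  # O(n): keep elements that occur more than once

print(non_unique([10, 9, 10, 10, 9, 8]))  # [10, 9, 10, 10, 9]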
I have a number of objects that I need to print out to the terminal (for debugging). The normal print function is almost perfect, except that some objects are too large, so print would create millions of lines of output. I'd like to create a function that does what print does, except that the output is truncated after a predefined number of characters, replacing the rest with ....
What's a good way to do that?
Note that performance is a concern, so ideally I'd prefer not to save a gigabyte-sized string and then take the first few characters from it; similarly, pprint is a bit of a problem since it sorts keys in dictionaries (and with millions of keys it takes a while).
Example:
obj = [ [1, 2, 3], list(range(1000000)) ]
my_print(obj, 20)
# should output:
# [[1, 2, 3], [0, 1, 2...
Python 3, if it matters.
The reprlib module (Python 3.x only) suggested by @m0nhawk is made exactly for this purpose. Here's how you would use it:
If you're fine with the default limits, you can simply use reprlib.repr(obj):
import reprlib
obj = [[1, 2, 3], list(range(10000))]
print(reprlib.repr(obj))
Output:
[[1, 2, 3], [0, 1, 2, 3, 4, 5, ...]]
In order to customize the available limits, simply create a reprlib.Repr instance and set the appropriate instance attributes:
r = reprlib.Repr()
r.maxlist = 4 # max elements displayed for lists
r.maxstring = 10 # max characters displayed for strings
obj = [[1, 2, 3], list(range(10000)), 'looooooong string', 'a', 'b', 'c']
print(r.repr(obj))
Output:
[[1, 2, 3], [0, 1, 2, 3, ...], 'lo...ing', 'a', ...]
If you're dealing with sequence objects that refer to themselves, you can use Repr.maxlevel to limit the recursion depth:
lst = [1, 2, 3]
lst.append(lst) # oh my!
r = reprlib.Repr()
r.maxlevel = 5 # max recursion depth
print(r.repr(lst))
Output:
[1, 2, 3, [1, 2, 3, [1, 2, 3, [1, 2, 3, [1, 2, 3, [...]]]]]]
Note that reprlib.repr() returns a string, but doesn't print it (unless you're in an interactive console where the result of every expression you enter gets evaluated and its representation displayed).
Why not just make a simple slice wrapper over the print function?
def my_print(obj, depth):
    print(str(obj)[:depth])
print applies the same conversion as str before writing to the output stream. So what you want to do is perform that conversion early, before passing the result to print, and then slice off a chunk with a maximum size of whatever you want.
Python slicing is graceful, so a string slice like 'xyz'[:30000] evaluates simply to 'xyz' rather than raising an error.
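For example, with the sample object from the question (note that this simple version truncates but doesn't append a trailing ..., and it still builds the full string internally):
>>> obj = [[1, 2, 3], list(range(1000000))]
>>> my_print(obj, 20)
[[1, 2, 3], [0, 1, 2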
I am trying to write code that changes the position of an integer inside a list (basically swapping its position with another integer).
I have tried to use all logic, but still can't understand why my code is messing up:
SpecialNum = 10

def number_move(move_number):
    for elements in range(len(move_number)):
        if ( SpecialNum != move_number[-1]):
            x = move_number.index(SpecialNum)
            y = move_number.index(SpecialNum)+1
            move_number[y], move_number[x] = move_number[x], move_number[y]
    return (move_number)
For the input:
[1, 2, 3, 10, 4, 5, 6]
the output should be:
[1, 2, 3, 4, 10, 5, 6]
but the output comes out as:
[1, 2, 3, 4, 5, 6, 10]
Assuming your actual indentation looks like this:
SpecialNum = 10

def number_move(move_number):
    for elements in range(len(move_number)):
        if ( SpecialNum != move_number[-1]):
            x = move_number.index(SpecialNum)
            y = move_number.index(SpecialNum)+1
            move_number[y], move_number[x] = move_number[x], move_number[y]
    return move_number
… the problem is that you're swapping the 10 to the right over and over in a loop, until it reaches the very end.
If that isn't what you want, why do you have the for elements in range(len(move_number)) in the first place? Just take it out, and it will only get swapped right once.
As a side note, you rarely need range(len(eggs)); you can just do for egg in eggs (or, if you need the index along with the actual object, for index, egg in enumerate(eggs)).
Also, you've got a whole lot of extra parentheses that aren't needed, and make the code harder to read.
Meanwhile, every call to index has to search the entire list to find your object's position; if you already know the position, it's better to just use it. Not only is it a lot faster, and simpler, it's also more robust—if there are two elements of the list with the same value, index can only find the first one. In your case, there's no obvious way around using index, but at least you can avoid calling it twice.
Putting that together:
SpecialNum = 10

def number_move(move_number):
    x = move_number.index(SpecialNum)
    y = x + 1
    if y != len(move_number):
        move_number[y], move_number[x] = move_number[x], move_number[y]
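A quick check of this fixed version (it modifies the list in place and returns nothing):
>>> nums = [1, 2, 3, 10, 4, 5, 6]
>>> number_move(nums)
>>> nums
[1, 2, 3, 4, 10, 5, 6]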
Finally, I said there's no obvious way around using index… but is there a non-obvious way? Sure. If you're going to call index repeatedly on the same object, we can make the last-found index part of the interface to the function, or we can even store a cache inside the function. The simplest way to do this is to turn the whole thing into a generator. A generator that mutates its arguments can be kind of confusing, so let's make it return copies instead. And finally, to make it customizable, let's take a parameter so you can specify a different SpecialNum than 10.
SpecialNum = 10

def number_move(move_number, special_num=SpecialNum):
    for x, element in reversed(list(enumerate(move_number))):
        if element == special_num:
            while x + 1 < len(move_number):
                move_number = (move_number[:x] +
                               [move_number[x+1], move_number[x]] +
                               move_number[x+2:])
                yield move_number
                x += 1
Now, it'll move all of the 10s to the end, one step at a time. Like this:
>>> n = [1, 10, 2, 3, 10, 4, 5, 6]
>>> for x in number_move(n):
... print(x)
[1, 10, 2, 3, 4, 10, 5, 6]
[1, 10, 2, 3, 4, 5, 10, 6]
[1, 10, 2, 3, 4, 5, 6, 10]
[1, 2, 10, 3, 4, 5, 6, 10]
[1, 2, 3, 10, 4, 5, 6, 10]
[1, 2, 3, 4, 10, 5, 6, 10]
[1, 2, 3, 4, 5, 10, 6, 10]
[1, 2, 3, 4, 5, 6, 10, 10]
[1, 2, 3, 4, 5, 6, 10, 10]
You don't need the for loop :)
def number_move(move_number):
    x = move_number.index(SpecialNum)
    y = move_number.index(SpecialNum) + 1
    move_number[y], move_number[x] = move_number[x], move_number[y]
Alternative:
>>> def number_move(m, i):
...     num = m.pop(i)
...     m.insert(i+1, num)
...     return m
...
>>> l = number_move([1,2,3,10,4,5,6], 3)
>>> l
[1, 2, 3, 4, 10, 5, 6]