I'm working on a simple size() method for a MutableList class, and I keep getting the following error:
>>> xs = MutableList
>>> xs
<class __main__.MutableList at 0x02AC6848>
>>> xs.size()
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
xs.size()
File "C:\Users\safim\Desktop\Python HW 4\a3_1.py", line 59, in size
for x in self :
TypeError: iteration over non-sequence
The code I used was:
result = 0
for x in self :
    result + 1
return result
I appreciate the help in advance.
xs is the same object as MutableList, because you made it so:
xs = MutableList
The message printed even tells you this:
<class __main__.MutableList at 0x02AC6848>
As it says, xs is the class, not an instance of that class.
You can't call MutableList.size() (which is what you're trying to do, since xs and MutableList are the same thing), because that doesn't tell it which instance you want to use.
Did you mean to instantiate a MutableList? If so:
xs = MutableList()
Your other code won't work either, since result + 1 adds 1 to result and then throws away that number (you never assign it to a variable). Most likely you mean result += 1.
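For illustration only, here is a minimal sketch of how the pieces could fit together, assuming MutableList keeps its items in an internal Python list and defines __iter__ (the real class in the assignment may look different):

class MutableList:
    def __init__(self, items=None):
        self._items = list(items or [])   # hypothetical internal storage

    def __iter__(self):                   # makes "for x in self" work
        return iter(self._items)

    def size(self):
        result = 0
        for x in self:
            result += 1                   # note +=, not "result + 1"
        return result

xs = MutableList([10, 20, 30])            # an instance, not the class itself
print(xs.size())                          # prints 3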
I'm having trouble understanding generators when they are used as arguments. Here are a few cases where I don't fully understand the source of the errors, or at least how to work around them. Explicit fixes as well as broader explanations of the functionality and use of generators would be appreciated.
(I included 3 examples because I have a feeling they are all just facets of the same misunderstanding. I think the question would be better served as one post rather than 3. Please comment if you disagree.)
In the first example: I understand that generator objects cannot be added explicitly, but how should the script be modified to use (unpack?) the generators?
>>> def f(x): return x
>>> def g(x): yield x
>>> def h(x, y): return x + y
>>> h(f(3), f(4))
7
>>> h(g(3), g(4))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in h
TypeError: unsupported operand type(s) for +: 'generator' and 'generator'
In the second example: I'm having trouble understanding how a predefined generator object could be used to generate argument values for a function.
>>> print(*(x*2 for x in range(3)))
0 2 4
>>> print(x*2 for x in range(3))
<generator object <genexpr> at 0x7fc71f050db0>
>>> print(*(yld(x) for x in range(3)))
<generator object yld at 0x7fc71f050c50> <generator object yld at 0x7fc71f050ca8> <generator object yld at 0x7fc71f050e08>
In this third example, how could a generator be used for a multi-argument function?
def yld(x): yield x
(lambda x, y: x + y)(*(yld(x) for x in range(2)))
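For what it's worth, here is one way all three examples could be made to work (a sketch using the g, h, and yld defined above; each generator here yields exactly one value, so that value can be pulled out with next() or the generator unpacked with *):

>>> h(next(g(3)), next(g(4)))    # extract the single yielded value from each generator
7
>>> h(*g(3), *g(4))              # or unpack each generator into the call (Python 3.5+)
7
>>> print(*(next(yld(x)) for x in range(3)))    # unwrap each inner generator first
0 1 2
>>> (lambda x, y: x + y)(*(next(yld(x)) for x in range(2)))    # 0 + 1
1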
I have the following Python statement:
x = lambda :8()
checking the type of x returns the following
<class 'function'>
but then doing this
x()
TypeError: 'int' object is not callable
I can solve this by putting parentheses around the lambda like so
x = (lambda :8)()
But I am wondering what is going on.
The problem does not lie with calling x itself; you are trying to call 8, via the 8() in the lambda's body. Calling an integer raises an error because instances of int are not callable.
I can solve this by putting parentheses around the lambda like so
What you are doing with x = (lambda :8)() is constructing an anonymous function that always returns the number 8, then calling it, and assigning the name x to the return value.
>>> x = (lambda :8)()
>>> x
8
However, x() will still raise an error because again it's trying to call an integer.
>>> x()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'int' object is not callable
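If the intent was simply to bind a function that returns 8 to x (an assumption about what was meant), drop the trailing call:

>>> x = lambda: 8   # x is the function itself; nothing has been called yet
>>> x()
8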
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
>>> show('tiger','cat', {'name':'tom'})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (3 given)
Since the append method of alist only accepts one argument, why isn't a syntax error reported for the line alist.append(*args, **kwargs) in the definition of the function show?
It's not a syntax error because the syntax is perfectly fine and that function may or may not raise an error depending on how you call it.
The way you're calling it:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger']
>>> show('tiger','cat')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in show
TypeError: append() takes exactly one argument (2 given)
A different way:
alist = []
def show(*args, **kwargs):
    alist.append(*args, **kwargs)
    print(alist)
>>> show('tiger')
['tiger', 'tiger']
>>> class L: pass
...
>>> alist = L()
>>> alist.append = print
>>> show('tiger','cat')
tiger cat
<__main__.L object at 0x000000A45DBCC048>
Python objects are strongly typed. The names that bind to them are not, and neither are function arguments. Given Python's dynamic nature, it would be extremely difficult to statically predict what type a variable at a given source location will hold at execution time, so the general rule is that Python doesn't bother trying.
In your specific example, alist is not in the local scope. It can therefore be modified after your function definition has been executed, and the changes will be visible to your function; cf. the code snippets below.
So, in accord with the general rule, predicting whether alist will still be a list when you call .append is near-impossible. In particular, the interpreter cannot predict that this call will be an error.
Here is some code to drive home the point that static type checking is, for all practical purposes, impossible in Python. It uses non-local variables, as in your example.
funcs = []
for a in [1, "x", [2]]:
    def b():
        def f():
            print(a)
        return f
    funcs.append(b())
for f in funcs:
    f()
Output:
[2] # value of a at definition time (of f): 1
[2] # value of a at definition time (of f): 'x'
[2] # value of a at definition time (of f): [2]
And similarly for non-global non-local variables:
funcs = []
for a in [1, "x", [2]]:
    def b(a):
        def f():
            print(a)
        a = a+a
        return f
    funcs.append(b(a))
for f in funcs:
    f()
Output:
2 # value of a at definition time (of f): 1
xx # value of a at definition time (of f): 'x'
[2, 2] # value of a at definition time (of f): [2]
It's not a syntax error because it's only resolved at runtime. Syntax errors are caught up front, during parsing: things like unmatched brackets or malformed statements. Passing the wrong number of arguments is not a syntax problem (and *args means any number of arguments anyway), so it can only fail when the call actually happens.
show has no way of knowing what you'll pass it at runtime, and since you are expanding your args variable inside show, any number of arguments could be coming in, and it's all valid syntax. list.append takes exactly one argument: one tuple, one list, one int, one string, one instance of a custom class, etc. What you are passing it is some number of elements that depends on the input. If you remove the * it's all fine, since args itself is one element, e.g. alist.append(args).
All this means that your show function is faulty: it can only handle args when it has length 1. If the length is 0 you also get a TypeError at the point append is called. If it's more than 1 it's broken, but you won't know until you run it with the bad input.
You could loop over the elements in args (and kwargs) and add them one by one.
alist = []
def show(*args, **kwargs):
    for a in args:
        alist.append(a)
    for kv in kwargs.items():
        alist.append(kv)
    print(alist)
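A quick usage sketch of that version (output shown assuming a fresh, empty alist):

>>> show('tiger', 'cat', name='tom')
['tiger', 'cat', ('name', 'tom')]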
a = 3
def f(x):
    x = (x**3-4*x)/(3(x**2)-4)
    return x
while True:
    print(a)
    a = f(a)
I'm getting a type error here, and I'm not sure why. I'm trying to run this recursive function; is there any way to fix this?
You need a * operator between the 3 and the opening parenthesis. Multiplication is only implied in mathematical notation; in Python it looks like you're trying to call a function.
3(x**2)
So it would be
3*(x**2)
For example
>>> 3(5*2)
Traceback (most recent call last):
File "<pyshell#0>", line 1, in <module>
3(5*2)
TypeError: 'int' object is not callable
>>> 3*(5*2)
30
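With the * added, the function itself runs; here is a minimal sketch (using a bounded loop instead of the question's while True, which would otherwise never terminate):

a = 3.0   # start from a float so the division behaves the same on Python 2 and 3
def f(x):
    return (x**3 - 4*x)/(3*(x**2) - 4)   # note the explicit *

for _ in range(5):    # bounded loop, for illustration only
    print(a)
    a = f(a)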
One of my coworkers was using the built-in max function (on Python 2.7), and he found some weird behavior.
By mistake, instead of passing the keyword argument key (as in key=lambda n: n) to control how the elements of the list are compared, he did:
>>> max([1,2,3,3], lambda n : n)
[1, 2, 3, 3]
He was doing what the documentation explains as: "If two or more positional arguments are provided, the largest of the positional arguments is returned."
So now I'm curious about why this happens:
>>> (lambda n:n) < []
True
>>> def hello():
... pass
...
>>> hello < []
True
>>> len(hello)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: object of type 'function' has no len()
I know it's not a big deal, but I'd appreciate it if any of the stackoverflowers could explain how those comparisons are made internally (or point me in a direction where I can find that information). :-)
Thank you in advance!
Python 2 orders objects of different types rather arbitrarily. It did this to make lists always sortable, whatever their contents. Which way a given cross-type comparison comes out doesn't really matter; what matters is that one side always wins consistently. As it happens, the C implementation falls back to comparing type names; a lambda's type name is 'function', which sorts before 'list'.
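A couple of Python 2.7 comparisons that illustrate the fallback (illustrative only; numbers are a special case and compare smaller than any non-number):

>>> [] < ()     # 'list' < 'tuple' by type name
True
>>> 1 < ''      # numbers sort before everything else
True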
In Python 3, your code would raise an exception instead:
>>> (lambda n: n) < []
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unorderable types: function() < list()
because, as you found out, supporting arbitrary comparisons mostly leads to hard-to-crack bugs.
Everything in Python (2) can be compared, but some of those comparisons are fairly nonsensical, as you've seen.
>>> (lambda n:n) < []
True
Python 3 resolves this, and produces exceptions instead.
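For completeness, the call the coworker presumably intended uses the key keyword argument:

>>> max([1, 2, 3, 3], key=lambda n: n)
3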