I am calling a method and I need a static counter within this method. It is required to parse the elements of a list: the counter tells which position of the list to look up.
For example:
static_var_with_position = 0
noib_list = [3, 2, 2, 2, 2, 1, 2, 2]

def foo(orig_output, NOB):
    # tried two ways
    #static_var_with_position += 1  # doesn't work
    #global static_var_with_position
    #static_var_with_position += 1  # doesn't work either
    bit_required = noib_list[static_var_with_position]
    converted_output = convert_output(orig_output, NOB, bit_required)
The static_var_with_position value is never incremented. I have commented out the two ways I tried to increment the value.
In C++ this is a piece of cake, but I couldn't find anything similar in Python so far. Any help will be appreciated :)
Thanks!
Instead of using a global/static counter variable, you could use an iterator:
iterator = iter(noib_list)

def foo(orig_output, NOB):
    bit_required = next(iterator)
    converted_output = convert_output(orig_output, NOB, bit_required)
The iterator will automatically keep track of the next element internally.
When the iterator is exhausted (i.e. when you have reached the end of the list), next will raise a StopIteration error. If you do not know in advance when the end is reached, you can use bit_required = next(iterator, None) instead; then just test whether the value is None to learn that the list is exhausted (or use try/except).
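Here is a self-contained sketch of the iterator approach; convert_output is stubbed out, since its real definition is not shown in the question:

```python
noib_list = [3, 2, 2, 2, 2, 1, 2, 2]
iterator = iter(noib_list)

def convert_output(orig_output, NOB, bit_required):
    # Hypothetical stand-in for the real conversion routine.
    return (orig_output, NOB, bit_required)

def foo(orig_output, NOB):
    bit_required = next(iterator, None)
    if bit_required is None:
        return None  # the list is exhausted
    return convert_output(orig_output, NOB, bit_required)

print(foo("out", 8))  # uses noib_list[0] == 3
print(foo("out", 8))  # uses noib_list[1] == 2
```

Each call consumes one element, so the "counter" is handled for you by the iterator's internal position.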
Following this example, you could do the same with your counter:
def inc():
    global global_cpt
    global_cpt += 1
    print global_cpt

if __name__ == '__main__':
    global_cpt = 0
    inc()
    inc()
    inc()
will print
> 1
> 2
> 3
I don't actually advocate doing this in your case, but it's a little-known hack for creating a "static" variable within a function: put it as a parameter with a mutable default value! You can modify it within the function and it will hold until the next function call, as long as the caller doesn't pass a value for it.
def foo(value=[0]):
    value[0] += 1
    print(value[0])
>>> foo()
1
>>> foo()
2
>>> foo()
3
>>> foo([906])
907
>>> foo()
4
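A related trick, used in another answer on this page, is to hang the counter on the function object itself as an attribute. A small sketch (the name counter is made up for illustration):

```python
def counter():
    # The counter lives as an attribute on the function object itself;
    # this mirrors the f.a trick used elsewhere on this page.
    counter.count += 1
    return counter.count

counter.count = 0  # initialize once, right after the def

print(counter())  # 1
print(counter())  # 2
```

Unlike the mutable-default hack, this state is visible and resettable from outside the function via counter.count.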
I know this has been answered in the highly active 'least astonishment' question. I modified the code snippet slightly, but I still don't understand why it produces a new empty list and adds 1 to it.
def foo(var=[]):
    if len(var) == 0:
        var = []
    print(var)
    var.append(1)
    return var
print(foo())
print(foo())
outputs:
[]
[1]
[]
[1]
My expected logic for this Python snippet:
On the first call of foo(), var is initialized to an empty list, and indeed it is evaluated ONCE, at the definition of foo, as the popular question explains.
The if clause check should not be entered at all, because var is initialized only once; on the 2nd call of foo() it should simply skip it and append 1 to var = [1], thus making the result [1, 1] after the 2nd call.
But it still enters the if clause and initializes var to [] every time. Why?
However, if I remove the if clause:
def foo(var=[]):
    print(var)
    var.append(1)
    return var
print(foo())
print(foo())
It does append "1" to var every time and var grows:
[1]
[1, 1]
So the if clause is entered and evaluated every time. I am confused about the execution.
Let's rewrite your example so that we make some change to the argument before reassigning:
In [262]: def foo(var=[]):
...: var.append(1) # 1
...: print(var) # 2
...: var = [] # 3
...: return var # 4
...:
In [263]: foo()
[1]
Out[263]: []
In [264]: foo()
[1, 1] # reflects the append from the previous call
Out[264]: []
The append step in line 1 mutates the default argument list. The reassignment step in line 3 simply reassigns the variable var to a new list (a completely different object), that's what you return.
You'll see each subsequent call modifies the default argument list, it's still there but you just don't see it because you lose the reference to it when you reassign.
I recommend reading this article by Ned Batchelder.
Alright, so here are two things:
var in your function definition is assigned to an empty list, i.e. [], which is good;
var in the if statement is being re-assigned, but to the same thing, an empty list. You don't really need this, as it's preventing you from achieving [1, 1].
This may help clear up why you are "expecting" var = [1, 1] the second time around:
def foo(var=[]):
    if len(var) == 0:
        print(var)
    var.append(1)
    return var
print(foo())  # prints [] and then [1]
print(foo())  # prints [1, 1]: this time the list was not empty anymore but already held a 1, so it bypassed the if statement and appended another 1, resulting in [1, 1]
Thus, you don't really need that var = [] in the if statement; it only confuses the next steps as to what you want to achieve.
Hope that helps somewhat :)
var=[] on line 1 gets evaluated once when the function is defined
var=[] on line 3 gets evaluated every time you pass a var with len(var) == 0, for instance when you pass no args and the default is used.
This means that the [] on line 3 is a new list every time the function is called, as that line of code is executed every time the function is called.
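Both points can be checked directly by inspecting object identities; a quick sketch built on the question's snippet:

```python
def foo(var=[]):
    if len(var) == 0:
        var = []  # rebinds var to a brand-new list on every call
    print(var)
    var.append(1)
    return var

a = foo()
b = foo()
print(a is b)               # False: two distinct new lists were returned
print(foo.__defaults__[0])  # []: the default list itself was never mutated
```

Since the append always goes to the freshly created list, the default list survives every call untouched.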
The itertools.count counter in Python (2.7.9) is very handy for thread-safe counting. How can I get the current value of the counter though?
The counter returns the next value and increments every time you call next():
import itertools
x = itertools.count()
print x.next() # 0
print x.next() # 1
print x.next() # 2
So far, so good.
I can't find a way to get the current value of the counter without either calling next(), which has the undesirable side effect of increasing the counter, or using the repr() function.
Following on from the above:
print repr(x) # "count(3)"
So you could parse the output of repr(). Something like
current_value = int(repr(x)[6:-1])
would do the trick, but is really ugly.
Is there a way to get the current value of the counter more directly?
Another hack to get the next value without advancing the iterator is to abuse the copy protocol:
>>> c = itertools.count()
>>> c.__reduce__()[1][0]
0
>>> next(c)
0
>>> c.__reduce__()[1][0]
1
Or just take it from object copy:
>>> from copy import copy
>>> next(copy(c))
1
Use the source, Luke!
According to module implementation, it's not possible.
typedef struct {
    PyObject_HEAD
    Py_ssize_t cnt;
    PyObject *long_cnt;
    PyObject *long_step;
} countobject;
The current state is stored in the cnt and long_cnt members, and neither of them is exposed in the object API. The only place where it may be retrieved is the object's __repr__, as you suggested.
Note that while parsing the string you have to consider the non-default step case: repr(itertools.count(123, 4)) is equal to 'count(123, 4)', so the parsing logic suggested in your question would fail there.
According to the documentation there is no way to access the current value. itertools.count() is an iterator from the itertools module, and it is common practice to simply assign an iterator's current value to a variable.
Simply store the result of the next call:
current_value = x.next()
or (using the built-in next function, available since Python 2.6):
current_value = next(x)
You could make a wrapper function, or a utility decorator class if you would like some added syntactic sugar, but assignment is standard.
It is a generator; it wouldn't be easy to do what you want.
If you want to use its value in several places, I'd recommend getting a value via .next() and storing it in a variable. If you are concerned that the counter may be incremented between these two uses, you'd need to put them both in a critical section anyway.
If you don't want to pollute the counter with additional '+1's generated by those checks, you can use one more counter to count the checks (put this in the critical section too). Subtracting the latter from the former gives you what you need.
Also, are you really sure about thread-safety? The docs page says nothing about threads.
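If thread-safety plus a readable current value is the real requirement, a small lock-based counter sidesteps the problem entirely. This is a sketch, not an itertools API; the class name and interface are invented for illustration:

```python
import threading

class Counter:
    """Counter that, unlike itertools.count, exposes its current value."""

    def __init__(self, start=0, step=1):
        self._lock = threading.Lock()
        self._value = start - step
        self._step = step

    def __next__(self):
        # next(counter) advances and returns the new value, atomically.
        with self._lock:
            self._value += self._step
            return self._value

    @property
    def current(self):
        # Read the last value handed out, without advancing.
        with self._lock:
            return self._value

c = Counter()
print(next(c))    # 0
print(next(c))    # 1
print(c.current)  # 1
```

The lock makes both the increment and the read explicit critical sections, which is exactly what the answer above says you would need anyway.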
Ran into the same thing today. Here's what I ended up with:
class alt_count:
    def __init__(self, start=0, step=1):
        self.current = start - step
        self.step = step

    def __next__(self):
        self.current = self.current + self.step
        return self.current
Should give you almost all the itertools.count functionality plus the current property.
i = alt_count()
print(next(i)) # 0
print(next(i)) # 1
print(i.current) # 1
If the current value is not needed, I found that using this simple closure also works. Note that nonlocal only works in Python 3.
def alt_next_maker(start=0, step=1):
    res = start - step
    def alt_next():
        nonlocal res, step
        res = res + step
        return res
    return alt_next
Can be used as a simple alternative if you don't want to use the itertools module.
alt_next = alt_next_maker()
print(alt_next()) # 0
print(alt_next()) # 1
The docs also mention the following as equivalent:
def count(start=0, step=1):
    # count(10) --> 10 11 12 13 14 ...
    # count(2.5, 0.5) -> 2.5 3.0 3.5 ...
    n = start
    while True:
        yield n
        n += step
This question already has answers here:
Why does += behave unexpectedly on lists?
(9 answers)
Closed 6 years ago.
I wrote two simple pieces of code in Python that should do the same work, but they don't. Can you tell me why this is, and also explain more about "everything is an object in Python"? Thanks :)
def f(p=[]):
    print p
    f.a += 1
    print id(p)
    if f.a == 1:
        p = p + [4]
    else:
        p = p + [5]
    return p

f.a = 0
f()
f()
f()
answer:
[]
40564496
[]
40564496
[]
40564496
def f(p=[]):
    print p
    f.a += 1
    print id(p)
    if f.a == 1:
        p += [4]
    else:
        p += [5]
    return p

#print f()
#print f()
f.a = 0
f()
f()
f()
answer:
[]
40892176
[4]
40892176
[4, 5]
40892176
As you see, the first code creates a new list every time, and the second adds to the last used one...
You should never* use a mutable object as a default value in a Python function, as there will be just one. Consequently, your second function always uses the same list, the one that was instantiated when the function was created. The first one, on the other hand, creates new lists in the p = p + [4] assignments, so there is no mutation of the default arg. Code it this way:
def f(p=None):
    if p is None:
        p = []
    print p
    f.a += 1
    print id(p)
    if f.a == 1:
        p += [4]
    else:
        p += [5]
    return p
* by never I mean "unless you really know what you are doing". There is always an exception from the rule, but if you need to pass a mutable as a default value, you need to understand its consequences very well, thus you would probably not be reading this answer anyway. In particular, it is discouraged in many Python style guides, such as Google python style guide
Do not use mutable objects as default values in the function or method definition.
This should also cause pylint warning W0102
dangerous-default-value (W0102):
Dangerous default value %s as argument Used when a mutable value as list or dictionary is detected in a default value for an argument.
The operators obj += other and obj = obj + other are not the same. The former often uses in-place modification, whereas the latter usually creates a new object. In the case of lists, obj += other is the same as obj.extend(other).
The second important thing is that def statements are only evaluated once per scope. If you define a function at module scope, def is evaluated once - that includes the creation of its default parameters!
def foo(bar=[]):
    print(id(bar))
In this case, bar will always default to the same list object that has been created when def foo was first evaluated.
Whenever you modify the default value, it will carry over to the next time this default is used.
Contrast this with:
def foo(bar=None):
    bar = bar if bar is not None else []
    print(id(bar))
In this case, the default list is recreated every time the function is called.
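The difference between the two spellings is easy to verify directly with the `is` operator (a quick sketch):

```python
a = [1, 2]
alias = a
a += [3]            # in-place: the same list object is extended
print(a is alias)   # True
print(alias)        # [1, 2, 3]

b = [1, 2]
alias = b
b = b + [3]         # rebinding: a brand-new list is created
print(b is alias)   # False
print(alias)        # [1, 2]
```

This is exactly why the += version mutates the shared default list while the p = p + [4] version leaves it alone.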
I am implementing a recursive function in which I need to remember a global value. I will decrement this value in every recursive call and want the change to be reflected in the other recursive calls as well.
Here's a way I've done it.
First way:
global a
a = 3

def foo():
    global a
    if a == 1:
        print 1
        return None
    print a
    a -= 1  # This new 'a' should be available in the next call to foo()
    foo()
The output:
3
2
1
But I want to use another way, because my professor says global variables are dangerous and one should avoid using them.
Also, I am not simply passing the variable 'a' as an argument because 'a' in my actual code just keeps track of some numbers, that is, it tracks the numbering of the nodes I am visiting from first to last. So I don't want to complicate my program by introducing 'a' as an argument in every call.
Please suggest whatever is the best programming practice to solve the above problem.
Don't use a global; just make a a parameter to the function:
def foo(a):
    print a
    if a == 1:
        return None
    foo(a - 1)

foo(3)
Try this: use a parameter instead of a global variable.
Example code:
a = 3

def foo(param):
    if param == 1:
        print 1
        return None
    print param
    foo(param - 1)

foo(a)
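If passing the counter through every call still feels noisy, a conventional compromise is to hide the recursion behind a small wrapper so the public signature stays clean. A sketch (the names visit_all and walk are made up for illustration, not the questioner's actual node-numbering code; Python 3 syntax):

```python
def visit_all(n):
    order = []  # bookkeeping hidden inside the outer function

    def walk(k):
        if k == 0:
            return
        order.append(k)  # the inner function closes over `order`
        walk(k - 1)

    walk(n)
    return order

print(visit_all(3))  # [3, 2, 1]
```

The state lives in the enclosing scope rather than in a module-level global, so each top-level call starts fresh and nothing leaks between calls.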
This question already has answers here:
Does Python make a copy of objects on assignment?
(5 answers)
How do I pass a variable by reference?
(39 answers)
Why can a function modify some arguments as perceived by the caller, but not others?
(13 answers)
Closed last month.
For a project I'm working on, I'm implementing a linked-list data structure, which is based on the idea of a pair that I define as:
class Pair:
    def __init__(self, name, prefs, score):
        self.name = name
        self.score = score
        self.preferences = prefs
        self.next_pair = 0
        self.prev_pair = 0
where self.next_pair and self.prev_pair are pointers to the next and previous links, respectively.
To set up the linked-list, I have an install function that looks like this.
def install(i, pair):
    flag = 0
    try:
        old_pair = pair_array[i]
        while old_pair.next_pair != 0:
            if old_pair == pair:
                #if pair in remainders: remainders.remove(pair)
                return 0
            if old_pair.score < pair.score:
                flag = 1
                if old_pair.prev_pair == 0:  # we are at the beginning
                    old_pair.prev_pair = pair
                    pair.next_pair = old_pair
                    pair_array[i] = pair
                    break
                else:  # we are not at the beginning
                    pair.prev_pair = old_pair.prev_pair
                    pair.next_pair = old_pair
                    old_pair.prev_pair = pair
                    pair.prev_pair.next_pair = pair
                    break
            else:
                old_pair = old_pair.next_pair
        if flag == 0:
            if old_pair == pair:
                #if pair in remainders: remainders.remove(pair)
                return 0
            if old_pair.score < pair.score:
                if old_pair.prev_pair == 0:
                    old_pair.prev_pair = pair
                    pair.next_pair = old_pair
                    pair_array[i] = pair
                else:
                    pair.prev_pair = old_pair.prev_pair
                    pair.next_pair = old_pair
                    old_pair.prev_pair = pair
                    pair.prev_pair.next_pair = pair
            else:
                old_pair.next_pair = pair
                pair.prev_pair = old_pair
    except KeyError:
        pair_array[i] = pair
        pair.prev_pair = 0
        pair.next_pair = 0
Over the course of the program, I am building up a dictionary of these linked-lists, and taking links off of some and adding them in others. Between being pruned and re-installed, the links are stored in an intermediate array.
Over the course of debugging this program, I have come to realize that my understanding of the way Python passes arguments to functions is flawed. Consider this test case I wrote:
def test_install():
    p = Pair(20000, [3, 1, 2, 50], 45)
    print p.next_pair
    print p.prev_pair
    parse_and_get(g)
    first_run()
    rat = len(juggler_array)/len(circuit_array)
    pref_size = get_pref_size()
    print pref_size
    print install(3, p)
    print p.next_pair.name
    print p.prev_pair
When I run this test, I get the following result.
0
0
10
None
10108
0
What I don't understand is why the second call to p.next_pair produces a different result (10108) than the first call (0). install does not return a Pair object that can overwrite the one passed in (it returns None), and it's not as though I'm passing install a pointer.
My understanding of call-by-value is that the interpreter copies the values passed into a function, leaving the caller's variables unchanged. For example, if I say
def foo(x):
    x = x + 1
    return x

baz = 2
y = foo(baz)
print y
print baz
Then 3 and 2 should be printed, respectively. And indeed, when I test that out in the Python interpreter, that's what happens.
I'd really appreciate it if anyone can point me in the right direction here.
In Python, everything is an object. Simple assignment stores a reference to the assigned object in the assigned-to name. As a result, it is more straightforward to think of Python variables as names that are assigned to objects, rather than objects that are stored in named locations.
For example:
baz = 2
... stores in baz a pointer, or reference, to the integer object 2 which is stored elsewhere. (Since the type int is immutable, Python actually has a pool of small integers and reuses the same 2 object everywhere, but this is an implementation detail that need not concern us much.)
When you call foo(baz), foo()'s local variable x also points to the integer object 2 at first. That is, the foo()-local name x and the global name baz are names for the same object, 2. Then x = x + 1 is executed. This changes x to point to a different object: 3.
It is important to understand: x is not a box that holds 2, and 2 is then incremented to 3. No, x initially points to 2 and that pointer is then changed to point to 3. Naturally, since we did not change what object baz points to, it still points to 2.
Another way to explain it is that in Python, all argument passing is by value, but all values are references to objects.
A counter-intuitive result of this is that if an object is mutable, it can be modified through any reference and all references will "see" the change. For example, consider this:
baz = [1, 2, 3]
def foo(x):
x[0] = x[0] + 1
foo(baz)
print baz
>>> [2, 2, 3]
This seems very different from our first example. But in reality, the argument is passed the same way. foo() receives a pointer to baz under the name x and then performs an operation on it that changes it (in this case, the first element of the list is pointed to a different int object). The difference is that the name x is never pointed to a new object; it is x[0] that is modified to point to a different object. x itself still points to the same object as baz. (In fact, under the hood the assignment to x[0] becomes a method call: x.__setitem__().) Therefore baz "sees" the modification to the list. How could it not?
You don't see this behavior with integers and strings because you can't change integers or strings; they are immutable types, and when you "modify" them (e.g. x = x + 1) you are not actually modifying them but binding your variable name to a completely different object. If you change baz to a tuple, e.g. baz = (1, 2, 3), you will find that foo() gives you an error, because you can't assign to elements of a tuple; tuples are another immutable type. "Changing" a tuple requires creating a new one, and assignment then points the variable to the new object.
Objects of classes you define are mutable and so your Pair instance can be modified by any function it is passed into -- that is, attributes may be added, deleted, or reassigned to other objects. None of these things will re-bind any of the names pointing to your object, so all the names that currently point to it will "see" the changes.
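A minimal illustration of that point with a stripped-down Pair (not the questioner's full class; link is a made-up helper):

```python
class Pair:
    def __init__(self, name):
        self.name = name
        self.next_pair = None

def link(p, q):
    p.next_pair = q  # mutates the object p refers to; nothing is returned

a = Pair("a")
b = Pair("b")
link(a, b)
print(a.next_pair.name)  # b -- the caller sees the mutation
```

No value is returned and no name is rebound in the caller, yet a changed, because link and the caller were both holding references to the same object.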
Python does not copy anything when passing variables to a function. It is neither call-by-value nor call-by-reference, but of those two it is more similar to call-by-reference. You could think of it as "call-by-value, but the value is a reference".
If you pass a mutable object to a function, then modifying that object inside the function will affect the object everywhere it appears. (If you pass an immutable object to a function, like a string or an integer, then by definition you can't modify the object at all.)
The reason this isn't technically pass-by-reference is that you can rebind a name so that the name refers to something else entirely. (For names of immutable objects, this is the only thing you can do to them.) Rebinding a name that exists only inside a function doesn't affect any names that might exist outside the function.
In your first example with the Pair objects, you are modifying an object, so you see the effects outside of the function.
In your second example, you are not modifying any objects, you are just rebinding names to other objects (other integers in this case). baz is a name that points to an integer object (in Python, everything is an object, even integers) with a value of 2. When you pass baz to foo(x), the name x is created locally inside the foo function on the stack, and x is set to the pointer that was passed into the function -- the same pointer as baz. But x and baz are not the same thing, they only contain pointers to the same object. On the x = x+1 line, x is rebound to point to an integer object with a value of 3, and that pointer is what is returned from the function and used to bind the integer object to y.
If you rewrote your first example to explicitly create a new Pair object inside your function based on the information from the Pair object passed into it (whether this is a copy you then modify, or if you make a constructor that modifies the data on construction) then your function would not have the side-effect of modifying the object that was passed in.
Edit: By the way, in Python you shouldn't use 0 as a placeholder to mean "I don't have a value" -- use None. And likewise you shouldn't use 0 to mean False, like you seem to be doing in flag. But all of 0, None and False evaluate to False in boolean expressions, so no matter which of those you use, you can say things like if not flag instead of if flag == 0.
I suggest that you forget about implementing a linked list, and simply use an instance of a Python list. If you need something other than the default Python list, maybe you can use something from a Python module such as collections.
A Python loop to follow the links in a linked list will run at Python interpreter speed, which is to say, slowly. If you simply use the built-in list class, your list operations will happen in Python's C code, and you will gain speed.
If you need something like a list but with fast insertion and fast deletion, can you make a dict work? If there is some sort of ID value (string or integer or whatever) that can be used to impose an ordering on your values, you could just use that as a key value and gain lightning fast insert and delete of values. Then if you need to extract values in order, you can use the dict.keys() method function to get a list of key values and use that.
But if you really need linked lists, I suggest you find code written and debugged by someone else, and adapt it to your needs. Google search for "python linked list recipe" or "python linked list module".
I'm going to throw in a slightly complicating factor:
>>> def foo(x):
... x *= 2
... return x
...
Define a slightly different function using a method I know is supported for numbers, lists, and strings.
First, call it with strings:
>>> baz = "hello"
>>> y = foo(baz)
>>> y
'hellohello'
>>> baz
'hello'
Next, call it with lists:
>>> baz=[1,2,2]
>>> y = foo(baz)
>>> y
[1, 2, 2, 1, 2, 2]
>>> baz
[1, 2, 2, 1, 2, 2]
>>>
With strings, the argument isn't modified. With lists, the argument is modified.
If it were me, I'd avoid modifying arguments within methods.