I was getting confused by the purpose of "return" and "yield"
def countMoreThanOne():
    return (yy for yy in xrange(1,10,2))

def countMoreThanOne():
    yield (yy for yy in xrange(1,10,2))
What is the difference between the two functions above?
Is it impossible to access the contents of the function that uses yield?
In the first one you return a generator:
from itertools import chain
def countMoreThanOne():
    return (yy for yy in xrange(1,10,2))
print list(countMoreThanOne())
>>>
[1, 3, 5, 7, 9]
while in the second you are yielding a generator, so you get a generator within a generator:
def countMoreThanOne():
    yield (yy for yy in xrange(1,10,2))
print list(countMoreThanOne())
print list(chain.from_iterable(countMoreThanOne()))
[<generator object <genexpr> at 0x7f0fd85c8f00>]
[1, 3, 5, 7, 9]
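If the goal was for yield to produce the numbers themselves rather than a nested generator, one way (a minimal sketch) is to yield each value from a loop:

def countMoreThanOne():
    for yy in xrange(1, 10, 2):
        yield yy

print list(countMoreThanOne())   # [1, 3, 5, 7, 9]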
If you use a list comprehension instead, the difference can be seen clearly. In the first:
def countMoreThanOne():
    return [yy for yy in xrange(1,10,2)]
print countMoreThanOne()
>>>
[1, 3, 5, 7, 9]
In the second:

def countMoreThanOne1():
    yield [yy for yy in xrange(1,10,2)]
print countMoreThanOne1()
>>>
<generator object countMoreThanOne1 at 0x7fca33f70eb0>
After reading your other comments I think you should write the function like this:
def countMoreThanOne():
    return xrange(1, 10, 2)
>>> print countMoreThanOne()
xrange(1, 11, 2)
>>> print list(countMoreThanOne())
[1, 3, 5, 7, 9]
or, even better, give the function a parameter so there is some point in making it a function:
def oddNumbersLessThan(stop):
    return xrange(1, stop, 2)
>>> print list(oddNumbersLessThan(15))
[1, 3, 5, 7, 9, 11, 13]
When I apply multiprocessing.pool.map to a list object, the list object is not affected:
from multiprocessing import Pool
def identity(x):
    return x
num_list = list(range(0, 10))
print("before multiprocessing:")
with Pool(10) as p:
    print(p.map(identity, num_list))
print("after multiprocessing:")
print(list(num_list))
prints
before multiprocessing:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
after multiprocessing:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
But when I apply multiprocessing.pool.map to a map object, it seems to get erased:
from multiprocessing import Pool
def identity(x):
    return x
num_list = list(range(0, 10))
num_list = map(identity, num_list)
print("before multiprocessing:")
with Pool(10) as p:
    print(p.map(identity, num_list))
print("after multiprocessing:")
print(list(num_list))
prints
before multiprocessing:
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
after multiprocessing:
[]
The only difference is num_list = map(identity, num_list).
Does num_list (the map object) get erased by multiprocessing.pool.map?
I'm not sure about this but I couldn't find another explanation.
The map function returns an iterator. After p.map() has traversed the last element of the map object, accessing the map object again returns nothing. That is how iterators work: they can be consumed only once.
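A minimal sketch of the same effect, without multiprocessing at all (on Python 3, where map returns a lazy iterator), just to show that it is the iterator, not the pool, that empties the object:

num_list = map(lambda x: x, range(5))   # a map object, i.e. an iterator

print(list(num_list))   # [0, 1, 2, 3, 4] -- the first traversal consumes it
print(list(num_list))   # []              -- the iterator is now exhausted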
What's happening here is that the first and second elements of every tuple are multiplied, and all of the products are added up at the end. I know how to enter it in the Python shell, but how do I write it out as a function? Thanks for the help.
>>> x = [(70.9, 1, 24.8),
(15.4, 2, 70.5),
(30.0, 3, 34.6),
(25.0, 4, 68.4),
(45.00, 5, 99.0)]
>>> result = (a[0]*a[1] for a in x)
>>> sum(result)
516.7
Create the function:
def my_func(x):
    result = (a[0]*a[1] for a in x)
    return sum(result)
Call the function:
x = [(70.9, 1, 24.8),
(15.4, 2, 70.5),
(30.0, 3, 34.6),
(25.0, 4, 68.4),
(45.00, 5, 99.0)]
my_func(x)
Result will be 516.7
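Equivalently, tuple unpacking in the generator expression spells out that only the first and second elements of each tuple are used (just a stylistic variant of the function above):

def my_func(x):
    # multiply the first and second element of each tuple; the third is ignored
    return sum(first * second for first, second, _ in x)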
Using the numpy package's dot product, we can also achieve this easily:
import numpy as np
x = [(70.9, 1, 24.8),(15.4, 2, 70.5),(30.0, 3, 34.6),(25.0, 4, 68.4),(45.00, 5, 99.0)]
def func(data):
    numpyArray = np.array(data)
    # dot product of the first column with the second column
    mul = np.dot(numpyArray[:, 0], numpyArray[:, 1])
    print(mul)
    return mul
func(x)
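With the x defined above, func(x) should print 516.7 as well (70.9*1 + 15.4*2 + 30.0*3 + 25.0*4 + 45.0*5), matching the generator-expression version.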
I'm trying to make an iterator that prints the repeating sequence
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1, 2, 3, 4, 5, 6, 7, ...
I want an iterator so I can use .next(), and I want it to loop around to 0 when .next() is called while the iterator is at 9.
But the thing is that I'll probably have a lot of these, so I don't just want to do itertools.cycle([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]).
I don't want to have that many repeated lists of the same sequence in memory. I'd rather have the function (x + 1) % 10 in each of the iterators and just have the iterator increment x every time next is called. I can't seem to figure out how to do this with itertools, though. Is there a pythonic way of doing this?
You can write a generator that uses range:
def my_cycle(start, stop, step=1):
    while True:
        for x in range(start, stop, step):
            yield x
c = my_cycle(0, 10)
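A quick check of the wrap-around behaviour (the expected values are shown in the comment):

for _ in range(12):
    print(next(c))   # 0, 1, ..., 9, then wraps back to 0, 1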
You can use your own custom generator:
def cycle_range(xr):
    while True:
        for x in xr:
            yield x
Assuming you are on Python 2, use:
xr = xrange(10)
it1 = cycle_range(xr)
it2 = cycle_range(xr)

Both iterators reuse the same xrange object, for memory efficiency.
This is one way via itertools:
import itertools
def counter():
    for i in itertools.count():
        yield i % 10
g = counter()
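Checked by pulling a few values off the generator (the modulo wraps the count back to 0 after 9):

print([next(g) for _ in range(12)])   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1]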
You can use a custom generator like this:
def single_digit_ints():
    i = 0
    while True:
        yield i
        i = (i + 1) % 10
for i in single_digit_ints():
    # ...
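Since the generator is infinite, itertools.islice is a handy way to peek at just the first few values:

from itertools import islice

print(list(islice(single_digit_ints(), 12)))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 1]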
I am trying to create a method (sum) that takes a variable number of vectors and adds them in. For educational purposes, I have written my own Vector class, and the underlying data is stored in an instance variable named data.
My code for the @classmethod sum works (for each of the vectors passed in, loop through each element in the data variable and add it to a result list), but it seems non-Pythonic, and I am wondering if there is a better way.
class Vector(object):
    def __init__(self, data):
        self.data = data

    @classmethod
    def sum(cls, *args):
        result = [0 for _ in range(len(args[0].data))]
        for v in args:
            if len(v.data) != len(result):
                raise ValueError("all vectors must have the same length")
            for i, element in enumerate(v.data):
                result[i] += element
        return cls(result)
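For reference, the intended usage of the classmethod above looks like this (assuming the class as written):

v1 = Vector([1, 2, 3])
v2 = Vector([4, 5, 6])
print(Vector.sum(v1, v2).data)   # [5, 7, 9]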
itertools.izip_longest may come in very handy in your situation:
a = [1, 2, 3, 4]
b = [1, 2, 3, 4, 5, 6]
c = [1, 2]
lists = (a, b, c)
result = [sum(el) for el in itertools.izip_longest(*lists, fillvalue=0)]
And here you got what you wanted:
>>> result
[3, 6, 6, 8, 5, 6]
What it does is simply zip your lists together, filling missing values with 0, e.g. izip_longest(a, c) would be [(1, 1), (2, 2), (3, 0), (4, 0)]. Then it just sums up all the values in each tuple element of the intermediate list.
So here you go step by step:
>>> lists
([1, 2, 3, 4], [1, 2, 3, 4, 5, 6], [1, 2])
>>> list(itertools.izip_longest(*lists, fillvalue=0))
[(1, 1, 1), (2, 2, 2), (3, 3, 0), (4, 4, 0), (0, 5, 0), (0, 6, 0)]
So if you run a list comprehension, summing up all sub-elements, you get your result.
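Applied to the Vector class from the question, the same idea could look like the sketch below; note that, unlike the original code, it pads shorter vectors with zeros instead of raising:

from itertools import izip_longest   # zip_longest on Python 3

class Vector(object):
    def __init__(self, data):
        self.data = data

    @classmethod
    def sum(cls, *args):
        # zip the data lists together, padding shorter vectors with 0
        return cls([sum(el) for el in izip_longest(*(v.data for v in args), fillvalue=0)])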
Another thing that you could do (and that might be more "pythonic") would be to implement the __add__ magic method, so you can use + and sum directly on vectors.
class Vector(object):
    def __init__(self, data):
        self.data = data

    def __add__(self, other):
        if isinstance(other, Vector):
            return Vector([s + o for s, o in zip(self.data, other.data)])
        if isinstance(other, int):
            return Vector([s + other for s in self.data])
        raise TypeError("can not add %s to vector" % other)

    def __radd__(self, other):
        return self.__add__(other)

    def __repr__(self):
        return "Vector(%r)" % self.data
Here, I also implemented addition of a Vector and an int, adding the number to each of the Vector's data elements, and the "reverse addition" __radd__, to make sum work properly.
Example:
>>> v1 = Vector([1,2,3])
>>> v2 = Vector([4,5,6])
>>> v3 = Vector([7,8,9])
>>> v1 + v2 + v3
Vector([12, 15, 18])
>>> sum([v1,v2,v3])
Vector([12, 15, 18])
args = [[1, 2, 3],
        [10, 20, 30],
        [7, 3, 15]]
result = [sum(data) for data in zip(*args)]
# [18, 25, 48]
Is this what you want?
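For the Vector question above, the same zip pattern drops straight into the classmethod when all vectors are known to have equal length (a sketch reusing the question's data attribute):

class Vector(object):
    def __init__(self, data):
        self.data = data

    @classmethod
    def sum(cls, *args):
        # zip the data lists element-wise and sum each column
        return cls([sum(column) for column in zip(*(v.data for v in args))])

print(Vector.sum(Vector([1, 2, 3]), Vector([10, 20, 30]), Vector([7, 3, 15])).data)   # [18, 25, 48]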
The original code was somewhat complex, so I have simplified it to this:
Given:
a list of class instances, e.g. l=[c1,c2,c3, ...]
each instance has a list member variable, e.g. c1.memList=[3,2,5], c2.memList=[1,2]
Todo:
select those instances in l whose memList contains only multiples of 3, e.g. c3.memList=[3,6,9,3,27]
I thought I could code it like this:
newl = [ n for n in l if len( [m for m in n.memList if m%3] )==0 ]
But the list comprehension does not allow this, complaining that 'm is not defined'.
Question: how to code this in a pythonic way?
New edit: sorry, I made a typo (I mistyped if as in); it works now. I will propose to close this question.
I did not get any error concerning 'm is not defined'; the reason must be outside of this snippet.
newl = [ n for n in l if all([ m % 3 == 0 for m in n.memList]) ]
I would recommend something like this; the all() function improves readability. It is always good to use the list syntax because it speeds up the calculation.
The code you have given works for me! I'm not sure what problem you are having. However, I would write my list comprehension a bit differently:
[n for n in l if not any(m % 3 for m in n.memList)]
Tested:
>>> class Obj(object):
... def __init__(self, name, a):
... self.name = name
... self.memList = a
... def __repr__(self):
... return self.name
...
>>> objs = [Obj('a', [3, 2, 5]), Obj('b', [3, 6, 9, 3, 27])]
>>> [n for n in objs if not any(m for m in n.memList if m % 3)]
[b]
This, I think, is what you are looking for. It is much more verbose than your method, but Python emphasizes readability.
class c(object):
    def __init__(self, memlist):
        self.m = memlist

c1 = c([3,6,9])
c2 = c([1,5,7])
l = [c1,c2]

newl = []
for n in l:
    b = True
    for x in n.m:
        if x % 3 != 0:
            b = False
    if b != False:
        newl.append(n)
Here is what I've got, with
c1.memList = [0, 1, 2, 3]
c2.memList = [0, 1]
c3.memList = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]
and this code:
for y in l:
    newl = []
    for m in y.memList:
        if m%3 == 0:
            newl.append(m)
    print newl
I get this as a result:
[0, 3]
[0]
[0, 3, 6, 9, 12, 15, 18, 21, 24, 27]