I need help with this code, please take a look:
import math
dose = 20.0
a = [[[2, 3, 4], [5, 8, 9], [12, 56, 32]],
     [[25, 36, 45], [21, 65, 987], [21, 58, 89]],
     [[78, 21, 98], [54, 36, 78], [23, 12, 36]]]
PAC = math.exp(-dose*a)
This is what I would like to do. However, the error I am getting is:
TypeError: only length-1 arrays can be converted to Python scalars
If you want to perform mathematical operations on arrays (whatever their dimensions...), you should really consider using NumPy, which is designed just for that. In your case, the corresponding NumPy command would be:
PAC = np.exp(-dose * np.array(a))
If NumPy is not an option, you'll have to loop over each element of a, compute math.exp, and store the result in a list... really cumbersome and inefficient. That's because the math module's functions require a scalar as input (as the exception told you), whereas you're passing a list (of lists). You can combine all the loops in a single list comprehension, though:
PAC = [[[math.exp(-dose*j) for j in elem] for elem in row] for row in a]
but once again, I would strongly recommend NumPy.
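For completeness, here is a minimal self-contained sketch of the NumPy approach, using the corrected a from the question:
import numpy as np

dose = 20.0
a = [[[2, 3, 4], [5, 8, 9], [12, 56, 32]],
     [[25, 36, 45], [21, 65, 987], [21, 58, 89]],
     [[78, 21, 98], [54, 36, 78], [23, 12, 36]]]

# np.exp is applied element-wise to the whole 3-D array at once
PAC = np.exp(-dose * np.array(a))
print(PAC.shape)   # (3, 3, 3)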
You should really use NumPy for that.
And here is how you should do it using nested loops:
>>> for item in a:
...     for sub in item:
...         for idx, number in enumerate(sub):
...             print number, math.exp(-dose*number)
...             sub[idx] = math.exp(-dose*number)
Using enumerate and assigning back to sub[idx] changes the numbers in place, so no new lists have to be built with append. If you want to keep a copy of a, make a deep copy; a plain a[:] only copies the outer list, so the modified inner lists would still be shared:
import copy
acopy = copy.deepcopy(a)
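A quick sketch of why the deep copy matters here; a shallow copy shares the inner lists that the in-place loop modifies:
import copy

orig = [[1, 2], [3, 4]]
shallow = orig[:]            # new outer list, same inner lists
deep = copy.deepcopy(orig)   # copies the inner lists as well

orig[0][0] = 99
print(shallow[0][0])   # 99 -- the inner list is shared
print(deep[0][0])      # 1  -- fully independent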
If you don't have many numbers and NumPy is overkill, the above can be done a tiny bit faster using list comprehensions.
If you want each element of the array multiplied by -dose and math.exp applied to the result, you need a loop:
new_a = []
for matrix in a:
    new_matrix = []
    for row in matrix:
        new_row = []
        for element in row:
            new_row.append(math.exp(-dose*element))
        new_matrix.append(new_row)
    new_a.append(new_matrix)
Alternatively, if you have a MATLAB background, you could look into NumPy, which enables transformations on whole arrays.
Related
So I was wondering how to best create a list of blank lists:
[[],[],[]...]
Because of how Python works with lists in memory, this doesn't work:
[[]]*n
This does create [[],[],...] but each element is the same list:
d = [[]]*n
d[0].append(1)
#[[1],[1],...]
Something like a list comprehension works:
d = [[] for x in xrange(0,n)]
But this uses the Python VM for looping. Is there any way to use an implied loop (taking advantage of it being written in C)?
d = []
map(lambda _: d.append([]), xrange(n))
This is actually slower. :(
Probably the only way which is marginally faster than
d = [[] for x in xrange(n)]
is
from itertools import repeat
d = [[] for i in repeat(None, n)]
It does not have to create a new int object on every iteration and is about 15% faster on my machine.
Edit: Using NumPy, you can avoid the Python loop using
d = numpy.empty((n, 0)).tolist()
but this is actually 2.5 times slower than the list comprehension.
The list comprehensions actually are implemented more efficiently than explicit looping (compare the dis output for example functions), and the map way has to invoke an opaque callable object on every iteration, which incurs considerable overhead.
Regardless, [[] for _dummy in xrange(n)] is the right way to do it, and none of the tiny (if existent at all) speed differences between the various other ways should matter. Unless of course you spend most of your time doing this - but in that case, you should work on your algorithms instead. How often do you create these lists?
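If you want to check these numbers on your own machine, a minimal timeit sketch along these lines will do; the exact figures will vary by machine and Python version:
import timeit

n = 10000
setup = "from itertools import repeat; import numpy; n = %d" % n

for stmt in ["[[] for x in range(n)]",
             "[[] for i in repeat(None, n)]",
             "numpy.empty((n, 0)).tolist()"]:
    # time each construction 1000 times and print the total
    print("%-32s %.3f" % (stmt, timeit.timeit(stmt, setup=setup, number=1000)))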
Here are two methods: one sweet, simple and conceptual; the other more formal and extensible to a variety of situations, for after you have read in a dataset.
Method 1: Conceptual
X2=[]
X1=[1,2,3]
X2.append(X1)
X3=[4,5,6]
X2.append(X3)
X2 thus holds [[1,2,3],[4,5,6]], i.e. a list of lists.
Method 2: Formal and extensible
Another way is to build the list of lists from numbers read from a file (the file here holds the dataset train).
train is a dataset with, say, 50 rows and 20 columns: train[0] gives the 1st row of the csv file, train[1] the 2nd row, and so on. I want to gather the 50 rows into one list of lists, except for column 0, which is my explained variable and must therefore be removed from the original train dataset. Here's the code that does that.
Note that the inner loop starts at 1 since I am only interested in the explanatory variables. Also, X1=[] is re-initialized inside the outer loop; otherwise X2.append(X1) would keep appending the same X1 over and over - besides, it is more memory efficient.
X2 = []
for j in range(0, len(train)):
    X1 = []
    for k in range(1, len(train[0])):
        txt2 = train[j][k]
        X1.append(txt2)
    X2.append(X1)
To create a list of empty lists, use the syntax below:
x = [[] for i in range(10)]
This creates a one-dimensional list of 10 empty lists. To initialize the inner lists with a value, put the value inside the inner brackets, and set the length of the outer list with range(length).
To create a list of lists pre-filled with values, use the syntax below:
x = [[0 for j in range(3)] for i in range(10)]
This initializes a 10x3 list of lists with the value 0.
To access/manipulate an element:
x[1][2] = value
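By contrast, a sketch of the aliasing pitfall if the rows are built with * instead of a comprehension (the same trap as [[]]*n above):
bad = [[0] * 3] * 10          # 10 references to ONE row
bad[1][2] = 42
print(bad[0])                  # [0, 0, 42] -- every row changed

good = [[0] * 3 for i in range(10)]
good[1][2] = 42
print(good[0])                 # [0, 0, 0] -- rows are independent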
So I did some speed comparisons to get the fastest way.
List comprehensions are indeed very fast. The only way to get close is to avoid bytecode getting executed during construction of the list.
My first attempt was the following method, which would appear to be faster in principle:
l = [[]]
for _ in range(n): l.extend(map(list,l))
(produces a list of length 2**n, of course)
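A quick sanity check of the doubling construction with a small n shows that the inner lists really are distinct objects (note the list() wrapper, needed on Python 3 where map is lazy and would otherwise be consumed while l grows):
n = 3
l = [[]]
for _ in range(n):
    l.extend(list(map(list, l)))   # each pass copies every existing list once
print(len(l))        # 8 == 2**3
print(l[0] is l[1])  # False -- independent lists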
This construction is twice as slow as the list comprehension, according to timeit, for both short and long (a million entries) lists.
My second attempt was to use starmap to call the list constructor for me. This construction appears to run the list constructor at top speed, but is still slower, though only by a tiny amount:
from itertools import starmap
l = list(starmap(list,[()]*(1<<n)))
Interestingly enough, the execution time suggests that it is the final list call that makes the starmap solution slow, since its execution time is almost exactly equal to the speed of:
l = list([] for _ in range(1<<n))
My third attempt came when I realized that list(()) also produces a list, so I tried the apparently simple:
l = list(map(list, [()]*(1<<n)))
but this was slower than the starmap call.
Conclusion: for the speed maniacs:
Do use the list comprehension.
Only call functions if you have to.
Use builtins.
I want to create dynamic arrays inside a dynamic array because I don't know how many lists it will take to get to the actual result. So, using Python 2.x, when I write
Arrays = [[]]
does this mean there is only one dynamic array inside the array, or can it hold more than one when I access it in a for loop like Arrays[i]?
If it's not the case do you know a different method?
You can just define
Arrays = []
It is enough to hold your dynamic arrays.
AnotherArray1 = []
AnotherArray2 = []
Arrays.append(AnotherArray1)
Arrays.append(AnotherArray2)
print Arrays
Hope this solves your problem!
Consider using
Arrays = []
and later, when you are assigning your results use
Arrays.append([result])
This assumes that your result comes in slices, not as an array. Whatever your actual return value layout, a variation of the above .append() should do the trick, as it allows you to dynamically extend your array. If your result comes as an array, it would simply be
Arrays.append(result)
and so on
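A minimal sketch of the whole pattern, growing the outer list one inner list at a time inside a loop (the loop bound and values are just for illustration):
Arrays = []
for i in range(5):             # number of sub-lists not known in advance
    Arrays.append([])          # add a fresh inner list on demand
    Arrays[i].append(i * i)    # fill the newest inner list
print(Arrays)                  # [[0], [1], [4], [9], [16]]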
If your array is going to be sparse, that is, has a lot of empty elements, you can consider using a dict with coordinates as keys instead of nested lists:
grid = {}
grid[(x, y)] = value
print(grid)
output: {(x, y): value}
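For example, with made-up coordinates and values:
grid = {}
grid[(2, 5)] = 'tree'
grid[(40, 12)] = 'rock'

# cells that were never set are simply absent; use .get for a default
print(grid.get((2, 5), 'empty'))   # 'tree'
print(grid.get((0, 0), 'empty'))   # 'empty'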
I imported a csv file containing zip codes as strings using the following line:
my_data = genfromtxt(r'path\to\file.csv', delimiter=',', dtype=str, autostrip=True)
I am importing as strings in order to keep the leading zeroes some zip codes may contain. Now I also need to loop through the entire numpy array, which I wanted to do like this:
for i in np.nditer(my_data):
    do something with my_data[i]
But unfortunately it is returning the following error:
Arrays used as indices must be of integer (or boolean) type
Any idea how I can loop through each element of this numpy array?
While looping over NumPy arrays is often not a good solution, you can do it like this:
for i in range(len(my_data)):
    do something with my_data[i]
You might be better off reading your data into a list, processing the strings there, and converting to a NumPy array afterwards.
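A sketch of that suggestion, assuming a one-column csv of zip codes; the path, column position, and the zfill processing step are all made up for illustration:
import csv
import numpy as np

with open('path/to/file.csv') as f:
    zips = [row[0].strip() for row in csv.reader(f)]

# process the strings as plain Python, then convert once at the end
zips = [z.zfill(5) for z in zips]   # e.g. restore leading zeros
my_data = np.array(zips)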
You should do something with i, not with my_data[i]: i is already your element (a part of my_data).
That's why my_data[i] is not working; i is not an index, it is a (zero-dimensional) numpy array.
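For example, a minimal sketch with a few made-up zip codes:
import numpy as np

my_data = np.array(['00501', '02134', '99501'])

# nditer yields the elements themselves, so use them directly
for value in np.nditer(my_data):
    print(value)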
If you want to use the index as well as the element, use enumerate().
Example:
lista = [20, 50, 70]
for idx, element in enumerate(lista):
    print (idx, element)
For more info, see the numpy iteration tutorial.
I have a table and want to use numpy to slice it into sections:
table = ['212:3:0:70.13911:-89.85361:3', '212:3:1:70.28725:-89.77466:7', '212:3:2:70.39231:-89.74908:9', '212:3:3:70.48806:-89.6414:11', '212:3:4:70.60366:-89.51539:14', '212:3:5:70.60366:-89.51539:14', '212:3:6:70.66518:-89.4048:16']
t = np.asarray(table, dtype='object')
I want the 212:3:0, 212:3:1, ... prefixes as k, and the full rows '212:3:0:70.13911:-89.85361:3', '212:3:1:70.28725:-89.77466:7', ... as v, in a dictionary of (k, v) pairs. I don't want to use a for-loop to do this...
I have done this with a for-loop and it's slow.
NOTE: the rows contain ':' characters, but those ':' are not the dict's ':'.
Basics of dict comprehensions
To convert something into a dict, you need to make it into an iterable that generates 2-sequences (anything that generates a sequence of two elements), like [[1,2],[3,4]] or [(1,2),(3,4)] or zip([1,2,3,4], [5,6,7,8])
E.g.
>>> mylst = [(1,2), (3,4), (5,6)]
>>> print dict(mylst)
{1: 2, 3: 4, 5: 6}
so you need to split each of your strings in such a way that you produce a (key, value) tuple. Say you've already written a function that does this, called split_item, which takes one row string and returns a tuple. You could then write a generator function like the following, so that you don't need to load everything into memory until you create the dict:
def generate_tuples(table):
    for row in table:
        yield split_item(row)
then just call the dict builtin on your generator function.
>>> dict(generate_tuples(table))
Since you say you already wrote this with a for-loop, I'm guessing you already have a split_item function written.
Making it fast
Here's a guide to high-performance Python, written by Ian Ozsvald, that can help you experiment with other ways to increase the speed of processing (credit to @AndrewWalker's SO post).
Is this what you're after?
dict((t.rsplit(':', 3)[0], t) for t in table)
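To see why this works: rsplit(':', 3) splits at most three times from the right, so the first piece is exactly the 212:3:N prefix. A quick sketch on the first two rows of the sample table:
table = ['212:3:0:70.13911:-89.85361:3', '212:3:1:70.28725:-89.77466:7']

print(table[0].rsplit(':', 3))   # ['212:3:0', '70.13911', '-89.85361', '3']

d = dict((t.rsplit(':', 3)[0], t) for t in table)
print(d['212:3:1'])              # '212:3:1:70.28725:-89.77466:7'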