I'm using the great quantities package for Python. I would like to know how I can get at just the numerical value of the quantity, without the unit.
I.e., if I have
E = 5.3*quantities.joule
I would like to get at just the 5.3. I know I can simply divide by the "undesired" unit, but I am hoping there is a better way to do this.
E.item() seems to be what you want, if you want a Python float. E.magnitude, offered by tzaman, is a 0-dimensional NumPy array with the value, if you'd prefer that.
The documentation for quantities doesn't seem to have a very good API reference.
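For reference, here is a minimal sketch of the difference between the two (assuming the quantities package is installed and importable):
import quantities as pq

E = 5.3 * pq.joule
E.item()            # 5.3, a plain Python float
E.magnitude         # a 0-dimensional NumPy array holding 5.3
float(E.magnitude)  # 5.3 again, if you need the plain float from the array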
I believe E.magnitude gets you what you want.
>>> import quantities
>>> E=5.3*quantities.joule
>>> E.magnitude
array(5.3)
I need to do symbolic manipulations on very large systems of equations, and I end up with well over 200 variables that I need to do computations with. The problem is, one would usually name their variables x, y, and possibly z when solving a small system of equations. Even starting at a, b, ... you only get 26 unique variables this way.
Is there a nice way of fixing this problem? Say for instance I wanted to fill up a 14x14 matrix with a different variable in each spot. How would I go about doing this?
You could use symbolic matrices via MatrixSymbol
>>> from sympy import MatrixSymbol
>>> A = MatrixSymbol('A', 14, 14)
This can be accessed as you would expect
>>> A[2, 3]
A[2, 3]
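If you later need the entries as sympy expressions, here is a small sketch of what I believe works (the expression is just for illustration):
from sympy import MatrixSymbol

A = MatrixSymbol('A', 14, 14)

# as_explicit() expands the symbolic matrix into an explicit matrix of A[i, j] entries
A_explicit = A.as_explicit()

# individual entries behave like sympy expressions, e.g. they can be substituted
expr = A[2, 3] + A[0, 0]
print(expr.subs(A[2, 3], 5))   # A[0, 0] + 5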
I think the most straightforward way to do this is to use sympy.symarray, like so:
import sympy
x = sympy.symarray("x", (5, 5, 5))
This creates a NumPy array of the given shape - here 5x5x5 - filled with sympy symbols. The symbols are prefixed with whatever name you chose (here "x") and carry one index per dimension you provided (here three). Of course you can make as many of these arrays as you need - perhaps it makes sense to use different prefixes for different groups of variables for readability etc.
You can then use these in your code by using e.g. x[i,j,k]:
In [6]: x[0,1,4]
Out[6]: x_0_1_4
(Note that you cannot access the elements via x_i_j_k - I found this a bit counterintuitive when I started using sympy, but once you get the hang of Python vs. sympy variables, it makes perfect sense.)
You can of course also use slicing on the array, e.g. x[:,0,0].
If you need a python list of your variables, you can use e.g. x.flatten().tolist().
This is in my opinion preferable to using sympy.MatrixSymbol because (a) you get to decide the number of indices you want, and (b) the elements are "normal" sympy.Symbols, so you can be sure you can do anything with them that you could do with individually declared symbols.
(I'm not sure this is still the case in sympy 1.1, but in sympy 1.0 it used to be that not all functionality was implemented for MatrixElement.)
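For the 14x14 matrix from the question, a minimal sketch along these lines (the prefix "a" is arbitrary):
import sympy

a = sympy.symarray("a", (14, 14))   # a 14x14 NumPy array of symbols a_0_0 ... a_13_13

# wrap it in a sympy Matrix if you want to do matrix algebra with it
M = sympy.Matrix(a.tolist())
print(M[0, 1])    # a_0_1
print(M.shape)    # (14, 14)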
I'd recommend the package numpy so you can use NumPy arrays.
# import statement
import numpy as np
# instantiate a NumPy array (matrix) with 14 rows and 14 columns
variableMatrix = np.zeros((14,14))
Note that `np.zeros((14,14))` will fill the matrix with zeros; you can replace each element with your desired value later. Notice that the extra pair of parentheses in the function call is necessary, because the shape is passed as a single tuple!
You can access the i,jth element of the matrix using the syntax variableMatrix[i-1,j-1]. I subtracted one from the index since Python indexing starts at 0 of course.
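For example, here is a quick sketch of filling in and reading back one entry (the value is arbitrary):
import numpy as np

variableMatrix = np.zeros((14, 14))

# set the entry in row 3, column 5 (1-based), i.e. index [2, 4] (0-based)
variableMatrix[2, 4] = 7.5
print(variableMatrix[2, 4])   # 7.5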
I noticed that numpy has a built-in function linalg.norm(vector), which produces the magnitude (Euclidean norm). For small values I get the desired output:
>>> import numpy as np
>>> np.linalg.norm([0,2])
2.0
However for large values:
>>> np.linalg.norm([0,149600000000])
2063840737.6330884
This is a huge error. Making my own function produces the same result. What is the problem here? Can a rounding error really be this big, and what can I do instead?
Your number is written as an integer, yet it is too big to fit into a numpy.int32, so the array overflows. This problem seems to happen even in Python 3, where the native integers are arbitrarily large.
In numerical work I try to make everything floating point unless it is an index. So I tried:
In [3]: np.linalg.norm([0.0,149600000000.0])
Out[3]: 149600000000.0
To elaborate: in this case, adding the .0 was an easy way of turning integers into doubles. In more realistic code, you might have incoming data of uncertain type. The safest (but not always the right) thing to do is to coerce it to a floating-point array at the top of your function.
def do_something_with_array(arr):
    arr = np.double(arr)  # or np.float32 if you prefer
    # ... do something ...
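Applied to the numbers from the question, a quick sketch of how the coercion avoids the overflow:
import numpy as np

values = [0, 149600000000]

# coerce to float64 before taking the norm
print(np.linalg.norm(np.asarray(values, dtype=np.float64)))   # 149600000000.0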
I am running a function developed by Esri to get a list of values from an integer column of a spatial table (however, the same behaviour is observed even when running the function on a non-spatial table). According to the help, I should get a NumPy structured array. After running the function, I have a numpy array. I run print in this format:
in_table = r"C:\geodb101#server.sde\DataTable" #
data = arcpy.da.TableToNumPyArray(in_table, "Field3")
print data
Which gives me back this in the IDE (copied and pasted from the interpreter):
[(20130825,) (20130827,) (20130102,)]
I am running:
allvalues = data.tolist()
and getting:
[(20130825,), (20130827,), (20130102,)]
Same result when running data.reshape(len(data)).tolist() as suggested in comments.
Running type() lets me know that in the first case it is <type 'numpy.ndarray'> and in the second case <type 'list'>. I am expecting to get my output list in another format [20130825, 20130827, 20130102]. What am I doing wrong or what else should I do to get the output list in the specified format?
I have a possible approach, but I'm not 100% sure it will work, as I can't figure out how you got tuples into an array (when I tried to create an array of tuples, it looks like the tuples got converted to arrays). In any case, give this a shot:
my_list = map(lambda x: x[0], my_np_array_with_tuples_in_it)
This assumes you're dealing specifically with the single element tuples you describe above. And like I said, when I tried to recreate your circumstances, numpy did some conversion moves that I don't fully understand (not really a numpy expert).
Hope that helps.
Update: Just saw the new edits. Not sure if my answer applies anymore.
Update 2: Glad that worked, here's a bit of elaboration.
Lambda is basically just an inline function, and is a construct common in a lot of languages. It's essentially a temporary, anonymous function. You could have just as easily done something like this:
def my_main_func():
    def extract_tuple_value(tup):
        return tup[0]

    my_list = map(extract_tuple_value, my_np_array_with_tuples_in_it)
But as you can see, the lambda version is more concise. The "x" in my initial example is the equivalent of "tup" in the more verbose example.
Lambda expressions are generally limited to very simple operations, basically one line of logic, which is what is returned (there is no explicit return statement).
Update 3: After chatting with a buddy and doing some research, list comprehension is definitely the way to go (see Python List Comprehension Vs. Map).
From acushner's comment below, you can definitely go with this instead:
my_list = [tup[0] for tup in my_np_array_with_tuples_in_it]
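For illustration, here is a small self-contained sketch of that comprehension on data shaped like the question's (the structured array is built by hand to mimic the TableToNumPyArray output):
import numpy as np

# mimic the structured array from the question; "Field3" is the field name used there
data = np.array([(20130825,), (20130827,), (20130102,)], dtype=[("Field3", "i8")])

my_list = [tup[0] for tup in data]
print(my_list)   # [20130825, 20130827, 20130102]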
I have a matlab code that I'm trying to translate in python.
I'm new to Python, but I have been able to answer a lot of questions by googling a little bit.
But now, I'm trying to figure out the following:
I have a for loop in which I apply different things to each column, but I don't know the number of columns. For example, in MATLAB nothing is easier than this:
for n = 1:size(x,2); y(n) = mean(x(:,n)); end
But I have no idea how to do it on python when, for example, the number of columns is 1, because I can't do x[:,1] in python.
Any idea?
Thanks!
Yes, if you use numpy you can use x[:,1], and you also get other data structures (arrays instead of lists). The main difference between MATLAB and numpy is that MATLAB uses matrices for its calculations while numpy uses n-dimensional arrays, but you get used to it. I think this guide will help you out.
Try numpy. It provides Python bindings for a high-performance math library written in C. I believe it has the same concept of matrix slicing operations, and it is significantly faster than the equivalent code written in pure Python (in most cases).
Regarding your example, I think the closest would be something using numpy.mean.
In pure Python it is hard to calculate the mean of a column, but if you are able to transpose the matrix you could do it using something like this:
# there is no builtin avg function
def avg(lst):
    return sum(lst) / float(len(lst))  # float() avoids integer division on Python 2

# assuming `a` is the transposed matrix, each row here is a column of the original
col_means = [avg(row) for row in a]
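For completeness, zip(*x) is an easy way to get that transpose in pure Python; a small sketch with made-up sample data:
x = [[1, 2, 3],
     [2, 3, 4]]

# zip(*x) transposes the nested list, so each tuple is one column of x
col_means = [sum(col) / float(len(col)) for col in zip(*x)]
print(col_means)   # [1.5, 2.5, 3.5]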
This is one way to do it:
import numpy as np

x = np.matrix([[1, 2, 3], [2, 3, 4]])
[np.mean(x[:, n]) for n in range(np.shape(x)[1])]
# [1.5, 2.5, 3.5]
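As a side note, NumPy can compute all the column means in a single call, without looping over columns; a minimal sketch:
import numpy as np

x = np.array([[1, 2, 3], [2, 3, 4]])
print(x.mean(axis=0))   # [1.5 2.5 3.5]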
How do I trim the output of the Python pyenchant module's suggested-words list?
Quite often it gives me a huge list of 20 suggested words that looks awkward when displayed on the screen and also has a tendency to run off the screen.
Like sentinel, I'm not sure if the problem you're having is specific to pyenchant or a python-familiarity issue. If I assume the latter, you could simply select the number of values you'd like as part of your program. In simple form, this could be as easy as:
suggestion_list = pyenchant_function(document_filled_with_typos)  # placeholder for your pyenchant call
number_of_suggestions = len(suggestion_list)
MAX_SUGGESTIONS = 3  # you choose what you like

if number_of_suggestions > MAX_SUGGESTIONS:
    answer = suggestion_list[0:MAX_SUGGESTIONS]  # slice end is exclusive, so this keeps MAX_SUGGESTIONS items
else:
    answer = suggestion_list
Note: I'm choosing to be clear rather than concise here, since I'm guessing that will be valued by the asker if they are unclear on using list indices.
Hope this helps and good luck with python.
Assuming it returns a standard Python list, you use standard Python slicing syntax. E.g. suggestedwords[:10] gets just the first 10.
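For reference, pyenchant's Dict.suggest returns a plain Python list, so the slicing applies directly; a minimal sketch (the dictionary tag and misspelled word are just examples):
import enchant

d = enchant.Dict("en_US")
suggestions = d.suggest("helo")   # can be a long list
print(suggestions[:10])           # keep only the first 10 suggestions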