numpy savetxt: save a matrix as a row - python

I'm using numpy savetxt() to save the elements of a matrix to a file as a single row (I need to print lots of them in order). This is the method I've found:
import numpy as np

mat = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])

with open('myfile.dat', 'a') as handle:
    np.savetxt(handle, mat.reshape(1, mat.size), fmt='%+.8e')
# no explicit close() needed: the with block closes the file automatically
There are 2 questions:
1) Is savetxt() the best option? I need to print 1e5 to 1e7 of these things... and I don't want i/o bottlenecking the actual computation. I'm guessing reopening the file each iteration is a bad plan, speed-wise.
2) Ideally I would have some context data printed to start each row so my output might look like:
(N foo mat):
...
6 -2.309 +1.000 +2.000 ...
7 -4.273 +1.000 +2.000 ...
8 -3.664 +1.000 +2.000 ...
...
I could do this using np.append(), but then the first number won't print as an int. Is this sort of thing doable directly in savetxt()? Or do I need a C-like fprintf() anyway?
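(As an aside: savetxt() does accept a sequence of per-column formats, so mixed int/float output is possible directly. A minimal sketch, where the integer prefix column is illustrative:)

import numpy as np

mat = np.array([[1.0, 2.0, 3.0]])
n = 6  # hypothetical integer context value
row = np.hstack(([[n]], mat))
fmts = ['%d'] + ['%+.8e'] * mat.size  # int format for column 1, float for the rest
np.savetxt('myfile.dat', row, fmt=fmts)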

Pandas has a good to_csv method:

import pandas as pd
import numpy as np

mat = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])

df = pd.DataFrame(data=mat.astype(float))  # np.float is deprecated; the builtin float works
df.to_csv('myfile.dat', sep=' ', float_format='%+.8e', header=False)
By default it'll add the index (index=True); if you want different context data, you can add that to your DataFrame and set index=False, as sketched after the output below.
$ cat myfile.dat
0 +1.00000000e+00 +2.00000000e+00 +3.00000000e+00
1 +4.00000000e+00 +5.00000000e+00 +6.00000000e+00
2 +7.00000000e+00 +8.00000000e+00 +9.00000000e+00
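A minimal sketch of that context-data variant (the 'n' column and its values are illustrative):

df.insert(0, 'n', [6, 7, 8])  # prepend an integer context column
df.to_csv('myfile.dat', sep=' ', float_format='%+.8e', header=False, index=False)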

OK. My original code for printing as an array only works if you want to print once. The mat.reshape() method doesn't just return the reshaped matrix, it alters mat itself. This means the next time through the loop any linalg routines will fail.
To avoid this we need to reshape a copy() of mat. I've also added a tmp variable for clarity.
import numpy as np

mat = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])  # initialize mat to see format

handle = open('myfile.dat', 'ab')
for n in range(N):  # N = number of iterations
    # perform linalg calculations on mat ...
    meta = foo  # based on current mat
    tmp = np.hstack(([[n]], [[meta]], (mat.copy()).reshape(1, mat.size)))
    np.savetxt(handle, tmp, fmt='%+.8e')
handle.close()
This gets the context data (n and meta in this case) into each row. I can live with n being saved as a float.
I did some benchmarking to check the i/o cost. I set N=100,000 for the loop and averaged the run time over 6 runs:
no i/o, just computations: 9.1 sec
as coded above: 17.2 sec
open 'myfile.dat' to append each iteration: 30.6 sec
So the i/o doubles the runtime and, as expected, constantly opening and closing a file is a bad plan.
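If the remaining i/o cost matters, a further option (a sketch only, not benchmarked here, reusing handle, N, mat, and meta from above) is to buffer rows in memory and write them in chunks:

buf = []
for n in range(N):
    # perform linalg calculations on mat and compute meta, as before ...
    buf.append(np.hstack(([[n]], [[meta]], mat.copy().reshape(1, mat.size))))
    if len(buf) == 1000:  # chunk size is arbitrary
        np.savetxt(handle, np.vstack(buf), fmt='%+.8e')
        buf = []
if buf:  # flush the remainder
    np.savetxt(handle, np.vstack(buf), fmt='%+.8e')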

Storing all values in the array in Python

I have an array A with shape (2,4,1). I want to calculate the mean of A[0] and A[1] and store both the means in A_mean. I present the current and expected outputs.
import numpy as np

A = np.array([[[1.7],
               [2.8],
               [3.9],
               [5.2]],
              [[2.1],
               [8.7],
               [6.9],
               [4.9]]])

for i in range(0, len(A)):
    A_mean = np.mean(A[i])
print(A_mean)
The current output is
5.65
The expected output is
[3.4,5.65]
The for loop is not necessary, because NumPy already knows how to operate on vectors/matrices.
The solution is to remove the loop and just change the axis, as follows:
A_mean=np.mean(A, axis=1)
print(A_mean)
Outputs:
[[3.4 ]
[5.65]]
You can also flatten the result to remove the nesting, giving [3.4 5.65]:
print(A_mean.ravel())
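Alternatively (a small variation on the same idea), np.mean accepts a tuple of axes, which collapses both trailing dimensions in one call:

A_mean = np.mean(A, axis=(1, 2))
print(A_mean)  # [3.4  5.65]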
Try this.

import numpy as np

A = np.array([[[1.7],
               [2.8],
               [3.9],
               [5.2]],
              [[2.1],
               [8.7],
               [6.9],
               [4.9]]])

A_mean = []
for i in range(0, len(A)):
    A_mean.append(np.mean(A[i]))
print(A_mean)
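The same loop can also be written as a list comprehension with identical output:

A_mean = [np.mean(a) for a in A]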

Running Scipy Linregress Across Dataframe Where Each Element is a List

I am working with a Pandas dataframe where each element contains a list of values. I would like to run a regression between the lists in the first column and the lists in each subsequent column for every row in the dataframe, and store the t-stats of each regression (currently using a numpy array to store them). I am able to do this using a nested for loop that loops through each row and column, but the performance is not optimal for the amount of data I am working with.
Here is a quick sample of what I have so far:
import numpy as np
import pandas as pd
from scipy.stats import linregress

df = pd.DataFrame(
    {'a': [list(np.random.rand(11)) for i in range(100)],
     'b': [list(np.random.rand(11)) for i in range(100)],
     'c': [list(np.random.rand(11)) for i in range(100)],
     'd': [list(np.random.rand(11)) for i in range(100)],
     'e': [list(np.random.rand(11)) for i in range(100)],
     'f': [list(np.random.rand(11)) for i in range(100)]}
)
Here is what the data looks like:
(output truncated: a 100 × 6 DataFrame, columns a through f, where every cell holds a list of 11 random floats)
My code to run the regressions and store the t-stats:
rows = len(df)
cols = len(df.columns)
tstats = np.zeros(shape=(rows, cols-1))

for i in range(0, rows):
    for j in range(1, cols):
        lg = linregress(df.iloc[i, 0], df.iloc[i, j])
        tstats[i, j-1] = lg.slope / lg.stderr
The code above works just fine and does exactly what I need; however, as I mentioned above, performance begins to slow down when the number of rows and columns in df increases substantially.
I'm hoping someone could offer advice on how to optimize my code for better performance.
Thank you!
I am a newbie to this, but I did optimize your original code in two ways:
1) purely using Python's builtin list object (there is no need for pandas here, and to be honest I cannot find a better way to solve your problem in pandas than your original code :D);
2) using numpy, which should be (at least they claim) faster than Python's builtin list.
You can jump straight to the code; it's in Jupyter notebook format, so you need to install Jupyter first.
Conclusion
Here is the test result:
On a (100, 100) matrix containing (30,)-length random lists, the total time difference is around 1 second:
Time elapsed to run 1 times on new method is 24.282760 seconds.
Time elapsed to run 1 times on old method is 25.954801 seconds.
Refer to test_perf in the sample code for the result.
PS: During the test only one thread is used, so maybe multi-threading would help to improve performance, but that's beyond my ability...
Idea
I think numpy.nditer is suitable for your request, though the result of the optimization is not that significant. Here is my idea:
Generate the input array
I have altered the first part of your script; I think a list comprehension alone is enough to build a matrix of random lists. Refer to get_matrix_from_builtin. Please note I have stored each random list inside a 1-element tuple to keep the same shape as an ndarray generated from numpy.
As a comparison, you can also construct such a matrix with numpy. Refer to get_matrix_from_numpy. Because ndarray tries to broadcast list-like objects (and I don't know how to stop it), I had to wrap each list into a tuple to avoid auto-broadcast by the numpy.array constructor. If anyone has a better solution please note it, thanks :)
Calculate the result
Your original code uses pandas.DataFrame to access elements by row/col index, but that is not the way to go here. Pandas provides some iteration tools for DataFrame: pipe, apply, agg, and applymap (search the API for more info), but none of them seem suitable for your request, as you want the current row and col index during iteration.
I searched and found that numpy.nditer can provide exactly that: it returns an iterator over an ndarray with an attribute multi_index that provides the row/col pair of the current element; see iterating-over-arrays and the minimal illustration below.
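(A minimal standalone illustration of multi_index, not part of the original notebook:)

import numpy as np

a = np.arange(6).reshape(2, 3)
it = np.nditer(a, flags=['multi_index'])
for x in it:
    print(it.multi_index, x)  # prints (0, 0) 0, (0, 1) 1, ... row/col pairs with values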
Explanation of solve.ipynb
I used Jupyter Notebook to test this; you might need to get one, here are the install instructions.
I have altered your original code to remove the need for pandas and purely use the builtin list. Refer to old_calc_tstat in the sample code.
Also, I used numpy.nditer to calculate your tstats matrix. Refer to new_calc_tstat in the sample code.
Then, I tested whether the results of both methods are equal; I used the same input array to ensure randomness won't affect the test. Refer to test_equal for the result.
Finally, the time performance. I am not patient, so I only ran it once; you may increase the repeat count in the test_perf function.
The code
# To add a new cell, type '# %%'
# To add a new markdown cell, type '# %% [markdown]'
# %% [markdown]
# [origin question](https://stackoverflow.com/questions/69228572/running-scipy-linregress-across-dataframe-where-each-element-is-a-list)
#
# %%
import sys
import time
import numpy as np
from scipy.stats import linregress
# %%
def get_matrix_from_builtin():
    # use builtin list to construct a matrix of random lists
    # note I put each random list inside a tuple to keep the same shape
    # as when I later use numpy to do the same thing.
    return [
        [(list(np.random.rand(11)),)
         for col in range(6)]
        for row in range(100)
    ]
# %timeit get_matrix_from_builtin()
# %%
def get_matrix_from_numpy(
    gen=np.random.rand,
    shape=(1, 1),
    nest_shape=(1, ),
):
    # custom dtype for random lists
    mydtype = [
        ('randonlist', 'f', nest_shape)
    ]
    a = np.empty(shape, dtype=mydtype)
    # [DOC] modifying array values
    # https://numpy.org/doc/stable/reference/arrays.nditer.html#modifying-array-values
    # enable per-operation flag 'readwrite' to modify elements in the ndarray
    # enable global flag 'refs_ok' to allow using the callable 'gen' in iteration
    with np.nditer(a, op_flags=['readwrite'], flags=['refs_ok']) as it:
        for x in it:
            # pack the list in a 1-d tuple to prevent numpy broadcasting it
            x[...] = (gen(nest_shape[0]), )
    return a
def test_get_matrix_from_numpy():
    gen = np.random.rand     # generator of random list
    shape = (6, 100)         # shape of matrix to hold random lists
    nest_shape = (11, )      # shape of random lists
    return get_matrix_from_numpy(gen, shape, nest_shape)
# access a random list by a[row][col][0]
# %timeit test_get_matrix_from_numpy()
# %%
def old_calc_tstat(a=None):
    if a is None:
        a = get_matrix_from_builtin()
    a = np.array(a)
    rows, cols = a.shape[:2]
    tstats = np.zeros(shape=(rows, cols))
    for i in range(0, rows):
        for j in range(1, cols):
            lg = linregress(a[i][0][0], a[i][j][0])
            tstats[i, j-1] = lg.slope/lg.stderr
    return tstats
# %%
def new_calc_tstat(a=None):
    # read input matrix of random lists
    if a is None:
        gen = np.random.rand
        shape = (6, 100)
        nest_shape = (11, )
        a = get_matrix_from_numpy(gen, shape, nest_shape)
    # construct ndarray for the t-stat result
    tstats = np.empty(a.shape)
    # enable global flag 'multi_index' to retrieve the index of the current element
    # [DOC] Tracking an Index or Multi-Index
    # https://numpy.org/doc/stable/reference/arrays.nditer.html#tracking-an-index-or-multi-index
    it = np.nditer(tstats, op_flags=['readwrite'], flags=['multi_index'])
    # obtain the total column count of tstats's shape
    col = tstats.shape[1]
    for x in it:
        i, j = it.multi_index
        # trick to avoid IndexError: subtract len(list) after +1 to index
        j = j + 1 - col
        lg = linregress(
            a[i][0][0],
            a[i][j][0]
        )
        # note: nditer ignores ZeroDivisionError by default and returns np.inf for the element
        # you have to override it manually:
        if lg.stderr == 0:
            x[...] = 0
        else:
            x[...] = lg.slope / lg.stderr
    return tstats
# new_calc_tstat()
# %%
def test_equal():
    """Test if the new method has output equal to the old one"""
    # use the same input list to avoid effects of randomness
    a = test_get_matrix_from_numpy()
    old = old_calc_tstat(a)
    new = new_calc_tstat(a)
    print(
        "Is the shape of old and new the same?\n%s. old: %s, new: %s\n" % (
            old.shape == new.shape, old.shape, new.shape),
    )
    res = (old == new)
    print(
        "Is the result object the same?"
    )
    if res.all():
        print("True.")
    else:
        print("False. Difference (new - old) as below:\n")
        print(new - old)
    return old, new

old, new = test_equal()
# %%
# the only diff is the last element
# in old method it is 0
# in new method it is inf
# if you prefer the old method, just add a condition in the new method to override
# [new[x][99] for x in range(6)]
# %%
# python version: 3.8.8
timer = time.perf_counter  # time.clock was removed in Python 3.8; perf_counter works on all platforms

def total(func, *args, _reps=1, **kwargs):
    start = timer()
    for i in range(_reps):
        ret = func(*args, **kwargs)
    elapsed = timer() - start
    return elapsed
def test_perf():
    """Test of performance"""
    # first, get a larger input array
    gen = np.random.rand
    shape = (1000, 100)
    nest_shape = (30, )
    a = get_matrix_from_numpy(gen, shape, nest_shape)
    # how many times to repeat each test
    reps = 1
    # then, time both the old and new calculation methods
    old = total(old_calc_tstat, a, _reps=reps)
    new = total(new_calc_tstat, a, _reps=reps)
    msg = "Time elapsed to run %d times on %s is %f seconds."
    print(msg % (reps, 'new method', new))
    print(msg % (reps, 'old method', old))

test_perf()
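As a further aside (not from the original answer): the t-statistic of a simple linear regression slope equals r*sqrt((n-2)/(1-r**2)), where r is the Pearson correlation, so the whole computation can be vectorized with closed-form NumPy operations and no per-cell linregress calls. A hedged sketch, assuming the lists are stacked into 2-D float arrays X (the first column) and Y (one of the other columns), both of shape (rows, n):

import numpy as np

def tstats_vectorized(X, Y):
    # center each row, then compute the per-row Pearson correlation in one shot
    n = X.shape[1]
    xc = X - X.mean(axis=1, keepdims=True)
    yc = Y - Y.mean(axis=1, keepdims=True)
    r = (xc * yc).sum(axis=1) / np.sqrt((xc ** 2).sum(axis=1) * (yc ** 2).sum(axis=1))
    # t-statistic of the slope: r * sqrt((n - 2) / (1 - r**2))
    return r * np.sqrt((n - 2) / (1.0 - r ** 2))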

Python/PyTables: Is it possible to have different data types for different columns of an array?

I create an expandable earray of Nx4 columns. Some columns require float64 datatype, the others can be managed with int32. Is it possible to vary the data types among the columns? Right now I just use one (float64, below) for all, but it takes huge disk space for (>10 GB) files.
For example, how can I ensure column 1-2 elements are int32 and 3-4 elements are float64?
import tables
f1 = tables.open_file("table.h5", "w")
a = f1.create_earray(f1.root, "dataset_1", atom=tables.Float32Atom(), shape=(0, 4))
Here is a simplistic version of how I am appending using Earray:
Matrix = np.ones(shape=(10**6, 4))

if counter <= 10**6:  # keep appending to Matrix until 10**6 rows
    Matrix[s:s+length, 0:4] = chunk2[left:right]  # chunk2 is input np.ndarray
    s += length

# save to disk when rows = 10**6
if counter > 10**6:
    a.append(Matrix[:s])
    del Matrix
    Matrix = np.ones(shape=(10**6, 4))
What are the cons for the following method?
import tables as tb
import numpy as np
filename = 'foo.h5'
f = tb.open_file(filename, mode='w')
int_app = f.create_earray(f.root, "col1", atom=tb.Int32Atom(), shape=(0,2), chunkshape=(3,2))
float_app = f.create_earray(f.root, "col2", atom=tb.Float64Atom(), shape=(0,2), chunkshape=(3,2))
# array containing ints...in reality it will be 10**6 x 2
arr1 = np.array([[1, 1],
                 [2, 2],
                 [3, 3]], dtype=np.int32)
# array containing floats...in reality it will be 10**6 x 2
arr2 = np.array([[1.1, 1.2],
                 [1.1, 1.2],
                 [1.1, 1.2]], dtype=np.float64)
for i in range(3):
    int_app.append(arr1)
    float_app.append(arr2)
f.close()
print('\n*********************************************************')
print("\t\t Reading Now=> ")
print('*********************************************************')
c = tb.open_file('foo.h5', mode='r')
chunks1 = c.root.col1
chunks2 = c.root.col2
chunk1 = chunks1.read()
chunk2 = chunks2.read()
print(chunk1)
print(chunk2)
No and Yes. All PyTables array types (Array, CArray, EArray, VLArray) are for homogeneous datatypes (similar to a NumPy ndarray). If you want to mix datatypes, you need to use a Table. Tables are extendable; they have an .append() method to add rows of data.
The creation process is similar to this answer (only the dtype is different): PyTables create_array fails to save numpy array. You only define the datatypes for a row. You don't define the shape or number of rows. That is implied as you add data to the table. If you already have your data in a NumPy recarray, you can reference it with the description= entry, and the Table will use the dtype for the table and populate with the data. More info here: PyTables Tables Class
Your code would look something like this:
import tables as tb
import numpy as np

table_dt = np.dtype(
    {'names': ['int1', 'int2', 'float1', 'float2'],
     'formats': [int, int, float, float]})

# Create some random data:
i1 = np.random.randint(0, 1000, (10**6,))
i2 = np.random.randint(0, 1000, (10**6,))
f1 = np.random.rand(10**6)
f2 = np.random.rand(10**6)

with tb.File('table.h5', 'w') as h5f:
    a = h5f.create_table('/', 'dataset_1', description=table_dt)

    # Method 1: create an empty recarray 'Matrix', then add data:
    Matrix = np.recarray((10**6,), dtype=table_dt)
    Matrix['int1'] = i1
    Matrix['int2'] = i2
    Matrix['float1'] = f1
    Matrix['float2'] = f2
    # Append Matrix to the table
    a.append(Matrix)

    # Method 2: create recarray 'Matrix' with data in 1 step:
    Matrix = np.rec.fromarrays([i1, i2, f1, f2], dtype=table_dt)
    # Append Matrix to the table
    a.append(Matrix)
You mentioned creating a very large file, but did not say how many rows (obviously way more than 10**6). Here are some additional thoughts based on comments in another thread.
The .create_table() method has an optional parameter, expectedrows=. This parameter is used 'to optimize the HDF5 B-Tree and amount of memory used'. The default value is set in tables/parameters.py (look for EXPECTED_ROWS_TABLE; it's only 10000 in my installation). I highly suggest you set this to a larger value if you are creating 10**6 (or more) rows.
Also, you should consider file compression. There's a trade-off: compression reduces the file size, but will reduce I/O performance (increases access time).
There are a few options:
Enable compression when you create the file (add the filters= parameter when you create the file). Start with tb.Filters(complevel=1).
Use the HDF Group utility h5repack - run it against an HDF5 file to create a new file (useful to go from uncompressed to compressed, or vice-versa).
Use the PyTables utility ptrepack - it works similarly to h5repack and is delivered with PyTables.
I tend to use uncompressed files I work with often for best I/O performance. Then when done, I convert to compressed format for long term archiving.
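A minimal sketch combining the expectedrows= and filters= suggestions above (reusing table_dt from the earlier example; the row-count hint is illustrative):

import tables as tb

filters = tb.Filters(complevel=1)  # modest zlib compression
with tb.open_file('table.h5', 'w', filters=filters) as h5f:
    a = h5f.create_table('/', 'dataset_1', description=table_dt,
                         expectedrows=10**7)  # hint the final size up front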

I want 10 numbers between RFMin and RFMax using linspace in python

I am reading a csv file that has two columns, RFMin and RFMax:
RFMin    RFMax
1000     3333
5125.5   5888
I want 10 numbers between RFMin and RFMax using linspace in python.
import pandas as pd
import numpy as np

df = pd.read_csv(filePath)
RFRange = np.linspace(df['RFMin'], df['RFMax'], 10)
RFRange = RFRange.flatten()
RFarray = []
for i in RFRange:
    RFarray.append(i)
data = pd.DataFrame({'RFRange': RFarray})
data.to_csv('Output.csv', header=True, sep='\t')
I want something like this:
1000
1259.22
1518.44
1777.67
……..
…….
3333
5125.5
5210.22
5294.94
……..
…….
5888
Your problem is coming from the call to flatten. NumPy's flatten function converts a 2d array into a 1d array, but it does so in row-major order by default (https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.flatten.html).
In [1]: a = [1000,5125.5]
In [2]: b = [3333,5888]
In [3]: import numpy as np
In [4]: np.linspace(a,b,10)
Out[4]:
array([[1000. , 5125.5 ],
[1259.22222222, 5210.22222222],
[1518.44444444, 5294.94444444],
[1777.66666667, 5379.66666667],
[2036.88888889, 5464.38888889],
[2296.11111111, 5549.11111111],
[2555.33333333, 5633.83333333],
[2814.55555556, 5718.55555556],
[3073.77777778, 5803.27777778],
[3333. , 5888. ]])
In [5]: np.linspace(a,b,10).flatten()
Out[5]:
array([1000. , 5125.5 , 1259.22222222, 5210.22222222,
1518.44444444, 5294.94444444, 1777.66666667, 5379.66666667,
2036.88888889, 5464.38888889, 2296.11111111, 5549.11111111,
2555.33333333, 5633.83333333, 2814.55555556, 5718.55555556,
3073.77777778, 5803.27777778, 3333. , 5888. ])
As you can see, this means your data ends up in a different order from what you are expecting.
There are a few ways to change the order.
1) As per https://numpy.org/doc/1.18/reference/generated/numpy.ndarray.flatten.html you can use Fortran ordering (column-major) when flattening, as sketched below
2) You can transpose your data before flattening:
RFRange = RFRange.T.flatten() / RFRange = RFRange.transpose().flatten()
3) You can add a second loop and append directly from the 2D array
I would suggest avoiding this last method though. It is OK for 10 points, but large loops can be quite slow in Python, so it is better to use built-in functions where possible. For example, in this case a numpy 1d array can easily be converted to a list with the following command:
RFArray = list(RFRange)
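A minimal sketch of option 1 (with a and b as in the session above):

RFRange = np.linspace(a, b, 10).flatten(order='F')
# gives 1000 ... 3333 followed by 5125.5 ... 5888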
You want the array in ascending order? If so, just do RFarray.sort().

Index by condition in python-numpy?

I'm trying to migrate from Matlab to Python, and I'm rewriting some code that I had in Matlab for testing. I've installed Anaconda and am currently using the Spyder IDE. In Matlab I created a function that returns the values of the commercial API 5L diameter (diametro) and thickness (espesor) of the pipes that are closest to the function's input parameters. I did this using a Matlab table.
Note that the diameter (diametro_entrada) and thickness (espesor_entrada) inputs are in meters [m] while the thicknesses inside the function are in millimeters [mm]; that's why at the end I had to multiply espesor_entrada*1000.
function tabla_seleccion=tablaAPI(diametro_entrada,espesor_entrada)
%Proporciona la tabla de caños API 5L, introducir diámetro en [m] y espesor
%en [m]
Diametro_m=[0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;0.3556;...
0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;0.4064;...
0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;0.4570;...
0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;0.5080;...
0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;0.559;...
0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;0.610;...
0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;0.660;...
0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;0.711;...
0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;0.762;...
0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813;0.813];
Espesor_mm=[4.8;5.2;5.3;5.6;6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;31.8;...
4.8;5.2;5.6;6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8;...
4.8;5.6;6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8;...
5.6;6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8;33.3;34.9;...
5.6;6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8;33.3;34.9;36.5;38.1;...
6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8;33.3;34.9;36.5;38.1;39.7;...
6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;...
6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;...
6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8;...
6.4;7.1;7.9;8.7;9.5;10.3;11.1;11.9;12.7;14.3;15.9;17.5;19.1;20.6;22.2;23.8;25.4;27.0;28.6;30.2;31.8];
TablaAPI=table(Diametro_m,Espesor_mm);
tabla_seleccion=TablaAPI(abs(TablaAPI.Diametro_m-diametro_entrada)<0.05 & abs(TablaAPI.Espesor_mm-(espesor_entrada*1000))<1.2,:);
end
With the input diameter (d) and input thickness (e) I get the commercial pipes whose diameter differs by less than 0.05 m and whose thickness differs by less than 1.2 mm from those inputs.
I want to reproduce this in Python with Numpy or another package.
First I defined 2 Numpy arrays, with the same names as in Matlab but comma separated instead of semicolon and without the "..." at the end of each line, then defined another Numpy array as:
TablaAPI=numpy.array([Diametro_m,Espesor_mm])
I want to know whether I can index that array the way I did in Matlab, or whether I have to define something else entirely.
Thanks a lot!
You sure can!
Here's an example of how you can use numpy:
Using Numpy
import numpy as np

# Declare Diametro_m and Espesor_mm here just like you did in your example,
# plus the inputs diametro_entrada [m] and espesor_entrada [m]
# Stack the two columns side by side
arr = np.column_stack((Diametro_m, Espesor_mm))
# Boolean mask reproducing the Matlab condition
mask = ((np.abs(arr[:, 0] - diametro_entrada) < 0.05) &
        (np.abs(arr[:, 1] - espesor_entrada * 1000) < 1.2))
selection = arr[mask]
Example usage from John Zwinck's answer
Using Dataframes
Dataframes may also be great for your application in case you need to do heavier queries or mix column datatypes. This code should work for you, if you choose that option:
# These imports go at the top of your document
import pandas as pd
import numpy as np

# Declare your Diametro_m and Espesor_mm here just like you did in your example
df_d = pd.DataFrame(data=Diametro_m, columns=['Diametro_m'])
df_e = pd.DataFrame(data=Espesor_mm, columns=['Espesor_mm'])

# Merge the dataframes on their shared index
merged_df = pd.merge(left=df_d, left_index=True,
                     right=df_e, right_index=True,
                     how='inner')

# Now you can perform your selection like this:
selection = merged_df[(abs(merged_df['Diametro_m'] - diametro_entrada) < 0.05) &
                      (abs(merged_df['Espesor_mm'] - espesor_entrada * 1000) < 1.2)]

# This "mask" of the dataframe returns all rows that satisfy your query.
print(selection)
Since you have not given an example of your expected output, it's a bit of a guess what you are really after, but here is one version with numpy.
# rewritten arrays for numpy
Diametro_m=[0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,0.3556,
0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,0.4064,
0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,0.4570,
0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,0.5080,
0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,0.559,
0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,0.610,
0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,0.660,
0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,0.711,
0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,0.762,
0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813,0.813]
Espesor_mm=[4.8,5.2,5.3,5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,31.8,
4.8,5.2,5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,
4.8,5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,
5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,33.3,34.9,
5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,33.3,34.9,36.5,38.1,
6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,33.3,34.9,36.5,38.1,39.7,
6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,
6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,
6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,
6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8]
import numpy as np

diametro_entrada = 0.4
espesor_entrada = 5

Diametro_m = np.array(Diametro_m)
Espesor_mm = np.array(Espesor_mm)
# Diametro_m and Espesor_mm have shape (223,)
# if not, change them so that they do
table = np.array([Diametro_m, Espesor_mm]).T
mask = np.where((np.abs(Diametro_m - diametro_entrada) < 0.05) &
                (np.abs(Espesor_mm - espesor_entrada) < 1.2))
result = table[mask]
print('with numpy')
print(result)
or you can do it with just python...
# redo with python only
# based on a simple dict and list comprehension
D_m = [0.3556, 0.4064, 0.4570, 0.5080, 0.559, 0.610, 0.660, 0.711, 0.762, 0.813]
E_mm = [[4.8,5.2,5.3,5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,31.8],
[4.8,5.2,5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8],
[4.8,5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8],
[5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,33.3,34.9],
[5.6,6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,33.3,34.9,36.5,38.1],
[6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8,33.3,34.9,36.5,38.1,39.7],
[6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4],
[6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4],
[6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8],
[6.4,7.1,7.9,8.7,9.5,10.3,11.1,11.9,12.7,14.3,15.9,17.5,19.1,20.6,22.2,23.8,25.4,27.0,28.6,30.2,31.8]]
table2 = dict(zip(D_m, E_mm))
result2 = []
for D, E in table2.items():
    if abs(D - diametro_entrada) < 0.05:
        Et = [t for t in E if abs(t - espesor_entrada) < 1.2]
        result2 += [(D, t) for t in Et]
print('with vanilla python')
print('\n'.join(str(r) for r in result2))
Once you are in Python there are endless ways to do this; you could easily do the same with pandas or sqlite. My personal preference leans towards as few dependencies as possible: in this case I would take a csv file as input and do it without numpy; if it were a truly large-scale problem I would consider sqlite/numpy/pandas.
Good luck with the transition, I don't think you will regret it.
