Call a function with 3 parameters to run through all combinations - python

I have a function multirun that I want to run. It has 3 parameters. I want to feed it a list of numbers for each parameter, and I want it to run through all combinations.
i.e., for the function multirun(a, b, c) I have
a = [1,2]
b=[3,4]
c=[5,6]
and I want it to run all of (1,3,5), (1,3,6), (1,4,5), (1,4,6), (2,3,5), etc.
Below I have my actual code:
CO2 = [0.00007, 0.00008, 0.00009]
H2O = [0.00003, 0.000035, 0.00004]
FO2 = [-2,-1,0,1,2]
for i in CO2:
    for j in H2O:
        for k in FO2:
            multirun(WTCO2_START=[i], WTH2O_START=[j], FO2_buffer_START=[k])
This doesn't seem to do it. What do I have to change?

itertools.product should do exactly what you want. I think this will work for you:
import itertools

for i, j, k in itertools.product(CO2, H2O, FO2):
    multirun(WTCO2_START=[i], WTH2O_START=[j], FO2_buffer_START=[k])
You don't need the repeat parameter here; repeat is only for taking the product of the same iterable with itself (e.g. product(CO2, repeat=3)), and you are already passing three separate lists.
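For reference, here is a minimal self-contained sketch using the small a/b/c lists from your example; it shows the order in which product yields the triples (the rightmost argument varies fastest, exactly like the innermost of your nested loops):
import itertools

a = [1, 2]
b = [3, 4]
c = [5, 6]

# product(a, b, c) is equivalent to three nested for loops,
# with c (the rightmost argument) varying fastest
for combo in itertools.product(a, b, c):
    print(combo)
# (1, 3, 5)
# (1, 3, 6)
# (1, 4, 5)
# (1, 4, 6)
# (2, 3, 5)
# (2, 3, 6)
# (2, 4, 5)
# (2, 4, 6)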

Try it like this:
for i in CO2:
    for j in H2O:
        for k in FO2:
            multirun(i, j, k)

Related

How to make the code run faster in Python (multiple nested for loops efficiency)

I have some code here and I want to know if there's a way to make it run better and take less time. I use multiple nested loops and I don't really know how to improve this piece of code, so thanks for any help you share down here :)
from time import time

def mult_comb(combinations: list[tuple], T: list) -> None:
    for x in range(len(combinations)):
        for _ in range(len(T)):
            for j in range(len(T)):
                combinations.append(combinations[x] + (T[j], ))

def all_possible_combinations(T: list, r: int) -> list:
    combinations: list = []
    for i in range(len(T)):
        for j in range(len(T)):
            combinations.append((T[i], T[j]))
    for _ in range(r - 2):
        mult_comb(combinations, T)
    combs_result: set = set(combinations)
    final_combs_result = sorted(combs_result)
    return final_combs_result

if __name__ == "__main__":
    start: float = time()
    n, k = ['N', 'P'], 12
    data: list = all_possible_combinations(n, k)
    print(data)
    print(f"The runtime took: {time() - start:.3f}s")
So, this code simply handles a probability problem, like this example:
"One coin has two main faces, the number and the picture (N represents the number, and P represents the picture). If I have 3 coins, what is the sample space of outcomes?" (Sorry for my bad English...)
Basically, the answer is just: NN, NP, PN, PP, NNN, NNP, NPN, NPP, PNN, PNP, PPN, PPP.
The code already outputs the correct answer, but when I set k to 12 it runs for about 81.933s, which is a lot of time considering the code only does something simple that our brains can almost do faster... Is there any way to improve those nested loops? Thank you once again...
You can create the permutations like this:
l = ['N', 'P']
k = 12

def perm(l, k):
    n, p = l
    res = ['N', 'P']
    prev_gen = (i for i in res)
    for i in range(1, k + 1):
        # each pass wraps the previous generator, prepending n and p to every string
        res = (f'{j}{i}' for i in prev_gen for j in (n, p))
        prev_gen = res
    return res

print(list(perm(l, k)))
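If the goal is every sequence of length 2 up to k (as in the question's expected output), itertools.product with its repeat argument can build them directly and avoids the duplicate tuples that make the original version slow; a minimal sketch, reusing the question's ['N', 'P'] and k = 12:
from itertools import product
from time import time

def all_possible_combinations(symbols, k):
    # every tuple of length 2..k over the given symbols;
    # product(symbols, repeat=r) yields all r-length sequences with repetition
    result = []
    for r in range(2, k + 1):
        result.extend(product(symbols, repeat=r))
    return result

if __name__ == "__main__":
    start = time()
    data = all_possible_combinations(['N', 'P'], 12)
    print(len(data))  # 8188 tuples of length 2 through 12
    print(f"The runtime took: {time() - start:.3f}s")
Note that the result here is grouped by length rather than fully sorted like the original sorted() output; apply sorted(result) afterwards if that ordering matters.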

How can I use the zip function for this?

for i in range(2200):
    calculate_path(r_mercury, mercury_track, i)
    calculate_path(r_venus, venus_track, i)
    calculate_path(r_earth, earth_track, i)
    calculate_path(r_mars, mars_track, i)
    calculate_path(r_jupiter, jupiter_track, i)
    calculate_path(r_saturn, saturn_track, i)
    calculate_path(r_uranus, uranus_track, i)
    calculate_path(r_neptune, neptune_track, i)
This is the code; I would like to optimize it using zip. Is there any way I can do that?
The first parameter of calculate_path is an int, and the second one is an initially empty list that the function appends values to.
I would not call this optimizing since it doesn't improve anything, but here is a shorter implementation:
r_planets = [r_mercury, r_venus, r_earth]                    # add the remaining planets here, in order
planets_tracks = [mercury_track, venus_track, earth_track]   # and their tracks here, in the same order

for i in range(2200):
    for r_planet, planets_track in zip(r_planets, planets_tracks):
        calculate_path(r_planet, planets_track, i)
Alternatively, with one less for loop (but still the same number of iterations):
import itertools

r_planets = [r_mercury, r_venus, r_earth]
planets_tracks = [mercury_track, venus_track, earth_track]

for p, i in itertools.product(zip(r_planets, planets_tracks), range(2200)):
    r_planet = p[0]       # can be removed
    planets_track = p[1]  # can be removed
    calculate_path(r_planet, planets_track, i)  # can be replaced with calculate_path(p[0], p[1], i)
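If you prefer to drop the temporary p variable entirely, the zipped pair can be unpacked directly in the for statement; this is just a restatement of the snippet above, reusing its r_planets and planets_tracks lists:
import itertools

# same behaviour as above: each (r_planet, planet_track) pair is combined
# with every i in range(2200)
for (r_planet, planet_track), i in itertools.product(zip(r_planets, planets_tracks), range(2200)):
    calculate_path(r_planet, planet_track, i)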

How can I write a method or a for loop for very similar code pieces

I have a code sample below. The code works perfectly, but my problem is that it isn't clean and takes too many lines. I believe it can be reduced with a method or a for loop, but I couldn't figure out how to achieve this. The code pieces are 90% the same; only the variables change. I only put two of the pieces here, but my code consists of five pieces just like this.
#KFOLD-1
all_fold_X_1 = pd.DataFrame(columns=['Sentence_txt'])
index = 0
for k, i in enumerate(dfNew['Sentence_txt'].values):
    if k in kFoldsTrain1:
        all_fold_X_1 = all_fold_X_1.append({index: i}, ignore_index=True)

X_train1 = count_vect.fit_transform(all_fold_X_1[0].values)
Y_train1 = [i for k, i in enumerate(dfNew['Sentence_Polarity'].values) if k in kFoldsTrain1]
Y_train1 = np.asarray(Y_train1)

#KFOLD-2
all_fold_X_2 = pd.DataFrame(columns=['Sentence_txt'])
index = 0
for k, i in enumerate(dfNew['Sentence_txt'].values):
    if k in kFoldsTrain2:
        all_fold_X_2 = all_fold_X_2.append({index: i}, ignore_index=True)

X_train2 = count_vect.fit_transform(all_fold_X_2[0].values)
Y_train2 = [i for k, i in enumerate(dfNew['Sentence_Polarity'].values) if k in kFoldsTrain2]
Y_train2 = np.asarray(Y_train2)
A full example hasn't been provided, so I'm making some assumptions. Perhaps something along these lines, with each fold's index set passed in as a parameter:
def train(dataVar, dfNew, foldIndices):
    ret = {}
    index = 0
    for k, i in enumerate(dfNew['Sentence_txt'].values):
        if k in foldIndices:
            dataVar = dataVar.append({index: i}, ignore_index=True)
    ret['x'] = count_vect.fit_transform(dataVar[0].values)
    ret['y'] = np.asarray([i for k, i in enumerate(dfNew['Sentence_Polarity'].values) if k in foldIndices])
    return ret

#KFOLD-1
kfold1 = train(pd.DataFrame(columns=['Sentence_txt']), dfNew, kFoldsTrain1)
#KFOLD-2
kfold2 = train(pd.DataFrame(columns=['Sentence_txt']), dfNew, kFoldsTrain2)
You perhaps get the idea. You may not need the second argument, depending on whether the variable dfNew is global. I'm also far from a Python expert! ;)
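As a further sketch (assuming count_vect is a scikit-learn-style vectorizer whose fit_transform accepts a list of strings, and with hypothetical kFoldsTrain3 to kFoldsTrain5 standing in for the remaining folds), all five blocks can also collapse into a single loop over the fold index sets:
# hypothetical list collecting the existing fold index sets
fold_index_sets = [kFoldsTrain1, kFoldsTrain2, kFoldsTrain3, kFoldsTrain4, kFoldsTrain5]

folds = []
for fold_indices in fold_index_sets:
    sentences = [s for k, s in enumerate(dfNew['Sentence_txt'].values) if k in fold_indices]
    labels = [p for k, p in enumerate(dfNew['Sentence_Polarity'].values) if k in fold_indices]
    folds.append({'x': count_vect.fit_transform(sentences),
                  'y': np.asarray(labels)})
# folds[0]['x'] plays the role of X_train1, folds[0]['y'] of Y_train1, and so on.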

Iterate over several collections in parallel

I am trying to create a list of objects (from a class defined earlier) through a loop. The structure looks something like:
ticker_symbols = ["AZN", "AAPL", "YHOO"]
stock_list = []
for i in ticker_symbols:
    stock = Share(i)
    pe = stock.get_price_earnings_ratio()
    ps = stock.get_price_sales()
    stock_object = Company(pe, ps)
    stock_list.append(stock_object)
I would however want to add one more attribute to the Company objects (stock_object) through the loop. The attribute would be a value from another list of (arbitrary) numbers, like [5, 10, 20], where the first value would go to the first object, the second to the second object, etc. Is it possible to do something like:
for i, j in ticker_symbols, list2:
    # do stuff
? I could not get this sort of parallel loop to work on my own. Thankful for any help.
I believe that all you have to do is change the for loop.
Instead of "for i in ticker_symbols:" you should loop like
"for i in range(len(ticker_symbols))" and then use the index i to do whatever you want with the second list.
ticker_symbols = ["AZN", "AAPL", "YHOO"]
stock_list = []
for i in range(len(ticker_symbols)):
    stock = Share(ticker_symbols[i])
    pe = stock.get_price_earnings_ratio()
    ps = stock.get_price_sales()
    # And then you can write
    px = list2[i]  # whatever value you need from the second list
    stock_object = Company(pe, ps, px)
    stock_list.append(stock_object)
Some people say that using an index to iterate is not good practice, but I don't think so, especially if the code works.
Try:
for i, j in zip(ticker_symbols, list2):
Or
for (k, i) in enumerate(ticker_symbols):
    j = list2[k]
Equivalently:
for index in range(len(ticker_symbols)):
    i = ticker_symbols[index]
    j = list2[index]
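Putting the zip version back into your loop, a minimal sketch (Share and Company are the classes from your snippet; extra_values stands in for your second list of numbers):
ticker_symbols = ["AZN", "AAPL", "YHOO"]
extra_values = [5, 10, 20]   # the arbitrary numbers from the question
stock_list = []

# zip pairs each ticker with the corresponding extra value, in order
for symbol, extra in zip(ticker_symbols, extra_values):
    stock = Share(symbol)
    pe = stock.get_price_earnings_ratio()
    ps = stock.get_price_sales()
    stock_list.append(Company(pe, ps, extra))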

Looping over dictionaries of {tuple: NumPy.array}

I have a set of dictionaries of the form {(i, j): NumPy.array} over which I want to loop the arrays for a certain evaluation.
I made the dictionaries as follows:
datarr = ['PowUse', 'PowHea', 'PowSol', 'Top']
for i in range(len(datarr)): exec(datarr[i] + '={}')
so I can always change the set of data I want to evaluate in my bigger set of code by changing the original list of strings. However, this means I have to refer to my dictionaries as eval(k) for k in datarr.
As a result, the loop I want to do looks like this for the moment:
for i in filarr:
    for j in buiarr:
        for l in datarrdif:
            a = eval(l)[(i, j)]
            a[abs(a) < .01] = float('NaN')
            eval(l).update({(i, j): a})
but is there a much nicer way to write this? I tried the following, but it didn't work:
[eval(l)[(i, j)][abs(eval(l)[(i, j)])<.01 for i in filarr for j in buiarr for k in datarrdiff] = float('NaN')
Thanks in advance.
datarr = ['PowUse', 'PowHea', 'PowSol', 'Top']
for i in range(len(datarr)): exec(datarr[i] + '={}')
Why don't you create them as a dictionary of dictionaries?
datarr = ['PowUse', 'PowHea', 'PowSol', 'Top']
data = dict((name, {}) for name in datarr)
Then you can avoid all the eval().
for i in filarr:
    for j in buiarr:
        for l in datarr:
            a = data[l][(i, j)]
            np.putmask(a, np.abs(a) < .01, np.nan)
            data[l].update({(i, j): a})
Or probably just:
for d in data.values():
    for arr in d.values():
        np.putmask(arr, np.abs(arr) < .01, np.nan)
if you want to set all elements of all the arrays where abs(element) < .01 to NaN. (Note the nested loop: data.values() yields the inner dictionaries, and their values() yield the arrays.)
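A small self-contained sketch of the dict-of-dicts layout with made-up keys and random arrays (the filarr and buiarr entries here are purely illustrative):
import numpy as np

datarr = ['PowUse', 'PowHea', 'PowSol', 'Top']
filarr = ['file1', 'file2']   # hypothetical keys
buiarr = ['bldgA', 'bldgB']   # hypothetical keys

# one dictionary per data series, keyed by (file, building)
data = {name: {} for name in datarr}
for name in datarr:
    for i in filarr:
        for j in buiarr:
            data[name][(i, j)] = np.random.uniform(-0.05, 0.05, size=10)

# mask every value with |x| < 0.01 as NaN, across all series at once
for series in data.values():
    for arr in series.values():
        np.putmask(arr, np.abs(arr) < 0.01, np.nan)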
