How to segregate my data better within a Python class

I've been working on a project that is a calculator for an electronic part. This is my first full-scale Python project and it has gone through many iterations along the way.
The data is not organized as well as I would like, which I think is making it difficult to improve the program.
I have a class for the module, which has two parts inside of it. Each part has its own attributes that are independent of the other, while the module itself has some attributes that the parts don't care about and some attributes that the parts need. The module also has to do some calculations based on the results of the independent calculations done inside the parts. Here is a photo of the idea I'm trying to explain. As an aside, sometimes there are modules with only one part; I can suppress the missing part by filling it with arrays of zeros. I would like to be able to have a module where one part is entirely absent, but that is not my current goal.
The problem is that my class has ~100 lines of self.XXX = None at the beginning to initialize everything, plus several functions that are, in my opinion, repetitive. It is also quite difficult to traverse the data when stepping through the code - for example, I have to hunt for variables like self.current__PartA and self.current__PartB. What I think would be helpful is something like self.partA.current, which would be much more readable.
The problem is, I tried subclasses and I can't seem to achieve this, because initializing each subclass requires initializing a new superclass. That leaves me with two superclass instances (two modules with a total of four parts, when I want one module with two parts), so I can't access the data of both subclasses from a single superclass: each subclass has its own instance of the superclass.
I also looked at inner classes, but the inner class can't truly access the outer class, which largely defeats the purpose. It could work to a degree, but from what I can see it would make my code longer and less readable.
My first solutions used dictionaries, which I don't totally hate, but they led to really janky code with very little tolerance for errors. The reason is that when you add a list to a dictionary, there is no hook that automatically validates it and throws an error; I can check the dictionary manually, but that feels unnatural. It seems to me that it would make more sense to keep each value as an instance attribute and manipulate it through functions, getters, and setters during the calculation.
My main goal is to organize the data and code effectively so that I use fewer lines of code, the program is easier to modify, and it is easier to step through the process. I am not entirely sold on the class structure; it just seemed to be the best way to accommodate what I am trying to do. Is there a way to achieve what I am asking here, or is there a generally more Pythonic way to organize my code that would give a more effective solution?
class Module:
    def __init__(self, module_file):
        temp_ic, temp_value = self.get__corrected_value(module_file)
        temp_if, temp_vf = self.get__corrected_value(module_file)
        self.ic_value = interp1d(temp_ic, temp_value, fill_value='extrapolate')
        self.ic_esw_on = interp1d(self.get__corrected_esw(module_file), self.get__corrected_esw(module_file["ESWON - IC ESWON"]), fill_value='extrapolate')
        self.ic_esw_off = interp1d(self.get__corrected_esw(module_file["IC - IC ESWOFF"]), self.get__corrected_esw(module_file["ESWOFF - IC ESWOFF"]), fill_value='extrapolate')
        self.rg_on_esw_on = interp1d(module_file["RGON - ESWON RGON"], module_file["ESWON - ESWON RGON"], fill_value='extrapolate')
        self.rg_off_esw_off = interp1d(module_file["RGOFF - ESWOFF RGOFF"], module_file["ESWOFF - ESWOFF RGOFF"], fill_value='extrapolate')
        self.ic_err = interp1d(self.get__corrected_esw(module_file["IC - IC ERR"]), self.get__corrected_esw(module_file["ERR - IC ERR"]), fill_value='extrapolate')
        self.if_vf = interp1d(temp_if, temp_vf, fill_value='extrapolate')
        self.rg_on_err = interp1d(module_file["RGON - ERR RGON"], module_file["ERR - ERR RGON"], fill_value='extrapolate')
        self.nameplate_vcc = module_file['Nameplate VCC']
        if module_file['vcc_ratio'] > 0:
            self.vcc_ratio = module_file['vcc_ratio']
        else:
            self.vcc_ratio = 0
        self.name = self.get__module_name(module_file)
        self.current__PartA = []
        self.current__PartB = []
        self.some_thing_loss__PartA = []
        self.esw_on_loss = []
        self.esw_off_loss = []
        self.esw_loss__PartA = []
        self.energy__PartA = []
        self.value__PartA = []
        self.some_thing_loss__PartB = []
        self.err_loss = []
        self.energy__PartB = []
        self.value__PartB = []
        self.rg_scalar_esw_on = None
        self.rg_scalar_esw_off = None
        self.rg_scalar_err = None
        self.value_dc__PartA = module_file['PartA value DC']
        self.value_dc__PartB = module_file['PartB value DC']
        self.value_dc__module = module_file['Module value DC']
        self.trans_r_values__PartA = module_file["PartA R Values"]
        self.trans_t_values__PartA = module_file["PartA T Values"]
        self.trans_r_values__PartB = module_file["PartB R Values"]
        self.trans_t_values__PartB = module_file["PartB T Values"]
        self.some_thing_loss_total__PartA = None
        self.some_thing_loss_total__PartB = None
        self.esw_on_loss_total = None
        self.esw_off_loss_total = None
        self.esw_loss_total = None
        self.err_loss_total = None
        self.device_loss_total__PartA = None
        self.device_loss_total__PartB = None
        self.module_loss_total = None
        self.delta_tcase_ave = None
        self.delta_value_ave__PartA = None
        self.delta_value_ave__PartB = None
        self.nominal_value_ave__PartA = None
        self.nominal_value_ave__PartB = None
        self.delta_value_max__PartA = None
        self.delta_value_max__PartB = None
        self.nominal_value_max__PartA = None
        self.nominal_value_max__PartB = None
        self.value_max_PartA_list = []
        self.value_max_PartB_list = []
        self.thermal_interp_is_four_degree = self.check__thermal_interp()
        self.switches_per_degree = None
        self.input_output_freq = None
        self.time_division = None
        self.input_t_sink = None
        self.step_size = None
        self.step_range = None
        self.sec_per_cycle_degree = None
        self.duty_p = None
        self.value_PartA_list = None
        self.value_PartB_list = None
        self.time_list = None
        self.rad_list = None
        self.value_max__PartA_thermo = None
        self.value_max__PartB_thermo = None
        self.value_max__time_value = None

    def check__some_input_conditions_and_change_input(self):  # todo could this be cleaned?
        blah

    def get__max_current(self):
        return max(self.nominal_value_max__PartB, self.nominal_value_max__PartA)

    def set__some_module_values(self, is_three_level, system):  # todo call this something different, and break it out for 3-level
        blah

    def set_values_for_both_parts(self, input_instance, system_instance, module_location=None):
        lots of blah

    def set__current_PartA(self, current):
        self.current__PartA = current

    def set__current_partB(self, current):
        blah

    def calculate__another_other_loss_for_part_A(self, duty):
        blah

    def calculate__another_loss_for_partB(self, duty):
        blah

    def calculate__another_loss_for_partA(self, duty=None):
        blah

    def calculate__some_loss_for_partA(self, duty=None):
        blah

    def calculate__some_loss_for_partB(self, duty=None):
        blah

    def calculate__energy_power_for_both_parts(self):
        blah

    def calculate__temperatures_for_both_parts(self):
        blah

    def calculate__max_temp(self):  # maybe split into PartA and PartB separately?
        self.create_thermal_resistance_dict()
        value_PartA_list = []
        value_PartB_list = []
        next_array_PartA = self.value__PartA
        next_array_PartA = self.rotate(next_array_PartA, -1)
        delta_p_PartA = [next_el - last_el for next_el, last_el in zip(next_array_PartA, self.value__PartA)]
        last_power_PartA = self.value__PartA[-1] - self.device_loss_total__PartA
        first_power_PartA = self.value__PartA[0] - self.device_loss_total__PartA
        value_dict_PartA_added = [self.get_PartA_value_from_time(i * self.sec_per_cycle_degree + self.value_max__time_value) for i in range(self.step_range)]
        value_dict_PartA_added = [old + new for old, new in zip(self.value_max__PartA_thermo, value_dict_PartA_added)]
        value_PartA_inst_init = [self.input_t_sink + self.delta_value_ave__PartA + self.delta_tcase_ave - last_power_PartA * self.value_max__PartA_thermo[i] + first_power_PartA * value_dict_PartA_added[i] for i in range(self.step_range)]
        delta_value_PartB = self.device_loss_total__PartB * self.value_dc__PartB
        next_array_PartB = self.value__PartB
        next_array_PartB = self.rotate(next_array_PartB, -1)
        delta_p_PartB = [next_el - last_el for next_el, last_el in zip(next_array_PartB, self.value__PartB)]
        last_power_PartB = self.value__PartB[-1] - self.device_loss_total__PartB
        first_power_PartB = self.value__PartB[0] - self.device_loss_total__PartB
        value_dict_PartB_added = [self.get_PartB_value_from_time(i * self.sec_per_cycle_degree + self.value_max__time_value) for i in range(self.step_range)]
        value_dict_PartB_added = [old + new for old, new in zip(self.value_max__PartB_thermo, value_dict_PartB_added)]
        value_PartB_inst_init = [self.input_t_sink + delta_value_PartB + self.delta_tcase_ave - last_power_PartB * self.value_max__PartB_thermo[i] + first_power_PartB * value_dict_PartB_added[i] for i in range(self.step_range)]
        for index in range(self.step_range):
            value_dict_PartA_fix = [value_dict_PartA_added[i] if i <= index else self.value_max__PartA_thermo[i] for i in range(self.step_range)]
            # value_dict_PartA_fix_orig = [val for val in value_dict_PartA_fix]
            value_dict_PartA_fix.reverse()
            new_value_PartA = self.rotate(value_dict_PartA_fix, index)
            new_value_PartA = new_value_PartA[:359]
            temp_add_vals_PartA = [delta_p * value for delta_p, value in zip(delta_p_PartA, new_value_PartA)]
            sum_temp_add_vals_PartA = sum(temp_add_vals_PartA)
            value_PartA_list.append(sum_temp_add_vals_PartA)
            value_dict_PartB_fix = [value_dict_PartB_added[i] if i <= index else self.value_max__PartB_thermo[i] for i in range(self.step_range)]
            # value_dict_PartB_fix_orig = [val for val in value_dict_PartB_fix]
            value_dict_PartB_fix.reverse()
            new_value_PartB = self.rotate(value_dict_PartB_fix, index)
            new_value_PartB = new_value_PartB[:359]
            temp_add_vals_PartB = [delta_p * value for delta_p, value in zip(delta_p_PartB, new_value_PartB)]
            sum_temp_add_vals_PartB = sum(temp_add_vals_PartB)
            value_PartB_list.append(sum_temp_add_vals_PartB)
        value_PartA_list = [value + diff for value, diff in zip(value_PartA_inst_init, value_PartA_list)]
        value_ave_PartA = self.nominal_value_ave__PartA - np.average(value_PartA_list)
        self.value_PartA_list = [value + value_ave_PartA for value in value_PartA_list]
        value_PartB_list = [value + diff for value, diff in zip(value_PartB_inst_init, value_PartB_list)]
        value_ave_PartB = self.nominal_value_ave__PartB - np.average(value_PartB_list)
        self.value_PartB_list = [value + value_ave_PartB for value in value_PartB_list]
        self.time_list = [i * self.sec_per_cycle_degree + self.value_max__time_value for i in range(self.step_range)]
        self.rad_list = [i * self.step_size for i in range(self.step_range)]
        self.nominal_value_max__PartA = max(value_PartA_list)
        self.nominal_value_max__PartB = max(value_PartB_list)
        self.delta_value_max__PartA = max(self.value_PartA_list) - self.input_t_sink
        self.delta_value_max__PartB = max(self.value_PartB_list) - self.input_t_sink
        self.value_max_PartA_list = value_PartA_list
        self.value_max_PartB_list = value_PartB_list

    def rotate(self, l, n):
        return l[-n:] + l[:-n]

    def do_calculation_for_either_part(self, step, spcd, index, scalar, growth, time):  # todo does everything need to be passed in?
        blah

    def get_other_parts_value(self, time):  # todo could this be folded into below
        blah

    def get_one_parts_value(self, time):
        blah

    def integrate_value_for_other_part(self, step, spcd, start_time, index):  # todo could this be folded into below
        blah

    def integrate_value_for_one_part(self, step, spcd, start_time, index):  # todo remove interp check
        blah

    def create_some_dict_for_both_parts(self):  # todo could this be cleaned
        50 lines of blah

    def get__other_corrected_array(self, array):  # todo could this be simplified?
        blah

    def get__corrected_array(self, input_arrays):  # todo is this necessary
        blah

    def get__some_value(self, value):  # todo isn't there one of these already?
        blah

    def get__module_name(self, module_file):
        blah
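For reference, the shape I'm after would look something like this (a rough sketch with made-up names; only the dictionary keys come from my actual input file):
class Part:
    # Everything that exists once per part lives here.
    def __init__(self, value_dc, trans_r_values, trans_t_values):
        self.current = []
        self.energy = []
        self.value = []
        self.value_dc = value_dc
        self.trans_r_values = trans_r_values
        self.trans_t_values = trans_t_values

class Module:
    def __init__(self, module_file):
        # Module-level attributes stay on the module...
        self.value_dc__module = module_file['Module value DC']
        # ...and each part owns its own data, so I could write
        # module.part_a.current instead of self.current__PartA.
        self.part_a = Part(module_file['PartA value DC'],
                           module_file['PartA R Values'],
                           module_file['PartA T Values'])
        self.part_b = Part(module_file['PartB value DC'],
                           module_file['PartB R Values'],
                           module_file['PartB T Values'])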

The commenters are correct that an MCVE would definitely enhance your post, so my answer is a bit limited. I just want to point out that your data members can be any Python object.
So if your data access pattern would benefit from storing your data in pandas and interacting with it as pandas:
class YourClass:
    def __init__(self, data):
        self.data = pd.DataFrame(data)  # your pandas df
Or JSON:
import json
class YourClass:
    def __init__(self, data):
        self.data = json.loads(data)
Or numpy:
class YourClass:
    def __init__(self, data):
        self.data = np.asarray(data)  # your numpy ndarray
And then your class can be instantiated simply as YourClass(data).
Edit: Looking at your code, literally ALL of your self.value = None lines are superfluous in my view. If they are members of a tabular data input, they can be initialized:
class Module:
    def __init__(self, data):
        self.data = pd.DataFrame()
Once they are initialized as an empty DataFrame, their CRUD operations can map onto the very mature pandas CRUD operations. Similarly, self.data = {} works for a key-value data structure like JSON, and so on. For the rest, you can catch the case where data.key is undefined in generic getters and setters and not bother initializing them at all.
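For example, a generic fallback along these lines (just a sketch, assuming a plain dict as the backing store) replaces the whole wall of self.xxx = None lines:
class Module:
    def __init__(self, data=None):
        # One backing store instead of ~100 None attributes.
        self.data = dict(data or {})

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails.
        try:
            return self.data[name]
        except KeyError:
            raise AttributeError(f"no value '{name}' has been set yet")

m = Module({'nameplate_vcc': 600})
m.nameplate_vcc    # 600
m.esw_loss_total   # raises AttributeError instead of silently returning None
Attributes that were never computed now fail loudly instead of defaulting to None, which also addresses the error-tolerance complaint about dictionaries.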

Related

Which function can be used to get values from a Callback Class

The callback is called when specific events occur in the environment (e.g. at the beginning/end of a reset and the beginning/end of a step).
I have written a stub of a GEM callback and added it to the environment.
It can be used to log data at every step.
Now, which function can be called to get the data out of that class as a list or a .csv file?
Code of callback class is attached below:
class CurrentLoggingCallback(gem.core.Callback):
    def __init__(self):
        super().__init__()
        self._i_sd_idx = None
        self._i_sq_idx = None
        self._i_sd_max = None
        self._i_sq_max = None

    def set_env(self, env):
        super().set_env(env)
        assert 'i_sd' in env.state_names, 'the environment has no state "i_sd".'
        assert 'i_sq' in env.state_names, 'the environment has no state "i_sq".'
        self._i_sd_idx = env.state_names.index('i_sd')
        self._i_sq_idx = env.state_names.index('i_sq')
        # Environment observations are normalized by their limits.
        # For current values in amperes, the observations have to be multiplied by their limits.
        self._i_sd_max = env.limits[self._i_sd_idx]
        self._i_sq_max = env.limits[self._i_sq_idx]

    def on_step_end(self, k, state, reference, reward, done):
        """Gets called at the end of each step."""
        i_sd = state[self._i_sd_idx] * self._i_sd_max
        i_sq = state[self._i_sq_idx] * self._i_sq_max
        i_sd_ref = reference[self._i_sd_idx] * self._i_sd_max
        i_sq_ref = reference[self._i_sq_idx] * self._i_sq_max
        # Append to list or store to file...

    def on_reset_end(self, state, reference):
        """Gets called at the end of each reset."""
        i_sd = state[self._i_sd_idx] * self._i_sd_max
        i_sq = state[self._i_sq_idx] * self._i_sq_max
        i_sd_ref = reference[self._i_sd_idx] * self._i_sd_max
        i_sq_ref = reference[self._i_sq_idx] * self._i_sq_max
        # Append to list or store to file...

current_logger = CurrentLoggingCallback()
Now I want to log the values of i_sd and i_sq to a list or a .csv file.
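One way to do it (a sketch, not tested against gym-electric-motor; the rows list and the save_csv helper are my own additions): accumulate the values in on_step_end and dump them with the standard csv module.
import csv

class CSVCurrentLogger(CurrentLoggingCallback):
    """Same callback as above, but it keeps every step and can dump a .csv file."""
    def __init__(self):
        super().__init__()
        self.rows = []  # one (i_sd, i_sq, i_sd_ref, i_sq_ref) tuple per step

    def on_step_end(self, k, state, reference, reward, done):
        i_sd = state[self._i_sd_idx] * self._i_sd_max
        i_sq = state[self._i_sq_idx] * self._i_sq_max
        i_sd_ref = reference[self._i_sd_idx] * self._i_sd_max
        i_sq_ref = reference[self._i_sq_idx] * self._i_sq_max
        self.rows.append((i_sd, i_sq, i_sd_ref, i_sq_ref))

    def save_csv(self, path):
        with open(path, 'w', newline='') as f:
            writer = csv.writer(f)
            writer.writerow(['i_sd', 'i_sq', 'i_sd_ref', 'i_sq_ref'])
            writer.writerows(self.rows)

current_logger = CSVCurrentLogger()
# ... run the environment with the callback attached ...
current_logger.save_csv('currents.csv')  # or read current_logger.rows as a plain list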

Is there a proper way to append JSON Data to a Numpy array

I am trying to add data that I am reading from a series of JSON files to a NumPy array (or whatever data collection would work best). My idea is to sort a collection of episodes of a TV show by episode title.
The problem I have encountered is actually creating the collection from the data.
The intent is to have a collection of the items found within the for loop [a, b, c, d] for each episode of the show.
Is a NumPy array the best way to go about making this collection, or should I use something else?
import json as j
import numpy as np

season1 = open('THEJSONFILES\seasonone.json', 'r')
season_array = np.array(['episodeTitle', 'seasonNum', 'episodeNum', 'plotContents'])

def ReadTheDarnJsonFile(jsonTitle):
    seasondata = jsonTitle.read()
    seasonobj = j.loads(seasondata)
    list = seasonobj['episodes']
    for i in range(len(list)):
        a = str(list[i].get('title'))
        b = str(list[i].get('seasonNumber'))
        c = str(list[i].get('episodeNumber'))
        d = str(list[i].get('plot'))
        print(a, b, c, d)
        print("----------------")
        # np.append(season_array, [a,b,c,d]) this is not correct

ReadTheDarnJsonFile(season1)
print(season_array)
Two notes. First, I would avoid using list as a variable name because it shadows the built-in list type in Python. Second, I would recommend using a custom class for your data for maximum readability.
season1 = open('THEJSONFILES\seasonone.json', 'r')
season_array = np.array(['episodeTitle', 'seasonNum', 'episodeNum', 'plotContents'])

class episode:
    def __init__(self, title, seasonNumber, episodeNumber, plot):
        self.title = title
        self.seasonNumber = seasonNumber
        self.episodeNumber = episodeNumber
        self.plot = plot

    def summary(self):
        print("Season " + str(self.seasonNumber) + " Episode " + str(self.episodeNumber))
        print(self.title)
        print(self.plot)

def ReadTheDarnJsonFile(jsonTitle):
    seasondata = jsonTitle.read()
    seasonobj = j.loads(seasondata)
    episodes = seasonobj['episodes']
    season_array = []
    for i in range(len(episodes)):
        a = str(episodes[i].get('title'))
        b = str(episodes[i].get('seasonNumber'))
        c = str(episodes[i].get('episodeNumber'))
        d = str(episodes[i].get('plot'))
        season_array.append(episode(a, b, c, d))
    return season_array

season_array = ReadTheDarnJsonFile(season1)
for item in season_array:
    item.summary()
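As an aside, the reason the commented-out np.append line in the question "is not correct" is that np.append never modifies its argument; it returns a new array, which that code was discarding. If you do want the NumPy route, you have to reassign:
import numpy as np

season_array = np.array(['episodeTitle', 'seasonNum', 'episodeNum', 'plotContents'])
season_array = np.append(season_array, ['a', 'b', 'c', 'd'])  # keep the returned array
Appending to a plain Python list inside the loop and converting once at the end is usually both simpler and faster.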
Here is what I ended up doing.
import json as j
import pandas as pd

emptyArray = []
season1 = open('THEJSONFILES\seasonone.json', 'r')
season2 = open('THEJSONFILES\seasontwo.json', 'r')
season3 = open('THEJSONFILES\seasonthree.json', 'r')
season4 = open('THEJSONFILES\seasonfour.json', 'r')
season5 = open('THEJSONFILES\seasonfive.json', 'r')
season6 = open('THEJSONFILES\seasonsix.json', 'r')
season7 = open('THEJSONFILES\seasonseven.json', 'r')
columnData = ["episodeTitle", "seasonIndex", "episodeIndex", "plot", "imageURL"]
finalDf = pd.DataFrame()

def ReadTheDarnJsonFile(jsonTitle):
    df = pd.DataFrame(columns=columnData)
    seasonData = jsonTitle.read()
    seasonObj = j.loads(seasonData)
    currentSeasonList = seasonObj['episodes']
    for i in range(len(currentSeasonList)):
        tempTitle = str(currentSeasonList[i].get('title'))
        tempSN = str(currentSeasonList[i].get('seasonNumber'))
        tempEN = str(currentSeasonList[i].get('episodeNumber'))
        tempPlot = str(currentSeasonList[i].get('plot'))
        tempImage = str(currentSeasonList[i].get('image'))
        dataObj = pd.Series([tempTitle, tempSN, tempEN, tempPlot, tempImage], index=df.columns)
        df.loc[i] = dataObj
    emptyArray.append(df)

ReadTheDarnJsonFile(season1)
ReadTheDarnJsonFile(season2)
ReadTheDarnJsonFile(season3)
ReadTheDarnJsonFile(season4)
ReadTheDarnJsonFile(season5)
ReadTheDarnJsonFile(season6)
ReadTheDarnJsonFile(season7)

finalDf = pd.concat(emptyArray)
print(emptyArray)
holyOutput = finalDf.sort_values(by=['episodeTitle'])
holyOutput.reset_index(inplace=True)
holyOutput.to_json("P:\\ProjectForStarWarsCloneWarsJson\JSON\OutputJsonV2.json")

Python multithreaded random generation

I am trying to implement this code in my simulation:
https://numpy.org/doc/stable/reference/random/multithreading.html
but I can't work it out.
If I follow the example in the link, I get
mrng = MultithreadedRNG(10000000, seed=0)
mrng.fill()
print(mrng.values[-1])
> 0.0
and all the other values are 0 too.
If I give a smaller input number, such as 40, I get
mrng = MultithreadedRNG(40)
mrng.fill()
print(mrng.values[-1])
> array([1.08305179e-311, 1.08304781e-311, 1.36362118e-321, nan,
6.95195359e-310, ...., 7.27916164e-095, 3.81693953e+180])
What am I doing wrong? I would just like to adapt this multithreaded code to a random-bit (0/1) generator.
There is a bug in the example, I believe. You have to wrap PCG64 in the Generator interface.
Try the code below:
import concurrent.futures
import multiprocessing
import numpy as np
from numpy.random import Generator, PCG64

class MultithreadedRNG(object):
    def __init__(self, n, seed=None, threads=None):
        rg = PCG64(seed)
        if threads is None:
            threads = multiprocessing.cpu_count()
        self.threads = threads
        # Wrap each bit generator in a Generator; jumped() yields independent streams.
        self._random_generators = [Generator(rg)]
        last_rg = rg
        for _ in range(0, threads - 1):
            new_rg = last_rg.jumped()
            self._random_generators.append(Generator(new_rg))
            last_rg = new_rg
        self.n = n
        self.executor = concurrent.futures.ThreadPoolExecutor(threads)
        self.values = np.empty(n)
        self.step = np.ceil(n / threads).astype(np.int_)

    def fill(self):
        def _fill(gen, out, first, last):
            gen.standard_normal(out=out[first:last])
        futures = {}
        for i in range(self.threads):
            args = (_fill,
                    self._random_generators[i],
                    self.values,
                    i * self.step,
                    (i + 1) * self.step)
            futures[self.executor.submit(*args)] = i
        concurrent.futures.wait(futures)

    def __del__(self):
        self.executor.shutdown(False)
I didn't test it much, but the values look OK.
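With the Generator wrapper in place, the calls from the question should now give real draws (a quick sanity check, assuming the imports above):
mrng = MultithreadedRNG(10000000, seed=0)
mrng.fill()
print(mrng.values[-1])  # a nonzero standard-normal sample instead of 0.0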

'numpy.ndarray' object has no attribute 'fitness'

I have this code for NSGA-III (an evolutionary algorithm), but I get the error 'numpy.ndarray' object has no attribute 'fitness'. The code generates reference points for NSGA-III selection and is based on the jMetal NSGA-III implementation <https://github.com/jMetal/jMetal>. Please help me remove this error.
import copy
import random
import numpy as np
from deap import tools

class ReferencePoint(list):
    # A reference point exists in objective space and has a set of individuals associated with it.
    def __init__(self, *args):
        list.__init__(self, *args)
        self.associations_count = 0
        self.associations = []

def generate_reference_points(num_objs, num_divisions_per_obj):
    def gen_refs_recursive(work_point, num_objs, left, total, depth):
        if depth == num_objs - 1:
            work_point[depth] = left / total
            ref = ReferencePoint(copy.deepcopy(work_point))
            return [ref]
        else:
            res = []
            for i in range(left):
                work_point[depth] = i / total
                res = res + gen_refs_recursive(work_point, num_objs, left - i, total, depth + 1)
            return res
    print(gen_refs_recursive([0] * num_objs, num_objs, num_objs * num_divisions_per_obj,
                             num_objs * num_divisions_per_obj, 0))

def find_ideal_point(individuals):
    'Finds the ideal point from a set of individuals.'
    current_ideal = [np.infty] * len(individuals[0].fitness.values)  # <- the error occurs here
    for ind in individuals:
        # Use wvalues to accommodate both maximization and minimization problems.
        current_ideal = np.minimum(current_ideal,
                                   np.multiply(ind.fitness.wvalues, -1))
    print("Ideal point is\n", current_ideal)

global individulas
individulas = np.random.rand(10, 4)
generate_reference_points(2, 4)
find_ideal_point(individulas)
You can check how to prepare an input for find_ideal_point in this Jupyter notebook. The implementation deals with records from deap.tools.Logbook, which is an "evolution record as a chronological list of dictionaries", not NumPy arrays.
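In other words, find_ideal_point expects DEAP individuals, which carry a fitness attribute, rather than a raw NumPy array. A minimal sketch of building suitable input with deap.creator (the FitnessMin/Individual names are the usual convention, and the random fitness values here are placeholders):
import numpy as np
from deap import base, creator

# Two minimized objectives; negative weights mean minimization.
creator.create("FitnessMin", base.Fitness, weights=(-1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMin)

individuals = []
for row in np.random.rand(10, 2):
    ind = creator.Individual(row.tolist())
    ind.fitness.values = tuple(row)  # normally set by your evaluation function
    individuals.append(ind)

find_ideal_point(individuals)  # each ind now exposes ind.fitness.wvalues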

Putting Python Output into a .xlsx File

The following code runs and does what it's supposed to do, but I'm having trouble using the XlsxWriter module in Python to get some of the results into a .xlsx file. The goal is for the output to contain information from the function block_trial, where it tracks each block and gives me the all_powers variable that corresponds with that trial. Installing the module into my user directory goes smoothly, but it won't give me a file with both sets of information.
At the moment, I'm using:
import xlsxwriter

workbook = xlsxwriter.Workbook('SRT_data.xlsx')
worksheet = workbook.add_worksheet()

# Widen the first column to make the text clearer.
worksheet.set_column('A:A', 20)

# Add a bold format to use to highlight cells.
bold = workbook.add_format({'bold': True})

# Write some simple text.
worksheet.write('A1', 'RT')

workbook.close()
But I can't get any of my data to show up.
import random, math

num_features = 20
stim_to_vect = {}
all_stim = [1, 2, 3, 4, 5]
all_features = range(num_features)
zeros = [0 for i in all_stim]
memory = []

def snoc(xs, x):
    new_xs = xs.copy()
    new_xs.append(x)
    return new_xs

def concat(xss):
    new_xs = []
    for xs in xss:
        new_xs.extend(xs)
    return new_xs

def point_wise_mul(xs, ys):
    return [x * y for x, y in zip(xs, ys)]

for s in snoc(all_stim, 0):
    stim_to_vect[s] = []
    for i in all_features:
        stim_to_vect[s].append(random.choice([-1, 1]))

def similarity(x, y):
    return (math.fsum(point_wise_mul(x, y)) /
            math.sqrt(math.fsum(point_wise_mul(x, x)) * math.fsum(point_wise_mul(y, y))))

def echo(probe, power):
    echo_vect = []
    for j in all_features:
        total = 0
        for i in range(len(memory)):
            total += math.pow(similarity(probe, memory[i]), power) * memory[i][j]
        echo_vect.append(total)
    return echo_vect

fixed_seq = [1, 5, 3, 4, 2, 1, 3, 5, 4, 2, 5, 1]
prev_states = {}
prev_states[0] = []
prev = 0
for curr in fixed_seq:
    if curr not in prev_states.keys():
        prev_states[curr] = []
    prev_states[curr].append(prev)
    prev = curr

def update_memory(learning_parameter, event):
    memory.append([i if random.random() <= learning_parameter else 0 for i in event])

for i in snoc(all_stim, 0):
    for j in prev_states[i]:
        curr_stim = stim_to_vect[i]
        prev_resp = stim_to_vect[j]
        curr_resp = stim_to_vect[i]
        update_memory(1.0, concat([curr_stim, prev_resp, curr_resp]))

def first_part(x):
    return x[:2 * num_features - 1]

def second_part(x):
    return x[2 * num_features:]

def compare(curr_stim, prev_resp):
    for power in range(1, 10):
        probe = concat([curr_stim, prev_resp, zeros])
        theEcho = echo(probe, power)
        if similarity(first_part(probe), first_part(theEcho)) > 0.97:
            curr_resp = second_part(theEcho)
            return power, curr_resp
    return 10, zeros

def block_trial(sequence):
    all_powers = []
    prev_resp = stim_to_vect[0]
    for i in sequence:
        curr_stim = stim_to_vect[i]
        power, curr_resp = compare(curr_stim, prev_resp)
        update_memory(0.7, concat([curr_stim, prev_resp, curr_resp]))
        all_powers.append(power)
        prev_resp = curr_resp
    return all_powers
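One way to land the block_trial results in the workbook (a sketch; num_blocks and the one-row-per-block layout are my own choices, not from the original code) is to collect all_powers for each block and write it with worksheet.write_row:
import xlsxwriter

workbook = xlsxwriter.Workbook('SRT_data.xlsx')
worksheet = workbook.add_worksheet()
bold = workbook.add_format({'bold': True})
# Header row: block index followed by one column per trial in the sequence.
worksheet.write_row(0, 0, ['block'] + ['RT%d' % i for i in range(len(fixed_seq))], bold)

num_blocks = 5  # placeholder: however many blocks the experiment runs
for block in range(num_blocks):
    all_powers = block_trial(fixed_seq)
    # One row per block: the block index followed by that block's powers.
    worksheet.write_row(block + 1, 0, [block] + all_powers)
workbook.close()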
