I am having a problem with a genetic feature-optimization algorithm that I am attempting to build. The idea is that a specific combination of features is tested and, if the model accuracy using those features is higher than the previous maximum, that combination replaces the previous maximum combination. By running through the remaining potential features in this way, the final combination should be the optimal combination of features for the given feature vector size. Currently, the code meant to achieve this looks like:
def mutate_features(features, feature):
    new_features = features
    index = random.randint(0, len(features) - 1)
    new_features[index] = feature
    return new_features
def run_series(n, f_list, df):
    features_list = []
    results_list = []
    max_results_list = [[0, 0, 0, 0, 0]]
    max_feature_list = []
    features = [0, 0, 0, 0, 1]
    for i in range(0, 5):  # 5 has just been chosen as the range for testing purposes
        results = run_algorithm(df, f_list, features)
        features_list.append(features)
        results_list.append(results)
        if check_result_vector(max_results_list, results):
            max_results_list.append(results)
            max_feature_list.append(features)
        else:
            print("Revert to previous :" + str(max_feature_list[-1]))
            features = max_feature_list[-1]
        features = mutate_features(features, f_list[i])
    print("Feature List = " + str(features_list))
    print("Results List = " + str(results_list))
    print("Max Results List = " + str(max_results_list))
    print("Max Feature List = " + str(max_feature_list))
The output from this code has been included below:
The part that I do not understand is the output of max_feature_list and features_list.
Whenever anything is appended to max_feature_list or features_list inside the for loop, every item already in the list seems to change to match the latest addition. I may not fully understand the syntax/logic around this and would really appreciate any feedback as to why the program is doing this.
It happens because you change the values of features inside the mutate_features function; since append stores a reference to the list rather than a copy, the entries already in max_feature_list change too, because the underlying list they point to has changed.
One way to prevent this behaviour is to deepcopy features inside mutate_features, mutate the copy as you want, and then return it.
For example:
from copy import deepcopy
import random

def mutate_features(features, feature):
    new_features = deepcopy(features)
    index = random.randint(0, len(features) - 1)
    new_features[index] = feature
    return new_features

features = [1, 2, 3]
res = []
res.append(features)
features = mutate_features(features, 9)  # 9 is just an example replacement value
res.append(features)
print(res)
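Since features here is just a flat list of ints, a shallow copy would work as well, for example:

def mutate_features(features, feature):
    new_features = list(features)  # a shallow copy is enough for a flat list
    index = random.randint(0, len(features) - 1)
    new_features[index] = feature
    return new_features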
I have a list of brain metastasis MRIs that I want to use for training and testing purposes.
These images are all similar but the original tumor sites differ. See the following example:
From Lungs:
"Image01.1"
"Image01.2"
"Image01.3"
"Image01.4"
From Breasts:
"Image02.1"
"Image02.2"
"Image02.3"
"Image02.4"
"Image02.5"
From Skin:
"Image03.1"
"Image03.2"
From Lung Tissue:
"Image04.1"
"Image04.2"
"Image04.3"
From Bone Marrow:
"Image05.1"
"Image05.2"
I want the testing and validation sets to contain the same number of images while keeping a similar composition (both lists containing the same number of each subtype).
For this purpose, can I create a list for each subtype, randomly split each of those 50/50, and then add all these lists together?
If you want to get specific rows from a pandas DataFrame that meet certain criteria, you can filter. In your case, something like:
reader_lung = reader[reader["Image_Title"] == "Lung"]
"Image_Title" you need to change to the name of the column you're looking for your keyword (e.g., Lung) in. This needs to be an exact match.
For something that doesn't require an exact match, you could also do the following:
reader_lung = reader[reader["Image_Title"].str.contains("Lung")]
Could you create a list of lists (one for each type) and then take the first N and put them into training and the last N and put them in test?
Something like this pseudocode:
with open(r"B:/.../excell.csv", newline='') as f:
reader = csv.reader(f, dialect="excel",delimiter=';')
test = []
training = []
type_map = {}
for row in reader:
if row[33] in type_map:
# If the type has already been viewed, append to the existing list of those images
type_map[row[33]].append(row)
else:
# If this type is seen for the first time, create a new array with that row in it
type_map[row[33]] = [row]
# Now you should have a map like : {"Lung": ["image1", "image2" ...], "Heart": ["imageA"....]}
for image_type in type_map:
type_images = type_map[image_type]
half_way_index = len(type_images)/2 # For odd elements i.e 13 elems this will give you 6 (integer division)
test += type_images[0:half_way_index] # First half of the type_images are test
training += type_images[half_way_index:(half_way_index*2)] # Second half are training
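If the rows end up in a pandas DataFrame, a stratified split gives the same per-subtype 50/50 division in one call. A minimal sketch (the toy frame and the "subtype" column name are just stand-ins for your CSV):

import pandas as pd
from sklearn.model_selection import train_test_split

# toy frame standing in for the CSV rows; "subtype" is an assumed column name
df = pd.DataFrame({
    "image": ["Image01.1", "Image01.2", "Image02.1", "Image02.2"],
    "subtype": ["Lung", "Lung", "Breast", "Breast"],
})
# stratify keeps the proportion of each subtype the same in both halves
train_df, test_df = train_test_split(df, test_size=0.5, stratify=df["subtype"], random_state=0)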
I am trying to get the doc2vec function to work in Python 3.
I have the following code:
tekstdata = [[index, str(row["StatementOfTargetFiguresAndPoliciesForTheUnderrepresentedGender"])] for index, row in data.iterrows()]

def prep(x):
    low = x.lower()
    return word_tokenize(low)

def cleanMuch(data, clean):
    output = []
    for x, y in data:
        z = clean(y)
        output.append([str(x), z])
    return output

tekstdata = cleanMuch(tekstdata, prep)

def tagdocs(docs):
    output = []
    for x, y in docs:
        output.append(gensim.models.doc2vec.TaggedDocument(y, x))
    return output

tekstdata = tagdocs(tekstdata)
print(tekstdata[100])

vectorModel = gensim.models.doc2vec.Doc2Vec(tekstdata, size=100, window=4, min_count=3, iter=2)

ranks = []
second_ranks = []
for x, y in tekstdata:
    print(x)
    print(y)
    inferred_vector = vectorModel.infer_vector(y)
    sims = vectorModel.docvecs.most_similar([inferred_vector], topn=1001, restrict_vocab=None)
    rank = [docid for docid, sim in sims].index(y)
    ranks.append(rank)
Everything works, as far as I can understand, until the rank line.
The error I get is that the value is not in my list, i.e. the documents I am putting in do not contain '10':
File "C:/Users/Niels Helsø/Documents/github/Speciale/Test/Data prep.py", line 59, in <module>
rank = [docid for docid, sim in sims].index(y)
ValueError: '10' is not in list
It seems to me that it is the most_similar call that does not work as I expect.
The model trains on my data (1000 documents) and builds a vocab which is tagged.
The documentation I have mainly used is this:
Gensim documentation
Tutorial
I hope that someone can help. If any additional info is needed, please let me know.
Best,
Niels
If you're getting ValueError: '10' is not in list, you can rely on the fact that '10' is not in the list. So have you looked at the list, to see what is there, and if it matches what you expect?
It's not clear from your code excerpts that tagdocs() is ever called, and thus unclear what form tekstdata is in when provided to Doc2Vec. The intent is a bit convoluted, and there's nothing to display what the data appears as in its raw, original form.
But perhaps the tags you are supplying to TaggedDocument are not the required list-of-tags, but rather a simple string, which will be interpreted as a list-of-characters. As a result, even if you're supplying a tags of '10', it will be seen as ['1', '0'] – and len(vectorModel.doctags) will be just 10 (for the 10 single-digit strings).
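A minimal sketch of the fix, adapting the tagdocs() from the question (assuming x is meant to be the single tag for each document):

import gensim

def tagdocs(docs):
    output = []
    for x, y in docs:
        # tags must be a list of tags; a bare string like '10' would be read as ['1', '0']
        output.append(gensim.models.doc2vec.TaggedDocument(words=y, tags=[str(x)]))
    return output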
Separate comments on your setup:
1000 documents is pretty small for Doc2Vec, where most published results use tens-of-thousands to millions of documents
an iter of 10-20 is more common in Doc2Vec work (and even larger values might be helpful with smaller datasets)
infer_vector() often works better with non-default values in its optional parameters, especially a steps that's much larger (20-200) or a starting alpha that's more like the bulk-training default (0.025)
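For instance, a one-line sketch using the variable names from the question's loop (steps and alpha are the parameter names in older gensim releases):

inferred_vector = vectorModel.infer_vector(y, alpha=0.025, steps=100)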
I have read a description of how to apply random forest regression here. In this example the authors use the following code to create the features:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(analyzer = "word",max_features = 5000)
train_data_features = vectorizer.fit_transform(clean_train_reviews)
train_data_features = train_data_features.toarray()
I am thinking of combining several possibilities as features and turning them on and off, but I don't know how to do it.
What I have so far is that I define a class where I will be able to turn the features on and off and see whether each one helps (for example, all unigrams and the 20 most frequent unigrams; it could then be the 10 most frequent adjectives, tf-idf).
The code looks like this, and it is the function part where I am lost (the kind of function I have would replicate what they do in the tutorial, but it doesn't seem to be really helpful the way I do it):
class FeatureGen:  # for example, feat = FeatureGen(unigrams=False) creates a feature set without the turned-off feature
    def __init__(self, unigrams=True, unigrams_freq=True):
        self.unigrams = unigrams
        self.unigrams_freq = unigrams_freq

    def get_features(self, input):
        vectorizer = CountVectorizer(analyzer="word", max_features=5000)
        tokens = input["token"]
        if self.unigrams:
            train_data_features = vectorizer.fit_transform(tokens)
        return train_data_features
What should I do to add one more feature possibility, like "contains the 10 most frequent words"?
if self.unigrams:
    train_data_features = vectorizer.fit_transform(tokens)
if self.unigrams_freq:
    pass  # something else
return features  # and this should be a combination somehow
Looks like you need np.hstack.
However, you need each feature array to have one row per training case.
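A minimal sketch of that combination (the second CountVectorizer is just a placeholder for whatever unigrams_freq would produce):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

texts = ["a small example text", "another small example"]
unigram_feats = CountVectorizer(analyzer="word", max_features=5000).fit_transform(texts).toarray()
freq_feats = CountVectorizer(analyzer="word", max_features=10).fit_transform(texts).toarray()
combined = np.hstack([unigram_feats, freq_feats])  # one row per training case, feature columns side by side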
The command xgb.importance returns a graph of feature importance measured by an f score.
What does this f score represent and how is it calculated?
Output:
Graph of feature importance
This is a metric that simply sums up how many times each feature is split on. It is analogous to the Frequency metric in the R version (https://cran.r-project.org/web/packages/xgboost/xgboost.pdf).
It is about as basic a feature importance metric as you can get, i.e. how many times was this variable split on?
The code for this method shows it is simply adding up the occurrences of a given feature across all the trees.
See the source here: https://github.com/dmlc/xgboost/blob/master/python-package/xgboost/core.py#L953
def get_fscore(self, fmap=''):
    """Get feature importance of each feature.

    Parameters
    ----------
    fmap: str (optional)
        The name of feature map file
    """
    trees = self.get_dump(fmap)  # dump all the trees to text
    fmap = {}
    for tree in trees:  # loop through the trees
        for line in tree.split('\n'):  # text processing
            arr = line.split('[')
            if len(arr) == 1:  # text processing
                continue
            fid = arr[1].split(']')[0]  # text processing
            fid = fid.split('<')[0]  # split on the greater/less (find variable name)
            if fid not in fmap:  # if the feature id hasn't been seen yet
                fmap[fid] = 1  # add it
            else:
                fmap[fid] += 1  # else increment it
    return fmap  # return the fmap, which has the counts of each time a variable was split on
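As a usage sketch (toy data just to get a trained booster; in practice bst is the model you already trained):

import numpy as np
import xgboost as xgb

X = np.random.rand(100, 5)
y = np.random.randint(0, 2, size=100)
bst = xgb.train({"objective": "binary:logistic"}, xgb.DMatrix(X, label=y), num_boost_round=10)

print(bst.get_fscore())  # e.g. {'f0': 12, 'f3': 7, ...}: how many times each feature was split on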
I found this answer correct and thorough. It shows the implementation behind the feature importances:
https://stats.stackexchange.com/questions/162162/relative-variable-importance-for-boosting
Currently, I can retrieve the ID of the leaf node of the tree grown on my training sample to which each row of my test sample most likely belongs:
tree.tree_.apply(np.array(X_test).astype(np.float32)), where X_test represents the inputs of the decision tree.
But, for each leaf of my grown tree, I would like to get the IDs of the training samples contained in it, so that I would know which training samples are the most similar to a given test input.
I ended up using the "apply" function on my training sample to get the leaf_id each row belongs to.
def get_nearest_points(self, tr, input_train):
    inside_leaves = {}
    tmp = tr.tree_.apply(np.array(input_train).astype(np.float32))
    leaves_list = set(tmp)
    for leaf in leaves_list:
        inside_leaves[leaf] = [idx for idx, elt in enumerate(tmp) if elt == leaf]
    return inside_leaves
inside_leaves is now a dictionary containing, for each leaf_id, a list of the training rows that fall in that leaf.
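For example, paired with apply() on the test inputs, the same idea lets you look up the training rows that share a leaf with a given test row. A runnable sketch on a stand-in dataset (iris is only used here so the snippet is self-contained):

import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train = X[:120], X[120:], y[:120]
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

train_leaves = tree.tree_.apply(X_train.astype(np.float32))
test_leaves = tree.tree_.apply(X_test.astype(np.float32))
# training-row indices that share a leaf with the first test input
similar_rows = [idx for idx, leaf in enumerate(train_leaves) if leaf == test_leaves[0]]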