Python OpenCV Random Trees parameters

I am trying to create an image classifier based on RandomTrees from OpenCV (version 2.4). Right now, I initialize everything like this:
self.model = cv.RTrees()
max_num_trees = 10
max_error = 1
max_d = 5
criteria = cv.TERM_CRITERIA_MAX_ITER + cv.TERM_CRITERIA_EPS
parameters = dict(max_depth=max_d, min_sample_count=5, use_surrogates=False, nactive_vars=0,
                  term_crit=(criteria, max_num_trees, max_error))
self.model.train(dataset, cv.CV_ROW_SAMPLE, responses, params=parameters)
I did it by looking at this question. The only problem is that whatever I change in the parameters, the classification always stays the same (and wrong). Since the Python documentation on this is very scarce, I have no choice but to ask here what to do and how to check what I am doing. How do I get the number of trees it generates and all the other things that are explained for C++ but not for Python, like the train error? For example, I tried:
self.model.tree_count
self.model.get_tree_count()
but got an error every time. Also, am I doing the termination criteria initialization correctly?

Related

OpenAI Prompt in Python

I just want to be sure that I am writing the prompt correctly. In a text which includes a journal's title, abstract, and keyword information, I want to extract only the names of the ML/AI methods used/proposed for the problem. You can see the code snippet below, where test_text is the text I read from a txt file. I used ### as mentioned in the best practices (https://help.openai.com/en/articles/6654000-best-practices-for-prompt-engineering-with-openai-api).
response2 = openai.Completion.create(
    model="text-davinci-003",
    prompt="Extract the specific names of used artificial intelligence methods or machine learning methods by using commas from the following text. Text:###{" + str(test_text) + "}### \nA:",
    temperature=0,
    max_tokens=100,
    top_p=1,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n"]
)
The sample result I got is as follows: Bounding box algorithm, Faster R-CNN, Artificial intelligence, Machine learning, Python.
I tried changing the structure of the prompt (for example, instead of "...names of used...", I tried "...names of proposed...") so that the meaning of the sentence stays the same, but the results are almost identical. As you can see, irrelevant results are also returned.
Do you think the above prompt is correct for extracting the AI/ML methods used in the journal?
How can I update the prompt to get more accurate and robust results?
In addition, are the "###" and "+" usages, etc. correct?
Thank you so much.
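On the "+" usage specifically: one way to assemble the same prompt is with an f-string, which avoids the manual concatenation and the str() call. This sketch only changes how the string is built, not the wording of the prompt itself:
# Sketch: the same prompt assembled with an f-string instead of "+" concatenation.
prompt_text = (
    "Extract the specific names of used artificial intelligence methods or "
    "machine learning methods by using commas from the following text. "
    f"Text:###{{{test_text}}}### \nA:"
)

response2 = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt_text,
    temperature=0,
    max_tokens=100,
    top_p=1,
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop=["\n"],
)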

Tokenizing a Gensim dataset

I'm trying to tokenize a gensim dataset, which I've never worked with before, and I'm not sure if it's a small bug or if I'm not doing it properly.
I loaded the dataset using
import gensim.downloader as api

model = api.load('word2vec-google-news-300')
and from my understanding, to tokenize using nltk all I need to do is call
tokens = word_tokenize(model)
However, the error I'm getting is "TypeError: expected string or bytes-like object". What am I doing wrong?
word2vec-google-news-300 isn't a dataset that's appropriate to 'tokenize'; it's the pretrained GoogleNews word2vec model released by Google circa 2013 with 3 million word-vectors. It's got lots of word-tokens, each with a 300-dimensional vector, but no multiword texts needing tokenization.
You can run type(model) on the object that api.load() returns to see its Python type, which will offer more clues as to what's appropriate to do with it.
Also, something like nltk's word_tokenize() appears to take a single string; you'd typically not pass it any full large dataset, in one call, in any case. (You'd be more likely to iterate over many individual texts as strings, tokenizing each in turn.)
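For example, a minimal sketch of that per-text pattern; the texts list below is just a placeholder for whatever corpus of real sentences you actually want to tokenize, not something api.load() gives you:
from nltk.tokenize import word_tokenize
# You may first need: import nltk; nltk.download('punkt')

print(type(model))  # inspect what api.load() actually returned (a KeyedVectors-style object here)

# Placeholder corpus: substitute the texts you actually want to tokenize.
texts = [
    "The quick brown fox jumps over the lazy dog.",
    "Word2vec maps each word-token to a 300-dimensional vector.",
]

# word_tokenize() expects one string at a time, so tokenize each text in turn.
tokenized = [word_tokenize(text) for text in texts]
print(tokenized[0])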
Rewind a bit & think more about what kind of dataset you're looking for.
Try to get it in a simple format you can inspect yourself, as files, before doing extra steps. (Gensim's api.load() is really bad/underdocumented for that, returning who-knows-what depending on what you've requested.)
Try building on well-explained examples that already work, making minimal individual changes that you understand individually, checking continued proper operation after each step.
(Also, for future SO questions that may be any more complicated than this: it's usually best to include the full error message you've received, including all lines of 'traceback' context showing involved files and lines-of-code, in order to better point at relevant lines-of-code in your code, or the libraries you're using, that are most-directly involved.)

How to get action_probability() in stable baselines 3

I am just getting started self-studying reinforcement-learning with stable-baselines 3. My long-term goal is to train an agent to play a specific turn-based boardgame. Currently I am quite overwhelmed with new stuff, though.
I have implemented a gym-environment that I can use to play my game manually or by having it pick random actions.
Currently I am stuck with trying to get a model to hand me actions in response to an observation. The action-space of my environment is a DiscreteSpace(256). I create the model with the environment as model = PPO('MlpPolicy', env, verbose=1). When I later call model.predict(observation) I do get back a number that looks like an action. When run repeatedly I get different numbers, which I assume is to be expected on an untrained model.
Unfortunately in my game most of the actions are illegal in most states and I would like to filter them and pick the best legal one. Or simply dump the output result for all the actions out to get an insight on what's happening.
In browsing other people's code I have seen references to model.action_probability(observation). Unfortunately this method is not part of stable baselines 3 as far as I can tell. The guide for migrating from stable baselines 2 to v3 only mentions that it is not implemented [1].
Can you give me a hint on how to go on?
In case anyone comes across this post in the future, this is how you do it for PPO.
import numpy as np
from stable_baselines3.common.utils import obs_as_tensor  # obs_as_tensor is defined in common.utils

def predict_proba(model, state):
    # Convert the observation to a torch tensor on the policy's device.
    obs = obs_as_tensor(state, model.policy.device)
    # Get the action distribution the policy assigns to this observation.
    dis = model.policy.get_distribution(obs)
    # For a Discrete action space this is a Categorical distribution,
    # so .probs holds one probability per action.
    probs = dis.distribution.probs
    probs_np = probs.detach().numpy()
    return probs_np
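As a usage sketch for the original goal of picking the best legal action: once you have the per-action probabilities you can zero out the illegal ones and take the argmax. Note that state may need a leading batch dimension, and env.get_legal_actions() below is a hypothetical helper standing in for however your environment reports legality; it is not part of gym or stable baselines 3.
probs = predict_proba(model, observation)       # e.g. shape (1, 256) for a Discrete(256) action space
legal_actions = env.get_legal_actions()         # hypothetical helper: indices of currently legal actions
masked = np.zeros_like(probs)
masked[0, legal_actions] = probs[0, legal_actions]
best_legal_action = int(np.argmax(masked))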
About this point.
when I later call model.predict(observation) I do get back a number that looks like an action.
You can prevent that behavior with the following line:
model.predict(observation, deterministic=True)
When you add deterministic=True, the predicted action will always be the one with the maximum probability, instead of being sampled according to the probabilities.
Just to give you an example, let's suppose you have the following probabilities:
25% of action A
75% of action B
If you don't use deterministic=True, the model samples from those probabilities to return a prediction.
If you use deterministic=True, the model will always return action B.
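As a minimal sketch of that difference (the 25%/75% split above is just a hypothetical example):
action_sampled, _ = model.predict(observation)                      # sampled: A about 25% of the time, B about 75%
action_greedy, _ = model.predict(observation, deterministic=True)   # always the highest-probability action, here B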

Batch Norm - Extract Running Mean & Running Variance in TensorFlow

I am trying to look at the running mean and running variance of a trained tensorflow model that is exported via GCMLE (saved_model.pb, assets/* & variables/*). Where are these values kept in the graph? I can access gamma/beta values from tf.GraphKeys.TRAINABLE_VARIABLES but I have not been able to find the running mean and running variance in any of the tf.GraphKeys.MODEL_VARIABLES. Are the running mean and running variance stored somewhere else?
I know that at test time (i.e. Modes.EVAL), the running mean and running variance are used to normalize the incoming data, and the normalized data is then scaled and shifted using gamma and beta. I am trying to look at all of the variables that I need at inference time, but I cannot find the running mean and running variance. Are these only used at test time and not at inference time (Modes.PREDICT)? If so, that would explain why I can't find them in the exported model, but I am expecting them to be there.
Based on tf.GraphKeys I have tried other things like tf.GraphKeys.MOVING_AVERAGE_VARIABLES, but they are also empty. I also saw this line in the batch_normalization documentation: "Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op." So I then tried looking at tf.GraphKeys.UPDATE_OPS from my saved model, and they contain an assign op batch_normalization/AssignMovingAvg:0, but it is still not clear where I would get the value from.
It appears that the moving mean and moving variance are stored within tf.GraphKeys.GLOBAL_VARIABLES, and it looks like the reason nothing showed up in MODEL_VARIABLES is that you need to use tf.contrib.framework.local_variable.
In addition to @reese0106's answer, if you'd like to take out the moving_mean and moving_variance for BatchNorm, you can index them by name as follows.
import tensorflow as tf

vars = tf.global_variables()  # shows every variable being used
vars_moving_mean_variance = []
for var in vars:
    if ("moving_mean" in var.name) or ("moving_variance" in var.name):
        vars_moving_mean_variance.append(var)
print(vars_moving_mean_variance)
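To turn those variable objects into actual numbers, you would evaluate them in a session. A minimal TF1-style sketch, assuming you restore the weights from your own checkpoint (the path below is a placeholder):
with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, "path/to/your/checkpoint")  # placeholder path; use your own checkpoint
    values = sess.run(vars_moving_mean_variance)
    for var, value in zip(vars_moving_mean_variance, values):
        print(var.name, value.shape)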
p.s. Thanks for the question and the answer. I solved my own problem too.

Which dataset is used in the jakeret/tf_unet U-Net implementation?

I am trying to implement U-Net and I am using https://github.com/jakeret/tf_unet/tree/master/scripts as a reference. I don't understand which dataset they used. Please give me some idea of which dataset I should use, or a link to one.
On their GitHub README.md they show three different datasets that they applied their implementation to. Their implementation is dataset agnostic, so it shouldn't matter too much which data they used if you're trying to solve your own problem with your own data. But if you're looking for a toy dataset to play around with, check out their demos. There you'll see two readily available examples and how they can be used:
demo_radio_data.ipynb which uses an astronomic radio data example set from here: http://people.phys.ethz.ch/~ast/cosmo/bgs_example_data/
demo_toy_problem.ipynb which uses their built-in data generator of a noisy image with circles that are to be detected.
The latter is probably the easiest one when it comes to just having something to play with. To see how the data is generated, check out the class:
image_gen.py -> GrayScaleDataProvider
(with an IDE like PyCharm you can jump straight into the corresponding classes from the demo source code)
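For instance, the toy generator from demo_toy_problem.ipynb can be used roughly like this. This is a sketch based on the demo; the image size and circle count below are values taken from the demo, not required ones:
from tf_unet import image_gen

nx, ny = 572, 572
# Generates noisy grayscale images containing circles, plus the matching segmentation labels.
generator = image_gen.GrayScaleDataProvider(nx, ny, cnt=20)
x_batch, y_batch = generator(4)  # request a batch of 4 (image, label) pairs
print(x_batch.shape, y_batch.shape)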
