I get an AttributeError: 'NoneType' object has no attribute 'config' when I run a Treasure Island game I created (very original). I've looked at other posts about this and tried the same fixes with no luck. Anyway, here's the code:
import tkinter
window = tkinter.Tk()
window.title("Treasure Island")
window.geometry("500x500")
text = tkinter.Label(window, text='''
| | | |
_________|________________.=""_;=.______________|_____________________|_______
| | ,-"_,="" `"=.| |
|___________________|__"=._o`"-._ `"=.______________|___________________
| `"=._o`"=._ _`"=._ |
_________|_____________________:=._o "=._."_.-="'"=.__________________|_______
| | __.--" , ; `"=._o." ,-"""-._ ". |
|___________________|_._" ,. .` ` `` , `"-._"-._ ". '__|___________________
| |o`"=._` , "` `; .". , "-._"-._; ; |
_________|___________| ;`-.o`"=._; ." ` '`."\` . "-._ /_______________|_______
| | |o; `"-.o`"=._`` '` " ,__.--o; |
|___________________|_| ; (#) `-.o `"=.`_.--"_o.-; ;___|___________________
____/______/______/___|o;._ " `".o|o_.--" ;o;____/______/______/____
/______/______/______/_"=._o--._ ; | ; ; ;/______/______/______/_
____/______/______/______/__"=._o--._ ;o|o; _._;o;____/______/______/____
/______/______/______/______/____"=._o._; | ;_.--"o.--"_/______/______/______/_
____/______/______/______/______/_____"=.o|o_.--""___/______/______/______/____
/______/______/______/______/______/______/______/______/______/______/_____ /
''')
text.pack()
text = text.config(font=('length',15))
line1 = tkinter.Label(window, text="Welcome to Treasure Island,\n"
"Your goal is to find the buried Treasure!")
line1.pack()
line1 = text.config(font=('length',15))
window.mainloop()
Exact error:
line1 = text.config(font=('length',15))
AttributeError: 'NoneType' object has no attribute 'config'
You should check this in the tkinter documentation, but I believe the tkinter.Label().config() method does not return anything (in other words, it returns None).
text.pack()
## Here you set text to the return value of the config method
# text = text.config(font=('length',15))
## Try this instead
text.config(font=('length',15))
line1 = tkinter.Label(window, text="Welcome to Treasure Island,\n"
"Your goal is to find the buried Treasure!")
line1.pack()
## Here you try to re-use the value which is None
#line1 = text.config(font=('length',15))
## Try this instead
line1.config(font=('length',15))
window.mainloop()
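As a quick illustration of the root cause (a minimal sketch separate from the game, with 'Arial' as a placeholder font): config() applies the options you pass and returns None, so you must keep the widget reference itself; to read an option back, use cget().
import tkinter

window = tkinter.Tk()
label = tkinter.Label(window, text="demo")
label.pack()

result = label.config(font=('Arial', 15))  # applies the font; returns None
print(result)                              # prints: None
print(label.cget('font'))                  # reads the current font option back

window.mainloop()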
import google_trans_new
from google_trans_new import google_translator
translator = google_translator()
langd = []
for i in range(10):
    try:
        t = translator.detect(df['Title'][i])
    except:
        t = 'no'
    langd.append(t)
df["Language"] = langd
df.head(10)
Here is the code I am using; sometimes I get the expected output and sometimes I don't.
Output:
Title| Description | Language
0 | तूर विकणे आहे 20किटल | no
1 | Foolkobi Holsel ret madhe milel | no
2 | power tiller खरेदी करायचा आहे VST Shakti | no
3 | जीरा बेचना है | no
I'm getting "no" as my output most of the time, and I don't know why!
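One way to see why the fallback fires so often (a minimal diagnostic sketch, assuming the same df with a Title column and the google_trans_new setup above): the bare except: swallows whatever error detect() raises, so printing the exception will show the actual failure instead of just 'no'.
from google_trans_new import google_translator

translator = google_translator()
langd = []
for i in range(10):
    try:
        t = translator.detect(df['Title'][i])
    except Exception as e:
        # show the real failure instead of silently falling back to 'no'
        print("detect() failed on row %d: %r" % (i, e))
        t = 'no'
    langd.append(t)
df["Language"] = langd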
I'm currently using code from OpenAI baselines to train a model, using the following code in my train.py:
from baselines.common import tf_util as U
import tensorflow as tf
import gym, logging
from visak_dartdeepmimic import VisakDartDeepMimicArgParse
def train(env, initial_params_path,
          save_interval, out_prefix, num_timesteps, num_cpus):
    from baselines.ppo1 import mlp_policy, pposgd_simple
    sess = U.make_session(num_cpu=num_cpus).__enter__()
    U.initialize()

    def policy_fn(name, ob_space, ac_space):
        print("Policy with name: ", name)
        policy = mlp_policy.MlpPolicy(name=name, ob_space=ob_space, ac_space=ac_space,
                                      hid_size=64, num_hid_layers=2)
        saver = tf.train.Saver()
        if initial_params_path is not None:
            print("Tried to restore from ", initial_params_path)
            saver.restore(tf.get_default_session(), initial_params_path)
        return policy

    def callback_fn(local_vars, global_vars):
        iters = local_vars["iters_so_far"]
        saver = tf.train.Saver()
        if iters % save_interval == 0:
            saver.save(sess, out_prefix + str(iters))

    pposgd_simple.learn(env, policy_fn,
                        max_timesteps=num_timesteps,
                        callback=callback_fn,
                        timesteps_per_actorbatch=2048,
                        clip_param=0.2, entcoeff=0.0,
                        optim_epochs=10, optim_stepsize=3e-4, optim_batchsize=64,
                        gamma=1.0, lam=0.95, schedule='linear',
                        )
    env.close()
This is based on the code that OpenAI itself provides in the baselines repository.
It works fine, except that I get some pretty weird-looking learning curves, which I suspect are due to some of the hyperparameters passed to the learn function causing performance to decay / high variance as training goes on (though I don't know this for certain).
Anyway, to confirm this hypothesis I'd like to retrain the model, but not from scratch: I'd like to start it off from a high point, say iteration 1600, for which I have a saved model lying around (having saved it with saver.save in callback_fn).
So now I call the train function, but this time I provide it with an initial_params_path pointing to the save prefix for iteration 1600. By my understanding, the call to saver.restore in policy_fn should restore ("reset") the model to where it was at iteration 1600 (and I've confirmed that the load routine runs, using the print statement).
However, in practice I find that it's almost like nothing gets loaded. For instance, if I got statistics like
----------------------------------
| EpLenMean | 74.2 |
| EpRewMean | 38.7 |
| EpThisIter | 209 |
| EpisodesSoFar | 662438 |
| TimeElapsed | 2.15e+04 |
| TimestepsSoFar | 26230266 |
| ev_tdlam_before | 0.95 |
| loss_ent | 2.7640965 |
| loss_kl | 0.09064759 |
| loss_pol_entpen | 0.0 |
| loss_pol_surr | -0.048767302 |
| loss_vf_loss | 3.8620138 |
----------------------------------
for iteration 1600, then for iteration 1 of the new trial (ostensibly using 1600's parameters as a starting point), I get something like
----------------------------------
| EpLenMean | 2.12 |
| EpRewMean | 0.486 |
| EpThisIter | 7676 |
| EpisodesSoFar | 7676 |
| TimeElapsed | 12.3 |
| TimestepsSoFar | 16381 |
| ev_tdlam_before | -4.47 |
| loss_ent | 45.355236 |
| loss_kl | 0.016298374 |
| loss_pol_entpen | 0.0 |
| loss_pol_surr | -0.039200217 |
| loss_vf_loss | 0.043219414 |
----------------------------------
which is back to square one (this is around where my models trained from scratch start).
The funny thing is that I know the model is being saved properly at least, since I can actually replay it using eval.py:
from baselines.common import tf_util as U
from baselines.ppo1 import mlp_policy, pposgd_simple
import numpy as np
import tensorflow as tf
from visak_dartdeepmimic import VisakDartDeepMimicArgParse  # missing import: the arg parser is used below

class PolicyLoaderAgent(object):
    """The world's simplest agent!"""
    def __init__(self, param_path, obs_space, action_space):
        self.action_space = action_space
        self.actor = mlp_policy.MlpPolicy("pi", obs_space, action_space,
                                          hid_size=64, num_hid_layers=2)
        U.initialize()
        saver = tf.train.Saver()
        saver.restore(tf.get_default_session(), param_path)

    def act(self, observation, reward, done):
        action2, unknown = self.actor.act(False, observation)
        return action2


if __name__ == "__main__":
    parser = VisakDartDeepMimicArgParse()
    parser.add_argument("--params-prefix", required=True, type=str)
    args = parser.parse_args()
    env = parser.get_env()

    U.make_session(num_cpu=1).__enter__()
    U.initialize()

    agent = PolicyLoaderAgent(args.params_prefix, env.observation_space, env.action_space)

    while True:
        ob = env.reset(0, pos_stdv=0, vel_stdv=0)
        reward = 0  # initial reward value for the first act() call
        done = False
        while not done:
            action = agent.act(ob, reward, done)
            ob, reward, done, _ = env.step(action)
            env.render()
and I can clearly see that it's learned something compared to an untrained baseline. The loading code is the same across both files (or rather, if there's a mistake there, I can't find it), so it seems probable to me that train.py is correctly loading the model and then, due to something in the pposgd_simple.learn function, promptly forgetting about it.
Could anyone shed some light on this situation?
Not sure if this is still relevant since the baselines repository has changed quite a bit since this question was posted, but it seems that you are not actually initialising the variables before restoring them. Try moving the call of U.initialize() inside your policy_fn:
def policy_fn(name, ob_space, ac_space):
    print("Policy with name: ", name)
    policy = mlp_policy.MlpPolicy(name=name, ob_space=ob_space,
                                  ac_space=ac_space, hid_size=64, num_hid_layers=2)
    saver = tf.train.Saver()
    if initial_params_path is not None:
        print("Tried to restore from ", initial_params_path)
        U.initialize()
        saver.restore(tf.get_default_session(), initial_params_path)
    return policy
I need to print some headers around some strings. You can see it working fine here, although if the string is really long I need to split it instead of printing a much longer header.
+===================================================+
| Running: sdt_test |
| Skipping:inquiry/"Inq VPD C0" mem/"Maint In Cmnd" |
+===================================================+
sh: /net/flanders/export/ws/ned/proto/bin/sdt_test: No such file or directory
+=====================+
| Running: dtd_tester |
+=====================+
sh: /net/flanders/export/ws/ned/proto/bin/dtd_tester: No such file or directory
+===============+
| Running: pssm |
+===============+
sh: /net/flanders/export/ws/ned/proto/bin/pssm: No such file or directory
+==============+
| Running: psm |
+==============+
sh: /net/flanders/export/ws/ned/proto/bin/psm: No such file or directory
+===============================================================================================================================================================================================================================================================================================================================================+
| Running: ssm |
| Skipping:"Secondary Subset Manager Tests"/"SSSM_3 Multi Sequence" "Secondary Subset Manager Tests"/"SSSM_2 Steady State" "Secondary Subset Manager Tests"/"SSSM_4 Test Abort" "Secondary Subset Manager Tests"/"SSSM_6 Test extend" "Secondary Subset Manager Tests"/"SSSM_9 exceptions" "Secondary Subset Manager Tests"/"SSSM_11 failed io" |
+===============================================================================================================================================================================================================================================================================================================================================+
It appears fine there, although I would like the SSM test line to be split at a certain number of characters, maybe 100, or just on the whitespace between suites.
I'm really not sure how to do this; this is the code that currently produces the headers.
# calculate lengths to make sure header is correct length
l1 = len(x)
l2 = len(y)
skip = False
if 'disable=' in test and 'disable="*"' not in test:
    skip = True
# if entire test suite is to be disabled or not run
if disable:
    headerBreak = "+" + "="*(l1+12) + "+"
    print headerBreak
    print "| Skipping: %s |" % x
# if the test suite will be executed
else:
    if skip == False:
        l2 = 0
    headerBreak = "+" + "="*(max(l1,l2)+11) + "+"
    print headerBreak
    print "| Running: %s" % x, ' '*(l2-l1)+ '|'
    # if some suites are disabled but some are still running
    if skip:
        print "| Skipping:%s |" % y
print headerBreak
sys.stdout.flush()
You can use the textwrap module to simplify this.
For example, if the max width were 44:
>>> import textwrap
>>> max_width = 44
>>> header='''Skipping:"Secondary Subset Manager Tests"/"SSSM_3 Multi Sequence" "Secondary Subset Manager Tests"/"SSSM_2 Steady State" "Secondary Subset Manager Tests"/"SSSM_4 Test Abort" "Secondary Subset Manager Tests"/"SSSM_6 Test extend" "Secondary Subset Manager Tests"/"SSSM_9 exceptions" "Secondary Subset Manager Tests"/"SSSM_11 failed io"'''
>>> h = ["Running: ssm"] + textwrap.wrap(header, width=max_width-4)
>>> maxh = len(max(h, key=len))
>>> print "+=" + "="*maxh + "=+"
+==========================================+
>>> for i in h:
...     print "| " + i.ljust(maxh) + " |"
...
| Running: ssm |
| Skipping:"Secondary Subset Manager |
| Tests"/"SSSM_3 Multi Sequence" |
| "Secondary Subset Manager Tests"/"SSSM_2 |
| Steady State" "Secondary Subset Manager |
| Tests"/"SSSM_4 Test Abort" "Secondary |
| Subset Manager Tests"/"SSSM_6 Test |
| extend" "Secondary Subset Manager |
| Tests"/"SSSM_9 exceptions" "Secondary |
| Subset Manager Tests"/"SSSM_11 failed |
| io" |
>>> print "+=" + "="*maxh + "=+"
+==========================================+
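For completeness, the same idea could be folded into a small helper resembling the code in the question (a sketch only: the print_header name and the 100-character default width are assumptions, and it keeps the Python 2 print style used above):
import textwrap

def print_header(running, skipping=None, max_width=100):
    # build the box contents: one "Running:" line plus wrapped "Skipping:" lines
    lines = ["Running: " + running]
    if skipping:
        lines += textwrap.wrap("Skipping:" + skipping, width=max_width - 4)
    inner = len(max(lines, key=len))
    border = "+=" + "=" * inner + "=+"
    print border
    for line in lines:
        print "| " + line.ljust(inner) + " |"
    print border
Calling print_header(x, y) in place of the manual border arithmetic would then keep each box at most max_width characters wide.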
I'm new to GTK, and I'm trying to figure out how to accomplish something like this:
+---+------+---+
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
+---+------+---+
I want this done in an HBox. How would I accomplish this? Thanks.
It is done with "packing".
I always keep the class reference under my pillow: http://www.pygtk.org/docs/pygtk/gtk-class-reference.html
There are samples in the good tutorial found here:
http://www.pygtk.org/pygtk2tutorial/sec-DetailsOfBoxes.html
And finally, this shows something like your drawing:
import gtk as g
win = g.Window ()
win.set_default_size(600, 400)
win.set_position(g.WIN_POS_CENTER)
win.connect ('delete_event', g.main_quit)
hBox = g.HBox()
win.add (hBox)
f1 = g.Frame()
f2 = g.Frame()
f3 = g.Frame()
hBox.pack_start(f1)
hBox.pack_start(f2)
hBox.pack_start(f3)
win.show_all ()
g.main ()
Have fun! (And I hope my answer is helpful.)
The answer is pack_start() and pack_end().
These functions take a few parameters you can pass to get the desired effect.
If you use Louis' example:
hBox.pack_start(f1, expand=False, fill=False)
hBox.pack_start(f2, expand=True, fill=True, padding=50)
hBox.pack_end(f3, expand=False, fill=False)
Hope that helps!
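Putting the two answers together, a minimal runnable sketch might look like the following (just an illustration that combines Louis' window setup with the packing flags above; PyGTK 2 is assumed):
import gtk as g

win = g.Window()
win.set_default_size(600, 400)
win.set_position(g.WIN_POS_CENTER)
win.connect('delete_event', g.main_quit)

hBox = g.HBox()
win.add(hBox)

f1 = g.Frame()
f2 = g.Frame()
f3 = g.Frame()

# the outer frames keep their natural size; the middle one expands to
# take up the remaining horizontal space, with 50px of padding around it
hBox.pack_start(f1, expand=False, fill=False)
hBox.pack_start(f2, expand=True, fill=True, padding=50)
hBox.pack_end(f3, expand=False, fill=False)

win.show_all()
g.main()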