How to change a layer's parent in Python in GIMP?

Simple problem. I want to change the parent of LayerA to GroupB.
The member "parent" of layer is read only, and I can't use pdb.gimp_image_insert_layer because the layer already has been added to image. I also tried removing it first by gimp_image_remove_layer, and it also doesn't work.

I cannot find an API for this in Python. Using image.remove_layer() deletes the layer so it cannot be re-inserted, so the best I can think of is to copy the layer using something like this:
def moveLayer(image, layer, group, position):
    layerName = layer.name
    layerCopy = layer.copy()
    image.remove_layer(layer)
    layerCopy.name = layerName  # can't have two layers with the same name
    image.insert_layer(layerCopy, group, position)
    return layerCopy  # this one has a new ID
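A hypothetical call, assuming the image contains a layer named "LayerA" and a layer group named "GroupB" (both names are illustrative):
layerA = pdb.gimp_image_get_layer_by_name(image, "LayerA")
groupB = pdb.gimp_image_get_layer_by_name(image, "GroupB")
layerA = moveLayer(image, layerA, groupB, 0)  # keep the returned copy; it has a new ID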
This said, I've written many Python scripts and never needed to change a layer parent, so maybe there is a way to avoid doing this...

Related

Can I access the inner layer outputs of DeepLab in pytorch?

Using PyTorch, I am trying to implement a network that uses the pre-trained DeepLab ResNet-101.
I found two possible methods for using this network:
this one
or
torchvision.models.segmentation.deeplabv3_resnet101(
    pretrained=False, progress=True, num_classes=21, aux_loss=None, **kwargs)
However, I might not only need this network's output, but also several inside layers' outputs.
Is there a way to access the inner layer outputs using one of these methods?
If not - Is it possible to manually copy the trained resnet's parameters so I can manually recreate it and add those outputs myself? (Hopefully the first option is possible so I won't need to do this)
Thanks!
You can achieve this without too much trouble using forward hooks.
The idea is to loop over the modules of your model, find the layers you're interested in, and hook a callback function onto them. When called, those layers will trigger the hook. We will take advantage of this to save the intermediate outputs.
For example, let's say you want to get the outputs of layer classifier.0.convs.3.1:
layers = ['classifier.0.convs.3.1']
activations = {}

def forward_hook(name):
    def hook(module, x, y):
        # y is the module's output for this forward pass
        activations[name] = y
    return hook

for name, module in model.named_modules():
    if name in layers:
        module.register_forward_hook(forward_hook(name))
The closure around hook() made by forward_hook's scope is used to enclose the module's name, which you wouldn't otherwise have access to at this point.
Everything is ready; we can now call the model:
>>> model = torchvision.models.segmentation.deeplabv3_resnet101(
...     pretrained=True, progress=True, num_classes=21, aux_loss=None)
>>> model(torch.rand(16, 3, 100, 100))
And as expected, after inference, activations will have a new entry 'classifier.0.convs.3.1' which - in this case - will contain a tensor of shape (16, 256, 13, 13).
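As a side note (not part of the original snippet): register_forward_hook returns a handle, so you can detach the hooks once you have collected what you need:
handles = []
for name, module in model.named_modules():
    if name in layers:
        handles.append(module.register_forward_hook(forward_hook(name)))

# ... run inference, read results from `activations` ...

for handle in handles:
    handle.remove()  # stop capturing intermediate outputs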
Not so long ago, I wrote an answer to a similar question which goes into a little more detail on how hooks can be used to inspect intermediate output shapes.

Is there a way to create nested class attributes in Python?

I've been struggling with creating a class for my image processing code in Python.
The code requires a whole bunch of different parameters (set in a params.txt file) which can easily be grouped into different categories. For example, some are paths, some are related to the experimental geometry, some are just switches for turning certain image processing features on/off etc etc.
If my "structure" (not sure how I should create it yet) is created as P, I would like to have something like,
P = my_param_object()
P.load_parameters('path/to/params.txt')
and then, from the main code, I can access whatever elements I need like so,
print(P.paths.basepath())
'/some/path/to/data'
print(P.paths.year())
2019
print(P.filenames.lightfield())
'andor_lightfield.dat'
print(P.geometry.dist_lens_to_sample())
1.5
print(P.settings.subtract_background())
False
print(P.settings.apply_threshold())
True
I already tried creating my own class for this, but everything just ends up in one massive block; I don't know how to create nested parts of the class. For example, I have a setting and a function called "load_background". This makes sense because the load_background function always loads a specific filename from a specific location, but it should only do so if the load_background parameter is set to True.
From within the class, I tried doing something like
self.setting_load_background = False

def method_load_background(self):
    myutils.load_dat(self.background_fname())
but that's very ugly. It would be nicer to have,
if P.settings.load_background() == True:
    P.load_background()
else:
    P.generate_random_background()
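For what it's worth, here is a minimal sketch of one way to get that nesting: group the parameters per category and expose each group as a namespace object. Everything below is assumed for illustration, in particular the params.txt format (one "category.key = value" per line), and with this approach the values are plain attributes, so you would write P.settings.subtract_background rather than P.settings.subtract_background():
import ast
import types

class MyParamObject:
    def load_parameters(self, path):
        groups = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith('#'):
                    continue  # skip blanks and comments
                key, _, value = line.partition('=')
                category, _, name = key.strip().partition('.')
                try:
                    # parse numbers, booleans, quoted strings, ...
                    parsed = ast.literal_eval(value.strip())
                except (ValueError, SyntaxError):
                    parsed = value.strip()  # fall back to the raw string
                groups.setdefault(category, {})[name] = parsed
        for category, params in groups.items():
            setattr(self, category, types.SimpleNamespace(**params))

P = MyParamObject()
P.load_parameters('path/to/params.txt')
print(P.settings.subtract_background)  # e.g. False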

How to load a layer from checkpoint

I have this config:
network = {"source_embed_raw": {"class": "linear", ...}}
I want to load the params for layer source_embed_raw from some existing checkpoint.
In that checkpoint, the parameter has a different name (output/rec/target_embed_raw/W).
I understand that I can load parameters with preload_from_files, but I am not sure about the exact way to do that in my case, because the names of the layers differ, so simply adding a prefix does not do the job.
This is currently not possible with preload_from_files in this way. So I currently see these possible options:
1. We could extend the logic of preload_from_files (and CustomCheckpointLoader) to allow for something like that (some generic variable/layer name mapping).
2. You could rename your layer from source_embed_raw to e.g. old_model__target_embed_raw and then use preload_from_files with the prefix option. If you do not want to rename it, you could still add a layer like old_model__target_embed_raw and then use parameter sharing in source_embed_raw.
3. If the parameter in the checkpoint is actually called something like output/rec/target_embed_raw/..., you could create a SubnetworkLayer named old_model__output, inside that another SubnetworkLayer named rec, and inside that a layer named target_embed_raw.
4. You could write a script that simply loads the existing checkpoint and stores it as a new checkpoint with renamed variable names (this is also totally independent from RETURNN; see the sketch after this list).
5. LinearLayer (and most other layers) allows you to specify exactly how the parameters are initialized (forward_weights_init and bias_init). The parameter initialization is quite flexible; e.g., there is something like load_txt_file_initializer which can be used. Currently there is no such function to directly load from an existing checkpoint, but we could add that. Or you could simply implement the logic inside your config (it would only be some 5 lines of code).
6. Instead of using preload_from_files, you could also use SubnetworkLayer and the load_on_init option, and then a similar logic as in option 2.
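A rough sketch of the rename script from option 4, assuming a TensorFlow checkpoint (which is what RETURNN writes); the paths and the name mapping below are illustrative:
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # graph mode for the TF1-style Saver

def rename_checkpoint_vars(src_ckpt, dst_ckpt, name_map):
    # Load every variable from the old checkpoint, rename where requested,
    # and save everything into a new checkpoint.
    with tf.compat.v1.Session() as sess:
        new_vars = []
        for name, _ in tf.train.list_variables(src_ckpt):
            value = tf.train.load_variable(src_ckpt, name)
            new_vars.append(tf.compat.v1.Variable(value, name=name_map.get(name, name)))
        sess.run(tf.compat.v1.global_variables_initializer())
        tf.compat.v1.train.Saver(new_vars).save(sess, dst_ckpt)

rename_checkpoint_vars(
    "old-model/model.ckpt", "new-model/model.ckpt",
    {"output/rec/target_embed_raw/W": "source_embed_raw/W"})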

Overwriting methods via mixin pattern does not work as intended

I am trying to introduce a mod/mixin for a problem. In particular I am focusing here on a SpeechRecognitionProblem. I intend to modify this problem and therefore I seek to do the following:
class SpeechRecognitionProblemMod(speech_recognition.SpeechRecognitionProblem):

    def hparams(self, defaults, model_hparams):
        SpeechRecognitionProblem.hparams(self, defaults, model_hparams)
        vocab_size = self.feature_encoders(model_hparams.data_dir)['targets'].vocab_size
        p = defaults
        p.vocab_size['targets'] = vocab_size

    def feature_encoders(self, data_dir):
        # ...
So this one does not do much. It calls the hparams() function from the base class and then changes some values.
Now, there are already some ready-to-go problems e.g. Libri Speech:
@registry.register_problem()
class Librispeech(speech_recognition.SpeechRecognitionProblem):
    # ..
However, in order to apply my modifications I am doing this:
@registry.register_problem()
class LibrispeechMod(SpeechRecognitionProblemMod, Librispeech):
    # ..
This should, if I am not mistaken, override every method with an identical signature in Librispeech and call the functions of SpeechRecognitionProblemMod instead.
Since I was able to train a model with this code I am assuming that it's working as intended so far.
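For reference, a minimal standalone sketch of the same diamond layout (the class names are illustrative stand-ins) confirms that the method resolution order puts the mixin first:
class Base:  # stands in for SpeechRecognitionProblem
    def hparams(self):
        return "Base.hparams"

class Mod(Base):  # stands in for SpeechRecognitionProblemMod
    def hparams(self):
        return "Mod.hparams"

class Leaf(Base):  # stands in for Librispeech
    pass

class LeafMod(Mod, Leaf):  # stands in for LibrispeechMod
    pass

print([c.__name__ for c in LeafMod.__mro__])
# ['LeafMod', 'Mod', 'Leaf', 'Base', 'object']
print(LeafMod().hparams())  # 'Mod.hparams' -- the mixin's method wins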
Now here comes my problem:
After training I want to serialize the model. This usually works. However, it does not with my mod and I actually know why:
At a certain point hparams() gets called. Debugging to that point will show me the following:
self # {LibrispeechMod}
self.hparams # <bound method SpeechRecognitionProblem.hparams of ..>
self.feature_encoders # <bound method SpeechRecognitionProblemMod.feature_encoders of ..>
self.hparams should be <bound method SpeechRecognitionProblemMod.hparams of ..>! It would seem that, for some reason, hparams() of SpeechRecognitionProblem gets called directly instead of SpeechRecognitionProblemMod's. But please note that the type is correct for feature_encoders()!
The thing is that I know this works during training: I can see that the hyper-parameters (hparams) are applied, simply because the model's graph node names change with my modifications.
There is one specialty I need to point out: tensor2tensor allows dynamically loading a t2t_usr_dir, an additional Python module which gets loaded by import_usr_dir. I make use of that function in my serialization script as well:
if usr_dir:
    logging.info('Loading user dir %s' % usr_dir)
    import_usr_dir(usr_dir)
This could be the only culprit I can see at the moment, although I would not be able to tell why it may cause the problem.
If anybody sees something I do not, I'd be glad to get a hint about what I'm doing wrong here.
So what is the error you're getting?
For the sake of completeness, this is the result of the wrong hparams() method being called:
NotFoundError (see above for traceback): Restoring from checkpoint failed.
Key transformer/symbol_modality_256_256/softmax/weights_0 not found in checkpoint
symbol_modality_256_256 is wrong. It should be symbol_modality_<vocab-size>_256 where <vocab-size> is a vocabulary size which gets set in SpeechRecognitionProblemMod.hparams.
So, this weird behavior came from the fact that I was remote debugging and the source files of the usr_dir were not correctly synchronized. Everything works as intended; the source files were simply not matching.
Case closed.

How do I use the GeometryConstraint class?

I've been trying to get this to work for so long now. I've read the docs here, but I can't seem to understand how to implement the GeometryConstraint.
Normally, the command version of this would be:
geometryConstraintNode = pm.geometryConstraint(target, object)
However, in PyMEL, it looks a little nicer when setting attributes, which is why I want to use it: it's much more readable.
I've tried this:
geometryConstraintNode = nt.GeometryConstraint(target, object).setName('geoConstraint')
But no luck. Can someone take a look?
This doesn't work for you?
import pymel.core as pm

const = pm.geometryConstraint('pSphere1', 'locator1', n='geoConstraint')
print(const)
const.rename('fred')
print(const)
output would be
geoConstraint
fred
and a constraint object named 'fred'.
The pymel node is the return value that comes back from the command defined in pm.animation.geometryConstraint. What it returns is a class wrapper for the actual in-scene constraint, which is defined in pm.nodetypes.GeometryConstraint. The class version is where you get to do all the attribute setting, etc.; the command version matches the same thing in maya.cmds, sometimes with a little syntactic sugar added.
In this case, the pymel node is like any other pymel node, so things like renaming use the same '.rename' functionality inherited from DagNode. You could also use functions inherited from Transform, like 'getChildren()' or 'setParent()'. The docs make this clear in a round-about way by including the inheritance tree at the top of the nodetype's page. Basically, all pynode returns will share at least DagNode (stuff like naming) and usually Transform (things like move, rotate, parent) or Shape (query components, etc.).
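Putting that together, a small sketch that exercises the inherited methods mentioned above (the node names are illustrative):
import pymel.core as pm

const = pm.geometryConstraint('pSphere1', 'locator1', n='geoConstraint')
const.rename('fred')         # from DagNode, as shown above
print(const.getChildren())   # from Transform
const.setParent('locator1')  # also from Transform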
