How to access the embeddings using tensorflow hub.module? - python

I am using the following code to access embeddings from the TF Hub Universal Sentence Encoder:
import tensorflow as tf
import tensorflow_hub as hub

model = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def embed(input):
    return model(input)

messages = ["There is no hard limit on how long the paragraph is. Roughly, the longer the more 'diluted' the embedding will be."]
message_embeddings = embed(messages)
How can I access the actual vectors now?

The actual embedding vectors can be accessed from the variable message_embeddings.
message_embeddings is a tensor of shape (1, 512), meaning the dimensionality of the vector returned by USE-4 is 512.
In other words, every sentence is encoded as a 512-dimensional vector.
The output of the code
print(message_embeddings)
is
tf.Tensor(
[[-0.00366504 -0.00703163 -0.0061244 0.02026021 -0.09436475 0.00027828
0.05004153 -0.01591516 0.088241 0.07551358 -0.01868021 0.04386544
0.00105771 0.03730893 -0.05554571 0.02852311 0.01709696 0.08152976
-0.03092775 0.00683713 -0.08059237 0.042355 -0.07580714 -0.00443942
-0.03430099 0.03240041 -0.05212452 -0.04247908 -0.05534476 -0.02328587
-0.0438301 -0.03972115 0.01639873 0.00163302 0.07708091 -0.02310511
0.01288455 0.04831124 0.0089498 -0.02632253 -0.01840279 0.02118563
0.03758964 0.08740229 0.02880297 -0.00486817 0.0115555 -0.00451289
-0.00162866 0.01446948 0.00189139 -0.07941346 -0.0216493 -0.02580371
-0.00930381 -0.00526039 -0.01272183 0.02215818 0.04742621 0.02226813
0.0110765 -0.01790449 0.01739751 -0.08388933 0.05826297 -0.05230762
-0.07484917 0.06905693 0.01646299 0.00850342 -0.0022191 -0.07555264
0.01601691 0.06028103 0.00524664 0.03776945 -0.05246941 0.03556651
0.06253887 -0.04647287 -0.03415112 -0.03473583 0.04833042 -0.01264609
0.01788526 -0.07143527 -0.02432756 0.04081429 -0.0524265 -0.05402376
-0.02753968 0.06558003 0.01936845 -0.08112626 0.0157347 0.05620547
-0.06219236 -0.03654391 0.03936478 -0.01247254 -0.03957544 0.07394353
-0.06131149 -0.0550663 0.08301188 -0.01699291 0.03726438 0.00248359
-0.00569713 0.04109528 -0.05154289 0.05428214 -0.06594346 0.06009263
0.02753788 0.01492724 -0.01422153 0.02779302 0.02881143 -0.01985389
0.05809831 -0.02661227 -0.06907296 0.01192496 -0.03630216 0.03146286
-0.02979902 0.05192203 -0.0479207 0.03564131 0.05351846 0.02681697
0.02597373 -0.03392426 -0.05286925 -0.05110073 0.01331552 -0.00612995
-0.04932296 -0.0185418 -0.0841584 0.02415963 -0.01051812 0.05603031
-0.0083728 -0.05966095 0.0321536 -0.03968453 0.03799454 -0.05958865
-0.07585841 0.04390398 -0.03674331 0.01918785 0.03446485 -0.04106916
-0.05183128 0.02947152 -0.03531763 0.03698466 0.06261521 -0.00646621
0.01130813 -0.02275244 -0.04280937 0.01955702 -0.03919312 0.00476116
0.01887495 -0.00195181 -0.02401051 -0.06942239 -0.06978329 0.06458326
0.00362934 0.03588834 0.04921037 -0.03195003 0.02806171 -0.0193333
0.00994556 -0.02342404 0.10165592 -0.02853323 0.04147425 0.00914851
0.00497671 0.00073764 -0.00318258 0.03595887 -0.01817959 0.01496308
-0.03551586 0.02536247 -0.07170779 -0.03153825 -0.04042004 -0.01769615
0.00958568 0.00038516 0.00799816 0.04089458 0.02171035 -0.08852603
-0.06747856 0.05664572 -0.06597329 0.02299296 0.03397151 -0.03845559
0.00395073 0.00314357 0.01119022 0.05957965 -0.05583638 0.02908287
0.0112076 0.07695369 -0.03935304 -0.02383705 -0.04208985 -0.00359387
0.06851663 -0.05395376 -0.00246254 -0.01888378 -0.01391678 -0.07573339
0.05811501 0.02059502 -0.00418438 -0.01210096 -0.06286791 -0.07645103
-0.02463043 -0.03153505 0.05593796 -0.02202086 -0.00274707 0.04458077
-0.06263509 0.06126784 -0.04235342 0.00322403 0.02189728 -0.06388599
-0.03919036 -0.00010863 0.02531325 0.02581233 -0.01304512 -0.03001025
-0.02754986 0.0531372 -0.02369525 -0.04376267 0.0641819 0.09532097
-0.06730784 0.04478338 0.02004733 0.05244097 -0.01885018 -0.06137342
-0.08407518 -0.00084469 -0.02145135 -0.0091182 -0.06907462 0.06986497
0.0600312 -0.04390564 -0.00131028 0.06390417 0.03533437 0.03813365
0.04030495 -0.01402102 -0.06857175 -0.06571147 0.01421791 -0.0381003
-0.04138157 0.05040992 -0.05724671 0.01490439 -0.07905842 -0.03806996
-0.01071311 -0.01229521 -0.00771822 -0.03641455 -0.04578875 0.00925799
0.0403841 0.00132017 0.031641 0.01162737 0.0101506 -0.01761867
0.0579349 0.03595775 -0.01147426 -0.01525036 0.05006553 0.03747585
-0.05307707 -0.08915938 0.02942844 -0.05546442 -0.0128964 0.04225868
-0.01534053 -0.04580414 0.01088955 -0.03184818 0.02326705 -0.08861458
-0.07253686 -0.02572111 -0.03711193 0.0474383 -0.05628109 -0.01391787
0.00941848 -0.06177152 -0.06071901 -0.0092127 -0.10220838 -0.01376523
0.03162379 0.03983926 0.00640659 -0.00418033 -0.01612685 0.01891562
-0.04313575 0.01139805 -0.00378637 0.08349139 0.08300766 -0.0494319
-0.03658734 0.00325003 -0.05251636 -0.04457545 -0.079386 -0.05799922
-0.01254137 0.02311826 -0.00766293 -0.06729192 -0.03971054 -0.0663051
0.08720677 0.04582898 -0.08557201 -0.01054355 -0.02762848 0.06243869
-0.08848279 0.02289506 0.05723204 -0.01221769 -0.0393519 -0.00582338
0.02841124 -0.03293297 -0.03143778 -0.00352248 0.0073043 0.01209227
-0.00148794 0.03695554 0.03136331 -0.03311655 -0.0221175 -0.07959055
-0.04138357 -0.00950083 -0.01173625 0.01499144 -0.0121095 0.00823302
0.07642982 0.05198056 0.05955188 0.03240911 0.09211077 -0.05317325
-0.06024589 0.00489183 0.04719653 0.02498623 0.03750401 -0.02352423
0.05042319 -0.01633615 -0.02236294 0.04443104 0.02694818 0.00881322
0.02469178 -0.06206469 -0.00215397 -0.02641553 0.00405129 -0.07184313
-0.02841844 0.0309756 0.02459977 -0.03155032 0.01407542 0.00524732
-0.01893367 0.0102607 -0.00333736 0.02885202 -0.03275619 -0.08507563
0.02076722 -0.02471628 -0.00449985 0.0004644 -0.0923043 0.02101186
0.0352884 0.03790538 -0.00372656 0.06751391 0.02638355 0.01678842
0.03843728 0.10451197 -0.06375936 -0.05324562 0.03276567 -0.01112294
-0.0082361 -0.01735083 -0.03767544 -0.04266915 -0.04767371 0.07573947
-0.01247379 -0.01048137 -0.02308911 -0.01484709 -0.00733855 0.06788232
-0.08163249 -0.01530467 -0.01805264 -0.07910046 -0.06530869 0.07402557
0.06713054 -0.01659747 -0.00980262 0.05586078 0.03396358 -0.06102567
-0.06640005 0.02269907 0.03265672 -0.01353668 -0.08313932 -0.02356159
-0.03383274 0.05942128 -0.08610516 -0.08445066 -0.01306568 -0.05279852
0.00986506 0.00461306 0.08119206 0.00604 0.10107437 0.00191085
-0.05926891 0.01157635 0.0284292 -0.08671403 0.01851062 0.05745851
-0.06798992 0.02700593 0.00208116 -0.00829788 0.08901995 -0.00418414
-0.06217562 -0.07832154 0.02027107 0.06713033 0.04617893 0.05885412
-0.04505047 0.09581003 0.033753 -0.00888314 -0.07608356 -0.03729891
0.02724086 0.02371461 -0.01081131 -0.00809431 -0.04376922 -0.04656423
0.00886904 0.01995739]], shape=(1, 512), dtype=float32)
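If you want the raw numbers rather than the tf.Tensor wrapper, a minimal sketch (assuming TensorFlow 2.x eager execution, as in the code above) is to convert the tensor to a NumPy array and index into it:
vectors = message_embeddings.numpy()   # convert the EagerTensor to a NumPy array of shape (1, 512)
first_vector = vectors[0]              # the 512-dimensional embedding of the first (and only) sentence
print(first_vector.shape)              # (512,)
print(first_vector[:5])                # first five components of the vector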
Hope this helps. Happy Learning!

Related

How to add all standard special tokens to my hugging face tokenizer and model?

I want all special tokens to always be available. How do I do this?
My first attempt at giving them to my tokenizer:
from transformers import AutoTokenizer, PreTrainedTokenizerFast

def does_t5_have_sep_token():
    tokenizer: PreTrainedTokenizerFast = AutoTokenizer.from_pretrained('t5-small')
    assert isinstance(tokenizer, PreTrainedTokenizerFast)
    print(tokenizer)
    print(f'{len(tokenizer)=}')
    # print(f'{tokenizer.all_special_tokens=}')
    print(f'{tokenizer.sep_token=}')
    print(f'{tokenizer.eos_token=}')
    print(f'{tokenizer.all_special_tokens=}')

    special_tokens_dict = {'additional_special_tokens': ['<bos>', '<cls>', '<s>'] + tokenizer.all_special_tokens}
    num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)

    print(f'{tokenizer.sep_token=}')
    print(f'{tokenizer.eos_token=}')
    print(f'{tokenizer.all_special_tokens=}')

if __name__ == '__main__':
    does_t5_have_sep_token()
    print('Done\a')
but it feels hacky.
refs:
https://github.com/huggingface/tokenizers/issues/247
https://discuss.huggingface.co/t/how-to-add-all-standard-special-tokens-to-my-tokenizer-and-model/21529
seems useful: https://huggingface.co/docs/transformers/v4.21.1/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings
I want to add the standard tokens by adding the right "standard tokens". The solution provided didn't work for me, since .bos_token is still None. See:
tokenizer.bos_token=None
tokenizer.cls_token=None
tokenizer.sep_token=None
tokenizer.mask_token=None
tokenizer.eos_token='</s>'
tokenizer.unk_token='<unk>'
tokenizer.bos_token_id=None
tokenizer.cls_token_id=None
tokenizer.sep_token_id=None
tokenizer.mask_token_id=None
tokenizer.eos_token_id=1
tokenizer.unk_token_id=2
tokenizer.all_special_tokens=['</s>', '<unk>', '<pad>', '<extra_id_0>', '<extra_id_1>', '<extra_id_2>', '<extra_id_3>', '<extra_id_4>', '<extra_id_5>', '<extra_id_6>', '<extra_id_7>', '<extra_id_8>', '<extra_id_9>', '<extra_id_10>', '<extra_id_11>', '<extra_id_12>', '<extra_id_13>', '<extra_id_14>', '<extra_id_15>', '<extra_id_16>', '<extra_id_17>', '<extra_id_18>', '<extra_id_19>', '<extra_id_20>', '<extra_id_21>', '<extra_id_22>', '<extra_id_23>', '<extra_id_24>', '<extra_id_25>', '<extra_id_26>', '<extra_id_27>', '<extra_id_28>', '<extra_id_29>', '<extra_id_30>', '<extra_id_31>', '<extra_id_32>', '<extra_id_33>', '<extra_id_34>', '<extra_id_35>', '<extra_id_36>', '<extra_id_37>', '<extra_id_38>', '<extra_id_39>', '<extra_id_40>', '<extra_id_41>', '<extra_id_42>', '<extra_id_43>', '<extra_id_44>', '<extra_id_45>', '<extra_id_46>', '<extra_id_47>', '<extra_id_48>', '<extra_id_49>', '<extra_id_50>', '<extra_id_51>', '<extra_id_52>', '<extra_id_53>', '<extra_id_54>', '<extra_id_55>', '<extra_id_56>', '<extra_id_57>', '<extra_id_58>', '<extra_id_59>', '<extra_id_60>', '<extra_id_61>', '<extra_id_62>', '<extra_id_63>', '<extra_id_64>', '<extra_id_65>', '<extra_id_66>', '<extra_id_67>', '<extra_id_68>', '<extra_id_69>', '<extra_id_70>', '<extra_id_71>', '<extra_id_72>', '<extra_id_73>', '<extra_id_74>', '<extra_id_75>', '<extra_id_76>', '<extra_id_77>', '<extra_id_78>', '<extra_id_79>', '<extra_id_80>', '<extra_id_81>', '<extra_id_82>', '<extra_id_83>', '<extra_id_84>', '<extra_id_85>', '<extra_id_86>', '<extra_id_87>', '<extra_id_88>', '<extra_id_89>', '<extra_id_90>', '<extra_id_91>', '<extra_id_92>', '<extra_id_93>', '<extra_id_94>', '<extra_id_95>', '<extra_id_96>', '<extra_id_97>', '<extra_id_98>', '<extra_id_99>']
Using bos_token, but it is not set yet.
Using cls_token, but it is not set yet.
Using sep_token, but it is not set yet.
Using mask_token, but it is not set yet.
code:
from transformers import AutoTokenizer, PreTrainedTokenizerFast

def does_t5_have_sep_token():
    """
    https://huggingface.co/docs/transformers/v4.21.1/en/main_classes/model#transformers.PreTrainedModel.resize_token_embeddings
    """
    import torch
    from transformers import AutoModelForSeq2SeqLM

    tokenizer: PreTrainedTokenizerFast = AutoTokenizer.from_pretrained('t5-small')
    assert isinstance(tokenizer, PreTrainedTokenizerFast)
    print(tokenizer)
    print(f'{len(tokenizer)=}')
    print()

    print(f'{tokenizer.sep_token=}')
    print(f'{tokenizer.eos_token=}')
    print(f'{tokenizer.all_special_tokens=}')
    print()

    # special_tokens_dict = {'additional_special_tokens': ['<bos>', '<cls>', '<s>'] + tokenizer.all_special_tokens}
    # num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
    tokenizer.add_tokens([f"_{n}" for n in range(1, 100)], special_tokens=True)
    model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
    assert isinstance(model, torch.nn.Module)
    model.resize_token_embeddings(len(tokenizer))
    # tokenizer.save_pretrained('pathToExtendedTokenizer/')
    # tokenizer = T5Tokenizer.from_pretrained("sandbox/t5_models/pretrained/tokenizer/")
    print()

    print(f'{tokenizer.bos_token=}')
    print(f'{tokenizer.cls_token=}')
    print(f'{tokenizer.sep_token=}')
    print(f'{tokenizer.mask_token=}')
    print(f'{tokenizer.eos_token=}')
    print(f'{tokenizer.unk_token=}')
    print(f'{tokenizer.bos_token_id=}')
    print(f'{tokenizer.cls_token_id=}')
    print(f'{tokenizer.sep_token_id=}')
    print(f'{tokenizer.mask_token_id=}')
    print(f'{tokenizer.eos_token_id=}')
    print(f'{tokenizer.unk_token_id=}')
    print(f'{tokenizer.all_special_tokens=}')
    print()

if __name__ == '__main__':
    does_t5_have_sep_token()
    print('Done\a')
I do not entirely understand what you're trying to accomplish, but here are some notes that might help:
T5 documentation shows that T5 has only three special tokens (</s>, <unk> and <pad>). You can also see this in the T5Tokenizer class definition. I am confident this is because the original T5 model was trained only with these special tokens (no BOS, no MASK, no CLS).
Running, e.g.,
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('t5-small')
print(tokenizer.all_special_tokens)
will show you these three tokens as well as the <extra_id_*> tokens.
Is there a reason you want the other tokens like BOS?
(Edit - to answer your comments):
(I really think you would benefit from reading the linked documentation at Hugging Face. The point of a pretrained model is to take advantage of what has already been done. T5 does not use BOS or CLS in the way you seem to be imagining. Maybe you can get it to work, but IMO it makes more sense to adapt the task you want to solve to the T5 approach.)
</s> is the sep token and is already available.
As I understand it, for the T5 model, masking (for the sake of ignoring loss) is implemented using attention_mask. On the other hand, if you want to "fill in the blank", then <extra_id_*> tokens are used to indicate to the model that it should predict the missing span (this is how the self-supervised pretraining is done). See the section on training in the documentation.
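For concreteness, here is a minimal sketch of that denoising setup, closely following the example in the Hugging Face T5 documentation (the sentinel strings are taken from that example):
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Corrupted input: each masked span is replaced by a sentinel token.
input_ids = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt").input_ids
# Target: each sentinel is followed by the span it replaced.
labels = tokenizer("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt").input_ids

loss = model(input_ids=input_ids, labels=labels).loss  # cross-entropy over the target tokens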
BOS is similar: T5 is not trained to use a BOS token. E.g. (again from the documentation):
Note that T5 uses the pad_token_id as the decoder_start_token_id, so
when doing generation without using generate(), make sure you start it
with the pad_token_id.
T5 does not use the CLS token. If you want to do classification, you should fine-tune on a new task (or find a corresponding one done in pretraining), training the model to generate a word (or words) that corresponds to the classification you want (a sketch follows below the quoted documentation).
(again from documentation:)
Build model inputs from a sequence or a pair of sequence for sequence
classification tasks by concatenating and adding special tokens. A
sequence has the following format:
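To make the classification-as-generation point concrete, here is a hedged sketch; the "sst2 sentence:" prefix and the label words come from T5's multi-task pretraining setup as I recall it, so treat the exact strings as illustrative:
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Classification framed as text-to-text: the model generates the label word.
input_ids = tokenizer("sst2 sentence: This movie was surprisingly good!", return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # e.g. "positive"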
I think this is correct. Please correct me if I'm wrong:
from typing import Union
from transformers import PreTrainedTokenizer, PreTrainedTokenizerFast

def add_special_all_special_tokens(tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast]):
    """
    special_tokens_dict = {"cls_token": "<CLS>"}
    num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
    print("We have added", num_added_toks, "tokens")
    # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
    model.resize_token_embeddings(len(tokenizer))
    assert tokenizer.cls_token == "<CLS>"
    """
    original_len: int = len(tokenizer)
    num_added_toks: dict = {}
    if tokenizer.bos_token is None:
        num_added_toks['bos_token'] = "<bos>"
    if tokenizer.cls_token is None:
        num_added_toks['cls_token'] = "<cls>"
    if tokenizer.sep_token is None:
        num_added_toks['sep_token'] = "<s>"
    if tokenizer.mask_token is None:
        num_added_toks['mask_token'] = "<mask>"
    # num_added_toks = {"bos_token": "<bos>", "cls_token": "<cls>", "sep_token": "<s>", "mask_token": "<mask>"}
    # special_tokens_dict = {'additional_special_tokens': new_special_tokens + tokenizer.all_special_tokens}
    num_new_tokens: int = tokenizer.add_special_tokens(num_added_toks)
    assert tokenizer.bos_token == "<bos>"
    assert tokenizer.cls_token == "<cls>"
    assert tokenizer.sep_token == "<s>"
    assert tokenizer.mask_token == "<mask>"
    msg = f"Error, not equal: {len(tokenizer)=}, {original_len + num_new_tokens=}"
    assert len(tokenizer) == original_len + num_new_tokens, msg
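For completeness, here is a minimal usage sketch (the 't5-small' checkpoint is just illustrative; the resize step mirrors the docstring quoted below):
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('t5-small')
model = AutoModelForSeq2SeqLM.from_pretrained('t5-small')

add_special_all_special_tokens(tokenizer)
# The embedding matrix must be resized to cover the newly added special tokens.
model.resize_token_embeddings(len(tokenizer))
print(tokenizer.all_special_tokens)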
Here is the comment from the docs that inspired my answer:
def add_special_tokens(self, special_tokens_dict: Dict[str, Union[str, AddedToken]]) -> int:
"""
Add a dictionary of special tokens (eos, pad, cls, etc.) to the encoder and link them to class attributes. If
special tokens are NOT in the vocabulary, they are added to it (indexed starting from the last index of the
current vocabulary).
Note: When adding new tokens to the vocabulary, you should make sure to also resize the token embedding
matrix of the model so that its embedding matrix matches the tokenizer.
In order to do that, please use the [`~PreTrainedModel.resize_token_embeddings`] method.
Using `add_special_tokens` will ensure your special tokens can be used in several ways:
- Special tokens are carefully handled by the tokenizer (they are never split).
- You can easily refer to special tokens using tokenizer class attributes like `tokenizer.cls_token`. This
makes it easy to develop model-agnostic training and fine-tuning scripts.
When possible, special tokens are already registered for provided pretrained models (for instance
[`BertTokenizer`] `cls_token` is already registered to be `'[CLS]'` and XLM's one is also registered to be
`'</s>'`).
Args:
special_tokens_dict (dictionary *str* to *str* or `tokenizers.AddedToken`):
Keys should be in the list of predefined special attributes: [`bos_token`, `eos_token`, `unk_token`,
`sep_token`, `pad_token`, `cls_token`, `mask_token`, `additional_special_tokens`].
Tokens are only added if they are not already in the vocabulary (tested by checking if the tokenizer
assign the index of the `unk_token` to them).
Returns:
`int`: Number of tokens added to the vocabulary.
Examples:
```python
# Let's see how to add a new classification token to GPT-2
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
special_tokens_dict = {"cls_token": "<CLS>"}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
print("We have added", num_added_toks, "tokens")
# Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
model.resize_token_embeddings(len(tokenizer))
assert tokenizer.cls_token == "<CLS>"
```"""
It was in HF's tokenization_utils_base.py.
I think the right answer is here: https://stackoverflow.com/a/73361984/1601580
Links can be bad answers so here is the code:
from typing import Union
from transformers import PreTrainedTokenizer, PreTrainedTokenizerFast

def add_special_all_special_tokens(tokenizer: Union[PreTrainedTokenizer, PreTrainedTokenizerFast]):
    """
    special_tokens_dict = {"cls_token": "<CLS>"}
    num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
    print("We have added", num_added_toks, "tokens")
    # Notice: resize_token_embeddings expect to receive the full size of the new vocabulary, i.e., the length of the tokenizer.
    model.resize_token_embeddings(len(tokenizer))
    assert tokenizer.cls_token == "<CLS>"
    """
    original_len: int = len(tokenizer)
    num_added_toks: dict = {}
    if tokenizer.bos_token is None:
        num_added_toks['bos_token'] = "<bos>"
    if tokenizer.cls_token is None:
        num_added_toks['cls_token'] = "<cls>"
    if tokenizer.sep_token is None:
        num_added_toks['sep_token'] = "<s>"
    if tokenizer.mask_token is None:
        num_added_toks['mask_token'] = "<mask>"
    # num_added_toks = {"bos_token": "<bos>", "cls_token": "<cls>", "sep_token": "<s>", "mask_token": "<mask>"}
    # special_tokens_dict = {'additional_special_tokens': new_special_tokens + tokenizer.all_special_tokens}
    num_new_tokens: int = tokenizer.add_special_tokens(num_added_toks)
    assert tokenizer.bos_token == "<bos>"
    assert tokenizer.cls_token == "<cls>"
    assert tokenizer.sep_token == "<s>"
    assert tokenizer.mask_token == "<mask>"
    err_msg = f"Error, not equal: {len(tokenizer)=}, {original_len + num_new_tokens=}"
    assert len(tokenizer) == original_len + num_new_tokens, err_msg
Feedback is always welcome.

Error while loading model weights in pytorch

My model was in a .pth file, and to load the model I wrote this code:
model = torch.jit.load('/content/drive/MyDrive/fod.pth')
torch.save(model.state_dict(), 'weights.pt')
u2net = U2NETP()
u2net.eval()
u2net.load_state_dict(torch.load('/content/weights.pt'), strict = False)
U2NETP is the network architecture, but the problem is that I am getting an error that goes like this:
_IncompatibleKeys(missing_keys=['stage1.rebnconvin.bn_s1.running_mean', 'stage1.rebnconvin.bn_s1.running_var', 'stage1.rebnconv1.bn_s1.running_mean', 'stage1.rebnconv1.bn_s1.running_var', 'stage1.rebnconv2.bn_s1.running_mean', 'stage1.rebnconv2.bn_s1.running_var', 'stage1.rebnconv3.bn_s1.running_mean', 'stage1.rebnconv3.bn_s1.running_var', 'stage1.rebnconv4.bn_s1.running_mean', 'stage1.rebnconv4.bn_s1.running_var', 'stage1.rebnconv5.bn_s1.running_mean', 'stage1.rebnconv5.bn_s1.running_var', 'stage1.rebnconv6.bn_s1.running_mean', 'stage1.rebnconv6.bn_s1.running_var', 'stage1.rebnconv7.bn_s1.running_mean', 'stage1.rebnconv7.bn_s1.running_var', 'stage1.rebnconv6d.bn_s1.running_mean', 'stage1.rebnconv6d.bn_s1.running_var', 'stage1.rebnconv5d.bn_s1.running_mean', 'stage1.rebnconv5d.bn_s1.running_var', 'stage1.rebnconv4d.bn_s1.running_mean', 'stage1.rebnconv4d.bn_s1.running_var', 'stage1.rebnconv3d.bn_s1.running_mean', 'stage1.rebnconv3d.bn_s1.running_var', 'stage1.rebnconv2d.bn_s1.running_mean', 'stage1.rebnconv2d.bn_s1.running_var', 'stage1.rebnconv1d.bn_s1.running_mean', 'stage1.rebnconv1d.bn_s1.running_var', 'stage2.rebnconvin.bn_s1.running_mean', 'stage2.rebnconvin.bn_s1.running_var', 'stage2.rebnconv1.bn_s1.running_mean', 'stage2.rebnconv1.bn_s1.running_var', 'stage2.rebnconv2.bn_s1.running_mean', 'stage2.rebnconv2.bn_s1.running_var', 'stage2.rebnconv3.bn_s1.running_mean', 'stage2.rebnconv3.bn_s1.running_var', 'stage2.rebnconv4.bn_s1.running_mean', 'stage2.rebnconv4.bn_s1.running_var', 'stage2.rebnconv5.bn_s1.running_mean', 'stage2.rebnconv5.bn_s1.running_var', 'stage2.rebnconv6.bn_s1.running_mean', 'stage2.rebnconv6.bn_s1.running_var', 'stage2.rebnconv5d.bn_s1.running_mean', 'stage2.rebnconv5d.bn_s1.running_var', 'stage2.rebnconv4d.bn_s1.running_mean', 'stage2.rebnconv4d.bn_s1.running_var', 'stage2.rebnconv3d.bn_s1.running_mean', 'stage2.rebnconv3d.bn_s1.running_var', 'stage2.rebnconv2d.bn_s1.running_mean', 'stage2.rebnconv2d.bn_s1.running_var', 'stage2.rebnconv1d.bn_s1.running_mean', 'stage2.rebnconv1d.bn_s1.running_var', 'stage3.rebnconvin.bn_s1.running_mean', 'stage3.rebnconvin.bn_s1.running_var', 'stage3.rebnconv1.bn_s1.running_mean', 'stage3.rebnconv1.bn_s1.running_var', 'stage3.rebnconv2.bn_s1.running_mean', 'stage3.rebnconv2.bn_s1.running_var', 'stage3.rebnconv3.bn_s1.running_mean', 'stage3.rebnconv3.bn_s1.running_var', 'stage3.rebnconv4.bn_s1.running_mean', 'stage3.rebnconv4.bn_s1.running_var', 'stage3.rebnconv5.bn_s1.running_mean', 'stage3.rebnconv5.bn_s1.running_var', 'stage3.rebnconv4d.bn_s1.running_mean', 'stage3.rebnconv4d.bn_s1.running_var', 'stage3.rebnconv3d.bn_s1.running_mean', 'stage3.rebnconv3d.bn_s1.running_var', 'stage3.rebnconv2d.bn_s1.running_mean', 'stage3.rebnconv2d.bn_s1.running_var', 'stage3.rebnconv1d.bn_s1.running_mean', 'stage3.rebnconv1d.bn_s1.running_var', 'stage4.rebnconvin.bn_s1.running_mean', 'stage4.rebnconvin.bn_s1.running_var', 'stage4.rebnconv1.bn_s1.running_mean', 'stage4.rebnconv1.bn_s1.running_var', 'stage4.rebnconv2.bn_s1.running_mean', 'stage4.rebnconv2.bn_s1.running_var', 'stage4.rebnconv3.bn_s1.running_mean', 'stage4.rebnconv3.bn_s1.running_var', 'stage4.rebnconv4.bn_s1.running_mean', 'stage4.rebnconv4.bn_s1.running_var', 'stage4.rebnconv3d.bn_s1.running_mean', 'stage4.rebnconv3d.bn_s1.running_var', 'stage4.rebnconv2d.bn_s1.running_mean', 'stage4.rebnconv2d.bn_s1.running_var', 'stage4.rebnconv1d.bn_s1.running_mean', 'stage4.rebnconv1d.bn_s1.running_var', 'stage5.rebnconvin.bn_s1.running_mean', 'stage5.rebnconvin.bn_s1.running_var', 
'stage5.rebnconv1.bn_s1.running_mean', 'stage5.rebnconv1.bn_s1.running_var', 'stage5.rebnconv2.bn_s1.running_mean', 'stage5.rebnconv2.bn_s1.running_var', 'stage5.rebnconv3.bn_s1.running_mean', 'stage5.rebnconv3.bn_s1.running_var', 'stage5.rebnconv4.bn_s1.running_mean', 'stage5.rebnconv4.bn_s1.running_var', 'stage5.rebnconv3d.bn_s1.running_mean', 'stage5.rebnconv3d.bn_s1.running_var', 'stage5.rebnconv2d.bn_s1.running_mean', 'stage5.rebnconv2d.bn_s1.running_var', 'stage5.rebnconv1d.bn_s1.running_mean', 'stage5.rebnconv1d.bn_s1.running_var', 'stage6.rebnconvin.bn_s1.running_mean', 'stage6.rebnconvin.bn_s1.running_var', 'stage6.rebnconv1.bn_s1.running_mean', 'stage6.rebnconv1.bn_s1.running_var', 'stage6.rebnconv2.bn_s1.running_mean', 'stage6.rebnconv2.bn_s1.running_var', 'stage6.rebnconv3.bn_s1.running_mean', 'stage6.rebnconv3.bn_s1.running_var', 'stage6.rebnconv4.bn_s1.running_mean', 'stage6.rebnconv4.bn_s1.running_var', 'stage6.rebnconv3d.bn_s1.running_mean', 'stage6.rebnconv3d.bn_s1.running_var', 'stage6.rebnconv2d.bn_s1.running_mean', 'stage6.rebnconv2d.bn_s1.running_var', 'stage6.rebnconv1d.bn_s1.running_mean', 'stage6.rebnconv1d.bn_s1.running_var', 'stage5d.rebnconvin.bn_s1.running_mean', 'stage5d.rebnconvin.bn_s1.running_var', 'stage5d.rebnconv1.bn_s1.running_mean', 'stage5d.rebnconv1.bn_s1.running_var', 'stage5d.rebnconv2.bn_s1.running_mean', 'stage5d.rebnconv2.bn_s1.running_var', 'stage5d.rebnconv3.bn_s1.running_mean', 'stage5d.rebnconv3.bn_s1.running_var', 'stage5d.rebnconv4.bn_s1.running_mean', 'stage5d.rebnconv4.bn_s1.running_var', 'stage5d.rebnconv3d.bn_s1.running_mean', 'stage5d.rebnconv3d.bn_s1.running_var', 'stage5d.rebnconv2d.bn_s1.running_mean', 'stage5d.rebnconv2d.bn_s1.running_var', 'stage5d.rebnconv1d.bn_s1.running_mean', 'stage5d.rebnconv1d.bn_s1.running_var', 'stage4d.rebnconvin.bn_s1.running_mean', 'stage4d.rebnconvin.bn_s1.running_var', 'stage4d.rebnconv1.bn_s1.running_mean', 'stage4d.rebnconv1.bn_s1.running_var', 'stage4d.rebnconv2.bn_s1.running_mean', 'stage4d.rebnconv2.bn_s1.running_var', 'stage4d.rebnconv3.bn_s1.running_mean', 'stage4d.rebnconv3.bn_s1.running_var', 'stage4d.rebnconv4.bn_s1.running_mean', 'stage4d.rebnconv4.bn_s1.running_var', 'stage4d.rebnconv3d.bn_s1.running_mean', 'stage4d.rebnconv3d.bn_s1.running_var', 'stage4d.rebnconv2d.bn_s1.running_mean', 'stage4d.rebnconv2d.bn_s1.running_var', 'stage4d.rebnconv1d.bn_s1.running_mean', 'stage4d.rebnconv1d.bn_s1.running_var', 'stage3d.rebnconvin.bn_s1.running_mean', 'stage3d.rebnconvin.bn_s1.running_var', 'stage3d.rebnconv1.bn_s1.running_mean', 'stage3d.rebnconv1.bn_s1.running_var', 'stage3d.rebnconv2.bn_s1.running_mean', 'stage3d.rebnconv2.bn_s1.running_var', 'stage3d.rebnconv3.bn_s1.running_mean', 'stage3d.rebnconv3.bn_s1.running_var', 'stage3d.rebnconv4.bn_s1.running_mean', 'stage3d.rebnconv4.bn_s1.running_var', 'stage3d.rebnconv5.bn_s1.running_mean', 'stage3d.rebnconv5.bn_s1.running_var', 'stage3d.rebnconv4d.bn_s1.running_mean', 'stage3d.rebnconv4d.bn_s1.running_var', 'stage3d.rebnconv3d.bn_s1.running_mean', 'stage3d.rebnconv3d.bn_s1.running_var', 'stage3d.rebnconv2d.bn_s1.running_mean', 'stage3d.rebnconv2d.bn_s1.running_var', 'stage3d.rebnconv1d.bn_s1.running_mean', 'stage3d.rebnconv1d.bn_s1.running_var', 'stage2d.rebnconvin.bn_s1.running_mean', 'stage2d.rebnconvin.bn_s1.running_var', 'stage2d.rebnconv1.bn_s1.running_mean', 'stage2d.rebnconv1.bn_s1.running_var', 'stage2d.rebnconv2.bn_s1.running_mean', 'stage2d.rebnconv2.bn_s1.running_var', 'stage2d.rebnconv3.bn_s1.running_mean', 
'stage2d.rebnconv3.bn_s1.running_var', 'stage2d.rebnconv4.bn_s1.running_mean', 'stage2d.rebnconv4.bn_s1.running_var', 'stage2d.rebnconv5.bn_s1.running_mean', 'stage2d.rebnconv5.bn_s1.running_var', 'stage2d.rebnconv6.bn_s1.running_mean', 'stage2d.rebnconv6.bn_s1.running_var', 'stage2d.rebnconv5d.bn_s1.running_mean', 'stage2d.rebnconv5d.bn_s1.running_var', 'stage2d.rebnconv4d.bn_s1.running_mean', 'stage2d.rebnconv4d.bn_s1.running_var', 'stage2d.rebnconv3d.bn_s1.running_mean', 'stage2d.rebnconv3d.bn_s1.running_var', 'stage2d.rebnconv2d.bn_s1.running_mean', 'stage2d.rebnconv2d.bn_s1.running_var', 'stage2d.rebnconv1d.bn_s1.running_mean', 'stage2d.rebnconv1d.bn_s1.running_var', 'stage1d.rebnconvin.bn_s1.running_mean', 'stage1d.rebnconvin.bn_s1.running_var', 'stage1d.rebnconv1.bn_s1.running_mean', 'stage1d.rebnconv1.bn_s1.running_var', 'stage1d.rebnconv2.bn_s1.running_mean', 'stage1d.rebnconv2.bn_s1.running_var', 'stage1d.rebnconv3.bn_s1.running_mean', 'stage1d.rebnconv3.bn_s1.running_var', 'stage1d.rebnconv4.bn_s1.running_mean', 'stage1d.rebnconv4.bn_s1.running_var', 'stage1d.rebnconv5.bn_s1.running_mean', 'stage1d.rebnconv5.bn_s1.running_var', 'stage1d.rebnconv6.bn_s1.running_mean', 'stage1d.rebnconv6.bn_s1.running_var', 'stage1d.rebnconv7.bn_s1.running_mean', 'stage1d.rebnconv7.bn_s1.running_var', 'stage1d.rebnconv6d.bn_s1.running_mean', 'stage1d.rebnconv6d.bn_s1.running_var', 'stage1d.rebnconv5d.bn_s1.running_mean', 'stage1d.rebnconv5d.bn_s1.running_var', 'stage1d.rebnconv4d.bn_s1.running_mean', 'stage1d.rebnconv4d.bn_s1.running_var', 'stage1d.rebnconv3d.bn_s1.running_mean', 'stage1d.rebnconv3d.bn_s1.running_var', 'stage1d.rebnconv2d.bn_s1.running_mean', 'stage1d.rebnconv2d.bn_s1.running_var', 'stage1d.rebnconv1d.bn_s1.running_mean', 'stage1d.rebnconv1d.bn_s1.running_var'], unexpected_keys=[])
sd = model.state_dict()
for param_tensor in sd:
    print(param_tensor, "\t", sd[param_tensor].size())
I used this code to print the weights. It seems like the state dict contains the weight and bias keys, but not the running mean/variance buffers.
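One way to see exactly which buffers are missing is to compare the key sets of the two state dicts; a minimal sketch, reusing sd and u2net from the code above:
saved_keys = set(sd.keys())
expected_keys = set(u2net.state_dict().keys())

# Keys the architecture expects but the saved file does not provide (e.g. BatchNorm running stats).
print(sorted(expected_keys - saved_keys))
# Keys in the saved file that the architecture does not expect.
print(sorted(saved_keys - expected_keys))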

Tensorflow synapse output and input as text file

I have an object-detection output result from TensorFlow, which is a 13*13*125 float32 array (tiny-yolo-v2) that I can write out to a text file.
Now I want to write another program to parse that text file.
How can I use numpy, or something else, to load the text file back into an array?
I just want to load the text file and place each value in its right position, just like the original (13*13*125) array, so I can do something with it.
Below is a sample of the text output; the full file is too long, so I truncated some of it.
[[[ 1.02357492e-02 -4.58745286e-02 -3.53223026e-01 1.05358012e-01
-4.35989094e+00 -6.99408484e+00 -4.73687983e+00 -4.24568987e+00
-5.61564398e+00 -2.89398456e+00 -6.23602247e+00 -1.50982809e+00
-5.22811985e+00 -2.62615561e+00 -5.22997189e+00 -7.89167452e+00
-5.44317961e+00 -6.20110512e+00 -5.04192924e+00 -2.02722573e+00
-4.10300493e+00 -6.34364843e+00 -5.39359808e+00 -8.28967571e+00
-5.09567547e+00 7.58925438e-01 3.76614660e-01 -1.04403806e+00
-6.47649229e-01 -3.87396884e+00 -6.38732052e+00 -4.58577824e+00
-4.16016054e+00 -5.30572081e+00 -3.85717177e+00 -6.05014038e+00
-1.81165671e+00 -3.75763679e+00 -2.88092995e+00 -4.67342377e+00
-4.71987104e+00 -4.39510489e+00 -5.11723280e+00 -5.39322329e+00
-2.66565347e+00 -3.81883574e+00 -5.66908169e+00 -4.71839476e+00
-6.30157852e+00 -4.82082224e+00 2.92960078e-01 5.97341239e-01
-1.40992713e+00 -1.11044288e+00 -4.18370819e+00 -7.12151051e+00
-5.35866928e+00 -4.90489340e+00 -5.99866199e+00 -5.80428791e+00
-6.52985144e+00 -3.30560398e+00 -5.25323820e+00 -3.49718428e+00
-5.35445309e+00 -4.74132204e+00 -5.51197958e+00 -5.49749136e+00
-5.92978096e+00 -3.28844571e+00 -5.49212742e+00 -6.66666460e+00
-4.93477535e+00 -7.05240107e+00 -5.52950144e+00 3.19003403e-01
4.28509444e-01 -1.37157345e+00 -7.52444148e-01 -5.07417107e+00
-6.86192179e+00 -5.41829157e+00 -4.65392542e+00 -6.23983860e+00
-5.52950287e+00 -5.39464998e+00 -3.28175688e+00 -4.32805681e+00
-3.33854008e+00 -5.02955580e+00 -5.26897430e+00 -4.58855247e+00
-5.45577812e+00 -7.06963253e+00 -3.20757794e+00 -4.94750690e+00
-6.26993036e+00 -5.85515499e+00 -7.07721949e+00 -5.31591606e+00
3.89474362e-01 3.98911506e-01 -1.40775967e+00 -1.07488203e+00
-3.43887043e+00 -7.79423666e+00 -6.68394804e+00 -5.61368179e+00
-6.34769297e+00 -6.83281803e+00 -6.70960665e+00 -3.65747070e+00
-5.44360542e+00 -4.12546444e+00 -5.75262260e+00 -5.18836164e+00
-5.15945435e+00 -6.07800007e+00 -6.70447493e+00 -4.12499619e+00
-6.12767649e+00 -6.96556044e+00 -4.83318233e+00 -7.69717407e+00
-6.56721640e+00]
[ 1.61234528e-01 -4.70345289e-01 2.98500329e-01 8.74740630e-03
-8.38169765e+00 -1.54907408e+01 -1.07314129e+01 -8.34741211e+00
-1.13781738e+01 -7.54940701e+00 -1.22155275e+01 -4.80996466e+00
-1.10561914e+01 -5.93502522e+00 -1.11998129e+01 -1.67726002e+01
-1.05111227e+01 -1.29506483e+01 -9.64104748e+00 -5.52446413e+00
-8.78175163e+00 -1.36118546e+01 -1.10795031e+01 -1.83356972e+01
-1.16452293e+01 4.82036829e-01 5.17037362e-02 -9.76964355e-01
-1.00542104e+00 -8.73878384e+00 -1.47731256e+01 -9.41302872e+00
-7.78294754e+00 -1.06155653e+01 -8.81066895e+00 -1.25413752e+01
-5.20565796e+00 -8.54619980e+00 -6.01115370e+00 -1.01015797e+01
-9.61278534e+00 -8.36357498e+00 -1.08267517e+01 -1.04049816e+01
-5.86520195e+00 -7.45389271e+00 -1.18097382e+01 -9.29529381e+00
-1.35695248e+01 -1.10684004e+01 -4.49465424e-01 7.23291993e-01
-1.33733368e+00 -1.57884443e+00 -9.29384422e+00 -1.66824398e+01
-1.17275839e+01 -1.19206190e+01 -1.26649590e+01 -1.42257290e+01
-1.43436852e+01 -9.10658264e+00 -1.34929924e+01 -8.85398579e+00
-1.30774622e+01 -9.42021656e+00 -1.28294401e+01 -1.38840723e+01
-1.24558411e+01 -8.84429169e+00 -1.17602949e+01 -1.50427999e+01
-1.07627420e+01 -1.64452419e+01 -1.44604845e+01 -1.37743965e-01
9.12959814e-01 -1.42015231e+00 -1.60620236e+00 -1.05381451e+01
-1.63558159e+01 -1.20377207e+01 -9.80946445e+00 -1.27067585e+01
-1.21898088e+01 -1.22341051e+01 -7.98180962e+00 -1.05312557e+01
-7.84510660e+00 -1.18309231e+01 -1.10772572e+01 -9.54380226e+00
-1.25601234e+01 -1.43880062e+01 -6.95701408e+00 -1.07601147e+01
-1.41597824e+01 -1.32586575e+01 -1.58946438e+01 -1.29948921e+01
-1.14585623e-01 6.38350546e-01 -1.65523076e+00 -1.67830050e+00
-6.13089418e+00 -1.99999962e+01 -1.58348789e+01 -1.47631931e+01
-1.50945425e+01 -1.69028625e+01 -1.67195797e+01 -1.10481138e+01
-1.45758791e+01 -1.19301071e+01 -1.46967478e+01 -1.34157457e+01
-1.31685534e+01 -1.52268438e+01 -1.63326721e+01 -1.15273895e+01
-1.43742008e+01 -1.75251045e+01 -1.33377829e+01 -1.88240013e+01
-1.68392487e+01]
[ 7.36066699e-01 -3.97309035e-01 3.82082105e-01 -4.82245922e-01
-1.08697920e+01 -2.12837963e+01 -1.57861137e+01 -1.22496452e+01
-1.57928638e+01 -1.18547354e+01 -1.80118999e+01 -7.86523342e+00
-1.56991911e+01 -1.01874857e+01 -1.54575453e+01 -2.39257183e+01
-1.58380346e+01 -1.81504345e+01 -1.34132137e+01 -8.03548622e+00
-1.13906355e+01 -1.92472000e+01 -1.61356659e+01 -2.55897179e+01
-1.63397522e+01 1.27788866e+00 1.47072300e-01 -1.28866732e+00
-1.59696829e+00 -1.16580629e+01 -2.02393227e+01 -1.35793419e+01
-1.12705021e+01 -1.47425270e+01 -1.25504894e+01 -1.83755665e+01
-7.93362045e+00 -1.18984699e+01 -9.53976822e+00 -1.38737183e+01
-1.44854469e+01 -1.36983585e+01 -1.51859636e+01 -1.39583693e+01
-7.84845734e+00 -8.99971676e+00 -1.61694489e+01 -1.35863504e+01
-1.87846413e+01 -1.51690617e+01 -2.49404103e-01 7.78339028e-01
-1.56386018e+00 -1.99905097e+00 -1.37179251e+01 -2.25832596e+01
-1.66161022e+01 -1.74464626e+01 -1.75472069e+01 -1.98302021e+01
-2.10891953e+01 -1.31870728e+01 -1.92208481e+01 -1.34429426e+01
-1.86840858e+01 -1.41525555e+01 -1.94452248e+01 -1.92487373e+01
-1.68426666e+01 -1.22350378e+01 -1.52343149e+01 -2.06815968e+01
-1.60188885e+01 -2.32096100e+01 -2.02435265e+01 -4.40291494e-01
9.96976972e-01 -1.84825552e+00 -2.43193603e+00 -1.60602741e+01
-2.19036083e+01 -1.68400517e+01 -1.44771357e+01 -1.67081776e+01
-1.65251980e+01 -1.87576752e+01 -1.10540638e+01 -1.53755598e+01
-1.19390879e+01 -1.68008995e+01 -1.63949070e+01 -1.50700588e+01
-1.70634537e+01 -1.89655972e+01 -8.94600296e+00 -1.35194263e+01
-1.92807941e+01 -1.87041073e+01 -2.21856689e+01 -1.78996506e+01
-1.16609700e-01 8.56797874e-01 -1.91140997e+00 -2.10396862e+00
-1.01619406e+01 -2.68737984e+01 -2.20997658e+01 -2.17673626e+01
-2.05160961e+01 -2.39515285e+01 -2.49530716e+01 -1.63933392e+01
-2.10805988e+01 -1.75326271e+01 -2.07077827e+01 -2.00489578e+01
-1.99268951e+01 -2.12444305e+01 -2.24091759e+01 -1.62087784e+01
-1.91759224e+01 -2.42828751e+01 -1.97305717e+01 -2.62558804e+01
-2.35940552e+01]
[ 1.64787069e-01 -1.68441325e-01 4.64288980e-01 -7.47514069e-01
-1.13719883e+01 -2.10964813e+01 -1.58254576e+01 -1.20305328e+01
-1.56141787e+01 -1.24658909e+01 -1.81972294e+01 -8.23118114e+00
-1.59399385e+01 -1.12004852e+01 -1.55562868e+01 -2.40003681e+01
-1.60093613e+01 -1.83026943e+01 -1.34068794e+01 -6.93028069e+00
-1.03299189e+01 -1.94805012e+01 -1.63801708e+01 -2.56201134e+01
-1.63976402e+01 1.10720372e+00 1.62255913e-01 -1.05587530e+00
-1.77236927e+00 -1.20724707e+01 -2.02502041e+01 -1.33943348e+01
-1.08683929e+01 -1.41024590e+01 -1.30132847e+01 -1.84846783e+01
-7.94123554e+00 -1.22969742e+01 -1.03535299e+01 -1.35685444e+01
-1.45834017e+01 -1.35028725e+01 -1.57420397e+01 -1.32303410e+01
-6.45265770e+00 -7.93988371e+00 -1.61845226e+01 -1.36617155e+01
-1.87895412e+01 -1.55598927e+01 -1.80259988e-01 1.03606677e+00
-1.41069508e+00 -2.04342628e+00 -1.45336590e+01 -2.16282444e+01
-1.64379387e+01 -1.69099007e+01 -1.71524391e+01 -2.03189278e+01
-2.16217175e+01 -1.30156775e+01 -1.94296246e+01 -1.43709736e+01
-1.84245052e+01 -1.49390011e+01 -1.87584820e+01 -1.90721817e+01
-1.62022648e+01 -1.08158903e+01 -1.40190802e+01 -2.06927853e+01
-1.63408413e+01 -2.31514969e+01 -2.11648083e+01 -5.68280160e-01
1.25197387e+00 -1.61449420e+00 -2.57362318e+00 -1.61436348e+01
-2.13765507e+01 -1.67169514e+01 -1.40701723e+01 -1.61057911e+01
-1.66000061e+01 -1.96425610e+01 -1.07086115e+01 -1.61478901e+01
-1.29794235e+01 -1.68822880e+01 -1.73082714e+01 -1.47676716e+01
-1.75105286e+01 -1.80940990e+01 -7.51458597e+00 -1.21162443e+01
-1.94071865e+01 -1.87942944e+01 -2.22906494e+01 -1.84283085e+01
1.27930269e-01 1.27133667e+00 -1.74558055e+00 -2.15854144e+00
-1.09730215e+01 -2.62110558e+01 -2.23466148e+01 -2.14312553e+01
-1.95770702e+01 -2.44276924e+01 -2.58058815e+01 -1.61417542e+01
-2.14367542e+01 -1.83109913e+01 -2.04648132e+01 -2.08876133e+01
-1.97048931e+01 -2.15082779e+01 -2.18297424e+01 -1.52893324e+01
-1.85209503e+01 -2.46299267e+01 -1.97363987e+01 -2.58215981e+01
-2.42690868e+01]
.
.
.
[-4.79533195e-01 -7.68981576e-01 -1.77157074e-01 -2.00561881e-01
-6.15729952e+00 -1.10808945e+01 -8.79792976e+00 -9.85613537e+00
-1.03026905e+01 -6.87720442e+00 -1.21334057e+01 -7.71150637e+00
-1.17531910e+01 -6.80997133e+00 -1.25995235e+01 -1.25998411e+01
-1.17608614e+01 -1.11489801e+01 -9.56524754e+00 -5.58242607e+00
-8.39977741e+00 -1.11510391e+01 -1.21788979e+01 -1.27471313e+01
-9.78714943e+00 1.04039955e+00 -1.47011960e+00 -1.81243956e+00
-1.05356848e+00 -7.45137024e+00 -1.10531626e+01 -7.84224939e+00
-9.17456150e+00 -8.58650589e+00 -5.74499321e+00 -1.13209743e+01
-8.41722298e+00 -1.01030149e+01 -6.03559208e+00 -1.16964493e+01
-1.01863642e+01 -1.12248173e+01 -1.11838684e+01 -8.50443459e+00
-5.21056652e+00 -8.14196777e+00 -1.00152636e+01 -1.10866699e+01
-1.11966391e+01 -9.41972065e+00 -1.03180420e+00 -1.99524832e+00
-1.77945817e+00 -1.55166841e+00 -8.39315605e+00 -1.05041189e+01
-8.82065392e+00 -9.86864853e+00 -9.29331779e+00 -7.26610661e+00
-1.02731133e+01 -8.88734341e+00 -1.01116686e+01 -7.06797981e+00
-1.14280262e+01 -1.06731510e+01 -1.14312458e+01 -1.05220804e+01
-9.01323318e+00 -5.39441586e+00 -8.52228928e+00 -9.87894917e+00
-1.08900566e+01 -1.09518299e+01 -9.72581577e+00 -2.16618586e+00
-1.49259973e+00 -1.68929064e+00 -1.36120069e+00 -8.58770275e+00
-1.00534191e+01 -9.97209167e+00 -1.05166273e+01 -9.29751587e+00
-7.26231098e+00 -1.13336391e+01 -9.37137890e+00 -1.04662256e+01
-7.50347424e+00 -1.16567268e+01 -1.01384783e+01 -1.12277622e+01
-1.08337603e+01 -9.39500999e+00 -6.51295757e+00 -9.02511883e+00
-1.06069736e+01 -1.13605204e+01 -1.08889675e+01 -9.81694508e+00
-1.06967080e+00 -1.58631933e+00 -1.36870170e+00 -1.17178059e+00
-7.71994257e+00 -9.72991562e+00 -9.93029594e+00 -1.08334541e+01
-9.51464462e+00 -9.31674480e+00 -1.08087626e+01 -9.48193550e+00
-9.81091881e+00 -8.34595966e+00 -1.10064735e+01 -9.91674614e+00
-1.18921404e+01 -1.08112535e+01 -9.73410416e+00 -7.14041519e+00
-8.62492752e+00 -1.10488844e+01 -1.03341990e+01 -1.04487247e+01
-1.07167931e+01]]]
I tried the following, which reads something out, but the ordering is totally wrong:
net_out = np.fromfile(filename, np.float32, count=13*13*125).reshape(13, 13, 125)
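np.fromfile with dtype np.float32 expects raw binary data, so it will misread a text dump like the one above. Here is a hedged sketch of one way to recover the array from exactly this kind of printed text; the regex simply pulls out every float literal in order, so it assumes the file contains nothing but the array:
import re
import numpy as np

with open(filename) as f:   # filename is assumed to point at the text dump shown above
    text = f.read()

# Extract every float literal (including scientific notation) in order, then reshape.
values = re.findall(r'[-+]?\d+\.?\d*(?:[eE][-+]?\d+)?', text)
net_out = np.array(values, dtype=np.float32).reshape(13, 13, 125)

# If you control the program that writes the file, np.save / np.load avoids the parsing entirely:
# np.save('net_out.npy', net_out)   ...   net_out = np.load('net_out.npy')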

Data analysis-Data cleaning

In the process of doing data fitting, I got 300 data points, but the spread between them is too big; I want 30 values in the end. Is there any method for this? The following is a simple summary of these 300 data points. I have 300 numbers and have been searching for a long time on the net, but to no avail. Please help, or try to give some ideas on how to achieve this:
mean 159.4604181
std 14.08847471
min 125.115372
25% 149.8761805
50% 159.13718
75% 168.4387325
max 206.164806
my data
179.810669 175.676689 184.907814 185.026245 176.046089 180.488594 162.373795 142.625153156.730606 171.185516182.506538 171.279892 157.094376 175.523239 138.663803 154.037315 146.47934 166.05497 154.187614 160.262428165.520088 159.510735 168.477242 155.714188 159.907532 154.23027 148.736519 168.688332 160.950378 150.446991154.586983 169.485733 179.650826177.456154 164.688042 158.481346 157.529823 177.155075 195.534782 185.3591 166.674973 169.619331 163.814583 171.758568 178.962212 166.57563 168.160606 175.033401 166.368645 155.475143 164.136368 129.815918 171.380379 139.883484 155.503197 144.567993 148.340202 146.3424 159.094955 138.338203 140.137421 171.319687 153.106209 165.617142 165.526573 162.551674 149.410728 169.361961 169.951523 167.23304 154.108025 147.144096 146.922226 143.736519 174.33152 178.938164 174.354897 168.984444 169.659035 159.179405 164.230553 174.717148 168.7015 168.840828 165.545853 149.285713 149.012978 156.370644 171.694534 160.497284 152.712379 162.705696 150.347458 173.261192 147.494514 175.424751 178.045708 171.828514 165.673157 160.281517 159.184944 149.384384 156.638893 173.703499 162.53994 154.150589 161.535164 162.097717 166.458252 152.737953 152.43277 164.732224 161.109459 161.410538 151.811913 144.878899 151.292107 174.201546 179.860199 156.777939 157.412254 128.193626 134.273026 165.721443 151.169197 146.080826 158.473038 156.739098 164.361488 165.708656 168.370171 131.646339 138.349174 140.834816 140.425423 141.339821 153.862488 132.983589 143.058212 157.537552 140.514938 165.444321 164.64386 152.068802 164.700546 157.977196 161.475334 152.682129 159.353088 181.302414 177.709511 174.072334 161.575821 153.542702 167.670769 177.191048 161.461784 163.927948 166.825462 147.836052 155.826508 165.665619 147.922226 150.064796 142.898941 154.046188 176.772972 162.681747 133.64357 153.721153 183.379425 170.639091 174.363014 152.349274 171.839104 154.853706 160.088364 152.339348 147.243919 167.363869 163.696533 169.445465 149.822212 188.664185 149.297455 149.38324 162.856377 167.585457 149.559311 142.964882 129.458916 160.373032 162.140373 155.924423 155.679741 154.898079 139.470657 155.763046 156.382004 149.876038 134.911156 134.708656 157.063934 143.389061 138.854088 146.451187 143.490692 170.697395 165.124054 171.297455 148.049622 156.399467 149.876228 162.300697 149.619713 143.149063 148.450333 155.154694 168.817543 169.427711 161.739861 149.329041 137.551292 162.045517 155.713234 167.576202 147.337914 156.440407 169.3153 170.732796 158.153931 148.641853 160.371506 152.678215 151.62735 144.289818 150.412636 135.979866 158.555397 144.935394 152.542038 131.922989 125.718201 160.387344 150.361771 159.63842 150.993797 141.396507 161.954765 125.115372 138.610939 156.862579 155.175972 146.390205 185.087791 164.873558 160.305084 194.200592 168.425896 168.09037 147.101822 161.535454 151.740341 162.898369 169.418549 143.773643 164.353752 175.510162 206.164806 198.906193 204.26844 173.567131 177.433823 187.072243 187.453857 180.494034 202.663326 198.192097 145.617523 148.858864 174.640137 159.404617 153.754082 150.244583 133.0271 156.841972 152.707985 172.945221 155.878998 150.713615 162.749596 151.863152 153.257278 133.326752 147.612083 151.167763 158.199638 161.290581 149.243629 155.955055
If you are using sklearn, you can use StandardScaler.
Transform only the data where you think the difference is too big:
from sklearn.preprocessing import StandardScaler
X_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X_train)
X_test = X_scaler.transform(X_test)
FYI: http://scikit-learn.org/stable/auto_examples/applications/plot_prediction_latency.html#sphx-glr-auto-examples-applications-plot-prediction-latency-py
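As a minimal sketch of applying this to a single column of values like the 300 numbers above (the short values array is just a stand-in):
import numpy as np
from sklearn.preprocessing import StandardScaler

values = np.array([179.810669, 175.676689, 184.907814, 162.373795, 142.625153])  # stand-in for the 300 numbers
scaler = StandardScaler()
scaled = scaler.fit_transform(values.reshape(-1, 1))  # StandardScaler expects a 2-D (n_samples, n_features) array
print(scaled.ravel())  # the same column rescaled to zero mean and unit variance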

Tensorflow Object Detection Jupyter Notebook no detection

I tried to run the Jupyter Notebook example for TensorFlow object detection (tutorial), but there are no detections. I printed the scores and it seems to work, but the results are very bad. Does anyone have an idea what I might have done wrong?
print(scores[0]):
[ 0.03587309 0.02224856 0.01864638 0.01096715 0.0100315 0.0065446
0.00633551 0.00534311 0.00495995 0.00410238 0.00362363 0.00339175
0.00308251 0.0030337 0.00293387 0.00277085 0.00269581 0.00266825
0.00263924 0.00263331 0.00258721 0.00240822 0.00225823 0.00186966
0.00184308 0.00180467 0.00177474 0.00173643 0.0017281 0.00171935
0.00171891 0.00170284 0.00163754 0.00162967 0.00160267 0.00156545
0.00153614 0.00140936 0.00132406 0.00131524 0.00131041 0.00129431
0.00125819 0.0012553 0.00122365 0.00119179 0.00115673 0.00115186
0.00112368 0.00107096 0.00105803 0.00104337 0.00102719 0.00102337
0.00100349 0.00097767 0.0009685 0.00092741 0.00088506 0.00087696
0.0008734 0.00084825 0.00084135 0.00083512 0.00083396 0.00082068
0.00080583 0.00078979 0.00078059 0.00077475 0.00075449 0.00074426
0.00074421 0.00070195 0.00068741 0.00068138 0.00067261 0.00067125
0.00067032 0.00066041 0.0006473 0.00064205 0.00061964 0.00061793
0.00060834 0.00060468 0.00059547 0.00059478 0.00059461 0.00059436
0.00059426 0.00059411 0.00059406 0.00059392 0.00059365 0.00059351
0.00059191 0.00058798 0.00058682 0.00058148]
[ 0.01044157 0.00982138 0.00942336 0.00846517 0.00613665 0.00398568
0.00357755 0.00300539 0.00255862 0.00236576 0.00232631 0.00220291
0.00185227 0.00163544 0.00159791 0.00145071 0.0014366 0.0014137
0.00122685 0.00118978 0.00108457 0.00104252 0.00099215 0.00096401
0.0008708 0.00084774 0.00080484 0.00078507 0.00078379 0.00076875
0.00072774 0.00071732 0.00071343 0.00070812 0.00069253 0.0006762
0.00067269 0.00059905 0.00059367 0.000588 0.00056114 0.0005504
0.00051472 0.00051055 0.00050973 0.00048484 0.00047297 0.00046204
0.00044787 0.00043259 0.00042987 0.00042673 0.00041978 0.00040494
0.00040087 0.00039576 0.00039059 0.00037274 0.00036828 0.00036417
0.0003612 0.00034645 0.00034479 0.00034078 0.00033771 0.00033605
0.0003333 0.0003304 0.0003294 0.00032325 0.00031787 0.00031773
0.00031748 0.00031741 0.00031732 0.00031729 0.00031724 0.00031722
0.00031717 0.00031708 0.00031702 0.00031579 0.00030416 0.00030222
0.00029739 0.00029726 0.00028289 0.00026527 0.00026325 0.00024584
0.00024221 0.00024156 0.0002391 0.00023335 0.00021617 0.0002001
0.00019127 0.00018342 0.00017271 0.00015507]
I'm running the example with TensorFlow 1.4 and Python 3.5, and I tested the installation as suggested.
I had the same issue. I found in a post that you have to change:
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_08'
To:
MODEL_NAME = 'ssd_mobilenet_v1_coco_11_06_2017'
and it worked fine.
Original answer: https://stackoverflow.com/a/47332228/8954260
