PAYG tokens - error messages (on both Disco and Stable Diffusion) - python

I signed up for some pay-as-you-go credit and, to my dismay, now receive this error on both Disco AND Stable Diffusion:
FileNotFoundError Traceback (most recent call last)
<ipython-input-4-191981527364> in <module>
38 import py3d_tools as p3d
39
---> 40 from helpers import DepthModel, sampler_fn
41 from k_diffusion.external import CompVisDenoiser
42 from ldm.util import instantiate_from_config
[4 intermediate frames hidden]
/content/MiDaS/midas/backbones/next_vit.py in <module>
6 from .utils import activations, forward_default, get_activation
7
----> 8 file = open("./externals/Next_ViT/classification/nextvit.py", "r")
9 source_code = file.read().replace(" utils", " externals.Next_ViT.classification.utils")
10 exec(source_code)
FileNotFoundError: [Errno 2] No such file or directory: './externals/Next_ViT/classification/nextvit.py'
I also receive this error on Stable Diffusion:
NameError Traceback (most recent call last)
<ipython-input-5-d64464a7a6a5> in <module>
154 if load_on_run_all and ckpt_valid:
155 local_config = OmegaConf.load(f"{ckpt_config_path}")
--> 156 model = load_model_from_config(local_config, f"{ckpt_path}", half_precision=half_precision)
157 device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
158 model = model.to(device)
<ipython-input-5-d64464a7a6a5> in load_model_from_config(config, ckpt, verbose, device, half_precision)
136 print(f"Global Step: {pl_sd['global_step']}")
137 sd = pl_sd["state_dict"]
--> 138 model = instantiate_from_config(config.model)
139 m, u = model.load_state_dict(sd, strict=False)
140 if len(m) > 0 and verbose:
Despite clearing my Google Drive and reloading Disco and Stable Diffusion (including the .ckpt file, placed correctly in the models folder as before), the same errors occur.
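One diagnostic worth running before the failing import: the traceback shows MiDaS opening './externals/Next_ViT/classification/nextvit.py' with a path relative to the current working directory, so both the cloned externals folder and the working directory matter. A minimal sketch of such a check, with the /content/MiDaS path taken from the traceback above rather than from any particular notebook:
import os

midas_root = '/content/MiDaS'  # path from the traceback; adjust if your notebook clones MiDaS elsewhere
missing = os.path.join(midas_root, 'externals', 'Next_ViT', 'classification', 'nextvit.py')

# next_vit.py opens the file with a relative path, so the working directory matters too
print('cwd:', os.getcwd())
print('file exists:', os.path.exists(missing))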

Related

Error with unknown path in a Jupyter notebook

I am running some code in a Jupyter notebook on a virtual machine (Ubuntu). Normally, when there is an error, the traceback points to the file where the error occurred and gives its path, but I got an error with a path that does not exist on my machine:
ValueError Traceback (most recent call last)
/tmp/ipykernel_84/10827517.py in <module>
----> 1 vae_cl.train(max_epochs=500, plan_kwargs=dict(weight_decay=0.0))
/tmp/ipykernel_84/1274271952.py in train(self, max_epochs, n_samples_per_label, check_val_every_n_epoch, train_size, validation_size, batch_size, use_gpu, plan_kwargs, **trainer_kwargs)
187 use_gpu=use_gpu,
188 check_val_every_n_epoch=check_val_every_n_epoch,
--> 189 **trainer_kwargs,
190 )
191 return runner()
/tmp/ipykernel_84/2912366039.py in __init__(self, model, training_plan, data_splitter, max_epochs, use_gpu, **trainer_kwargs)
59 self.data_splitter = data_splitter
60 self.model = model
---> 61 gpus, device = parse_use_gpu_arg(use_gpu)
62 self.gpus = gpus
63 self.device = device
ValueError: too many values to unpack (expected 2)
What I noticed is that the line it points to (---> 61) is actually in the file I am running, but I do not understand why the path looks like '/tmp/ipykernel_84/1274271952.py'.
Any idea?
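A note on both parts of this: in recent ipykernel versions, the source of each executed cell is written to a temporary file under /tmp/ipykernel_<pid>/, which is why the traceback shows paths like '/tmp/ipykernel_84/1274271952.py' even though the code lives in the notebook. The ValueError itself is plain tuple unpacking; a minimal sketch that reproduces it, where the stub below is hypothetical and only stands in for whatever `parse_use_gpu_arg` actually returns:
def parse_use_gpu_arg_stub(use_gpu):
    # hypothetical stand-in: returns three values where the caller unpacks two
    return 'gpu', 1, 'cuda:0'

gpus, device = parse_use_gpu_arg_stub(True)
# ValueError: too many values to unpack (expected 2)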

UPDATED - Further complications - ImportError: `load_weights` requires h5py

I'm working with a previously saved model; however, when I try to load it, I receive the following error message:
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-11-f8d353bad309> in <module>
----> 1 estimator, corpus_encoder, label_encoder, tokenizer = load_model(MODEL_PATH, loss=weighted_binary_crossentropy)
<ipython-input-6-03f944f0a51f> in load_model(model_base_dir, loss, optimizer, metrics)
9 json_file.close()
10 model = model_from_json(loaded_model_json)
---> 11 model.load_weights(os.path.join(MODEL_PATH, 'model.h5'))
12 model.compile(loss=loss, optimizer=optimizer, metrics=metrics)
13 # loading context_encoder
~\anaconda3\envs\davis_env\lib\site-packages\keras\engine\network.py in load_weights(self, filepath, by_name, skip_mismatch, reshape)
1154 """
1155 if h5py is None:
-> 1156 raise ImportError('`load_weights` requires h5py.')
1157 with h5py.File(filepath, mode='r') as f:
1158 if 'layer_names' not in f.attrs and 'model_weights' in f:
ImportError: `load_weights` requires h5py.
I have since installed Cython and h5py as described in the answers found here and here.
Are there any other solutions?
I'm using TensorFlow version 1.12.0 and Keras version 2.2.4.
UPDATE:
When simply trying to import h5py (via `import h5py`), I receive the following error message:
ImportError Traceback (most recent call last)
<ipython-input-15-c9f0b8c65221> in <module>
----> 1 import h5py
~\anaconda3\envs\davis_env\lib\site-packages\h5py\__init__.py in <module>
32 raise
33
---> 34 from . import version
35
36 if version.hdf5_version_tuple != version.hdf5_built_version_tuple:
~\anaconda3\envs\davis_env\lib\site-packages\h5py\version.py in <module>
15
16 from collections import namedtuple
---> 17 from . import h5 as _h5
18 import sys
19 import numpy
h5py\h5.pyx in init h5py.h5()
ImportError: DLL load failed: The specified procedure could not be found.
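For what it's worth, a "DLL load failed" on import usually points at mismatched HDF5 binaries rather than at Keras itself. After reinstalling h5py into the davis_env environment (for example with conda, which ships a matching HDF5; that suggestion is an assumption, not something from the question), a quick sanity check once the import succeeds is:
import h5py

# prints the HDF5 version h5py was built against and related build info;
# a mismatch with the runtime HDF5 is the usual culprit behind DLL errors
print(h5py.version.info)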

File not found: archive/constants.pkl (Python, PyTorch)

I'm getting an error while preparing the list of operators of my serialized TorchScript model. What is the issue here? Does it come from loading the model?
Python code
# Dump list of operators used by MobileNetV2:
import torch, yaml
root = '/content/drive/My Drive/Monitoring/'
model = torch.jit.load(root+'model.pt')
ops = torch.jit.export_opnames(model)
with open('MobileNetV2.yaml', 'w') as output:
    yaml.dump(ops, output)
Stack Trace
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-39-8b61c35fb898> in <module>()
3
4 root = '/content/drive/My Drive/Monitoring/'
----> 5 model = torch.jit.load(root+'model.pt')
6 ops = torch.jit.export_opnames(model)
7 with open('MobileNetV2.yaml', 'w') as output:
/usr/local/lib/python3.6/dist-packages/torch/jit/_serialization.py in load(f, map_location, _extra_files)
159 cu = torch._C.CompilationUnit()
160 if isinstance(f, str) or isinstance(f, pathlib.Path):
--> 161 cpp_module = torch._C.import_ir_module(cu, f, map_location, _extra_files)
162 else:
163 cpp_module = torch._C.import_ir_module_from_buffer(
RuntimeError: [enforce fail at inline_container.cc:222] . file not found: archive/constants.pkl
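The "file not found: archive/constants.pkl" message comes out of torch.jit.load, which expects an archive written by torch.jit.save (or ScriptModule.save); a checkpoint written with plain torch.save does not have that layout, and that is the usual cause of this error. A minimal sketch of producing a loadable model.pt, assuming the model is torchvision's MobileNetV2 (the question only names MobileNetV2, so this is an assumption):
import torch
import torchvision

# Trace the model and save it with the TorchScript serializer so the archive
# contains constants.pkl; a plain torch.save() checkpoint will not.
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
traced.save('/content/drive/My Drive/Monitoring/model.pt')  # path from the question

# The original snippet should then work:
reloaded = torch.jit.load('/content/drive/My Drive/Monitoring/model.pt')
ops = torch.jit.export_opnames(reloaded)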

NameError: name 'exit' is not defined - TFNet/darkflow custom dataset object detection

I'm trying to train my custom dataset by creating a yolov3.cfg file and a yolov3.weights file with labelled annotations and images using darkflow. However, when I try to run tfnet = TFNet(history), it throws an "exit not defined" error.
I have installed darkflow by the following steps:
In Anaconda Prompt:
git clone https://github.com/thtrieu/darkflow.git
cd darkflow
python3 setup.py build_ext --inplace
pip install
then:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import cv2
from darkflow.net.build import TFNet
history = {"model": "C:/Users/Business Intelli/Desktop/Object-Detection/Dataset/yolov3.cfg",
           "load": "C:/Users/Business Intelli/Desktop/Object-Detection/Dataset/yolov3.weights",
           "batch": 8,
           "epoch": 50,
           "gpu": 1.0,
           "train": True,
           "annotation": "C:/Users/Business Intelli/Desktop/Object-Detection/Dataser/Stumps",
           "dataset": "C:/Users/Business Intelli/Desktop/Object-Detection/Dataser/Stumps"}
tfnet = TFNet(history)
Parsing C:/Users/Business Intelli/Desktop/Object-Detection/Dataset/yolov3.cfg
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-10-6f6b945047c5> in <module>
----> 1 tfnet = TFNet(history)
~\Anaconda3\lib\site-packages\darkflow\net\build.py in __init__(self, FLAGS, darknet)
56
57 if darknet is None:
---> 58 darknet = Darknet(FLAGS)
59 self.ntrain = len(darknet.layers)
60
~\Anaconda3\lib\site-packages\darkflow\dark\darknet.py in __init__(self, FLAGS)
15
16 print('Parsing {}'.format(self.src_cfg))
---> 17 src_parsed = self.parse_cfg(self.src_cfg, FLAGS)
18 self.src_meta, self.src_layers = src_parsed
19
~\Anaconda3\lib\site-packages\darkflow\dark\darknet.py in parse_cfg(self, model, FLAGS)
66 cfg_layers = cfg_yielder(*args)
67 meta = dict(); layers = list()
---> 68 for i, info in enumerate(cfg_layers):
69 if i == 0: meta = info; continue
70 else: new = create_darkop(*info)
~\Anaconda3\lib\site-packages\darkflow\utils\process.py in cfg_yielder(model, binary)
314 #-----------------------------------------------------
315 else:
--> 316 exit('Layer {} not implemented'.format(d['type']))
317
318 d['_size'] = list([h, w, c, l, flat])
NameError: name 'exit' is not defined
I faced the same problem, and the issue is with the process.py file in the darkflow/utils folder.
Apparently exit() is not available as a built-in there, so you have to add this line to process.py:
from sys import exit
Note: if your code is reaching this point, it means the model can't read the layers. The weights file I downloaded for yolov3 gave me the same trouble, and I couldn't find a link to a proper yolov3 weights file that works in darkflow, so I had to stick to yolo.cfg and yolo.weights.
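As a concrete sketch, the change in darkflow/utils/process.py amounts to one extra import near the top of the file; the commented lines below are reproduced from the traceback above, not from a verified copy of the file:
from sys import exit  # added: makes exit() available outside the interactive shell

# ... later, inside cfg_yielder(), the branch from the traceback then works:
#     else:
#         exit('Layer {} not implemented'.format(d['type']))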

Theano step gives File in wrong format error

I am trying to replicate the Getting Started with PyMC3 example on Windows 2012 R2.
The step
from pymc3 import Model, Normal, HalfNormal
gives
D:\Anaconda3\libs/python35.lib: error adding symbols: File in wrong format
collect2.exe: error: ld returned 1 exit status
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
D:\Anaconda3\lib\site-packages\theano\gof\lazylinker_c.py in <module>()
64 if version != getattr(lazylinker_ext, '_version', None):
---> 65 raise ImportError()
66 except ImportError:
ImportError:
During handling of the above exception, another exception occurred:
ImportError Traceback (most recent call last)
D:\Anaconda3\lib\site-packages\theano\gof\lazylinker_c.py in <module>()
81 if version != getattr(lazylinker_ext, '_version', None):
---> 82 raise ImportError()
83 except ImportError:
ImportError:
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
<ipython-input-4-582c90a634a6> in <module>()
----> 1 from pymc3 import Model, Normal, HalfNormal
D:\Anaconda3\lib\site-packages\pymc3\__init__.py in <module>()
1 __version__ = "3.0"
2
----> 3 from .core import *
4 from .distributions import *
5 from .math import *
D:\Anaconda3\lib\site-packages\pymc3\core.py in <module>()
1 from .vartypes import *
----> 2 from .model import *
3 from .theanof import *
4 from .blocking import *
5 import numpy as np
D:\Anaconda3\lib\site-packages\pymc3\model.py in <module>()
1 from .vartypes import *
2
----> 3 from theano import theano, tensor as t, function
4 from theano.tensor.var import TensorVariable
5
D:\Anaconda3\lib\site-packages\theano\__init__.py in <module>()
53 object2, utils
54
---> 55 from theano.compile import \
56 SymbolicInput, In, \
57 SymbolicOutput, Out, \
D:\Anaconda3\lib\site-packages\theano\compile\__init__.py in <module>()
7 SpecifyShape, specify_shape, register_specify_shape_c_code)
8
----> 9 from theano.compile.function_module import *
10
11 from theano.compile.mode import *
D:\Anaconda3\lib\site-packages\theano\compile\function_module.py in <module>()
16 from theano import gof
17 from theano.compat.python2x import partial
---> 18 import theano.compile.mode
19 from theano.compile.io import (
20 In, SymbolicInput, SymbolicInputKit, SymbolicOutput)
D:\Anaconda3\lib\site-packages\theano\compile\mode.py in <module>()
9 import theano
10 from theano import gof
---> 11 import theano.gof.vm
12 from theano.configparser import config, AddConfigVar, StrParam
13 from theano.compile.ops import register_view_op_c_code, _output_guard
D:\Anaconda3\lib\site-packages\theano\gof\vm.py in <module>()
566
567 try:
--> 568 from . import lazylinker_c
569
570 class CVM(lazylinker_c.CLazyLinker, VM):
D:\Anaconda3\lib\site-packages\theano\gof\lazylinker_c.py in <module>()
115 args = cmodule.GCC_compiler.compile_args()
116 cmodule.GCC_compiler.compile_str(dirname, code, location=loc,
--> 117 preargs=args)
118 # Save version into the __init__.py file.
119 init_py = os.path.join(loc, '__init__.py')
D:\Anaconda3\lib\site-packages\theano\gof\cmodule.py in compile_str(module_name, src_code, location, include_dirs, lib_dirs, libs, preargs, py_module)
2008 # difficult to read.
2009 raise Exception('Compilation failed (return status=%s): %s' %
-> 2010 (status, compile_stderr.replace('\n', '. ')))
2011 elif config.cmodule.compilation_warning and compile_stderr:
2012 # Print errors just below the command line.
For gcc I have:
C:\>gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=C:/TDM-GCC-64/bin/../libexec/gcc/x86_64-w64-mingw32/5.1.0/lt
o-wrapper.exe
Target: x86_64-w64-mingw32
Configured with: ../../../src/gcc-5.1.0/configure --build=x86_64-w64-mingw32 --e
nable-targets=all --enable-languages=ada,c,c++,fortran,lto,objc,obj-c++ --enable
-libgomp --enable-lto --enable-graphite --enable-cxx-flags=-DWINPTHREAD_STATIC -
-disable-build-with-cxx --disable-build-poststage1-with-cxx --enable-libstdcxx-d
ebug --enable-threads=posix --enable-version-specific-runtime-libs --enable-full
y-dynamic-string --enable-libstdcxx-threads --enable-libstdcxx-time --with-gnu-l
d --disable-werror --disable-nls --disable-win32-registry --prefix=/mingw64tdm -
-with-local-prefix=/mingw64tdm --with-pkgversion=tdm64-1 --with-bugurl=http://td
m-gcc.tdragon.net/bugs
Thread model: posix
gcc version 5.1.0 (tdm64-1)
What am I missing?
Thanks
Whatever the issue was, creating a Python 3.4 environment together with
conda install mingw libpython
solved it.
What solved my issue was installing the Python 2.7 version of Anaconda.
I got the same error using IPython. Using plain python I imported theano without a problem, and subsequent uses with IPython worked fine after that first compilation with plain python.
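One more diagnostic that may help here: "File in wrong format" from ld usually means the linker was handed a library of the wrong architecture (for example a 32-bit toolchain against the 64-bit python35.lib). Once one of the fixes above lets theano import at all, a minimal sanity-check sketch is:
import theano

print(theano.__version__)
print(theano.config.cxx)  # the C++ compiler Theano will invoke when building modules such as lazylinker_c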
