OSError: [WinError 126] The specified module could not be found? - python

Firstly, sorry, my grammar might be bad. And if I missed an existing solution, you can give me a URL, but I did not find a proper one for this case.
I'm taking a course about deep learning, and they provided us with this code. However, there is an error that only I seem to encounter. This is main.py:
from __future__ import print_function
import os
import torch
import torch.multiprocessing as mp
from envs import create_atari_env
from model import ActorCritic
from train import train
from testt import test
import my_optim
# Gathering all the parameters (that we can modify to explore)
class Params():
    def __init__(self):
        self.lr = 0.0001
        self.gamma = 0.99
        self.tau = 1.
        self.seed = 1
        self.num_processes = 16
        self.num_steps = 20
        self.max_episode_length = 10000
        self.env_name = 'Breakout-v0'
# Main run
os.environ['OMP_NUM_THREADS'] = '1'
params = Params()
torch.manual_seed(params.seed)
env = create_atari_env(params.env_name)
shared_model = ActorCritic(env.observation_space.shape[0], env.action_space)
shared_model.share_memory()
optimizer = my_optim.SharedAdam(shared_model.parameters(), lr=params.lr)
optimizer.share_memory()
processes = []
p = mp.Process(target=test, args=(params.num_processes, params, shared_model))
p.start()
processes.append(p)
for rank in range(0, params.num_processes):
    p = mp.Process(target=train, args=(rank, params, shared_model, optimizer))
    p.start()
    processes.append(p)
for p in processes:
    p.join()
Is it caused by "from envs import create_atari_env"? The error appeared right after I installed it. Also, I do not use any explicit paths in the scripts.
The error:
****\lib\ctypes\__init__.py, line 426, in LoadLibrary
    self._handle = _dlopen(self._name, mode)
OSError: [WinError 126] The specified module could not be found

This error is often related to paths, which for example require the correct use of double backslashes, forward slashes, or raw strings, but since you mentioned this wasn't your case, I believe the root cause of this error is
import torch
There are known cases of PyTorch breaking DLL loading on specific Python and PyTorch versions (PyTorch 1.5.0 on Python 3.7 specifically). Once you run import torch, any further DLL loads will fail. So if you're using PyTorch and loading your own DLLs, you'll have to rearrange your code to load all DLLs first and then import torch.
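As an illustration of that workaround, here is a minimal sketch (the DLL name and path below are hypothetical placeholders, not something from the question):

import ctypes

# Load any native DLLs *before* importing torch, since on the affected
# Python/PyTorch combinations importing torch can break later DLL loads.
my_lib = ctypes.CDLL(r"C:\path\to\my_native_lib.dll")  # hypothetical DLL

# Only import torch (and everything that pulls it in) afterwards.
import torch
import torch.multiprocessing as mp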

Related

Getting "NameError: np is not defined", but np is defined both in main and in imported module

I wrote a module for my dataset classes, located in the same directory as my main file.
When I import it and try to instantiate a class, I get a "np is not defined" error, even though numpy is imported correctly in both the main file and the module.
I say correctly because if I call another numpy function from the main file, or run the module on its own, no error is raised.
This is the code in the main file:
import torch
import numpy as np
from myDatasets import SFCDataset
trainDs = SFCDataset(paths["train"])
and this is the module:
import torch
import numpy as np
from torch.utils.data import Dataset
#Single Fragment Classification Dataset
class SFCDataset(Dataset):
    def __init__(self, path, transform=None, norms=False, areas=False, **kwargs):
        super(SFCDataset, self).__init__()
        self.data = torch.load(path)
        self.fragments = []
        self.labels = []
        for item in self.data:
            self.fragments.append(item[0])
            self.labels.append(item[1])
        self.fragments = np.array(self.fragments)
        self.labels = np.array(self.labels)
        if norms:
            if areas:
                self.fragments = np.transpose(self.fragments[:], (0, 2, 1, 3)).reshape(-1, 1024, 9)[:, :, :7]
            else:
                self.fragments = np.transpose(self.fragments[:], (0, 2, 1, 3)).reshape(-1, 1024, 9)[:, :, :6]
        else:
            if areas:
                self.fragments = np.transpose(self.fragments[:], (0, 2, 1, 3)).reshape(-1, 1024, 9)[:, :, [0, 1, 2, 6]]
            else:
                self.fragments = np.transpose(self.fragments[:], (0, 2, 1, 3)).reshape(-1, 1024, 9)[:, :, :3]
        self.transform = transform

    def __len__(self) -> int:
        return len(self.data)

    def __getitem__(self, index):
        label = self.labels[index]
        pointdata = self.fragments[index]
        if self.transform:
            pointdata = self.transform(pointdata)
        return pointdata, label

if __name__ == "__main__":
    print("huh")
    path = "C:\\Users\\ale23\\OneDrive\\Desktop\\Università\\Tesi\\data\\dataset_1024_AB\\train_dataset_AED_norm_area.pt"
    SFCDataset(path)
I don't know what else to try.
I'm on VSCode, using a 3.9.13 virtual environment.
Edit: this is the error I'm getting:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Cell In[75], line 5
2 import numpy as np
3 from myDatasets import SFCDataset
----> 5 trainDs = SFCDataset(paths["train"])
File c:\Users\ale23\OneDrive\Desktop\Università\Dottorato\EquivariantNN\myDatasets.py:17, in SFCDataset.__init__(self, path, transform, norms, areas, **kwargs)
15 self.fragments.append(item[0])
16 self.labels.append(item[1])
---> 17 self.fragments=np.array(self.fragments)
18 self.labels=np.array(self.labels)
20 if norms:
NameError: name 'np' is not defined
Edit 2: I edited some stuff, like path names and parts occurring after the error, to try to lighten up the code; sorry, I should have uploaded all the code as it is run now.
Edit 3: I was trying to reproduce the error in some other way and was building a dummy module. I tried importing the class in that other module and running dummy.py, and it runs. It appears to be a problem with the fact that I'm working in a notebook; is that possible?
the dummy module:
import numpy as np

def test():
    print(np.array(1))
    print(np.array(2))
    print(np.sum(np.random.rand(2)))

from myDatasets import SFCDataset
trainDs = SFCDataset("C:\\Users\\ale23\\OneDrive\\Desktop\\Università\\Tesi\\data\\dataset_1024_AB\\train_dataset_AED_norm_area.pt")
This runs by calling "python testmodule.py" in the console.
Edit 4: Today I restarted my PC and ran the same code as yesterday, and the notebook works. Yesterday I tried closing and restarting VSCode, but it did not help.
Maybe something is wrong with the virtual environment? Honestly, I don't know where to look.
Anyway, the program now runs with no errors; should I close this?
Thank you all for your time and help
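(A side note: if the notebook kernel had simply cached an older version of myDatasets, reloading the module, rather than restarting the machine, would also pick up the edits. A minimal sketch, assuming the notebook and myDatasets.py live in the same folder:)

import importlib
import myDatasets

# Force the running kernel to re-read myDatasets.py; a stale cached module
# in a long-lived notebook kernel would explain an error that disappears
# only after a restart.
importlib.reload(myDatasets)
from myDatasets import SFCDataset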

ImportError: cannot import 'rendering' from 'gym.envs.classic_control'

I'm working with RL agents and was trying to replicate the findings of this paper, wherein they make a custom parkour environment based on OpenAI Gym; however, when trying to render this environment I run into:
import numpy as np
import time
import gym
import TeachMyAgent.environments
env = gym.make('parametric-continuous-parkour-v0', agent_body_type='fish', movable_creepers=True)
env.set_environment(input_vector=np.zeros(3), water_level = 0.1)
env.reset()
while True:
    _, _, d, _ = env.step(env.action_space.sample())
    env.render(mode='human')
    time.sleep(0.1)
c:\users\manu dwivedi\teachmyagent\TeachMyAgent\environments\envs\parametric_continuous_parkour.py in render(self, mode, draw_lidars)
462
463 def render(self, mode='human', draw_lidars=True):
--> 464 from gym.envs.classic_control import rendering
465 if self.viewer is None:
466 self.viewer = rendering.Viewer(RENDERING_VIEWER_W, RENDERING_VIEWER_H)
ImportError: cannot import name 'rendering' from 'gym.envs.classic_control' (C:\ProgramData\Anaconda3\envs\teachagent\lib\site-packages\gym\envs\classic_control\__init__.py)
[1]: https://github.com/flowersteam/TeachMyAgent
I thought this might be a problem with this custom environment and how the authors decided to render it; however, when I try just
from gym.envs.classic_control import rendering
I run into the same error. GitHub users here suggested this can be solved by adding render_mode='human' when calling gym.make(), but that seems to apply only to their specific case.
I got it to work (with help from a fellow student) by downgrading the gym package to 0.21.0.
Performed the command pip install gym==0.21.0 for this.
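If you would rather stay on a newer gym release, the render_mode approach mentioned above looks roughly like this for the built-in environments (a sketch only; whether the custom parkour environment supports it depends on how TeachMyAgent implements its render method):

import gym

# In newer gym versions (>= 0.26) rendering is configured at creation time,
# and gym.envs.classic_control.rendering no longer exists.
env = gym.make("CartPole-v1", render_mode="human")

obs, info = env.reset()
for _ in range(200):
    obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    if terminated or truncated:
        obs, info = env.reset()
env.close()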
Update, from Github issue:
Based on https://github.com/openai/gym/issues/2779
This seems to be a problem with gymgrid; there is an open PR: wsgdrfz/gymgrid#1
If you want to use the latest version of gym, you can try the branch of that PR (https://github.com/CedricHermansBIT/gymgrid2); you can install it with pip install gymgrid2

Importing any function from an R package into python

While using the rpy2 library of Python to work with R, I get the following error message when trying to import a function of the bnlearn package:
# Using R inside python
import rpy2
import rpy2.robjects as robjects
import rpy2.robjects.packages as rpackages
from rpy2.robjects.vectors import StrVector
from rpy2.robjects.packages import importr
utils = rpackages.importr('utils')
utils.chooseCRANmirror(ind=1)
# Install packages
packnames = ('visNetwork', 'bnlearn')
utils.install_packages(StrVector(packnames))
# Load packages
visNetwork = importr('visNetwork')
bnlearn = importr('bnlearn')
tabu = bnlearn.tabu
fit = bnlearn.bn.fit
With the error:
AttributeError: module 'bnlearn' has no attribute 'bn'
Checking the bnlearn documentation, one finds that bn is a class structure. So one should check all the attributes of the object in question, that is, run:
bnlearn.__dict__['_rpy2r']
After that you should get output similar to the following, which shows how each attribute of bnlearn can be imported:
...
...
'bn_boot': 'bn.boot',
'bn_cv': 'bn.cv',
'bn_cv_algorithm': 'bn.cv.algorithm',
'bn_cv_structure': 'bn.cv.structure',
'bn_fit': 'bn.fit',
'bn_fit_backend': 'bn.fit.backend',
'bn_fit_backend_continuous': 'bn.fit.backend.continuous',
'bn_fit_backend_discrete': 'bn.fit.backend.discrete',
'bn_fit_backend_mixedcg': 'bn.fit.backend.mixedcg',
'bn_fit_barchart': 'bn.fit.barchart',
'bn_fit_dotplot': 'bn.fit.dotplot',
...
...
Then, running the following will solve the issue:
bn_fit = bnlearn.bn_fit
Now you could, for example, fit a Bayesian network:
structure = tabu(datos, score = "loglik-g")
bn_mod = bn_fit(structure, data = datos, method = "mle")
In general, this approach solves the issue of importing any function from an R package into Python through the rpy2 package.
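Putting the lookup together, a minimal sketch (assuming bnlearn is already installed in the R library that rpy2 uses) that prints the Python-side name for every R function starting with bn.fit:

from rpy2.robjects.packages import importr

bnlearn = importr('bnlearn')

# importr translates the dots in R names to underscores; the mapping
# it used is stored in the package object's _rpy2r dictionary.
for py_name, r_name in bnlearn.__dict__['_rpy2r'].items():
    if r_name.startswith('bn.fit'):
        print(py_name, '->', r_name)

bn_fit = bnlearn.bn_fit  # corresponds to R's bn.fit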

An attempt has been made to start a new process before the current process has finished its bootstrapping phase

I am new to dask and I found it really nice to have a module that makes it easy to get parallelization. I am working on a project where I was able to parallelize a loop on a single machine, as you can see here. However, I would like to move over to dask.distributed. I applied the following changes to the class above:
diff --git a/mlchem/fingerprints/gaussian.py b/mlchem/fingerprints/gaussian.py
index ce6a72b..89f8638 100644
--- a/mlchem/fingerprints/gaussian.py
+++ b/mlchem/fingerprints/gaussian.py
@@ -6,7 +6,7 @@ from sklearn.externals import joblib
 from .cutoff import Cosine
 from collections import OrderedDict
 import dask
-import dask.multiprocessing
+from dask.distributed import Client
 import time
@@ -141,13 +141,14 @@ class Gaussian(object):
         for image in images.items():
             computations.append(self.fingerprints_per_image(image))
+        client = Client()
         if self.scaler is None:
-            feature_space = dask.compute(*computations, scheduler='processes',
+            feature_space = dask.compute(*computations, scheduler='distributed',
                                          num_workers=self.cores)
             feature_space = OrderedDict(feature_space)
         else:
             stacked_features = dask.compute(*computations,
-                                            scheduler='processes',
+                                            scheduler='distributed',
                                             num_workers=self.cores)
             stacked_features = numpy.array(stacked_features)
Doing so generates this error:
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/spawn.py", line 136, in _check_not_importing_main
is not going to be frozen to produce an executable.''')
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
I have tried different ways of adding if __name__ == '__main__': without any success. This can be reproduced by running this example. I would appreciate it if anyone could help me figure this out; I have no clue how I should change my code to make it work.
Thanks.
Edit: The example is cu_training.py.
The Client command starts up new processes, so it will have to be within the if __name__ == '__main__': block, as described in this SO question or this GitHub issue.
This is the same as with the multiprocessing module.
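As a minimal sketch of that pattern (the function name below is a placeholder, not the original project's API):

import dask
from dask.distributed import Client

def fingerprint(x):
    # Placeholder for the real per-item computation.
    return x * x

def main():
    # Creating the Client spawns worker processes, so it must only happen
    # when this file runs as the main module, not when the workers
    # re-import it while bootstrapping.
    client = Client()
    computations = [dask.delayed(fingerprint)(i) for i in range(10)]
    results = dask.compute(*computations)  # uses the active distributed Client
    print(results)
    client.close()

if __name__ == '__main__':
    main()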
I faced several issues in my code even after including the if __name__ == '__main__': guard.
I am using several Python files and modules, and multiprocessing is used by only one function for some sampling operation. The only fix that worked for me was to put the guard at the very first line of the entry-point file and indent everything under it (even the imports). The following worked well:
if __name__ == '__main__':
    from mjrl.utils.gym_env import GymEnv
    from mjrl.policies.gaussian_mlp import MLP
    from mjrl.baselines.quadratic_baseline import QuadraticBaseline
    from mjrl.baselines.mlp_baseline import MLPBaseline
    from mjrl.algos.npg_cg import NPG
    from mjrl.algos.dapg import DAPG
    from mjrl.algos.behavior_cloning import BC
    from mjrl.utils.train_agent import train_agent
    from mjrl.samplers.core import sample_paths
    import os
    import json
    import mjrl.envs
    import mj_envs
    import time as timer
    import pickle
    import argparse
    import numpy as np

    # ===============================================================================
    # Get command line arguments
    # ===============================================================================
    parser = argparse.ArgumentParser(description='Policy gradient algorithms with demonstration data.')
    parser.add_argument('--output', type=str, required=True, help='location to store results')
    parser.add_argument('--config', type=str, required=True, help='path to config file with exp params')
    args = parser.parse_args()
    JOB_DIR = args.output
    if not os.path.exists(JOB_DIR):
        os.mkdir(JOB_DIR)
    with open(args.config, 'r') as f:
        job_data = eval(f.read())
    assert 'algorithm' in job_data.keys()
    assert any([job_data['algorithm'] == a for a in ['NPG', 'BCRL', 'DAPG']])
    job_data['lam_0'] = 0.0 if 'lam_0' not in job_data.keys() else job_data['lam_0']
    job_data['lam_1'] = 0.0 if 'lam_1' not in job_data.keys() else job_data['lam_1']
    EXP_FILE = JOB_DIR + '/job_config.json'
    with open(EXP_FILE, 'w') as f:
        json.dump(job_data, f, indent=4)

    # ===============================================================================
    # Train Loop
    # ===============================================================================
    e = GymEnv(job_data['env'])
    policy = MLP(e.spec, hidden_sizes=job_data['policy_size'], seed=job_data['seed'])
    baseline = MLPBaseline(e.spec, reg_coef=1e-3, batch_size=job_data['vf_batch_size'],
                           epochs=job_data['vf_epochs'], learn_rate=job_data['vf_learn_rate'])

Can't import package file (no module named...) (Python)

I am receiving this error while trying to run the code (from CMD):
ModuleNotFoundError: No module named 'numbers.hog'; numbers is not a package
Here is the hog.py file code...
from skimage import feature
class HOG:
    def __init__(self, orientations=9, pixelsPerCell=(8, 8),
                 cellsPerBlock=(3, 3), normalize=False):
        self.orienations = orientations
        self.pixelsPerCell = pixelsPerCell
        self.cellsPerBlock = cellsPerBlock
        self.normalize = normalize

    def describe(self, image):
        hist = feature.hog(image,
                           orientations=self.orienations,
                           pixels_per_cell=self.pixelsPerCell,
                           cells_per_block=self.cellsPerBlock,
                           normalize=self.normalize)
        return hist
...and here is the main file (train.py), which returns the error.
from sklearn.svm import LinearSVC
from numbers.hog import HOG
from numbers import dataset
import argparse
import pickle as cPickle
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", required=True,
                help="path to the dataset file")
ap.add_argument("-m", "--model", required=True,
                help="path to where the model will be stored")
args = vars(ap.parse_args())

(digits, target) = dataset.load_digits(args["dataset"])
data = []

hog = HOG(orientations=18, pixelsPerCell=(10, 10),
          cellsPerBlock=(1, 1), normalize=True)

for image in digits:
    image = dataset.deskew(image, 20)
    image = dataset.center_extent(image, (20, 20))
    hist = hog.describe(image)
    data.append(hist)

model = LinearSVC(random_state=42)
model.fit(data, target)

f = open(args["model"], "w")
f.write(cPickle.dumps(model))
f.close()
I don't understand why it gives me an error about the module/package. numbers is a package, so why doesn't it import it (as it seems it should)?
UPDATE: I tried to put from .hog import HOG and then execute from CMD. It prints:
No module named '__main__.hog'; '__main__' is not a package
Is that crazy? hog.py is in the main package together with the other files. As you can see, it also contains the HOG class... I can't understand it. Can someone reproduce the error?
In the IDE console it prints:
usage: train.py [-h] -d DATASET -m MODEL
train.py: error: the following arguments are required: -d/--dataset, -m/--model
That is expected when it is executed in the IDE, because the program MUST be run from CMD (with the required arguments).
UPDATE 2: for who is interested, this is the project https://github.com/VAUTPL/Number_Detection
Change from numbers.hog import HOG to from hog import HOG and change from numbers import dataset to import dataset.
You are already inside the "numbers" package, so you don't have to specify it again when you import.
When you type from numbers import dataset, Python will look for a package numbers (inside the current package) that contains a dataset.py file.
If your train.py were outside the numbers package, then you would have to put the package name (numbers) before it.
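As an illustration, assuming a layout roughly like the hypothetical one sketched below, running python train.py from inside the numbers directory makes the sibling modules importable directly:

# Hypothetical layout:
#
#   Number_Detection/
#       numbers/
#           train.py      <- run as: python train.py (from inside numbers/)
#           hog.py
#           dataset.py
#
# Inside train.py the sibling modules are imported directly:
from hog import HOG
import dataset

hog = HOG(orientations=18, pixelsPerCell=(10, 10),
          cellsPerBlock=(1, 1), normalize=True)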
Important
numbers is a Python standard library module:
https://docs.python.org/2/library/numbers.html
Check that you are not actually importing that standard module, or rename your package to a more specific name (a quick check is sketched below).
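A quick way to see which numbers actually gets imported (a sketch):

import numbers

# If this prints a path inside your Python installation's standard library,
# the stdlib module is the one being picked up rather than your package.
print(numbers.__file__)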
Also:
It might look like Python doesn't recognize your package.
Open a Python shell and write:
import sys
print(sys.path)
Check whether your numbers path is there.
If it's not there, you have to add it:
sys.path.insert(0, "/path/to/your/package_or_module")
Your train.py file is already in the package "numbers", so you don't have to import numbers.
Try this instead:
from hog import HOG
I saw in a comment that it gives you an "error (red line)".
Can you be more precise? I don't see any errors there.
