I am running Python 3.7 on Ubuntu 18.04 Linux under Eclipse 4.8 and PyDev.
The declaration:
args:Dict[str: Optional[Any]] = {}
is in a module that is imported by my testing code, and it is flagged with the following error from typing.py:
TypeError: Parameters to generic types must be types. Got slice(<class 'str'>, typing.Union[typing.Any, NoneType], None).

The stack trace follows:

Finding files... done.
Importing test modules ... Traceback (most recent call last):
  File "/Data/WiseOldBird/Eclipse/pool/plugins/org.python.pydev.core_7.0.3.201811082356/pysrc/_pydev_runfiles/pydev_runfiles.py", line 468, in __get_module_from_str
    mod = __import__(modname)
  File "/Data/WiseOldBird/Workspaces/WikimediaAccess/WikimediaLoader/Tests/wiseoldbird/loaders/TestWikimediaLoader.py", line 10, in <module>
    from wiseoldbird.application_controller import main
  File "/Data/WiseOldBird/Workspaces/WikimediaAccess/WikimediaLoader/src/wiseoldbird/application_controller.py", line 36, in <module>
    args:Dict[str: Optional[Any]] = {}
  File "/usr/local/lib/python3.7/typing.py", line 251, in inner
    return func(*args, **kwds)
  File "/usr/local/lib/python3.7/typing.py", line 626, in __getitem__
    params = tuple(_type_check(p, msg) for p in params)
  File "/usr/local/lib/python3.7/typing.py", line 626, in <genexpr>
    params = tuple(_type_check(p, msg) for p in params)
  File "/usr/local/lib/python3.7/typing.py", line 139, in _type_check
    raise TypeError(f"{msg} Got {arg!r:.100}.")
TypeError: Parameters to generic types must be types. Got slice(<class 'str'>, typing.Union[typing.Any, NoneType], None).
This prevents my testing module from being imported.
What am I doing wrong?
The proper syntax for a dict's type is
Dict[str, Optional[Any]]
When you write [a: b], Python interprets this as a slice, i.e. the thing that makes taking parts of arrays work, like a[1:10]. You can see this in the error message: Got slice(<class 'str'>, typing.Union[typing.Any, NoneType], None).
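For example, the corrected declaration from the question (a minimal sketch; Dict, Optional, and Any all come from the typing module):

from typing import Any, Dict, Optional

args: Dict[str, Optional[Any]] = {}

With the comma in place, the module imports cleanly.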
OmegaConf allows you to register a custom resolver. Here is an example of resolving a tuple.
def resolve_tuple(*args):
    return tuple(args)

OmegaConf.register_new_resolver("tuple", resolve_tuple)
This can be used to resolve a value in a config file with a structure like ${tuple:1,2} to a tuple (1, 2). Along with hydra.utils.instantiate this can be used to create objects that contain or utilize tuples. For example:
config.yaml
obj:
  tuple: ${tuple:1,2}
test.py
import hydra
import hydra.utils as hu
from omegaconf import OmegaConf

def resolve_tuple(*args):
    return tuple(args)

OmegaConf.register_new_resolver('tuple', resolve_tuple)

@hydra.main(config_path='conf', config_name='config')
def main(cfg):
    obj = hu.instantiate(cfg.obj, _convert_='partial')
    print(obj)

if __name__ == '__main__':
    main()
Running this example returns:
$ python test.py
{'tuple': (1, 2)}
However, imagine you had a much more complex config structure. You may want to use interpolation to bring in configs from other files like so.
tuple/base.yaml
tuple: ${tuple:1,2}
config.yaml
defaults:
- tuple: base
- _self_
obj:
  tuple: ${tuple}
Running this example you get an error:
$ python test.py
Error executing job with overrides: []
Traceback (most recent call last):
File "test.py", line 16, in main
obj = hu.instantiate(cfg.obj, _convert_='partial')
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 175, in instantiate
OmegaConf.resolve(config)
omegaconf.errors.UnsupportedValueType: Value 'tuple' is not a supported primitive type
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
The full traceback from hydra is:
Error executing job with overrides: []
Traceback (most recent call last):
File "test.py", line 21, in <module>
main()
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/main.py", line 52, in decorated_main
config_name=config_name,
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/utils.py", line 378, in _run_hydra
lambda: hydra.run(
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
raise ex
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
return func()
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/utils.py", line 381, in <lambda>
overrides=args.overrides,
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/hydra.py", line 111, in run
_ = ret.return_value
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/core/utils.py", line 233, in return_value
raise self._return_value
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/core/utils.py", line 160, in run_job
ret.return_value = task_function(task_cfg)
File "test.py", line 17, in main
model = hu.instantiate(cfg.obj, _convert_='partial')
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/hydra/_internal/instantiate/_instantiate2.py", line 175, in instantiate
OmegaConf.resolve(config)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/omegaconf.py", line 792, in resolve
omegaconf._impl._resolve(cfg)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/_impl.py", line 40, in _resolve
_resolve_container_value(cfg, k)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/_impl.py", line 19, in _resolve_container_value
_resolve(resolved)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/_impl.py", line 40, in _resolve
_resolve_container_value(cfg, k)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/_impl.py", line 23, in _resolve_container_value
node._set_value(resolved._value())
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/nodes.py", line 44, in _set_value
self._val = self.validate_and_convert(value)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/nodes.py", line 57, in validate_and_convert
return self._validate_and_convert_impl(value)
File "/Users/me/anaconda3/envs/my_env/lib/python3.7/site-packages/omegaconf/nodes.py", line 134, in _validate_and_convert_impl
f"Value '{t.__name__}' is not a supported primitive type"
omegaconf.errors.UnsupportedValueType: Value 'tuple' is not a supported primitive type
If you really dig around in the omegaconf code in the trace, you will find that there is a flag on the config object, allow_objects, that is True in the example that passes and None in the example that does not. What is interesting is that in the _instantiate2.py file, just before OmegaConf.resolve(config) is called, several flags are set, one of them being allow_objects as True.
Is the intended behavior for these interpolated/resolved values populated from separate files to override this flag? If so, is there some way to ensure that the allow_objects flag is (or remains) true for all resolved and interpolated values?
I think there is some confusion because you are using the word tuple for multiple different purposes :)
Here is an example that works for me:
# my_app.py
import hydra
import hydra.utils as hu
from omegaconf import OmegaConf

def resolve_tuple(*args):
    return tuple(args)

OmegaConf.register_new_resolver('as_tuple', resolve_tuple)

@hydra.main(config_path='conf', config_name='config')
def main(cfg):
    obj = hu.instantiate(cfg.obj, _convert_='partial')
    print(obj)

if __name__ == '__main__':
    main()
# conf/config.yaml
defaults:
- subdir: base
- _self_
obj:
  a_tuple: ${subdir.my_tuple}
# conf/subdir/base.yaml
my_tuple: ${as_tuple:1,2}
At the command line:
$ python my_app.py
{'a_tuple': (1, 2)}
The main difference here is that we've got a_tuple: ${subdir.my_tuple} instead of a_tuple: ${my_tuple}.
Notes:
Tuples may be supported by OmegaConf as a first-class type at some point in the future. Here's the relevant issue: https://github.com/omry/omegaconf/issues/392
The allow_objects flag that you mentioned is undocumented and its behavior is subject to change.
I am trying to run a preprocessing pipeline using nipype and I get the following error message:
Traceback (most recent call last):
File "preprocscript.py", line 211, in <module>
preproc.run('MultiProc', plugin_args={'n_procs': 8})
File "/sw/anaconda/3/lib/python3.6/site-packages/nipype/pipeline/engine/workflows.py", line 579, in run
runner = plugin_mod(plugin_args=plugin_args)
File "/sw/anaconda/3/lib/python3.6/site-packages/nipype/pipeline/plugins/multiproc.py", line 162, in __init__
initargs=(self._cwd,)
File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 175, in __init__
self._repopulate_pool()
File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 236, in _repopulate_pool
self._wrap_exception)
File "/sw/anaconda/3/lib/python3.6/multiprocessing/pool.py", line 250, in _repopulate_pool_static
wrap_exception)
File "/sw/anaconda/3/lib/python3.6/multiprocessing/process.py", line 73, in __init__
assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now
and I am not sure what exactly might be wrong in my code that leads to this or if this is an issue with my software.
I am on a linux system and use python 3.6.
The module you are using has a ProcessPoolExecutor in it. Python 3.7 added some additional arguments to that class, namely initargs, which is what the nipype multiproc module is calling. Unfortunately this is not backwards compatible with 3.6, and nipype did not provide another way to use that module.
Your options are to upgrade or not use the multiprocessing portion of nipype.
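If upgrading is not an option, a minimal sketch of the second route, assuming preproc is the workflow object from your script, is to run the workflow with nipype's serial 'Linear' plugin instead of MultiProc:

# Run the workflow serially with the 'Linear' plugin, which avoids the
# multiprocessing-based MultiProc plugin entirely (at the cost of speed).
preproc.run('Linear')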
I'm running some Matlab code in parallel from inside a Python context (I know, but that's what's going on), and I'm hitting an import error involving matlab.double. The same code works fine in a multiprocessing.Pool, so I am having trouble figuring out what the problem is. Here's a minimal reproducing test case.
import matlab
from multiprocessing import Pool
from joblib import Parallel, delayed
# A global object that I would like to be available in the parallel subroutine
x = matlab.double([[0.0]])
def f(i):
    print(i, x)

with Pool(4) as p:
    p.map(f, range(10))
# This prints 1, [[0.0]]\n2, [[0.0]]\n... as expected

for _ in Parallel(4, backend='multiprocessing')(delayed(f)(i) for i in range(10)):
    pass
# This also prints 1, [[0.0]]\n2, [[0.0]]\n... as expected

# Now run with default `backend='loky'`
for _ in Parallel(4)(delayed(f)(i) for i in range(10)):
    pass
# ^ this crashes.
So, the only problematic one is the one using the 'loky' backend.
The full traceback is:
exception calling callback for <Future at 0x7f63b5a57358 state=finished raised BrokenProcessPool>
joblib.externals.loky.process_executor._RemoteTraceback:
'''
Traceback (most recent call last):
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py", line 391, in _process_worker
call_item = call_queue.get(block=True, timeout=timeout)
File "~/miniconda3/envs/myenv/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/mlarray.py", line 31, in <module>
from _internal.mlarray_sequence import _MLArrayMetaClass
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/_internal/mlarray_sequence.py", line 3, in <module>
from _internal.mlarray_utils import _get_strides, _get_size, \
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/_internal/mlarray_utils.py", line 4, in <module>
import matlab
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/__init__.py", line 24, in <module>
from mlarray import double, single, uint8, int8, uint16, \
ImportError: cannot import name 'double'
'''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/_base.py", line 625, in _invoke_callbacks
callback(self)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 309, in __call__
self.parallel.dispatch_next()
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 731, in dispatch_next
if not self.dispatch_one_batch(self._original_iterator):
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 759, in dispatch_one_batch
self._dispatch(tasks)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 716, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 510, in apply_async
future = self._workers.submit(SafeFunction(func))
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/reusable_executor.py", line 151, in submit
fn, *args, **kwargs)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py", line 1022, in submit
raise self._flags.broken
joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
joblib.externals.loky.process_executor._RemoteTraceback:
'''
Traceback (most recent call last):
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py", line 391, in _process_worker
call_item = call_queue.get(block=True, timeout=timeout)
File "~/miniconda3/envs/myenv/lib/python3.6/multiprocessing/queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/mlarray.py", line 31, in <module>
from _internal.mlarray_sequence import _MLArrayMetaClass
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/_internal/mlarray_sequence.py", line 3, in <module>
from _internal.mlarray_utils import _get_strides, _get_size, \
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/_internal/mlarray_utils.py", line 4, in <module>
import matlab
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/matlab/__init__.py", line 24, in <module>
from mlarray import double, single, uint8, int8, uint16, \
ImportError: cannot import name 'double'
'''
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "test.py", line 20, in <module>
for _ in Parallel(4)(delayed(f)(i) for i in range(10)):
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 934, in __call__
self.retrieve()
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 833, in retrieve
self._output.extend(job.get(timeout=self.timeout))
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 521, in wrap_future_result
return future.result(timeout=timeout)
File "~/miniconda3/envs/myenv/lib/python3.6/concurrent/futures/_base.py", line 432, in result
return self.__get_result()
File "~/miniconda3/envs/myenv/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
raise self._exception
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/_base.py", line 625, in _invoke_callbacks
callback(self)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 309, in __call__
self.parallel.dispatch_next()
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 731, in dispatch_next
if not self.dispatch_one_batch(self._original_iterator):
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 759, in dispatch_one_batch
self._dispatch(tasks)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/parallel.py", line 716, in _dispatch
job = self._backend.apply_async(batch, callback=cb)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/_parallel_backends.py", line 510, in apply_async
future = self._workers.submit(SafeFunction(func))
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/reusable_executor.py", line 151, in submit
fn, *args, **kwargs)
File "~/miniconda3/envs/myenv/lib/python3.6/site-packages/joblib/externals/loky/process_executor.py", line 1022, in submit
raise self._flags.broken
joblib.externals.loky.process_executor.BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
Looking at the traceback, it seems like the root cause is an issue importing the matlab package in the child process.
It's probably worth noting that this all runs just fine if instead I had defined x = np.array([[0.0]]) (after importing numpy as np). And of course the main process has no problem with any matlab imports, so I am not sure why the child process would.
I'm not sure if this error has anything in particular to do with the matlab package, or if it's something to do with global variables and cloudpickle or loky. In my application it would help to stick with loky, so I'd appreciate any insight!
I should also note that I'm using the official Matlab engine for Python: https://www.mathworks.com/help/matlab/matlab-engine-for-python.html. I suppose that might make it hard for others to try out the test cases, so I wish I could reproduce this error with a type other than matlab.double, but I haven't found another yet.
Digging around more, I've noticed that the process of importing the matlab package is more circular than I would expect, and I'm speculating that this could be part of the problem. When import matlab is run by loky's _ForkingPickler, first the file matlab/mlarray.py is imported, which imports some other files, one of which contains import matlab; this causes matlab/__init__.py to be run, which internally has from mlarray import double, single, uint8, ..., which is the line that causes the crash.
Could this circularity be the issue? If so, why can I import this module in the main process but not in the loky backend?
The error is caused by incorrect loading order of global objects in the child processes. It can be seen clearly in the traceback
_ForkingPickler.loads(res) -> ... -> import matlab -> from mlarray import ...
that matlab is not yet imported when the global variable x is loaded by cloudpickle.
joblib with loky seems to treat modules as normal global objects and send them dynamically to the child processes. joblib doesn't record the order in which those objects/modules were defined. Therefore they are loaded (initialized) in a random order in the child processes.
A simple workaround is to manually pickle the matlab object and load it after importing matlab inside your function.
import matlab
import pickle

# Serialize the matlab object in the parent process
px = pickle.dumps(matlab.double([[0.0]]))

def f(i):
    import matlab  # make sure matlab is fully imported before unpickling
    x = pickle.loads(px)
    print(i, x)
Of course you can also use joblib.dump and joblib.load to serialize the objects.
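For instance, a minimal sketch of the same workaround with joblib's serializers (note that joblib.dump and joblib.load work with filenames rather than byte strings; the filename x.joblib is arbitrary):

import joblib
import matlab

# Serialize the matlab object to a file in the parent process
joblib.dump(matlab.double([[0.0]]), 'x.joblib')

def f(i):
    import matlab  # make sure matlab is fully imported before deserializing
    x = joblib.load('x.joblib')
    print(i, x)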
Use initializer
Thanks to the suggestion of @Aaron, you can also use an initializer (for loky) to import matlab before loading x.
Currently there's no simple API to specify an initializer, so I wrote a simple function:
def with_initializer(self, f_init):
    # Overwrite initializer hook in the Loky ProcessPoolExecutor
    # https://github.com/tomMoral/loky/blob/f4739e123acb711781e46581d5ed31ed8201c7a9/loky/process_executor.py#L850
    hasattr(self._backend, '_workers') or self.__enter__()
    origin_init = self._backend._workers._initializer

    def new_init():
        origin_init()
        f_init()

    self._backend._workers._initializer = new_init if callable(origin_init) else f_init
    return self
It is a little bit hacky but works well with the current version of joblib and loky.
Then you can use it like:
import matlab
from joblib import Parallel, delayed

x = matlab.double([[0.0]])

def f(i):
    print(i, x)

def _init_matlab():
    import matlab

with Parallel(4) as p:
    for _ in with_initializer(p, _init_matlab)(delayed(f)(i) for i in range(10)):
        pass
I hope the developers of joblib will add an initializer argument to the constructor of Parallel in the future.
I am trying to run the program genipe to do some genome-wide survival analysis. I have installed genipe and all the relevant directories. However, when I go to run the program I get the error:
"TypeError: _ init _() got an unexpected keyword argument 'normalize'"
I haven't edited any of the genipe scripts and I have run genipe with no issues on a different server so I am not sure what is going wrong! Any help would be greatly appreciated.
Many thanks,
Caragh
Edit:
I am using python version 3.6.1
Traceback as follows:
Traceback (most recent call last):
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 965, in process_impute2_site
use_ml=site_info.use_ml,
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 1048, in fit_cox
cf = CoxPHFitter(alpha=0.95, tie_method="Efron", normalize=False)
TypeError: __init__() got an unexpected keyword argument 'normalize'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 811, in compute_statistics
for result in pool.map(process_impute2_site, sites_to_process):
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 608, in get
raise self._value
TypeError: __init__() got an unexpected keyword argument 'normalize'
[2017-05-31 14:18:53 ERROR] __init__() got an unexpected keyword argument 'normalize'
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 44, in mapstar
return list(map(*args))
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 965, in process_impute2_site
use_ml=site_info.use_ml,
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 1048, in fit_cox
cf = CoxPHFitter(alpha=0.95, tie_method="Efron", normalize=False)
TypeError: __init__() got an unexpected keyword argument 'normalize'
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/users/k1640238/miniconda/envs/genipe_pyvenv/bin/imputed-stats", line 11, in <module>
sys.exit(main())
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 161, in main
options=args,
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/site-packages/genipe/tools/imputed_stats.py", line 811, in compute_statistics
for result in pool.map(process_impute2_site, sites_to_process):
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 260, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/users/k1640238/miniconda/envs/genipe_pyvenv/lib/python3.6/multiprocessing/pool.py", line 608, in get
raise self._value
TypeError: __init__() got an unexpected keyword argument 'normalize'
Judging by the lifelines changelog, this keyword argument has been removed from this particular class. lifelines is the package that provides CoxPHFitter and is used by genipe.
You can either install a previous version of lifelines yourself and see if that helps, or wait for updates to the genipe library.
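A minimal sketch of the downgrade route (the version bound below is an assumption; check the lifelines changelog for the last release whose CoxPHFitter still accepted normalize):

# Hypothetical version bound -- verify against the lifelines changelog
pip install "lifelines<0.9"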
Looking at the further errors from your comments, it seems like this is the problematic code. You are trying to use dmatrices, but it seems it is not defined, probably because the mentioned try/except block couldn't find statsmodels installed, and therefore patsy wasn't imported either.
Try installing a few more packages manually, starting with
statsmodels
patsy
and see if you get any errors then...
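For example, with pip:

pip install statsmodels patsy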
See errata's answer above. I was using the wrong versions of some of the dependencies, but even with that the program was still giving me errors. However, when I reverted to Python 3.4, the program worked.
Currently I am reading the book Learning Selenium with Python and I'm having trouble running a suite. Below I will post my two test classes and my file that contains the suite.
searchproducts.py
https://gist.github.com/anonymous/0a054c6c8728d91f9ad8
homepagetest.py
https://gist.github.com/anonymous/5043f2432f2316345c3f
smoketest.py
https://gist.github.com/anonymous/8220d861fce77d0ea197
When I try to run the smoketest.py file, the following error is shown:
Traceback (most recent call last):
File "smoketests.py", line 12, in <module>
unittest.TextTestRunner(verbosity=2).run(smoke_tests)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib' /python2.7/unittest/runner.py", line 151, in run
test(result)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib /python2.7/unittest/suite.py", line 70, in __call__
return self.run(*args, **kwds)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/suite.py", line 108, in run
test(result)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/unittest/loader.py", line 50, in loadTestsFromTestCase
if issubclass(testCaseClass, suite.TestSuite):
TypeError: issubclass() arg 1 must be a class
I could not get loadTestsFromTestCase to work.
But this change works for me:
search_tests = unittest.TestLoader().loadTestsFromModule(SearchTests)
home_page_tests = unittest.TestLoader().loadTestsFromModule(HomePageTest)
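Putting it together, a minimal sketch of the full smoketest.py (the module names searchproducts and homepagetest are assumed from the gist filenames in the question, and loadTestsFromModule is given the modules themselves, which is what it expects):

# smoketest.py -- assemble the smoke suite via loadTestsFromModule
import unittest

import searchproducts   # assumed module containing the search tests
import homepagetest     # assumed module containing the home page tests

search_tests = unittest.TestLoader().loadTestsFromModule(searchproducts)
home_page_tests = unittest.TestLoader().loadTestsFromModule(homepagetest)
smoke_tests = unittest.TestSuite([search_tests, home_page_tests])

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(smoke_tests)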