Thanks in advance!
I am working on portfolio optimisation with PyPortfolioOpt.
I have the prices of my underlying assets from 2015-01-01 up to 2021-05-19.
The dataframe shape is [1666 rows x 20 columns].
I ran the following code:
from pypfopt import EfficientFrontier, expected_returns, risk_models

mu = expected_returns.mean_historical_return(df)
cov = risk_models.sample_cov(df)
print('Mean:\n' + str(mu))
print('Covariance:\n' + str(cov))
ef = EfficientFrontier(mu, cov)
weights = ef.max_sharpe()
cleaned_w = ef.clean_weights()
print(cleaned_w)
ef.portfolio_performance(verbose=True)
But it throws a workspace allocation error pointing to the line weights = ef.max_sharpe():
Traceback (most recent call last):
File "F:\Python projects\KS\Investment\Efficient frontier.py", line 34, in <module>
weights = ef.max_sharpe()
File "F:\Python projects\KS\lib\site-packages\pypfopt\efficient_frontier\efficient_frontier.py", line 278, in max_sharpe
self._solve_cvxpy_opt_problem()
File "F:\Python projects\KS\lib\site-packages\pypfopt\base_optimizer.py", line 239, in _solve_cvxpy_opt_problem
self._opt.solve(verbose=self._verbose, **self._solver_options)
File "F:\Python projects\KS\lib\site-packages\cvxpy\problems\problem.py", line 459, in solve
return solve_func(self, *args, **kwargs)
File "F:\Python projects\KS\lib\site-packages\cvxpy\problems\problem.py", line 947, in _solve
solution = solving_chain.solve_via_data(
File "F:\Python projects\KS\lib\site-packages\cvxpy\reductions\solvers\solving_chain.py", line 343, in solve_via_data
return self.solver.solve_via_data(data, warm_start, verbose,
File "F:\Python projects\KS\lib\site-packages\cvxpy\reductions\solvers\qp_solvers\osqp_qpif.py", line 103, in solve_via_data
solver.setup(P, q, A, lA, uA, verbose=verbose, **solver_opts)
File "F:\Python projects\KS\lib\site-packages\osqp\interface.py", line 37, in setup
self._model.setup(*unpacked_data, **settings)
ValueError: Workspace allocation error!
I tried changing the memory setting in PyCharm, but to no avail. Is memory the same as workspace allocation? Sorry for these fundamental questions...
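Would switching the underlying solver make any difference? If I understand the PyPortfolioOpt API correctly, a different cvxpy solver can be passed in, roughly like this (just a sketch; the solver name is only an example and I have not verified that it changes anything):

from pypfopt import EfficientFrontier

# Sketch: ask cvxpy to use a solver other than the default OSQP,
# in case the failure is specific to OSQP's workspace setup.
ef = EfficientFrontier(mu, cov, solver="ECOS")
weights = ef.max_sharpe()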
Cheers mate
I'm using the Python image_match library and need to use its search_image method, but when I call it I get the error below.
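For context, the call is set up roughly like this. This is only a sketch: the search_image call is taken from the traceback, but the SignatureES/Elasticsearch wiring shown here is an assumption based on the library's usual usage:

from elasticsearch import Elasticsearch
from image_match.elasticsearch_driver import SignatureES

es = Elasticsearch()   # local Elasticsearch instance (assumed)
ses = SignatureES(es)  # signature database backed by Elasticsearch
ses.search_image('https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg/687px-Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg')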
Traceback (most recent call last):
File "/var/www/html/Panel/test2.py", line 16, in <module>
ses.search_image('https://upload.wikimedia.org/wikipedia/commons/thumb/e/ec/Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg/687px-Mona_Lisa,_by_Leonardo_da_Vinci,_from_C2RMF_retouched.jpg')
File "/usr/local/lib/python3.10/site-packages/image_match/signature_database_base.py", line 268, in search_image
transformed_record = make_record(img, self.gis, self.k, self.N)
File "/usr/local/lib/python3.10/site-packages/image_match/signature_database_base.py", line 356, in make_record
signature = gis.generate_signature(path)
File "/usr/local/lib/python3.10/site-packages/image_match/goldberg.py", line 161, in generate_signature
im_array = self.preprocess_image(path_or_image, handle_mpo=self.handle_mpo, bytestream=bytestream)
File "/usr/local/lib/python3.10/site-packages/image_match/goldberg.py", line 257, in preprocess_image
return rgb2gray(image_or_path)
File "/usr/local/lib/python3.10/site-packages/skimage/_shared/utils.py", line 394, in fixed_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/skimage/color/colorconv.py", line 875, in rgb2gray
rgb = _prepare_colorarray(rgb)
File "/usr/local/lib/python3.10/site-packages/skimage/color/colorconv.py", line 140, in _prepare_colorarray
raise ValueError(msg)
ValueError: the input array must have size 3 along `channel_axis`, got (1024, 687)
Can you please help me?
How do I resume training from a saved snapshot in Chainer? I was trying to implement a DCGAN in Chainer using the following GitHub example:
https://github.com/chainer/chainer/blob/master/examples/dcgan/train_dcgan.py
When I pass the --resume parameter, I get a shape mismatch error in the network.
The Python code has an option to specify the snapshot from which training should resume. These snapshots are automatically saved to the result folder, which is also given as an argument in the code. So I tried to resume training from a saved snapshot with the command below.
$ python train.py --resume 'snapshot.npz'
where train.py is the code modified to add labels for the DCGAN on the CIFAR-10 dataset.
The error I get with the above command is:
chainer.utils.type_check.InvalidType:
Invalid operation is performed in: LinearFunction (Forward)
Expect: x.shape[1] == W.shape[1]
Actual: 110 != 100
When I run the Python file with the command below, there is no error:
$ python train.py
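For reference, the snapshot/resume wiring in my train.py follows the linked example, roughly like this (paraphrased; trainer, snapshot_interval and args come from the surrounding script, and names may differ slightly):

import chainer
from chainer.training import extensions

# Save trainer snapshots periodically, as in the original example.
trainer.extend(extensions.snapshot(filename='snapshot_iter_{.updater.iteration}.npz'),
               trigger=snapshot_interval)

# When --resume is given, load the whole trainer state (model, optimizers,
# updater iteration) back from the snapshot before trainer.run().
if args.resume:
    chainer.serializers.load_npz(args.resume, trainer)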
Complete error trace:
Exception in main training loop:
Invalid operation is performed in: LinearFunction (Forward)
Expect: x.shape[1] == W.shape[1]
Actual: 110 != 100
Traceback (most recent call last):
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/training/trainer.py", line 315, in run
update()
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/training/updaters/standard_updater.py", line 165, in update
self.update_core()
File "/home/964769/Lakshmi/DCGAN/updater_with_label.py", line 50, in update_core
x_fake = gen(z,labels)
File "/home/964769/Lakshmi/DCGAN/net_with_label.py", line 61, in __call__
h = F.reshape(F.relu(self.bn0(self.l0(F.concat((z,t),axis=1)))),
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/link.py", line 242, in __call__
out = forward(*args, **kwargs)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/links/connection/linear.py", line 138, in forward
return linear.linear(x, self.W, self.b, n_batch_axes=n_batch_axes)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/functions/connection/linear.py", line 288, in linear
y, = LinearFunction().apply(args)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/function_node.py", line 245, in apply
self.check_data_type_forward(in_data)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/function_node.py", line 330, in check_data_type_forward
self.check_type_forward(in_type)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/functions/connection/linear.py", line 27, in check_type_forward
x_type.shape[1] == w_type.shape[1],
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/utils/type_check.py", line 546, in expect
expr.expect()
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/utils/type_check.py", line 483, in expect
'{0} {1} {2}'.format(left, self.inv, right))
Will finalize trainer extensions and updater before reraising the exception.
Traceback (most recent call last):
File "train.py", line 140, in <module>
main()
File "train.py", line 135, in main
trainer.run()
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/training/trainer.py", line 329, in run
six.reraise(*sys.exc_info())
File "/home/964769/anaconda3/lib/python3.6/site-packages/six.py", line 686, in reraise
raise value
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/training/trainer.py", line 315, in run
update()
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/training/updaters/standard_updater.py", line 165, in update
self.update_core()
File "/home/964769/Lakshmi/DCGAN/updater_with_label.py", line 50, in update_core
x_fake = gen(z,labels)
File "/home/964769/Lakshmi/DCGAN/net_with_label.py", line 61, in __call__
h = F.reshape(F.relu(self.bn0(self.l0(F.concat((z,t),axis=1)))),
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/link.py", line 242, in __call__
out = forward(*args, **kwargs)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/links/connection/linear.py", line 138, in forward
return linear.linear(x, self.W, self.b, n_batch_axes=n_batch_axes)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/functions/connection/linear.py", line 288, in linear
y, = LinearFunction().apply(args)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/function_node.py", line 245, in apply
self.check_data_type_forward(in_data)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/function_node.py", line 330, in check_data_type_forward
self.check_type_forward(in_type)
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/functions/connection/linear.py", line 27, in check_type_forward
x_type.shape[1] == w_type.shape[1],
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/utils/type_check.py", line 546, in expect
expr.expect()
File "/home/964769/anaconda3/lib/python3.6/site-packages/chainer/utils/type_check.py", line 483, in expect
'{0} {1} {2}'.format(left, self.inv, right))
chainer.utils.type_check.InvalidType:
Invalid operation is performed in: LinearFunction (Forward)
Expect: x.shape[1] == W.shape[1]
Actual: 110 != 100
I have fitted a Random Forest classifier on my dataset containing 7 features and about 1 million rows.
Following is my code:
from sklearn.ensemble import RandomForestClassifier

randForestClassifier = RandomForestClassifier(n_estimators=10, max_depth=3)
randForestClassifier.fit(X_train, y)
pred = randForestClassifier.predict(featues_test)
I am getting a MemoryError when I use the predict method of my classifier. How do I fix it?
Following is my complete log:
randForestClassifier.predict(featues_test)
Traceback (most recent call last):
File "<ipython-input-15-0b7612d6e958>", line 1, in <module>
randForestClassifier.predict(featues_test)
File "C:\Python27\lib\site-packages\sklearn\ensemble\forest.py", line 462, in predict
proba = self.predict_proba(X)
File "C:\Python27\lib\site-packages\sklearn\ensemble\forest.py", line 513, in predict_proba
for e in self.estimators_)
File "C:\Python27\lib\site-packages\sklearn\externals\joblib\parallel.py", line 659, in __call__
self.dispatch(function, args, kwargs)
File "C:\Python27\lib\site-packages\sklearn\externals\joblib\parallel.py", line 406, in dispatch
job = ImmediateApply(func, args, kwargs)
File "C:\Python27\lib\site-packages\sklearn\externals\joblib\parallel.py", line 140, in __init__
self.results = func(*args, **kwargs)
File "C:\Python27\lib\site-packages\sklearn\ensemble\forest.py", line 106, in _parallel_helper
return getattr(obj, methodname)(*args, **kwargs)
File "C:\Python27\lib\site-packages\sklearn\tree\tree.py", line 592, in predict_proba
proba = self.tree_.predict(X)
File "sklearn/tree/_tree.pyx", line 3207, in sklearn.tree._tree.Tree.predict (sklearn\tree\_tree.c:24468)
File "sklearn/tree/_tree.pyx", line 3209, in sklearn.tree._tree.Tree.predict (sklearn\tree\_tree.c:24340)
MemoryError
Yes, you are getting the MemoryError at randForestClassifier.predict(featues_test), as shown by the stack trace:
File "<ipython-input-15-0b7612d6e958>", line 1, in <module>
randForestClassifier.predict(featues_test)
The remaining lines of the stack trace show that the problem comes from sklearn, in the compiled tree code: sklearn\tree\_tree.c:24340
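One thing that often reduces peak memory during prediction is to predict in batches instead of passing the whole test set at once. A minimal sketch, assuming featues_test is a NumPy array (the chunk count below is arbitrary):

import numpy as np

# Predict in chunks so the intermediate per-call arrays stay small,
# then stitch the partial predictions back together.
chunks = np.array_split(featues_test, 100)
pred = np.concatenate([randForestClassifier.predict(c) for c in chunks])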
I want to analyze a MongoDB log file and was referred to mtools, but I'm getting the following error.
I typed the following in the terminal:
mloginfo /..logfilelocation../ --queries
Error:
QUERIES [============================== ] 74.1 % Traceback (most recent call last):
File "/usr/local/bin/mloginfo", line 9, in <module>
load_entry_point('mtools==1.1.8', 'console_scripts', 'mloginfo')()
File "/usr/local/lib/python2.7/dist-packages/mtools/mloginfo/mloginfo.py", line 82, in main
tool.run()
File "/usr/local/lib/python2.7/dist-packages/mtools/mloginfo/mloginfo.py", line 77, in run
section.run()
File "/usr/local/lib/python2.7/dist-packages/mtools/mloginfo/sections/query_section.py", line 51, in run
for i, le in enumerate(logfile):
File "/usr/local/lib/python2.7/dist-packages/mtools/util/logfile.py", line 208, in __iter__
le = self.next()
File "/usr/local/lib/python2.7/dist-packages/mtools/util/logfile.py", line 190, in next
ret = le.set_datetime_hint(self._datetime_format, self._datetime_nextpos, self.year_rollover)
File "/usr/local/lib/python2.7/dist-packages/mtools/util/logevent.py", line 246, in set_datetime_hint
if not self.split_tokens[self._datetime_nextpos-1][0].isdigit():
IndexError: list index out of range
Thanks in advance.
I'm trying to perform factor analysis on a distance matrix (made of distances between about 1700 points, all ranging between 0.0 and 1.0, inclusive). I'm a total FA newbie.
Anyway, this code:
import mdp

fan = mdp.nodes.FANode()
far = fan.execute(a)
# a is a numpy.array, size 1780x1780
Gives me:
Traceback (most recent call last):
File "<pyshell#29>", line 1, in <module>
far=fan.execute(a)
File "/usr/lib/pymodules/python2.7/mdp/signal_node.py", line 575, in execute
self._pre_execution_checks(x)
File "/usr/lib/pymodules/python2.7/mdp/signal_node.py", line 451, in _pre_execution_checks
self._if_training_stop_training()
File "/usr/lib/pymodules/python2.7/mdp/signal_node.py", line 431, in _if_training_stop_training
self.stop_training()
File "/usr/lib/pymodules/python2.7/mdp/signal_node.py", line 556, in stop_training
self._train_seq[self._train_phase][1](*args, **kwargs)
File "/usr/lib/pymodules/python2.7/mdp/nodes/em_nodes.py", line 93, in _stop_training
A = normal(0., sqrt(scale/k), size=(d, k)).astype(typ)
File "mtrand.pyx", line 1279, in mtrand.RandomState.normal (numpy/random/mtrand/mtrand.c:6943)
ValueError: scale <= 0
I tried replacing 0 values with 0.00001, to no avail. Any idea what this might mean?