Updating to the latest Yocto version: sanity.bbclass Python issues

I have taken over a project that was using the Yocto Fido release from 2015, and I need to update it to the latest stable release, Thud.
I have cloned the poky (Thud) repository, cloned the latest versions of the layers our customized layer depends on (meta-openembedded, etc.), and added our customized layer back on top.
Now, I wasn't expecting this to build straight away without issues by any means, but I don't understand the errors below that I'm getting with the new layers. There are more errors like this relating to "not enough values", but one is posted below.
There is an interface issue in meta/classes/sanity.bbclass. I can't just revert to the older version of meta to solve this, and I don't think it makes sense to modify the code myself. Any ideas why this happens and how to solve it?
ERROR: Execution of event handler 'config_reparse_eventhandler' failed
Traceback (most recent call last):
File "/home/ubuntu/new-repo/poky-thud/build-bbgw/../meta/classes/sanity.bbclass", line 971, in config_reparse_eventhandler(e=<bb.event.ConfigParsed object at 0x7ff4103bf3c8>):
python config_reparse_eventhandler() {
>    sanity_check_conffiles(e.data)
}
File "/home/ubuntu/new-repo/poky-thud/build-bbgw/../meta/classes/sanity.bbclass", line 572, in sanity_check_conffiles(d=<bb.data_smart.DataSmart object at 0x7ff4108d35c0>):
for func in funcs:
>    conffile, current_version, required_version, func = func.split(":")
if check_conf_exists(conffile, d) and d.getVar(current_version) is not None and \
ValueError: not enough values to unpack (expected 4, got 1)
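For context, the failing loop in Thud's sanity.bbclass iterates over BBLAYERS_CONF_UPDATE_FUNCS and expects every entry to carry four colon-separated fields. A hedged illustration of that expected shape (the example entries mirror poky's defaults and are not taken from the failing build):

# Hedged illustration only: the 4-field format sanity_check_conffiles()
# expects for each BBLAYERS_CONF_UPDATE_FUNCS entry in Thud. An entry
# appended by an older layer in a shorter format produces exactly this
# "not enough values to unpack" error.
funcs = [
    "conf/local.conf:CONF_VERSION:LOCALCONF_VERSION:oecore_update_localconf",
    "conf/bblayers.conf:LCONF_VERSION:LAYER_CONF_VERSION:oecore_update_bblayers",
]
for func in funcs:
    conffile, current_version, required_version, update_func = func.split(":")
    print(conffile, current_version, required_version, update_func)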


pygit2 raises KeyError: 'the requested type does not match the type in the ODB'

I'm working on porting some Python 2 code to Python 3 in a single codebase. I'm using pygit2 0.28.2 on CPython 2.7 and pygit2 1.9.2 on CPython 3.10, at least for now.
I'm getting an error (-3) back from:
err = C.git_remote_push(self._remote, refspecs, opts)
...and payload.check_error(err) is mapping that to:
KeyError: 'the requested type does not match the type in the ODB'
That error only surfaces on cpython3.10, not cpython2.7.
I'm afraid I don't know what to make of the error. I googled for about 90 minutes, and didn't find much.
Here's the full traceback:
Traceback (most recent call last):
File "/app/shared/common/git/handlers.py", line 488, in Push
remote.push(temp3, callbacks=self.callbacks)
File "/usr/local/lib/python3.10/site-packages/pygit2/remote.py", line 257, in push
payload.check_error(err)
File "/usr/local/lib/python3.10/site-packages/pygit2/callbacks.py", line 93, in check_error
check_error(error_code)
File "/usr/local/lib/python3.10/site-packages/pygit2/errors.py", line 56, in check_error
raise KeyError(message)
KeyError: 'the requested type does not match the type in the ODB'
Can anyone please give me a nudge in the right direction? What types is it complaining about? To pygit2, the data passed appears to be pretty opaque.
Is it possible that pygit2 0.28.2 would 'force' always, while pygit2 1.9.2 will only force by request? We've got libgit2's "strict mode" turned off in Python 3.
Thanks!
It turned out that pygit2 0.28.2 works if we start with 0.28.2. If we start with something later, like 1.5.0, and manually switch back to 0.28.2, the damage has already been done to the git repo, causing 0.28.2 to give errors too.
There are likely (somewhat) later versions that are happy as well, but that's another story.
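For anyone comparing the two API generations: forcing a push is not something pygit2 decides on its own; with libgit2 it has to be requested in the refspec. A minimal sketch under that assumption (repository path, remote name, branch and SSH-agent credentials are placeholders, not from the question):

import pygit2

# Hedged sketch, not the asker's code: a leading '+' on the refspec asks
# libgit2 to allow a non-fast-forward (forced) update of the remote ref.
repo = pygit2.Repository("/path/to/repo")
remote = repo.remotes["origin"]
callbacks = pygit2.RemoteCallbacks(credentials=pygit2.KeypairFromAgent("git"))
remote.push(["+refs/heads/master:refs/heads/master"], callbacks=callbacks)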

Semantic versioning in dask repository

Why didn't the commit 7138f470f0e55f2ebdb7638ddc4dfe2e78671403 trigger a new major version of dask, since the function read_metadata is incompatible with older versions? The commit introduced the return of 4 values, while the old version only returned 3. According to semantic versioning, this would have been the correct behavior.
cudf got broken because of that commit.
Code from the issue:
>>> import cudf
>>> import dask_cudf
>>> dask_cudf.from_cudf(cudf.DataFrame({'a':[1,2,3]}),npartitions=1).to_parquet('test_parquet')
>>> dask_cudf.read_parquet('test_parquet')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/nvme/0/vjawa/conda/envs/cudf_15_june_25/lib/python3.7/site-packages/dask_cudf/io/parquet.py", line 213, in read_parquet
**kwargs,
File "/nvme/0/vjawa/conda/envs/cudf_15_june_25/lib/python3.7/site-packages/dask/dataframe/io/parquet/core.py", line 234, in read_parquet
**kwargs
File "/nvme/0/vjawa/conda/envs/cudf_15_june_25/lib/python3.7/site-packages/dask_cudf/io/parquet.py", line 17, in read_metadata
meta, stats, parts, index = ArrowEngine.read_metadata(*args, **kwargs)
ValueError: not enough values to unpack (expected 4, got 3)
dask_cudf==0.14 is only compatible with dask<=0.19. In dask_cudf==0.16 the issue is fixed.
Edit: Link to the issue
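One defensive pattern a downstream wrapper can use to absorb such a change is to unpack based on the tuple length. A hedged sketch, not the actual dask_cudf fix (the import path matches dask of that era and is an assumption here):

from dask.dataframe.io.parquet.arrow import ArrowEngine

def read_metadata_compat(*args, **kwargs):
    # Tolerate both the old 3-tuple and the new 4-tuple return shape
    # of ArrowEngine.read_metadata().
    result = ArrowEngine.read_metadata(*args, **kwargs)
    if len(result) == 4:
        meta, stats, parts, index = result
    else:
        meta, stats, parts = result
        index = None
    return meta, stats, parts, index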
Whilst Dask does not have a concrete policy around the value of the version string, one could argue that in this particular case the IO code is non-core and is largely being pushed by upstream (pyarrow) development rather than by our own initiative.
We are sorry that your code broke, but of course picking the correct versions of packages and expecting downstream packages to catch up is part of the open source ecosystem.
You may want to raise this as a GitHub issue if you'd like to get input from more of the Dask maintenance team. (There isn't really much to "answer" here, from a Stack Overflow perspective.)

'UnityEnvironment' object has no attribute 'behavior_spec'

I followed this link to the docs to create an environment of my own.
But when I run this
from mlagents_envs.environment import UnityEnvironment
env = UnityEnvironment(file_name="v1-ball-cube-game.x86_64")
env.reset()
behavior_names = env.behavior_spec.keys()
print(behavior_names)
The game window pops up and then the terminal shows an error saying
Traceback (most recent call last):
File "index.py", line 6, in <module>
behavior_names = env.behavior_spec.keys()
AttributeError: 'UnityEnvironment' object has no attribute 'behavior_spec'
despite the fact that this is the exact snippet shown in the documentation.
I created the environment by following this (it was made without a brain) and I was able to train the model with a .conf file. Now I want to connect to the Python API.
You need to use stable documentation and a stable repo (release tags) to achieve stable results. Unity ML-Agents changes its syntax every few months, so that is a problem if you are following the master branch.
env.get_behavior_spec(behavior_name: str)
should solve your problem.
https://github.com/Unity-Technologies/ml-agents/blob/release_2/docs/Python-API.md
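For completeness, a hedged sketch of the question's snippet rewritten against the release_2 Python API linked above (the build file name comes from the question; the get_behavior_names()/get_behavior_spec() calls are the release_2 equivalents of the old behavior_spec mapping):

from mlagents_envs.environment import UnityEnvironment

# Hedged sketch against the release_2 low-level API: behavior names are
# queried with get_behavior_names() rather than a behavior_spec attribute.
env = UnityEnvironment(file_name="v1-ball-cube-game.x86_64")
env.reset()
behavior_names = env.get_behavior_names()
print(behavior_names)
spec = env.get_behavior_spec(behavior_names[0])
print(spec)
env.close()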

Code for gensim Word2vec as an HTTP service: 'KeyedVectors' attribute error

I am using the w2v_server_googlenews code from the word2vec HTTP server running at https://rare-technologies.com/word2vec-tutorial/#bonus_app. I changed the loaded file to a file of vectors trained with the original C version of word2vec. I load the file with
gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)
and it seems to load without problems. But when I test the HTTP service with, let's say
curl 'http://127.0.0.1/most_similar?positive%5B%5D=woman&positive%5B%5D=king&negative%5B%5D=man'
I got an empty result with only the execution time.
{"taken": 0.0003361701965332031, "similars": [], "success": 1}
I put a traceback.print_exc() in the except block of the related method, which in this case is def most_similar(self, *args, **kwargs):, and I got:
Traceback (most recent call last):
File "./w2v_server.py", line 114, in most_similar
topn=5)
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 304, in most_similar
self.init_sims()
File "/usr/local/lib/python2.7/dist-packages/gensim/models/keyedvectors.py", line 817, in init_sims
self.syn0norm = (self.syn0 / sqrt((self.syn0 ** 2).sum(-1))[..., newaxis]).astype(REAL)
AttributeError: 'KeyedVectors' object has no attribute 'syn0'
Any idea why this might happen?
Note: I use Python 2.7 and I installed gensim using pip, which gave me gensim 2.1.0.
FYI, that demo code was based on gensim 0.12.3 (from 2015, as listed in its requirements.txt), and would need updating to work with the latest gensim.
It might be sufficient to add a line to w2v_server.py at line 70 (just after the load_word2vec_format()), to force the creation of the needed syn0norm property (which in older gensims was auto-created on load), before deleting the raw syn0 values. Specifically:
self.model.init_sims(replace=True)
(You would leave out the replace=True if you were going to be doing operations other than most_similar(), that might require raw vectors.)
If this works to fix the problem for you, a pull-request to the w2v_server_googlenews repo would be favorably received!
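Putting the suggestion together, a minimal standalone sketch (fname stands in for the asker's vector file; it assumes a 2.x-era gensim where KeyedVectors still has init_sims()):

import gensim

# Hedged sketch of the suggested change: load the C-format vectors, then
# force creation of the unit-normalised syn0norm array that older gensim
# versions built automatically on load.
fname = "vectors.bin"  # placeholder path for the C-format word2vec file
model = gensim.models.KeyedVectors.load_word2vec_format(fname, binary=True)
model.init_sims(replace=True)  # drop replace=True if raw vectors are still needed
print(model.most_similar(positive=["woman", "king"], negative=["man"], topn=5))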

Error in scikit-neuralnetwork classifier

I am using Python 2.7.6 on Ubuntu 14.04.2 LTS, with numpy 1.11.3 and scikit-learn 0.18.1, but the code below throws the following exception.
Here is the link to the official documentation.
from sknn.mlp import Classifier, Layer

nn = Classifier(
    layers=[
        Layer("Maxout", units=100, pieces=2),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)
Error:
Traceback (most recent call last):
File "LeadScore.py", line 19, in <module>
Layer("Maxout", units=100, pieces=2),
TypeError: __init__() got an unexpected keyword argument 'pieces'
(Disclaimer: I have never used this lib.)
(1) scikit-neuralnetwork does not have much to do with scikit-learn, so you should probably mention the version of scikit-neuralnetwork you are using.
(2) According to this and this, Maxout was removed from the library. If you search for pieces or maxout within the project sources (search link), no code is found!
(3) The basic problem here seems to be a mismatch between the example and your version. Maybe there was a version with Maxout but without the parameter pieces; I don't know.
(4) My opinion: this library/project does not seem that active anymore (at least compared to keras and co.). It used PyBrain in the past (dead) and now seems to use Lasagne (somewhat a dying project too). Together with these mismatches between examples and code, this would give me a lot of headaches, and I would switch libraries.
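If switching libraries is not an option right away, one hedged workaround, assuming Maxout really was removed from the installed scikit-neuralnetwork, is to pick a layer type the current Layer signature still accepts, e.g. Rectifier:

from sknn.mlp import Classifier, Layer

# Hedged workaround sketch: replace the Maxout layer (and its 'pieces'
# argument) with a Rectifier hidden layer; the remaining hyperparameters
# are kept from the question.
nn = Classifier(
    layers=[
        Layer("Rectifier", units=100),
        Layer("Softmax")],
    learning_rate=0.001,
    n_iter=25)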
