I am a beginner in Yocto, and I would like to know how to get a Python module into a Yocto project. I have tried adding it as a recipe but had no luck, and I would appreciate any kind of help.
I have added the Python module speedtest to common.yml as a recipe, but when I build again the tasks are the same, so the module still isn't added. I have crawled through all kinds of forums and links without luck, so maybe I am doing something wrong in the process; I would appreciate a systematic walkthrough.
This is a direct copy of my post on r/AWS since I didn't get any useful answers there (edit: fix came from a comment on the post shortly afterward).
I’m working on a project in Python that uses boto3, and I’ve recently come across an issue I’d like some clarification on. When I run my script normally, it runs as intended. However, when I package it into an executable (--onefile or otherwise) and run it, it spits out “botocore.exceptions.DataNotFoundError: Unable to load data for: endpoints” as soon as it hits the first boto3 call. Before this, I also had to do some finagling with hidden imports, because it was giving ModuleNotFoundErrors for html.parser and cffi. That makes me think there might be something weird going on in the background that’s causing both of these errors, because I’ve gotten boto3 programs to package into a pyinstaller exe pretty easily in the past.
I’ve looked online and can’t really find much on this error. What I do know from my digging is that there’s a data folder in the package that contains an endpoints json. Am I just somehow not packaging all of boto3 into the executable, or is there something else going on here?
EDIT:
I solved it - my pyinstaller package was out of date (thanks conda) so it was missing the required hooks. After updating using -c conda-forge, it worked.
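For anyone who can't update PyInstaller for whatever reason, I believe the same data and hidden imports can also be declared by hand in the .spec file (spec files are just Python). A rough sketch only - myscript.py and the exe name are placeholders, and the exact spec layout depends on your PyInstaller version, so generate a real spec with pyi-makespec and edit it rather than copying this verbatim:

# myscript.spec - rough sketch; the file pyi-makespec generates has more
# options than shown here, this only highlights the parts I mean.
from PyInstaller.utils.hooks import collect_data_files

a = Analysis(
    ['myscript.py'],
    datas=collect_data_files('botocore'),   # bundles botocore/data, including the endpoints JSON
    hiddenimports=['html.parser', 'cffi'],  # the imports PyInstaller was missing for me
)
pyz = PYZ(a.pure)
exe = EXE(pyz, a.scripts, a.binaries, a.datas, name='myscript', console=True)

Then build from the spec with pyinstaller myscript.spec instead of pointing pyinstaller at the .py file.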
There probably isn't one "right answer" to this question. I'm interested in thoughts and opinions.
We have a couple hundred RHEL7/Centos7/Rocky8 nodes. Many of them have python modules installed via pip/pip3.
I've been searching for best practices on routine/monthly patching of these modules... so far I haven't found any. Obviously, things installed with rpm/yum/dnf are pretty easy to deal with.
From the pip man page:
pip install --upgrade SomePackage
Great!
But how do you update all of them?
Sure, it is possible to do a "pip list/freeze", pipe that to awk... etc.
Surely, there's a better way. Ideally, one that captures things like "boto3 V1.2 replaced with boto3 V1.3"
Right now it feels like I'm the only one thinking about this. Maybe I am and it is stupid. I'm ok with that response as well (but please tell me why).
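Just to be concrete, the DIY version I have in mind is roughly the sketch below (the "boto3 1.2 -> 1.3" style output is the part I actually care about) - exactly the kind of thing I'd hope an established tool or practice already replaces:

#!/usr/bin/env python3
# Sketch of the DIY approach: find every outdated pip package for this
# interpreter, upgrade it, and record "name: old -> new" so the change
# (e.g. boto3 1.2 replaced with 1.3) ends up in the patching log.
import json
import subprocess
import sys

def outdated_packages():
    out = subprocess.check_output(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"]
    )
    return json.loads(out)

for pkg in outdated_packages():
    name, old, new = pkg["name"], pkg["version"], pkg["latest_version"]
    print(f"{name}: {old} -> {new}")
    subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", name])

It would have to be run with whichever interpreter owns the packages (python2 vs python3), and on the older RHEL 7 boxes the system pip may be too old for --format=json, which is part of why I'd rather not roll my own.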
A common solution is to deploy the application code inside a Docker container - the container image contains its own version of Python and all the dependency modules, so you don't have to update each module on all the host machines individually. It also means that the combination of OS, Python and modules that you deploy can be tested and then "frozen" into an immutable image which is then deployed the same everywhere.
"Right now it feels like I'm the only one thinking about this."
I realise the above answer is probably not helpful in your situation as you already have a fairly large system deployed... but it might help to explain why not many people are developing solutions to your problem!
I am really new to Jenkins and Python, so when I initially researched this problem, there was a limit to my understanding. I am looking to write a Python script and have it run on Jenkins as part of some automated testing I wish to do. My script interacts with an API and hence imports the 'requests' module in Python. It works fine using the Python interpreter on my local machine, but I have had issues when trying to use the Jenkins Python script builder, so I am looking for a way around this.
As I mentioned, I have looked around the internet for solutions, but as my knowledge of this topic is limited, I have found it difficult to understand certain ideas that have been mentioned on the web. One lead I have had relates to the use of virtual environments on Jenkins, but as it's something I've never used, I have struggled to implement it. I have installed the ShiningPanda plugin on Jenkins, but I am unsure how to use it.
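For what it's worth, the shape I think those virtualenv suggestions have in mind is roughly the sketch below (run_api_tests.py is a made-up name for my script, and I may well be misreading the advice) - this is essentially the step I can't get working as a Jenkins build step:

# Rough sketch: create a virtualenv inside the Jenkins workspace, install
# 'requests' into it, then run the test script with that environment's Python.
# (Paths assume a Linux agent; on Windows it would be venv\Scripts\... instead.)
import os
import subprocess
import venv

workspace = os.environ.get("WORKSPACE", ".")      # Jenkins sets WORKSPACE per build
venv_dir = os.path.join(workspace, "venv")

if not os.path.isdir(venv_dir):
    venv.EnvBuilder(with_pip=True).create(venv_dir)

pip = os.path.join(venv_dir, "bin", "pip")
python = os.path.join(venv_dir, "bin", "python")

subprocess.check_call([pip, "install", "requests"])
subprocess.check_call([python, "run_api_tests.py"])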
Any help given is greatly appreciated :)
Thanks
I'm learning about crawlers, and after a few basic ones I tried downloading the google scholar crawler master from GitHub to see how it runs. After a few errors that I could fix, I ran into a "ModuleNotFoundError: No module named 'proxy'" error (in the middleware.py file, the "from proxy import PROXIES" line is the issue).
This code has had a few problems stemming from approaches that are no longer supported/advised in Python 3.x, including modules that have since been renamed/moved, but I was unable to find out whether that is also the case here. I would appreciate any help.
Assuming you're talking about this https://github.com/geekan/google-scholar-crawler crawler:
I just tried to run it on Python 2.7 and had no problems with it. A brief look at the misc module told me that there is a possible problem with relative imports (some information about this may be found in the question Relative imports in Python 3).
So, the short answer is simply to use Python 2.7, as it will allow you to concentrate on understanding how Scrapy crawlers work instead of on language version differences.
UPD: also make sure to remove all of the import pdb; pdb.set_trace() breakpoints in the code
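UPD 2: if you do want to stay on Python 3, that particular error is usually just the relative-import change. Assuming proxy.py sits next to middleware.py inside the same package (with an __init__.py), something like this in middleware.py tends to cover both versions:

# In middleware.py - sketch of the usual Python 3 fix, assuming proxy.py lives
# in the same package as middleware.py.
try:
    from .proxy import PROXIES   # Python 3: explicit relative import
except ImportError:
    from proxy import PROXIES    # Python 2: the old implicit relative import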
I'm trying to use tablesnap to make backups, but without success. I'm using Ubuntu 12.04, and after attempting the installation of tablesnap as it is described on GitHub, I'm not able to get it working. I guess this is due to the fact that the package is for Maverick, so I have tried to copy the code and execute it directly, but again without success. It always displays the message "INFO Starting up" and nothing seems to happen.
I'm sure the problem is my ignorance, but could you help me? Do you know of any document or example of installing tablesnap and using it for backup and recovery?
UPDATE:
The problem was me. Tablesnap was working, but there was no IN_MOVED_TO event. So now what I'm trying to do is back up a complete keyspace. I have tried the "-B" option of tablesnap, but still nothing is uploaded to S3. Any idea?
I'm sure the problem is my ignorance of Linux, Python and Cassandra, but I haven't found enough information to make it work, or a step-by-step document.
Being blunt here: yes. You've got the answer to your own question. It's complicated to get used to all of that at once, but a step-by-step document won't help you a bit. Really. You need to be familiar with what you're doing, or else you won't be able to do something useful.
To compare: installing Cassandra is like buying a dentist's chair. Even with very precise step-by-step instructions on how to set it up and how to place a patient in it, you'll be a terrible, terrible threat to your patient's teeth if you have no prior education as a dentist.
Cassandra is a mighty tool for large, distributed systems. Someone who develops for it, or even just administers it, needs a very solid understanding of how to work in the environment that Cassandra runs in. Get yourself used to Linux. Then read a lot about Cassandra. Then that project will be on your level, and you will have success!
Ok, what I was looking for is very easy. Here is what I have done to make a complete backup of my keyspaces:
python tablesnap -k MY_AWS_KEY -s MY_AWS_SECRET -B my_s3_bucket /opt/cassandra/data/my_keyspace/*
Just replace /opt/cassandra/data/my_keyspace/ with the path to your own keyspace's data directory and that's all. Something as simple as this is what I was asking for, so I leave it here in case someone finds it useful.