I built an app using KivyMD and converted it to an APK using the following commands. After installing it on my phone, the app force closes within a few seconds of opening.
1.buildozer init
2.nano buildozer.spec
(in order to change some settings, like the app's name)
3.Then install the following dependencies for buildozer:
a)sudo apt update
b)sudo apt install -y git zip unzip openjdk-8-jdk python3-pip autoconf libtool pkg-config zlib1g-dev libncurses5-dev libncursesw5-dev libtinfo5 cmake libffi-dev libssl-dev
c)pip3 install --user --upgrade Cython==0.29.19 virtualenv
d)export PATH=$PATH:~/.local/bin/
After the above commands, I finally execute:
4.buildozer -v android debug
I use KivyMD because it has good Material Design components.
I've added a picture below; the app loads like this and then returns to my home screen.
Thanks in advance
One procedure you can use to get accurate information about the issue: I assume your project has a "main.py" script. Make a copy of that script and rename it to, for example, "main2.txt", so it is plain text (but still your program). Then edit "main.py" and insert the following code (watch my video https://youtu.be/LFQVhOzRlE0 ); please read the comments in the code:
try:
    m = open("main2.txt").read()
    exec(m)
except Exception as e:
    # The path may be different from device to device;
    # sometimes it is /sdcard/. For the app to have access
    # to the external storage on the mobile device you
    # should add the WRITE_EXTERNAL_STORAGE permission
    # in the spec file, and then grant the permission manually
    # from the app manager on your phone.
    # This lets the apk write content to the device, so you can
    # read the file and see why the app closes itself or what is
    # throwing the error. Watch my video https://youtu.be/LFQVhOzRlE0
    # A fun fact: if "The_error_is_jbsidis.txt" ends up empty
    # (sometimes the app crashes before Python can execute your
    # main.py), the issue is in the compilation itself or a module
    # is missing from your apk spec file. #jbsidis
    n = open("/storage/emulated/0/The_error_is_jbsidis.txt", "w")
    n.write(str(e))
    n.close()
Remember to add the permissions in the spec file; otherwise the app will not be granted access to your device's storage and won't be able to write the error before it crashes. For more info watch my video: https://youtu.be/LFQVhOzRlE0
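For reference, storage permissions are declared in buildozer.spec via the android.permissions entry. A minimal sketch (adjust the list to your app's actual needs):

```ini
# buildozer.spec (sketch): permissions needed to write the error file
android.permissions = INTERNET,READ_EXTERNAL_STORAGE,WRITE_EXTERNAL_STORAGE
```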
I'm completely new to Docker and I'm trying to create and run a very simple example using instructions defined in a DockerFile.
DockerFile->
FROM ubuntu:latest
RUN apt-get update
RUN apt-get install -y python3 pip
COPY ./ .
RUN python3 test.py
contents of test.py ->
import pandas as pd
import numpy as np
print('test code')
command being used to create a Docker Container ->
docker build --no-cache . -t intro_to_docker -f abs/path/to/DockerFile
folder structure -> (both files are present at abs/path/to)
abs/path/to:
-DockerFile
-test.py
Error message ->
error from sender: open .Trash: operation not permitted
(using sudo su did not resolve the issue, which I believe is linked to the COPY command)
I'm using a Mac.
Any help in solving this will be much appreciated!
The Dockerfile should be inside a folder. Navigate to that folder and then run the docker build command. I was facing the same issue, but it got resolved when I moved the Dockerfile inside a folder.
Usually the error would look like:
error: failed to solve: failed to read dockerfile: error from sender: open .Trash: operation not permitted
And in my case, it clearly says that it is unable to find the Dockerfile.
Also, in your command, I see a . after --no-cache; I don't think that's required.
So, better: navigate to the specified path, then run the build command with the -f option replaced by a ., which tells the build command to use the current folder as its build context.
In your case
cd abs/path/to/
docker build --no-cache -t intro_to_docker .
It seems the system policies are not allowing the application to execute this command. The "Terminal" application might not have approval to access the entire file system.
Enable Full Disk Access for the terminal under "System Preferences > Security & Privacy > Privacy > Full Disk Access".
I had the same error message; my Dockerfile was located in the HOME directory. I moved it to a different location, ran docker build from there, and it executed successfully.
So I'm trying to have Python run multiple commands to install programs and enable SSH in order to set up my Linux computer. I would type all this in manually, but I'll be doing this on more devices, so I figured why not put it in a Python script. So far it's easier said than done: I did a boatload of research on this and I can't find anything like it.
So here's what I got so far.
import subprocess

SSH = "systemctl enable sshd"
payload = "nmap"  # it'll be one of a few I'll be installing
subprocess.call(["sudo", "yum", "install", "-y", payload])
subprocess.call(["sudo", SSH])
The first part of this works perfectly: it asks for my password, updates, and installs nmap. But for some reason the command "systemctl enable sshd" always seems to throw it off. I know the command works because I can type it out by itself and it runs fine, but it won't work through this script. I've tried subprocess.run as well. What am I missing here?
Here's the error that I get:
sudo: systemctl start sshd: command not found
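As an aside, that error message hints at the cause: subprocess.call(["sudo", SSH]) passes the whole string "systemctl enable sshd" as a single argument, so sudo looks for a binary literally named that. A minimal sketch of splitting the string into separate argv elements:

```python
import shlex

# The failing call passed "systemctl enable sshd" as ONE argv element.
# Splitting the string gives sudo a real command plus its arguments.
SSH = "systemctl enable sshd"
argv = ["sudo"] + shlex.split(SSH)
print(argv)  # -> ['sudo', 'systemctl', 'enable', 'sshd']

# subprocess.call(argv) would then run: sudo systemctl enable sshd
```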
What you want is Ansible.
Ansible uses SSH to connect to a list of machines and perform configuration tasks. Tasks are described in YAML, which is readable and scales well. You can have playbooks and ad hoc commands. For example, an ad hoc command to install a package would be:
ansible all -i inventory.file -b -m yum -a "name=payload state=present"
In a playbook, installing and enabling openssh-server looks like this:
---
- hosts: all                      # Single host or group of hosts from the inventory file
  become: yes                     # Become sudo
  tasks:                          # List of tasks
    - name: Install ssh-server    # Free-text description
      yum:                        # Module name
        name: openssh-server      # Name of the package
        state: present            # "state: absent" would uninstall the package
    - name: Start and enable service
      service:                    # Module name
        name: sshd                # Name of the service
        state: started            # Started or stopped
        enabled: yes              # Start the service on boot
    - name: Edit config file sshd_config
      lineinfile:                 # Module name
        path: /etc/ssh/sshd_config                # Which file to edit
        regexp: '^(# *)?PasswordAuthentication'   # Which line to match
        line: PasswordAuthentication no           # What to replace it with
Ansible has great documentation (https://docs.ansible.com/); in a few days you will be up to speed.
Best regards.
I have somewhat successfully dockerized a software repository (KPConv) that I plan to work with and extend, using the following Dockerfile:
FROM tensorflow/tensorflow:1.12.0-devel-gpu-py3
# Install other required python stuff
RUN apt-get update && apt-get install -y --fix-missing --no-install-recommends \
    python3-setuptools python3-pip python3-tk
RUN pip install --upgrade pip
RUN pip3 install numpy scikit-learn psutil matplotlib pyqt5 laspy
# Compile the custom operations and CPP wrappers
# For some reason this must be done within container, cannot access libcuda.so during docker build
# Ref: https://stackoverflow.com/questions/66575232
#COPY . /kpconv
#WORKDIR /kpconv/tf_custom_ops
#RUN sh compile_op.sh
#WORKDIR /kpconv/cpp_wrappers
#RUN sh compile_wrappers.sh
# Set the working directory to kpconv
WORKDIR /kpconv
# Set root user password so we can su/sudo later if need be
RUN echo "root:pass" | chpasswd
# Create a user and group akin to the host within the container
ARG USER_ID
ARG GROUP_ID
RUN addgroup --gid $GROUP_ID user
RUN adduser --disabled-password --gecos '' --uid $USER_ID --gid $GROUP_ID user
USER user
#Build
#sudo docker build -t kpconv-test \
# --build-arg USER_ID=$(id -u) \
# --build-arg GROUP_ID=$(id -g) \
# .
At the end of this Dockerfile I followed a post found here, which describes a way to correctly set the permissions of files generated within a container so that the host user can access them without having to alter the file permissions.
Also, this repository makes use of custom TensorFlow operations in C++ (KPConv/tf_custom_ops) along with Python wrappers for custom C++ code (KPConv/cpp_wrappers). The author of KPConv, Thomas Hugues, provides bash scripts that compile each, generating various .so files.
If I COPY the repository into the image during the build (COPY . /kpconv), start up the container, call both compile scripts, and run the code, then Python correctly loads the C++ wrapper (the generated grid_subsampling.cpython-35m-x86_64-linux-gnu.so) and the software runs as expected.
$ sudo docker run -it \
> -v /<myhostpath>/data_sets:/data \
> -v /<myhostpath>/_output:/output \
> --runtime=nvidia kpconv-test /bin/bash
user@eec8553dcb5d:/kpconv$ cd tf_custom_ops
user@eec8553dcb5d:/kpconv/tf_custom_ops$ sh compile_op.sh
user@eec8553dcb5d:/kpconv/tf_custom_ops$ cd ..
user@eec8553dcb5d:/kpconv$ cd cpp_wrappers/
user@eec8553dcb5d:/kpconv/cpp_wrappers$ sh compile_wrappers.sh
running build_ext
building 'grid_subsampling' extension
<Redacted for brevity>
user@eec8553dcb5d:/kpconv/cpp_wrappers$ cd ..
user@eec8553dcb5d:/kpconv$ python training_ModelNet40.py
Dataset Preparation
*******************
Loading training points
1620.2 MB loaded in 0.6s
Loading test points
411.6 MB loaded in 0.2s
<Redacted for brevity>
This works well and allows me to run the KPConv software.
Also, to note for later, the .so file has the hash:
user@eec8553dcb5d:/kpconv/cpp_wrappers/cpp_subsampling$ sha1sum grid_subsampling.cpython-35m-x86_64-linux-gnu.so
a17eef453f6d2370a15bc2a0e6714c978390c5c3  grid_subsampling.cpython-35m-x86_64-linux-gnu.so
It also has the permissions
user@eec8553dcb5d:/kpconv/cpp_wrappers/cpp_subsampling$ ls -al grid_subsampling.cpython-35m-x86_64-linux-gnu.so
-rwxr-xr-x 1 user user 561056 Mar 14 02:16 grid_subsampling.cpython-35m-x86_64-linux-gnu.so
However, this produces a cumbersome workflow for quickly editing the software and running it within the container: every change to the code requires a new image build. I would much rather mount the KPConv code from the host into the container at runtime, so that edits are "live" inside the running container.
Doing this with the Dockerfile at the top of the post (no COPY . /kpconv), performing the same compilation steps, and running the code:
$ sudo docker run -it \
> -v /<myhostpath>/data_sets:/data \
> -v /<myhostpath>/KPConv_Tensorflow:/kpconv \
> -v /<myhostpath>/_output:/output \
> --runtime=nvidia kpconv-test /bin/bash
user@a82e2c1af21a:/kpconv$ cd tf_custom_ops/
user@a82e2c1af21a:/kpconv/tf_custom_ops$ sh compile_op.sh
user@a82e2c1af21a:/kpconv/tf_custom_ops$ cd ..
user@a82e2c1af21a:/kpconv$ cd cpp_wrappers/
user@a82e2c1af21a:/kpconv/cpp_wrappers$ sh compile_wrappers.sh
running build_ext
building 'grid_subsampling' extension
<Redacted for brevity>
user@a82e2c1af21a:/kpconv/cpp_wrappers$ cd ..
user@a82e2c1af21a:/kpconv$ python training_ModelNet40.py
I receive the following Python ImportError
user@a82e2c1af21a:/kpconv$ python training_ModelNet40.py
Traceback (most recent call last):
  File "training_ModelNet40.py", line 36, in <module>
    from datasets.ModelNet40 import ModelNet40Dataset
  File "/kpconv/datasets/ModelNet40.py", line 40, in <module>
    from datasets.common import Dataset
  File "/kpconv/datasets/common.py", line 29, in <module>
    import cpp_wrappers.cpp_subsampling.grid_subsampling as cpp_subsampling
ImportError: /kpconv/cpp_wrappers/cpp_subsampling/grid_subsampling.cpython-35m-x86_64-linux-gnu.so: failed to map segment from shared object
Why is this Python wrapper for C++ only usable when COPY'ing the code into the Docker image, and not when it is mounted as a volume?
The .so file has the same hash and permissions as in the first situation:
user@a82e2c1af21a:/kpconv/cpp_wrappers/cpp_subsampling$ sha1sum grid_subsampling.cpython-35m-x86_64-linux-gnu.so
a17eef453f6d2370a15bc2a0e6714c978390c5c3  grid_subsampling.cpython-35m-x86_64-linux-gnu.so
user@a82e2c1af21a:/kpconv/cpp_wrappers/cpp_subsampling$ ls -al grid_subsampling.cpython-35m-x86_64-linux-gnu.so
-rwxr-xr-x 1 user user 561056 Mar 14 02:19 grid_subsampling.cpython-35m-x86_64-linux-gnu.so
On my host machine the file has the following permissions (it's on the host because /kpconv was mounted as a volume; for some reason the container is also in the future, check the timestamps):
$ ls -al grid_subsampling.cpython-35m-x86_64-linux-gnu.so
-rwxr-xr-x 1 <myusername> <myusername> 561056 Mar 13 21:19 grid_subsampling.cpython-35m-x86_64-linux-gnu.so
After some research on the error message, it looks like every result is specific to a particular situation, though most mention that the error results from some sort of permissions issue.
This Unix & Linux Stack Exchange answer, I think, identifies the actual problem, but I am a bit too far from my days of working with C++ as an intern in college to understand how to use it to fix this issue. I believe the issue lies with the permissions between container and host and between the users on each (that is, root in the container, user (from the Dockerfile) in the container, root on the host, and <myusername> on the host).
I have also attempted to first elevate permissions within the container using the root password created in the Dockerfile, then compile the code and run the software, but this results in the same issue. I have also tried compiling the code as user in the container but running the software as root, again with the same issue.
Thus, another clue: there is seemingly something different about the .so when compiled "only within" the container (no --volume) versus when compiled within the --volume (which is why I compared the file hashes). So maybe it's not so much permissions, but how the .so is loaded within the container by the kernel, or how its location within the --volume affects that loading process?
EDIT: For an SSCCE, you should be able to clone the linked repository to your machine and use the same Dockerfile. You do not need to specify the /data or /output volumes or alter the code in any way (it attempts to load the .so before loading the data, so it will just error and end execution).
If you do not have a GPU or do not want to install nvidia-runtime, you should be able to change the Dockerfile base image to tensorflow:1.12.0-devel-py3 and run the code on CPU.
Your problem is caused by the linker trying to dynamically load the library. There are several possible root causes:
Permissions. The user must have permission to load the library. When mounting file systems into Docker, the owner id and group id on the host are not necessarily the same ids in the container, even though they may map to the same names.
Wrong binary format. The binary was compiled in the wrong format; this can happen if you compile on (for example) macOS and use the result in a Linux container.
Wrong mounting. Mounting with, for example, noexec will also prevent the library from being loaded.
Differences in libraries between the two environments. Some libraries the .so was linked against may be missing; use ldd grid_subsampling.cpython-35m-x86_64-linux-gnu.so and ldd -r -d -v grid_subsampling.cpython-35m-x86_64-linux-gnu.so to check all the linked libraries.
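To quickly rule out the noexec case, you can inspect the mount options of the filesystem holding the .so. A Linux-only sketch (it parses /proc/mounts and uses a naive longest-prefix match, which is good enough for a quick check):

```python
import os

def mount_flags(path):
    """Return the mount options of the filesystem containing `path`."""
    path = os.path.realpath(path)
    best_mnt, best_opts = "", ""
    with open("/proc/mounts") as f:
        for line in f:
            fields = line.split()
            mnt, opts = fields[1], fields[3]
            # naive longest-prefix match on the mount point
            if path == mnt or path.startswith(mnt.rstrip("/") + "/"):
                if len(mnt) > len(best_mnt):
                    best_mnt, best_opts = mnt, opts
    return best_opts.split(",")

# If "noexec" shows up here for the volume's mount point, that alone
# explains the "failed to map segment from shared object" error.
print(mount_flags("/kpconv") if os.path.exists("/kpconv") else mount_flags("/"))
```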
[update, I found the solution, see answer below]
I made a GUI wrapper for protonvpn, a command-line program for Linux. dpkg -b gets me ProtonVPNgui.deb, which works fine. However, I have problems using debuild -S -sa to upload it to Launchpad.
As is, it won't build once uploaded with dput; cf. the error msg.
I tried using debuild -i -us -uc -b to build a .deb file for local testing, but it returns:
dpkg-genchanges: error: binary build with no binary artifacts found; cannot distribute
Any ideas? This whole process is driving me nuts. (I use this tar.gz)
I figured it out myself. Here is how to create a .deb package locally for testing and upload the project to Launchpad:
Create a launchpad user account.
Install dh-python with the package manager
Create the package source dir
mkdir myscript-0.1
Copy your python3 script(s) (or the sample script below) to the source dir (don't use #!/usr/bin/python; use #!/usr/bin/python3 or #!/usr/bin/python2 and edit accordingly below)
cp ~/myscript myscript-0.1
cd myscript-0.1
Sample script:
#!/usr/bin/python3

if __name__ == '__main__':
    print("Hello world")
Create the packaging skeleton (debian/*)
dh_make -s --createorig
Remove the example files
rm debian/*.ex debian/*.EX debian/README.*
Add eventual binary files to include, e.g. gettext .mo files
mkdir myscript-0.1/source
echo debian/locales/es/LC_MESSAGES/base.mo > myscript-0.1/source/include-binaries
Edit debian/control
Replace its content with the following text:
Source: myscript
Section: utils
Priority: optional
Maintainer: Name <email@example.com>
Build-Depends: debhelper (>= 9), python3, dh-python
Standards-Version: 4.1.4
X-Python3-Version: >= 3.2
Package: myscript
Architecture: all
Depends: ${misc:Depends}, ${python3:Depends}
Description: insert up to 60 chars description
insert long description, indented with spaces
debian/install must contain the script(s) to install (python, perl, etc., as well as any .desktop files for start menu shortcuts) along with the target directories, each on one line
echo myscript usr/bin > debian/install
Edit debian/rules
Replace its content with the following text:
#!/usr/bin/make -f
%:
	dh $@ --with=python3
Note: it's a TAB before dh $@, not four spaces!
Build the .deb package
debuild -us -uc
You will get a few Lintian warnings/errors but your package is ready to be used:
../myscript_0.1-1_all.deb
Prepare the upload to Launchpad, inserting your gpg key fingerprint after -k
debuild -S -sa -k12345ABC
Upload to Launchpad
dput ppa:[your ppa name]/ppa myscript_0.1-1_source.changes
This is an update to askubuntu.com/399552. It may take some error messages and googling until you're ready... Cf. the ...orig.tar.gz file at Launchpad for the complete project.
I am writing an app in Kivy. Part of the app turns off the rpi display's backlight after a certain amount of time and turns it back on when an invisible button is pressed. I need to use sudo python when launching the app in order to open the file:
/sys/class/backlight/rpi-backlight/bl_power
The problem is that, by default, I get an error saying "no module named kivy.app" when using sudo python. If I add the line:
Defaults env_keep += "PYTHONPATH"
to the /etc/sudoers file, it allows me to run the app with sudo python, but then none of the buttons in the app function. The app runs, but touch functionality is lost. Is there a way to make this work?
I'd suggest a different approach: make /sys/class/backlight/rpi_backlight/bl_power writable by the user running the Python script (most likely pi). Temporarily, this can be done with
sudo chmod a+w /sys/class/backlight/rpi_backlight/bl_power
(this grants write rights to all users), but it will be reset at the next restart. The permanent solution is a udev rule. These live in /etc/udev/rules.d; on my system, 99-com.rules was a good starting point. Here is what I have in a file called 98-backlight.rules:
SUBSYSTEM=="backlight", PROGRAM="/bin/sh -c 'chown -R root:video /sys/class/backlight && chmod -R 770 /sys/class/backlight; chown -R root:video /sys/devices/platform/rpi_backlight && chmod -R 770 /sys/devices/platform/rpi_backlight'"
This changes the owning group to video and grants group write rights. User pi is a member of video by default. Then all you need is a restart (or sudo udevadm control --reload-rules followed by sudo udevadm trigger) to activate the new rule.
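With the rule in place, the Kivy app can toggle the backlight as the pi user without sudo. A minimal sketch (the default path and the 0 = on / 1 = off convention follow the rpi_backlight driver; the path parameter here is just for illustration/testing):

```python
def set_backlight(on, path="/sys/class/backlight/rpi_backlight/bl_power"):
    """Write the bl_power value: 0 turns the backlight on, 1 turns it off."""
    with open(path, "w") as f:
        f.write("0" if on else "1")

# e.g. bind these to the timeout and the invisible button:
# set_backlight(False)  # screen off
# set_backlight(True)   # screen back on
```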