Multiple Python files into one RPM package - python

I am trying to generate an RPM package for a Python project that includes several source files, like:
runner.py
constants.py
helpers.py
configuration.py
...
The project includes this runner, a CLI, and a GUI, which I excluded from the list. The files other than runner.py are code shared among those executables. My intention is to create an RPM package for each; if I manage to solve one of them, it will be easier to solve the others.
My current build.sh looks like the one below. It is a fork of another project's SPEC and build script.
...
# Generate and fill the source folders
cp "${SCRIPT_DIR}"/../runner.py package-"${VERSION}"/usr/sbin/runner
chmod +x package-"${VERSION}"/usr/sbin/runner
cp "${SCRIPT_DIR}"/../constants.py package-"${VERSION}"/usr/sbin/constants
chmod +x package-"${VERSION}"/usr/sbin/constants
cp "${SCRIPT_DIR}"/../helpers.py package-"${VERSION}"/usr/sbin/helpers
chmod +x package-"${VERSION}"/usr/sbin/helpers
cp "${SCRIPT_DIR}"/../configuration.py package-"${VERSION}"/usr/sbin/configuration
chmod +x package-"${VERSION}"/usr/sbin/configuration
...
tar --create --file package-"${VERSION}".tar.gz package-"${VERSION}"
...
# Remove our build directory, now that we have our tarball
rm -fr package-"${VERSION}"
mv package-"${VERSION}".tar.gz ~/rpmbuild/SOURCES/
cp rpmbuild/SPECS/* ~/rpmbuild/SPECS/
echo 'Building RPM package based on SPECS/package.spec'
rpmbuild -bb -D 'debug_package %{nil}' ~/rpmbuild/SPECS/package.spec
And below you can see the related part in package.spec:
...
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/%{_sbindir}
cp usr/sbin/runner.py $RPM_BUILD_ROOT/%{_sbindir}/runner
cp usr/sbin/constants.py $RPM_BUILD_ROOT/%{_sbindir}/constants
cp usr/sbin/helpers.py $RPM_BUILD_ROOT/%{_sbindir}/helpers
cp usr/sbin/configuration.py $RPM_BUILD_ROOT/%{_sbindir}/configuration
...
When the RPM package is installed, the files other than runner should not be in the /usr/sbin/ directory, yet that is exactly where they end up, which is not the expected result. So I believe the cleanest way would be to have one self-contained file per application, but that prevents me from sharing the common parts. What is the recommended approach in this case?
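For what it's worth, one common way out of this dilemma is to ship the shared modules as an importable Python package under the interpreter's site-packages directory and install only the entry-point script into %{_sbindir}; the executables then import the shared code instead of carrying copies of it. A minimal %install/%files sketch, assuming the Python RPM macros are available so that %{python3_sitelib} is defined, and using a hypothetical package name mytool:
%install
rm -rf $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT%{_sbindir}
mkdir -p $RPM_BUILD_ROOT%{python3_sitelib}/mytool
# only the entry point is installed as an executable
install -m 0755 usr/sbin/runner $RPM_BUILD_ROOT%{_sbindir}/runner
# the shared modules become an importable package, not executables
install -m 0644 constants.py helpers.py configuration.py $RPM_BUILD_ROOT%{python3_sitelib}/mytool/
touch $RPM_BUILD_ROOT%{python3_sitelib}/mytool/__init__.py

%files
%{_sbindir}/runner
%{python3_sitelib}/mytool/
With that layout, runner starts with something like from mytool import helpers, and the CLI and GUI packages can depend on a common subpackage that owns %{python3_sitelib}/mytool/. The source paths above are illustrative; adjust them to match how your tarball is laid out.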

Related

PermissionError: [Errno 13] Permission denied with .sh file in mac terminal [duplicate]

I have a shell script which I want to run without using the "sh" or "bash" commands. For example:
Instead of: sh script.sh
I want to use: script.sh
How can I do this?
P.S. (i) I don't use shell scripts much and I tried reading about aliases, but I did not understand how to use them.
(ii) I also read about linking the script to another file in one of the PATH directories. I am using my university's server and I don't have permission to create files in those locations.
Add a "shebang" at the top of your file:
#!/bin/bash
And make your file executable (chmod +x script.sh).
Finally, modify your path to add the directory where your script is located:
export PATH=$PATH:/appropriate/directory
(typically, you want $HOME/bin for storing your own scripts)
These are the prerequisites for using the script name directly:
Add the shebang line (#!/bin/bash) at the very top.
Use chmod u+x scriptname to make the script executable (where scriptname is the name of your script).
Place the script under the /usr/local/bin folder.
Note: I suggest placing it under /usr/local/bin because that path will most likely already be in your PATH variable.
Run the script using just its name, scriptname.
If you don't have access to /usr/local/bin then do the following (a consolidated sketch follows this list):
Create a folder in your home directory and call it bin.
Do ls -lA on your home directory, to identify the start-up script your shell is using. It should be either .profile or .bashrc.
Once you have identified the start up script, add the following line:
PATH="$PATH:$HOME/bin"
Once added, source your start-up script or log out and log back in.
To source, put . followed by a space and then your start-up script name, e.g. . .profile or . .bashrc
Run the script using just its name, scriptname.
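Putting the steps above together, a minimal session might look like this (a sketch; script.sh and scriptname are example names, and your start-up script may be .profile instead of .bashrc):
mkdir -p "$HOME/bin"
cp script.sh "$HOME/bin/scriptname"
chmod u+x "$HOME/bin/scriptname"
echo 'PATH="$PATH:$HOME/bin"' >> "$HOME/.bashrc"   # add the PATH line to your start-up script
. "$HOME/.bashrc"                                  # source it so the change takes effect now
scriptname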
Just make sure it is executable, using chmod +x. By default, the current directory is not on your PATH, so you will need to execute it as ./script.sh, or otherwise reference it by a qualified path. Alternatively, if you truly need just script.sh, you would need to add the script's directory to your PATH. (You may not have access to modify the system path, but you can almost certainly modify the PATH of your own current environment.) This also assumes that your script starts with something like #!/bin/sh.
You could also still use an alias, which is not really related to shell scripting but just to the shell, and is as simple as:
alias script.sh='sh script.sh'
This would allow you to simply use script.sh (literally; this won't work for any other *.sh file) instead of sh script.sh.
In this example the file will be called myShell
First of all we will need to create this file; we can start off by typing the following:
sudo nano myShell
Notice we didn't put the .sh extension?
That's because when we run it from the terminal we will only need to type myShell in order to run our command!
Now, in nano the top line MUST be #!/bin/bash; then you may leave a blank line before continuing.
For demonstration I will add a basic Hello World! response
So, I type the following:
echo Hello World!
After that my example should look like this:
#!/bin/bash
echo Hello World!
Now save the file and then run this command:
chmod +x myShell
Now that we have made the file executable, we can copy it to /usr/bin/ using the following command:
sudo cp myShell /usr/bin/
Congrats! Our command is now done! In the terminal we can type myShell and it should say Hello World!
You have to enable the executable bit for the program.
chmod +x script.sh
Then you can use ./script.sh
You can add the folder to the PATH in your .bashrc file (located in your home directory).
Add this line to the end of the file:
export PATH=$PATH:/your/folder/here
You can type sudo install (name of script) /usr/local/bin/(what you want to type to execute said script)
ex: sudo install quickcommit.sh /usr/local/bin/quickcommit
Enter your password; now it can run without the .sh extension and from any directory.
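Note that install can also set the permission bits while copying, so a separate chmod step is not needed; a sketch with the same example names:
sudo install -m 755 quickcommit.sh /usr/local/bin/quickcommit   # copy and mark executable in one step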
Add . (current directory) to your PATH variable.
You can do this by editing your .profile file.
Put the following line in your .profile file:
PATH=$PATH:.
Just make sure to add a shebang line (#!/bin/bash) at the start of your script and make the script executable (using chmod +x <File Name>).
Here is my backup script that will give you the idea and the automation:
Server: Ubuntu 16.04
PHP: 7.0
Apache2, Mysql etc...
# Make Shell Backup Script - Bash Backup Script
nano /home/user/bash/backupscript.sh
#!/bin/bash
# Backup All Start
mkdir /home/user/backup/$(date +"%Y-%m-%d")
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/etc_rest.zip /etc -x "*apache2*" -x "*php*" -x "*mysql*"
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/etc_apache2.zip /etc/apache2
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/etc_php.zip /etc/php
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/etc_mysql.zip /etc/mysql
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/var_www_rest.zip /var/www -x "*html*"
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/var_www_html.zip /var/www/html
sudo zip -ry /home/user/backup/$(date +"%Y-%m-%d")/home_user.zip /home/user -x "*backup*"
# Backup All End
echo "Backup Completed Successfully!"
echo "Location: /home/user/backup/$(date +"%Y-%m-%d")"
chmod +x /home/user/bash/backupscript.sh
sudo ln -s /home/user/bash/backupscript.sh /usr/bin/backupscript
Change /home/user to your user directory and type backupscript anywhere in the terminal to run the script (assuming that /usr/bin is in your PATH).
Enter "#!/bin/sh" before script.
Then save it as script.sh for example.
copy it to $HOME/bin or $HOME/usr/bin
The directory can be different on different linux distros but they end with 'bin' and are in home directory
cd $HOME/bin or $HOME/usr/bin
Type chmod 700 script.sh
And you can run it just by typing run.sh on terminal.
If it not work, try chmod +x run.sh instead of chmod 700 run.sh
Make any file executable
Let's say you have an executable file called migrate.linux-amd64 and you want to run this file as a command like "migrate".
First test the executable file from the file location:
[oracle@localhost]$ ./migrate.linux-amd64
Usage: migrate OPTIONS COMMAND [arg...]
       migrate [ -version | -help ]
Options:
  -source          Location of the migrations (driver://url)
  -path            Shorthand for -source=file://path
  -database        Run migrations against this database (driver://url)
  -prefetch N      Number of migrations to load in advance before executing (default 10)
  -lock-timeout N  Allow N seconds to acquire database lock (default 15)
  -verbose         Print verbose logging
  -version         Print version
  -help            Print usage
Commands:
  goto V    Migrate to version V
  up [N]    Apply all or N up migrations
  down [N]  Apply all or N down migrations
  drop      Drop everyting inside database
  force V   Set version V but don't run migration (ignores dirty state)
  version   Print current migration version
Make sure you have execute privileges on the file
-rwxr-xr-x 1 oracle oinstall 7473971 May 18 2017 migrate.linux-amd64
if not, run chmod +x migrate.linux-amd64
Then copy your file to /usr/local/bin. This directory is owned by root, so use sudo or switch to root and perform the following operations:
sudo cp migrate.linux-amd64 /usr/local/bin
sudo chown oracle:oracle /usr/local/bin/migrate.linux-amd64
Then create a link like below (ln without -s creates a hard link; add -s for a symbolic link):
sudo ln /usr/local/bin/migrate.linux-amd64 /usr/local/bin/migrate
sudo chown oracle:oracle /usr/local/bin/migrate
Finally add /usr/local/bin to your path or user profile
export PATH=$PATH:/usr/local/bin
Then run the command as "migrate"
[oracle@localhost]$ migrate
Usage: migrate OPTIONS COMMAND [arg...]
       migrate [ -version | -help ]
(followed by the same options and commands listing as above, confirming the command now runs as migrate)
Make the script file executable (using the file manager's properties dialog or chmod +x).
Create an alias for the executable in ~/.bashrc: alias <alias name>='<full script file path>'
Refresh the user session to apply it: source ~/.bashrc
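Assuming a hypothetical script at /home/user/scripts/myscript.sh, that recipe might look like:
chmod +x /home/user/scripts/myscript.sh                               # make the script executable
echo "alias myscript='/home/user/scripts/myscript.sh'" >> ~/.bashrc   # define the alias
source ~/.bashrc                                                      # reload the session
myscript                                                              # run it via the alias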
Just to add to what everyone suggested: even with those solutions, the problem persists if the user wants to execute the script with sudo.
example:
chmod a+x /tmp/myscript.sh
sudo ln -s /tmp/myscript.sh /usr/local/bin/myscript
Typing myscript would work, but typing sudo myscript would return command not found.
Under sudo you would still have to type sudo sh myscript or sudo bash myscript.
I can't think of a solution around this.
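For completeness, two workarounds are sometimes used in that situation (a sketch; myscript is the symlink name from the example above):
sudo "$(command -v myscript)"      # resolve the full path first, then hand it to sudo
sudo env "PATH=$PATH" myscript     # or pass the caller's PATH into the sudo environment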
Just:
/path/to/file/my_script.sh

What can I safely remove in a python lib folder?

I am using:
mkdir -p build/python/lib/python3.6/site-packages
pipenv run pip install -r requirements.txt --target build/python/lib/python3.6/site-packages
to create a build directory with everything I need for my Python project, but I also need to save as much space as possible.
What can I safely remove in order to save space?
Maybe I can do find build -type d -iname "*.dist-info" -exec rm -R {} \; ?
Can I remove *.py if I leave *.pyc?
Thanks
Perhaps platform-specific *.exe files, if your project doesn't need to run on Windows:
How to prevent *.exe ...
Deleting *.pyc (byte-compiled) files is 100% supported, at the cost of some load time. The reverse trick, retaining just the *.pyc files and deleting most of the *.py sources, works in some Python versions, but it is not safe IMHO and I have never tried it.
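As a rough sketch of the kind of cleanup being discussed (paths as in the question): byte-compiled caches can always be regenerated, while removing the *.dist-info metadata will break anything that inspects installed packages at runtime (pkg_resources, importlib.metadata), so test your application afterwards.
find build -type d -name "__pycache__" -prune -exec rm -r {} +    # byte-compiled caches; regenerated on demand
find build -type d -name "*.dist-info" -prune -exec rm -r {} +    # package metadata; only safe if nothing reads it at runtime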

Pipenv lock: how to cache downloads for transfer to an offline machine

I am looking for a way to create a self-contained archive of all dependencies required to satisfy a Pipfile.lock. One way to achieve this would be to point PIPENV_CACHE_DIR at an empty temporary directory, run pipenv install, ship the contents of that directory, and use it on the offline machine.
E.g., this should work:
tmpdir=$(mktemp -d)
if [ -n "$offline" ]; then
    tar -xf pipenv_cache.tar -C "$tmpdir"
fi
pipenv --rm
PIPENV_CACHE_DIR="$tmpdir" PIP_CACHE_DIR="$tmpdir" pipenv install
if [ -n "$online" ]; then
    tar -cf pipenv_cache.tar -C "$tmpdir" .
fi
However, there are a number of problems with this script, one being that it can’t use the online machine’s cache, having to download everything every time instead.
The question is, is there a better way, that doesn’t involve a custom script? Maybe some documented community best practices?
Ideally, there would exist an interface like:
pipenv lock --create-archive <file_name>
pipenv install --from-archive <file_name>
With some Shell scripting work, wheelfreeze can be made to do it.
To create the archive (in a Bash shell):
(. "$(pipenv --venv)"/bin/activate && wheelfreeze <(pipenv lock -r))
And to install from the archive:
wheelfreeze/install "$(pipenv --venv)"
Disclosure: I created wheelfreeze while trying to solve the problem – “to scratch my own itch”, as the saying goes.
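If wheelfreeze is not an option, a rough equivalent can be sketched with plain pip: export the locked requirements, download everything into a directory on the online machine, and install from that directory offline with --no-index. The archive and directory names here are just examples.
# on the online machine
pipenv lock -r > requirements.txt           # export the locked dependency set
pip download -r requirements.txt -d pkgs    # fetch wheels/sdists into ./pkgs
tar -cf pipenv_offline.tar requirements.txt pkgs
# on the offline machine
tar -xf pipenv_offline.tar
pip install --no-index --find-links pkgs -r requirements.txt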

Jenkins doesn't include referenced files when building conda package

I am building a small conda package with Jenkins (linux) that should just:
Download a .zip from an external reference holding font files
Extract the .zip
Copy the font files to a specific folder
Build the package
The build runs successfully, but the package does not include the font files and is basically empty. My build.sh has:
mkdir $PREFIX\root\share\fonts
cp *.* $PREFIX\root\share\fonts
My meta.yaml source has:
source:
  url: <ftp server url>/next-fonts.zip
  fn: next-fonts.zip
In Jenkins I do:
mkdir build
conda build fonts
The console output is strange though at this part:
+ mkdir /var/lib/jenkins/conda-bld/fonts_1478708638575/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_prootsharefonts
+ cp Lato-Black.ttf Lato-BlackItalic.ttf Lato-Bold.ttf Lato-BoldItalic.ttf Lato-Hairline.ttf Lato-HairlineItalic.ttf Lato-Italic.ttf Lato-Light.ttf Lato-LightItalic.ttf Lato-Regular.ttf MyriadPro-Black.otf MyriadPro-Bold.otf MyriadPro-Light.otf MyriadPro-Regular.otf MyriadPro-Semibold.otf conda_build.sh /var/lib/jenkins/conda-bld/fonts_1478708638575/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_prootsharefonts
BUILD START: fonts-1-1
Source cache directory is: /var/lib/jenkins/conda-bld/src_cache
Found source in cache: next-fonts.zip
Extracting download
Package: fonts-1-1
source tree in: /var/lib/jenkins/conda-bld/fonts_1478708638575/work/Fonts
number of files: 0
To me it seems the cp either doesn't complete or copies to the wrong directory. Unfortunately, with the placeholder stuff I really can't decipher where exactly the fonts land when they are copied; all I know is that in /work/Fonts there are no files, and thus nothing is included in the package. While typing, I also noted that /work/Fonts actually has Fonts starting with a capital F, while nowhere in the configuration or the scripts is there any instance of fonts starting with a capital F.
Any insight on what might go wrong?
mkdir $PREFIX\root\share\fonts
cp *.* $PREFIX\root\share\fonts
should be replaced with
mkdir $PREFIX/root/share/fonts
cp * $PREFIX/root/share/fonts
The build script was taken from another package that was built on Windows, and in changing the build script I forgot to change the folder separators.
Additionally, mkdir on Linux does not create intermediate subfolder structures by default (while the Windows mkdir does), so this
mkdir $PREFIX/root/
mkdir $PREFIX/root/share/
mkdir $PREFIX/root/share/fonts/
cp * $PREFIX/root/share/fonts/
was the ultimate solution to the problem.
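Equivalently, mkdir -p creates the intermediate directories in a single call, which keeps the build script a little shorter:
mkdir -p $PREFIX/root/share/fonts    # -p creates root/ and share/ along the way
cp * $PREFIX/root/share/fonts/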

Sharing Python virtualenv environments

I have a Python virtualenv (created with virtualenvwrapper) in one user account. I would like to use it from another user account on the same host.
How can I do this?
How can I set up virtual environments so that they are available to any user on the host? (Primarily Linux / Debian, but also Mac OS X.)
Thanks.
Put it in a user-neutral directory, and make it group-readable.
For instance, for libraries, I use /srv/http/share/ for sharing code across web applications.
You could use /usr/local/share/ for normal applications.
I had to do this for workmates. The answer from @Flavius worked great once I added a few commands to handle virtualenvwrapper. You need to put your venvs and your WORKON projects folder somewhere you and your boss/friend can find and use them.
sudo mkdir -p /usr/local/share
sudo mv ~/.virtualenvs /usr/local/share
sudo mkdir -p /usr/src/venv/
Assuming you want everyone on the machine to be able to both mkproject and workon:
chmod a+rwx /usr/local/share/.virtualenvs
chmod a+rwx /usr/src/venv
Otherwise chown and chmod to match your security requirements.
If you have any hooks or scripts that expect ~/.virtualenvs to be in the normal place, you better symlink it (on both your user account and your friend's)
ln -s /usr/local/share/.virtualenvs ~/.virtualenvs
Then modify your (and your friend's) .bashrc file to let virtualenvwrapper know where you moved things. Your bashrc should have something like this:
export PROJECT_HOME="/usr/src/venv/"
export WORKON_HOME="/usr/local/share/.virtualenvs"
export USR_BIN=$(dirname $(which virtualenv))
if [ -f $USR_BIN/virtualenvwrapper.sh ]; then
    source $USR_BIN/virtualenvwrapper.sh
else
    if [ -f /usr/bin/virtualenvwrapper.sh ]; then
        source /usr/bin/virtualenvwrapper.sh
    else
        echo "Can't find a virtualenv wrapper installation"
    fi
fi
Once you log out and back in (or just source ~/.bashrc) you should be good to go with commands like mkproject awesome_new_python_project and workon awesome_new_python_project.
As a bonus, add hooks to load the project folder in Sublime Text every time you workon.
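As a sketch of that bonus step: virtualenvwrapper sources a per-environment postactivate hook, so opening the project in Sublime Text on workon could look like this (assuming Sublime's subl command-line helper is installed):
# contents of $WORKON_HOME/awesome_new_python_project/bin/postactivate
cd "$PROJECT_HOME/awesome_new_python_project"   # jump into the project folder
subl .                                          # open it in Sublime Text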
