What can I safely remove in a python lib folder?

I am using:
mkdir -p build/python/lib/python3.6/site-packages
pipenv run pip install -r requirements.txt --target build/python/lib/python3.6/site-packages
to create a directory build with everything I need for my python project but I also need to save as much space as possible.
What can I safely remove in order to save space?
Maybe I can do find build -type d -iname "*.dist-info" -exec rm -R {} \; ?
Can I remove *.py if I leave *.pyc?
Thanks

Perhaps platform-specific *.exe files, if your project doesn't need to run on Windows:
How to prevent *.exe ...
Deleting *.pyc files (byte-compiled caches) is 100% supported, with only an impact on load time, since Python regenerates them at import. Your trick of the reverse, retaining just *.pyc and deleting most *.py sources, works in some Python versions, but it is not safe IMHO and I have never tried it.
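As a concrete illustration, a cautious cleanup pass over the build tree might look like the sketch below (an example under assumptions, not a guaranteed-safe recipe: removing *.dist-info breaks pip's ability to list or uninstall those packages, so test the deployment afterwards):
# remove package metadata; pip will no longer see these packages as installed
find build -type d -name "*.dist-info" -prune -exec rm -r {} \;
# remove byte-compiled caches; Python regenerates them at import time
find build -type d -name "__pycache__" -prune -exec rm -r {} \;
find build -name "*.pyc" -delete
The -prune keeps find from trying to descend into directories it has just deleted, which avoids the errors your -exec rm -R variant can produce; -delete requires GNU find.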

Related

Multiple Python files into one RPM package

I am trying to generate an RPM package for a Python project that includes several source files, like:
runner.py
constants.py
helpers.py
configuration.py
...
The project includes this runner, a CLI and a GUI, which I excluded from the list. The files other than runner.py are the code pieces shared among the executables. My intention is to create an RPM package for each; if I manage to solve one of them, it should be easier to solve the others.
My current build.sh is shown below. It is a fork of another project's SPEC and build script.
...
# Generate and fill the source folders
cp "${SCRIPT_DIR}"/../runner.py package-"${VERSION}"/usr/sbin/runner
chmod +x package-"${VERSION}"/usr/sbin/runner
cp "${SCRIPT_DIR}"/../constants.py package-"${VERSION}"/usr/sbin/constants
chmod +x package-"${VERSION}"/usr/sbin/constants
cp "${SCRIPT_DIR}"/../helpers.py package-"${VERSION}"/usr/sbin/helpers
chmod +x package-"${VERSION}"/usr/sbin/helpers
cp "${SCRIPT_DIR}"/../configuration.py package-"${VERSION}"/usr/sbin/configuration
chmod +x package-"${VERSION}"/usr/sbin/configuration
...
tar --create --gzip --file package-"${VERSION}".tar.gz package-"${VERSION}"
...
# Remove our build directory, now that we have our tarball
rm -fr package-"${VERSION}"
mv package-"${VERSION}".tar.gz ~/rpmbuild/SOURCES/
cp rpmbuild/SPECS/* ~/rpmbuild/SPECS/
echo 'Building RPM package based on SPECS/package.spec'
rpmbuild -bb -D 'debug_package %{nil}' ~/rpmbuild/SPECS/package.spec
And below you can see the related part in package.spec:
...
%install
rm -fr $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/%{_sbindir}
cp usr/sbin/runner.py $RPM_BUILD_ROOT/%{_sbindir}/runner
cp usr/sbin/constants.py $RPM_BUILD_ROOT/%{_sbindir}/constants
cp usr/sbin/helpers.py $RPM_BUILD_ROOT/%{_sbindir}/helpers
cp usr/sbin/configuration.py $RPM_BUILD_ROOT/%{_sbindir}/configuration
...
When the RPM package is installed, the files other than runner should not end up in the /usr/sbin/ directory; that is not the expected result here. The cleanest way seems to be one file per application, but that prevents me from sharing the common parts. What is the recommended approach in this case?
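One common layout for this case (a hedged sketch, not taken from the original thread: the package name mytool and the %{python3_sitelib} macro are assumptions that depend on your distribution) is to install the shared modules as a private Python package and keep only the entry point in %{_sbindir}:
%install
rm -fr $RPM_BUILD_ROOT
mkdir -p $RPM_BUILD_ROOT/%{_sbindir}
mkdir -p $RPM_BUILD_ROOT/%{python3_sitelib}/mytool
install -m 0755 runner.py $RPM_BUILD_ROOT/%{_sbindir}/runner
install -m 0644 constants.py helpers.py configuration.py $RPM_BUILD_ROOT/%{python3_sitelib}/mytool/
The runner would then use from mytool import constants (plus an empty __init__.py if you want a regular package rather than a namespace package), and the helper modules never land in /usr/sbin/.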

Pipenv lock: how to cache downloads for transfer to an offline machine

I am looking for a way to create a self-contained archive of all dependencies required to satisfy a Pipfile.lock. One way to achieve this would be to point PIPENV_CACHE_DIR at an empty temporary directory, run pipenv install, ship the contents of that directory, and use it on the offline machine.
E.g., this should work:
tmpdir=$(mktemp -d)
if [ -n "$offline" ]; then
    tar -xf pipenv_cache.tar -C "$tmpdir"
fi
pipenv --rm
PIPENV_CACHE_DIR="$tmpdir" PIP_CACHE_DIR="$tmpdir" pipenv install
if [ -n "$online" ]; then
    tar -cf pipenv_cache.tar -C "$tmpdir" .
fi
However, there are a number of problems with this script, one being that it can’t use the online machine’s cache, having to download everything every time instead.
The question is, is there a better way, that doesn’t involve a custom script? Maybe some documented community best practices?
Ideally, there would exist an interface like:
pipenv lock --create-archive <file_name>
pipenv install --from-archive <file_name>
With some Shell scripting work, wheelfreeze can be made to do it.
To create the archive (in a Bash shell):
(. "$(pipenv --venv)"/bin/activate && wheelfreeze <(pipenv lock -r))
And to install from the archive:
wheelfreeze/install "$(pipenv --venv)"
Disclosure: I created wheelfreeze while trying to solve the problem – “to scratch my own itch”, as the saying goes.
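For comparison, plain pip can approximate the ideal interface sketched in the question (an assumption-laden sketch, not part of the wheelfreeze answer: it exports the lock to a requirements file and assumes both machines share OS, architecture and Python version, since pip downloads platform-specific wheels):
# online machine: export the lock and download every dependency into a directory
pipenv lock -r > requirements.txt
pip download -r requirements.txt -d wheelhouse/
tar -cf wheelhouse.tar wheelhouse/ requirements.txt
# offline machine: install strictly from the archive, never from the network
tar -xf wheelhouse.tar
pip install --no-index --find-links=wheelhouse/ -r requirements.txt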

Jenkins doesn't include referenced files when building conda package

I am building a small conda package with Jenkins (linux) that should just:
Download a .zip from an external reference holding font files
Extract the .zip
Copy the font files to a specific folder
Build the package
The build runs successfully, but the package does not include the font files and is basically empty. My build.sh has:
mkdir $PREFIX\root\share\fonts
cp *.* $PREFIX\root\share\fonts
My meta.yaml source has:
source:
  url: <ftp server url>/next-fonts.zip
  fn: next-fonts.zip
In Jenkins I do:
mkdir build
conda build fonts
The console output is strange though at this part:
+ mkdir /var/lib/jenkins/conda-bld/fonts_1478708638575/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_prootsharefonts
+ cp Lato-Black.ttf Lato-BlackItalic.ttf Lato-Bold.ttf Lato-BoldItalic.ttf Lato-Hairline.ttf Lato-HairlineItalic.ttf Lato-Italic.ttf Lato-Light.ttf Lato-LightItalic.ttf Lato-Regular.ttf MyriadPro-Black.otf MyriadPro-Bold.otf MyriadPro-Light.otf MyriadPro-Regular.otf MyriadPro-Semibold.otf conda_build.sh /var/lib/jenkins/conda-bld/fonts_1478708638575/_b_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_prootsharefonts
BUILD START: fonts-1-1
Source cache directory is: /var/lib/jenkins/conda-bld/src_cache
Found source in cache: next-fonts.zip
Extracting download
Package: fonts-1-1
source tree in: /var/lib/jenkins/conda-bld/fonts_1478708638575/work/Fonts
number of files: 0
To me it seems the cp either doesn't complete or it copies to a wrong directory. Unfortunately, with the placeholder stuff I really can't decipher where exactly the fonts land when they are copied; all I know is that /work/Fonts contains no files, and thus nothing is included in the package. While typing, I also noted that /work/Fonts has Fonts starting with a capital F, while nowhere in the configuration or the scripts is there any instance of fonts starting with a capital F.
Any insight on what might go wrong?
mkdir $PREFIX\root\share\fonts
cp *.* $PREFIX\root\share\fonts
should be replaced with
mkdir $PREFIX/root/share/fonts
cp * $PREFIX/root/share/fonts
The build script was taken from another package that was built on Windows, and in adapting it I forgot to change the folder separators.
Additionally, mkdir on Linux does not create nested subfolder structures in a single call the way it does on Windows, so each level has to be created explicitly. So this
mkdir $PREFIX/root/
mkdir $PREFIX/root/share/
mkdir $PREFIX/root/share/fonts/
cp * $PREFIX/root/share/fonts/
was the ultimate solution to the problem.
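Equivalently, the whole tree can be created in one call with mkdir's -p flag, which creates any missing intermediate directories (a minor simplification of the fix above):
mkdir -p $PREFIX/root/share/fonts
cp * $PREFIX/root/share/fonts/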

Python sees uninstalled module

I have a really weird problem. I'm developing a Pyramid project, and it seems a non-existent module is found when I run pserve.
__init__.py of my main module:
...
# This works!!! The db/models directory does not even exist
from db.models import Base
# Also works - db/model exists
from db.model import Base
...
I even tried to recreate my virtual environment and it still finds it.
Any ideas?
From the comments it appears this was solved: it was leftover *.pyc files. This problem can come up a lot if you're moving between branches or if you find yourself frequently renaming/deleting files.
$ find mydir -name "*.pyc" -exec rm {} \; will recursively find all *.pyc files under the "mydir" directory (replace mydir with your directory name, of course) and delete them; use $ find . -name "*.pyc" -exec rm {} \; for the current working directory.
If you're using git for your project, add this script to your post-checkout hook to prevent differences between branches from getting in your way.
$ echo "find src -name "*.pyc" -exec rm {} \;" >> .git/hooks/post-checkout

"sys-package-mgr*: can't create package cache dir" when run python script with Jython

I want to run a Python script with Jython.
The result shows correctly, but at the same time there is a warning message: "sys-package-mgr*: can't create package cache dir".
How can I solve this problem?
Thanks in advance~~~
You can change the location of the cache directory to a place that you have read & write access to by setting the "python.cachedir" option when starting jython, e.g.:
jython -Dpython.cachedir=*your cachedir directory here*
or:
java -jar my_standalone_jython.jar -Dpython.cachedir=*your cachedir directory here*
You can read about the python.cachedir option here:
http://www.jython.org/archive/21/docs/registry.html
You can solve it in one of two ways:
1) by changing permissions to allow writing to the directory in the error message, or
2) by setting python.cachedir.skip = true
You can read this:
http://www.jython.org/jythonbook/en/1.0/ModulesPackages.html#module-search-path-compilation-and-loading
for further insights.
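For example, option 2) can be applied per invocation on the command line (a minimal illustration; the script name is hypothetical):
jython -Dpython.cachedir.skip=true myscript.py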
Making directories world-writable admittedly makes the problem "go away"; however, it introduces a huge security hole: anyone could place code in the now world-writable directory, and it would be executed in the users' Jython environment.
Setting the cachedir to skip would presumably cost some performance (why implement a caching scheme other than to improve performance?).
Instead I did the following:
I created a new group (in my case eclipse, but it could have been jython). I added the users of Jython to that group.
$ sudo groupadd eclipse
I then changed the group of my eclipse plugins folder and its children to 'eclipse'.
/opt/eclipse/plugins $ sudo chgrp -R eclipse *
Then I changed the group permissions as follows
/opt/eclipse/plugins $ sudo chmod -R g+w *
/opt/eclipse/plugins $ find * -type d -print | sudo xargs chmod g+s
This added group write permission and set the setgid bit on all directories recursively. The setgid bit causes newly created directories to inherit the group of their parent.
The final touch was to set the umask for the eclipse users to 007:
$ sudo vi /etc/login.defs
and change UMASK from 022 to 007:
UMASK 007
The easiest fix I found so far was to do:
$ sudo chmod -R 777 /opt/jython/cachedir
