I've just started using the ABlog plugin for sphinx to create a static-site blog.
Is it easy to change ablog deploy to deploy to a different location,
e.g. ../username.github.io/ instead of ./username.github.io/?
I have my ABlog project under source control in a git repository. Creating my username.github.io inside the current ABlog project creates a repo inside a repo, which causes errors (and I don't want to store the built site alongside the ABlog repository, although I could add a .gitignore).
For ABlog ≥ 0.8.0, yes
For ABlog 0.8.0 and above, you can use the -p option to specify a GitHub repo location other than the default (<location of conf.py>/<your username>.github.io):
ablog deploy -p /the/path/for/your/local/github/pages/repo
i.e., in your case
ablog deploy -p ../username.github.io/
How to install the most recent ABlog version
Until version 0.8.0 is available on PyPI, you can tell pip to install ABlog directly from git:
pip install git+https://github.com/abakan/ablog.git
For ABlog < 0.8.0, no
For versions prior to 0.8.0, the old version of this answer applies:
With the current implementation of the ABlog-internal function ablog_deploy, the location of the target repository cannot be changed: the string gitdir (holding the path where the local repository will be created) is set to <confdir>/<github_pages option>.github.io, but the github_pages option is also used to choose the remote repository (see https://github.com/abakan/ablog/blob/0ed765d95a23ad7dce48c755773ac60dd08cf319/ablog/commands.py#L338), so passing anything other than the GitHub account name will make the process fail.
Manipulating confdir would be difficult and would result in the configuration file not being found, probably along with a bunch of other side effects.
However, if you're willing to modify ABlog's source code, it would not be hard to adapt the assignment of gitdir as you see fit (perhaps by introducing another option) to produce the desired effect: e.g., make it use confdir if your new option hasn't been set, and use your new option when it has been set.
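For illustration, a minimal sketch of what such a patch could look like, assuming a hypothetical new option github_pages_dir (not part of ABlog; the surrounding variable names follow the linked commands.py but may differ across versions):
# sketch inside ablog_deploy; github_pages_dir is a hypothetical new option
if github_pages_dir:
    # an explicit target directory wins
    gitdir = os.path.abspath(github_pages_dir)
else:
    # current behaviour: <confdir>/<github_pages option>.github.io
    gitdir = os.path.join(confdir, '{0}.github.io'.format(github_pages))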
Related
One of our repositories relies on another first-party one. Because we're in the middle of a migration from a privately hosted GitLab to Azure, some of our dependencies aren't available in GitLab, which is where the problem comes up.
Our pyproject.toml file has this poetry group:
# pyproject.toml
[tool.poetry.group.cli.dependencies]
cli-parser = { path = "../cli-parser" }
In GitLab CI, this path cannot resolve, so we want to run the pipelines without this dependency. No code that actually relies on this library runs there, and no files from it are imported, which is why we factored it out into a separate Poetry group. In the .gitlab-ci.yml, that looks like this:
# .gitlab-ci.yml
install-poetry-requirements:
  stage: install
  script:
    - /opt/poetry/bin/poetry --version
    - /opt/poetry/bin/poetry install --without cli --sync
As shown, Poetry is instructed to omit the cli dependency group. However, it still crashes on it:
# Gitlab CI logs
$ /opt/poetry/bin/poetry --version
Poetry (version 1.2.2)
$ /opt/poetry/bin/poetry install --without cli --sync
Directory ../cli-parser does not exist
If I comment out the cli-parser line in pyproject.toml, it will install successfully (and the pipeline passes), but we cannot do that because we need it in production.
I can't find another way to tell poetry to omit this library. Is there something I missed, or is there a workaround?
Good, Permanent Solution
As finswimmer mentioned in a comment, Poetry 1.4 should handle this perfectly well. If you're reading this after Poetry 1.4 has been released, the configuration in the question should work as-is.
Hacky, Bad, Temporary Solution
Since the original problem was in GitLab CI pipelines, I used a workaround there. Immediately before the install command, I run the following:
sed -i '/cli-parser/d' pyproject.toml
This modifies the project's pyproject.toml in place to remove the line that references my module, so Poetry never parses the dependency.
See the sed man page for more information on how this works.
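In context, the CI job from earlier would then look something like this (a sketch, assuming the same job layout as above):
# .gitlab-ci.yml
install-poetry-requirements:
  stage: install
  script:
    - sed -i '/cli-parser/d' pyproject.toml
    - /opt/poetry/bin/poetry --version
    - /opt/poetry/bin/poetry install --without cli --sync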
Keep in mind that if your pipeline has any permanent effects, such as turning your package into an installable wheel or producing build artifacts used elsewhere, this WILL break your setup.
I have a project where I manage the version through git tags.
Then I use setuptools_scm to get this information in my setup.py; it also generates a file (_version.py) that gets included when generating the wheel for pip.
This file is not tracked by git since:
- it contains the same information that can be gathered from git;
- tracking it would create a circular situation where building the wheel modifies the version, which changes the sources, so a new version would be generated.
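For reference, the setup.py wiring described above might look like this (a sketch; the write_to path is an assumption about the package layout):
# setup.py
from setuptools import setup

setup(
    name='numeral',
    # setuptools_scm derives the version from git tags and writes the
    # untracked _version.py file mentioned above
    use_scm_version={'write_to': 'numeral/_version.py'},
    setup_requires=['setuptools_scm'],
)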
Now, when I build the documentation, it becomes natural to fetch this version from _version.py and this all works well locally.
However, when I try to do this within ReadTheDocs, the building of the documentation fails because _version.py is not tracked by git, so ReadTheDocs does not find it when fetching the sources from the repository.
EDIT: I have tried the method proposed in the duplicate, which is the same as what the setuptools_scm documentation indicates, i.e. using this in docs/conf.py:
from pkg_resources import get_distribution
__version__ = get_distribution('numeral').version
... # I use __version__ to define Sphinx variables
but I get:
pkg_resources.DistributionNotFound: The 'numeral' distribution was not found and is required by the application
(Again, building the documentation locally works correctly.)
How could I solve this issue without resorting to maintaining the version number in two places?
Eventually the issue was that ReadTheDocs does not have the option to build my package enabled by default, while I was expecting that to happen.
All I had to do was to enable "Install Project" in the Advanced Settings / Default Settings.
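On newer ReadTheDocs projects, the same effect can be declared in the configuration file instead of the web UI (a sketch, assuming the v2 config format):
# .readthedocs.yaml
version: 2
python:
  install:
    - method: pip
      path: .  # install the project itself so _version.py gets generated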
I've got a need to create and ship conda envs that list packages that need to remain private. It would be especially handy to list dependencies using an URL to a (company internal) GitLab instance.
Is there a way to register dependencies with conda using a repo URL? Is there also some other way to include Python packages you have a source distribution for, but cannot be hosted on a regular channel?
Thanks.
If you know beforehand what needs to remain private, you can ship direct-reference eggs, use zoned index-urls and extra-index-urls, or use conda's meta build info, like here:
# requirements.txt
gevent
publicthing==1.2
someother==0.1
# private packages
file://package/egg/here
-e git+ssh://priv.gitlab.some.org/some/privpack.git#egg=privpack
--extra-index-url https://build.priv.gitlab.some.org/some/pypi/simple
I'd guess "private" here means sdist/dist build artifacts (tars, eggs, wheels) reachable only through a URI/URL on a local network. Where the package is hosted should be indicator enough for labeling something "private": the build artifacts either are or are not available through some availability mechanism (network location, building locally, shipped binaries, etc.).
Using PyPI/pip:
https://pip.readthedocs.io/en/1.1/requirements.html#requirements-file-format
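The extra-index-url can also live in pip's configuration instead of requirements.txt (a sketch; the host reuses the hypothetical URL from above):
# ~/.pip/pip.conf (pip.ini on Windows)
[global]
extra-index-url = https://build.priv.gitlab.some.org/some/pypi/simple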
Conda meta build info:
source:
  - url: https://build.priv.gitlab.some.org/some/pypi/simple/privpack/a.tar.bz2
    folder: stuff
  - url: https://build.priv.gitlab.some.org/some/pypi/simple/privpack/b.tar.bz2
    folder: stuff
https://conda.io/docs/user-guide/tasks/build-packages/define-metadata.html
Examples:
https://github.com/conda/conda-recipes
https://github.com/conda/conda-recipes/blob/c2eb600f8545cd21aa9e50a8bb8a81df7fd3c915/r-packages/r-yaml/meta.yaml#L10
https://github.com/conda/conda-recipes/blob/a796713805ac8eceed191c0cb475b51f4d00718c/python/pyserial/meta.yaml#L5
https://conda.io/docs/user-guide/tasks/build-packages/define-metadata.html#source-from-git
https://conda.io/docs/user-guide/tasks/build-packages/define-metadata.html#source-from-a-local-path
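Following the "source from git" link above, a meta.yaml source section can also point straight at an internal repository (a sketch; host and revision are assumptions):
source:
  git_url: ssh://git@priv.gitlab.some.org/some/privpack.git
  git_rev: v0.1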
Related:
https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/config-client#kerberos-configuration
https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/kerberos-example
https://docs.anaconda.com/anaconda-repository/admin-guide/install/config/config-client#pip-configuration
https://pip.readthedocs.io/en/1.1/requirements.html#git
I tried to uninstall a module on heroku with:
heroku run bin/python bin/pip uninstall whatever
Pip shows the module in the /app tree, then claims to have uninstalled the module, but running the same command again shows it installed in the same location in the /app tree.
Is there a way to get pip uninstall to succeed?
heroku run instantiates a new dyno and runs the specified command in that dyno only. Dynos are ephemeral, which is why the results of the pip uninstall don't stick.
Updated 2013-09-30: the current way to clear the virtualenv seems to be to specify a different Python runtime version in runtime.txt, as stated on GitHub and in Heroku's Dev Center reference.
Be aware that Heroku currently "only endorses and supports the use of Python 2.7.4 and 3.3.2", so unless your application supports both of those, you may want to test it with the runtime you plan to switch to (currently available at http://envy-versions.s3.amazonaws.com/$PYTHON_VERSION.tar.bz2 -- though it shouldn't be an issue to switch e.g. between 2.7.4 and 2.7.3 in most cases).
Thanks @Jesse for your up-to-date answer, and to the commenters who made me aware of the issue.
Was up-to-date around November 2012 (I haven't since updated the linked buildpack; my pull request was closed and the CLEAN_VIRTUALENV feature was dropped at some point by the official buildpack):
As David explained, you cannot pip uninstall one package but you can purge and reinstall the whole virtualenv. Use the user-env-compile lab feature with the CLEAN_VIRTUALENV option to purge the virtualenv:
heroku labs:enable user-env-compile
heroku config:add CLEAN_VIRTUALENV=true
Currently this won't work because of a bug. You'll need to use my fork of the buildpack until this gets fixed upstream (the pull request was closed):
heroku config:add BUILDPACK_URL=https://github.com/blaze33/heroku-buildpack-python.git
Now push your new code and you'll notice that the whole virtualenv gets reinstalled.
Andrey's answer no longer works since March 23, 2012: the new-style virtualenv commit moved the virtualenv from /app to /app/.heroku/venv, but the purge branch wasn't updated to catch up, so you end up with a virtualenv that is not in PYTHONHOME.
To avoid reinstalling everything after each push, disable the option:
heroku labs:disable user-env-compile
heroku config:remove CLEAN_VIRTUALENV BUILDPACK_URL
There is now a simpler way to clear the pip cache. Just change the runtime environment, for example from 'python-2.7.3' to 'python-2.7.2', or vice versa.
To do this, add a file called runtime.txt to the root of your repository that contains just the runtime string (as shown above).
For this to work you need to have turned on the Heroku labs user-env-compile feature. See https://devcenter.heroku.com/articles/labs-user-env-compile
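For example, to pin a runtime (any supported runtime string works the same way):
echo "python-2.7.2" > runtime.txt
git add runtime.txt
git commit -m "switch runtime to clear the virtualenv"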
By default virtualenv is cached between deploys.
To avoid caching of packages you can run:
heroku config:add BUILDPACK_URL=git@github.com:heroku/heroku-buildpack-python.git#purge
That way everything will be built from scratch after you push some changes. To enable the caching just remove the BUILDPACK_URL config variable.
Now to uninstall specific package(s):
Remove the corresponding record(s) from the requirements.txt;
Commit and push the changes.
Thanks to Lincoln from Heroku Support Team for the clarifications.
I've created some fabfile recipes for Maxime's and Jesse's answers that allow re-installing the requirements with one fab command: https://gist.github.com/littlepea/5096814 (look at the docstrings for explanation and examples).
For Maxime's answer I've created a task 'heroku_clean' (alias 'hc'); using it looks something like this:
fab heroku_clean
Or using an alias and specifying a heroku app:
fab hc:app=myapp
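A sketch of what such a task might look like (the linked gist is the authoritative version; this assumes the fabric 1.x API and wraps Maxime's commands from above):
# fabfile.py
from fabric.api import local, task

@task(alias='hc')
def heroku_clean(app=''):
    # pass app=myapp to target a specific Heroku app
    opt = '--app {0}'.format(app) if app else ''
    local('heroku labs:enable user-env-compile {0}'.format(opt))
    local('heroku config:add CLEAN_VIRTUALENV=true {0}'.format(opt))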
For Jesse's answer I've created a task 'heroku_runtime' (alias 'hr') that sets the Heroku Python runtime and commits runtime.txt (creating it if it didn't exist):
fab heroku_runtime:2.7.2
If a runtime version is not passed, it will just toggle between 2.7.2 and 2.7.3, so the easiest way to change and commit the runtime is:
fab hr
Then you can just deploy (push to heroku origin) your app and the virtualenv will be rebuilt. I've also added a 'heroku_deploy' task ('hd') that I use for heroku push and scale, and that can be used together with the 'heroku_runtime' task. This is my preferred method of deploying and rebuilding the virtualenv: everything happens in one command and I can choose when to rebuild it. I don't like doing that on every push, as Maxime's answer suggests, because it can take a long time:
fab hd:runtime=yes
This is equivalent to:
fab heroku_runtime
fab heroku_deploy
I'm trying to install python to a 1and1.com shared linux hosting account.
There is a nice guide at this address:
http://www.jacksinner.com/wordpress/?p=3
However I get stuck at step 6 which is: "make install". The error I get is as follows:
(uiserver):u58399657:~/bin/python > make install
Creating directory /~/bin/python/bin
/usr/bin/install: cannot create directory `/~': Permission denied
Creating directory /~/bin/python/lib
/usr/bin/install: cannot create directory `/~': Permission denied
make: *** [altbininstall] Error 1
I look forward to some suggestions.
UPDATE:
Here is an alternative version of the configure step to fix the above error, however this time I'm getting a different error:
(uiserver):u58399657:~ > cd Python-2.6.3
(uiserver):u58399657:~/Python-2.6.3 > ./configure -prefix=~/bin/python
configure: error: expected an absolute directory name for --prefix: ~/bin/python
(uiserver):u58399657:~/Python-2.6.3 >
The short version is: it looks like you've set the prefix to /~/bin/python instead of simply ~/bin/python. This is typically done with a --prefix=path argument to configure or some similar script. Try fixing this and it should then work. I'd suggest actual commands, but it's been a while (hence my request to see what you've been typing).
Because of the above mistake, it is trying to install to a subdirectory called ~ of the root directory (/), instead of your home directory (~).
EDIT: Looking at the linked tutorial, this step is incorrect:
./configure --prefix=/~/bin/python
It should instead read:
./configure --prefix=~/bin/python
Note, this is addressed in the very first comment to that post.
EDIT 2: It seems that whatever shell you are using isn't expanding the path properly. Try this instead:
./configure --prefix=$HOME/bin/python
Failing even that, run echo $HOME and substitute that for $HOME above. It should look something like --prefix=/home/mscharley/bin/python
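With the prefix fixed, the remaining steps from the tutorial should go through (the usual sequence):
./configure --prefix=$HOME/bin/python
make
make install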
You really should consider using the AS binary package from ActiveState for this kind of thing. Download the .tar.gz file, unpack it, change to the python directory, and run the install shell script. This installs a completely standalone version of Python without touching any of the system stuff. You don't need root permissions and you don't need to mess around with make.
Of course, maybe you are a C/C++ developer, make is a familiar tool, and you are experienced at building packages from source. But if any of those is not true, then it is worth your while to try out the ActiveState AS binary package.
I was facing the same issue with 1and1 shared hosting (the tutorial linked above is no longer available). I followed the Installing Python modules on Hostgator shared hosting using VirtualEnv tutorial with only one change for 1and1, namely:
Instead of:
> python virtualenv-1.11.6/virtualenv.py /home1/yourusername/public_html/yourdomain.com/env --no-site-packages
I used:
> python virtualenv-1.11.6/virtualenv.py /kunden/homepages/29/yourusername/htdocs/env --no-site-packages
The rest of the instructions worked and I successfully installed VirtualEnv.
For example: 1and1 does not provide the Requests module, and pip cannot normally be used on shared hosting. After installing VirtualEnv, the pip command can be used, and at the end >>> import requests works successfully.
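For instance, after the install you would activate the environment and use its pip (paths reuse the ones above):
source /kunden/homepages/29/yourusername/htdocs/env/bin/activate
pip install requests
python -c "import requests; print(requests.__version__)"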