I have a package which is distributed by asking people to run:
pip install git+"https://gitlab.com/project/mypackage.git#master"
I would like to have a Python script which, when run, flags that the local package is older than the remote one (I want to avoid automatically updating it).
I was thinking of doing something like the below, say check_version.py:
from pkg_resources import parse_version, get_distribution

local_version = parse_version(get_distribution("mypackage").version)

def foo():
    # GET GIT REPO PACKAGE VERSION
    return repo_version

repo_version = foo()

if local_version < repo_version:
    print("Local Installation is out of date.")
else:
    print("Local Installation is up to date.")
However, I am not sure how to query the git repo to get the version number. Basically, the foo function still needs to be implemented; any ideas how to do that?
Create a simple text file VERSION in your repository that contains only the current version. Every file at GitLab has a raw URL; your file will be available at https://gitlab.com/project/mypackage/raw/master/VERSION. Read the content of that URL in foo using httplib/http.client, urllib, or requests, whatever you prefer.
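For example, here is a minimal sketch of foo using urllib from the standard library (Python 3); it fetches the raw VERSION file and parses it with the same parse_version used in the question:

from urllib.request import urlopen
from pkg_resources import parse_version

VERSION_URL = "https://gitlab.com/project/mypackage/raw/master/VERSION"

def foo():
    # fetch the VERSION file from the repository's raw URL
    with urlopen(VERSION_URL) as response:
        return parse_version(response.read().decode("utf-8").strip())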
I have written a python script which depends on paramiko to work. The system that will run my script has the following limitations:
No internet connectivity (so it can't download dependencies on the fly).
Volumes are mounted with 'noexec' (so I cannot run my code as a binary file generated with something like PyInstaller)
End-user cannot be expected to install any dependencies.
Vanilla python is installed (without paramiko)
Python version is 2.7.5
Pip is not installed either and cannot be installed on the box
I however, have access to pip on my development box (if that helps in any way).
So, what is the way to deploy my script so that I am able to provide it to the end-user with the required dependencies, i.e. paramiko (and paramiko's sub-dependencies), so that the user is able to run the script out of the box?
I have already tried the pyinstaller 'one folder' approach but then faced the 'noexec' issue. I have also tried to directly copy paramiko (and sub-dependencies of paramiko) to a place from where my script is able to find it with no success.
pip is usually installed along with the Python installation. You could use it to install the dependencies on the machine as follows:
import os
import subprocess
import sys

# paramikoFileName: name of the already downloaded paramiko tar-ball/wheel,
# which you'll have to ship with your scripts (adjust to the file you ship)
paramikoFileName = "paramiko.tar.gz"

def selfInstallParamiko():
    # assuming the paramiko tar-ball/wheel is in the same directory as this script
    currentDir = os.path.abspath(os.path.dirname(__file__))
    paramikoPath = os.path.join(currentDir, paramikoFileName)
    print("\nInstalling {} ...".format(paramikoPath))

    # check which pip to use (pip for Python 2, pip3 for Python 3)
    pipName = "pip"
    if sys.version_info[0] >= 3:
        pipName = "pip3"

    p = subprocess.Popen("{} install --user {}".format(pipName, paramikoPath).split())
    p.communicate()
    if p.returncode != 0:
        print("Unable to self-install {}\n".format(paramikoFileName))
        sys.exit(1)

    # Needed, because sometimes the Windows command shell does not pick up changes well:
    print("\n\nPython paramiko module installed successfully.\n"
          "Please start the script again from a new command shell\n\n")
    sys.exit()
You can invoke it when your script starts and an ImportError occurs:
# Try to self-install on import failure:
try:
    import paramiko
except ImportError:
    selfInstallParamiko()
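To produce that tar-ball/wheel (and paramiko's sub-dependencies) in the first place, something like this should work on your development box; pip download stores the archives in the given directory without installing them, and you ship that directory next to the script (depending on the target box you may also need flags such as --platform or --python-version):

pip download paramiko -d ./deps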
I am trying to use the speedtest-cli API. I copied part of the code from the official wiki (and removed unused stuff):
import speedtest
s = speedtest.Speedtest()
s.get_best_server()
s.download()
In the Python console I get everything OK:
>>> import speedtest
>>> s = speedtest.Speedtest()
>>> s.get_best_server()
{HIDDEN}
>>> s.download()
37257579.09084724
But when I create .py file and run it I get:
AttributeError: module 'speedtest' has no attribute 'SpeedTest'
Thanks
As mentioned in the comments, you have a file with the same name and it is conflicting with the import. Since you have moved the file, restarting the console should work.
The code below will also extract the results into a dictionary and make it possible to access the results.
import speedtest
s = speedtest.Speedtest()
s.get_best_server()
s.download()
s.upload()
res = s.results.dict()
print(res["download"], res["upload"], res["ping"])
I faced the same issue because I had installed both speedtest and speedtest-cli.
Using pip uninstall speedtest worked for me.
I faced the same issue because the name of my file was speedtest.py. When I changed the name to something new, it worked fine for me.
import speedtest
wifi = speedtest.Speedtest()
print("Wifi Download Speed is ", wifi.download())
print("Wifi Upload Speed is ", wifi.upload())
OK, here is the solution to this problem:
1. Uninstall speedtest and speedtest-cli if you have both installed.
2. Install only speedtest-cli: pip install speedtest-cli. This will install the package in the main Python environment.
3. Press CTRL+P, search for "select interpreter", then choose your main Python, not any virtual environment.
4. It will work fine.
Reason for this error: you installed the package in your main Python environment, but you usually open your IDE and run a virtual environment.
Try checking that speedtest is imported properly:
import speedtest
print(dir(speedtest))
It should display the properties of speedtest.
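If a local file is shadowing the real module, the module's path will give it away; a quick check (the comment describes what you would expect to see):

import speedtest
print(speedtest.__file__)  # should point into site-packages, not into your project folder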
Is there any way to get pip to print the config it will attempt to use? For debugging purposes it would be very nice to know that:
config.ini files are in the correct place and pip is finding them.
The precedence of the config settings is treated in the way one would expect from the docs.
For 10.0.x and higher
There is a new pip config command to list the current configuration values:
pip config list
(As pointed out by wmaddox in the comments) To get the list of locations where pip looks for config files:
pip config list -v
Pre 10.0.x
You can start a Python console and do the following. (If you have a virtualenv, don't forget to activate it first.)
from pip import create_main_parser
parser = create_main_parser()
# print all config files that it will try to read
print(parser.files)
# reads parser files that are actually found and prints their names
print(parser.config.read(parser.files))
create_main_parser is the function that creates the parser which pip uses to read params from the command line (optparse) and to load configs (configparser).
Possible file names for configurations are generated in get_config_files, including the PIP_CONFIG_FILE environment variable if it is set.
parser.config is an instance of RawConfigParser, so all the file names generated in get_config_files are passed to parser.config.read:
Attempt to read and parse a list of filenames, returning a list of filenames which were successfully parsed. If filenames is a string, it is treated as a single filename. If a file named in filenames cannot be opened, that file will be ignored. This is designed so that you can specify a list of potential configuration file locations (for example, the current directory, the user’s home directory, and some system-wide directory), and all existing configuration files in the list will be read. If none of the named files exist, the ConfigParser instance will contain an empty dataset. An application which requires initial values to be loaded from a file should load the required file or files using read_file() before calling read() for any optional files:
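A quick standard-library illustration of that read() behavior (the file names here are just examples):

from configparser import ConfigParser

parser = ConfigParser()
# read() silently skips files that do not exist and returns the
# names of the files it actually parsed, in order
found = parser.read(['/etc/pip.conf', '/nonexistent/pip.conf'])
print(found)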
From how I see it, your question can be interpreted in three ways:
What is the configuration of the pip executable?
There is quite extensive documentation for the configuration options supported by pip, see here: https://pip.pypa.io/en/stable/user_guide/#configuration
What is the configuration that pip uses when configuring and subsequently building code required by a Python module?
This is specified by the package that is being installed. The package maintainer is responsible for producing a configuration script. For example, Numpy has a Configuration class (https://github.com/numpy/numpy/blob/master/numpy/distutils/misc_util.py) that they use to configure their Cython build.
What are the current modules installed with pip so I can reproduce a specific environment configuration?
This is easy, pip freeze > requirements.txt. This will produce a file of all currently installed pip modules along with their exact versions. You can then do pip install -r requirements.txt to reproduce that exact environment configuration on another machine.
I hope this helps.
You can run pip in pdb. Here's an example inside ipython:
>>> import pip
>>> import pdb
>>> pdb.run("pip.main()", globals())
(Pdb) s
--Call--
> /usr/lib/python3.5/site-packages/pip/__init__.py(197)main()
-> def main(args=None):
(Pdb) b /usr/lib/python3.5/site-packages/pip/baseparser.py:146
Breakpoint 1 at /usr/lib/python3.5/site-packages/pip/baseparser.py:146
(Pdb) c
> /usr/lib/python3.5/site-packages/pip/baseparser.py(146)__init__()
-> if self.files:
(Pdb) p self.files
['/etc/xdg/pip/pip.conf', '/etc/pip.conf', '/home/andre/.pip/pip.conf', '/home/andre/.config/pip/pip.conf']
The only trick here was looking up the path of the baseparser (and knowing that the files are in there). If you don't know this already you can simply step through the program or read the source. This type of exploration should work for most Python programs.
I'm developing my own module in python 2.7. It resides in ~/Development/.../myModule instead of /usr/lib/python2.7/dist-packages or /usr/lib/python2.7/site-packages. The internal structure is:
/project-root-dir
    /server
        __init__.py
        service.py
        http.py
    /client
        __init__.py
        client.py
client/client.py includes PyCachedClient class. I'm having import problems:
project-root-dir$ python
Python 2.7.2+ (default, Jul 20 2012, 22:12:53)
[GCC 4.6.1] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from server import http
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "server/http.py", line 9, in <module>
from client import PyCachedClient
ImportError: cannot import name PyCachedClient
I didn't set PYTHONPATH to include my project-root-dir, so when server.http tries to import client.PyCachedClient, it tries to load it from a relative path and fails. My question is: how should I set all the paths/settings in a good, pythonic way? I know I can run export PYTHONPATH=... in the shell each time I open a console and try to run my server, but I guess it's not the best way. If my module were installed via PyPI (or something similar), it would be installed in a /usr/lib/python... path and would be loaded automatically.
I'd appreciate tips on best practices in python module development.
My Python development workflow
This is a basic process to develop Python packages that incorporates what I believe to be the best practices in the community. It's basic - if you're really serious about developing Python packages, there's still a bit more to it, and everyone has their own preferences, but it should serve as a template to get started and then learn more about the pieces involved. The basic steps are:
Use virtualenv for isolation
setuptools for creating an installable package and managing dependencies
python setup.py develop to install that package in development mode
virtualenv
First, I would recommend using virtualenv to get an isolated environment to develop your package(s) in. During development, you will need to install, upgrade, downgrade and uninstall dependencies of your package, and you don't want
your development dependencies to pollute your system-wide site-packages
your system-wide site-packages to influence your development environment
version conflicts
Polluting your system-wide site-packages is bad, because any package you install there will be available to all Python applications you installed that use the system Python, even though you just needed that dependency for your small project. And it was just installed in a new version that overrode the one in the system wide site-packages, and is incompatible with ${important_app} that depends on it. You get the idea.
Having your system wide site-packages influence your development environment is bad, because maybe your project depends on a module you already got in the system Python's site-packages. So you forget to properly declare that your project depends on that module, but everything works because it's always there on your local development box. Until you release your package and people try to install it, or push it to production, etc... Developing in a clean environment forces you to properly declare your dependencies.
So, a virtualenv is an isolated environment with its own Python interpreter and module search path. It's based on a Python installation you previously installed, but isolated from it.
To create a virtualenv, install the virtualenv package by installing it to your system wide Python using easy_install or pip:
sudo pip install virtualenv
Notice this will be the only time you install something as root (using sudo), into your global site-packages. Everything after this will happen inside the virtualenv you're about to create.
Now create a virtualenv for developing your package:
cd ~/pyprojects
virtualenv --no-site-packages foobar-env
This will create a directory tree ~/pyprojects/foobar-env, which is your virtualenv.
To activate the virtualenv, cd into it and source the bin/activate script:
~/pyprojects $ cd foobar-env/
~/pyprojects/foobar-env $ . bin/activate
(foobar-env) ~/pyprojects/foobar-env $
Note the leading dot ., that's shorthand for the source shell command. Also note how the prompt changes: (foobar-env) means you're inside the activated virtualenv (and always will need to be for the isolation to work). So activate your env every time you open a new terminal tab or SSH session etc.
If you now run python in that activated env, it will actually use ~/pyprojects/foobar-env/bin/python as the interpreter, with its own site-packages and isolated module search path.
A setuptools package
Now for creating your package. Basically you'll want a setuptools package with a setup.py to properly declare your package's metadata and dependencies. You can do this on your own by following the setuptools documentation, or create a package skeleton using Paster templates. To use Paster templates, install PasteScript into your virtualenv:
pip install PasteScript
Let's create a source directory for our new package to keep things organized (maybe you'll want to split up your project into several packages, or later use dependencies from source):
mkdir src
cd src/
Now for creating your package, do
paster create -t basic_package foobar
and answer all the questions in the interactive interface. Most are optional and can simply be left at the default by pressing ENTER.
This will create a package (or more precisely, a setuptools distribution) called foobar. This is the name that
people will use to install your package using easy_install or pip install foobar
the name other packages will use to depend on yours in setup.py
what it will be called on PyPI
Inside, you almost always create a Python package (as in "a directory with an __init__.py") that's called the same. That's not required; the name of the top-level Python package can be any valid package name, but it's a common convention to name it the same as the distribution. And that's why it's important, but not always easy, to keep the two apart. Because the top-level Python package name is what
people (or you) will use to import your package using import foobar or from foobar import baz
So if you used the paster template, it will already have created that directory for you:
cd foobar/foobar/
Now create your code:
vim models.py
models.py
class Page(object):
"""A dumb object wrapping a webpage.
"""
def __init__(self, content, url):
self.content = content
self.original_url = url
def __repr__(self):
return "<Page retrieved from '%s' (%s bytes)>" % (self.original_url, len(self.content))
And a client.py in the same directory that uses models.py:
client.py
import requests
from foobar.models import Page
url = 'http://www.stackoverflow.com'
response = requests.get(url)
page = Page(response.content, url)
print page
Declare the dependency on the requests module in setup.py:
install_requires=[
# -*- Extra requirements: -*-
'setuptools',
'requests',
],
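For reference, the generated setup.py boils down to roughly the following minimal sketch (the metadata values are placeholders):

from setuptools import setup, find_packages

setup(
    name='foobar',
    version='0.1',
    packages=find_packages(),
    install_requires=[
        # -*- Extra requirements: -*-
        'setuptools',
        'requests',
    ],
)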
Version control
src/foobar/ is the directory you'll now want to put under version control:
cd src/foobar/
git init
vim .gitignore
.gitignore
*.egg-info
*.py[co]
git add .
git commit -m 'Create initial package structure.'
Installing your package as a development egg
Now it's time to install your package in development mode:
python setup.py develop
This will install the requests dependency and your package as a development egg. So it's linked into your virtualenv's site-packages, but still lives at src/foobar where you can make changes and have them be immediately active in the virtualenv without re-installing your package.
Now for your original question, importing using relative paths: My advice is, don't do it. Now that you've got a proper setuptools package, that's installed and importable, your current working directory shouldn't matter any more. Just do from foobar.models import Page or similar, declaring the fully qualified name where that object lives. That makes your source code much more readable and discoverable, for yourself and other people that read your code.
You can now run your code by doing python client.py from anywhere inside your activated virtualenv. python src/foobar/foobar/client.py works just as well; your package is properly installed and your working directory doesn't matter any more.
If you want to go one step further, you can even create a setuptools entry point for your CLI scripts. This will create a bin/something script in your virtualenv that you can run from the shell.
setuptools console_scripts entry point
setup.py
entry_points='''
# -*- Entry points: -*-
[console_scripts]
run-foobar = foobar.main:run_foobar
''',
client.py
def run_client():
# ...
main.py
from foobar.client import run_client
def run_foobar():
run_client()
Re-install your package to activate the entry point:
python setup.py develop
And there you go, bin/run-foobar.
Once you (or someone else) installs your package for real, outside the virtualenv, the entry point will be in /usr/local/bin/run-foobar or somewhere similar, where it will automatically be in $PATH.
Further steps
Creating a release of your package and uploading it to PyPI, for example using zest.releaser
Keeping a changelog and versioning your package
Learn about declaring dependencies
Learn about Differences between distribute, distutils, setuptools and distutils2
Suggested reading:
The Hitchhiker’s Guide to Packaging
The pip cookbook
So, you have two packages, the first with modules named:
server # server/__init__.py
server.service # server/service.py
server.http # server/http.py
The second with module names:
client # client/__init__.py
client.client # client/client.py
If you want to assume both packages are on your import path (sys.path), and the class you want is in client/client.py, then in your server you have to do:
from client.client import PyCachedClient
You asked for a symbol out of client, not client.client, and from your description, that isn't where that symbol is defined.
I personally would consider making this one package (i.e., putting an __init__.py in the folder one level up, and giving it a suitable Python package name), and having client and server be sub-packages of that package. Then (a) you could do relative imports if you wanted to (from ..client.client import something), and (b) your project would be more suitable for redistribution, not putting two very generic package names at the top level of the Python module hierarchy. A sketch of that layout follows below.
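Here is what that single-package layout could look like (mypackage is a placeholder name):

/project-root-dir
    /mypackage
        __init__.py
        /server
            __init__.py
            service.py
            http.py
        /client
            __init__.py
            client.py

With this structure, server/http.py can use either from mypackage.client.client import PyCachedClient or the relative form from ..client.client import PyCachedClient.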
SITUATION:
I have a python library, which is controlled by git, and bundled with distutils/setuptools. And I want to automatically generate version number based on git tags, both for setup.py sdist and alike commands, and for the library itself.
For the first task I can use git describe or alike solutions (see How can I get the version defined in setup.py (setuptools) in my package?).
And when, for example, I am in a tag '0.1' and call for 'setup.py sdist', I get 'mylib-0.1.tar.gz'; or 'mylib-0.1-3-abcd.tar.gz' if I altered the code after tagging. This is fine.
THE PROBLEM IS:
The problem comes when I want to have this version number available for the library itself, so it could send it in User-Agent HTTP header as 'mylib/0.1-3-adcd'.
If I add the setup.py version command as in How can I get the version defined in setup.py (setuptools) in my package?, then this version.py is generated AFTER the tag is made, since it uses the tag as a value. But in this case I need to make one more commit after the version tag is made to make the code consistent. Which, in turn, requires a new tag for further bundling.
THE QUESTION IS:
How to break this circle of dependencies (generate-commit-tag-generate-commit-tag-...)?
You could also reverse the dependency: put the version in mylib/__init__.py, parse that file in setup.py to get the version parameter, and use git tag $(setup.py --version) on the command line to create your tag.
git tag -a v$(python setup.py --version) -m 'description of version'
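A minimal sketch of that reversed flow (mylib is the placeholder name from the question):

# mylib/__init__.py
__version__ = '0.1'

# elsewhere in the library, when building the User-Agent header
import mylib
headers = {'User-Agent': 'mylib/%s' % mylib.__version__}

setup.py then parses __version__ out of mylib/__init__.py (a regex-based example of that parsing appears in a later answer), and the tag is created from whatever setup.py --version reports.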
Is there anything more complicated you want to do that I haven’t understood?
A classic issue when toying with keyword expansion ;)
The key is to realize that your tag is part of the release management process, not part of the development (and its version control) process.
In other words, you cannot include release-management data in a development repository, because of the loop you illustrate in your question.
You need, when generating the package (which is the "release management part"), to write that information in a file that your library will look for and use (if said file exists) for its User-Agent HTTP header.
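A sketch of the library side, assuming the release step writes a RELEASE-VERSION file next to the package (the file name and the 'dev' fallback are arbitrary choices):

import os

def get_version(default='dev'):
    # RELEASE-VERSION is written by the release/packaging step;
    # during development the file is absent, so fall back to a default
    path = os.path.join(os.path.dirname(__file__), 'RELEASE-VERSION')
    try:
        with open(path) as f:
            return f.read().strip()
    except IOError:
        return default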
Since this topic is still alive and sometimes gets to search results, I would like to mention another solution which first appeared in 2012 and now is more or less usable:
https://github.com/warner/python-versioneer
It works in a different way from all the solutions mentioned above: you add git tags manually, and the library (and setup.py) reads the tags and builds the version string dynamically.
The version string includes the latest tag, the distance from that tag, the current commit hash, "dirtiness", and some other info. It has a few different version formats.
But it still has no branch name for so-called "custom builds", and the commit distance can sometimes be confusing when two branches are based on the same commit, so it is better to tag & release only one selected branch (master).
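The wiring, roughly as described in versioneer's documentation, looks like this in setup.py (after running versioneer's install step, which generates a _version.py inside your package):

import versioneer
from setuptools import setup

setup(
    name='mylib',
    version=versioneer.get_version(),
    cmdclass=versioneer.get_cmdclass(),
)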
Eric's idea was the simple way to go; just in case this is useful, here is the code I used (Flask's team did it this way):
import re
import ast

from setuptools import setup

_version_re = re.compile(r'__version__\s+=\s+(.*)')

with open('app_name/__init__.py', 'rb') as f:
    version = str(ast.literal_eval(_version_re.search(
        f.read().decode('utf-8')).group(1)))

setup(
    name='app-name',
    version=version,
    # ...
)
If you found versioneer excessively convoluted, you can try bump2version.
Just add a simple bumpversion configuration file (.bumpversion.cfg) in the root of your library. This file indicates where in your repository there are strings storing the version number. Then, to update the version in all the indicated places for a minor release, just type:
bumpversion minor
Use patch or major if you want to release a patch or a major.
That is not all bumpversion can do. There are other flags and config options, such as automatically tagging the repository, for which you can check the official documentation.
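A minimal .bumpversion.cfg sketch (the version number and the file sections are placeholders for your own files):

[bumpversion]
current_version = 0.1.0
commit = True
tag = True

[bumpversion:file:setup.py]

[bumpversion:file:mylib/__init__.py]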
Following OGHaza's solution in a similar SO question I keep a file _version.py that I parse in setup.py. With the version string from there, I git tag in setup.py. Then I set the setup version variable to a combination of version string plus the git commit hash. So here is the relevant part of setup.py:
from setuptools import setup, find_packages
from codecs import open
from os import path
import subprocess
import re, os

here = path.abspath(path.dirname(__file__))

# parse the version string out of _version.py
VERSIONFILE = os.path.join(here, "_version.py")
verstrline = open(VERSIONFILE, "rt").read()
VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
mo = re.search(VSRE, verstrline, re.M)
if mo:
    verstr = mo.group(1)
else:
    raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,))

if os.path.exists(os.path.join(here, '.git')):
    git_hash = subprocess.check_output(
        'git rev-parse --verify --short HEAD'.split()).decode().strip()
    # tag git, unless this version is already tagged
    gitverstr = 'v' + verstr
    tags = subprocess.check_output(['git', 'tag']).decode().split()
    if gitverstr not in tags:
        subprocess.check_output(
            ['git', 'tag', '-a', gitverstr, git_hash,
             '-m', 'tagged by setup.py to %s' % verstr])
    # use the git hash in the setup version
    verstr += ', git hash: %s' % git_hash

setup(
    name='a_package',
    version=verstr,
    # ...
)
As was mentioned in another answer, this is related to the release process and not to the development process; as such, it is not a git issue in itself, but more a question of how your release process works.
A very simple variant is to use this:
python setup.py egg_info -b ".`date '+%Y%m%d'`git`git rev-parse --short HEAD`" build sdist
The portion between the quotes is up for customization; however, I tried to follow the typical Fedora/Red Hat package naming.
Of note, even though egg_info implies a relation to .egg, it is actually used throughout the toolchain, for example for bdist_wheel as well, and it has to be specified at the beginning.
In general, your pre-release and post-release versions should live outside setup.py and outside any imported version.py. The topic of versioning and egg_info is covered in detail here.
Example:
v1.3.4dev.20200813gitabcdef0
The v1.3.4 is in setup.py or any other variation you would like
The dev and 20200813gitabcdef0 parts are generated during the build process (example above)
None of the files generated during the build are checked into git (usually they are filtered out by .gitignore by default); sometimes there is a separate "deployment" repository, or similar, completely separate from the source one
A more complex way would be to have your release process encoded in a Makefile, which is outside the scope of this question; however, a good source of inspiration can be found here and here. You will find good correspondences between Makefile targets and setup.py commands.