Embed git hash into python file when installing

I want to embed the git hash into the version number of a python module if that module is installed from the git repository using ./setup.py install. How do I do that?
My thought was to define a function in setup.py to insert the hash and arrange to have it called when setup has copied the module to its build/lib/ directory, but before it has installed it to its final destination. Is there any way to hook into the build process at that point?
Edit: I know how to get the hash of the current version from the command line; I am asking how to get such a command to run at the right time during the build/install.
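One way to hook in at exactly that point is to subclass the build_py command and append the hash after it has copied the package into build/lib/. A minimal sketch (the package name mypackage and its _version.py module are placeholder assumptions):
import os
import subprocess

from setuptools import setup
from setuptools.command.build_py import build_py


class build_py_with_git_hash(build_py):
    def run(self):
        build_py.run(self)  # copies the package into build/lib/ first
        if self.dry_run:
            return
        sha = subprocess.check_output(
            ['git', 'rev-parse', '--short', 'HEAD']).decode().strip()
        # append the hash to the copy in build/lib/, not to the source tree
        version_file = os.path.join(self.build_lib, 'mypackage', '_version.py')
        with open(version_file, 'a') as f:
            f.write("\ngit_revision = %r\n" % sha)


setup(
    name='mypackage',
    packages=['mypackage'],
    cmdclass={'build_py': build_py_with_git_hash},
)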

Another, possibly simpler, way to do it uses gitpython, as in dd/setup.py:
from pkg_resources import parse_version  # part of `setuptools`

def git_version(version):
    """Return version with local version identifier."""
    import git
    repo = git.Repo('.git')
    repo.git.status()
    # assert versions are increasing
    latest_tag = repo.git.describe(
        match='v[0-9]*', tags=True, abbrev=0)
    assert parse_version(latest_tag) <= parse_version(version), (
        latest_tag, version)
    sha = repo.head.commit.hexsha
    if repo.is_dirty():
        return f'{version}.dev0+{sha}.dirty'
    # commit is clean
    # is it a release of `version`?
    try:
        tag = repo.git.describe(
            match='v[0-9]*', exact_match=True,
            tags=True, dirty=True)
    except git.GitCommandError:
        return f'{version}.dev0+{sha}'
    assert tag == f'v{version}', (tag, version)
    return version
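In setup.py this helper would then be used roughly like so (a sketch; the package name and the base version string are placeholders):
from setuptools import setup

setup(
    name='mypackage',
    version=git_version('1.2.0'),
    packages=['mypackage'],
)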
cf also the discussion at https://github.com/tulip-control/tulip-control/pull/145

Related

Is there a way to get git commit --verbose to show an updated diff when using pre-commit hooks?

I'm currently setting up a git pre-commit hook to lint my Python files with isort and Black. The issue I'm running into is that when I use git commit --verbose, the diff that shows up in the commit editor hasn't actually taken the modifications to the staged files into account.
For example, let's say I have a Python file that looks like this:
import re
from os import path
def x():
    v = re.compile(r"1")
    print(3, v)
def y(v=3):
    z = path.join("a", "b")
    thing = "a string"
    print(thing, z)
Based on the isort and Black settings I have configured, my pre-commit script will change the file to look like this:
import re
from os import path


def x():
    v = re.compile(r"1")
    print(3, v)


def y(v=3):
    z = path.join("a", "b")
    thing = "a string"
    print(thing, z)
Unfortunately, the git commit editor still shows the unmodified diff. Is there some way to get the editor to show the correct output?
Theoretically I guess it doesn't matter, but it would be nice to see what the diff would actually be.
Instead of a pre-commit hook, try a content filter driver, with a smudge/clean script that can:
transform your script one way on checkout
transform it the other way on commit (or on git diff)
See an example here or (for clean) here
(image from "Customizing Git - Git Attributes" in the "Pro Git" book)
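For reference, wiring such a filter driver up looks roughly like this (a sketch; the filter name and the two scripts are placeholders you would replace with your isort/Black formatting and its inverse):
# .gitattributes
*.py filter=format

# register the scripts that implement the filter (placeholder paths)
git config filter.format.clean ./clean.sh
git config filter.format.smudge ./smudge.sh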

How to check out a branch with GitPython

I have cloned a repository with GitPython; now I would like to check out a branch and update the local repository's working tree with the contents of that branch. Ideally, I'd also be able to check if the branch exists before doing this. This is what I have so far:
import git
repo_clone_url = "git@github.com:mygithubuser/myrepo.git"
local_repo = "mytestproject"
test_branch = "test-branch"
repo = git.Repo.clone_from(repo_clone_url, local_repo)
# Check out branch test_branch somehow
# write to file in working directory
repo.index.add(["test.txt"])
commit = repo.index.commit("Commit test")
I am not sure what to put in the place of the comments above. The documentation seems to give an example of how to detach the HEAD, but not how to check out a named branch.
If the branch exists:
repo.git.checkout('branchname')
If not:
repo.git.checkout('-b', 'branchname')
Basically, with GitPython, if you know how to do something on the command line but not through the API, just use repo.git.action("your command without the leading 'git' and 'action'"); for example, git log --reverse becomes repo.git.log('--reverse').
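To also check whether the branch exists before checking it out, something along these lines should work (a sketch using the names from the question; it assumes the clone's default remote is origin):
import git

repo = git.Repo.clone_from(repo_clone_url, local_repo)

local_branches = [h.name for h in repo.heads]
remote_branches = [r.remote_head for r in repo.remote().refs]

if test_branch in local_branches:
    repo.heads[test_branch].checkout()
elif test_branch in remote_branches:
    # create a local branch tracking the remote one, then check it out
    repo.create_head(test_branch, repo.remote().refs[test_branch]).checkout()
else:
    raise ValueError("branch %r does not exist" % test_branch)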

Scapy module blocks PyCharm debugger

I'm working on a project in PyCharm, and I need to debug a certain part of the code.
When I tried to debug, the debugger just "skipped" the breakpoints without stopping at them.
After a lot of unhelpful searching on the web, I found that when I import the Scapy module the debugger doesn't work, and when Scapy isn't imported everything works just fine.
By the way, I'm working on Ubuntu.
Any ideas?
I came across this problem myself. It is very annoying. After much debugging, I got to an answer.
The cause of the problem seems to be the way scapy imports everything into the global namespace and this seems to break PyCharm (name clash, perhaps?).
By the way, this all applies to v2.3.3 of scapy from 18th October, 2016.
As scapy is loading, it eventually hits a line in scapy/all.py:
from scapy.layers.all import *
This loads scapy/layers/all.py which loads scapy/config.py. This last file initialises Conf.load_layers[] to a list of modules (in scapy/layers).
scapy/layers/all.py then loops through this list, calling _import_star() on each module.
After it loads scapy/layers/x509.py, all breakpoints in PyCharm stop working.
I've got FOUR solutions for you; pick the one you like best ...
(1) If you don't use anything to do with X509, you could simply remove this module from the list assigned to Conf.load_layers[] in scapy/config.py (line 383 in my copy of config.py). WARNING: THIS IS A REAL HACK - please avoid doing it unless there is no other way forward for you.
If you only need to debug temporarily, you can also use this code sample:
from scapy import config
config.Conf.load_layers.remove("x509")
from scapy.all import *
(2) The problem is with symbols being imported into the global namespace. This is fine for classes, but bad for constants. There is code in _import_star() that checks the name of the symbol and does NOT load it into the global namespace if it begins with a _ (i.e. a "private" name). You could modify this function to treat the x509 module specially by renaming names that do not begin with X509_. Hopefully this will import the classes defined in x509 and not the constants. Here is a sample patch:
*** layers/all.py	2017-03-31 12:44:00.673248054 +0100
--- layers/all.py	2017-03-31 12:44:00.673248054 +0100
***************
*** 21,26 ****
--- 21,32 ----
          for name in mod.__dict__['__all__']:
              __all__.append(name)
              globals()[name] = mod.__dict__[name]
+     elif m == "x509":
+         # import but rename as we go ...
+         for name, sym in mod.__dict__.iteritems():
+             if name[0] != '_' and name[:5] != "X509_":
+                 __all__.append("_x509_" + name)
+                 globals()["_x509_" + name] = sym
      else:
          # import all the non-private symbols
          for name, sym in mod.__dict__.iteritems():
WARNING: THIS IS A REAL HACK - please avoid doing it unless there is no other way forward for you.
(3) This is a variation on solution (2), so also A REAL HACK (etc. etc.). You could edit scapy/layers/x509.py and prepend a _ to all constants. For example, all instances of default_directoryName should be changed to _default_directoryName. I found the following constants that needed changing: default_directoryName, reasons_mapping, cRL_reasons, ext_mapping, default_issuer, default_subject, attrName_mapping and attrName_specials. This is nice as it matches a fix applied to x509.py that I found in the scapy git repo ...
(4) You could just update to the next version of scapy. I don't know if this will be v2.3.4 or v2.4, as there is (at the time of writing) no next version released yet. So, while this (lack of a new release) continues, you could update to the latest development version (where they have already fixed this problem on Feb 8th 2017). I use scapy installed under my home directory (rather than in the system python packages location), so I did the following:
pip uninstall scapy
git clone https://github.com/secdev/scapy /tmp/scapy
cd /tmp/scapy
python setup.py install --user
cd -
rm -rf /tmp/scapy
Good luck!
I cannot comment on Spiceisland's response for lack of reputation points, but with the current version of scapy (2.3.3.dev532) I see the same issue with the tls layer that Spiceisland pointed out for x509. Therefore all the workarounds and fixes have to be applied accordingly for the tls module.
So the simplest quick-and-dirty fix (and you won't be able to use TLS after that):
In scapy/config.py, remove the "tls" element from the load_layers list (that's line 434 in the 2.3.3.dev532 version of scapy).
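If you only need the workaround at runtime, the same trick shown above for x509 should carry over (a sketch mirroring that snippet):
from scapy import config
config.Conf.load_layers.remove("tls")
from scapy.all import *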
I have also filed a bug for this issue https://github.com/secdev/scapy/issues/746

Python Scapy and Pyinstaller

I'm trying to create a simple executable with PyInstaller from a script I use for testing, so I do not have to install everything on the server I'm testing from.
#! /usr/bin/env python
from scapy.all import *

sourceport=int(raw_input('Soruce port:'))
destinationport=int(raw_input('Destination port:'))
destinationip=raw_input('Destination IP:')
maxttl=int(raw_input('MAX TTL:'))
for i in range(1,maxttl):
    udptrace = IP(dst=destinationip,ttl=i)/UDP(dport=destinationport,sport=sourceport,len=500)
    received=sr1(udptrace,verbose=0,timeout=2)
    try:
        print received.summary()
    except AttributeError:
        print "** TIMEOUT **"
Then I make executable:
pyinstaller -F udp.py
However, when I run it I get the following error:
Soruce port:500
Destination port:500
Destination IP:4.4.4.4
MAX TTL:3
Traceback (most recent call last):
File "<string>", line 16, in <module>
NameError: name 'IP' is not defined
user#:~/2/dist$
I have spent some time researching but did not find any answers.
The problem
First, we need to pinpoint the exact problem.
The PyInstaller manual specifies that:
Some Python scripts import modules in ways that PyInstaller cannot detect: for example, by using the __import__() function with variable data.
Inspecting Scapy's source code reveals that this is exactly how the various networking layers are imported:
scapy/layers/all.py:
def _import_star(m):
    mod = __import__(m, globals(), locals())
    for k,v in mod.__dict__.iteritems():
        globals()[k] = v

for _l in conf.load_layers:
    log_loading.debug("Loading layer %s" % _l)
    try:
        _import_star(_l)
    except Exception,e:
        log.warning("can't import layer %s: %s" % (_l,e))
Note that __import__ is invoked for each module in conf.load_layers.
scapy/config.py:
class Conf(ConfClass):
    """This object contains the configuration of scapy."""
    load_layers = ["l2", "inet", "dhcp", "dns", "dot11", "gprs", "hsrp", "inet6", "ir", "isakmp", "l2tp",
                   "mgcp", "mobileip", "netbios", "netflow", "ntp", "ppp", "radius", "rip", "rtp",
                   "sebek", "skinny", "smb", "snmp", "tftp", "x509", "bluetooth", "dhcp6", "llmnr", "sctp", "vrrp",
                   "ipsec"]
Note that Conf.load_layers contains "inet".
The file scapy/layers/inet.py defines the IP class, which was not imported successfully in the enclosed example.
The solution
Now that we have located the root cause, let's see what can be done about it.
The PyInstaller manual suggests some workarounds to such importing issues:
You can give additional files on the PyInstaller command line.
You can give additional import paths on the command line.
You can edit the myscript.spec file that PyInstaller writes the first time you run it for your script. In the spec file you can tell PyInstaller about code and data files that are unique to your script.
You can write "hook" files that inform PyInstaller of hidden imports. If you "hook" imports for a package that other users might also use, you can contribute your hook file to PyInstaller.
A bit of googling reveals that an appropriate "hook" was already added to the default PyInstaller distribution, in this commit, which introduced the file PyInstaller/hooks/hook-scapy.layers.all.py.
The PyInstaller manual indicates that such built-in hooks should run automatically:
In summary, a "hook" file tells PyInstaller about hidden imports called by a particular module. The name of the hook file is hook-<module>.py where "<module>" is the name of a script or imported module that will be found by Analysis. You should browse through the existing hooks in the hooks folder of the PyInstaller distribution folder, if only to see the names of the many supported imports.
For example hook-cPickle.py is a hook file telling about hidden imports used by the module cPickle. When your script has import cPickle the Analysis will note it and check for a hook file hook-cPickle.py.
Bottom Line
Therefore, please verify that you're running the latest version of PyInstaller. If you can't upgrade to the latest version, or if it doesn't contain the file PyInstaller/hooks/hook-scapy.layers.all.py, then create it with the following content:
#-----------------------------------------------------------------------------
# Copyright (c) 2013, PyInstaller Development Team.
#
# Distributed under the terms of the GNU General Public License with exception
# for distributing bootloader.
#
# The full license is in the file COPYING.txt, distributed with this software.
#-----------------------------------------------------------------------------
from PyInstaller.hooks.hookutils import collect_submodules
# The layers to load can be configured using scapy's conf.load_layers.
# from scapy.config import conf; print(conf.load_layers)
# I decided not to use this, but to include all layer modules. The
# reason is: When building the package, load_layers may not include
# all the layer modules the program will use later.
hiddenimports = collect_submodules('scapy.layers')
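If you cannot upgrade PyInstaller or add the hook file, another rough option (hedged: this only declares the one layer the traceback complains about, and you may need to list more) is to pass the hidden import explicitly on the command line:
pyinstaller -F --hidden-import=scapy.layers.inet udp.py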

distutils: How to pass a user defined parameter to setup.py?

How can I pass a user-defined parameter both from the command line and setup.cfg configuration file to distutils' setup.py script?
I want to write a setup.py script, which accepts my package specific parameters. For example:
python setup.py install -foo myfoo
As Setuptools/Distutils are horribly documented, I had problems finding the answer to this myself. But eventually I stumbled across this example. Also, this similar question was helpful. Basically, a custom command with an option would look like:
from distutils.core import setup, Command

class InstallCommand(Command):
    description = "Installs the foo."
    user_options = [
        ('foo=', None, 'Specify the foo to bar.'),
    ]
    def initialize_options(self):
        self.foo = None
    def finalize_options(self):
        assert self.foo in (None, 'myFoo', 'myFoo2'), 'Invalid foo!'
    def run(self):
        install_all_the_things()

setup(
    ...,
    cmdclass={
        'install': InstallCommand,
    }
)
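With that registered, the option is passed on the command line, for example (using one of the values allowed by finalize_options above):
python setup.py install --foo=myFoo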
Here is a very simple solution: all you have to do is filter sys.argv and handle the option yourself before you call distutils' setup(...).
Something like this:
if "--foo" in sys.argv:
do_foo_stuff()
sys.argv.remove("--foo")
...
setup(..)
The documentation on how to do this with distutils is terrible; eventually I came across this one: the Hitchhiker's Guide to Packaging, which uses sdist and its user_options.
I find the extending distutils reference not particularly helpful.
This looks like the "proper" way of doing it with distutils (at least the only one I could find that is even vaguely documented), but I could not find anything on the --with and --without switches mentioned in the other answer.
The problem with this distutils solution is that it is just way too involved for what I am looking for (which may also be the case for you).
Adding dozens of lines and subclassing sdist is just wrong for me.
Yes, it's 2015 and the documentation for adding commands and options in both setuptools and distutils is still largely missing.
After a few frustrating hours I figured out the following code for adding a custom option to the install command of setup.py:
from setuptools.command.install import install

class InstallCommand(install):
    user_options = install.user_options + [
        ('custom_option=', None, 'Path to something')
    ]
    def initialize_options(self):
        install.initialize_options(self)
        self.custom_option = None
    def finalize_options(self):
        #print('The custom option for install is ', self.custom_option)
        install.finalize_options(self)
    def run(self):
        global my_custom_option
        my_custom_option = self.custom_option
        install.run(self)  # OR: install.do_egg_install(self)
It's worth mentioning that install.run() checks whether it's called "natively" or has been patched:
if not self._called_from_setup(inspect.currentframe()):
    orig.install.run(self)
else:
    self.do_egg_install()
At this point you register your command with setup:
setup(
    cmdclass={
        'install': InstallCommand,
    },
    ...
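The option can then be supplied when installing (a sketch using the names above; the path is a placeholder):
python setup.py install --custom_option=/path/to/something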
You can't really pass custom parameters to the script. However, the following things are possible and could solve your problem:
Optional features can be enabled using --with-featurename, and standard features can be disabled using --without-featurename [AFAIR this requires setuptools].
You can use environment variables; these, however, have to be set separately on Windows, whereas prefixing the command works on Linux/OS X (FOO=bar python setup.py).
You can extend distutils with your own cmd_classes, which can implement new features. They are also chainable, so you can use that to change variables in your script: python setup.py foo install will execute the foo command before it executes install.
Hope that helps somehow. Generally speaking, I would suggest providing a bit more information about what exactly your extra parameter should do; maybe there is a better solution available.
I successfully used a workaround similar to totaam's suggestion. I ended up popping my extra arguments from the sys.argv list:
import sys
from distutils.core import setup

foo = 0
if '--foo' in sys.argv:
    index = sys.argv.index('--foo')
    sys.argv.pop(index)  # Removes the '--foo'
    foo = sys.argv.pop(index)  # Returns the element after the '--foo'
# The foo is now ready to use for the setup
setup(...)
Some extra validation could be added to ensure the inputs are good, but this is how I did it.
A quick and easy way similar to that given by totaam would be to use argparse to grab the --foo argument and leave the remaining arguments for the call to distutils' setup(). Using argparse for this is better than iterating through sys.argv manually, IMHO. For instance, add this at the beginning of your setup.py:
import argparse
import sys

argparser = argparse.ArgumentParser(add_help=False)
argparser.add_argument('--foo', help='required foo argument', required=True)
args, unknown = argparser.parse_known_args()
sys.argv = [sys.argv[0]] + unknown
The add_help=False argument means that you can still get the regular setup.py help using -h (provided --foo is given).
Perhaps you are an unseasoned programmer like me who still struggled after reading all the answers above. If so, you might find another example helpful (and it addresses the comments in previous answers about how to enter the command-line arguments):
import subprocess
import sys
from distutils.core import Command, setup

class RunClientCommand(Command):
    """
    A command class that runs the client GUI.
    """
    description = "runs client gui"
    # The format is (long option, short option, description).
    user_options = [
        ('socket=', None, "The socket of the server to connect to (e.g. '127.0.0.1:8000')"),
    ]
    def initialize_options(self):
        """
        Sets the default value for the server socket.
        The method is responsible for setting default values for
        all the options that the command supports.
        Option dependencies should not be set here.
        """
        self.socket = '127.0.0.1:8000'
    def finalize_options(self):
        """
        Overriding a required abstract method.
        The method is responsible for setting and checking the
        final values and option dependencies for all the options
        just before the method run is executed.
        In practice, this is where the values are assigned and verified.
        """
        pass
    def run(self):
        """
        Semantically, runs 'python src/client/view.py SERVER_SOCKET' on the
        command line.
        """
        print(self.socket)
        errno = subprocess.call([sys.executable, 'src/client/view.py', self.socket])
        if errno != 0:
            raise SystemExit("Unable to run client GUI!")

setup(
    # Some other omitted details
    cmdclass={
        'runClient': RunClientCommand,
    },
)
The above is tested and taken from some code I wrote. I have also included slightly more detailed docstrings to make things easier to understand.
As for the command line: python setup.py runClient --socket=127.0.0.1:7777. A quick double check using print statements shows that indeed the correct argument is picked up by the run method.
Other resources I found useful (more and more examples):
Custom distutils commands
https://seasonofcode.com/posts/how-to-add-custom-build-steps-and-commands-to-setuppy.html
To be fully compatible with both python setup.py install and pip install ., you need to use environment variables, because the pip option --install-option= is bugged:
pip --install-option leaks across lines
Determine what should be done about --(install|global)-option with Wheels
pip not naming abi3 wheels correctly
This is a full example not using the --install-option:
import os
import sys
from setuptools import setup  # or: from distutils.core import setup

environment_variable_name = 'MY_ENVIRONMENT_VARIABLE'
environment_variable_value = os.environ.get( environment_variable_name, None )

if environment_variable_value is not None:
    sys.stderr.write( "Using '%s=%s' environment variable!\n" % (
            environment_variable_name, environment_variable_value ) )

setup(
    name = 'packagename',
    version = '1.0.0',
    ...
)
Then, you can run it like this on Linux:
MY_ENVIRONMENT_VARIABLE=1 pip install .
MY_ENVIRONMENT_VARIABLE=1 pip install -e .
MY_ENVIRONMENT_VARIABLE=1 python setup.py install
MY_ENVIRONMENT_VARIABLE=1 python setup.py develop
But, if you are on Windows, run it like this:
set "MY_ENVIRONMENT_VARIABLE=1" && pip install .
set "MY_ENVIRONMENT_VARIABLE=1" && pip install -e .
set "MY_ENVIRONMENT_VARIABLE=1" && python setup.py install
set "MY_ENVIRONMENT_VARIABLE=1" && python setup.py develop
References:
How to obtain arguments passed to setup.py from pip with '--install-option'?
Passing command line arguments to pip install
Passing the library path as a command line argument to setup.py
