This is related to scons - always install after build
With SCons 2.3.2, I am trying to get SCons to install the built target into its pre-defined location without running extra commands. The solution proposed in the link above does not work for me, so I am trying to use default targets instead.
Let's say my source is in src/a and I install into /dst-path/a. In src/a SConscript (called from the parent SConscript) I have:
result = env.MyBuild(some_tgt, some_src)
env.Install('/dst-path/a', result)
If I type scons -u in src/a, it builds but does not install. If I type scons -u /dst-path/a in the same location, it builds and installs. I can add env.Alias('install', '/dst-path/a') and then scons -u install installs as well. This much is described in the user guide. But I want to run just scons -u to build AND install.
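For reference, a sketch of that Alias setup in the src/a SConscript (MyBuild, some_tgt and some_src are the same placeholders as above):
# src/a/SConscript -- install via an 'install' alias
result = env.MyBuild(some_tgt, some_src)          # MyBuild is my own builder
installed = env.Install('/dst-path/a', result)    # copy the built target out of the tree
env.Alias('install', installed)                   # enables: scons -u install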
So my idea is to add /dst-path/a to the default targets, and to do so only in the place that can generate content for that location. So, in the SConscript in src/a, I do:
env.Default('/dst-path/a')
from SCons.Script import DEFAULT_TARGETS, BUILD_TARGETS
print "DEFAULT_TARGETS in %s is %s" % (env['MY_SOURCE_DIR'], map(str, DEFAULT_TARGETS))
print " BUILD_TARGETS in %s is %s" % (env['MY_SOURCE_DIR'], map(str, BUILD_TARGETS))
# env['MY_SOURCE_DIR'] tracks current source path and evaluates to 'src/a' in this case
Presumably, this is equivalent to calling scons -u /dst-path/a. Now I delete /dst-path/a, run scons -u while in src/a, and see:
DEFAULT_TARGETS in src is []
BUILD_TARGETS in src is []
DEFAULT_TARGETS in src/a is ['/dst-path/a']
BUILD_TARGETS in src/a is ['/dst-path/a']
- yet nothing happens! But if I run scons -u /dst-path/a, I see:
DEFAULT_TARGETS in src is []
BUILD_TARGETS in src is ['/dst-path/a']
DEFAULT_TARGETS in src/a is ['/dst-path/a']
BUILD_TARGETS in src/a is ['/dst-path/a']
- and now it builds and installs, just as before. My code had no effect.
So why does it completely ignore my Default specification, even though it even makes it into the BUILD_TARGETS? Is it a bug?
How on earth can I coerce SCons to install the things it builds in one step?
BTW, not sure if it matters much, but I also use VariantDir to separate intermediate files from the source ones.
OK, I've learned that "-u" affects things: according to the option help, it does not build any Default targets (I think a separate --ignore_defaults option would have been a better approach, but oh well...).
So, for SCons to not ignore the Defaults, one should use "-D" or "-U". "-D" picks up the defaults all over the build tree, regardless of the current location - that's not what I want. "-U", however, honors the defaults set for the current location.
Now the real problem turned out to be the install path! I tried changing the install location from /dst-path/a to install/a (i.e. within the build tree) and now, magically, everything works. Even "-u" works as expected (without any extra Defaults set), installing the file when it is missing. And if I set the install path as a Default, then "-U" works, too. But with the /dst-path/a path, "-U" says it found no Default targets; if I change nothing but the path, it suddenly finds them and builds.
Basically, this would all have worked fine from the start had the install path been within the tree. But why would I want to install within the source subtree? This is a crazy limitation; I'd call it a bug.
So this solution only works if you install within the tree, and it works as expected, without any trickery. This still does not answer how to install outside the tree.
...And it does not install outside the tree. You must name the install target/path explicitly on the scons command line. Some examples:
scons -u /dst-path/a # as seen in manual and FAQ
scons -u src/a # if you have Alias('src/a', '/dst-path/a')
# but "scons -u" or "scons -u '.'" from src/a won't work!
Another way to put it: when it comes to external paths, the behavior differs. SCons won't build anything outside the top path, no matter what you do with Install, Alias, or Default, until that outside path (or its alias) is passed as a target on the scons command line. And this has nothing to do with -u/-U, etc.
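For completeness, a sketch of the in-tree variant that does work without naming targets on the command line (the '#' prefix anchors the path at the top-level SConstruct directory; adjust install/a to wherever your in-tree install directory lives):
# src/a/SConscript -- install inside the tree so plain 'scons -u' / 'scons -U' works
result = env.MyBuild(some_tgt, some_src)
installed = env.Install('#/install/a', result)   # stays under the SConstruct top directory
env.Default(installed)                           # gives 'scons -U' (run from src/a) a default target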
Sorry this is a long question. See the sentence in bold at the bottom for the TL;DR version.
I've spent many hours trying to track down a problem where pylint sometimes doesn't report all the errors in a module. Note that it does find some errors (e.g. long lines), just not all of them (e.g. missing docstrings).
I'm running pylint 1.7.2 on Ubuntu 16.04. (The version available from apt was 1.5.2 but installing via pip gives 1.7.2.)
We typically run pylint from tox, with a tox.ini that looks something like this (this is a cut-down version):
[tox]
envlist = py35

[testenv]
setenv =
    MODULE_NAME=our_module
ignore_errors = True
deps =
    -r../requirements.txt
whitelist_externals = bash
commands =
    pip install --editable=file:///{toxinidir}/../our_other_module
    pip install -e .
    bash -c 'set -o pipefail; pylint --rcfile=../linting/pylint.cfg our_module | tee pylint.log'
Amongst other things, the ../requirements.txt file contains a line for pylint==1.7.2.
The behaviour is like this:
[wrong] When the line that imports our_other_module is present, pylint appears to complete successfully and not report any warnings, even though there are errors in the our_module code that it should pick up.
[correct] When that line is commented out, pylint generates the expected warnings.
As part of tracking this down I took two copies of the .tox folder with and without the module import, naming them .tox-no-errors-reported and .tox-with-errors-reported respectively.
So now, even without sourcing their respective tox virtualenvs, I can do the following:
$ .tox-no-errors-reported/py35/bin/pylint --rcfile=../linting/pylint.cfg our_module -- reports no linting warnings
$ .tox-with-errors-reported/py35/bin/pylint --rcfile=../linting/pylint.cfg our_module -- reports the expected linting warnings
(where I just changed the pylint script's #! line in each case to reference the python3.5 inside that specific .tox directory instead of the unrenamed .tox)
By diffing .tox-no-errors-reported and .tox-with-errors-reported, I've found that they are very similar. But I can make the "no errors" version start to report errors by removing the path to our_other_module from .tox-no-errors-reported/py35/lib/python3.5/site-packages/easy-install.pth.
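For reference, a quick way to see what that easy-install.pth injects and whether those entries are live (the .tox-no-errors-reported paths are my local copies from above; run it with that environment's python):
# Inspect easy-install.pth and compare against the interpreter's sys.path.
import sys

pth_file = ".tox-no-errors-reported/py35/lib/python3.5/site-packages/easy-install.pth"
with open(pth_file) as f:
    entries = [line.strip() for line in f
               if line.strip() and not line.startswith("import")]

print(entries)                                # paths the .pth file injects
print([e for e in entries if e in sys.path])  # which of those are actually on sys.path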
So my question is: why is pylint using easy_install at runtime, and what is it picking up from our other component that causes it to fail to report some errors?
As I understand it, pylint has dependencies on astroid and logilab-common, but including these in the requirements.txt doesn't make any difference.
One possible reason for the surprising pylint behavior is the --editable option.
it creates a special .egg-link file in the deployment directory, that links to your project’s source code. And, ..., it will also update the easy-install.pth file to include your project’s source code
The .pth file affects sys.path, which in turn affects the module-import logic of astroid; that logic is deeply buried in the call stack of pylint.expand_files via pylint.utils.expand_modules. pylint also identifies the module part and function names in the AST using astroid.modutils.get_module_part.
To test the theory, you can try calling some of the affected astroid functions manually:
import sys
import astroid

print(sys.path)  # the entries injected by easy-install.pth show up here
print(astroid.modutils.get_module_part('your_package.sub_package.module'))
print(astroid.modutils.file_from_modpath(['your_package', 'sub_package', 'module']))
I'm getting an error while trying to install FEnicS on Mac OS X 10.11.6. I've read the responses to similar questions on this website, and have tried the suggested solutions, but I must be doing something wrong.
On running the command:
curl -s https://fenicsproject.org/fenics-install.sh | bash
I get an error while the cython package is being installed:
[cython] Building cython/e2t4ieqlgjl3, follow log with:
[cython] tail -f /Users/sophiaw/.hashdist/tmp/cython-e2t4ieqlgjl3-1/_hashdist/build.log
[cython|ERROR] Command '[u'/bin/bash', '_hashdist/build.sh']' returned non-zero exit status 1
[cython|ERROR] command failed (code=1); raising.
The message from build.log is:
Checking .pth file support in
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
/Users/sophiaw/.hashdist/bld/python/pf77qttkbtzn/bin/python -E -c pass
TEST FAILED:
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
does NOT support .pth files
error: bad install directory or PYTHONPATH
You are attempting to install a package to a directory that is not on
PYTHONPATH and which Python does not read ".pth" files from. The
installation directory you specified (via --install-dir, --prefix, or
the distutils default setting) was:
/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages/
and your PYTHONPATH environment variable currently contains:
'/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/Python.framework/Versions/2.7/lib/python2.7/site-packages:'
Here are some of your options for correcting the problem:
You can choose a different installation directory, i.e., one that is on PYTHONPATH or supports .pth files
You can add the installation directory to the PYTHONPATH environment variable. (It must then also be on PYTHONPATH whenever you run Python
and want to use the package(s) you are installing.)
You can set up the installation directory to support ".pth" files by using one of the approaches described here:
https://pythonhosted.org/setuptools/easy_install.html#custom-installation-locations
Please make the appropriate changes for your system and try again.
I've tried adding this to my .bash_profile, but get the same error:
export PYTHONPATH=/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages:$PYTHONPATH
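To double-check whether that directory is even scanned for .pth files, a small test like this can be run with the hashdist Python (the hashed paths are the ones from the log above):
# Run with /Users/sophiaw/.hashdist/bld/python/pf77qttkbtzn/bin/python
import site
import sys

d = "/Users/sophiaw/.hashdist/bld/cython/e2t4ieqlgjl3/lib/python2.7/site-packages"
print(any(p.rstrip("/") == d for p in sys.path))  # is it on the default path at all?
site.addsitedir(d)                                # force processing of .pth files in that directory
print(any(p.rstrip("/") == d for p in sys.path))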
How can I fix this error?
This was resolved by the FEniCS support group: to install FEniCS on OS X, Docker is a more convenient option.
I've been working with SCons for a while now and I'm facing a problem that I can't manage to resolve; I hope someone can help me. I created a dummy project that compiles a basic helloWorld (in main.cpp). What I want to do is compile my binary from the 'test' folder using the scons -u command. All of the build output goes into a variant dir that will eventually be created at the root of the project (the build folder).
Here's my folder tree:
+sconsTest
-SConstruct
+ test
-SConscript
+test2
-SConscript
-main.cpp
+build (will eventually be created by scons)
Following is the SConstruct code:
env = Environment()
env.SConscript('test/SConscript', {'env' : env})
Following is test/SConscript code:
Import('env')
env = env.Clone()
env.SConscript('test2/SConscript', {'env' : env}, variant_dir="#/build", duplicate=0)
Following is test2/SConscript code:
Import('env')
env = env.Clone()
prog = env.Program('main', 'main.cpp')
After placing myself in the 'sconsTest/test' folder, I type scons -u and expect it to build my program; however, all it says is that 'test' is up to date, and nothing is compiled. I also noticed that when I remove both the variant_dir and duplicate args from test/SConscript, scons -u works.
Furthermore, I noticed it was possible for me to compile the program using the command
scons -u test2
However, I'm using SCons on a large-scale project and I don't want to give a relative path as an argument to compile my project. I want scons -u to automatically build everything it finds in subdirectories.
Does anyone have any idea how to resolve this issue?
Please check the MAN page again. The -u option will only build default targets at or below the current directory. This excludes your folder sconsTest/build when you're in sconsTest/test.
What you are looking for is the -U option (with the capital "U") instead. It builds all default targets that are defined in the SConscript(s) in the current directory, regardless of what directory the resultant targets end up in.
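As a sketch of what that can look like (my addition, not part of your original files: the Default call is the only change), test2/SConscript registers the program as a default target so that scons -U run from sconsTest/test has something to pick up:
# test2/SConscript -- same as before, plus a Default declaration
Import('env')
env = env.Clone()
prog = env.Program('main', 'main.cpp')
env.Default(prog)   # registers the program as a default target for 'scons -U'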
How can I set the installation path for pip using get-pip.py to /usr/local/bin/? I can't find any mention in the setup guide or in the command line options.
To clarify: I don't mean the path where pip packages are installed, but the path where pip itself is installed (it should be /usr/local/bin/pip).
Edit
I do agree with many of the comments/answers that virtualenv would be a better idea in general. However, it simply isn't the best option for me at the moment, since it would be too disruptive: many of our users' scripts rely on python2.7 being magically available. That is not convenient either and should change, but we have been using Python since before virtualenv was a thing.
Pip itself is a Python package, and the actual pip command just runs a small Python script which then imports and runs the pip package.
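For illustration, the generated pip wrapper in /usr/local/bin looks roughly like this for pip of that era (the exact boilerplate varies with the setuptools version used to install it):
#!/usr/bin/python
# Rough shape of a setuptools-generated 'pip' console script; details vary by version.
import sys

from pip import main

if __name__ == '__main__':
    sys.exit(main())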
You can edit locations.py to change installation directories; however, as explained below, I highly recommend that you do not do this.
Pip Command
Pip accepts a flag, --install-option="--install-scripts=...", which can be used to change the script installation directory:
pip install somepackage --install-option="--install-scripts=/usr/local/bin"
Source method
On line 124 in pip/locations.py, we see the following:
site_packages = sysconfig.get_python_lib()
user_site = site.USER_SITE
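On a typical system install, those two values resolve to something like the following (the example paths in the comments are assumptions; they vary by platform and Python build):
# Quick check of the two locations pip starts from.
import site
from distutils import sysconfig

print(sysconfig.get_python_lib())  # e.g. /usr/lib/python2.7/dist-packages on Debian/Ubuntu
print(site.USER_SITE)              # e.g. ~/.local/lib/python2.7/site-packages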
You can technically edit these to change the default install path; however, using a virtual environment would be highly preferable. These two locations are then used to find the egg-link path, which in turn gives the dist location (code appended below, from pip/__init__.py):
def egg_link_path(dist):
    """
    Return the path for the .egg-link file if it exists, otherwise, None.

    There's 3 scenarios:
    1) not in a virtualenv
       try to find in site.USER_SITE, then site_packages
    2) in a no-global virtualenv
       try to find in site_packages
    3) in a yes-global virtualenv
       try to find in site_packages, then site.USER_SITE
       (don't look in global location)

    For #1 and #3, there could be odd cases, where there's an egg-link in 2
    locations.

    This method will just return the first one found.
    """
    sites = []
    if running_under_virtualenv():
        if virtualenv_no_global():
            sites.append(site_packages)
        else:
            sites.append(site_packages)
            if user_site:
                sites.append(user_site)
    else:
        if user_site:
            sites.append(user_site)
        sites.append(site_packages)

    for site in sites:
        egglink = os.path.join(site, dist.project_name) + '.egg-link'
        if os.path.isfile(egglink):
            return egglink


def dist_location(dist):
    """
    Get the site-packages location of this distribution. Generally
    this is dist.location, except in the case of develop-installed
    packages, where dist.location is the source code location, and we
    want to know where the egg-link file is.
    """
    egg_link = egg_link_path(dist)
    if egg_link:
        return egg_link
    return dist.location
However, once again, using a virtualenv is much more traceable, and any pip update will overwrite these edits, which won't happen to your own virtualenv.
It seems the easiest workaround I found for this is:
Install easy_install; this will go into /usr/local/bin/ as expected; the steps for doing this are listed here; I personally ended up running wget https://bootstrap.pypa.io/ez_setup.py -O - | python
Install pip with /usr/local/bin/easy_install pip; this will make pip go into /usr/local/bin
When I create a virtualenv, it installs setuptools and pip. Is it possible to add new packages to this list?
Example use cases:
Following this solution to use ipython in virtualenv (from this question) requires installing ipython in every virtualenv (unless I allow system-site-packages).
Or, if I'm only doing flask/pygame/framework development, I'd want that framework in every virtualenv.
I took a different approach from what is chosen as the correct answer.
I chose a directory, like ~/.virtualenvs/deps, and installed packages into it with:
pip install -U --target ~/.virtualenvs/deps ...
Next, in ~/.virtualenvs/postmkvirtualenv I put the following:
# find directory
SITEDIR=$(virtualenvwrapper_get_site_packages_dir)
PYVER=$(virtualenvwrapper_get_python_version)
# create a new .pth file with our path, depending on the python version
if [[ $PYVER == 3* ]]; then
    echo "$HOME/.virtualenvs/deps3/" > "$SITEDIR/extra.pth"
else
    echo "$HOME/.virtualenvs/deps/" > "$SITEDIR/extra.pth"
fi
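To confirm the hook took effect, sys.path inside a freshly created virtualenv should now include the deps directory:
# Run inside a newly created virtualenv; the extra.pth entry should show up here.
import sys
print([p for p in sys.path if "/deps" in p])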
Here's a post that basically says the same thing.
You can write a Python script, say personalize_venv.py, that extends the EnvBuilder class and overrides its post_setup() method to install any default packages that you need.
You can get the basic example from https://docs.python.org/3/library/venv.html#an-example-of-extending-envbuilder.
This doesn't need a hook. Run the script directly with a command-line argument dirs pointing to your venv directory (or directories). The hook is the post_setup() method of the EnvBuilder class itself.
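A minimal sketch of that idea (the script name personalize_venv.py is from above; the default package list is a placeholder to adapt):
# personalize_venv.py -- create venvs that come with a few default packages.
import subprocess
import sys
import venv


class PersonalEnvBuilder(venv.EnvBuilder):
    """venv builder that pip-installs some default packages after setup."""

    def __init__(self, *args, default_packages=("ipython",), **kwargs):
        self.default_packages = list(default_packages)
        super().__init__(*args, **kwargs)

    def post_setup(self, context):
        # context.env_exe is the Python interpreter inside the freshly created venv.
        for pkg in self.default_packages:
            subprocess.check_call([context.env_exe, "-m", "pip", "install", pkg])


if __name__ == "__main__":
    # Usage: python personalize_venv.py DIR [DIR ...]
    builder = PersonalEnvBuilder(with_pip=True)  # with_pip=True so pip exists before post_setup()
    for env_dir in sys.argv[1:]:
        builder.create(env_dir)

Run it as python personalize_venv.py /path/to/new/venv; with_pip=True ensures pip is installed in the venv before post_setup() runs.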