I have a project which I build using SCons (and MinGW/gcc, depending on the platform). This project depends on several other libraries (let's call them libfoo and libbar) which can be installed in different places for different users.
Currently, my SConstruct file embeds hard-coded paths to those libraries (say, something like C:\libfoo).
Now, I'd like to add a configuration option to my SConstruct file so that a user who installed libfoo at another location (say C:\custom_path\libfoo) can do something like:
> scons --configure --libfoo-prefix=C:\custom_path\libfoo
Or:
> scons --configure
scons: Reading SConscript files ...
scons: done reading SConscript files.
### Environment configuration ###
Please enter location of 'libfoo' ("C:\libfoo"): C:\custom_path\libfoo
Please enter location of 'libbar' ("C:\libbar"): C:\custom_path\libbar
### Configuration over ###
Once chosen, those configuration options should be written to some file and reread automatically every time scons runs.
Does scons provide such a mechanism? How would I achieve this behavior? I haven't exactly mastered Python, so even obvious (but complete) solutions are welcome.
Thanks.
SCons has a feature called "Variables". You can set it up so that it reads variables from the command line pretty easily. So in your case you would do something like this from the command line:
scons LIBFOO=C:\custom_path\libfoo
... and the variable would be remembered between runs. So next time you just run scons and it uses the previous value of LIBFOO.
In code you use it like so:
# read variables from the cache, a user's custom.py file or command line
# arguments
var = Variables(['variables.cache', 'custom.py'], ARGUMENTS)

# add a path variable
var.AddVariables(
    PathVariable('LIBFOO',
                 'where the foo library is installed',
                 r'C:\default\libfoo', PathVariable.PathIsDir))

env = Environment(variables=var)
env.Program('test', 'main.c', LIBPATH='$LIBFOO')

# save variables to a file
var.Save('variables.cache', env)
If you really wanted to use "--" style options then you could combine the above with the AddOption function, but it is more complicated.
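A minimal sketch of that approach (untested; the option name and default are illustrative, not from the original answer):

# declare a command-line option; AddOption wraps Python's optparse
AddOption('--libfoo-prefix',
          dest='libfoo_prefix',
          type='string',
          nargs=1,
          action='store',
          metavar='DIR',
          default=r'C:\default\libfoo',
          help='installation prefix of libfoo')

# read the value back with GetOption and feed it into the environment
env = Environment(LIBFOO=GetOption('libfoo_prefix'))
env.Program('test', 'main.c', LIBPATH='$LIBFOO')

Note that, unlike Variables, values passed via AddOption are not cached between runs.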
This SO question talks about the issues involved in getting values out of the Variables object without passing them through an Environment.
I've installed the linter-flake8 Atom package, and keep getting the following warnings:
F821 — undefined name 'self' at line __ col _
Following this question, is there a way to specify builtins="self" in Atom?
I can't seem to find it. And if not, is there a workaround?
There is a special configuration file, required in your home directory (or on a per-project basis), called a flake8 file.
For the global version, if it does not already exist, create a file at ~/.config/flake8.
Within this file add all your customizations; my flake8 file, for example, looks like this:
[flake8]
max-line-length = 120
ignore = W293, E402
So for yours, you'd probably wish to do something similar, with ignore=F821.
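For instance, an illustrative minimal version:
[flake8]
ignore = F821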
For a per-project version, please see the flake8 documentation on configuration:
http://flake8.pycqa.org/en/latest/user/configuration.html
Within that page they highlight a number of possible configuration locations. Good luck!
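To address the original builtins="self" question directly: flake8 also supports a builtins option, so a per-project configuration (a [flake8] section in setup.cfg or tox.ini, or a .flake8 file at the project root) could look like the sketch below, assuming your flake8 version supports that option:
[flake8]
builtins = self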
I've been working with SCons for a while now and I'm facing a problem that I can't manage to resolve, I hope someone can help me. I created a dummy project that compiles a basic helloWorld (in main.cpp). What I want to do is compile my binary from 'test' folder using the scons -u command. All of my build is done in a variant dir that will eventually be created at the root of the project (build folder).
Here's my folder tree:
+ sconsTest
    - SConstruct
    + test
        - SConscript
        + test2
            - SConscript
            - main.cpp
    + build (will eventually be created by scons)
Following is the SConstruct code:
env = Environment()
env.SConscript('test/SConscript', {'env' : env})
Following is test/SConscript code:
Import('env')
env = env.Clone()
env.SConscript('test2/SConscript', {'env' : env}, variant_dir="#/build", duplicate=0)
Following is test2/SConscript code:
Import('env')
env = env.Clone()
prog = env.Program('main', 'main.cpp')
After placing myself in the 'sconsTest/test' folder, I type scons -u and expect it to build my program; however, all it says is 'test' is up to date, when nothing has been compiled. I also noticed that when I remove both the variant_dir and duplicate args from test/SConscript, scons -u works.
Furthermore, I noticed it was possible for me to compile the program using the command
scons -u test2
However, I'm using scons on a large-scale project and I don't like giving a relative path as an argument to compile my project. I want scons -u to automatically build everything it finds in subdirs.
Does anyone have any idea how to resolve this issue?
Please check the MAN page again. The -u option will only build default targets at or below the current directory. This excludes your folder sconsTest/build when you're in sconsTest/test.
What you are looking for is the -U option (with the capital "U") instead. It builds all default targets that are defined in the SConscript(s) in the current directory, regardless of what directory the resultant targets end up in.
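For example, given the layout above:
cd sconsTest/test
scons -U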
Is there any way to get pip to print the config it will attempt to use? For debugging purposes it would be very nice to know that:
The config.ini files are in the correct place and pip is finding them.
The precedence of the config settings is treated in the way one would expect from the docs.
For 10.0.x and higher
There is a new pip config command to list the current configuration values:
pip config list
(As pointed out by @wmaddox in the comments) To get the list of locations where pip looks for config files:
pip config list -v
Pre 10.0.x
You can start a Python console and do the following. (If you have a virtualenv, don't forget to activate it first.)
from pip import create_main_parser
parser = create_main_parser()
# print all config files that it will try to read
print(parser.files)
# reads parser files that are actually found and prints their names
print(parser.config.read(parser.files))
create_main_parser is the function that creates the parser which pip uses to read params from the command line (optparse) and to load configs (configparser).
Possible file names for configurations are generated in get_config_files, including the PIP_CONFIG_FILE environment variable if it is set.
parser.config is an instance of RawConfigParser, so all the generated file names in get_config_files are passed to parser.config.read:
Attempt to read and parse a list of filenames, returning a list of filenames which were successfully parsed. If filenames is a string, it is treated as a single filename. If a file named in filenames cannot be opened, that file will be ignored. This is designed so that you can specify a list of potential configuration file locations (for example, the current directory, the user’s home directory, and some system-wide directory), and all existing configuration files in the list will be read. If none of the named files exist, the ConfigParser instance will contain an empty dataset. An application which requires initial values to be loaded from a file should load the required file or files using read_file() before calling read() for any optional files:
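As a tiny standalone illustration of that read() behavior (not pip-specific; the file names are made up):

from configparser import RawConfigParser  # on Python 2: from ConfigParser import RawConfigParser

parser = RawConfigParser()
# read() silently skips files that cannot be opened and returns
# the list of file names that were actually parsed
parsed = parser.read(['/etc/pip.conf', '/nonexistent/pip.ini'])
print(parsed)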
The way I see it, your question can be interpreted in three ways:
What is the configuration of the pip executable?
There is quite extensive documentation of the configurations supported by pip; see here: https://pip.pypa.io/en/stable/user_guide/#configuration
What is the configuration that pip uses when configuring and subsequently building code required by a Python module?
This is specified by the package that is being installed. The package maintainer is responsible for producing a configuration script. For example, Numpy has a Configuration class (https://github.com/numpy/numpy/blob/master/numpy/distutils/misc_util.py) that they use to configure their Cython build.
What are the current modules installed with pip so I can reproduce a specific environment configuration?
This is easy, pip freeze > requirements.txt. This will produce a file of all currently installed pip modules along with their exact versions. You can then do pip install -r requirements.txt to reproduce that exact environment configuration on another machine.
I hope this helps.
You can run pip in pdb. Here's an example inside ipython:
>>> import pip
>>> import pdb
>>> pdb.run("pip.main()", globals())
(Pdb) s
--Call--
> /usr/lib/python3.5/site-packages/pip/__init__.py(197)main()
-> def main(args=None):
(Pdb) b /usr/lib/python3.5/site-packages/pip/baseparser.py:146
Breakpoint 1 at /usr/lib/python3.5/site-packages/pip/baseparser.py:146
(Pdb) c
> /usr/lib/python3.5/site-packages/pip/baseparser.py(146)__init__()
-> if self.files:
(Pdb) p self.files
['/etc/xdg/pip/pip.conf', '/etc/pip.conf', '/home/andre/.pip/pip.conf', '/home/andre/.config/pip/pip.conf']
The only trick here was looking up the path of the baseparser (and knowing that the files are in there). If you don't know this already, you can simply step through the program or read the source. This type of exploration should work for most Python programs.
Scripts generated by zc.buildout using zc.recipe.egg, on our <package>/bin/ directory look like this:
#! <python shebang> -S
import sys
sys.path[0:0] = [
... # some paths derived from the eggs
... # some other paths included with zc.recipe.egg `extra-path`
]
# some user initialization code from zc.recipe.egg `initialization`
# import function, call function
What I have not been able to do is find a way to programmatically prepend a path to the sys.path construction introduced in every script. Is this possible?
Why: I have a version of my python project installed globally and another version of it installed locally (off-buildout tree). I want to be able to switch between these two versions.
Note: Clearly, one can use the zc.recipe.egg/initialization property to add something like:
initialization = sys.path[0:0] = ['/add/path/to/my/eggs']
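In context, that property would live in a zc.recipe.egg part; a minimal sketch (the part name, egg name, and path are illustrative):

[python]
recipe = zc.recipe.egg
interpreter = python
eggs = my_project
initialization =
    import sys
    sys.path[0:0] = ['/add/path/to/my/eggs']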
But, is there any other way? Extra points for an example!
Finally, I got a working environment by creating my own buildout recipe, which you can find here: https://github.com/idiap/local.bob.recipe. The file that contains the recipe is this one: https://github.com/idiap/local.bob.recipe/blob/master/config.py. There are lots of checks specific to our software in the class constructor, and some extra improvements as well, but don't be bothered by that. The "real meat (TM)" is in the install() method of that class. It goes more or less like this:
# write an .egg-link file into buildout's eggs directory, pointing
# at the externally built package
egg_link = os.path.join(self.buildout['buildout']['eggs-directory'],
                        'external-package.egg-link')
f = open(egg_link, 'wt')
f.write(self.options['install-directory'] + '\n')
f.close()
# register the created file so buildout can uninstall it cleanly later
self.options.created(egg_link)
return self.options.created()
This will do the trick. My external (CMake-based) package now only has to create the right .egg-info file alongside the Python package(s) it builds. Then, using the above recipe, I can tie in the usage of a specific package installation like this:
[buildout]
parts = external_package python
develop = .
eggs = my_project
       external_package
       recipe.as.above

[external_package]
recipe = recipe.as.above:config
install-directory = ../path/to/my/local/package/build

[python]
recipe = zc.recipe.egg
interpreter = python
eggs = ${buildout:eggs}
If you wish to switch installations, just change the install-directory property above. If you wish to use the default installation available system-wide, just remove the recipe.as.above constructions from your buildout.cfg file altogether. Buildout will then find the global installation without requiring any extra configuration. Uninstallation will work properly as well, so switching between builds will just work.
Here is a fully working buildout .cfg file that we use here: https://github.com/idiap/bob.project.example/blob/master/localbob.cfg
The question is: Is there an easier way to achieve the same without this external recipe?
Well, what you're missing is probably the most useful buildout extension, mr.developer.
Typically the package, let's say foo.bar, will be in some repo, let's say git.
Your buildout will look like:
[buildout]
extensions = mr.developer
[sources]
foo.bar = git git@github.com:foo/foo.bar.git
If you don't have your package in a repo, you can use fs instead of git, have a look at the documentation for details.
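For example, a local filesystem source might be declared like this instead (directory name illustrative; consult the mr.developer docs for the exact fs syntax):

[sources]
foo.bar = fs foo.bar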
Activating the "local" version is done by
./bin/develop a foo.bar
Deactivating is done by
./bin/develop d foo.bar
There are quite a few other things you can do with mr.developer, do check it out!
I'm using net-snmp's python libraries to do some long queries on various switches. I would like to be able to load new mibs -- but I cannot find any documentation on how to do this.
PySNMP appears to be rather complicated and requires me to create Python objects for each mib (which doesn't scale for me), so I'm stuck with net-snmp's libraries (which aren't bad, except for the mib-loading thing).
I know I can use the -m and -M options with the net-snmp command-line tools, and there's documentation on compiling the net-snmp suite (./configure, make, etc.) with all the mibs (and, I assume, into the libraries too); if the Python libraries do not offer the ability to load mibs, can I at least configure net-snmp to give my Python libraries access to the mibs without having to recompile?
I found an answer after all. From the snmpcmd(1) man page:
-m MIBLIST
Specifies a colon separated list of MIB modules (not
files) to load for this application. This overrides (or
augments) the environment variable MIBS, the snmp.conf
directive mibs, and the list of MIBs hardcoded into the
Net-SNMP library.
The key part here is that you can use the MIBS environment variable the same way you use the -m command line option...and that support for this is implemented at the library level. This means that if you define the MIBS environment variable prior to starting Python, it will affect the behavior of the netsnmp library:
$ python
Python 2.7.2 (default, Oct 27 2011, 01:40:22)
[GCC 4.6.1 20111003 (Red Hat 4.6.1-10)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import netsnmp
>>> os.environ['MIBS'] = 'UPS-MIB:SNMPv2-SMI'
>>> oid = netsnmp.Varbind('upsAlarmOnBattery.0')
>>> netsnmp.snmpget(oid, Version=1, DestHost='myserver', Community='public')
('0',)
>>>
Note that you must set os.environ['MIBS'] before calling any of the netsnmp module functions (once the library has been loaded, any subsequent environment changes will have no effect).
You can (obviously) also set the environment variable outside of Python:
$ export MIBS='UPS-MIB:SNMPv2-SMI'
$ python
>>> import netsnmp
>>> oid = netsnmp.Varbind('upsAlarmOnBattery.0')
>>> netsnmp.snmpget(oid, Version=1, DestHost='myserver', Community='public')
('0',)
>>>
You technically don't have to initialize or export any environment variables if you configure net-snmp properly.
(Noting that I'm on Ubuntu 12.04.1 LTS so I really didn't have to compile net-snmp from source, and even though I'll cover the entirety of what I did for completeness, this should really only apply if you want to set up some MIBs to be automatically slurped in by net-snmp or its Python bindings.)
First I did sudo apt-get install libsnmp-base libsnmp-python libsnmp15 snmp
This will install net-snmp and its libraries, as well as the Python bindings. It also installs some default MIBs (only for net-snmp) in /usr/share/mibs/netsnmp/. If you want to grab a bunch of other IETF/IANA MIBs, do:
sudo apt-get install snmp-mibs-downloader
Which, as you'd expect, will download a ton of other standard MIBs (including IF-MIB and such) into /var/lib/mibs/iana, /var/lib/mibs/ietf and also /usr/share/mibs/iana and /usr/share/mibs/ietf. The snmp-mibs-downloader package also gives you the /usr/bin/download-mibs command if you want to download the MIBs again.
Next, use the snmpconf command to set up your net-snmp environment:
$ snmpconf -h
/usr/bin/snmpconf [options] [FILETOCREATE...]
options:
    -f           overwrite existing files without prompting
    -i           install created files into /usr/share/snmp.
    -p           install created files into /home/$USER/.snmp.
    -I DIR       install created files into DIR.
    -a           Don't ask any questions, just read in current
                 .conf files and comment them
    -r all|none  Read in all or none of the .conf files found.
    -R file,...  Read in a particular list of .conf files.
    -g GROUP     Ask a series of GROUPed questions.
    -G           List known GROUPs.
    -c conf_dir  use alternate configuration directory.
    -q           run more quietly with less advice.
    -d           turn on debugging output.
    -D           turn on debugging dumper output.
I used snmpconf -p and walked through the menu items. The process basically looks for existing snmp.conf files (/etc/snmp/snmp.conf by default) and merges those in with the newly created config file, which gets put in /home/$USER/.snmp/snmp.conf as specified by the -p option. From there on out you really only need to tell snmpconf where to look for MIBs, but the script provides a number of useful options for generating configuration directives in snmp.conf.
You should have a mostly working environment after you finish up with snmpconf. Here's what my (very bare-bones) /home/$USER/.snmp/snmp.conf looks like:
###########################################################################
#
# snmp.conf
#
# - created by the snmpconf configuration program
#
###########################################################################
# SECTION: Textual mib parsing
#
# This section controls the textual mib parser. Textual
# mibs are parsed in order to convert OIDs, enumerated
# lists, and ... to and from textual representations
# and numerical representations.
# mibdirs: Specifies directories to be searched for mibs.
# Adding a '+' sign to the front of the argument appends the new
# directory to the list of directories already being searched.
# arguments: [+]directory[:directory...]
mibdirs : +/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp:/home/$USERNAME/.snmp/mibs/newmibs
# mibs: Specifies a list of mibs to be searched for and loaded.
# Adding a '+' sign to the front of the argument appends the new
# mib name to the list of mibs already being searched for.
# arguments: [+]mibname[:mibname...]
mibs +ALL
Some gotchas:
When net-snmp loads this config file it doesn't do a recursive directory search, so you have to give an absolute path to the directory where the MIBs live.
If you choose to tell net-snmp to load all 300+ MIBs in those directories, it could slow down your SNMP queries, and some things are bound to be dumped to STDERR because some MIBs are out of date, wrong, or trying to import definitions from MIBs that don't exist or weren't downloaded by the package. Your options are: tell snmpconf how you want those errors to be handled, or figure out what's missing or out of date and download the MIB yourself. If you go for the latter, you may find yourself going down a MIB rabbit hole, so keep that in mind. Personally, I'd suggest that you load them all, and then work backwards to load only the MIBs that make sense for polling a particular device.
The order of the directories that you specify in the search path in snmp.conf is important, especially if some MIBs reference or depend on other MIBs. I made one error go away simply by moving a MIB file from the iana directory into the ietf directory. I'm sure there's a way to programmatically figure out which MIBs depend on which others and make them happily coexist in a single directory, but I didn't want to waste a bunch of time trying to figure that out.
The moral of the story is, if you've got a proper snmp.conf, you should just be able to do this:
$ python
>>> import netsnmp
>>> oid = netsnmp.VarList(netsnmp.Varbind('dot1qTpFdbPort'))
>>> res = netsnmp.snmpwalk(oid, Version=2, DestHost='10.0.0.1', Community='pub')
>>> print res
('2', '1')
>>>
FYI, I omitted a bunch of STDERR output, but again, you can configure your environment to dump STDERR to a logfile via snmp.conf configuration directives if you wish.
Hope this helps.