python net-snmp loading mibs

I'm using net-snmp's python libraries to do some long queries on various switches. I would like to be able to load new mibs -- but I cannot find any documentation on how to do this.
PySNMP appears to be rather complicated and requires me to create Python objects for each mib (which doesn't scale for me); so I'm stuck with net-snmp's libraries (which aren't bad except for the loading mib thing).
I know I can use the -m and -M options with the net-snmp command-line tools, and there's documentation on pre-compiling the net-snmp suite (./configure, make etc.) with all the mibs (and I assume into the libraries too); if the Python libraries do not offer the ability to load mibs, can I at least configure net-snmp to provide my python libraries access to the mibs without having to recompile?

I found an answer after all. From the snmpcmd(1) man page:
-m MIBLIST
       Specifies a colon separated list of MIB modules (not files) to
       load for this application. This overrides (or augments) the
       environment variable MIBS, the snmp.conf directive mibs, and
       the list of MIBs hardcoded into the Net-SNMP library.
The key part here is that you can use the MIBS environment variable the same way you use the -m command line option...and that support for this is implemented at the library level. This means that if you define the MIBS environment variable prior to starting Python, it will affect the behavior of the netsnmp library:
$ python
Python 2.7.2 (default, Oct 27 2011, 01:40:22)
[GCC 4.6.1 20111003 (Red Hat 4.6.1-10)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import netsnmp
>>> os.environ['MIBS'] = 'UPS-MIB:SNMPv2-SMI'
>>> oid = netsnmp.Varbind('upsAlarmOnBattery.0')
>>> netsnmp.snmpget(oid, Version=1, DestHost='myserver', Community='public')
('0',)
>>>
Note that you must set os.environ['MIBS'] before calling any of the netsnmp module functions: the first call loads the library, and environment changes after that point will have no effect.
You can (obviously) also set the environment variable outside of Python:
$ export MIBS='UPS-MIB:SNMPv2-SMI'
$ python
>>> import netsnmp
>>> oid = netsnmp.Varbind('upsAlarmOnBattery.0')
>>> netsnmp.snmpget(oid, Version=1, DestHost='myserver', Community='public')
('0',)
>>>
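The same ordering rule applies in a standalone script: set the variable at the very top, before anything calls into netsnmp. A minimal sketch, reusing the placeholder host and community string from above:
import os
# must be set before the first netsnmp call, which is what loads the library
os.environ['MIBS'] = 'UPS-MIB:SNMPv2-SMI'

import netsnmp

oid = netsnmp.Varbind('upsAlarmOnBattery.0')
print netsnmp.snmpget(oid, Version=1, DestHost='myserver', Community='public')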

You technically don't have to initialize or export any environment variables if you configure net-snmp properly.
(Note that I'm on Ubuntu 12.04.1 LTS, so I didn't have to compile net-snmp from source. I'll cover everything I did for completeness, but this should really only apply if you want to set up some MIBs to be slurped in automatically by net-snmp or its Python bindings.)
First I did sudo apt-get install libsnmp-base libsnmp-python libsnmp15 snmp
This will install net-snmp and its libraries, as well as the Python bindings. It also installs some default MIBs (for net-snmp only) in /usr/share/mibs/netsnmp/. If you want to grab a bunch of other IETF/IANA MIBs, do:
sudo apt-get install snmp-mibs-downloader
This, as you'd expect, downloads a ton of other standard MIBs (including IF-MIB and such) into /var/lib/mibs/iana and /var/lib/mibs/ietf, and also /usr/share/mibs/iana and /usr/share/mibs/ietf. The snmp-mibs-downloader package also gives you the /usr/bin/download-mibs command if you want to download the MIBs again.
Next, use the snmpconf command to set up your net-snmp environment:
$ snmpconf -h
/usr/bin/snmpconf [options] [FILETOCREATE...]
options:
    -f           overwrite existing files without prompting
    -i           install created files into /usr/share/snmp.
    -p           install created files into /home/$USER/.snmp.
    -I DIR       install created files into DIR.
    -a           Don't ask any questions, just read in current
                 .conf files and comment them
    -r all|none  Read in all or none of the .conf files found.
    -R file,...  Read in a particular list of .conf files.
    -g GROUP     Ask a series of GROUPed questions.
    -G           List known GROUPs.
    -c conf_dir  use alternate configuration directory.
    -q           run more quietly with less advice.
    -d           turn on debugging output.
    -D           turn on debugging dumper output.
I used snmpconf -p and walked through the menu items. The process looks for existing snmp.conf files (/etc/snmp/snmp.conf by default), merges them into the newly created config file, and puts the result in /home/$USER/.snmp/snmp.conf, the location specified by the -p option. From there on out you really only need to tell snmpconf where to look for MIBs, but the script offers a number of other useful options for generating configuration directives in snmp.conf.
You should have a mostly working environment after you finish up with snmpconf. Here's what my (very bare-bones) /home/$USER/.snmp/snmp.conf looks like:
###########################################################################
#
# snmp.conf
#
# - created by the snmpconf configuration program
#
###########################################################################
# SECTION: Textual mib parsing
#
# This section controls the textual mib parser. Textual
# mibs are parsed in order to convert OIDs, enumerated
# lists, and ... to and from textual representations
# and numerical representations.
# mibdirs: Specifies directories to be searched for mibs.
# Adding a '+' sign to the front of the argument appends the new
# directory to the list of directories already being searched.
# arguments: [+]directory[:directory...]
mibdirs : +/usr/share/mibs/iana:/usr/share/mibs/ietf:/usr/share/mibs/netsnmp:/home/$USERNAME/.snmp/mibs/newmibs
# mibs: Specifies a list of mibs to be searched for and loaded.
# Adding a '+' sign to the front of the argument appends the new
# mib name to the list of mibs already being searched for.
# arguments: [+]mibname[:mibname...]
mibs +ALL
Some gotchas:
When net-snmp loads this config file it doesn't do a recursive directory search, so you have to give an absolute path to the directory where the MIBs live.
If you tell net-snmp to load all 300+ MIBs in those directories, it can slow down your SNMP queries, and some errors are bound to be dumped to STDERR, because some MIBs will be out of date, wrong, or importing definitions from MIBs that don't exist or weren't downloaded by the package. Your options are: tell snmpconf how you want those errors handled, or figure out what's missing or out of date and download the MIB yourself. If you go for the latter, you may find yourself going down a MIB rabbit hole, so keep that in mind. Personally, I'd suggest loading them all, then working backwards to load only the MIBs that make sense for polling a particular device.
The order of the directories you specify in the search path in snmp.conf is important, especially if some MIBs reference or depend on other MIBs. I made one error go away simply by taking a MIB file from the iana directory and moving it into the ietf directory. I'm sure there's a way to programmatically figure out which MIBs depend on which and make them happily coexist in a single directory, but I didn't want to spend a bunch of time trying to figure that out.
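If you do want to chase the dependencies programmatically, a rough starting point (my own sketch, not a net-snmp tool) is to scan each MIB file for the module names in its IMPORTS clause; the regex below is naive and will produce some false positives, but it is enough to see who depends on whom:
import os
import re

mibdirs = ['/usr/share/mibs/iana', '/usr/share/mibs/ietf', '/usr/share/mibs/netsnmp']
# matches lines like "... FROM SNMPv2-SMI" inside an IMPORTS clause
from_re = re.compile(r'FROM\s+([A-Za-z][\w-]*)')

deps = {}
for d in mibdirs:
    for name in os.listdir(d):
        try:
            text = open(os.path.join(d, name)).read()
        except IOError:
            continue
        deps[name] = sorted(set(from_re.findall(text)))

for mib, needed in sorted(deps.items()):
    print '%s imports from: %s' % (mib, ', '.join(needed) or 'nothing')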
The moral of the story is, if you've got a proper snmp.conf, you should just be able to do this:
$ python
>>> import netsnmp
>>> oid = netsnmp.VarList(netsnmp.Varbind('dot1qTpFdbPort'))
>>> res = netsnmp.snmpwalk(oid, Version=2, DestHost='10.0.0.1', Community='pub')
>>> print res
('2', '1')
>>>
FYI, I omitted a bunch of STDERR output, but again, you can configure your environment to dump STDERR to a logfile via snmp.conf configuration directives if you wish.
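A quick way to sanity-check that a MIB actually loaded is snmptranslate from the net-snmp tools, which honors the same snmp.conf:
$ snmptranslate -On UPS-MIB::upsAlarmOnBattery.0
If the MIB was found this prints the numeric OID; otherwise it complains that the object is unknown.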
Hope this helps.

Related

Unpacking PyInstaller packed files

I currently have a PyInstaller-packed ELF file and I'm looking to unpack it into the original .py file(s). I have been using PyInstaller Extractor, but it appears to be telling me the archive is not a PyInstaller archive.
Here is an example of what I've been doing:
$ cat main.py
#! /usr/bin/python3
print ("Hello %s" % ("World"))
I pack it in the file dist/main/main with the command:
pyinstaller main.py
Which outputs the file:
$ file dist/main/main
dist/main/main: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=373ec5dee826653796e927ac3d65c9a8ec7db9da, stripped
Now, when I want to unpack it:
$ python pyinstxtractor.py dist/main/main
[*] Processing dist/main/main
[*] Error : Unsupported pyinstaller version or not a pyinstaller archive
I don't understand why the file cannot be unpacked; I've been looking through many posts saying this should be possible, and I'm beginning to doubt it.
Is the unpacking of the ELF file actually possible?
Am I doing it the right way?
According to the GitHub page, this script is applicable only to Windows binaries. There is an archive_viewer.py script distributed with pyinstaller itself that lets you view a binary's contents and extract them. If you get a .pyz file after extraction, use archive_viewer.py on it again. IIRC, at the end you will get .pyc files, which have to be decompiled.
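For that final step, a decompiler such as uncompyle6 is commonly used (assuming it supports the bytecode version of the extracted .pyc files):
$ uncompyle6 some_module.pyc > some_module.py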
On my system (Manjaro Linux) I found this script at /lib/python3.6/site-packages/PyInstaller/utils/cliutils.
It is also available as pyi-archive_viewer (at /usr/bin/pyi-archive_viewer) after installing to the global interpreter.
Using the pyi-archive_viewer CLI seems to be the supported solution; here it prints only the module names, recursively, and quits instead of prompting:
$ pyi-archive_viewer --log --recursive --brief build/PYZ-00.pyz
['__future__',
 '_aix_support',
 ---SNIP---
 'zipfile',
 'zipimport']
But if you don't want to parse or unsafely eval() the CLI output, using the library directly seems to work:
from PyInstaller.utils.cliutils import archive_viewer
archive = archive_viewer.get_archive('build/PYZ-00.pyz')
output = []
archive_viewer.get_content(archive, recursive=True, brief=True, output=output)
# Now, output is ['__future__', '_aix_support', ---SNIP--- 'zipfile', 'zipimport']
This use of the library is undocumented, but it is essentially the same as what the CLI does given those flags.

Is it possible to get pip to print the configuration it is using?

Is there any way to get pip to print the config it will attempt to use? For debugging purposes it would be very nice to know that:
config.ini files are in the correct place and pip is finding them.
The precedence of the config settings is treated in the way one would expect from the docs.
For 10.0.x and higher
There is a new pip config command to list the current configuration values:
pip config list
(As pointed out by @wmaddox in the comments) To get the list of places where pip looks for config files:
pip config list -v
Pre 10.0.x
You can start a Python console and do the following. (If you have a virtualenv, don't forget to activate it first.)
from pip import create_main_parser
parser = create_main_parser()
# print all config files that it will try to read
print(parser.files)
# reads parser files that are actually found and prints their names
print(parser.config.read(parser.files))
create_main_parser is the function that creates the parser which pip uses to read params from the command line (optparse) and to load configs (configparser).
Possible file names for configurations are generated in get_config_files, including the PIP_CONFIG_FILE environment variable if it is set.
parser.config is an instance of RawConfigParser, so all the file names generated in get_config_files are passed to parser.config.read. From the configparser docs:
Attempt to read and parse a list of filenames, returning a list of filenames which were successfully parsed. If filenames is a string, it is treated as a single filename. If a file named in filenames cannot be opened, that file will be ignored. This is designed so that you can specify a list of potential configuration file locations (for example, the current directory, the user's home directory, and some system-wide directory), and all existing configuration files in the list will be read. If none of the named files exist, the ConfigParser instance will contain an empty dataset. An application which requires initial values to be loaded from a file should load the required file or files using read_file() before calling read() for any optional files.
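That tolerant behavior is plain configparser, so the discovery step can be reproduced with the standard library alone. A minimal sketch (the candidate paths are illustrative, not pip's actual search list):
import os
from configparser import RawConfigParser  # the module is named ConfigParser on Python 2

candidates = ['/etc/pip.conf', os.path.expanduser('~/.config/pip/pip.conf')]
parser = RawConfigParser()
found = parser.read(candidates)  # files that don't exist are silently skipped
print(found)                     # lists only the files that were actually parsed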
The way I see it, your question can be interpreted in three ways:
What is the configuration of the pip executable?
There is a quite extensive documentation for the configurations supported by pip, see here: https://pip.pypa.io/en/stable/user_guide/#configuration
What is the configuration that pip uses when configuring and subsequently building code required by a Python module?
This is specified by the package that is being installed. The package maintainer is responsible for producing a configuration script. For example, Numpy has a Configuration class (https://github.com/numpy/numpy/blob/master/numpy/distutils/misc_util.py) that they use to configure their Cython build.
What are the current modules installed with pip so I can reproduce a specific environment configuration?
This is easy: pip freeze > requirements.txt. This will produce a file of all currently installed pip packages along with their exact versions. You can then run pip install -r requirements.txt to reproduce that exact environment configuration on another machine.
I hope this helps.
You can run pip in pdb. Here's an example inside ipython:
>>> import pip
>>> import pdb
>>> pdb.run("pip.main()", globals())
(Pdb) s
--Call--
> /usr/lib/python3.5/site-packages/pip/__init__.py(197)main()
-> def main(args=None):
(Pdb) b /usr/lib/python3.5/site-packages/pip/baseparser.py:146
Breakpoint 1 at /usr/lib/python3.5/site-packages/pip/baseparser.py:146
(Pdb) c
> /usr/lib/python3.5/site-packages/pip/baseparser.py(146)__init__()
-> if self.files:
(Pdb) p self.files
['/etc/xdg/pip/pip.conf', '/etc/pip.conf', '/home/andre/.pip/pip.conf', '/home/andre/.config/pip/pip.conf']
The only trick here was looking up the path of the baseparser (and knowing that the files are in there). If you don't know this already you can simply step through the program or read the source. This type of exploration should work for most Python programs.

Automatic version number both in setup.py (setuptools) AND source code?

SITUATION:
I have a python library which is controlled by git and bundled with distutils/setuptools. I want to automatically generate the version number based on git tags, both for setup.py sdist and similar commands, and for the library itself.
For the first task I can use git describe or alike solutions (see How can I get the version defined in setup.py (setuptools) in my package?).
When, for example, I am on a tag '0.1' and call 'setup.py sdist', I get 'mylib-0.1.tar.gz'; or 'mylib-0.1-3-abcd.tar.gz' if I altered the code after tagging. This is fine.
THE PROBLEM IS:
The problem comes when I want to have this version number available for the library itself, so it could send it in User-Agent HTTP header as 'mylib/0.1-3-adcd'.
If I add a setup.py version command as in How can I get the version defined in setup.py (setuptools) in my package?, then this version.py is generated AFTER the tag is made, since it uses the tag as a value. But in that case I need to make one more commit after the version tag to make the code consistent, which in turn requires a new tag for further bundling.
THE QUESTION IS:
How to break this circle of dependencies (generate-commit-tag-generate-commit-tag-...)?
You could also reverse the dependency: put the version in mylib/__init__.py, parse that file in setup.py to get the version parameter, and use git tag $(setup.py --version) on the command line to create your tag.
git tag -a v$(python setup.py --version) -m 'description of version'
Is there anything more complicated you want to do that I haven’t understood?
A classic issue when toying with keyword expansion ;)
The key is to realize that your tag is part of the release management process, not part of the development (and its version control) process.
In other words, you cannot include release management data in a development repository, because of the loop you illustrate in your question.
You need, when generating the package (which is the "release management part"), to write that information in a file that your library will look for and use (if said file exists) for its User-Agent HTTP header.
Since this topic is still alive and sometimes comes up in search results, I would like to mention another solution, which first appeared in 2012 and is now more or less usable:
https://github.com/warner/python-versioneer
It works in a different way than all the solutions mentioned: you add git tags manually, and the library (and setup.py) reads the tags and builds the version string dynamically.
The version string includes the latest tag, the distance from that tag, the current commit hash, "dirtiness", and some other info. It has a few different version formats.
But it still has no branch name for so-called "custom builds", and the commit distance can be confusing when two branches are based on the same commit, so it is better to tag & release only one selected branch (master).
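For reference, the setup.py side of versioneer is short once its install step has been run; per its README it looks roughly like this (details may vary between versioneer releases):
import versioneer
from setuptools import setup

setup(
    name='mylib',
    version=versioneer.get_version(),
    cmdclass=versioneer.get_cmdclass(),
    .....
)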
Eric's idea was the simple way to go; just in case it's useful, here is the code I used (Flask's team did it this way):
import re
import ast

_version_re = re.compile(r'__version__\s+=\s+(.*)')

with open('app_name/__init__.py', 'rb') as f:
    version = str(ast.literal_eval(_version_re.search(
        f.read().decode('utf-8')).group(1)))

setup(
    name='app-name',
    version=version,
    .....
)
If you found versioneer excessively convoluted, you can try bump2version.
Just add a simple bumpversion configuration file in the root of your library. This file indicates where in your repository there are strings storing the version number. Then, to update the version in all the indicated places for a minor release, just type:
bumpversion minor
Use patch or major if you want to release a patch or a major.
That is not all bumpversion does: there are other flags and config options, such as automatically tagging the repository, for which you can check the official documentation.
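For illustration, a minimal .bumpversion.cfg might look like this (the file path and version number are made up; see the docs for the full syntax):
[bumpversion]
current_version = 0.1.0
commit = True
tag = True

[bumpversion:file:mylib/__init__.py]
With that in place, bumpversion minor rewrites the version string in mylib/__init__.py, commits, and tags, all in one go.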
Following OGHaza's solution in a similar SO question I keep a file _version.py that I parse in setup.py. With the version string from there, I git tag in setup.py. Then I set the setup version variable to a combination of version string plus the git commit hash. So here is the relevant part of setup.py:
from setuptools import setup, find_packages
from codecs import open
from os import path
import subprocess

here = path.abspath(path.dirname(__file__))

import re, os
VERSIONFILE = os.path.join(here, "_version.py")
verstrline = open(VERSIONFILE, "rt").read()
VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
mo = re.search(VSRE, verstrline, re.M)
if mo:
    verstr = mo.group(1)
else:
    raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,))

if os.path.exists(os.path.join(here, '.git')):
    # commands are passed as argument lists so shell=True isn't needed
    git_hash = subprocess.check_output(
        ['git', 'rev-parse', '--verify', '--short', 'HEAD']).strip()
    # tag git
    gitverstr = 'v' + verstr
    tags = subprocess.check_output(['git', 'tag'])
    if gitverstr not in tags:
        subprocess.check_output(
            ['git', 'tag', '-a', gitverstr, git_hash,
             '-m', 'tagged by setup.py to %s' % verstr])
    # use the git hash in the setup
    verstr += ', git hash: %s' % git_hash

setup(
    name='a_package',
    version=verstr,
    ....
As was mentioned in another answer, this is related to the release process, not the development process; as such it is not a git issue in itself, but more a question of how your release workflow is organized.
A very simple variant is to use this:
python setup.py egg_info -b ".`date '+%Y%m%d'`git`git rev-parse --short HEAD`" build sdist
The portion between the quotes is up for customization; however, I tried to follow the typical Fedora/RedHat package names.
Of note: even though egg_info suggests a relation to .egg, it is actually used throughout the toolchain (for bdist_wheel as well, for example) and has to be specified at the beginning.
In general, your pre-release and post-release versions should live outside setup.py or any kind of importable version.py. The topic of versioning and egg_info is covered in detail here.
Example:
v1.3.4dev.20200813gitabcdef0
The v1.3.4 is in setup.py or any other variation you would like
The dev and 20200813gitabcdef0 is generated during the build process (example above)
None of the files generated during the build are checked into git (they are usually filtered out by default via .gitignore); sometimes there is a separate "deployment" repository, or similar, completely separate from the source one.
A more complex way would be to have your release workflow encoded in a Makefile, which is outside the scope of this question; however, a good source of inspiration can be found here and here. You will find good correspondences between Makefile targets and setup.py commands.

SCons configuration file and default values

I have a project which I build using SCons (and MinGW/gcc depending on the platform). This project depends on several other libraries (let's call them libfoo and libbar) which can be installed in different places for different users.
Currently, my SConstruct file embeds hard-coded paths to those libraries (say, something like C:\libfoo).
Now, I'd like to add a configuration option to my SConstruct file so that a user who installed libfoo at another location (say C:\custom_path\libfoo) can do something like:
> scons --configure --libfoo-prefix=C:\custom_path\libfoo
Or:
> scons --configure
scons: Reading SConscript files ...
scons: done reading SConscript files.
### Environment configuration ###
Please enter location of 'libfoo' ("C:\libfoo"): C:\custom_path\libfoo
Please enter location of 'libbar' ("C:\libfoo"): C:\custom_path\libbar
### Configuration over ###
Once chosen, those configuration options should be written to some file and reread automatically every time scons runs.
Does scons provide such a mechanism? How would I achieve this behavior? I don't exactly master Python, so even obvious (but complete) solutions are welcome.
Thanks.
SCons has a feature called "Variables". You can set it up so that it reads from command line argument variables pretty easily. So in your case you would do something like this from the command line:
scons LIBFOO=C:\custom_path\libfoo
... and the variable would be remembered between runs. So next time you just run scons and it uses the previous value of LIBFOO.
In code you use it like so:
# read variables from the cache, a user's custom.py file or command
# line arguments
var = Variables(['variables.cache', 'custom.py'], ARGUMENTS)

# add a path variable
var.AddVariables(PathVariable('LIBFOO',
                              'where the foo library is installed',
                              r'C:\default\libfoo', PathVariable.PathIsDir))

env = Environment(variables=var)
env.Program('test', 'main.c', LIBPATH='$LIBFOO')

# save variables to a file
var.Save('variables.cache', env)
If you really wanted to use "--" style options then you could combine the above with the AddOption function, but it is more complicated.
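For completeness, here is a rough sketch of the "--" option route (the option name mirrors the question and the default path is made up; note that unlike Variables, values passed this way are not cached between runs, so you would still combine this with var.Save() for persistence):
# SConstruct
AddOption('--libfoo-prefix',
          dest='libfoo_prefix',
          type='string',
          nargs=1,
          action='store',
          default=r'C:\libfoo',
          help='installation prefix of libfoo')

env = Environment(LIBFOO=GetOption('libfoo_prefix'))
env.Program('test', 'main.c', LIBPATH='$LIBFOO')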
This SO question talks about the issues involved in getting values out of the Variables object without passing them through an Environment.

Setting up Django on an internal server (os.environ() not working as expected?)

I'm trying to setup Django on an internal company server. (No external connection to the Internet.)
Looking over the server setup documentation it appears that the "Running Django on a shared-hosting provider with Apache" method seems to be the most-likely to work in this situation.
Here's the server information:
Can't install mod_python
no root access
Server is SunOs 5.6
Python 2.5
Apache/2.0.46
I've installed Django (and flup) using the --prefix option (reading again I probably should've used --home, but at the moment it doesn't seem to matter)
I've added the .htaccess file and mysite.fcgi file to my root web directory as mentioned here.
When I run the mysite.fcgi script on the server I get the expected output (the correct site HTML). But it doesn't work when I try to access it from a browser.
It seems that it may be a problem with the PYTHONPATH setting since I'm using the prefix option.
I've noticed that if I run mysite.fcgi from the command line without setting the PYTHONPATH environment variable, it throws the following error:
prompt$ python2.5 mysite.fcgi
ERROR:
No module named flup
Unable to load the flup package. In order to run django as a FastCGI
application, you will need to get flup from
http://www.saddi.com/software/flup/
If you've already installed flup, then make sure you have it in your
PYTHONPATH.
I've added sys.path.append(prefixpath) and os.environ['PYTHONPATH'] = prefixpath to mysite.fcgi, but if I set the environment variable to empty on the command line and then run mysite.fcgi, I still get the above error.
Here are some command-line results:
>>> os.environ['PYTHONPATH'] = 'Null'
>>>
>>> os.system('echo $PYTHONPATH')
Null
>>> os.environ['PYTHONPATH'] = '/prefix/path'
>>>
>>> os.system('echo $PYTHONPATH')
/prefix/path
>>> exit()
prompt$ echo $PYTHONPATH
Null
It looks like Python is setting the variable OK, but the variable is only applicable inside of the script. Flup appears to be distributed as an .egg file, and my guess is that the egg implementation doesn't take into account variables added by os.environ['key'] = value (?) at least when installing via the --prefix option.
I'm not that familiar with .pth files, but it seems that the easy-install.pth file is the one that points to flup:
import sys; sys.__plen = len(sys.path)
./setuptools-0.6c6-py2.5.egg
./flup-1.0.1-py2.5.egg
import sys; new=sys.path[sys.__plen:]; del sys.path[sys.__plen:]; p=getattr(sys,'__egginsert',0); sys.path[p:p]=new; sys.__egginsert = p+len(new)
It looks like it's doing something funky; is there any way to edit this, or add something to my code, so it will find flup?
In your settings you have to point to the actual egg file, not the directory where the egg file is located. It should look something like:
sys.path.append('/path/to/flup/egg/flup-1.0.1-py2.5.egg')
Try using a utility called virtualenv. According to the official package page, "virtualenv is a tool to create isolated Python environments."
It'll take care of the PYTHONPATH stuff for you and make it easy to correctly install Django and flup.
Use site.addsitedir(), not os.environ['PYTHONPATH'] or sys.path.append().
site.addsitedir interprets the .pth files; modifying os.environ or sys.path does not, at least not in a FastCGI environment.
#!/usr/bin/python2.6
import os
import site

# adds a directory to sys.path and processes its .pth files
site.addsitedir('/path/to/local/prefix/site-packages/')

# avoids a permissions error when writing to the system egg-cache
os.environ['PYTHON_EGG_CACHE'] = '/path/to/local/prefix/egg-cache'
To modify the PYTHONPATH from a Python script you should use:
sys.path.append("prefixpath")
Try this instead of modifying it with os.environ().
And I would recommend running Django with mod_python instead of using FastCGI...
