Pulling Switch SNMP Communities List with Python pysnmp

The code below works for other OIDs such as the hostname (1.3.6.1.2.1.1.5.0), however I am having trouble pulling the SNMP communities table (the list of IPs allowed to use SNMP).
I searched for "communities" for Cisco on http://www.mibdepot.com/ and found 5 OIDs, none of which returned anything:
.1.3.6.1.4.1.224.2.3.6.3.1
.1.3.6.1.4.1.224.2.3.6.1.0
.1.3.6.1.4.1.224.2.3.6.3
.1.3.6.1.4.1.224.2.3.6.4.1
.1.3.6.1.4.1.224.2.3.6.4
Any guidance on this would be much appreciated. Thank you!
from pysnmp import hlapi

def get(target, oids, credentials, port=161, engine=hlapi.SnmpEngine(), context=hlapi.ContextData()):
    handler = hlapi.getCmd(
        engine,
        credentials,
        hlapi.UdpTransportTarget((target, port)),
        context,
        *construct_object_types(oids)
    )
    return fetch(handler, 1)[0]

def construct_object_types(list_of_oids):
    object_types = []
    for oid in list_of_oids:
        object_types.append(hlapi.ObjectType(hlapi.ObjectIdentity(oid)))
    return object_types

def fetch(handler, count):
    result = []
    for i in range(count):
        try:
            error_indication, error_status, error_index, var_binds = next(handler)
            if not error_indication and not error_status:
                items = {}
                for var_bind in var_binds:
                    items[str(var_bind[0])] = cast(var_bind[1])
                result.append(items)
            else:
                raise RuntimeError('Got SNMP error: {0}'.format(error_indication))
        except StopIteration:
            break
    return result

def cast(value):
    try:
        return int(value)
    except (ValueError, TypeError):
        try:
            return float(value)
        except (ValueError, TypeError):
            try:
                return str(value)
            except (ValueError, TypeError):
                pass
    return value

def getSNMPCommunities(ip):
    try:
        communities = get(ip, ['1.3.6.1.4.1.224.2.3.6.1.0'], hlapi.CommunityData('public'))
        return communities.get('1.3.6.1.4.1.224.2.3.6.1.0')
    except:
        return None

snmpCommunities = getSNMPCommunities('10.0.0.1')
print(type(snmpCommunities))
print(snmpCommunities)
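
As an aside, a plain GET on a table OID usually returns nothing; table rows are normally retrieved with a GETNEXT walk. A minimal sketch using pysnmp's nextCmd (reusing the community and target from the question; illustrative only, and a read-only community may still not expose this data):

from pysnmp import hlapi

def walk(target, base_oid, community='public', port=161):
    # Walk everything under base_oid and yield (oid, value) pairs.
    for (error_indication, error_status, error_index, var_binds) in hlapi.nextCmd(
            hlapi.SnmpEngine(),
            hlapi.CommunityData(community),
            hlapi.UdpTransportTarget((target, port)),
            hlapi.ContextData(),
            hlapi.ObjectType(hlapi.ObjectIdentity(base_oid)),
            lexicographicMode=False):  # stop once we leave the requested subtree
        if error_indication or error_status:
            raise RuntimeError('Got SNMP error: {0}'.format(error_indication or error_status))
        for var_bind in var_binds:
            yield str(var_bind[0]), var_bind[1]

# for oid, value in walk('10.0.0.1', '1.3.6.1.4.1.224.2.3.6.3'):
#     print(oid, value)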

This is not possible via SNMP, since a read-only community shouldn't have access to this information.
The solution I came up with was to log in via SSH and read the output from stdout.
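
A minimal sketch of that SSH approach, assuming a Cisco IOS device and the netmiko library (host, credentials and device type below are placeholders):

# Hedged sketch of the SSH approach using netmiko; host/credentials are placeholders.
from netmiko import ConnectHandler

def get_snmp_config_ssh(host, username, password):
    device = {
        'device_type': 'cisco_ios',  # assumed platform
        'host': host,
        'username': username,
        'password': password,
    }
    conn = ConnectHandler(**device)
    # Pull only the snmp-server community lines (community string, RO/RW, ACL) from the config
    output = conn.send_command('show running-config | include snmp-server community')
    conn.disconnect()
    return output.splitlines()

print(get_snmp_config_ssh('10.0.0.1', 'admin', 'secret'))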

Implementation to store the session, authenticate and retry with requests

There is a REST API with which I want to communicate via the requests library.
First, I have to authenticate with a username and password (BasicAuth).
After a failed attempt I want to wait 3 seconds and repeat the whole process (a maximum of 3 times).
In case of success, I get a JSON string as a response.
You can see my current implementation below. The method get_order() is just an example. There are about 12 methods with a similar structure (some use POST requests, some GET requests).
I think my implementation is overly complicated and would appreciate suggestions and ideas on how to simplify it.
def session_wrapper(self, func, *args, **kwargs) -> Union[bool, dict]:
    has_authorization = False
    attempts = 3
    timeout_s = 3
    for attempt in range(attempts):
        try:
            resp = func(*args, **kwargs)
            if resp.status_code == requests.status_codes.codes.UNAUTHORIZED:
                self.session.auth = self.login_details
                result = self.session.get(f"{self.base_uri}/auth/token", verify=False)
                if not result.ok:
                    print("Cannot reach the authentication url.")
                else:
                    has_authorization = True
                raise UnauthorizedException
            try:
                return resp.json()
            except json.JSONDecodeError as e:
                print(f"Could not parse json: {e}")
                return False
        except requests.exceptions.RequestException as e:
            print(f"RequestException: {e}")
        except UnauthorizedException:
            print("The session is expired and needs to be reestablished.")
        except Exception as e:
            print(f"Something unexpected went wrong: {e}")
        time.sleep(timeout_s)
    if not has_authorization:
        print("The session couldn't be reestablished.")
    return False

def get_order(self, id: str) -> Optional[dict]:
    payload = {'id': f"{id}"}
    return self.session_wrapper(
        self.session.get,
        f"{self.base_uri}/order",
        params=payload,
        verify=True
    )

def get_customor(self, id: str) -> Optional[dict]:
    # ...
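
One possible simplification is to fold the retry/re-auth flow described above (BasicAuth, 3 attempts, 3-second pause, JSON on success) into a decorator so each endpoint method stays a one-liner. A rough sketch; the attribute names (session, base_uri, login_details) mirror the question, everything else is an assumption:

# Hedged sketch: a decorator that retries on 401/connection errors, re-authenticating
# between attempts. Attribute names follow the question; the rest is illustrative.
import time
import requests

def with_retries(attempts=3, wait_s=3):
    def decorator(method):
        def wrapper(self, *args, **kwargs):
            for attempt in range(attempts):
                try:
                    resp = method(self, *args, **kwargs)
                    if resp.status_code == requests.codes.UNAUTHORIZED:
                        # Re-authenticate with BasicAuth, then retry on the next pass.
                        self.session.auth = self.login_details
                        self.session.get(f"{self.base_uri}/auth/token", verify=False)
                    else:
                        resp.raise_for_status()
                        return resp.json()
                except (requests.exceptions.RequestException, ValueError) as e:
                    print(f"Attempt {attempt + 1} failed: {e}")
                time.sleep(wait_s)
            return None
        return wrapper
    return decorator

class ApiClient:
    # __init__ is assumed to set self.session, self.base_uri and self.login_details

    @with_retries()
    def get_order(self, id: str):
        return self.session.get(f"{self.base_uri}/order", params={'id': id}, verify=True)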

For loop only iterating one object from list using python

The for loop only iterates over one object from the list provided by the function. Below are the code and terminal logs.
Note: I want to delete both of the URLs that are part of the list below.
The output of the function delete_index_url() looks like:
['https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13', 'https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.16']
def clean_index():
    delete_urls = delete_index_url()  # above function output assigned to this variable
    for i in delete_urls:
        print(i)  # <-- this only prints "https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13"
        try:
            req = requests.delete(i)
        except requests.exceptions.ConnectionError as e:
            print('ERROR: Not able to connect to URL')
            return 0
        except requests.exceptions.Timeout as e:
            print('ERROR: ElasticSearch time out')
            return 0
        except requests.exceptions.HTTPError as e:
            print('ERROR: HTTP error')
            return 0
        else:
            print('INFO: ElasticSearch response status code was %s' % req.status_code)
            if req.status_code != 200:
                return 0
            else:
                return 1

print(clean_index())
Log output from the Python script:
INFO: Sorting indexes
['https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13', 'https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.16']
INFO: Getting a list of indexes
INFO: ElasticSearch response status code was 200
INFO: Found 200 indexes
INFO: Sorting indexes
https://vpc.xxx.es.amazonaws.com/staging-logs-2019.09.13 # only 2019.09.13, not 2019.09.16 logs URLs
Instead of returning 0 or 1 and ending the function right away (which is why only the first URL is ever processed), you can create a list, store the responses, and return it:
def clean_index():
    responses = []
    delete_urls = delete_index_url()  # above function output assigned to this variable
    for i in delete_urls:
        print(i)
        try:
            req = requests.delete(i)
        except requests.exceptions.ConnectionError as e:
            print('ERROR: Not able to connect to URL')
            responses.append(0)
        except requests.exceptions.Timeout as e:
            print('ERROR: ElasticSearch time out')
            responses.append(0)
        except requests.exceptions.HTTPError as e:
            print('ERROR: HTTP error')
            responses.append(0)
        else:
            print('INFO: ElasticSearch response status code was %s' % req.status_code)
            if req.status_code != 200:
                responses.append(0)
            else:
                responses.append(1)
    return responses

print(clean_index())

How to properly debug ThreadPool?

I'm trying to get some data from a web page. To speed up this process (they allow me to make 1000 requests per minute), I use ThreadPool.
Since there is a huge amount of data, the process is quite vulnerable to connection failures etc., so I try to log everything I can in order to detect any mistakes I made in the code.
The problem is that the program sometimes just stops without any exception (it acts like it is running but has no effect; I use PyCharm). I log caught exceptions everywhere I can, but I can't see any exception in any log.
I assume that if a timeout were reached, the exception would be raised and logged.
I've found out where the problem could be. Here is the code:
As a pool, I use: from multiprocessing.pool import ThreadPool as Pool
And lock: from threading import Lock
The download_category function is called in a loop.
def download_category(url):
    # some code
    #
    # ...
    try:  # (implied by the except below; part of the elided code)
        log('Create pool...')
        _pool = Pool(_workers_number)
        with open('database/temp_produkty.txt') as f:
            log('Spracovavanie produktov... vytvaranie vlakien...')  # "Processing products... creating threads..." - I see this in the log
            for url_product in f:
                x = _pool.apply_async(process_product, args=(url_product.strip('\n'), url))
        _pool.close()
        _pool.join()
        log('Presuvanie produktov z temp export do export.csv...')  # "Moving products from temp export to export.csv..." - I can't see this in the log
        temp_export_to_export_csv()
        set_spracovanie_kategorie(url)
    except Exception as e:
        logging.exception('Got exception on download_one_category: {}'.format(url))
And the process_product function:
def process_product(url, cat):
    try:
        data = get_product_data(url)
    except:
        log('{}: {} exception while getting product data... #')  # I don't see this in the log
        return
    try:
        print_to_temp_export(data, cat)  # I don't see this in the log
    except:
        log('{}: {} exception while printing to csv... #')  # I don't see this in the log
        raise
LOG function:
def log(text):
    now = datetime.now().strftime('%d.%m.%Y %H:%M:%S')
    _lock.acquire()
    mLib.printToFile('logging/log.log', '{} -> {}'.format(now, text))
    _lock.release()
I use the logging module too. In this log, I can see that a request was sent about 8 times (the number of workers), but no answer was received.
EDIT1:
def get_product_data(url):
    data = defaultdict(lambda: '-')
    root = load_root(url)
    try:
        nazov = root.xpath('//h1[@itemprop="name"]/text()')[0]
    except:
        nazov = root.xpath('//h1/text()')[0]
    under_block = root.xpath('//h2[@id="lowest-cost"]')
    if len(under_block) < 1:
        under_block = root.xpath('//h2[contains(text(),"Naj")]')
        if len(under_block) < 1:
            return False
    data['nazov'] = nazov
    data['url'] = url
    blocks = under_block[0].xpath('./following-sibling::div[@class="shp"]/div[contains(@class,"shp")]')
    i = 0
    for block in blocks:
        i += 1
        data['dat{}_men'.format(i)] = block.xpath('.//a[@class="link"]/text()')[0]
    del root
    return data
LOAD ROOT:
class RedirectException(Exception):
    pass

def load_url(url):
    r = requests.get(url, allow_redirects=False)
    if r.status_code == 301:
        raise RedirectException
    if r.status_code == 404:
        if '-q-' in url:
            url = url.replace('-q-', '-')
            mLib.printToFileWOEncoding('logging/neexistujuce.txt', 'Skusanie {} kategorie...'.format(url))  # "Trying {} category..."
            return load_url(url)  # THIS IS NOT LOOPING
        else:
            mLib.printToFileWOEncoding('logging/neexistujuce.txt', '{}'.format(url))
    html = r.text
    return html

def load_root(url):
    try:
        html = load_url(url)
    except Exception as e:
        logging.exception('load_root_exception')
        raise
    return etree.fromstring(html, etree.HTMLParser())
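
One detail that can make this kind of silent hang easier to diagnose: apply_async returns an AsyncResult, and an exception raised inside the worker is only re-raised once get() is called on that result (or reported via an error_callback), so without that it simply disappears. A minimal sketch of collecting the results and surfacing worker errors; process_product is the worker function from the question, the rest is illustrative:

# Hedged sketch: keep the AsyncResult objects returned by apply_async and call get()
# on each one, so exceptions raised inside workers are re-raised (and can be logged)
# in the parent instead of silently disappearing.
import logging
from multiprocessing.pool import ThreadPool as Pool

def download_category(url, workers=8):
    _pool = Pool(workers)
    async_results = []
    with open('database/temp_produkty.txt') as f:
        for url_product in f:
            # process_product is the worker function shown in the question
            res = _pool.apply_async(process_product, args=(url_product.strip('\n'), url))
            async_results.append(res)
    _pool.close()
    _pool.join()
    for res in async_results:
        try:
            res.get(timeout=60)  # re-raises any exception raised inside the worker
        except Exception:
            logging.exception('worker failed while processing category %s', url)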

Python: httplib.HTTPSConnection.request doesn't work properly

I wrote a little tool that gathers data from Facebook using its API. The tool uses the multiprocessing, queue and httplib modules. Here is part of the code:
main process:
def extract_and_save(args):
    put_queue = JoinableQueue()
    get_queue = Queue()
    for index in range(args.number_of_processes):
        process_name = u"facebook_worker-%s" % index
        grabber = FacebookGrabber(get_queue=put_queue, put_queue=get_queue, name=process_name)
        grabber.start()
    friend_list = get_user_friends(args.default_user_id, ["id"])
    for index, friend_id in enumerate(friend_list):
        put_queue.put(friend_id)
    put_queue.join()
    if not get_queue.empty():
        # ... save to database ...
    else:
        logger.info(u"There is no data to save")
worker process:
class FacebookGrabber(Process):
    def __init__(self, *args, **kwargs):
        self.connection = httplib.HTTPSConnection("graph.facebook.com", timeout=2)
        self.get_queue = kwargs.pop("get_queue")
        self.put_queue = kwargs.pop("put_queue")
        super(FacebookGrabber, self).__init__(*args, **kwargs)
        self.daemon = True

    def run(self):
        while True:
            friend_id = self.get_queue.get(block=True)
            try:
                friend_obj = self.get_friend_obj(friend_id)
            except Exception, e:
                logger.info(u"Friend id %s: facebook responded with an error (%s)", friend_id, e)
            else:
                if friend_obj:
                    self.put_queue.put(friend_obj)
            self.get_queue.task_done()
common code:
def get_json_from_facebook(connection, url, kwargs=None):
    url_parts = list(urlparse.urlparse(url))
    query = dict(urlparse.parse_qsl(url_parts[4]))
    if kwargs:
        query.update(kwargs)
    url_parts[4] = urllib.urlencode(query)
    url = urlparse.urlunparse(url_parts)
    try:
        connection.request("GET", url)
    except Exception, e:
        print "<<<", e
    response = connection.getresponse()
    data = json.load(response)
    return data
This code works perfectly on Ubuntu. But when I tried to run it on Windows 7, I got the message "There is no data to save". The problem is here:
try:
    connection.request("GET", url)
except Exception, e:
    print "<<<", e
I get the following error: <<< a float is required
Does anybody know how to fix this problem?
Python version: 2.7.5
One of the "gotchas" that occasionally comes up with socket timeout values is that most operating systems expect them as floats. I believe this has been accounted for in later versions of the Linux kernel.
Try changing:
self.connection = httplib.HTTPSConnection("graph.facebook.com", timeout=2)
to:
self.connection = httplib.HTTPSConnection("graph.facebook.com", timeout=2.0)
That's 2 seconds, by the way. The default is typically 5 seconds, so 2 might be a little low.

md5 search using exceptions

import httplib
import re

md5 = raw_input('Enter MD5: ')
conn = httplib.HTTPConnection("www.md5.rednoize.com")
conn.request("GET", "?q=" + md5)
try:
    response = conn.getresponse()
    data = response.read()
    result = re.findall('<div id="result" >(.+?)</div', data)
    print result
except:
    print "couldnt find the hash"
raw_input()
I know I'm probably implementing the code wrong, but which exception should I use for this? If it can't find the hash, I want to raise an exception and print "couldn't find the hash".
Since re.findall doesn't raise exceptions, that's probably not how you want to check for results. Instead, you could write something like
result = re.findall('<div id="result" >(.+?)</div', data)
if result:
    print result
else:
    print 'Could not find the hash'
If you really want to have an exception there, you have to define it yourself:

class MyError(Exception):
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)

try:
    response = conn.getresponse()
    data = response.read()
    result = re.findall('<div id="result" >(.+?)</div', data)
    if not result:
        raise MyError("Could not find the hash")
except MyError:
    raise
