How to connect to a remote OpenVAS scanner with Python? - python

I am exploring the OpenVAS tool for a project requirement; OpenVAS is currently managed by Greenbone. I am getting an error when I try to use the remote scanner via the Python API.
I did all the initial configuration, set up the required GUI account, etc., and was able to scan the required systems manually. However, when I try to do the same using the Python API it doesn't work, and there isn't any example available on the internet or in their manual to verify my code against.
I have used the python-gvm API (https://pypi.org/project/python-gvm/).
I wrote this simple code, but it's not working:
from gvm.connections import SSHConnection
from gvm.protocols.latest import Gmp
from gvm.transforms import EtreeTransform
from gvm.xml import pretty_print

connection = SSHConnection(hostname='192.168.1.84', username='alex', password='alex#123')
gmp = Gmp(connection)
gmp.authenticate('admin', 'admin')

# Retrieve current GMP version
version = gmp.get_version()

# Prints the XML in beautiful form
pretty_print(version)
I am getting this error:
/usr/bin/python3.7 /home/punshi/PycharmProjects/nessus_api/openvas-greenbone.py
/usr/local/lib/python3.7/dist-packages/paramiko/kex_ecdh_nist.py:39: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and uncompressed point encoding.
m.add_string(self.Q_C.public_numbers().encode_point())
/usr/local/lib/python3.7/dist-packages/paramiko/kex_ecdh_nist.py:96: CryptographyDeprecationWarning: Support for unsafe construction of public numbers from encoded data will be removed in a future version. Please use EllipticCurvePublicKey.from_encoded_point
self.curve, Q_S_bytes
/usr/local/lib/python3.7/dist-packages/paramiko/kex_ecdh_nist.py:111: CryptographyDeprecationWarning: encode_point has been deprecated on EllipticCurvePublicNumbers and will be removed in a future version. Please use EllipticCurvePublicKey.public_bytes to obtain both compressed and uncompressed point encoding.
hm.add_string(self.Q_C.public_numbers().encode_point())
Traceback (most recent call last):
File "/home/punshi/PycharmProjects/nessus_api/openvas-greenbone.py", line 8, in <module>
gmp.authenticate('admin', 'admin')
File "/usr/local/lib/python3.7/dist-packages/gvm/protocols/gmpv7.py", line 211, in authenticate
response = self._read()
File "/usr/local/lib/python3.7/dist-packages/gvm/protocols/base.py", line 54, in _read
return self._connection.read()
File "/usr/local/lib/python3.7/dist-packages/gvm/connections.py", line 126, in read
raise GvmError('Remote closed the connection')
gvm.errors.GvmError: Remote closed the connection
Process finished with exit code 1
I have tested the SSH connection manually, so the problem is either with my code or something else.
Additional details:
- Ubuntu 16
- Greenbone Security Assistant 7.0.3 (GUI)
- OpenVAS 9.0.3

I had exactly the same problem and solved it by using TLSConnection instead of SSHConnection. Here is your code, adapted:
import gvm
from gvm.protocols.latest import Gmp
from gvm.transforms import EtreeTransform
from gvm.xml import pretty_print

# TLS instead of SSH; gvmd must be configured to listen on a TCP port
connection = gvm.connections.TLSConnection(hostname='192.168.1.84')

# Pass the transform so responses come back as etree elements for pretty_print
gmp = Gmp(connection, transform=EtreeTransform())
gmp.authenticate('admin', 'admin')

# Retrieve current GMP version
version = gmp.get_version()

# Prints the XML in beautiful form
pretty_print(version)

I am exploring the OpenVAS tool for a project requirement; OpenVAS is currently managed by Greenbone.
Just a side note: OpenVAS has been developed by Greenbone for many years now. Therefore we renamed the project to Greenbone Vulnerability Management (GVM) with version 10; only the actual scanner component is still named OpenVAS. See https://community.greenbone.net/t/is-openvas-manager-and-gvmd-the-same/1777/3 for more details.
Using SSHConnection requires some additional setup on the remote server. Using TLSConnection might be easier, but it also requires changing the gvmd/openvasmd settings, because by default the daemon only listens on a Unix socket.
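If you can run the script on the scanner host itself, the default Unix socket avoids both the SSH and the TLS setup entirely. A minimal sketch, assuming python-gvm's default socket path (pass path='...' to UnixSocketConnection if your gvmd/openvasmd socket lives elsewhere):
import gvm
from gvm.protocols.latest import Gmp
from gvm.transforms import EtreeTransform
from gvm.xml import pretty_print

# Local connection over the Unix socket gvmd listens on by default
connection = gvm.connections.UnixSocketConnection()

gmp = Gmp(connection, transform=EtreeTransform())
gmp.authenticate('admin', 'admin')
pretty_print(gmp.get_version())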

Related

sqlanydb windows could not load dbcapi

I am trying to connect to a SQL Anywhere database via Python. I have created the DSN and I can use the command prompt to connect to the database using dbisql -c "DSN=myDSN". When I try to connect through Python using con = sqlanydb.connect(DSN="myDSN")
I get
Traceback (most recent call last):
File "<pyshell#5>", line 1, in <module>
con = sqlanydb.connect(DSN= "RPS Integration")
File "C:\Python27\lib\site-packages\sqlanydb.py", line 522, in connect
return Connection(args, kwargs)
File "C:\Python27\lib\site-packages\sqlanydb.py", line 538, in __init__
parent = Connection.cls_parent = Root("PYTHON")
File "C:\Python27\lib\site-packages\sqlanydb.py", line 464, in __init__
'libdbcapi_r.dylib')
File "C:\Python27\lib\site-packages\sqlanydb.py", line 456, in load_library
raise InterfaceError("Could not load dbcapi. Tried: " + ','.join(map(str, names)))
InterfaceError: (u'Could not load dbcapi. Tried: None,dbcapi.dll,libdbcapi_r.so,libdbcapi_r.dylib', 0)
I was able to resolve the problem. I was never able to use sqlanydb.connect, so I ended up using pyodbc instead. My final connection string was con = pyodbc.connect(dsn="myDSN"). I think that sqlanydb is only fully functional with SQL Anywhere 17, and I was using a previous version.
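For reference, a rough sketch of the pyodbc route, assuming the DSN "myDSN" is already defined in the ODBC Data Source Administrator:
import pyodbc

# Connect via ODBC instead of the native sqlanydb driver
con = pyodbc.connect(dsn='myDSN')
cursor = con.cursor()
cursor.execute('SELECT 1 FROM dummy')  # SQL Anywhere's built-in one-row table
print(cursor.fetchone())
con.close()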
Depending on how Sybase is installed locally, it could also be that the Python module really cannot find the (correct) library when looking up dbcapi.dll in your PATH. A quick fix is to create a new SQLANY_API_DLL environment variable containing the correct path, usually something like %SQLANY16%\Bin64\dbcapi.dll depending on the version you have installed (Sybase usually creates an environment variable pointing to its installation folder on a per-version basis). SQLANY_API_DLL is the initial None in the list of names the error message says it tried; it takes priority over all the others.
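A minimal sketch of that fix done from within Python itself; the path below is an example for SQL Anywhere 16 and must match your actual installation:
import os

# Point sqlanydb straight at the DLL; this variable is checked before any PATH lookup
os.environ['SQLANY_API_DLL'] = os.path.expandvars(r'%SQLANY16%\Bin64\dbcapi.dll')

import sqlanydb
con = sqlanydb.connect(DSN='myDSN')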
One way to encounter the traceback shown by the OP is running a 64-bit Python interpreter, which is unable to load the 32-bit dbcapi.dll. Try launching with "py -X-32" to force a 32-bit engine, or reinstall using a 32-bit Python engine.
Unfortunately sqlanydb's code that tries to be smart about finding dbcapi also ends up swallowing the exceptions thrown during loading. The sqlanydb author probably assumed that failure to load implies file not found, which isn't always the case.
I was stuck on this same problem, on both Windows System32 and 64.
I'm using Python 3.9, and I found that since Python 3.8 the cdll.LoadLibrary function no longer searches PATH to find library files.
To fix this, I created the environment variable SQLANY_API_DLL pointing to the path of the SQL Anywhere installation (in my case version 17), Bin32 or Bin64 depending on your system.

Connect to openstack is failing

I have written a bit of Python code to interact with an OpenStack instance, using the shade library.
The call
myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)
works fine on my local Ubuntu installation, but fails on our "backend" servers (running RHEL 7.2).
File "mystuff/core.py", line 248, in _create_connection
myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)
File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/init.py", line 106, in openstack_cloud
return OpenStackCloud(cloud_config=cloud_config, strict=strict)
File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/openstackcloud.py", line 312, in init
self._local_ipv6 = _utils.localhost_supports_ipv6()
File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/_utils.py", line 254, in localhost_supports_ipv6
return netifaces.AF_INET6 in netifaces.gateways()['default']
AttributeError: module 'netifaces' has no attribute 'AF_INET6'
The admin for that system tells me that IPv6 is not enabled there; maybe that explains the fail. I did some research, but couldn't find anything to prevent the failure.
Any thoughts are welcome.
Update: I edited my clouds.yml, and it looks like this:
# openstack/shade config file
# required to connect provisioning using the shade module
client:
  force_ipv4: true
clouds:
  mycloud:
    auth:
      user_domain_name: xxx
      auth_url: 'someurl'
    region_name: RegionOne
I also tried export OS_FORCE_IPV4=True - but the error message is still there.
If you go through the OpenStack os-client-config documentation, you will see that they mention this IPv6-related issue:
IPv6 is the future, and you should always use it if your cloud
supports it and if your local network supports it. Both of those are
easily detectable and all friendly software should do the right thing.
However, sometimes you might exist in a location where you have an
IPv6 stack, but something evil has caused it to not actually function.
In that case, there is a config option you can set to unbreak you
force_ipv4, or OS_FORCE_IPV4 boolean environment variable.
So, using this boolean config option you can force the appropriate network protocol. Adding the lines below to your clouds.yaml file
client:
  force_ipv4: true
will force IPv4 and will hopefully resolve your issue.
Edit by OP: unfortunately the above doesn't help; I fixed it by reworking shade-1.20.0-py3.5.egg/shade/_utils.py. I changed the return statement
return netifaces.AF_INET6 in netifaces.gateways()['default']
to a simple
return False
and stuff is working. Of course, this is just a workaround, but a bug report was filed as well.
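A less invasive variant of the same workaround is to monkey-patch the helper at runtime instead of editing the installed file. A sketch, assuming shade 1.20 and the auth_data dict from the question:
import shade
from shade import _utils

# Pretend the host has no usable IPv6, forcing shade down the IPv4 path
_utils.localhost_supports_ipv6 = lambda: False

myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)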

Use python 2 shelf in python 3

I have data stored in a shelf file created with Python 2.7.
When I try to access the file from Python 3.4, I get an error:
>>> import shelve
>>> population=shelve.open('shelved.shelf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python34\lib\shelve.py", line 239, in open
return DbfilenameShelf(filename, flag, protocol, writeback)
File "C:\Python34\lib\shelve.py", line 223, in __init__
Shelf.__init__(self, dbm.open(filename, flag), protocol, writeback)
File "C:\Python34\lib\dbm\__init__.py", line 88, in open
raise error[0]("db type could not be determined")
dbm.error: db type could not be determined
I'm still able to access the shelf with no problem in python 2.7, so there seems to be a backward-compatibility issue. Is there any way to directly access the old format with the new python version?
As I understand now, here is the path that led to my problem:
- The original shelf was created with Python 2 on Windows.
- Python 2 on Windows defaults to bsddb as the underlying database for shelving, since dbm is not available on the Windows platform.
- Python 3 does not ship with bsddb; the underlying database is dumbdbm in Python 3 for Windows.
I at first looked into installing a third party bsddb module for Python 3, but it quickly started to turn into a hassle. It then seemed that it would be a recurring hassle any time I need to use the same shelf file on a new machine. So I decided to convert the file from bsddb to dumbdbm, which both my python 2 and python 3 installations can read.
I ran the following in Python 2, which is the version that contains both bsddb and dumbdbm:
import shelve
import dumbdbm

def dumbdbm_shelve(filename, flag="c"):
    return shelve.Shelf(dumbdbm.open(filename, flag))

out_shelf = dumbdbm_shelve("shelved.dumbdbm.shelf")
in_shelf = shelve.open("shelved.shelf")

key_list = in_shelf.keys()
for key in key_list:
    out_shelf[key] = in_shelf[key]

out_shelf.close()
in_shelf.close()
So far it looks like the dumbdbm.shelf files came out OK, pending a double-check of the contents.
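For that double-check, a quick sanity script (again Python 2, same files as above) can compare the two shelves entry by entry:
import shelve
import dumbdbm

old_shelf = shelve.open("shelved.shelf")
new_shelf = shelve.Shelf(dumbdbm.open("shelved.dumbdbm.shelf", "r"))

# Same keys and same values in both shelves
assert sorted(old_shelf.keys()) == sorted(new_shelf.keys())
for key in old_shelf.keys():
    assert old_shelf[key] == new_shelf[key]
print("all %d entries match" % len(old_shelf.keys()))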
The shelve module uses Python's pickle, which may require a protocol version when being accessed between different versions of Python.
Try supplying protocol version 2:
population = shelve.open('shelved.shelf', protocol=2)
According to the documentation:
Protocol version 2 was introduced in Python 2.3. It provides much more efficient pickling of new-style classes. Refer to PEP 307 for information about improvements brought by protocol 2.
This is most likely the protocol used in the original serialization (or pickling).
Edited: You may need to rename your database. Read on...
Seems like pickle is not the culprit here. shelve also relies on anydbm (Python 2.x) or dbm (Python 3) to create/open a database and store the pickled information.
I created (manually) a database file using the following:
# Python 2.7
import anydbm
anydbm.open('database2', flag='c')
and
# Python 3.4
import dbm
dbm.open('database3', flag='c')
In both cases, it creates the same kind of database (may be distribution dependent, this is on Debian 7):
$ file *
database2: Berkeley DB (Hash, version 9, native byte-order)
database3.db: Berkeley DB (Hash, version 9, native byte-order)
anydbm can open database3.db without problems, as expected:
>>> anydbm.open('database3')
<dbm.dbm object at 0x7fb1089900f0>
Notice the lack of .db when specifying the database name, though. But dbm chokes on database2, which is weird:
>>> dbm.open('database2')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/dbm/__init__.py", line 88, in open
raise error[0]("db type could not be determined")
dbm.error: db type could not be determined
unless I change the name of the database to database2.db:
$ mv database2 database2.db
$ python3
>>> import dbm
>>> dbm.open('database2')
<_dbm.dbm object at 0x7fa7eaefcf50>
So, I suspect a regression in the dbm module, but I haven't checked the documentation. It may be intended :-?
NB: Notice that in my case the extension is .db, but that depends on the database being used by dbm by default! Create an empty shelf using Python 3 to figure out which one you are using and what it is expecting.
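A quick way to run that probe, assuming Python 3 and write access to the current directory (probe.shelf is just a throwaway name):
import glob
import shelve

# Create an empty shelf, then list the file(s) the default dbm backend produced
shelve.open('probe.shelf').close()
print(glob.glob('probe.shelf*'))  # e.g. ['probe.shelf.db'] or .bak/.dat/.dir files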
I don't think it's possible to use a Python 2 shelf with Python 3's shelve module. The underlying files are completely different, at least in my tests.
In Python 2*, a shelf is represented as a single file with the filename you originally gave it.
In Python 3*, a shelf consists of three files: filename.bak, filename.dat, and filename.dir. Without any of these files present, the shelf cannot be opened by the Python 3 library (though it appears that just the .dat file is sufficient for opening, if not actual reading).
@Ricardo Cárdenes has given an overview of why this may be: it's likely an issue with the underlying database modules used in storing the shelved data. It's possible that the databases are backwards compatible, but I don't know, and a quick search hasn't turned up any obvious answers.
I think it's likely that some of the possible databases implemented by dbm are backwards-compatible, whereas others are not: this could be the cause of the discrepancy between answers here, where some people, but not all, are able to open older databases directly by specifying a protocol.
*On every machine I've tested, using Python 2.7.6 vs Pythons 3.2.5, 3.3.4, and 3.4.1
The berkeleydb module includes a backward-compatible implementation of the shelve object. (You will also need to install Oracle Berkeley DB.)
You just need to:
import berkeleydb
from berkeleydb import dbshelve as shelve
population = shelve.open('shelved.shelf')

Connecting to SQLServer 2005 with adodbapi

I'm very new to Python, and I have Python 3.2 installed on a Windows 7 32-bit workstation. I'm trying to connect to a SQL Server 2005 instance using adodbapi 2.4.2.2, the latest update to that package.
The code/connection string looks like this:
conn = adodbapi.connect('Provider=SQLNCLI.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=XXX;Data Source=123.456.789');
From adodbapi I continually get the error (this is entire error message from Wing IDE shell):
Traceback (most recent call last):
File "D:\Program Files\Wing IDE 4.0\src\debug\tserver_sandbox.py", line 2, in <module>
if __name__ == '__main__':
File "D:\Python32\Lib\site-packages\adodbapi\adodbapi.py", line 298, in connect
raise InterfaceError #Probably COM Error
adodbapi.adodbapi.InterfaceError:
I can trace through the code and see the exception as it happens.
I also tried using conn strings with OLEDB provider and integrated Windows security, with same results.
All of these connection strings work fine from a UDL file on my workstation, and from SSMS, but fail with the same error in adodbapi.
How do I fix this?
Try this connection string:
Initial Catalog=XXX; Data Source=<servername>\\<SQL Instance name>; Provider=SQLOLEDB.1; Integrated Security=SSPI
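For completeness, here is that string plugged into adodbapi.connect; <servername> and <SQL Instance name> are placeholders to fill in:
import adodbapi

# Placeholders: replace <servername> and <SQL Instance name> with your own values
conn = adodbapi.connect(
    'Initial Catalog=XXX; Data Source=<servername>\\<SQL Instance name>; '
    'Provider=SQLOLEDB.1; Integrated Security=SSPI'
)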
Update
Umm, OK. Looking at the source for adodbapi, I would have to say that you are suffering a COM error (yeah, I know the traceback says that), but specifically one raised while initialising the relevant COM objects.
This means that your connection string has nothing to do with the traceback. I think a good place to start would be to make sure that your copy of pythoncom is up to date.
It could be that win32com/pythoncom does not support Python 3K (3.0 onwards) yet, but after a minute of googling I have not found anything useful on that; I'll leave it to you.
This code should run successfully when you have fixed your problem (and should fail at the moment).
import win32com.client
import pythoncom
pythoncom.CoInitialize()
win32com.client.Dispatch('ADODB.Connection')
Also any exception that code throws would be useful to help debug your problem.
In case anyone else finds this thread looking for the resolution to a similar error that I saw with Python 2.7:
Traceback (most recent call last):
File "get_data.py", line 10, in <module>
connection = get_connection(r"XXX\YYY", "DB")
File "get_data.py", line 7, in get_connection
conn = adodbapi.connect(connstring)
File "C:\Python27\lib\site-packages\adodbapi\adodbapi.py", line 116, in connect
raise api.OperationalError(e, message)
adodbapi.apibase.OperationalError: (InterfaceError("Windows COM Error: Dispatch('ADODB.Connection') failed.",), 'Error opening connection to "Data Source=XXX\\YYY; Initial Catalog=DB; Provider=SQLOLEDB.1; Integrated Security=SSPI"')
In my case the simple solution was to install Python for Windows Extensions (pywin32) from here:
http://sourceforge.net/projects/pywin32/files/pywin32/Build%20219/
I had the same problem, and I tracked it down to a failure to load win32com.pyd because some system DLLs, such as msvcp100.dll, were not in the DLL load path.
I solved the problem by copying a lot of these DLLs (probably too many) into C:\WinPython-64bit-3.3.3.2\python-3.3.3.amd64\Lib\site-packages\win32
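One way to check whether a particular dependency is loadable before copying anything around; msvcp100.dll is just the example mentioned above:
import ctypes

# Raises OSError (WindowsError) if the DLL or one of its dependencies is missing
ctypes.WinDLL('msvcp100.dll')
print('msvcp100.dll loaded fine')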

Python RESTful client with CAS authentication

I'm trying to build a Python library for interacting with our RESTful API, but it uses CAS for client auth, and I haven't been able to find any good existing libraries for it. So far I've found the following links, but I'm not sure whether they're intended to be used in clients or by a website that uses CAS itself. Does anyone have any advice on a good library to use, and a good way to structure my code for interacting with it?
https://wiki.jasig.org/download/attachments/28213515/pycas.py.txt
https://sp.princeton.edu/oit/sdp/CAS/Wiki%20Pages/Python.aspx
http://github.com/benoitc/restkit/
http://morethanseven.net/2009/02/18/python-rest-client.html
I also just tried using caslib, but that fails to work:
>>> import caslib
>>> srv = caslib.CASServer('https://my.cas/auth')
>>> svc = caslib.CASService('https://my.service/foo')
>>> caslib.login_to_cas_service(srv.login(svc), 'user@example.com', 'password')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "caslib/cas_dance.py", line 250, in login_to_cas_service
raise CASLoginError('Could not parse the document at %s: %s' % (login_fh.url, errors))
caslib.cas_dance.CASLoginError: Could not parse the document at https://my.cas/auth/login?service=https%3A%2F%2Fmy.service%2Ffoo: undefined entity &copy;: line 97, column 26
Hmm, the error above appears to be in our markup (or in the validator that caslib uses).
Edit again: The failure went away after installing the lxml library for Python; the fallback parser didn't handle the markup as well.
You might have to roll your own solution, either by modifying the Python REST client to support CAS, or by building something from scratch (I would recommend on top of httplib2).
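If you do build it from scratch, the CAS REST protocol (documented at the Jasig wiki linked in the last answer below) boils down to two POSTs: one to obtain a ticket-granting ticket (TGT) and one to exchange it for a service ticket (ST). A rough sketch on top of httplib2; the URLs and credentials are placeholders, and the exact endpoints depend on your CAS deployment:
import httplib2
from urllib import urlencode  # Python 2; use urllib.parse.urlencode on Python 3

CAS = 'https://my.cas/auth'
SERVICE = 'https://my.service/foo'
FORM = {'Content-Type': 'application/x-www-form-urlencoded'}

h = httplib2.Http()

# 1. POST credentials to get a TGT; its URL comes back in the Location header
resp, _ = h.request(CAS + '/v1/tickets', 'POST',
                    urlencode({'username': 'user', 'password': 'secret'}),
                    headers=FORM)
tgt_url = resp['location']

# 2. POST the target service URL against the TGT to get a one-time ST
_, st = h.request(tgt_url, 'POST', urlencode({'service': SERVICE}), headers=FORM)

# 3. Call the protected service with the ticket attached
resp, body = h.request(SERVICE + '?ticket=' + st, 'GET')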
Eleven years later, there are at least two Python CAS libraries, with Flask examples available:
- python-cas - seems more current
- Flask-CAS - GitHub repo gone
I don't personally have these working yet, so YMMV.
Maybe try the official Python example at https://wiki.jasig.org/display/casum/restful+api
