Connect to openstack is failing - python

I have written a bit of Python code to interact with an OpenStack instance, using the shade library.
The call
myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)
works fine on my local Ubuntu installation, but fails on our "backend" servers (running RHEL 7.2):
File "mystuff/core.py", line 248, in _create_connection
myinstance = shade.openstack_cloud(cloud='mycloud', **auth_data)
File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/init.py", line 106, in openstack_cloud
return OpenStackCloud(cloud_config=cloud_config, strict=strict)
File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/openstackcloud.py", line 312, in init
self._local_ipv6 = _utils.localhost_supports_ipv6()
File "/usr/local/lib/python3.5/site-packages/shade-1.20.0-py3.5.egg/shade/_utils.py", line 254, in localhost_supports_ipv6
return netifaces.AF_INET6 in netifaces.gateways()['default']
AttributeError: module 'netifaces' has no attribute 'AF_INET6'
The admin for that system tells me that IPv6 is not enabled there; maybe that explains the failure. I did some research, but couldn't find anything to prevent it.
Any thoughts are welcome.
Update: I edited my clouds.yml, and it looks like this:
# openstack/shade config file
# required to connect provisioning using the shade module
client:
  force_ipv4: true
clouds:
  mycloud:
    auth:
      user_domain_name: xxx
      auth_url: 'someurl'
    region_name: RegionOne
I also tried export OS_FORCE_IPV4=True, but the error message is still there.

If you go through the OpenStack os-client-config documentation, it mentions an IPv6-related issue:
IPv6 is the future, and you should always use it if your cloud supports it and if your local network supports it. Both of those are easily detectable and all friendly software should do the right thing. However, sometimes you might exist in a location where you have an IPv6 stack, but something evil has caused it to not actually function. In that case, there is a config option you can set to unbreak you: force_ipv4, or the OS_FORCE_IPV4 boolean environment variable.
So, using this boolean config you can force the appropriate network protocol. Adding the lines below to your clouds.yaml file
client:
  force_ipv4: true
will force IPv4 and will hopefully resolve your issue.
Edit by OP: unfortunately the above doesn't help; I fixed it by reworking shade-1.20.0-py3.5.egg/shade/_utils.py: I changed the return statement
return netifaces.AF_INET6 in netifaces.gateways()['default']
to a simple
return False
and things are working. Of course, this is just a workaround, but a bug report was filed as well.
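A less invasive variant of that workaround would be to guard the attribute access instead of hardcoding the result. This is only a sketch, assuming the crash comes solely from a netifaces build that lacks AF_INET6; it is not the upstream fix:

# Sketch of a more defensive localhost_supports_ipv6(); it returns False when
# the netifaces build has no AF_INET6 attribute instead of raising AttributeError.
import netifaces

def localhost_supports_ipv6():
    if not hasattr(netifaces, 'AF_INET6'):
        return False
    return netifaces.AF_INET6 in netifaces.gateways()['default']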

Related

Errors with running slack bolt (Python) locally

I'm deploying this code https://github.com/misscoded/webinar-bolt-python-nov-2020 on my PC, but I am getting the error as shown below. I tried to remove the App initialization but still got the same error.
The code:
app = App(
    signing_secret=os.environ.get('SLACK_SIGNING_SECRET'),
    token=os.environ.get("SLACK_BOT_TOKEN"),
)
The error:
line 10, in <module>
app = App(
File "C:\Users\Ruba\AppData\Local\Programs\Python\Python38-32\lib\site-packages\slack_bolt\app\app.py", line 208, in __init__
self._init_middleware_list()
File "C:\Users\Ruba\AppData\Local\Programs\Python\Python38-32\lib\site-packages\slack_bolt\app\app.py", line 232, in _init_middleware_list
raise BoltError(error_token_required())
slack_bolt.error.BoltError: Either an env variable `SLACK_BOT_TOKEN` or `token` argument in the constructor is required.
Depending on your version of Windows, check your environment variables (Control Panel -> Advanced System Settings -> Environment Variables) and ensure the system is picking them up. From the error message, it looks like it's not reading them, but I may be wrong.
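As a quick sanity check, you can print whether the Python process actually sees the two variables your snippet reads; this is just a small sketch using the names from the question:

# Check that the Slack credentials are visible to this Python process.
import os

for name in ("SLACK_SIGNING_SECRET", "SLACK_BOT_TOKEN"):
    print(name, "is set" if os.environ.get(name) else "is MISSING")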
Also, familiarize yourself with Slack's Bolt Framework. It has something called Socket Mode which makes local development so much easier and quicker.
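If you do try Socket Mode, the usual pattern looks roughly like the sketch below; SLACK_APP_TOKEN is an assumed name for the app-level (xapp-...) token, which is separate from the bot token above:

# Rough Socket Mode sketch; assumes an app-level token stored in SLACK_APP_TOKEN.
import os
from slack_bolt import App
from slack_bolt.adapter.socket_mode import SocketModeHandler

app = App(token=os.environ.get("SLACK_BOT_TOKEN"))

if __name__ == "__main__":
    SocketModeHandler(app, os.environ.get("SLACK_APP_TOKEN")).start()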

Setting Connection String as App Setting/Environment Variable in Azure Function

In my Azure Function, I have specified an Environment Variable/App Setting for a database connection string. I can use the Environment Variable when I run the Function locally on my Azure Data Science Virtual Machine using VS Code and Python.
However, when I deploy the Function to Azure, I get a KeyError, meaning that it cannot find the Environment Variable for the connection string. See the error:
Exception while executing function: Functions.matchmodel Result: Failure
Exception: KeyError: 'CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING'
Stack: File "/azure-functions
host/workers/python/3.7/LINUX/X64/azure_functions_worker/dispatcher.py", line 315, in
_handle__invocation_request self.__run_sync_func, invocation_id, fi.func, args)
File "/usr/local/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "/azure-functions-host/workers/python/3.7/LINUX/X64/azure_functions_worker/dispatcher.py",
line 434, in __run_sync_func
return func(**params)
File "/home/site/wwwroot/matchmodel/__init__.py", line 116, in main
File "/home/site/wwwroot/matchmodel/production/dataload.py", line 28, in query_dev_database
setting = os.environ["CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING"]
File "/usr/local/lib/python3.7/os.py", line 679, in __getitem__
raise KeyError(key) from None'
I have tried the following solutions:
Added "CONNECTIONSTRINGS" to specify the Environment Variable in the Python script (which made it work locally)
setting = os.environ["CONNECTIONSTRINGS:PDMPDBCONNECTIONSTRING"]
Used logging.info(os.environ) to output my Environment Variables in the console. My connection string is listed.
Added the connection string as Application Setting in the Azure Function portal.
Added the Connection String as Connection Strings in the Azure Function portal.
Does anyone have any other solutions that I can try?
Actually, you almost have the right connection string; you are just using the wrong prefix. For more detailed information, you can refer to this doc: Configure connection strings.
Which prefix to use depends on which type you choose; for example, in my test I used a custom type, so I had to use os.environ['CUSTOMCONNSTR_testconnectionstring'] to get the value.
From the doc you can find the following types:
SQL Server: SQLCONNSTR_
MySQL: MYSQLCONNSTR_
SQL Database: SQLAZURECONNSTR_
Custom: CUSTOMCONNSTR_
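Putting that together with the setting name from the question, reading the value would look something like the sketch below; the SQL Server prefix is an assumption, so use whichever prefix matches the type you picked in the portal:

# Sketch: read an Azure "Connection strings" setting from a Python Function.
# The prefix depends on the type chosen in the portal (SQL Server assumed here).
import os

conn_str = os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"]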
I was struggling with this and found out this solution:
import os
setting = os.getenv("mysetting")
In Functions, application settings, such as service connection strings, are exposed as environment variables during execution. You can access these settings by declaring import os and then using
setting = os.environ["mysetting"]
As Alex said, try removing CONNECTIONSTRINGS: from the name of the environment variable. In the Azure portal, just add mysetting in Application settings as the key name.
I figured out the issue with help from George and selected George's answer as the correct answer.
I changed the code to os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"], but I had also been deploying the package from VS Code to Azure via the following command using the Azure Functions Core Tools. This command overwrites the Azure App Settings with local.settings.json.
func azure functionapp publish <MY_FUNCTION_NAME> --publish-local-settings -i --overwrite-settings -y
I think that was changing the type of database that I had specified in the Azure App Settings (SQL Server). So when I had previously tried os.environ["SQLCONNSTR_PDMPDBCONNECTIONSTRING"], it didn't work, because I was overwriting my Azure settings with my local.settings.json, which did not specify a database type.
I finally got it to work by deploying using the VS Code Extension called Azure Functions (Azure Functions: Deploy to Function App). This retains the settings that I had created in Azure App Services.

Pyshark - tshark can't use user plugin in 'decode_as'

I use Pyshark, which uses tshark, to decode a pcap file, and I have a problem with the 'decode_as' option.
I'm trying to decode a specific UDP port as the SOMEIP protocol. This is a dissector I added, taken from here.
It is important to say that both the dissector and the "decode_as" option work perfectly in Wireshark.
This is the code I use:
import pyshark

packets = pyshark.FileCapture(pcap_path, display_filter="udp")
packets.next()  # Works fine

packets = pyshark.FileCapture(pcap_path, display_filter="udp", decode_as={"udp.port==50000": "someip"})
packets.next()  # doesn't return a packet
There is also an ignored exception:
Exception ignored in: <function Capture.__del__ at 0x000001D9CE035268>
Traceback (most recent call last):
File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 412, in __del__
self.close()
File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 403, in close
self.eventloop.run_until_complete(self._close_async())
File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\asyncio\base_events.py", line 573, in run_until_complete
return future.result()
File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 407, in _close_async
await self._cleanup_subprocess(process)
File "C:\Users\SHIRM\AppData\Local\Continuum\anaconda3\lib\site-packages\pyshark\capture\capture.py", line 400, in _cleanup_subprocess
% process.returncode)
pyshark.capture.capture.TSharkCrashException: TShark seems to have crashed (retcode: 1). Try rerunning in debug mode [ capture_obj.set_debug() ] or try updating tshark.
As it recommends, I used debug mode (packets.set_debug()), and after running it I get:
tshark: Protocol "someip" isn't valid for layer type "udp.port"
tshark: Valid protocols for layer type "udp.port" are:
....
and then a long list of protocols, which "someip" is not in (but another dissector that I added as a DLL is).
Any idea what is wrong here?
Does the dissector cause the problem, or did I do something wrong?
Again: the "decode as" works fine when done manually in Wireshark.
Thanks!
EDIT
I found the part in the Wireshark code that causes this error.
So I read about dissector tables, and it seems that there shouldn't be a problem, since the dissector's Lua code does add "someip" to the "udp.port" dissector table:
local udp_dissector_table = DissectorTable.get("udp.port")

-- Register dissector to multiple ports
for i,port in ipairs{30490,30491,30501,30502,30503,30504} do
    udp_dissector_table:add(port, p_someip)
    tcp_dissector_table:add(port, p_someip)
end
I also tried to use the dissectortable:add_for_decode_as(proto) function (described in 11.6.2.11 here):
udp_dissector_table:add_for_decode_as(p_someip)
But it didn't work :(
Any idea will be appreciated, thanks
Even though it is an old question:
I tried with a pcap of mine and it worked. So, three suggestions:
There was a bug, which is now fixed, so it should work for you as well.
The UDP port is wrong. I use a different one (30490), and if the port is wrong, the packet will come back empty. Please try 50001, as that port shows in your screenshot.
The pcap has some problems; in this case, try another one.
Hope that helps!
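One more thing worth checking is whether the Lua dissector is registered at all for the tshark binary that pyshark invokes (it may read a different plugin or profile directory than the Wireshark GUI). A small sketch, assuming tshark is on the PATH:

# Ask tshark which protocols it knows and look for "someip".
# "tshark -G protocols" prints one registered protocol per line.
import subprocess

out = subprocess.run(["tshark", "-G", "protocols"], capture_output=True, text=True).stdout
print("someip registered:", any("someip" in line.lower() for line in out.splitlines()))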

sqlanydb windows could not load dbcapi

I am trying to connect to a SQL Anywhere database via Python. I have created the DSN, and I can use the command prompt to connect to the database using dbisql -c "DSN=myDSN". When I try to connect through Python using con = sqlanydb.connect(DSN="myDSN"), I get:
Traceback (most recent call last):
  File "<pyshell#5>", line 1, in <module>
    con = sqlanydb.connect(DSN= "RPS Integration")
  File "C:\Python27\lib\site-packages\sqlanydb.py", line 522, in connect
    return Connection(args, kwargs)
  File "C:\Python27\lib\site-packages\sqlanydb.py", line 538, in __init__
    parent = Connection.cls_parent = Root("PYTHON")
  File "C:\Python27\lib\site-packages\sqlanydb.py", line 464, in __init__
    'libdbcapi_r.dylib')
  File "C:\Python27\lib\site-packages\sqlanydb.py", line 456, in load_library
    raise InterfaceError("Could not load dbcapi. Tried: " + ','.join(map(str, names)))
InterfaceError: (u'Could not load dbcapi. Tried: None,dbcapi.dll,libdbcapi_r.so,libdbcapi_r.dylib', 0)
I was able to resolve the problem. I was never able to use sqlanydb.connect, so I ended up using pyodbc. My final connection string was con = pyodbc.connect(dsn="myDSN"). I think that sqlanydb is only fully functional with SQL Anywhere 17, and I was using a previous version.
Depending on how Sybase is locally installed, it could also be that the Python module really cannot find the (correct) library when looking up dbcapi.dll in your PATH. A quick fix for that is to create a new SQLANY_API_DLL environment variable containing the correct path, usually something like %SQLANY16%\Bin64\dbcapi.dll depending on what version you have installed (Sybase usually creates an environment variable pointing to its installation folder on a per-version basis). SQLANY_API_DLL is the initial None in the names the error message says it tried, and it takes priority over all the others.
One way to encounter the backtrace shown by the OP is running a 64-bit Python interpreter, which is unable to load the 32-bit dbcapi.dll. Try launching with "py -X-32" to force a 32-bit engine, or reinstall using a 32-bit Python engine.
Unfortunately sqlanydb's code that tries to be smart about finding dbcapi also ends up swallowing the exceptions thrown during loading. The sqlanydb author probably assumed that failure to load implies file not found, which isn't always the case.
I was stuck on this same problem, on both Windows 32-bit and 64-bit.
I'm using Python 3.9, and I found that since Python 3.8 the cdll.LoadLibrary function no longer searches PATH to find library files.
To fix this, I created the environment variable SQLANY_API_DLL pointing to the path of the SQL Anywhere installation's dbcapi.dll (in my case version 17), Bin32 or Bin64 depending on your system.
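The variable can also be set from Python itself, as long as it happens before sqlanydb is imported; this is only a sketch, and the dbcapi.dll path below is an assumption you would adjust to your installed version and bitness:

# Sketch: point sqlanydb at a specific dbcapi.dll via SQLANY_API_DLL.
# The path is an example for SQL Anywhere 17, 64-bit; adjust as needed.
import os
os.environ["SQLANY_API_DLL"] = r"C:\Program Files\SQL Anywhere 17\Bin64\dbcapi.dll"

import sqlanydb
con = sqlanydb.connect(DSN="myDSN")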

Control firewalld in CentOS via Python's dbus module?

My goal is to automate configuring firewalls on CentOS 7 machines using Python.
The OS comes with firewalld, so that's what I'm using. I looked into it and found that it uses dbus (I've never heard of or dealt with any of this - please correct me if anything I say is incorrect.)
I found this documentation for how to control dbus processes using Python:
http://dbus.freedesktop.org/doc/dbus-python/doc/tutorial.txt
I checked and the version of Python that comes with the OS includes the dbus module, so it seems like a promising start.
That document suggests that I needed to learn more about what firewalld exposes via the dbus interface. So I did some more research and found this:
https://www.mankier.com/5/firewalld.dbus
The first document says I need to start out with a "well-known name". Their example for such a thing was org.freedesktop.NetworkManager. The second document is titled firewalld.dbus, so I figured that was as good a name as any to try since the document doesn't explicitly give a name anywhere else.
The first document also says I need a name for an object path. Their example is /org/freedesktop/NetworkManager. The second document has an object path of /org/fedoraproject/FirewallD1.
I put those together and tried using the first method the first document suggested, SystemBus's get_object():
>>> from dbus import SystemBus
>>> bus = SystemBus()
>>> proxy = bus.get_object('firewalld.dbus', '/org/fedoraproject/FirewallD1')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 241, in get_object
follow_name_owner_changes=follow_name_owner_changes)
File "/usr/lib64/python2.7/site-packages/dbus/proxies.py", line 248, in __init__
self._named_service = conn.activate_name_owner(bus_name)
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 180, in activate_name_owner
self.start_service_by_name(bus_name)
File "/usr/lib64/python2.7/site-packages/dbus/bus.py", line 278, in start_service_by_name
'su', (bus_name, flags)))
File "/usr/lib64/python2.7/site-packages/dbus/connection.py", line 651, in call_blocking
message, timeout)
dbus.exceptions.DBusException:
org.freedesktop.DBus.Error.ServiceUnknown:
The name firewalld.dbus was not provided by any .service files
I also gave org.fedoraproject.FirewallD1 a try as the first parameter but ended up with a similar error message.
Why are these not working? Is there some way I can discover what the proper names are? It mentions ".service files" at the end of the error message... where would such a file be located?
Edit: Found several ".service files" by using find / -name *.service. One of them is at /usr/lib/systemd/system/firewalld.service... seems pretty promising so I'll check it out.
Edit 2: It's a rather short file... only about 10 lines. One of them says BusName=org.fedoraproject.FirewallD1. So I'm not sure why it said the name was not provided by any .service files... unless it's not using this file for some reason?
If the unit file says:
BusName=org.fedoraproject.FirewallD1
Then maybe you should try using that as your bus name:
>>> import dbus
>>> bus = dbus.SystemBus()
>>> p = bus.get_object('org.fedoraproject.FirewallD1', '/org/fedoraproject/FirewallD1')
>>> p.getDefaultZone()
dbus.String(u'FedoraWorkstation')
I figured this out based on the fact that this:
>>> help(bus.get_object)
says that the get_object call looks like:
get_object(self, bus_name, object_path, introspect=True, follow_name_owner_changes=False, **kwargs)
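For calls beyond getDefaultZone, it can also help to wrap the proxy in an explicit dbus.Interface; here is a small sketch, assuming the org.fedoraproject.FirewallD1 interface name from the firewalld.dbus documentation linked above:

# Same bus name and object path as above, with an explicit D-Bus interface.
import dbus

bus = dbus.SystemBus()
proxy = bus.get_object('org.fedoraproject.FirewallD1', '/org/fedoraproject/FirewallD1')
fw = dbus.Interface(proxy, dbus_interface='org.fedoraproject.FirewallD1')
print(fw.getDefaultZone())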
