CentOS 8, firewalld error `COMMAND_FAILED: 'python-nftables' failed`

When I try to reload firewalld, it tells me:
Error: COMMAND_FAILED: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: Numerical result out of range
JSON blob:
{"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet", "table": "firewalld", "name": "filter_IN_policy_allow-host-ipv6"}}}]}
I don't know why this happens; after searching Google, I still haven't been able to resolve it.

I had the same error message. I enabled verbose debugging on firewalld and tailed the logs to a file for a deeper dive (a sketch of the commands is below, after the traceback). In my case the exception was originally raised in nftables.py at line 361.
Exception:
2022-01-23 14:00:23 DEBUG3: <class 'firewall.core.nftables.nftables'>: calling python-nftables with JSON blob: {"nftables": [{"metainfo": {"json_schema_version": 1}}, {"add": {"chain": {"family": "inet", "table": "firewalld", "name": "filter_IN_policy_allow-host-ipv6"}}}]}
2022-01-23 14:00:23 DEBUG1: Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/firewall/core/fw.py", line 888, in rules
backend.set_rule(rule, self._log_denied)
File "/usr/lib/python3.6/site-packages/firewall/core/nftables.py", line 390, in set_rule
self.set_rules([rule], log_denied)
File "/usr/lib/python3.6/site-packages/firewall/core/nftables.py", line 361, in set_rules
raise ValueError("'%s' failed: %s\nJSON blob:\n%s" % ("python-nftables", error, json.dumps(json_blob)))
ValueError: 'python-nftables' failed: internal:0:0-0: Error: Could not process rule: Numerical result out of range
Line 361 in "nftables.py" is the raise statement shown in the traceback; the policy load that ends up tripping it starts from:
self._loader(config.FIREWALLD_POLICIES, "policy")
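For reference, enabling that verbose logging looks roughly like this (a sketch, assuming the stock CentOS 8 packaging, where the daemon options live in /etc/sysconfig/firewalld and the log goes to /var/log/firewalld):
# raise the daemon debug level (DEBUG3 lines appear from level 3 upward)
sed -i 's/^FIREWALLD_ARGS=.*/FIREWALLD_ARGS=--debug=10/' /etc/sysconfig/firewalld
systemctl restart firewalld
# follow the verbose log
tail -f /var/log/firewalld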
Why this is a problem:
Basically, nftables is the backend and firewalld is the frontend; they depend on each other to function. Each time you restart firewalld, it has to reconcile the backend state, in this case nftables. At some point during that reconciliation a conflict occurs in the Python code. That is unfortunate, as the only real solution will likely have to come from code improvements in how firewalld populates policies into nftables chains and tables.
A work-around:
The good news is that if, like me, you don't use IPv6, you can simply remove the policy rather than solve the underlying issue. The work-around steps are below.
Work-around Steps:
The proper way to remove the policy is with "firewall-cmd --delete-policy=allow-host-ipv6 --permanent", but I encountered other errors and Python exceptions when attempting that. Since I don't care about IPv6, I manually deleted the XML from the configuration and restarted the firewalld service.
rm /usr/lib/firewalld/policies/allow-host-ipv6.xml
rm /etc/firewalld/policies/allow-host-ipv6.xml
systemctl restart firewalld
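To confirm the policy is really gone after the restart (assuming a firewalld build with policy support, 0.9 or later):
firewall-cmd --get-policies
firewall-cmd --state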
Side Note:
Once I fixed this conflict, I also had some additional conflicts between nftables, iptables, and fail2ban that had to be cleared up. For that I just ran "fail2ban-client unban --all" to make fail2ban wipe all of the chains it had added to iptables.

pygit2 raises KeyError: 'the requested type does not match the type in the ODB'

I'm working on porting some Python 2 code to Python 3 with a single codebase. I'm using pygit2 0.28.2 on CPython 2.7 and pygit2 1.9.2 on CPython 3.10, at least for now.
I'm getting an error (-3) back from:
err = C.git_remote_push(self._remote, refspecs, opts)
...and payload.check_error(err) is mapping that to:
KeyError: 'the requested type does not match the type in the ODB'
That error only surfaces on CPython 3.10, not CPython 2.7.
I'm afraid I don't know what to make of it; I googled for about 90 minutes and didn't find much.
Here's the full traceback:
Traceback (most recent call last):
File "/app/shared/common/git/handlers.py", line 488, in Push
remote.push(temp3, callbacks=self.callbacks)
File "/usr/local/lib/python3.10/site-packages/pygit2/remote.py", line 257, in push
payload.check_error(err)
File "/usr/local/lib/python3.10/site-packages/pygit2/callbacks.py", line 93, in check_error
check_error(error_code)
File "/usr/local/lib/python3.10/site-packages/pygit2/errors.py", line 56, in check_error
raise KeyError(message)
KeyError: 'the requested type does not match the type in the ODB'
Can anyone please give me a nudge in the right direction? What types is it complaining about? To pygit2, the data passed appears to be pretty opaque.
Is it possible that pygit2 0.28.2 would 'force' always, while pygit2 1.9.2 will only force by request? We've got libgit2's "strict mode" turned off in Python 3.
Thanks!
It turned out that pygit2 0.28.2 works if we start with 0.28.2. If we start with something later, like 1.5.0, and then manually switch back to 0.28.2, the damage has already been done to the git repo, causing 0.28.2 to give errors too.
There are likely (somewhat) later versions that are happy as well, but that's another story.
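On the forcing question specifically: pygit2 hands your refspecs straight to libgit2, which follows git's refspec rules, so a leading + requests a forced (non-fast-forward) update. A minimal sketch, with a hypothetical repository path and branch name:

import pygit2

repo = pygit2.Repository('/path/to/repo')  # hypothetical local clone
remote = repo.remotes['origin']

# The leading '+' asks libgit2 to force the update of this one ref,
# the per-refspec equivalent of `git push --force`.
remote.push(['+refs/heads/main:refs/heads/main'],
            callbacks=pygit2.RemoteCallbacks())  # add credentials here if needed

That makes the push behaviour explicit rather than version-dependent, which at least rules the forcing theory in or out.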

Azure ML Studio: How to change input value with Python before it goes through data process

I am currently attempting to change the value of the input as it goes through the data process in Azure ML. However, I cannot find a clue about how to access the input data with Python.
For example, in Python you can access a column of data with:
print(dataframe1["Hello World"])
I tried changing the name of the Web Service Input and accessing it the way I did the other dataframe (e.g. sample):
print(dataframe["sample"])
But it returns an error, and from what I can read in it, the object is not a dataframe at all:
object of type 'NoneType' has no len()
I tried to look up solutions for the NoneType error, but found nothing good.
The whole error message:
requestId = 1f0f621f1d8841baa7862d5c05154942 errorComponent=Module. taskStatusCode=400.
{"Exception":{"ErrorId":"FailedToEvaluateScript","ErrorCode":"0085","ExceptionType":"ModuleException","Message":"Error 0085: The following error occurred during script evaluation, please view the output log for more information:"}}
---------- Start of error message from Python interpreter ----------
Caught exception while executing function: Traceback (most recent call last):
File "C:\server\invokepy.py", line 211, in batch
xdrutils.XDRUtils.DataFrameToRFile(outlist[i], outfiles[i], True)
File "C:\server\XDRReader\xdrutils.py", line 51, in DataFrameToRFile
attributes = XDRBridge.DataFrameToRObject(dataframe)
File "C:\server\XDRReader\xdrbridge.py", line 40, in DataFrameToRObject
if (len(dataframe) == 1 and type(dataframe[0]) is pd.DataFrame):
TypeError: object of type 'NoneType' has no len()
Process returned with non-zero exit code 1
---------- End of error message from Python interpreter ----------
Process exited with error code -2
I have also tried passing a Python script into the data flow, but it is not able to change the Web Service Input value the way I want.
I have tried looking on forums like MSDN and SO, but it's been difficult to find any information about this. Please let me know if you need any more information. I would greatly appreciate your help!
tl;dr: You also need to link the dataset you used for training to the same port you link the Web service input, so that the Execute Python Script module has something to work on.
You need to keep in mind that the Predictive experiment has some conventions that need to be followed (or learned the hard way :) ). One of them is that in order to use the Web service input, you need to pair it with an actual dataset, which Azure ML Studio can then use to infer structure and to provide you with some data while testing your predictive experiment. You can see it as some sort of 'ghost' module that doesn't do anything by itself.
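A minimal sketch of the Execute Python Script entry point shows where the NoneType comes from (the "sample" column name is taken from the question; the guard is purely illustrative):

import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    # With nothing linked to the input port, Azure ML hands the script None;
    # passing that None along is what later fails in DataFrameToRObject via len(None).
    if not isinstance(dataframe1, pd.DataFrame):
        raise ValueError("dataframe1 is not a DataFrame - link a dataset to the "
                         "same port as the Web service input")
    print(dataframe1["sample"])
    # The module expects a sequence of pandas DataFrames back.
    return dataframe1,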
Hope this helps.

ignore_invalid_triggers not working

I am using the pytransitions library (documented here) to implement a Finite State Machine. One of the features outlined is the ability to ignore invalid triggers. Here is the example as per the documentation:
# Globally suppress invalid trigger exceptions
m = Machine(lump, states, initial='solid', ignore_invalid_triggers=True)
If ignore_invalid_triggers is set to True, no error should be thrown for invalid triggers.
Here is a sample of the code I am trying to construct:
from transitions import Machine
states = ['changes ongoing', 'changes complete', 'changes pushed', 'code reviewed', 'merged']
triggers = ['git commit', 'git push', 'got plus2', 'merged']
# Initialize the state machine
git_user = Machine(states=states, initial=states[0], ignore_invalid_triggers=True, ordered_transitions=True)
# Create the FSM using the data provided
for i in range(len(triggers)):
    git_user.add_transition(trigger=triggers[i], source=states[i], dest=states[i+1])
print(git_user.state)
git_user.trigger('git commit')
print(git_user.state)
git_user.trigger('invalid') # This line will throw an AttributeError
The produced error:
changes ongoing
changes complete
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/transitions/core.py", line 58, in _get_trigger
raise AttributeError("Model has no trigger named '%s'" % trigger_name)
AttributeError: Model has no trigger named 'invalid'
Process finished with exit code 1
I am unsure of why an error is being thrown when ignore_invalid_triggers=True.
There is limited information on this library beyond the documentation on the official GitHub page. If anyone has any insight, I would appreciate the help.
Thanks in advance.
To count as merely an invalid trigger under the rules set out in the documentation, the trigger name has to be valid somewhere in the model; for instance, try the trigger "merged" from state "changes ongoing". You get an AttributeError because "invalid" is not a trigger at all: you have a list of four, and it's not one of them.
To see the effect of establishing "invalid" as a trigger, add an end-to-start transition (the last line below) after your nice linear loop:
# Create the FSM using the data provided
for i in range(len(triggers)):
    git_user.add_transition(trigger=triggers[i], source=states[i], dest=states[i+1])
git_user.add_transition(trigger="invalid", source=states[-1], dest=states[0])
Now your code should run as expected, ignoring that invalid transition.
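To see the difference between an unknown trigger and a known-but-invalid one, extend the script above with (comments describe the expected behaviour):

print(git_user.state)        # 'changes ongoing'
git_user.trigger('merged')   # known trigger, but not valid from this state:
                             # silently ignored because ignore_invalid_triggers=True
print(git_user.state)        # still 'changes ongoing'
git_user.trigger('invalid')  # registered by the add_transition above, so now ignored too
print(git_user.state)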

Ansible ERROR! no action detected in task

I am trying to run the playbook https://github.com/Datanexus/dn-cassandra.
Of the different deployment scenarios listed there, I am going for the multinode Cassandra setup described under "deployment scenarios".
I have setup a static inventory file.
cassandra-seed-01 ansible_ssh_host=192.168.0.17 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='keys/id_rsa'
cassandra-seed-02 ansible_ssh_host=192.168.0.18 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='keys/id_rsa'
cassandra-non-seed-01 ansible_ssh_host=192.168.0.22 ansible_ssh_port=22 ansible_ssh_user='root' ansible_ssh_private_key_file='keys/id_rsa'
[cassandra_seed]
192.168.0.17
192.168.0.18
[cassandra]
192.168.0.22
However when I try running the playbook it throws the following error:
ERROR! no action detected in task
The error appears to have been in
'/home/laumair/workspace/dn-cassandra/provision-cassandra.yml': line
21, column 9, but may be elsewhere in the file depending on the exact
syntax problem.
The offending line appears to be:
# then, build the seed and non-seed host groups
- include_role:
^ here
I would appreciate any sort of direction with this error; I have tried solutions for similar errors, but no luck so far.
include_role is available since Ansible 2.2.
Please upgrade your Ansible installation.
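A quick check and upgrade, assuming a pip-managed install (use your distribution's package manager otherwise):

ansible --version
pip install --upgrade ansible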

Error calling Python module function in MySQL Workbench

I'm kind of at my wits' end here, and so far have had no feedback from the MySQL Workbench bug reporting site, so I thought I'd throw this question/problem out to more sites.
I'm attempting to migrate from an MSSQL server on a Windows Server 2003 machine to a MySQL server running on a CentOS 6.5 VM. I can connect to the source and target databases and select schemata, and it runs through one pass of retrieving tables. After this the process fails and throws the following errors:
Traceback (most recent call last):
File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 409, in reverseEngineer
reverseEngineerProcedures(connection, schema)
File "/usr/lib64/mysql-workbench/modules/db_mssql_grt.py", line 1016, in reverseEngineerProcedures
for idx, (proc_count, proc_name, proc_definition) in enumerate(cursor):
MemoryError
Traceback (most recent call last):
File "/usr/share/mysql-workbench/libraries/workbench/wizard_progress_page_widget.py", line 192, in thread_work
self.func()
File "/usr/lib64/mysql-workbench/modules/migration_schema_selection.py", line 160, in task_reveng
self.main.plan.migrationSource.reverseEngineer()
File "/usr/lib64/mysql-workbench/modules/migration.py", line 353, in reverseEngineer
self.state.sourceCatalog = self._rev_eng_module.reverseEngineer(self.connection, self.selectedCatalogName, self.selectedSchemataNames, self.state.applicationData)
SystemError: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
ERROR: Reverse engineer selected schemata: MemoryError(""): error calling Python module function DbMssqlRE.reverseEngineer
Failed
I initially thought this was a memory error, so I upped the memory on the box to 16 GiB. The error also occurs regardless of database size; I've tried very small databases with hardly any tables.
Any thoughts? Thanks for looking.
Just in case anyone else runs into this: I had the same problem and fixed it by getting rid of non-ASCII characters in schemas, tables... basically all MSSQL objects. This was compounded by the fact that I had SQL# (www.sqlsharp.com) installed, which adds a number of functions and stored procs under a schema called SQL#. You can remove that with this command:
EXEC SQL#.SQLsharp_Uninstall
Once you get rid of non-ASCII chars, the migration works.
The OP (I assume) closed their bug report with this message:
[...] I figured out a work around, or the flaw in the system perhaps. Turns out that Null values were not allowed inside the Datetime fields when doing a migration. I turned every Datetime field in my database to a default value and migrated it successfully after that.
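The T-SQL for that datetime clean-up is simple; a sketch with hypothetical table and column names, to be repeated for each nullable datetime column before migrating:

UPDATE dbo.MyTable
SET CreatedAt = '1900-01-01'   -- any sentinel default you can live with
WHERE CreatedAt IS NULL;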
