"SignatureError: Failed to verify signature" - Okta, pySAML2 - python

For three days, I have been pulling my hair out trying to wrap my head around Okta & SAML.
On my local machine (OSX Mavericks), I am able to successfully follow the steps listed here: http://developer.okta.com/docs/guides/pysaml2
Things work.
But after moving everything over to our production server (a CentOS box) running nearly identical code, I am faced with this "SignatureError: Failed to verify signature" error:
Traceback (most recent call last):
  auth_response = saml_client.parse_authn_request_response(SAMLResponse, entity.BINDING_HTTP_POST)
  File "/usr/local/lib/python2.7.11/lib/python2.7/site-packages/saml2/client_base.py", line 599, in parse_authn_request_response
    binding, **kwargs)
    response = response.loads(xmlstr, False, origxml=origxml)
  File "/usr/local/lib/python2.7.11/lib/python2.7/site-packages/saml2/response.py", line 510, in loads
    self._loads(xmldata, decode, origxml)
  File "/usr/local/lib/python2.7.11/lib/python2.7/site-packages/saml2/response.py", line 335, in _loads
    **args)
  File "/usr/local/lib/python2.7.11/lib/python2.7/site-packages/saml2/sigver.py", line 1756, in correctly_signed_response
    class_name(response), origdoc)
  File "/usr/local/lib/python2.7.11/lib/python2.7/site-packages/saml2/sigver.py", line 1571, in _check_signature
    raise SignatureError("Failed to verify signature")
SignatureError: Failed to verify signature
I have scoured the internet looking for a way to troubleshoot this error. I am new to SAML and Okta.
My assumption is that this has something to do with xmlsec1 acting differently on our production machine. But the versions are identical. There are many dependencies so I'm not sure where the problem might be.
Has anyone run into this error? Any thoughts on what I might be able to try?

I know this is a little late, but in case someone else runs into this:
pysaml2 does a lot of logging through Python's built-in logging module. I defined a handler for the saml2.sigver logger and that gave a lot of info. In those logs I found this:
Error: unable to load xmlsec-openssl library. Make sure that you have
it installed, check shared libraries path (LD_LIBRARY_PATH)
environment variable or use "--crypto" option to specify different
crypto engine.
Turns out I needed to install xmlsec1-openssl.
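For anyone who wants to reproduce that logging setup, here is a minimal sketch using Python's standard logging module (the logger name saml2.sigver is the one mentioned above; adjust the level and handler to taste):
import logging

# Route pysaml2's signature-verification logging to stderr so the xmlsec1 errors become visible
logger = logging.getLogger("saml2.sigver")
logger.setLevel(logging.DEBUG)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)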
Hope this helps someone in the future.

Dealing with xmlsec1 can be extremely frustrating!
The main thing I suggest is enabling debugging in PySAML2, setting the PYSAML2_KEEP_XMLSEC_TMP environment variable, and/or manually enabling this code path in sigver.py. The general idea is to get a look at the xmlsec1 command that PySAML2 is calling and to have PySAML2 leave the temporary files around, so that you can run the command yourself.
As I recall, most of the issues that I've run into in the past involved PySAML2 not finding the xmlsec1 binary. The get_xmlsec_binary() function in sigver.py is responsible for finding the xmlsec1 binary. I suggest that you take a look at the code in get_xmlsec_binary() and make sure that it is looking in the right places on your system.
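As a quick check of what PySAML2 will actually pick up, something along these lines should work, assuming your pysaml2 version exposes get_xmlsec_binary from saml2.sigver as described above (the search paths are just examples):
from saml2.sigver import get_xmlsec_binary

# Prints the xmlsec1 executable PySAML2 would use; raises an error if none is found
print(get_xmlsec_binary(["/usr/bin", "/usr/local/bin"]))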

Depending on the operating system, you may also need to install additional libraries.
In my case, I got the issue on a CentOS server, so I needed to install two more dependencies in addition to xmlsec1:
yum install libffi-devel xmlsec1 xmlsec1-openssl
This solved my problem.
You should also have a look at Okta's documentation. They have a guide on how to use PySAML2 to add support for Okta (via SAML) to applications written in Python.
https://developer.okta.com/code/python/pysaml2/

Related

Interfacing Thorlabs equipment with pylablib

I'm trying to use pylablib to access a Thorlabs motor using Python 3.6, but I can't open the device. The documentation seems simple, but I'm new to Python and can't work out what I'm doing wrong.
https://pylablib.readthedocs.io/en/latest/.apidoc/pylablib.aux_libs.devices.html#pylablib.aux_libs.devices.Thorlabs.KDC101
import pylablib as pll
from pylablib.aux_libs.devices import Thorlabs

with Thorlabs.KDC101("27254309") as stage:
    stage.get_status_n()
I get the error
File "...\backend.py" line 674, in init
raise self.BackendOpenError(e)
pylablib.core.devio.backend.BackendOpenError:
Device Not Opened
Can anyone suggest how I might be referring to the motor incorrectly, or what else I might try?
Thanks.
I know this probably isn't gonna help but I had a similar problem and then after five minutes realized the motors weren't turned on. Felt pretty stupid.
Edit: another issue I had was that when I installed pylablib, I simply used:
pip install pylablib
Instead, you need to do this:
pip install pylablib[devio,gui]
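Another thing worth checking is whether the motor is visible at all before opening it. A hypothetical sketch, assuming this pylablib version exposes Thorlabs.list_kinesis_devices() in the same module as KDC101 (check the API docs linked above if the name differs):
from pylablib.aux_libs.devices import Thorlabs

# List connected Kinesis devices; the serial "27254309" should appear here if the motor is powered and plugged in
print(Thorlabs.list_kinesis_devices())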

RDKit installation under Windows and Python3.7.4

RDKit could be a nice package if it wasn't so complicated to install.
Here on SO there are several questions about problems installing RDKit, but they concern different operating systems or environments.
My configuration is:
Win10, Python 3.7.4, pip is installed, PATH is set, PYTHONPATH is set.
The installation of other modules is working fine via python -m pip install <package>.
I'm aware that the site recommends the fastest installation with Anaconda.
However, I don't have and don't want Anaconda.
On the webpage it says:
"Get the appropriate windows binary build from: https://github.com/rdkit/rdkit/releases".
However, there are no binaries of the latest versions.
This means I would have to build it from source. I'm hesitant because the process seems pretty complicated (many extra installations, with new problems and unknowns), and the instructions seem outdated and incomplete for somebody building the binaries from source for the first time.
So, then I tried some unofficial binaries of RDKit.
If I unpack them and set the paths according to the instructions, I get this error message:
>>> from rdkit import Chem
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\xyz\Programs\RDKit\rdkit\__init__.py", line 2, in <module>
from .rdBase import rdkitVersion as __version__
ImportError: DLL load failed: The specified module could not be found.
So, finally my questions:
How to properly install RDKit with the above mentioned configuration?
What is the specified DLL which is missing?
Where is it expecting it and searching it?
Are these RDKit 3.6 binaries maybe incompatible with Python 3.7.4?
I'm pretty sure it is probably a "small" thing (a path here or a check there), but I'm stuck. Thank you for any hints.
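One thing I can check in the meantime for question 4: a binary RDKit build has to match both the interpreter version and its bitness (32-bit vs. 64-bit), and both can be printed directly:
import sys, platform

# A prebuilt RDKit must match both of these exactly
print(sys.version)              # e.g. 3.7.4 ... [MSC v.1916 64 bit (AMD64)]
print(platform.architecture())  # e.g. ('64bit', 'WindowsPE')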
Update:
Apparently, it is not just a "small" thing. The chances of getting this to work are most likely very low.
In the meantime I found this:
https://github.com/rdkit/rdkit/issues/1812
https://github.com/rdkit/rdkit/issues/2389
The author of RDKit writes (April 2019):
I would be happy to be able to do pip distributions of the RDKit, but
to the best of my knowledge no one has managed to figure out how to
make it actually work.
I'd be happy to accept a PR from someone who has figured this out, but
I am not likely to have the time to do this myself anytime in the near
future.
So, if anybody feels capable of achieving this, please feel free.
I will invest time in something else or will have to switch to Anaconda if I want to use RDKit.
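For reference, if I do end up going the Anaconda route, the install described on the RDKit site is, as far as I can tell, a one-liner along these lines (the recommended channel has changed over time, so treat it as a sketch):
conda install -c rdkit rdkit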
On the webpage you linked there is a section about missing DLLs:
"In Win7 systems, you may run into trouble due to missing DLLs, see one thread from the mailing list: http://www.mail-archive.com/rdkit-discuss#lists.sourceforge.net/msg01632.html You can download the missing DLLs from here: http://www.microsoft.com/en-us/download/details.aspx?id=5555"
Not sure if this helps

How to run WordCountTopology from storm-starter in Intellij

I have been working with Storm for a while already, but now want to get started with development. As suggested, I am using IntelliJ (up to now I was using Eclipse and only wrote topologies against the Java API).
I was also looking at
https://github.com/apache/storm/tree/master/examples/storm-starter#intellij-idea
This documentation is not complete, and I was not able to run anything in IntelliJ at first. I figured out that I needed to remove the scope of the storm-core dependency (in the storm-starter pom.xml); found here: storm-starter with intellij idea, maven project could not find class.
After that I was able to build the project. I can also run ExclamationTopology with no problems within IntelliJ. However, WordCountTopology fails.
First I got the following error:
java.lang.RuntimeException: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
Serializer Exception:
Traceback (most recent call last):
File "splitsentence.py", line 16, in
import storm
ImportError: No module named storm
I was able to resolve it via: apt-get install python-storm (found on StackOverflow).
Update: installing python-storm is not actually required to make it work.
However, I don't speak Python and was wondering what the problem is and why I could resolve it like this. Just want to get deeper into it. Maybe someone can explain.
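For context, splitsentence.py in storm-starter is tiny. As far as I can tell it looks roughly like the following, which is why both errors point at the storm helper module it imports:
import storm

# A multilang bolt: receives tuples from the Java side and emits one word per input sentence
class SplitSentenceBolt(storm.BasicBolt):
    def process(self, tup):
        words = tup.values[0].split(" ")
        for word in words:
            storm.emit([word])

SplitSentenceBolt().run()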
Unfortunately, I am getting a different error now:
java.lang.RuntimeException: backtype.storm.multilang.NoOutputException: Pipe to subprocess seems to be broken! No output read.
Serializer Exception:
Traceback (most recent call last):
File "splitsentence.py", line 18, in
class SplitSentenceBolt(storm.BasicBolt):
AttributeError: 'module' object has no attribute 'BasicBolt'
I did not find any solution on the Internet. Asking at dev@storm.apache.org did not help either. I got the following suggestion:
I think it was always assumed that a topology would be invoked through the storm command line, so the working directory would be ${STORM-INSTALLATION}/bin/storm. Since storm.py is in this directory, splitsentence.py would be able to find the storm module. Can you set the working directory to a path where storm.py is present and then try? If it works, we can add it to the documentation later.
However, changing the working directory did not solve the problem.
And as I am not familiar with Python and am new to IntelliJ, I am stuck now. Because ExclamationTopology runs, I guess my basic setup is correct.
What am I doing wrong? Is it possible at all to run WordCountTopology in a LocalCluster in IntelliJ?
Unfortunately, AFAIK you can't use the multilang feature with LocalCluster without a packaged jar file.
ShellProcess relies on the codeDir of the TopologyContext, which is set by the supervisor.
Workers are serialized to stormcode.ser, but the multilang files have to be extracted outside of that serialized file so that the python/ruby/node/etc. processes can load them.
Accomplishing this in distributed mode is easy because there is always a user-submitted jar, and the supervisor knows that it is what the user submitted.
But accomplishing this in local mode is not easy, because the supervisor cannot know which jar the user submitted, and users can run a topology in local mode without packaging at all.
So the supervisor in local mode looks for a resource directory ("resources") in each jar (anything ending with "jar") on the classpath, and copies the first occurrence to codeDir.
storm jar places the user's topology jar at the front of the classpath, so it runs without issue.
So normally it is expected that ShellProcess cannot find "splitsentence.py"; maybe your working directory or PYTHONPATH did the trick.
I struggled with a similar issue, not with the sample topology but with my own topology using a Python bolt.
I also experienced the "AttributeError: 'module' object has no attribute 'BasicBolt'" exception, both in local mode and when submitting to the cluster.
There are very few resources on this, I found your question and little else discussing this issue.
In case someone else has the same problem:
Make sure you include the correct Maven "multilang-python" dependency in your pom file. This will package the correct run time dependencies into the JAR file needed to run your topology.
I managed to run it in my VirtualBox VM with Storm version 1.2.2:
Just download https://github.com/apache/storm/blob/master/storm-multilang/python/src/main/resources/resources/storm.py and put it into any folder you want, for example /apache-storm-1.2.2/examples/storm-starter/multilang/resources/, and then change the main function:
public static void main(String[] args) throws Exception {
    SplitSentence pythonSplit = new SplitSentence();

    // Point the Python subprocess at the folder that contains storm.py
    Map<String, String> env = new HashMap<>();
    env.put("PYTHONPATH", "/apache-storm-1.2.2/examples/storm-starter/multilang/resources/");
    pythonSplit.setEnv(env);

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("spout", new RandomSentenceSpout(), 5);
    builder.setBolt("split", pythonSplit, 8).shuffleGrouping("spout");
    builder.setBolt("count", new WordCount(), 12).fieldsGrouping("split", new Fields("word"));

    Config conf = new Config();
    conf.setDebug(true);

    if (args != null && args.length > 0) {
        // Submit to a real cluster
        conf.setNumWorkers(3);
        StormSubmitter.submitTopologyWithProgressBar(args[0], conf, builder.createTopology());
    }
    else {
        // Run in local mode for ten minutes, then shut down
        conf.setMaxTaskParallelism(3);
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("word-count", conf, builder.createTopology());
        Thread.sleep(600000);
        cluster.shutdown();
    }
}
The full instructions can be found on my blog, which also covers other issues encountered when running it in local mode and local cluster mode: https://lyhistory.com/storm/

Error in launching Stanford corenlp server

I want to use the Stanford parser in order to get the typed dependencies of a text. I was trying to follow the instructions provided in https://bitbucket.org/torotoki/corenlp-python, however I got an error, both while launching the server and while using the Python library:
from corenlp import *
corenlp = StanfordCoreNLP("./stanford-corenlp-full-2014-08-27/")
This is the error:
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/
pexpect/__init__.py", line 1554, in expect_loopraise EOF(str(err) + '\n' + str(self))
pexpect.EOF: End Of File (EOF). Empty string style platform.
It seems the problem is with the pexpect package. I already installed it as explained in the instructions. I saw a similar problem here (EOF when using pexpect and pxssh), but it is different from my case. I am using a Mac and Python 2.7.
Could you please help me?
This is not the solution you wanted.
Are you sure you get the same error if you use corenlp via the package implementation?
pexpect is required only when there is interaction with the interactive shell of corenlp, which happens in the server implementation. In the package implementation, a file containing a list of input files is given to the parser (using batch_parse).
In the server implementation, there is a JSON-RPC server created by the wrapper. If you want to use the server implementation, you have to remotely call one of its procedures, and you don't need the package on the client end. I guess you may have pasted the wrong code here.
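If I remember the corenlp-python README correctly, the package (batch) implementation looks roughly like this; the directory names here are placeholders:
from corenlp import batch_parse

corenlp_dir = "stanford-corenlp-full-2014-08-27/"
raw_text_directory = "sample_raw_text/"               # a folder of plain-text input files
parsed = batch_parse(raw_text_directory, corenlp_dir)  # returns a generator of parsed documents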
PS:
I am one of the corenlp users myself, but I used a different wrapper.
I hit the same issue on a Mac. For posterity: just follow the instructions at https://bitbucket.org/torotoki/corenlp-python verbatim to use the Python wrapper around Stanford CoreNLP. This package gets around the "End Of File (EOF). Empty string style platform." error that comes up on some Mac versions (I didn't check other operating systems, but I know someone who also got it on Windows).
Also don't forget to download the stanford-corenlp-full-2014-08-27.zip instead of the latest version.
The solution of using expect(pexpect.EOF) didn't work for me (https://github.com/pexpect/pexpect/blob/master/doc/overview.rst) as the Stanford-corenlp jar was not loading properly in this case.
I also tried https://github.com/dasmith/stanford-corenlp-python as well as https://github.com/Wordseer/stanford-corenlp-python, but neither of them worked; both threw the EOF error.

OpenCV mergevec issues

I'm running Windows 7, and I'm trying to get some Haar training done to make a Haar classifier. I've got to the point where I need to merge a folder full of .vec files. I've been working on this for the better part of a day. I've tried following Coding Robin's tutorial, but I get this error:
g++.exe": pkg-config: No such file or directory
g++.exe": opencv -I.: No such file or directory
g++.exe": installation problem, cannot exec `cpp': No such file or directory
is this "installation problem" a problem with my g++ install? I'm still not sure.
those files (or directories) aren't in my opencv folder so i'm not really sure what to do about that. I vaguely remember reading that those were for if you were installing it with linux or something so i tried a different method.
I couldn't get Naotoshi Seo's to work because i can't download the mergevec.exe file anywhere. I always get a "your computer or network may be sending automated queries. To protect our users, we can't process your request right now." I've done virus scans i've tried downloading from different computers and networks nothing works. since the previous method of compiling the mergevec.cpp file didn't work for me either, I then looked for yet another method where i found this tutorial[3] for using python. So I installed python 2.7.9 and ran this in command prompt
"C:\Users\Austin\Desktop\Recog_Project>python mergevec.py -v samples -o weed_samples.vec"
and i got this as a result
Traceback (most recent call last):
File "mergevec.py", line 170, in <module>
merge_vec_files(vec_directory, output_filename)
File "mergevec.py", line 133, in merge_vec_files
val = struct.unpack('<iihh', content[:12])
struct.error: unpack requires a string argument of length 12
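From what I've read, this struct.error can mean that one of the .vec files in the input folder is empty or truncated (shorter than the 12-byte header the script tries to unpack). A quick, hypothetical check over the samples folder from the command above would be:
import glob, os

# Flag any .vec file too small to contain the 12-byte header mergevec.py expects
for path in glob.glob(os.path.join("samples", "*.vec")):
    if os.path.getsize(path) < 12:
        print("suspicious vec file:", path, os.path.getsize(path), "bytes")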
I don't know what to do anymore to try and get this to work.
I've tried installing Ubuntu on a virtual machine, but I can't even figure out how to change the resolution from 640x480. I ran these commands in the terminal, restarted, and got nothing:
sudo apt-get install virtualbox-guest-dkms virtualbox-guest-utils virtualbox-guest-x11
I also did something with some drivers, but I can't remember what it was. Basically this is my last hope; I'm out of ideas. I'll of course keep looking for answers and will post any progress I make. Any help at all would be greatly appreciated, as my job is on the line. I could Skype screen-share if it would be helpful too.
Thanks in advance.
[3]: github.com/wulfebw/mergevec (guess I need more rep to post additional links).
Someone answered this within a few hours on the OpenCV Q&A site. If you have any OpenCV questions, I highly recommend you go there first, as it is much more likely that someone will answer your question there. Here's the answer I received:
You do not need to merge vec files for object classifier training using the cascade-of-weak-classifiers approach. I keep wondering why people merge vec files, and the answer is always that they want to create artificial data vectors. Avoid this at all costs if you want your models to do something sensible.
Skip the haartraining interface completely and move on to the traincascade interface, which is better supported and less buggy.
Start with the traincascade and createsamples information in the Description.
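For anyone following that advice, createsamples and traincascade are command-line tools that ship with OpenCV; a rough invocation looks something like the following (file names, window sizes, and sample counts are placeholders):
opencv_createsamples -info positives.txt -num 1000 -vec samples.vec -w 24 -h 24
opencv_traincascade -data classifier -vec samples.vec -bg negatives.txt -numPos 900 -numNeg 450 -numStages 20 -w 24 -h 24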
