I'm using suds 0.3.8, Python 2.4.3, and Django 1.1.1. The code I inherited sets a long duration for the cached schema files, but the cache is still expiring on the default cadence of once every 24 hours. The external servers hosting the schemas are spotty, so the site goes down nightly and I'm at the end of my rope.
Any idea what is jacked up in this code?
from suds.client import Client
from suds.xsd.doctor import Import, ImportDoctor

imp = Import('http://domain2.com/url')
imp.filter.add('http://domain3.com/url')
imp.filter.add('http://domain4.com/url')
imp.filter.add('http://domain5.com/url')
d = ImportDoctor(imp)
url = "http://domain.com/wsdl"
client = Client(url, doctor=d, timeout=30)
# attempt to lengthen the cache duration after the client has been built
clientcache = client.options.cache
clientcache.setduration(days=360)
Answering my own question:
This ended up not being a version issue, but user error. Unfortunately the suds documentation isn't as clear as it could be. Reading it, you would think the code above should work, but (on suds 0.3.9+) it should be written as:
from suds.client import Client
from suds.xsd.doctor import Import, ImportDoctor
from suds.cache import ObjectCache

imp = Import('http://domain2.com/url')
imp.filter.add('http://domain3.com/url')
imp.filter.add('http://domain4.com/url')
imp.filter.add('http://domain5.com/url')
d = ImportDoctor(imp)
# configure the cache before constructing the client
oc = ObjectCache()
oc.setduration(days=360)
url = "http://domain.com/wsdl"
client = Client(url, doctor=d, cache=oc, timeout=30)
Looking at it now, it makes complete sense that the cache has to be configured before the Client is initialized.
Hopefully this will help anyone else trying to set a suds cache duration and finding that suds seems to be ignoring the settings.
The problem may be a lack of proper support in the default cache type in suds 0.3.8, or possibly even a bug in that version. If you're able to upgrade to suds 0.3.9 or later (the latest is 0.4), this behavior should work as expected.
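For reference, the upgrade itself is a one-liner, assuming suds was installed from PyPI (on a Python 2.4-era box easy_install may be all that's available):
pip install --upgrade suds
# or, on older setups without pip:
easy_install -U suds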
Related
I am running a Flask restful API behind an NGINX web server on AWS. I am hitting that with a python module from my Pi.
Everything worked fine when I was using HTTP to make calls to the API, but I just locked down my API so only HTTPS is possible. I changed the URL used by my Python module, but it now fails. The code is quite simple...here is an extract:
jsonpkg = {'subscriberID': self.api_login, 'token': self.api_token,
           'content': speech_content}
headers = {'Content-Type': 'application/json'}
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg), headers=headers)
The values are being correctly set by the class init section, and I am importing the requests module at the top. Error messages indicate it is using Python 2.7. However, when I monitor the API I can see it's not even hitting the server. I can point a browser at the API and it works fine.
Am I to understand the requests module in python 2.7 does not support https?
Are there additional parameters I need to send for https?
Aha! With a little more digging into the requests module docs I found the answer. If I use the following
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg), headers=headers, verify=False)
then it works. So the issue is with verifying the certificate. I am not quite sure why the browser gets by without this...but perhaps it handles that extra step automatically. So I either need to NOT verify the cert or have a local copy of it that can be verified.
Final Update:
I finally worked out how to concatenate my site certificate with the chain certificate (and understand why). This site here was a great help. Also, once they are concatenated you will probably get a second error; if you google it, you will find it is caused by the need for a carriage return after the first certificate and before the second (edit the resulting concatenated file with Notepad). I was then able to return the post to using "verify=True", which made the warnings about no verification go away.
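For the "local copy" alternative mentioned above, requests also accepts a path to a CA bundle through the verify parameter. A minimal sketch, with a hypothetical path to the concatenated site-plus-chain file:
# hypothetical path to the concatenated site + chain certificate
cert_bundle = '/path/to/site-plus-chain.pem'
r = requests.post(self.api_apiurl, data=json.dumps(jsonpkg),
                  headers=headers, verify=cert_bundle)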
I'm trying to use the Uber API in Python but I can't even get the basic commands to work. I'm following the code suggested on the GitHub page (https://github.com/uber/rides-python-sdk).
from uber_rides.session import Session
session = Session(server_token='xxxxxx')
from uber_rides.client import UberRidesClient
client = UberRidesClient(session)
response = client.get_products(37.77, -122.41)
products = response.json.get('products')
When I run this I get the following error - KeyError: u'x-rate-limit-limit'
I did make a developer account with Uber and I've tried using different Server Tokens, but none of them work.
Can anyone help?
You are experiencing an issue that was resolved with the latest SDK fix (GitHub issue). It happened because the Python SDK was upgraded to use the v1.2 endpoints of the Uber API. However, with the upgrade to v1.2, Uber also deprecated the rate-limiting headers (X-Rate-Limit-Limit, X-Rate-Limit-Remaining, X-Rate-Limit-Reset), and the older SDK version still expects them. That's what's causing your trouble.
In order to resolve your issue, please install the newest SDK version (> 0.2.7.1).
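Assuming the SDK was installed from PyPI under the name uber_rides, the upgrade is simply:
pip install --upgrade uber_rides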
You are getting rate limited. This means you are sending requests so frequently that Uber believes you are doing it maliciously. As Uber advises, you should "spread out your requests," by using time.sleep() for example.
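A minimal sketch of what spreading out requests could look like with the client from the question; the coordinate list and the one-second pause are just illustrative values:
import time

# hypothetical list of pickup coordinates to query
coordinates = [(37.77, -122.41), (40.71, -74.01)]

for lat, lng in coordinates:
    response = client.get_products(lat, lng)
    print(response.json.get('products'))
    time.sleep(1)  # pause between calls so the rate limit is not hit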
If there is someone out there who has already worked with Solr and a Python library to index/query Solr, would you be able to try to answer the following question?
I am using the mysolr Python library, but there are others out there (like pysolr) and I don't think the problem is related to the library itself.
I have a default multicore Solr setup, so normally no authentication is required. I don't need it to access the admin page at http://localhost:8080/solr/testcore/admin/ either.
from mysolr import Solr
solr = Solr('http://localhost:8080/solr/testcore/')
response = solr.search(q='*:*')
print("response")
print(response)
This code used to work, but now I get a 401 reply from Solr...just like that; no changes have been made to the Python virtual env containing mysolr or to the Solr setup. Still, something must have changed somewhere, but I'm out of clues.
What could be the causes of a Solr 401 response?
Additional info: this script and more advanced scripts do work on another PC, just not on the one I am working on. Also, adding "/select?q=*:*" behind the URL in the browser does return the correct results, so Solr itself is set up correctly; it probably has something to do with my computer. Could Windows settings (of any kind) have an impact on how Solr responds to requests from Python? The Python env itself has been reinstalled several times to no avail.
Thanks in advance!
The problem was: proxy.
If this exact situation ever happens to you and you are behind a proxy, check whether your HTTP and HTTPS proxy environment variables are set. If they are, they might cause the Python session to go through the proxy when it shouldn't (connecting to localhost via the proxy).
It didn't cause any trouble for months, but out of the blue it did, so whether you encounter this might depend on how your IT department set up your proxy or made some other change...somewhere.
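A quick sketch of how to check for the proxy variables and exclude localhost from proxying; this assumes mysolr issues its HTTP calls through requests, which honours these environment variables:
import os
from mysolr import Solr

# See whether a proxy is configured in the environment
print(os.environ.get('http_proxy'))
print(os.environ.get('https_proxy'))

# Make sure traffic to localhost bypasses the proxy
os.environ['NO_PROXY'] = 'localhost,127.0.0.1'

solr = Solr('http://localhost:8080/solr/testcore/')
response = solr.search(q='*:*')
print(response)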
thank you everyone!
What's the most current form of OAuth for Python 3?
I'm trying to create a stock screener using my broker's API, which uses OAuth. Most of the info I find is out of date or conflicting. I've seen the following modules referenced:
Oauth - Seems to be the original, now outdated. I get an error of "'module' object has no attribute 'Consumer'"
Oauth2 - Newer version, apparently also outdated? The one most referenced online. Glitches out in pip/I can't figure out how to install it.
Oauthlib - IIRC, claims to be the new replacement for Oauth and Oauth2
Rauth.OAuth2Service - Also potentially replacement for Oauth and Oauth2?
Requests - ?
Oauth_hook - ?
pyoauth2 - I get an error about not having a module named "client" in pyoauth2's init.
None of them seem to work as expected, and I have a feeling that this is due to poor OAuth support on Python 3. Have you gotten OAuth to work in Python 3? If so, how did you do it?
It looks like requests_oauthlib works. Here's code I used that works in Python 3. I'm posting it because most of the example code I found used formats that I couldn't get working.
import requests
from requests_oauthlib import OAuth1

client_key = ''
client_secret = ''
resource_owner_key = ''
resource_owner_secret = ''

def query(queryurl):
    headeroauth = OAuth1(client_key, client_secret, resource_owner_key,
                         resource_owner_secret, signature_type='auth_header')
    return requests.get(queryurl, auth=headeroauth)

query('http://website.com')
Author of rauth here: rauth is a client library which currently does not officially support Python 3.
However, we are working on it, and there's an active branch (aptly named "python-3") over at GitHub which works. You're free to use it, but bear in mind that things may change slightly when we officially release support for it later on. With that said, it would be great to have people out in the real world testing it so that we can make changes to accommodate the Python 3 ecosystem.
Also note: oauthlib is not a replacement for rauth and is not a client library. It attempts to be a generic solution, much like python-oauth2 was, but it doesn't provide a client, unlike python-oauth2.
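If you want to try that branch before official support lands, an install along these lines should work (assuming the repository lives at github.com/litl/rauth; adjust the URL if yours differs):
pip install git+https://github.com/litl/rauth.git@python-3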
Has anyone had success using Redis as a Beaker backend? Can you point me to a link or library that shows how to do it? I'm looking for any library that does this but couldn't find anything with a Google search.
I posted to the Pylons user group and this information resolved my question:
http://groups.google.com/group/pylons-discuss/msg/a1144aa1ca8e0417
Here are the steps that worked for me:
easy_install redis
easy_install pip
pip install git+git://github.com/bbangert/beaker_extensions.git
Edit Pylons' development.ini
[app:main]
full_stack = true
static_files = true
cache_dir = %(here)s/data
beaker.session.type = redis
beaker.session.url = 127.0.0.1:6379
beaker.session.key = appname
(Optional) Edit this file and change the serialization method to JSON. Even though JSON is not as efficient byte for byte, I like how it is easily readable and relatively well supported across the technologies I've chosen:
https://github.com/bbangert/beaker_extensions/blob/master/beaker_extensions/redis_.py
Posted by Jeff Tchang
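As a quick sanity check that sessions really end up in Redis, something like this inside an existing Pylons controller action should work (a sketch assuming the development.ini above is loaded and a local redis-server is listening on 127.0.0.1:6379):
from pylons import session

def index(self):
    # Increment a per-visitor counter; the value should now be stored in
    # Redis rather than under cache_dir on disk.
    session['visits'] = session.get('visits', 0) + 1
    session.save()
    return 'visits: %d' % session['visits']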