Pyramid subrequests - python

I need to make GET, POST, PUT, etc. requests to another URI for search, but I cannot find a way to do that internally with Pyramid. Is there any way to do it at the moment?

Simply use the existing Python libraries for calling other web servers.
On Python 2.x, use urllib2; on Python 3.x, use urllib.request instead. Alternatively, you could install requests.
Do note that calling external sites from your server while serving a request yourself could mean your visitors end up waiting for a third-party web server that has stopped responding. Make sure you set decent timeouts.
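As a minimal sketch of that advice (assuming the requests package is installed; the search URL is just a placeholder), a timeout can be passed directly to the call:
import requests

search_url = "http://search.example.com/query"  # hypothetical search backend
try:
    # A (connect, read) timeout keeps your own request handler from hanging
    # if the third-party server stops responding.
    resp = requests.get(search_url, params={"q": "pyramid"}, timeout=(3.05, 10))
    resp.raise_for_status()
    results = resp.json()
except requests.RequestException:
    results = None  # fall back gracefully instead of blocking the visitor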

Pyramid uses WebOb, which has a client API as of version 1.2:
from webob import Request
r = Request.blank("http://google.com")
response = r.send()
Generally, anything you want to override on the request you just pass in as a parameter:
from webob import Request
r = Request.blank("http://facebook.com",method="DELETE")
Another handy feature is that you can see the request as the HTTP that is passed over the wire:
print r
DELETE / HTTP/1.0
Host: facebook.com:80
See the WebOb docs for details.

Also check the response status code: response.status_int
I use it, for example, to introspect my internal URIs and see whether or not a given relative URI is really served by the framework (e.g., to generate breadcrumbs and render intermediate paths as links only if there are pages behind them).
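As a minimal sketch of that idea (the helper name and the way you obtain your WSGI app are assumptions, not part of the original answer), the blank request can be run directly against your own WSGI application instead of going over the network:
from webob import Request

def uri_exists(wsgi_app, path):
    # Build a request for an internal path and run it straight through the
    # WSGI app, so no network round-trip is involved.
    req = Request.blank(path)
    resp = req.get_response(wsgi_app)
    return resp.status_int != 404

# Hypothetical usage: render "/docs/intro" as a link only if it resolves.
# if uri_exists(wsgi_app, "/docs/intro"):
#     ...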

Related

Get in python what web server is used by a website

How can I know if a website is using Apache, nginx, or something else, and get this information in Python? Thanks in advance
This information, if available, is given in the headers of the response to an HTTP request. With Python you can perform HTTP requests using the requests module.
Make a simple GET request to the site of interest and then print the headers attribute of the returned object.
import requests
r = requests.get(YOUR_SITE)
print(r.headers)
The output is a dictionary of keys and values; you have to look for the Server header:
server = r.headers['Server']
Be aware that not all websites return this information, for several reasons, so you may not find this key in the response headers.
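As a minimal sketch of that caveat (placeholder URL), using dict-style .get() avoids a KeyError when the header is missing:
import requests

r = requests.get("https://example.com")
# Header lookup is case-insensitive; .get() returns None if the site hides it.
server = r.headers.get("Server")
print(server or "Server header not disclosed")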

Generate the AWS HTTP signature from boto3

I am working with the AWS Transcribe streaming service, which boto3 does not support yet, so to make HTTP/2 requests I need to manually set up the authorization header with "AWS Signature Version 4".
I've found some example implementation, but I was hoping to just call whatever function boto3/botocore have implemented using the same configuration object.
Something like
session = boto3.Session(...)
auth = session.generate_signature('POST', '/stream-transcription', ...)
Any pointers in that direction?
Contrary to the AWS SDKs for most other programming languages, boto3/botocore don't offer the functionality to sign arbitrary requests using "AWS Signature Version 4" yet. However, there is at least already an open feature request for that: https://github.com/boto/botocore/issues/1784
In this feature request, existing alternatives are discussed as well. One is the third-party Python library aws-requests-auth, which provides a thin wrapper around botocore and requests to sign HTTP-requests. That looks like the following:
import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth
auth = BotoAWSRequestsAuth(aws_host="your-service.domain.tld",
                           aws_region="us-east-1",
                           aws_service="execute-api")
response = requests.get("https://your-service.domain.tld",
                        auth=auth)
Another alternative presented in the feature request is to implement the necessary glue-code on your own, as shown in the following gist: https://gist.github.com/rhboyd/1e01190a6b27ca4ba817bf272d5a5f9a.
Did you check this SDK? Seems very recent but might do what you need.
https://github.com/awslabs/amazon-transcribe-streaming-sdk/tree/master
It looks like it handles the signing: https://github.com/awslabs/amazon-transcribe-streaming-sdk/blob/master/amazon_transcribe/signer.py
I have not tested this, but you can likely accomplish this by following along with this SigV4 unit test:
https://github.com/boto/botocore/blob/master/tests/unit/test_auth_sigv4.py
Note that this constructs a request using the botocore.awsrequest.AWSRequest helper. You'll likely need to dig around to figure out how to send the actual HTTP request (perhaps with httpsession.py).
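As a minimal, untested sketch of that approach (the endpoint, region, payload, and the service name "transcribe" are assumptions, not taken from the question), botocore's SigV4Auth can add the signature headers to an AWSRequest, which you then send with whatever HTTP client you like:
import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

session = botocore.session.Session()
credentials = session.get_credentials()

# Placeholder endpoint and empty payload; adjust for the real streaming API.
request = AWSRequest(
    method="POST",
    url="https://transcribestreaming.us-east-1.amazonaws.com/stream-transcription",
    data=b"",
    headers={"host": "transcribestreaming.us-east-1.amazonaws.com"},
)
SigV4Auth(credentials, "transcribe", "us-east-1").add_auth(request)

# request.headers now carries Authorization, X-Amz-Date, etc., ready to be
# copied onto the HTTP/2 client of your choice.
print(dict(request.headers))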

difference between urllibx and requests? [duplicate]

In Python, what are the differences between the urllib, urllib2, urllib3 and requests modules? Why are there three? They seem to do the same thing...
I know it's been said already, but I'd highly recommend the requests Python package.
If you've used languages other than Python, you're probably thinking urllib and urllib2 are easy to use, not much code, and highly capable; that's how I used to think. But the requests package is so unbelievably useful and concise that everyone should be using it.
First, it supports a fully RESTful API, and is as easy as:
import requests
resp = requests.get('http://www.mywebsite.com/user')
resp = requests.post('http://www.mywebsite.com/user')
resp = requests.put('http://www.mywebsite.com/user/put')
resp = requests.delete('http://www.mywebsite.com/user/delete')
Regardless of whether it's GET or POST, you never have to encode parameters again; it simply takes a dictionary as an argument and is good to go:
userdata = {"firstname": "John", "lastname": "Doe", "password": "jdoe123"}
resp = requests.post('http://www.mywebsite.com/user', data=userdata)
Plus it even has a built-in JSON decoder (again, I know json.loads() isn't a lot more to write, but this sure is convenient):
resp.json()
Or if your response data is just text, use:
resp.text
This is just the tip of the iceberg. This is the list of features from the requests site:
International Domains and URLs
Keep-Alive & Connection Pooling
Sessions with Cookie Persistence
Browser-style SSL Verification
Basic/Digest Authentication
Elegant Key/Value Cookies
Automatic Decompression
Unicode Response Bodies
Multipart File Uploads
Connection Timeouts
.netrc support
Python 2.7, 3.6—3.9
Thread-safe.
urllib2 provides some extra functionality, namely the urlopen() function allows you to specify headers (normally you'd have had to use httplib in the past, which is far more verbose). More importantly though, urllib2 provides the Request class, which allows for a more declarative approach to doing a request:
import urllib
from urllib2 import Request, urlopen

r = Request(url='http://www.mysite.com')
r.add_header('User-Agent', 'awesome fetcher')
r.add_data(urllib.urlencode({'foo': 'bar'}))
response = urlopen(r)
Note that urlencode() is only in urllib, not urllib2.
There are also handlers for implementing more advanced URL support in urllib2. The short answer is, unless you're working with legacy code, you probably want to use the URL opener from urllib2, but you still need to import urllib for some of the utility functions.
Bonus answer
With Google App Engine, you can use any of httplib, urllib or urllib2, but all of them are just wrappers for Google's URL Fetch API. That is, you are still subject to the same limitations such as ports, protocols, and the length of the response allowed. You can use the core of the libraries as you would expect for retrieving HTTP URLs, though.
In the Python 2 standard library there were two HTTP libraries that existed side-by-side. Despite the similar name, they were unrelated: they had a different design and a different implementation.
urllib was the original Python HTTP client, added to the standard library in Python 1.2. Earlier documentation for urllib can be found in Python 1.4.
urllib2 was a more capable HTTP client, added in Python 1.6, intended as a replacement for urllib:
urllib2 - new and improved but incompatible version of urllib (still experimental).
Earlier documentation for urllib2 can be found in Python 2.1.
The Python 3 standard library has a new urllib which is a merged/refactored/rewritten version of the older modules.
urllib3 is a third-party package (i.e., not in CPython's standard library). Despite the name, it is unrelated to the standard library packages, and there is no intention to include it in the standard library in the future.
Finally, requests internally uses urllib3, but it aims for an easier-to-use API.
urllib and urllib2 are both Python modules that do URL-request-related work but offer different functionality.
1) urllib2 can accept a Request object to set the headers for a URL request; urllib accepts only a URL.
2) urllib provides the urlencode method, which is used for the generation of GET query strings; urllib2 doesn't have such a function. This is one of the reasons why urllib is often used along with urllib2.
Requests - Requests is a simple, easy-to-use HTTP library written in Python.
1) Python Requests encodes the parameters automatically, so you just pass them as simple arguments, unlike in the case of urllib, where you need to use the method urllib.urlencode() to encode the parameters before passing them.
2) It automatically decodes the response into Unicode.
3) Requests also has far more convenient error handling. If your authentication fails, urllib2 raises a urllib2.URLError, while Requests returns a normal response object, as expected. All you have to do to see whether the request was successful is check the boolean response.ok.
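A minimal sketch of points 1) and 3) (the URL and credentials are placeholders, not from the original answer):
import requests

# Parameters go in as a plain dict; requests handles the encoding.
resp = requests.get("https://api.example.com/user",
                    params={"name": "Fred"},
                    auth=("user", "wrong-password"))

# A failed authentication still yields a normal response object.
if resp.ok:
    print(resp.json())
else:
    print("Request failed with status", resp.status_code)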
Just to add to the existing answers, I don't see anyone mentioning that python requests is not a native library. If you are ok with adding dependencies, then requests is fine. However, if you are trying to avoid adding dependencies, urllib is a native python library that is already available to you.
One considerable difference concerns porting from Python 2 to Python 3. urllib2 does not exist in Python 3, and its methods were ported to urllib.
So if you are using it heavily and want to migrate to Python 3 in the future, consider using urllib.
However, the 2to3 tool will automatically do most of the work for you.
I think all the answers are pretty good, but they give fewer details about urllib3. urllib3 is a very powerful HTTP client for Python.
urllib3
To install it, both of the following will work. Using pip:
pip install urllib3
or you can get the latest code from GitHub and install it using:
$ git clone git://github.com/urllib3/urllib3.git
$ cd urllib3
$ python setup.py install
Then you are ready to go. Just import urllib3:
import urllib3
Here, instead of creating a connection directly, you'll need a PoolManager instance to make requests. This handles connection pooling and thread safety for you. There is also a ProxyManager object for routing requests through an HTTP/HTTPS proxy.
Here you can refer to the documentation.
Example usage:
>>> from urllib3 import PoolManager
>>> manager = PoolManager(10)
>>> r = manager.request('GET', 'http://google.com/')
>>> r.headers['server']
'gws'
>>> r = manager.request('GET', 'http://yahoo.com/')
>>> r.headers['server']
'YTS/1.20.0'
>>> r = manager.request('POST', 'http://google.com/mail')
>>> r = manager.request('HEAD', 'http://google.com/calendar')
>>> len(manager.pools)
2
>>> conn = manager.connection_from_host('google.com')
>>> conn.num_requests
3
As mentioned in the urllib3 documentation, urllib3 brings many critical features that are missing from the Python standard libraries:
Thread safety.
Connection pooling.
Client-side SSL/TLS verification.
File uploads with multipart encoding.
Helpers for retrying requests and dealing with HTTP redirects.
Support for gzip and deflate encoding.
Proxy support for HTTP and SOCKS.
100% test coverage.
Follow the user guide for more details.
Response content (the HTTPResponse object provides status, data, and header attributes)
Using io Wrappers with Response content
Creating a query parameter
Advanced usage of urllib3
requests
requests uses urllib3 under the hood and makes it even simpler to make requests and retrieve data.
For one thing, keep-alive is 100% automatic, compared to urllib3, where it's not. It also has event hooks, which call a callback function when an event is triggered, like receiving a response.
In requests, each request type has its own function. So instead of creating a connection or a pool, you directly GET a URL.
To install requests using pip, just run
pip install requests
or you can just install from source code,
$ git clone git://github.com/psf/requests.git
$ cd requests
$ python setup.py install
Then, import requests.
Here you can refer to the official documentation.
For some advanced usage like session objects, SSL verification, and event hooks, please refer to this URL.
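As a minimal sketch of the session object mentioned above (placeholder URLs; the point is that cookies persist and connections are reused across calls):
import requests

with requests.Session() as session:
    # Cookies set by the first response are sent automatically on later
    # requests through the same session, and connections are reused.
    session.get("https://example.com/login")
    profile = session.get("https://example.com/profile")
    print(profile.status_code)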
I like the urllib.urlencode function, and it doesn't appear to exist in urllib2.
>>> urllib.urlencode({'abc':'d f', 'def': '-!2'})
'abc=d+f&def=-%212'
To get the content of a URL:
try:  # Try importing requests first.
    import requests
except ImportError:
    try:  # Try importing Python 3 urllib.
        import urllib.request
    except ImportError:  # Now importing Python 2 urllib.
        import urllib

def get_content(url):
    try:  # Using requests; .content is the body as bytes.
        return requests.get(url).content
    except NameError:
        try:  # Using Python 3 urllib.
            with urllib.request.urlopen(url) as response:
                return response.read()  # http.client.HTTPResponse body.
        except AttributeError:  # Using Python 2 urllib.
            return urllib.urlopen(url).read()
It's hard to write code that supports Python 2, Python 3, and an optional requests dependency, because the urlopen() functions and the requests.get() function return different types:
Python 3 urllib.request.urlopen() returns an http.client.HTTPResponse
Python 2 urllib.urlopen(url) returns an instance
requests.get(url) returns a requests.models.Response
You should generally use urllib2, since this makes things a bit easier at times by accepting Request objects and will also raise an HTTPError on protocol errors. With Google App Engine though, you can't use either. You have to use the URL Fetch API that Google provides in its sandboxed Python environment.
A key point that I find missing in the above answers is that urllib returns an object of type <class 'http.client.HTTPResponse'>, whereas requests returns <class 'requests.models.Response'>.
Because of this, the read() method can be used with urllib but not with requests.
P.S.: requests is already so rich in methods that it hardly needs one more like read(). ;>
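A minimal side-by-side sketch of that difference (Python 3, placeholder URL):
from urllib.request import urlopen
import requests

url = "https://example.com"

# urllib: the response is a file-like http.client.HTTPResponse, so read() works.
with urlopen(url) as response:
    body_bytes = response.read()

# requests: the Response object exposes the body via .content (bytes)
# or .text (str), not via read().
resp = requests.get(url)
body_text = resp.text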

Is there a better way to access my public api?

I am new to Flask.
I have a public api, call it api.example.com.
@app.route('/api')
def api():
    name = request.args.get('name')
    ...
    return jsonify({'address': '100 Main'})
I am building an app on top of my public api (call it www.coolapp.com), so in another app I have:
@app.route('/make_request')
def index():
    params = {'name': 'Fred'}
    r = requests.get('http://api.example.com', params=params)
    return render_template('really_cool.jinja2', address=r.text)
Both api.example.com and www.coolapp.com are hosted on the same server. It seems inefficient the way I have it (hitting the http server when I could access the api directly). Is there a more efficient way for coolapp to access the api and still be able to pass in the params that api needs?
Ultimately, with an API-powered system, it's best to hit the API because:
It exercises the API the same way your users do (even though you're the user, it's what others still access);
You can then scale easily - put a pool of API boxes behind a load balancer if you get big.
However, if you're developing on the same box you could make a virtual server that listens on localhost on a random port (1982) and then forwards all traffic to your api code.
To make this easier I'd abstract the API_URL into a setting in your settings.py (or whatever you are loading into Flask) and use:
r = requests.get(app.config['API_URL'], params=params)
This will allow you to make a single change if you find using this localhost method isn't for you or you have to move off one box.
Edit
Looking at your comments, you are hoping to hit the Python function directly. I don't recommend doing this (for the reasons above - using the API itself is better). I can also see an issue if you did want to do this.
First of all, we have to make sure the api package is in your PYTHONPATH. Easy to do, especially if you're using virtualenvs.
We do from api import views and replace our code with r = views.api() so that it calls our api() function directly.
Our api() function will fail for a couple of reasons:
It uses flask.request to extract the GET arg 'name'. Because we haven't made a request through the Flask WSGI layer, there is no request to use.
Even if we did manage to pass the request from the front end through to the API, the second problem is the jsonify({'address':'100 Main'}) call. This returns a Response object with the content type set to JSON (not just the JSON itself).
You would have to completely rewrite your function to take into account the Response object and handle it correctly. A real pain if you do decide to go back to an API system again...
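For illustration only, a minimal sketch of what that workaround would look like (the imports from the api package are hypothetical, and this is exactly the kind of rewriting the answer warns about): the view can be called inside a manually pushed request context, and the Response unwrapped afterwards:
from api import app as api_app          # hypothetical: the API's Flask app
from api.views import api as api_view   # hypothetical: the API's view function

def call_api_directly(name):
    # Push a fake request context so flask.request.args works inside the view...
    with api_app.test_request_context('/api', query_string={'name': name}):
        response = api_view()
    # ...then unwrap the Response object that jsonify() produced.
    return response.get_json()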
Depending on how you structure your code, your database access, and your functions, you can simply turn the other app into a package, import the relevant modules, and call the functions directly.
You can find more information on modules and packages here.
Please note that, as Ewan mentioned, there are some advantages to using the API. I would advise you to use requests until you actually need faster requests (this is probably premature optimization).
Another idea that might be worth considering, depending on your particular code, is creating a library that is used by both applications.

Python urllib2 trace route

I'm using Python and urllib2 to make POST requests and I have it working successfully. However, when I make several posts one after the other, at times I get the error 502 proxy in use. Our company does use a proxy, but I'm not set up to hit the proxy since I'm working internally. Is there a way to get a trace route of how the POST request is being routed using urllib2 and Python?
Thanks
I'm not sure what you mean by "a trace route". traceroute is an IP thing, two levels below HTTP. And I doubt you want anything like that. You can find out whether there were any redirects, whether a proxy was used, etc., either by using a general-purpose sniffer or, much more simply, by just asking urllib2.
For example, let's say your code looks like this:
import urllib
import urllib2

url = 'http://example.com'
data = urllib.urlencode({'spam': 'eggs'})
req = urllib2.Request(url, data)
resp = urllib2.urlopen(req)
respdata = resp.read()
Then req.has_proxy() will tell you whether it's going to use a proxy, resp.geturl() == url will tell you whether there was a redirect, etc. Read the docs for all the info available.
Meanwhile, if you don't want a proxy, you can either disable whatever settings urllib2 picked up that made it auto-configure the proxy (e.g., unset http_proxy before running your script), override the default handler chain to make sure there's no ProxyHandler, build an explicit OpenerDirector instead of using the default one, etc.
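A minimal sketch of that last option (Python 2 urllib2, matching the question; the empty ProxyHandler is what disables the auto-detected proxy):
import urllib2

# An empty ProxyHandler overrides the auto-detected proxy settings,
# so requests go directly to the target host.
no_proxy_opener = urllib2.build_opener(urllib2.ProxyHandler({}))
urllib2.install_opener(no_proxy_opener)

resp = urllib2.urlopen('http://example.com')
print resp.geturl()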
