I'm trying to make requests to the shopify.com API from GAE Python.
The URL I have to request is not formed in the usual way.
It is composed like http://apikey:password@hostname/admin/resource.xml
With urllib I can request it, but I can't set the headers for an XML request, so it doesn't work.
urllib2, httplib... have problems with the ':'.
I get either a 'nodename nor servname provided, or not known' or a 'nonnumeric port' error, because a port number is expected after the colon.
Any help?
Look into how to do HTTP Basic authentication in Python. See especially the section on Doing it Properly.
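In practice that means taking the apikey:password pair out of the URL and passing it through a basic-auth handler instead. A minimal sketch (Python 2, with the hostname and credentials as placeholders):
import urllib2

# URL without the apikey:password@ prefix
url = "http://hostname/admin/resource.xml"

# Register the credentials for this URL instead of embedding them in it
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, url, "apikey", "password")
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_mgr))

# Now the XML headers can be set as usual
request = urllib2.Request(url, headers={"Content-Type": "application/xml"})
response = opener.open(request)
print response.read()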
How can I know whether a website is using Apache, Nginx, or another server, and get this information in Python? Thanks in advance.
This information, if available, is given in the headers of the response to an HTTP request. With Python you can perform HTTP requests using the requests module.
Make a simple GET request to the site you are interested in and then print the headers attribute of the returned object.
import requests
r = requests.get(YOUR_SITE)
print(r.headers)
The output is a dictionary of header names and values; you have to look for the Server entry:
server = r.headers['Server']
Be aware that not all websites return this information, for various reasons, so you may not find this key in the response headers.
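If you want to avoid a KeyError when the header is absent, a small variant using .get() (a sketch; the fallback message is made up):
server = r.headers.get('Server')  # None when the header is absent
if server is None:
    print('Server header not provided')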
I am a beginner trying to learn REST API programming with Python 2.7 to get data from the Socialcast API. From my research it looks like requests or urllib2 would work. I need to authenticate with a username and id for the API. I tried using urllib2 and it gave me a 401 error.
Which one should I use? My goal is to produce .csv files from the data so I can visualize it. Thank you in advance.
The question will yield a bit of an opinion-based response, but I would suggest using Requests. I find that when making requests that require parameters, Requests is easier to manage. An example for Socialcast using Requests would be:
parameters = {"email": emailAddress, "password": password}
r = requests.post(postUrl, data=parameters)
The post URL would be the URL to make the post request to, and emailAddress and password would be the values you use to log in.
For the CSV, take a look here, which includes a tutorial on going from JSON to CSV.
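The core of that conversion is short enough to sketch here, assuming the API returns a JSON list of flat objects (the field names and file name are made up):
import csv
import json

records = json.loads(r.text)  # e.g. [{"id": 1, "title": "..."}, ...]
with open("output.csv", "wb") as f:  # binary mode for Python 2's csv module
    writer = csv.DictWriter(f, fieldnames=records[0].keys())
    writer.writeheader()
    writer.writerows(records)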
I searched but did not find any example showing how to convert a CoAP request or response to an HTTP request.
Basically, what I want to do is send a CoAP POST request with some data from a device to a server, which will translate it and make an HTTP POST request to another server, where it will be saved in a database.
While the part that saves the data is not a major problem right now, I have not managed to find any example script showing how to convert from CoAP to HTTP.
I already looked at coapthon and aiocoap, but since aiocoap requires Python 3.5 (I use Python 2.7), that leaves me with coapthon. Unfortunately, coapthon only has an HTTP-to-CoAP proxy, while the CoAP-to-HTTP one is still in development.
If anyone knows of another project for this, or has an opinion on how to solve it, I would be glad if you could share it. Thank you.
That is called protocol interoperability. You need a CoAP-to-HTTP and HTTP-to-CoAP proxy that can translate the messages between them.
Here is californium-proxy on GitHub; I am using it already. Here is the example that shows how to use it.
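If you would rather stay in Python 2.7, you could also hand-roll a small translator with coapthon's server classes plus requests. This is only a rough sketch based on coapthon's documented examples; the resource path and the HTTP target URL are placeholders:
import requests
from coapthon.server.coap import CoAP
from coapthon.resources.resource import Resource

HTTP_TARGET = "http://other-server/data"  # placeholder HTTP endpoint

class ForwardResource(Resource):
    def __init__(self, name="forward", coap_server=None):
        super(ForwardResource, self).__init__(name, coap_server, visible=True)

    def render_POST(self, request):
        # Relay the CoAP payload as the body of an HTTP POST
        requests.post(HTTP_TARGET, data=request.payload)
        self.payload = "forwarded"
        return self

server = CoAP(("0.0.0.0", 5683))  # 5683 is the default CoAP port
server.add_resource("data/", ForwardResource())
server.listen(10)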
I am using Python 2.7 with the requests module to send an HTTP POST with parameters. I encountered a strange problem.
To do an HTTP POST, it is just one line:
x = requests.post(URL, params)
I have no problem with the params. It is the URL that puzzles me.
Sometimes this URL, http://hostname/path/post, works. Sometimes I have to use http://hostname/path, without the /post, to get the HTTP POST to work. I am puzzled why this is so. What is the difference between the two? Under what conditions do I use which one?
'http://hostname/path/post' is just a path. You could in principle issue an HTTP GET request to that same path (although you probably wouldn't get anything meaningful back).
In general, you should look at the site's API documentation and post to the URL they say you should post to, without adding anything extra to it.
There are two different concepts here, the URL and the HTTP method, and you are getting confused by mixing them.
URL - an address you talk to
The URL addresses something on some server. If you are given a valid URL, treat it as an opaque string: do not try to read meaning into it, just use it.
If I compare it to visiting a friend, the URL is the address of the door you come to.
HTTP method (POST, GET, DELETE...)
There are multiple HTTP methods, which differ in the way you talk to the given URL.
Comparing it to visiting a friend, the method would be the way you try to make the door open (ring the bell, knock, or use a hammer).
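To make the distinction concrete, a tiny illustration with requests (the hostname is the placeholder from the question): the URL stays the same, only the method changes.
import requests

url = "http://hostname/path"  # one address...
r1 = requests.get(url)  # ...asked to show what is there
r2 = requests.post(url, data={"key": "value"})  # ...asked to accept data
print r1.status_code, r2.status_code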
I'm trying to add authenticating proxy support to an existing script; as it is, the script connects to an https URL (with urllib2.Request and urllib2.urlopen), scrapes the page, and performs some actions based on what it has found. Initially I had hoped this would be as easy as simply adding a urllib2.ProxyHandler({"http": MY_PROXY}) as an arg to urllib2.build_opener, which in turn is passed to urllib2.install_opener. Unfortunately this doesn't seem to work when attempting to do a urllib2.Request(ANY_HTTPS_PAGE). Googling around leads me to believe that the proxy support in urllib2 in Python 2.5 does not support https URLs. This surprised me, to say the least.
There appear to be solutions floating around the web; for example, http://bugs.python.org/issue1424152 contains a patch for urllib2 and httplib which purports to solve the issue (when I tried it, I began to get the following error instead: urllib2.URLError: <urlopen error (1, 'error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol')>). There is a cookbook recipe here, http://code.activestate.com/recipes/456195, which I am planning to try next. All in all, though, I'm surprised this isn't supported "out of the box", which makes me wonder if I'm simply missing an obvious solution. So, in short: has anyone got a simple method for fetching https pages using an authenticating proxy with urllib2 in Python 2.5? Ideally this would work:
import urllib2
# Perhaps the dictionary below needs a corresponding "https" entry?
# That doesn't seem to work out of the box.
proxy_handler = urllib2.ProxyHandler({"http": "http://user:pass@myproxy:port"})
urllib2.install_opener(urllib2.build_opener(urllib2.HTTPHandler,
                                            urllib2.HTTPSHandler,
                                            proxy_handler))
request = urllib2.Request(A_HTTPS_URL)
response = urllib2.urlopen(request)
print response.read()
Many thanks.
You may want to look into httplib2. One of the examples claims support for SOCKS proxies if the socks module is installed.
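A hedged sketch of what that might look like; the proxy host, port, and credentials are placeholders, and it assumes the socks module (SocksiPy) is available:
import httplib2
import socks  # SocksiPy; newer httplib2 releases also bundle this as httplib2.socks

proxy_info = httplib2.ProxyInfo(
    proxy_type=socks.PROXY_TYPE_HTTP,
    proxy_host="myproxy",
    proxy_port=8080,
    proxy_user="user",
    proxy_pass="pass",
)
http = httplib2.Http(proxy_info=proxy_info)
response, content = http.request("https://example.com/", "GET")
print content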