Is there a way to send CoAP requests, like HTTP requests, using Python? I tried the line below, but I got many errors.
rec = reuest.get(coap://localhost:5683/other/block)
You can use a library such as CoAPthon to act as a CoAP client:
from coapthon.client.helperclient import HelperClient

client = HelperClient(server=('127.0.0.1', 5683))  # address and port of the CoAP server
response = client.get('other/block')               # blocking GET on the given path
client.stop()                                      # shut down the client when done
The response is of type Response. The attributes and methods available on it are listed in the CoAPthon documentation, which you must build yourself.
Since you haven't said what you want to do with the response, use that documentation to find the methods available and extract the values you need.
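For example, the payload and response code are available directly on the response (a sketch based on CoAPthon's Message attributes; verify them against the documentation you build):
# Assumes `response` from the snippet above; these attributes come from
# CoAPthon's Message base class.
print(response.code)     # the CoAP response code
print(response.payload)  # the resource representation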
I would like to send a POST request with multiple parameters using the Twisted Web Client:
image: an image
metadata: a JSON document with metadata
I need to use pure Twisted, without external libraries like treq or requests.
At the moment I can only send one parameter, and I have tried a few ways without success.
Does anyone know how to change the body to achieve this?
from __future__ import print_function

from twisted.internet import reactor
from twisted.web.client import Agent
from twisted.web.http_headers import Headers

from bytesprod import BytesProducer

agent = Agent(reactor)
body = BytesProducer(b"hello, world")
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['text/x-greeting']}),
    body)

def cbResponse(ignored):
    print('Response received')
d.addCallback(cbResponse)

def cbShutdown(ignored):
    reactor.stop()
d.addBoth(cbShutdown)

reactor.run()
You need to specify how you would like the parameters encoded. If you want to submit them like a browser form, you need to encode the data as application/x-www-form-urlencoded or multipart/form-data. The former is generally for short data, and since one of your parameters is an image, it probably isn't short. So you should encode the data as multipart/form-data.
Once you have, you just declare this in the request headers and include the encoded data in the body.
For example,
body = multipart_form_encoded_body_producer(your_form_fields)
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'User-Agent': ['Twisted Web Client Example'],
             'Content-Type': ['multipart/form-data']}),
    body)
Conveniently, treq provides a multipart/form-data encoder.
So multipart_form_encoded_body_producer(...) probably looks something like:
MultiPartProducer([
    ("image", image_data),
    ("metadata", some_metadata),
    ...
])
You mentioned that you can't use Treq. You didn't mention why. I recommend using Treq or at least finding another library that can do the encoding for you. If you can't do that for some unreasonable reason, you'll have to implement multipart/form-data encoding yourself. It is reasonably well documented and of course there are multiple implementations you can also use as references and interoperability testing tools.
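If you do end up hand-rolling the encoding, a minimal sketch might look like the following. encode_multipart is a hypothetical helper (not part of Twisted); it handles only string fields and one (filename, content_type, bytes) shape, and real code should also escape quotes in names and filenames:
import json
import uuid
from io import BytesIO

from twisted.internet import reactor
from twisted.web.client import Agent, FileBodyProducer
from twisted.web.http_headers import Headers

def encode_multipart(fields):
    # Hypothetical helper: `fields` maps names to either a str value or a
    # (filename, content_type, bytes) tuple. Returns (content_type, body).
    boundary = uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append('--%s\r\n' % boundary)
        if isinstance(value, tuple):
            filename, content_type, data = value
            parts.append('Content-Disposition: form-data; '
                         'name="%s"; filename="%s"\r\n' % (name, filename))
            parts.append('Content-Type: %s\r\n\r\n' % content_type)
            parts.append(data)
        else:
            parts.append('Content-Disposition: form-data; '
                         'name="%s"\r\n\r\n' % name)
            parts.append(value)
        parts.append('\r\n')
    parts.append('--%s--\r\n' % boundary)
    body = b''.join(p if isinstance(p, bytes) else p.encode('utf-8')
                    for p in parts)
    return 'multipart/form-data; boundary=' + boundary, body

agent = Agent(reactor)
content_type, body = encode_multipart({
    'metadata': json.dumps({'title': 'example'}),                # JSON field
    'image': ('photo.jpg', 'image/jpeg', b'...image bytes...'),  # file field
})
d = agent.request(
    b'POST',
    b'http://httpbin.org/post',
    Headers({'Content-Type': [content_type]}),
    FileBodyProducer(BytesIO(body)))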
I am trying to get the JSON response from the Twitter oEmbed API using Python's requests library. The tweet ID I tried to pass in was 1221064170248065024, and here's the code I used to make the request:
import requests
tweet_id = '1221064170248065024'
embReqUrl = 'https://publish.twitter.com/oembed?url=https://twitter.com/Interior/status/' + tweet_id
embResp = requests.post(embReqUrl)
When I check the HTTP status of the response with embResp.status_code, I get a 405 status code. What's the right way to do this?
Please help.
You’ve used a POST method, but this API expects a GET.
embResp = requests.get(embReqUrl)
print(embResp.status_code)
print(embResp.json())
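As a side note, it's usually cleaner to let requests build the query string for you by passing the tweet URL via params:
import requests

tweet_url = 'https://twitter.com/Interior/status/1221064170248065024'
embResp = requests.get('https://publish.twitter.com/oembed',
                       params={'url': tweet_url})
print(embResp.status_code)
print(embResp.json())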
I need to get some JSON data using an API:
import requests
url = 'https://example.com/api/some-info/'
response = requests.get(url)
print(response.text)  # here is the JSON I need
Everything is fine, except that I need to make such requests very often, and the API provider says:
You'll be banned if you make more than 5 requests per second, so use
sockets
So how can I make this work via sockets?
Thanks in advance for any advice.
I am doing scanning work and need to send HTTP requests without URL encoding. The web application uses $_SERVER['REQUEST_URI'] (PHP) to get the URL, and I must use "<" or ">" in my scan. But Python requests encodes ">" as "%3E". Is there any way to send an HTTP request without URL encoding (requests, urllib, urllib2), short of writing the request by hand on a socket?
First, I tried requests:
import requests
requests.get("http://test.com/index.php?id=1 and 1>1")
But the URL that requests actually fetches is: http://test.com/index.php?id=1 and 1%3E1
This question has confused me for days; I'd be very happy to get any solution from you. Thanks.
You can pass string parameters using the params argument. Like this:
requests.get("http://test.com/index.php", params="id=1 and 1>1")
For a given URL, how can I detect the final location after HTTP redirects, without downloading the final page (e.g. with a HEAD request), using Python? I am writing a mass downloader, and my download mechanism needs to know a page's final location before downloading it.
Edit:
I ended up doing this; I hope it helps other people. I am still open to other methods.
import urlparse
import httplib

def getFinalUrl(url):
    "Navigates through redirections to get the final URL."
    parsed = urlparse.urlparse(url)
    conn = httplib.HTTPConnection(parsed.netloc)
    conn.request("HEAD", parsed.path or "/")  # fall back to "/" for bare hostnames
    response = conn.getresponse()
    if str(response.status).startswith("3"):
        # Follow the redirect target from the Location header.
        new_location = [v for k, v in response.getheaders() if k == "location"][0]
        return getFinalUrl(new_location)
    return url
I strongly suggest you use the requests library. It is well coded and actively maintained, and it supports exactly what you need with prefetch.
From the requests documentation (http://docs.python-requests.org/en/latest/user/advanced/):
By default, when you make a request, the body of the response is downloaded immediately. You can override this behavior and defer downloading the response body until you access the Response.content attribute with the prefetch parameter:
tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, prefetch=False)
At this point only the response headers have been downloaded and the connection remains open, hence allowing us to make content retrieval conditional:
if int(r.headers['content-length']) < TOO_LONG:
    content = r.content
    ...
You can further control the workflow by using the Response.iter_content and Response.iter_lines methods, or by reading from the underlying urllib3.HTTPResponse at Response.raw.
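Note that in requests 1.0+ the prefetch argument was replaced by stream. The same pattern on modern requests, with TOO_LONG as a hypothetical size cap, looks like this:
import requests

TOO_LONG = 10 * 1024 * 1024  # hypothetical cap: 10 MiB

tarball_url = 'https://github.com/kennethreitz/requests/tarball/master'
r = requests.get(tarball_url, stream=True)  # headers only; body deferred

if int(r.headers.get('content-length', 0)) < TOO_LONG:
    content = r.content  # body is downloaded on first access
else:
    r.close()  # release the connection without downloading the body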
You can use httplib to send HEAD requests.
You can also have a look at python-requests, which seems to be the new trendy API for HTTP requests, replacing the possibly awkward httplib2 (see Why Not httplib2).
It also has a head() method for this.
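For the redirect-following use case, note that requests.head does not follow redirects by default; a short sketch that follows them and reports the final URL:
import requests

r = requests.head('http://github.com', allow_redirects=True)
print(r.url)      # final URL after following redirects
print(r.history)  # the chain of intermediate redirect responses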