I would like to know how to send data to a website using HTTPS in Python.
Sending data over plain HTTP seems straightforward, but I could not find how to do the same for HTTPS requests.
It's pretty simple with requests:
import requests
r = requests.get('https://example.com')
print(r.status_code)
If you want to use urllib2 (Python 2), here is a snippet taken directly from its examples:
>>> import urllib2
>>> req = urllib2.Request(url='https://localhost/cgi-bin/test.cgi',
... data='This data is passed to stdin of the CGI')
>>> f = urllib2.urlopen(req)
>>> print f.read()
Got Data: "This data is passed to stdin of the CGI"
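In Python 3, urllib2 was merged into urllib.request, and the request body must be bytes rather than a str. A rough Python 3 equivalent of the snippet above (the localhost URL is the same placeholder as in the original example):

```python
import urllib.request

# Build the request; passing data= makes it a POST, and the body must be bytes.
req = urllib.request.Request(
    url='https://localhost/cgi-bin/test.cgi',
    data=b'This data is passed to stdin of the CGI',
)

# Supplying data switches the default method from GET to POST.
print(req.get_method())  # POST

# Opening it would actually send the request over HTTPS:
# with urllib.request.urlopen(req) as f:
#     print(f.read().decode())
```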
When I extract a URL, it displays as below:
https://tv.line.me/v/14985624_%E0%B8%A0%E0%B8%B9%E0%B8%95%E0%B8%A3%E0%B8%B1%E0%B8%95%E0%B8%95%E0%B8%B4%E0%B8%81%E0%B8%B2%E0%B8%A5-ep3-6-6-%E0%B8%8A%E0%B9%88%E0%B8%AD%E0%B8%878
How do I convert this in Python to the more readable form that the browser shows in its address bar (the same URL, with the percent-encoded characters decoded)?
You can use the urllib.parse module to decode this URL:
from urllib.parse import unquote
url = unquote('https://tv.line.me/v/14985624_%E0%B8%A0%E0%B8%B9%E0%B8%95%E0%B8%A3%E0%B8%B1%E0%B8%95%E0%B8%95%E0%B8%B4%E0%B8%81%E0%B8%B2%E0%B8%A5-ep3-6-6-%E0%B8%8A%E0%B9%88%E0%B8%AD%E0%B8%878')
print(url)
This will give you the result as follows.
https://tv.line.me/v/14985624_ภูตรัตติกาล-ep3-6-6-ช่อง8
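The reverse direction works too: urllib.parse.quote re-encodes the decoded text. A small round-trip demo, using just the last segment of the URL above for brevity:

```python
from urllib.parse import quote, unquote

encoded = '%E0%B8%8A%E0%B9%88%E0%B8%AD%E0%B8%878'
decoded = unquote(encoded)
print(decoded)  # ช่อง8

# quote() performs the inverse transformation; by default it leaves
# ASCII letters, digits and a few safe characters untouched.
assert quote(decoded) == encoded
```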
I am trying to download the data from this page https://www.nordpoolgroup.com/Market-data1/Power-system-data/Production1/Wind-Power-Prognosis/DK/Hourly/?view=table
As you can see, there is a button on the right that exports the data to Excel. I want to create something that automatically exports the data on this page to Excel every day - kind of like a scraper, but I am not able to figure it out.
So far this is my code
import urllib2
from bs4 import BeautifulSoup as bs

nord = 'https://www.nordpoolgroup.com/Market-data1/Power-system-data/Production1/Wind-Power-Prognosis/DK/Hourly/?view=table'
page = urllib2.urlopen(nord)
soup = bs(page)
pretty = soup.prettify()
all_links = soup.find_all("a")
for link in all_links:
    print link.get("href")
all_tables = soup.find_all('tables')
right_table = soup.find('table', class_='ng-scope')
And this is where I am stuck, because it seems that the table class is not defined.
You can use the requests module for this.
Ex:
import requests

url = "https://www.nordpoolgroup.com/api/marketdata/exportxls"
r = requests.post(url)  # POST request
with open('data_123.xls', 'wb') as f:
    f.write(r.content)
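Since the goal is a daily export, a date-stamped filename keeps each day's download separate. A minimal sketch (the helper name is mine, not part of the original answer):

```python
import datetime

def export_filename(prefix="nordpool", ext="xls"):
    """Build a filename like 'nordpool_2024-01-31.xls' for today's export."""
    today = datetime.date.today().isoformat()
    return f"{prefix}_{today}.{ext}"

# The download would then be written to export_filename(), e.g.
#   with open(export_filename(), 'wb') as f: f.write(r.content)
print(export_filename())
```

Running the script once a day from cron or Task Scheduler then yields one file per day.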
I want to scan about 1000 PDF files with "wepawet" (https://wepawet.iseclab.org/), an online scanner, but it takes only one file at a time. How could I scan all 1000 files? Could I do that using Python?
Could anyone help me, please? Thank you in advance.
You can automate the process with Python tools like Selenium, mechanize or urllib (I'm not sure about urllib). Fill the form using mechanize (a simple example of filling and submitting a form):

response = br.open(url)
print response.read()  # the page containing the form

br.select_form("form1")  # select the form by its name...
# ...or, if the form has no name, pick it by index:
# br.form = list(br.forms())[0]

response = br.submit()
print response.read()  # the page returned after submission

For more info on mechanize, visit http://www.pythonforbeginners.com/cheatsheet/python-mechanize-cheat-sheet. Hope it works.
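To cover all 1000 files, the submission step just needs to run in a loop over the PDFs. A sketch, assuming the files sit in one directory; the actual upload call is a placeholder for whatever form-filling code you settle on, not wepawet's real API:

```python
from pathlib import Path

def pdfs_to_scan(directory):
    """Return every .pdf file in the directory, sorted for reproducibility."""
    return sorted(Path(directory).glob("*.pdf"))

def scan_all(directory):
    for pdf in pdfs_to_scan(directory):
        # Placeholder: submit one file per iteration with your
        # mechanize/Selenium form-filling code, for example:
        #   br.form.add_file(open(str(pdf), 'rb'), 'application/pdf', pdf.name)
        #   br.submit()
        print("would submit", pdf.name)
```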
Is the following good enough or is there a more canonical method?
import requests
import json
response = requests.get(json_rest_url)
data = json.loads(response.text)
# work with data
Since you're using python-requests, you can use response.json(), as Jon Clements mentioned. When the response body is JSON, response.json() returns the parsed object (a dict or list) for you.
>>> import requests
>>> repos = requests.get("https://api.github.com/users/gamesbrainiac/repos").json()
>>> repos[0]['git_url']
'git://github.com/gamesbrainiac/DefinitelyTyped.git'
>>> repos[1]['git_url']
'git://github.com/gamesbrainiac/django-crispy-forms.git'
>>> repos[2]['git_url']
'git://github.com/gamesbrainiac/dots.git'
The above example uses the GitHub API. The JSON response is converted into a list of dicts; each dict holds the information about one of the user's repositories on GitHub.
You can visit the actual url that I used above to see the json data.
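Under the hood, response.json() is essentially json.loads applied to the response body (plus encoding detection). The equivalence can be seen with the json module alone, using a stand-in string for response.text:

```python
import json

# Stand-in for response.text from a JSON REST endpoint.
body = '[{"git_url": "git://example.com/demo.git"}]'

data = json.loads(body)    # what json.loads(response.text) does
print(data[0]['git_url'])  # git://example.com/demo.git

# A non-JSON body raises ValueError (json.JSONDecodeError is a subclass);
# response.json() raises a similar decode error on malformed bodies.
try:
    json.loads("not json")
except ValueError:
    print("not valid JSON")
```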
import requests
r = requests.get('https://www.theguardian.com/international')
print(r.status_code)
print(r.headers)
As far as I can understand, this code assigns the result of the requests.get('https://www.theguardian.com/international') call to r, which holds the response to the HTTP request. But how can I use .status_code and .headers with r? r does not look like an instance of a class, as far as I can tell.
According to the Python Requests quickstart guide, requests.get(url) simply returns a Response object; thus, in your example, r is assigned that object, which lets you access its attributes such as status_code, text, etc.
r is actually an instance of the class requests.models.Response. You can read the documentation of requests.get for more information.
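The general point is that any value a function returns can be an instance of a class, and dotted attribute access works on it exactly as it does on r. A toy stand-in for Response (my own illustration, not the real requests class):

```python
class FakeResponse:
    """A toy stand-in for requests.models.Response."""
    def __init__(self, status_code, headers):
        self.status_code = status_code
        self.headers = headers

def get(url):
    # Real requests would perform the HTTP request here;
    # this stub just returns a canned instance.
    return FakeResponse(200, {"Content-Type": "text/html"})

r = get("https://www.theguardian.com/international")
print(r.status_code)                # 200
print(isinstance(r, FakeResponse))  # True
print(type(r).__name__)             # FakeResponse
```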