POSTing a form using Python and Curl - python

I am relatively new (as in a few days) to Python - I am looking for an example that would show me how to post a form to a website (say www.example.com).
I already know how to use Curl. In fact, I have written C++ code that does exactly the same thing (i.e. POST a form using Curl), but I would like a starting point (a few lines from which I can build), which will show me how to do this using Python.

Here is an example using urllib and urllib2 for both POST and GET:
POST - If urlopen() has a second parameter then it is a POST request.
import urllib
import urllib2
url = 'http://www.example.com'
values = {'var' : 500}
data = urllib.urlencode(values)
response = urllib2.urlopen(url, data)
page = response.read()
GET - If urlopen() has a single parameter then it is a GET request.
import urllib
import urllib2
url = 'http://www.example.com'
values = {'var' : 500}
data = urllib.urlencode(values)
fullurl = url + '?' + data
response = urllib2.urlopen(fullurl)
page = response.read()
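On Python 3, urllib and urllib2 were merged into urllib.request and urllib.parse; here is a minimal sketch of the same POST for reference (note that the encoded data must be bytes):
from urllib import request, parse
url = 'http://www.example.com'
values = {'var': 500}
data = parse.urlencode(values).encode('ascii')  # urlopen() expects bytes for POST data
response = request.urlopen(url, data)  # the second argument makes this a POST
page = response.read()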
You could also use curl if you call it using os.system().
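For example, a rough sketch of shelling out to curl for the same POST (this assumes the curl binary is installed and on your PATH):
import os
os.system('curl -d "var=500" http://www.example.com')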
Here are some helpful links:
http://docs.python.org/library/urllib2.html#urllib2.urlopen
http://docs.python.org/library/os.html#os.system

curl -d "birthyear=1990&press=AUD" www.site.com/register/user.php
http://curl.haxx.se/docs/httpscripting.html
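For reference, a hedged Python equivalent of that curl command, reusing the urllib/urllib2 approach from the answer above (note that urlopen() needs the http:// scheme spelled out):
import urllib
import urllib2
data = urllib.urlencode({'birthyear': '1990', 'press': 'AUD'})
response = urllib2.urlopen('http://www.site.com/register/user.php', data)
page = response.read()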

There are two major Python packages for automating web interactions:
Mechanize
Twill
Twill has apparently not been updated for a couple of years; it seems to have been at version 0.9 since Dec. 2007. Mechanize's changelog shows a release from just a few days ago: version 0.2.1 on 2010-05-16.
Of course you'll find examples listed on their respective web pages. Twill essentially provides a simple shell-like interpreter, while Mechanize provides a class and API in which you set form values using Python dictionary-style (__setitem__()) assignment, for example. Both use BeautifulSoup for parsing "real world" (sloppy tag soup) HTML. (BeautifulSoup is highly recommended for HTML you encounter in the wild, and strongly discouraged for your own HTML, which should be written to pass standards-conforming, validating parsers.)
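For a flavour of the Mechanize style described above, here is a minimal sketch; the form name ('register') and field names ('birthyear', 'press') are hypothetical and would have to match the actual page:
import mechanize
br = mechanize.Browser()
br.open('http://www.example.com/register/user.php')
br.select_form(name='register')  # pick the form by its (hypothetical) name
br['birthyear'] = '1990'  # dictionary-style assignment of form values
br['press'] = 'AUD'
response = br.submit()  # submits (POSTs) the form
page = response.read()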

Related

Using python beautiful soup to make a list of all image urls from a webpage

I'm learning BeautifulSoup and I want to make a list of all image urls from a webpage (https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection).
import requests
from bs4 import BeautifulSoup
url = 'https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection/'
reqs = requests.get(url)
soup = BeautifulSoup(reqs.text, 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))
The code above doesn't yield any image urls.
And when I print(soup), I can't see any image urls either.
But when I right click on one of the images and manually copy the link, I find out the url starts with https://storage.googleapis.com/kagglesdsdata/datasets/165566/377107/.
So I try setting url = 'https://storage.googleapis.com/kagglesdsdata/datasets/165566/377107/' for the above code, but that doesn't yield any image urls either.
Thanks for any help!
Since it is not in the page source, it is likely loaded in by JavaScript. You could look at the JavaScript to see where it is generated, or alternatively, if you are just using this to learn BeautifulSoup, you can get the page source with Selenium. Selenium uses ChromeDriver, so you will need a chromedriver executable available in your repo (download it from https://chromedriver.storage.googleapis.com/92.0.4515.107/chromedriver_win32.zip, where 92.0.4515.107 is the version you want; the latest version is listed here).
from selenium import webdriver
from bs4 import BeautifulSoup
url = 'https://www.kaggle.com/navoneel/brain-mri-images-for-brain-tumor-detection/'
driver = webdriver.Chrome()
driver.get(url)
soup = BeautifulSoup(driver.page_source, 'html.parser')
# parse the soup as you normally would from here
If you go to Chrome Dev Tools you can see the requests to these images.
If you click on one of these requests, you can see it has some query string parameters with values like X-Goog-Signature and X-Goog-Algorithm.
To get this full URL you would need to replicate the POST request to https://www.kaggle.com/requests/GetDataViewExternalRequest, which returns a JSON object like so:
{"result":{"dataView":{"type":"url","dataTable":null,"dataUrl":{"url":"https://storage.googleapis.com/kagglesdsdata/datasets/165566/377107/no/1%20no.jpeg?X-Goog-Algorithm=GOOG4-RSA-SHA256\u0026X-Goog-Credential=databundle-worker-v2%40kaggle-161607.iam.gserviceaccount.com%2F20210801%2Fauto%2Fstorage%2Fgoog4_request\u0026X-Goog-Date=20210801T194627Z\u0026X-Goog-Expires=345599\u0026X-Goog-SignedHeaders=host\u0026X-Goog-Signature=2f3672e41a5821b19eb88a8452237a36943ca0cb54874ec47e47c832480870f1ae29ba4cab3e3717ab1decdb74012135bdb1324b85fd8159084dd9587f5504dbf60f6890f12277e418ddbbf61c720083ce7cca6b8936fa45cb9132a396c12136106c6dcfca8574475156f199169b2eecee7fd51fd784d7ddec3f8e3b80b75a17216893ffa22248e98e9bb5cae7cd5b3598e7f3fbbc6e51c24c864c8746c9fe202d1f6a221baea2f300dedf4ba62eb510d9369607ab2f6e659e3b4e4a18e763943632b110c57e223ffb9f1c09db8dac32da6e273f6248c5146dce8d5633ba38787394852b4bcc240dfa62210f042902e84833cf8817a050fc64655b0ed5f43ac9","type":"image"},"dataRaw":null,"dataCase":"dataUrl"}},"wasSuccessful":true}
which is easily parsed with code like:
r = requests.post("https://www.kaggle.com/requests/GetDataViewExternalRequest", data=data)
url = r.json()["result"]["dataView"]["dataUrl"]["url"]
The hard bit is generating this data; it looks something like this:
data = {
    "verificationInfo": {
        "competitionId": None,
        "datasetId": 165566,
        "databundleVersionId": 391742,
        "datasetHashLink": None
    },
    "firestorePath": "hIPSqqCWJs6oriNI20r6/versions/kKBcaXwa0lr8cvBuOMna/directories/no/files/1 no.jpeg",
    "tableQuery": None
}
I would expect most of those values are static for this page; it's likely only the firestorePath changes. From a very quick search, it looks like all those values are scrapeable from the page, either with regex or with BeautifulSoup.
The request also sends some headers, including __requestverificationtoken and x-xsrf-token, which look like they are there to validate your request. They may or may not be scrapeable, but they are equal in value. You would need to add these headers to your request as well. I recommend this site to help with creating requests easily; you just need to check the request and delete any values which are not constants.
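Putting those pieces together, a hedged sketch might look like the following; the token values are placeholders you would have to scrape from the page or copy from your browser session, and whether the payload should be sent as form data or JSON is an assumption to verify in Dev Tools:
import requests
headers = {
    "__requestverificationtoken": "<token scraped from the page>",
    "x-xsrf-token": "<token scraped from the page>",
}
r = requests.post("https://www.kaggle.com/requests/GetDataViewExternalRequest",
                  json=data,  # the dict built above; use data=data if the endpoint expects form data
                  headers=headers)
image_url = r.json()["result"]["dataView"]["dataUrl"]["url"]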
In summary, it's not easy! Use Selenium if speed is not an issue; use requests and work all this out if it is.
After all that, the best option is using their API as Phorys said. It will be very easy to use.
The scraper looks good, so the problem is usually the server, which tries to protect itself from... the scrapers! BeautifulSoup is a great tool for scraping static pages, so the problem arises when you request the page:
Pass a user-agent with your request
user_agent = 'Mozilla/5.0 (X11; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/88.0'  # for example
response = requests.get(url, headers={"user-agent": user_agent})
If this doesn't work investigate the response (cookies, coding, ...)
response = requests.get(url)
for key, value in response.headers.items():
    print(key, ":", value)
If you want only links maybe this could be better
for link in soup.find_all('a', href=True):  # so you are sure you don't get an error when retrieving the value
    print(link['href'])
Personally, I would look into using Kaggle's official API to access the data. They don't seem particularly restrictive and once you have it figured out it should also be possible to upload solutions as well: https://www.kaggle.com/docs/api
Just to push you in the right direction:
"The Kaggle API and CLI tool provide easy ways to interact with Datasets on Kaggle. The commands available can make searching for and downloading Kaggle Datasets a seamless part of your data science workflow.
If you haven’t installed the Kaggle Python package needed to use the command line tool or generated an API token, check out the getting started steps first.
Some of the commands for interacting with Datasets via CLI include:"
kaggle datasets list -s [KEYWORD]: list datasets matching a search term
kaggle datasets download -d [DATASET]: download files associated with a dataset
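For example, a minimal sketch using the Kaggle Python package (this assumes you have installed the kaggle package and placed your API token in ~/.kaggle/kaggle.json):
from kaggle.api.kaggle_api_extended import KaggleApi
api = KaggleApi()
api.authenticate()  # reads the API token from ~/.kaggle/kaggle.json
api.dataset_download_files('navoneel/brain-mri-images-for-brain-tumor-detection',
                           path='data', unzip=True)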

Python Requests Library - Scraping separate JSON and HTML responses from POST request

I'm new to web scraping, programming, and StackOverflow, so I'll try to phrase things as clearly as I can.
I'm using the Python requests library to try to scrape some info from a local movie theatre chain. When I look at the Chrome developer tools response/preview tabs in the network section, I can see what appears to be very clean and useful JSON:
However, when I try to use requests to obtain this same info, instead I get the entire page content (pages upon pages of html). Upon further inspection of the cascade in the Chrome developer tools, I can see there are two events called GetNowPlayingByCity: One contains the JSON info while the other seems to be the HTML.
JSON Response
HTML Response
How can I separate the two and only obtain the JSON response using the Python requests library?
I have already tried modifying the headers within requests.post (the Chrome developer tools indicate this is a post method) to include "accept: application/json, text/plain, */*" but didn't see a difference in the response I was getting with requests.post. As it stands I can't parse any JSON from the response I get with requests.post and get the following error:
"json.decoder.JSONDecodeError: Expecting value: line 4 column 1 (char 3)"
I can always try to parse the full HTML, but it's so long and complex I would much rather work with friendly JSON info. Any help would be much appreciated!
This is probably because the JavaScript the page sends to your browser makes a request to an API to get the JSON info about the movies.
You could either try sending the request directly to their API (see Edit 2), parse the HTML with a library like Beautiful Soup, or use a dedicated scraping library in Python. I've had great experiences with Scrapy; it is much faster than requests.
Edit:
If the page uses dynamically loaded content, which I think is the case, you'd have to use Selenium with the PhantomJS browser instead of requests. Here is an example:
from bs4 import BeautifulSoup
from selenium import webdriver
url = "your url"
browser = webdriver.PhantomJS()
browser.get(url)
html = browser.page_source
soup = BeautifulSoup(html, 'lxml')
# Then parse the html code here
Or you could load the dynamic content with scrapy
I recommend the latter if you want to get into scraping. It would take a bit more time to learn but it is a better solution.
Edit 2:
To make a request directly to their API you can just reproduce the request you see. Using Google Chrome, you can see the request details if you click on it and go to 'Headers'.
After that, you simply reproduce the request using the requests library:
import requests
import json
url = 'http://paste.the.url/?here='
response = requests.get(url)
content = response.content
# in my case content was byte string
# (it looks like b'data' instead of 'data' when you print it)
# if this is your case, convert it to a string, like so
content_string = content.decode()
content_json = json.loads(content_string)
# do whatever you like with the data
You can modify the URL as you see fit; for example, if it is something like http://api.movies.com/?page=1&movietype=3 you could change movietype=3 to movietype=2 to see a different type of movie, etc.
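A hedged sketch of the same idea using requests' params argument (the API URL and parameter names here are hypothetical):
import requests
response = requests.get('http://api.movies.com/', params={'page': 1, 'movietype': 2})
content_json = response.json()  # shortcut for decoding a JSON response body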

Invalid Argument in Open method for web scraping

I am trying to scrape some data from Ancestry. I have a .NET background but thought I'd try a bit of Python for a project.
I'm falling at the first step: I am trying to open this page and then just print out the rows.
from requests import get
from requests.exceptions import RequestException
from contextlib import closing
from bs4 import BeautifulSoup
raw_html = open('https://www.ancestry.co.uk/search/collections/britisharmyservice/?birth=_merthyr+tydfil-wales-united+kingdom_1651442').read()
html = BeautifulSoup(raw_html, 'html.parser')
for p in html.select('tblrow record'):
    print(p)
I am getting an illegal argument on open.
According to documentation, open is used to:
Open [a] file and return a corresponding file object.
As such, you cannot use it for downloading the HTML contents of a webpage. You probably meant to use requests.get as follows:
raw_html = get('https://www.ancestry.co.uk/search/collections/britisharmyservice/?birth=_merthyr+tydfil-wales-united+kingdom_1651442').text
# .text gets the raw text of the response
# (http://docs.python-requests.org/en/master/api/#requests.Response.text)
Here are a few recommendations to improve your code as well:
requests.get provides many useful parameters, one of them being params, which allows you to provide the URL parameters in the form of a Python dictionary.
If you need to verify whether the request was successful before accessing its text, just check whether response.status_code == requests.codes.ok. This only covers status code 200, but if you need more codes, then response.raise_for_status() should be helpful.
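A minimal sketch combining both suggestions (the search parameter here is hypothetical and will be URL-encoded by requests):
import requests
params = {'birth': 'merthyr tydfil'}  # hypothetical search parameter
response = requests.get('https://www.ancestry.co.uk/search/collections/britisharmyservice/',
                        params=params)
if response.status_code == requests.codes.ok:
    raw_html = response.text
else:
    response.raise_for_status()  # raises requests.HTTPError for 4xx/5xx responses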

Data from a custom Google Map

I would like to collect data from this page
xxx
My experience level with Python and BeautifulSoup is beginner. However, I don't think it has to be very advanced for what I need to do, except for the issue that I am describing below.
The page that I need to collect data from lists the active properties for sale listed on MLS for the Greater Toronto Area.
At the right side of the map there are some checkboxes that you must select in order to get your data, and this is where my problem is. If I use a browser, a local cookie is used to remember the previous selections and the data is displayed.
I would like to know either of these:
1) how I can pass all the params (selections) in my initial request from Python
2) how to use the Chrome cookie with Python so I can get a page return that actually contains data
A code example would be great but sending me to links that I should read would also work.
Thanks a lot
PF
If you insist on using urllib2 over Requests, I suggest looking into cookielib.
Here is an example:
import urllib2
import cookielib
from BeautifulSoup import BeautifulSoup
cookiejar = cookielib.CookieJar()
opener = urllib2.build_opener(
    urllib2.HTTPRedirectHandler(),
    urllib2.HTTPHandler(debuglevel=0),
    urllib2.HTTPSHandler(debuglevel=0),
    urllib2.HTTPCookieProcessor(cookiejar),
)
This way you create a cookiejar to hold cookies, build an opener, and attach a cookie processor that uses the cookiejar. This should take care of your cookie issue. At this point, instead of using urllib2.urlopen(url), just use your custom opener: opener.open(url)
url = 'http://www.somesite.com/'
fp = opener.open(url)
html_object = BeautifulSoup(fp)
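For comparison, the same flow with Requests is shorter, because a Session object stores cookies across requests automatically (the URL and parameters here are hypothetical, and this sketch uses the newer bs4 package):
import requests
from bs4 import BeautifulSoup
session = requests.Session()
session.get('http://www.somesite.com/', params={'listing_type': 'sale'})  # hypothetical request that sets the selection cookies
response = session.get('http://www.somesite.com/results')  # reuses the stored cookies
html_object = BeautifulSoup(response.text, 'html.parser')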

Following links with Mechanize

I would like to use the Mechanize python library to follow certain links in a website, but the only links I'm interested in are the ones in a <div> tag. This question is related, but they achieve it using the lxml parser, which I'm not familiar with, I'm more comfortable using BeautifulSoup.
I've located the relevant links using BeautifulSoup already, but I don't know how to use Mechanize (or something else) to follow these links. Is there a way to pass a string to Mechanize so that it will follow it?
The simple open() method on a mechanize.Browser instance should be enough:
br = mechanize.Browser()
br.open('http://google.com')
import mechanize
response = mechanize.urlopen("http://example.com/")
content = response.read() #The content is the code of the page (html)
Or, if you want to add things like headers:
import mechanize
request = mechanize.Request("http://example.com/")
response = mechanize.urlopen(request)
content = response.read() #The content is the code of the page (html)
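A hedged sketch of combining the two, assuming a Python 3 compatible mechanize and bs4: extract the hrefs with BeautifulSoup, then hand each URL string to Mechanize (the div id 'content' is hypothetical, and urljoin handles relative links):
import mechanize
from bs4 import BeautifulSoup
from urllib.parse import urljoin
base_url = 'http://example.com/'
br = mechanize.Browser()
soup = BeautifulSoup(br.open(base_url).read(), 'html.parser')
for a in soup.find('div', id='content').find_all('a', href=True):
    page = br.open(urljoin(base_url, a['href'])).read()  # open() accepts a plain URL string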
