Weird Link Extraction Behaviour - python

Python noob here. I'm trying to extract a link, specifically the link to 'all reviews' on an Amazon product page. I get an unexpected result.
import urllib2
req = urllib2.Request('http://www.amazon.com/Ole-Henriksen-Truth-Collagen-Booster/dp/B000A0ADT8/ref=sr_1_1?s=hpc&ie=UTF8&qid=1342922857&sr=1-1&keywords=truth')
response = urllib2.urlopen(req)
page = response.read()
start = page.find("all reviews")            # anchor on the text near the link
link_start = page.find("href=", start) + 6  # find the next href= and skip past the opening quote
link_end = page.find('"', link_start)       # the closing quote ends the URL
print page[link_start:link_end]
The program should output:
http://www.amazon.com/Ole-Henriksen-Truth-Collagen-Booster/product-reviews/B000A0ADT8/ref=dp_top_cm_cr_acr_txt?ie=UTF8&showViewpoints=1
Instead, it outputs:
http://www.amazon.com/Ole-Henriksen-Truth-Collagen-Booster/product-reviews/B000A0ADT8

I get the same result you do, but that appears to be simply because the page Amazon serves to your Python script is different from what it serves to my browser. I wrote the downloaded page to disk and loaded it in a text editor, and sure enough, the link ends with ADT8" without all the /ref=dp_top stuff.
In order to help convince Amazon to serve you the same page as a browser, your script is probably going to have to act a lot more like a browser (by accepting and sending cookies, for example). The mechanize module can help with this.
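For example, a minimal mechanize sketch (untested against Amazon, and the User-Agent string is just an illustrative value) might look like:
import mechanize
# a rough sketch: mechanize.Browser() keeps a cookie jar between requests,
# so the session looks more like a real browser than a bare urllib2 call
br = mechanize.Browser()
br.set_handle_robots(False)   # don't fetch or obey robots.txt
br.addheaders = [('User-Agent', 'Mozilla/5.0 (X11; Linux x86_64; rv:2.0.1) Gecko/20110506 Firefox/4.0.1')]
response = br.open('http://www.amazon.com/Ole-Henriksen-Truth-Collagen-Booster/dp/B000A0ADT8/ref=sr_1_1?s=hpc&ie=UTF8&qid=1342922857&sr=1-1&keywords=truth')
page = response.read()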

Ah, okay. If you do the usual trick of faking a user agent, for example:
req = urllib2.Request('http://www.amazon.com/Ole-Henriksen-Truth-Collagen-Booster/dp/B000A0ADT8/ref=sr_1_1?s=hpc&ie=UTF8&qid=1342922857&sr=1-1&keywords=truth')
ua = 'Mozilla/5.0 (X11; Linux x86_64; rv:2.0.1) Gecko/20110506 Firefox/4.0.1'
req.add_header('User-Agent', ua)
response = urllib2.urlopen(req)
then you should get something like
localhost-2:coding $ python plink.py
http://www.amazon.com/Ole-Henriksen-Truth-Collagen-Booster/product-reviews/B000A0ADT8/ref=dp_top_cm_cr_acr_txt/190-6179299-9485047?ie=UTF8&showViewpoints=1
which might be closer to what you want.
[Disclaimer: be sure to verify that Amazon's terms of service permit whatever you're going to do before you do it.]

Related

How to log in to Costco.com using Python requests

I'm trying to automate logging in to Costco.com to check some member-only prices.
I used the dev tools' Network tab to identify the request that handles the logon, from which I inferred the POST URL and the parameters.
Code looks like:
import requests
s = requests.session()
payload = {'logonId': 'email@email.com',
           'logonPassword': 'mypassword'}
# get this data by Googling "my user agent"
user_agent = {"User-Agent": "myuseragent"}
url = 'https://www.costco.com/Logon'
response = s.post(url, headers=user_agent, data=payload)
print(response.status_code)
When I run this, it just runs and runs and never returns anything; I waited five minutes and it was still running.
What am I doing wrong?
Maybe you should try making a GET request to get some cookies before making the POST request. If the POST request still doesn't work, add a timeout so the script stops and you know it isn't working:
r = requests.get(url, verify=False, timeout=10)
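Putting those two suggestions together, a rough sketch (the URLs and field names are taken from the question; the timeout value is arbitrary) could be:
import requests
# a rough sketch: GET first so the session collects cookies, then POST the
# login form; the timeout makes a hang visible instead of blocking forever
with requests.Session() as s:
    s.headers.update({'User-Agent': 'Mozilla/5.0'})
    s.get('https://www.costco.com/LogonForm', timeout=10)
    payload = {'logonId': 'email@email.com', 'logonPassword': 'mypassword'}
    response = s.post('https://www.costco.com/Logon', data=payload, timeout=10)
    print(response.status_code)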
This one is tough. Usually, in order to set the proper cookies, a GET request to the URL is required first. We can go directly to https://www.costco.com/LogonForm as long as we change the user agent from the default python-requests one. This is accomplished as follows:
import requests
agent = (
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) "
    "Chrome/85.0.4183.102 Safari/537.36"
)
with requests.Session() as s:
    headers = {'user-agent': agent}
    s.headers.update(headers)
    logon = s.get('https://www.costco.com/LogonForm')
    # saved the cookies in a variable, explanation below
    cks = s.cookies
The logon GET request is successful, i.e. status code 200! Taking a look at cks:
print(sorted([c.name for c in cks]))
['C_LOC',
'CriteoSessionUserId',
'JSESSIONID',
'WC_ACTIVEPOINTER',
'WC_AUTHENTICATION_-1002',
'WC_GENERIC_ACTIVITYDATA',
'WC_PERSISTENT',
'WC_SESSION_ESTABLISHED',
'WC_USERACTIVITY_-1002',
'_abck',
'ak_bmsc',
'akaas_AS01',
'bm_sz',
'client-zip-short']
Then, using Chrome's Network inspector and clicking login yields the following form data for the login POST (place this below cks):
data = {'logonId': username,
        'logonPassword': password,
        'reLogonURL': 'LogonForm',
        'isPharmacy': 'false',
        'fromCheckout': '',
        'authToken': '-1002,5M9R2fZEDWOZ1d8MBwy40LOFIV0=',
        'URL': 'Lw=='}
login = s.post('https://www.costco.com/Logon', data=data, allow_redirects=True)
However, simply trying this makes the request just sit there and redirect indefinitely.
Using Burp Suite, I stepped through the POST and found the request that is made when logging in via the browser. This POST carries many more cookies than the ones obtained in the initial GET request.
Quite a few more, in fact:
# cookies taken from the request captured in Burp, then converted from curl to a Python request
sorted(cookies.keys())
['$JSESSIONID',
'AKA_A2',
'AMCVS_97B21CFE5329614E0A490D45%40AdobeOrg',
'AMCV_97B21CFE5329614E0A490D45%40AdobeOrg',
'C_LOC',
'CriteoSessionUserId',
'OptanonConsent',
'RT',
'WAREHOUSEDELIVERY_WHS',
'WC_ACTIVEPOINTER',
'WC_AUTHENTICATION_-1002',
'WC_GENERIC_ACTIVITYDATA',
'WC_PERSISTENT',
'WC_SESSION_ESTABLISHED',
'WC_USERACTIVITY_-1002',
'WRIgnore',
'WRUIDCD20200731',
'__CT_Data',
'_abck',
'_cs_c',
'_cs_cvars',
'_cs_id',
'_cs_s',
'_fbp',
'ajs_anonymous_id_2',
'ak_bmsc',
'akaas_AS01',
'at_check',
'bm_sz',
'client-zip-short',
'invCheckPostalCode',
'invCheckStateCode',
'mbox',
'rememberedLogonId',
's_cc',
's_sq',
'sto__count',
'sto__session']
Most of these look to be static; however, because there are so many, it's hard to tell which is which and what each is supposed to be. This is where I get stuck myself, and I am genuinely curious how this would be accomplished. In some of the cookie data I can also see some IBM Commerce information, so I am linking Prevent Encryption (Krypto) Of Url Parameters in IBM Commerce Server 6, as it is the only other SO question that is even remotely related to this.
Essentially, though, the steps would be to determine the proper cookies to pass for this POST (and then the proper cookies and info for the redirect!). I believe some of these are being set by JavaScript, since they are not in the GET response from the site. Sorry I can't be of more help here.
If you absolutely need to log in, try using selenium, as it simulates a browser. Otherwise, if you just want to check whether an item is in stock, this guide uses requests and doesn't need a login: https://aryaboudaie.com/python/technical/educational/2020/07/05/using-python-to-buy-a-gift.html
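A rough selenium sketch (the element IDs 'logonId' and 'logonPassword' are guesses based on the form field names above and would need to be checked against the actual page) might look like:
from selenium import webdriver
from selenium.webdriver.common.by import By
# a rough sketch, assuming a chromedriver is installed and on PATH;
# the element IDs below are assumptions, not verified against costco.com
driver = webdriver.Chrome()
driver.get('https://www.costco.com/LogonForm')
driver.find_element(By.ID, 'logonId').send_keys('email@email.com')
driver.find_element(By.ID, 'logonPassword').send_keys('mypassword')
driver.find_element(By.ID, 'logonPassword').submit()
print(driver.current_url)   # check whether the login redirect succeeded
driver.quit()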

python requests cannot get html

I tried to get the HTML of a site named dcinside in Korea. I am using requests but cannot get the HTML.
This is my code:
import requests
url = "http://gall.dcinside.com/board/lists/?id=bitcoins&page=1"
req = requests.get(url)
print(req)
print(req.content)
but the result was
Why can't I get the HTML even when using requests?
Most likely they are detecting that you are trying to crawl data dynamically and are not returning any content in the response. Try pretending to be a browser by passing a User-Agent header:
headers = {
    'User-Agent': 'My User Agent 1.0',
    'From': 'youremail@domain.com'
}
response = requests.get(url, headers=headers)
# use an authentic Mozilla or Chrome user-agent string if this doesn't work
Take a look at this:
Python Web Crawlers and "getting" html source code
Like the guy said in the aforementioned post, you should use urllib2, which will allow you to easily obtain web resources.
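For reference, a minimal urllib2 version of the same idea (Python 2; the User-Agent string is only a placeholder) would be:
import urllib2
# a rough sketch: same request as above, but with urllib2 and a browser-like header
url = "http://gall.dcinside.com/board/lists/?id=bitcoins&page=1"
req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
html = urllib2.urlopen(req).read()
print(html[:300])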

Find out what cookies to set on different websites using Python

I have a list of about 10,000 URLs pointing to online news articles. I have written some code to scrape the HTML content of these news articles, using the requests library (Python 3.5). The goal is to retrieve the article content using the Readability module and perform further analysis on it. This works most of the time. However, all websites are Dutch and so are subject to the EU policy stating they have to ask for consent to use cookies. Some of them, for example http://telegraaf.nl, do this by loading a separate page where the user has to click a button. In this case, I can get the normal article content by passing a cookie in the header:
import requests
user_agent = 'Mozilla/5.0'
url = 'http://www.telegraaf.nl/dft/geld/werk-inkomen/27740808/__Vechten_om_werk_in_noorden__.html'
cookies_telegraaf = {'TMGCOOKIE': '{%22version%22:%22t3%22}'}
html = requests.get(url, headers={"User-Agent": user_agent}, cookies=cookies_telegraaf)
print(html.content)
This prints the html-content I need. The problem is, every site needs a different cookie. So my question is: is there a way to find out what specific cookie to pass in the header for each website, without manually checking in the browser?
Thanks for your help.
This is more like a comment than a real answer. Here is another answer that might help.
What I would do is deal with the sites that work without cookies first, then try to deal with those that don't load a separate page, and then with those that do.
However, if your question is whether there is a way to access cookies easily, the requests documentation gives a method for that, here:
>>> url = 'http://example.com/some/cookie/setting/url'
>>> r = requests.get(url)
>>> r.cookies['example_cookie_name']
'example_cookie_value'
To send your own cookies to the server, you can use the cookies parameter:
>>> url = 'http://httpbin.org/cookies'
>>> cookies = dict(cookies_are='working')
>>> r = requests.get(url, cookies=cookies)
>>> r.text
'{"cookies": {"cookies_are": "working"}}'

Python script to translate via google translate

I'm trying to learn Python, so I decided to write a script that could translate something using Google Translate. So far I have written this:
import sys
from BeautifulSoup import BeautifulSoup
import urllib2
import urllib
data = {'sl':'en','tl':'it','text':'word'}
request = urllib2.Request('http://www.translate.google.com', urllib.urlencode(data))
request.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; it; rv:1.8.1.11) Gecko/20071127 Firefox/2.0.0.11')
opener = urllib2.build_opener()
feeddata = opener.open(request).read()
#print feeddata
soup = BeautifulSoup(feeddata)
print soup.find('span', id="result_box")
print request.get_method()
And now I'm stuck. I can't see any bugs in it, but it still doesn't work (by that I mean that the script runs, but it won't translate the word).
Does anyone know how to fix it?
(Sorry for my poor English)
I made this script if you want to check it:
https://github.com/mouuff/Google-Translate-API
: )
Google Translate is meant to be used with a GET request, not a POST request. However, urllib2 will automatically submit a POST if you add any data to your request.
The solution is to construct the url with a querystring so you will be submitting a GET.
You'll need to alter the request = urllib2.Request('http://www.translate.google.com', urllib.urlencode(data)) line of your code.
Here goes:
querystring = urllib.urlencode(data)
request = urllib2.Request('http://www.translate.google.com' + '?' + querystring)
And you will get the following output:
<span id="result_box" class="short_text">
<span title="word" onmouseover="this.style.backgroundColor='#ebeff9'" onmouseout="this.style.backgroundColor='#fff'">
parola
</span>
</span>
By the way, you're kinda breaking Google's terms of service; look into them if you're doing more than hacking a little script for training.
Using requests
I strongly advise you to stay away from urllib if possible, and use the excellent requests library, which will allow you to efficiently use HTTP with Python.
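As an illustration only (Google may block it or change the markup at any time), the GET approach above could be rewritten with requests roughly like this:
import requests
from BeautifulSoup import BeautifulSoup
# a rough sketch: the same sl/tl/text parameters as above, sent as a GET via requests
params = {'sl': 'en', 'tl': 'it', 'text': 'word'}
headers = {'User-Agent': 'Mozilla/5.0'}
r = requests.get('http://www.translate.google.com', params=params, headers=headers)
soup = BeautifulSoup(r.text)
print(soup.find('span', id="result_box"))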
Yes, their documentation is not so easy to uncover.
Here's what you do:
In the Google Cloud Platform Console:
1.1 Go to the Projects page and select or create a new project
1.2 Enable billing for your project
1.3 Enable the Cloud Translation API
1.4 Create a new API key in your project, make sure to restrict usage by IP or other means available there.
In the machine where you want to run the client:
pip install --upgrade google-api-python-client
Then you can write the following to send translation requests and receive responses:
import json
from apiclient.discovery import build
query='this is a test to translate english to spanish'
target_language = 'es'
service = build('translate','v2',developerKey='INSERT_YOUR_APP_API_KEY_HERE')
collection = service.translations()
request = collection.list(q=query, target=target_language)
response = request.execute()
response_json = json.dumps(response)
ascii_translation = ((response['translations'][0])['translatedText']).encode('utf-8').decode('ascii', 'ignore')
utf_translation = ((response['translations'][0])['translatedText']).encode('utf-8')
print response
print ascii_translation
print utf_translation

Get HTML source, including result of javascript and authentication

I am building a web scraper and need to get the HTML page source as it actually appears on the page. However, I only get a limited HTML source, one that does not include the needed info. I think I am either seeing it before the JavaScript has loaded, or else maybe I'm not getting the full info because I don't have the right authentication. My result is the same as "view source" in Chrome, when what I want is what Chrome's "inspect element" shows. My test is cimber.dk after entering flight information and searching.
I am coding in python and tried the urllib2 library. Then I heard that Selenium was good for this so I tried that, too. However, that also gets me the same limited page source.
This is what I tried with urllib2 after using Firebug to see the parameters. (I deleted all my cookies after opening cimber.dk so I was starting with a 'clean slate')
import urllib
import urllib2
url = 'https://www.cimber.dk/booking/'
values = {'ARRANGE_BY' : 'D',...} #one for each value
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor())
#Using HTTPRedirectHandler instead of HTTPCookieProcessor gives the same.
urllib2.install_opener(opener)
request = urllib2.Request(url)
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0) Gecko/20100101 Firefox/4.0')]
request.add_header(....) # one for each header, also the cookie one
p = urllib.urlencode(values)
data = opener.open(request, p).read()
# data is now the limited source, like Chrome View Source
#I tried to add the following in some vain attempt to do a redirect.
#The result is always "HTTP Error 400: Bad request"
f = opener.open('https://wftc2.e-travel.com/plnext/cimber/Override.action')
data = f.read()
f.close()
Most libraries like this do not support JavaScript.
If you need the JavaScript-rendered page, you will have to either automate an existing browser or browser engine, or use a really big, monolithic library that is essentially a full web crawler.
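For example, a rough selenium sketch that grabs the rendered DOM (selectors, waits and the booking flow itself would all need to be adapted to cimber.dk) might look like:
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
# a rough sketch, assuming chromedriver is available; page_source returns the DOM
# after JavaScript has run, closer to Chrome's "inspect element" than "view source"
driver = webdriver.Chrome()
driver.get('https://www.cimber.dk/booking/')
WebDriverWait(driver, 10).until(
    lambda d: d.execute_script('return document.readyState') == 'complete')
html = driver.page_source
driver.quit()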
