The website is "https://www.jbhifi.com.au/collections/laptops". I'm trying to crawl the href for the "next page".
But why does the Scrapy shell return an empty list? I'm using the statement:
response.css("li.ais-pagination--item ais-pagination--item__next a").xpath("@href")
Please show me how to scrape this using Scrapy. I suspect this is because the class starts with "ais" (but I don't know why that would cause the problem). This has happened to me in the past. Any solutions? Cheers!
You need to understand that extracting selectors purely on the basis of "Inspect Element" does not work here. You need to check the page source to see what is actually present when the page is first loaded. When inspecting, you see all the content added by every request the page has made since loading. In your case, the class ais-pagination--item__next does not appear anywhere in the page source. You have to watch the network traffic, check which call is made when the "next page" button is clicked, and work out the logic being used.
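You can see this for yourself in the Scrapy shell; a quick check along these lines (the simplified selector is just for illustration):

scrapy shell "https://www.jbhifi.com.au/collections/laptops"

Then, inside the shell:

response.css("li.ais-pagination--item__next a::attr(href)").getall()
# -> [] because the pagination markup is injected by JavaScript
"ais-pagination" in response.text
# -> False: the class never appears in the raw page source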
I need your help because, for the first time, I am having problems getting some information with BeautifulSoup.
I have two problems on this page:
The green GET COUPON CODE button appears after a few moments (see the GIF capture).
When we inspect the button link, we find a simple href attribute that calls an out.php script, which opens the destination link I am trying to capture.
Thank you for your help
Your problem is a little unclear, but if I understand correctly, your first problem is that the 'GET COUPON CODE' button is missing its link when you render the HTML that you receive from the original page request.
The markup for a lot of this page is rendered dynamically using JavaScript, so that button is missing its href value until it gets filled in later. You would need to also run the JavaScript on that page to render it after the initial request, which you can't easily do with just the Python requests library and BeautifulSoup. It will be much easier if you use Selenium, which lets you control a browser that runs all that JavaScript for you; then you can simply read the button info a couple of seconds after loading the page.
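A minimal sketch of the Selenium route; the URL and CSS selector are hypothetical placeholders you would replace with the real ones:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/coupon-page")  # hypothetical URL
# Wait for the anchor to exist instead of sleeping for a fixed time.
button = WebDriverWait(driver, 10).until(
    EC.presence_of_element_located((By.CSS_SELECTOR, "a.coupon-button"))  # hypothetical selector
)
print(button.get_attribute("href"))  # the out.php link filled in by JavaScript
driver.quit()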
There is a way to do all this with plain requests, but it's a bit tedious. You would need to read through the requests the page makes and figure out which one returns the link for the button. The upside is that this would cut both the number of steps and the time needed to get the info: you could just issue that one request every time to get the right PHP link and then pull the info from there.
For your second point, I'm not sure if I've already answered it, but maybe you're also trying to get the redirect link from that PHP link. From inspecting the network requests, it looks like the info will be found in the response headers; there is no body to inspect.
(I know it says 'from cache' but the point is that the redirect is being caused by the header info)
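A minimal sketch of reading that header with requests, using a hypothetical out.php URL; disabling redirects lets you capture the Location header instead of following it:

import requests

# Hypothetical out.php URL; substitute the real one from the button.
resp = requests.get("https://example.com/out.php?id=12345", allow_redirects=False)
print(resp.status_code)               # e.g. 301 or 302
print(resp.headers.get("Location"))   # the destination link you're after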
I'm trying to iterate over some pages. The different pages are marked with -or10, -or20, -or30, etc. in the website's URL, i.e.
/Restaurant_Review
is the first page,
/Restaurant_Review-or10
is the second page,
/Restaurant_Review-or20
is the third page, etc.
The problem is that I get redirected from those URLs to the normal URL (the first one) if the -or version doesn't exist. I'm currently looping over a range in a for loop and dynamically changing the -or value.
def parse(self, response):
    for x in range(10, 100, 10):
        yield scrapy.Request(url + "-or" + str(x), callback=self.parse_page)

def parse_page(self, response):
    # do something
    # How can I, from here, tell the for loop to stop?
    if oldurl == response.url:
        return  # this doesn't work
The problem is that I have to issue the request even if the page doesn't exist, and this is not scalable. I've tried comparing the URLs, but I still don't understand how I can return something from parse_page() that tells parse() to stop.
You can check what is in response.meta.get('redirect_urls'), for example. If there is something there, retry the original URL with dont_filter.
Or try to catch such cases with RetryMiddleware.
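A sketch of how the redirect check could look inside the spider, chaining one request at a time so there is no upfront loop left to "stop" (this reuses the question's url variable as the base URL, an assumption):

def parse(self, response):
    # Start at the first offset; parse_page schedules each following page.
    yield scrapy.Request(url + "-or10", callback=self.parse_page,
                         meta={'offset': 10})

def parse_page(self, response):
    # RedirectMiddleware records the redirect chain here when it fires.
    if response.meta.get('redirect_urls'):
        return  # bounced back to the base page: no more pages exist
    # ... extract reviews ...
    offset = response.meta['offset'] + 10
    yield scrapy.Request(url + "-or" + str(offset), callback=self.parse_page,
                         meta={'offset': offset})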
This is not an answer to the actual question, but rather an alternative solution that does not require redirect detection.
In the HTML you can already find all those pagination URLs by using:
response.css('.pageNum::attr(href)').getall()
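Inside a spider, following all of them is then a one-liner per link; a minimal sketch, assuming a parse_page callback like the one in the question:

def parse(self, response):
    for href in response.css('.pageNum::attr(href)').getall():
        yield response.follow(href, callback=self.parse_page)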
Regarding @Anton's question in a comment about how I got this:
You can check this by opening a random restaurant review page with the Scrapy shell:
scrapy shell "https://www.tripadvisor.co.za/Restaurant_Review-g32655-d348825-Reviews-Brent_s_Delicatessen_Restaurant-Los_Angeles_California.html"
Inside the shell you can view the received HTML in your browser with:
view(response)
There you'll see that it includes the HTML (and that specific class) for the pagination links. The real website does use JavaScript to render the next page, but it does so by retrieving the full HTML for the next page based on the URL. Basically, it just replaces the entire page; there's very little additional processing involved. So if you open the link yourself, you get the full HTML too. Hence, the JavaScript issue is irrelevant here.
I am trying to write a program in Python that takes the name of a stock and its price and prints them. However, when I run it, nothing is printed. It seems like the data is having a problem being fetched from the website. I double-checked that the XPath from the web page is correct, but for some reason the text does not show up.
from lxml import html
import requests

page = requests.get('https://www.bloomberg.com/quote/UKX:IND?in_source=topQuotes')
tree = html.fromstring(page.content)
Prices = tree.xpath('//span[@class="priceText__1853e8a5"]/text()')
print('Prices:', Prices)
Here is the website I am trying to get the data from.
I have tried BeautifulSoup, but it has the same problem.
If you print page.content, you'll see that the website code it captures is actually for a captcha test, not the "real" destination page you see when you visit the website manually. It seems the website was smart enough to detect that your request came from a script rather than a human, and it effectively prevented your script from scraping any real content. So Prices is empty because there simply isn't a span tag of class "priceText__1853e8a5" on this captcha page. I get the same result when I try scraping with urllib2.
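A quick way to confirm this yourself:

import requests

page = requests.get('https://www.bloomberg.com/quote/UKX:IND?in_source=topQuotes')
print(page.status_code)
print(page.content[:500])  # shows captcha/robot-check markup instead of quote data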
As others have suggested, Selenium (actual web automation) might be able to load the page and get you what you need. The ID looks dynamically generated, though I do get the same one when I look at the page manually. Another alternative is to simply find a different site that can give you the quote you need without blocking your script. I tried https://tradingeconomics.com/ukx:ind and that works, though of course you'll need a different XPath to find the cell you need.
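A sketch of that alternative-site route; the XPath is a hypothetical placeholder you would need to verify against the actual page source:

from lxml import html
import requests

# The browser-like User-Agent is an assumption; some sites reject the default.
page = requests.get('https://tradingeconomics.com/ukx:ind',
                    headers={'User-Agent': 'Mozilla/5.0'})
tree = html.fromstring(page.content)
print(tree.xpath('//span[@id="price"]/text()'))  # hypothetical XPath for the quote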
https://www.fedsdatacenter.com/federal-pay-rates/index.php?y=2017&n=&l=&a=&o=
This website seems to be built with jQuery (AJAX). I would like to scrape the tables on all pages. When I inspect the page tags 1, 2, 3, 4, they do not have a specific href link. Besides, clicking on them does not produce a clear pattern of GET requests; therefore, I find it hard to use Python's urllib to send a GET request for each page.
You can use Selenium with Python (http://selenium-python.readthedocs.io/) to navigate through the pages. I would find the Next button, .click() it, then time.sleep(seconds) and scrape the page. I can't navigate to the last page on this site, unfortunately (it seems broken, which you should also be aware of), but I'm assuming the Next button disappears or something when you get to the last page. If not, you might want to save what you've scraped every time you go to a new page; this way you don't lose your data in the event of an error.
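A minimal sketch of that loop; the locator for the Next button and the stop condition are assumptions you would verify on the live pager:

import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

driver = webdriver.Chrome()
driver.get("https://www.fedsdatacenter.com/federal-pay-rates/index.php?y=2017&n=&l=&a=&o=")
rows = []
while True:
    time.sleep(2)  # give the AJAX table time to render
    # Save what's on this page before moving on, so an error later loses nothing.
    rows.extend(td.text for td in driver.find_elements(By.CSS_SELECTOR, "table td"))
    try:
        next_button = driver.find_element(By.LINK_TEXT, "Next")  # assumed locator
    except NoSuchElementException:
        break  # assume the Next button disappears on the last page
    next_button.click()
driver.quit()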
I am trying to scrape information off websites like this:
https://www.glassdoor.com/Overview/Working-at-7-Eleven-EI_IE3581.11,19.htm
using python + beautifulsoup + mechanize.
Accessing anything on the main site is no problem. However, I also need the information that appears in an overlay window when one clicks the "Rating Trends" button next to the bar with stars.
This overlay-window can also be accessed directly by using the url:
https://www.glassdoor.com/Reviews/7-Eleven-Reviews-E3581.htm#trends-overallRating
The HTML associated with this page is a modification of the original site's HTML.
However, regardless of what element I try to find (via findAll) on that overlay-window URL, BeautifulSoup returns zero hits.
How can I fix this? I tried adding a sleep time between accessing the website and reading anything in, to no avail.
Thanks!
If you're using the Chrome browser, select the background of that page (without the additional information displayed) and choose 'Inspect' from the context menu (on Windows, anyway), then open the 'Network' tab so that you can see network traffic. Now click on 'Rating Trends'. The entry marked 'xhr' will be https://www.glassdoor.ca/api/employer/3581-rating.htm?locationStr=&jobTitleStr=&filterCurrentEmployee=false&filterEmploymentStatus=REGULAR&filterEmploymentStatus=PART_TIME (I very much hope!) and its contents will be the following.
{"employerId":3581,"ratings":[{"hasRating":true,"type":"overallRating","value":2.9},{"hasRating":true,"type":"ceoRating","value":0.54},{"hasRating":true,"type":"bizOutlook","value":0.35},{"hasRating":true,"type":"recommend","value":0.4},{"hasRating":true,"type":"compAndBenefits","value":2.4},{"hasRating":true,"type":"cultureAndValues","value":2.5},{"hasRating":true,"type":"careerOpportunities","value":2.5},{"hasRating":true,"type":"workLife","value":2.4},{"hasRating":true,"type":"seniorManagement","value":2.3}],"week":0,"year":0}
Whether this URL can be altered for use in obtaining information about other employers, I regret I cannot tell you.
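If you want to try the endpoint directly, a sketch along these lines should work (the browser-like User-Agent is an assumption; the API may still refuse scripted requests):

import requests

url = ("https://www.glassdoor.ca/api/employer/3581-rating.htm"
       "?locationStr=&jobTitleStr=&filterCurrentEmployee=false"
       "&filterEmploymentStatus=REGULAR&filterEmploymentStatus=PART_TIME")
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
data = resp.json()
for rating in data["ratings"]:
    print(rating["type"], rating["value"])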