Scrapy Not Returning After Yielding a Request - python

Similar to the person here: Scrapy Not Returning Additonal Info from Scraped Link in Item via Request Callback, I am having difficulty accessing the list of items I build in my callback function. I have tried building the list both in the parse function (which doesn't work because the callback hasn't returned yet) and in the callback, but neither has worked for me.
I am trying to return all the items that I build from these requests. Where do I call "return items" such that the item has been fully processed? I am trying to replicate the tutorial (http://doc.scrapy.org/en/latest/intro/tutorial.html#using-our-item)
Thanks!!
The relevant code is below:
class ASpider(Spider):
    items = []
    ...

    def parse(self, response):
        input_file = csv.DictReader(open("AFile.txt"))
        x = 0
        for row in input_file:
            yield Request("ARequest",
                          cookies={"arg1": "1", "arg2": row["arg2"], "style": "default", "arg3": row["arg3"]},
                          callback=self.aftersubmit, dont_filter=True)

    def aftersubmit(self, response):
        hxs = Selector(response)
        # Create new object..
        item = AnItem()
        item['Name'] = "jsc"
        return item

You need to return (or yield) an item from the aftersubmit callback method. Quote from the docs:
In the callback function, you parse the response (web page) and return
either Item objects, Request objects, or an iterable of both.
def aftersubmit(self, response):
    hxs = Selector(response)
    item = AnItem()
    item['Name'] = "jsc"
    return item
Note that this particular Item instance doesn't make much sense yet, since you haven't actually put anything from the response into its fields.
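For what it's worth, each item yielded (or returned) from a callback is handed to the item pipelines / feed exports on its own, so there is no list to collect and no single place to "return items" from the spider; the items = [] class attribute isn't needed. Here is a minimal sketch of a callback that actually pulls something from the response (the XPath is made up purely for illustration):

def aftersubmit(self, response):
    hxs = Selector(response)
    item = AnItem()
    # hypothetical XPath; replace with whatever you actually need from the page
    item['Name'] = hxs.xpath('//h1/text()').extract_first()
    yield item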

Related

Scrapy passing requests along

I previously used some code like this to visit a page and change the url around a bit to generate a second request which gets passed to a second parse method:
from scrapy.http import Request

def parse_final_page(self, response):
    # do scraping here:

def get_next_page(self, response, new_url):
    req = Request(
        url=new_url,
        callback=self.parse_final_page,
    )
    yield req

def parse(self, response):
    if 'substring' in response.url:
        new_url = 'some_new_url'
        yield from self.get_next_page(response, new_url)
    else:
        pass
        # continue..
        # scraping items
        # yield
This snippet is pretty old (2 years or so) and I'm currently using Scrapy 2.2, although I'm not sure if that's relevant. Note that get_next_page gets called, but parse_final_page never runs, which I don't get...
Why is parse_final_page not being called? Or more to the point... is there an easier way for me to just generate a new request on the fly? I would prefer to not use a middleware or change start_urls in this context.
1 - "Why is parse_final_page not being called?"
Your script works fine for me on Scrapy v2.2.1, so it's probably an issue with the specific request you're trying to make.
2 - "...is there an easier way for me to just generate a new request on the fly?"
You could try this variation, where you return the request from the get_next_page method instead of yielding it (note that I removed the from keyword and no longer pass the response object to it):
def parse(self, response):
    if 'substring' in response.url:
        new_url = ''
        yield self.get_next_page(new_url)
    else:
        pass
        # continue..
        # scraping items
        # yield

def get_next_page(self, new_url):
    req = Request(
        url=new_url,
        callback=self.parse_final_page,
    )
    return req

def parse_final_page(self, response):
    # do scraping here:
    pass
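If the callback still never runs, it can help to confirm whether the request is actually being scheduled and completed. Below is a small diagnostic sketch under the same assumptions as above (the URL check and the on_error name are placeholders); it uses Scrapy's errback hook and the built-in spider logger:

from scrapy.http import Request

def parse(self, response):
    if 'substring' in response.url:
        new_url = 'some_new_url'
        yield Request(
            url=new_url,
            callback=self.parse_final_page,
            errback=self.on_error,   # fires if the request fails (DNS error, timeout, HTTP error, ...)
            dont_filter=True,        # rules out the duplicate filter silently dropping the request
        )

def parse_final_page(self, response):
    self.logger.info("parse_final_page reached: %s", response.url)

def on_error(self, failure):
    # hypothetical errback: just log what went wrong
    self.logger.error("request failed: %r", failure)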

Scrapy not filling object on request

Here is my code
spider.py
def parse(self, response):
    item = someItem()
    cuv = Vitae()
    item['cuv'] = cuv
    request = scrapy.Request(url, callback=self.cvsearch)
    request.meta['item'] = item
    yield request

def cvsearch(self, response):
    item = response.meta['item']
    cv = item['cuv']
    cv['link'] = response.url
    return item
items.py
class someItem(Item):
    cuv = Field()

class Vitae(Item):
    link = Field()
No errors are displayed!
It adds the object "cuv" to "item", but the attributes of "cuv" are never added. What am I missing here?
Why do you use a scrapy.Item inside another one?
Try using a plain Python dict for item['cuv'], and move the meta assignment into the scrapy.Request constructor argument.
You should also use yield instead of return:
def parse(self, response):
    item = someItem()
    request = scrapy.Request(url, meta={'item': item}, callback=self.cvsearch)
    yield request

def cvsearch(self, response):
    item = response.meta['item']
    item['cuv'] = {'link': response.url}
    yield item
I am not a very good explainer, but I'll try to explain what's wrong as best I can.
Scrapy is asynchronous, meaning there is no guaranteed order in which requests are executed. Let's take a look at this piece of code:
def parse(self, response):
    item = someItem()
    cuv = {}
    item['cuv'] = cuv
    request = scrapy.Request(url, callback=self.cvsearch)
    request.meta['item'] = item
    yield request
    logging.error(item['cuv'])  # this will return null [1]

def cvsearch(self, response):
    item = response.meta['item']
    cv = item['cuv']
    cv['link'] = response.url
    return item
[1] - this is because this line executes before cvsearch has finished, which you can't control. To solve this when multiple requests are involved, you have to chain them:
def parse(self, response):
    item = someItem()
    request = scrapy.Request(url, callback=self.cvsearch)
    request.meta['item'] = item
    yield request

def cvsearch(self, response):
    item = response.meta['item']
    request = scrapy.Request(url, callback=self.another)
    request.meta['item'] = item  # keep passing the item along to the next callback
    yield request

def another(self, response):
    item = response.meta['item']
    yield item
To fully grasp this concept I advise taking a look at asynchronous programming. Please add anything that I missed!
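As a side note: if you're on Scrapy 1.7 or newer, cb_kwargs is an alternative to request.meta for passing data to the next callback, and the values arrive as named arguments. A minimal sketch of the same idea (url is a placeholder, as in the snippets above):

def parse(self, response):
    item = someItem()
    yield scrapy.Request(url, callback=self.cvsearch, cb_kwargs={'item': item})

def cvsearch(self, response, item):
    # item arrives as a keyword argument instead of via response.meta
    item['cuv'] = {'link': response.url}
    yield item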

scrapy: understanding how do items and requests work between callbacks

I'm struggling with Scrapy and I don't understand how exactly passing items between callbacks works. Maybe somebody could help me.
I'm looking into http://doc.scrapy.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions
def parse_page1(self, response):
    item = MyItem()
    item['main_url'] = response.url
    request = scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2)
    request.meta['item'] = item
    return request

def parse_page2(self, response):
    item = response.meta['item']
    item['other_url'] = response.url
    return item
I'm trying to understand flow of actions there, step by step:
[parse_page1]
item = MyItem() <- object item is created
item['main_url'] = response.url <- we are assigning value to main_url of object item
request = scrapy.Request("http://www.example.com/some_page.html", callback=self.parse_page2) <- we are requesting a new page and launching parse_page2 to scrape it.
[parse_page2]
item = response.meta['item'] <- I don't understand this part. Are we creating a new item object here, or is this the item object created in [parse_page1]? And what does response.meta['item'] mean? In the request in step 3 we only passed information like the link and the callback; we didn't add any additional arguments that we could refer to ...
item['other_url'] = response.url <- we are assigning value to other_url of object item
return item <- we are returning item object as a result of request
[parse_page1]
request.meta['item'] = item <- We are assigning the object item to the request? But the request is finished, the callback already returned the item in step 6????
return request <- we are getting the results of the request, so the item from step 6, am I right?
I went through all documentation concerning scrapy and request/response/meta but still I don't understand what is happening here in points 4 and 7.
line 4: request = scrapy.Request("http://www.example.com/some_page.html",
callback=self.parse_page2)
line 5: request.meta['item'] = item
line 6: return request
You are confused about the previous code, so let me explain it (I numbered the lines above to make this easier):
In line 4 you are instantiating a scrapy.Request object. This doesn't work like other request libraries; here you are not fetching the url and not going to the callback function just yet.
In line 5 you are adding arguments to the scrapy.Request object, so for example you could also have declared the scrapy.Request object like:
request = scrapy.Request("http://www.example.com/some_page.html",
                         callback=self.parse_page2, meta={'item': item})
and you could have avoided line 5.
It is in line 6, when you return the scrapy.Request object, that Scrapy makes it work: fetching the specified url, going to the following callback, and passing meta along with it. You could also have avoided line 6 (and line 5) if you had returned the request like this:
return scrapy.Request("http://www.example.com/some_page.html",
                      callback=self.parse_page2, meta={'item': item})
So the idea here is that your callback methods should return (preferably yield) a Request or an Item; Scrapy will output the Item and continue crawling the Request.
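For example, a single callback is free to mix the two; here is a small hypothetical sketch (the URL and fields are placeholders):

def parse_listing(self, response):
    item = MyItem()
    item['main_url'] = response.url
    yield item  # the Item is sent to the item pipelines / feed exports
    yield scrapy.Request("http://www.example.com/next_page.html",
                         callback=self.parse_listing)  # the Request is scheduled and crawled later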
eLRuLL's answer is wonderful. I want to add the part about how the item travels. First, we should be clear that a callback function only runs once the response for its request has been downloaded.
In the code given in the Scrapy docs, the url and request of page1 are not declared. Let's set the url of page1 as "http://www.example.com.html".
[parse_page1] is the callback of
scrapy.Request("http://www.example.com.html", callback=parse_page1)
[parse_page2] is the callback of
scrapy.Request("http://www.example.com/some_page.html", callback=parse_page2)
When the response of page1 is downloaded, parse_page1 is called to generate the request for page2:
item['main_url'] = response.url  # store "http://www.example.com.html" in the item
request = scrapy.Request("http://www.example.com/some_page.html",
                         callback=self.parse_page2)
request.meta['item'] = item  # store the item in request.meta
After the response of page2 is downloaded, parse_page2 is called to return an item:
item = response.meta['item']  # response.meta is the request.meta, so item['main_url'] == "http://www.example.com.html"
item['other_url'] = response.url  # response.url == "http://www.example.com/some_page.html"
return item  # finally, we get the item recording the urls of both page1 and page2
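Putting the two answers together, here is a minimal self-contained spider showing the whole round trip of the item through request.meta; the spider name, the start URL and the item definition are assumptions added for illustration:

import scrapy

class MyItem(scrapy.Item):
    main_url = scrapy.Field()
    other_url = scrapy.Field()

class MetaExampleSpider(scrapy.Spider):
    name = "meta_example"
    start_urls = ["http://www.example.com.html"]

    def parse_page1(self, response):
        item = MyItem()
        item['main_url'] = response.url
        # hand the half-filled item to the next callback via meta
        yield scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2,
                             meta={'item': item})

    def parse_page2(self, response):
        item = response.meta['item']  # the very same object created in parse_page1
        item['other_url'] = response.url
        yield item  # now complete; Scrapy outputs it

    parse = parse_page1  # responses for start_urls go through parse_page1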

scrapy: how to have a response parsed by multiple parser functions?

I'd like to do something special to each one of the landing urls in start_urls, and then have the spider follow all the next pages and crawl deeper. So my code is roughly like this:
def init_parse(self, response):
    item = MyItem()
    # extract info from the landing url and populate item fields here...
    yield self.newly_parse(response)
    yield item
    return

parse_start_url = init_parse

def newly_parse(self, response):
    item = MyItem2()
    newly_soup = BeautifulSoup(response.body)
    # parse, return or yield items
    return item
The code won't work because a spider callback only allows returning items, requests or None, but here I yield self.newly_parse (a generator object), so how can I achieve this in Scrapy?
My not so elegant solution:
Put the init_parse function inside newly_parse and implement an is_start_url check at the beginning; if response.url is in start_urls, we go through the init_parse procedure.
Another ugly solution:
Separate out the code where # parse, return or yield items happens, make it a class method or generator, and call this method or generator from both init_parse and newly_parse.
If you're going to yield multiple items from newly_parse, your line under init_parse should be:
for item in self.newly_parse(response):
    yield item
as self.newly_parse returns a generator, which you need to iterate through yourself because Scrapy won't recognize it.
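Equivalently, on Python 3 you can delegate to the inner generator with yield from instead of the explicit loop; note that it is yield from self.newly_parse(response), not yield self.newly_parse(response). This sketch assumes newly_parse is written as a generator (it yields its items):

def init_parse(self, response):
    item = MyItem()
    # extract info from the landing url and populate item fields here...
    yield from self.newly_parse(response)  # re-yield everything the generator produces
    yield item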

Multiple pages per item in Scrapy

Disclaimer: I'm fairly new to Scrapy.
To put my question plainly: How can I retrieve an Item property from a link on a page and get the results back into the same Item?
Given the following sample Spider:
class SiteSpider(Spider):
    site_loader = SiteLoader
    ...

    def parse(self, response):
        item = Place()
        sel = Selector(response)
        bl = self.site_loader(item=item, selector=sel)
        bl.add_value('domain', self.parent_domain)
        bl.add_value('origin', response.url)
        for place_property in item.fields:
            parse_xpath = self.template.get(place_property)
            # parse_xpath will look like either:
            # '//path/to/property/text()'
            # or
            # {'url': '//a[@id="Location"]/@href',
            #  'xpath': '//div[@class="directions"]/span[@class="address"]/text()'}
            if isinstance(parse_xpath, dict):  # place_property is at a URL
                url = sel.xpath(parse_xpath['url_elem']).extract()
                yield Request(url, callback=self.get_url_property,
                              meta={'loader': bl, 'parse_xpath': parse_xpath,
                                    'place_property': place_property})
            else:  # parse_xpath is just an xpath; process normally
                bl.add_xpath(place_property, parse_xpath)
        yield bl.load_item()

    def get_url_property(self, response):
        loader = response.meta['loader']
        parse_xpath = response.meta['parse_xpath']
        place_property = response.meta['place_property']
        sel = Selector(response)
        loader.add_value(place_property, sel.xpath(parse_xpath['xpath']))
        return loader
I'm running these spiders against multiple sites, and most of them have the data I need on one page and it works just fine. However, some sites have certain properties on a sub-page (ex., the "address" data existing at the "Get Directions" link).
The "yield Request" line is really where I have the problem. I see the items move through the pipeline, but they're missing those properties that are found at other URLs (IOW, those properties that get to "yield Request"). The get_url_property callback is basically just looking for an xpath within the new response variable, and adding that to the item loader instance.
Is there a way to do what I'm looking for, or is there a better way? I would like to avoid making a synchronous call to get the data I need (if that's even possible here), but if that's the best way, then maybe that's the right approach. Thanks.
If I understand you correctly, you have (at least) two different cases:
The crawled page links to another page containing the data (1+ further request necessary)
The crawled page contains the data (No further request necessary)
In your current code, you call yield bl.load_item() for both cases, but only in the parse callback. Note that the request you yield is executed at some later point in time, so the item is still incomplete at that moment, and that's why you're missing the place_property key from the item in the first case.
Possible Solution
A possible solution (if I understood you correctly) is to exploit the asynchronous behavior of Scrapy. Only minor changes to your code are needed.
For the first case, you pass the item loader to another request, which will then yield the item. This is what you do in the isinstance if clause. You'll need to change the return value of the get_url_property callback to actually yield the loaded item.
For the second case, you can return the item directly, so simply yield the item in the else clause.
The following code contains the changes to your example.
Does this solve your problem?
def parse(self, response):
    # ...
    if isinstance(parse_xpath, dict):  # place_property is at a URL
        url = sel.xpath(parse_xpath['url_elem']).extract()
        yield Request(url, callback=self.get_url_property,
                      meta={'loader': bl, 'parse_xpath': parse_xpath,
                            'place_property': place_property})
    else:  # parse_xpath is just an xpath; process normally
        bl.add_xpath(place_property, parse_xpath)
        yield bl.load_item()

def get_url_property(self, response):
    loader = response.meta['loader']
    # ...
    loader.add_value(place_property, sel.xpath(parse_xpath['xpath']))
    yield loader.load_item()
Related to that problem is the question of chaining requests, for which I have noted a similar solution.
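For reference, here is a rough sketch of that chaining idea (using Python 3 syntax), under the assumption that several properties live on sub-pages: the sub-page URLs are extracted up front from the landing page, each callback passes the loader plus the remaining work along via meta, and load_item() is called only once, after the last sub-page has been handled. Apart from the Scrapy API, everything here (the _next_or_finish helper, the spider name, and the reuse of Place, SiteLoader and self.template from the question) is hypothetical:

from scrapy import Request, Selector, Spider

class ChainedSiteSpider(Spider):
    name = "site_chained"        # hypothetical name
    site_loader = SiteLoader     # reused from the question

    def parse(self, response):
        item = Place()
        sel = Selector(response)
        bl = self.site_loader(item=item, selector=sel)
        pending = []  # (place_property, parse_xpath, url) for properties on sub-pages
        for place_property in item.fields:
            parse_xpath = self.template.get(place_property)
            if isinstance(parse_xpath, dict):   # property lives on a sub-page
                url = sel.xpath(parse_xpath['url']).extract_first()
                pending.append((place_property, parse_xpath, url))
            else:                               # property is on this page
                bl.add_xpath(place_property, parse_xpath)
        yield from self._next_or_finish(bl, pending)

    def get_url_property(self, response):
        bl = response.meta['loader']
        parse_xpath = response.meta['parse_xpath']
        bl.add_value(response.meta['place_property'],
                     Selector(response).xpath(parse_xpath['xpath']).extract())
        yield from self._next_or_finish(bl, response.meta['pending'])

    def _next_or_finish(self, bl, pending):
        # either request the next sub-page, or emit the now-complete item
        if pending:
            place_property, parse_xpath, url = pending.pop()
            yield Request(url, callback=self.get_url_property,
                          meta={'loader': bl, 'place_property': place_property,
                                'parse_xpath': parse_xpath, 'pending': pending})
        else:
            yield bl.load_item()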
