Scrapy: cannot load items to spider - python

I cannot import Scrapy items into my Scrapy spiders. Here is my project structure:
rosen/
    log/
    scrapers/
        scrapers/
            spiders/
                __init__.py
                exampleSpider.py
            __init__.py
            items.py
            middlewares.py
            pipelines.py
            settings.py
    src/
        __init__.py
        otherStuff.py
    tmp/
This structure was created by running scrapy startproject scrapers inside the rosen directory.
Now, items.py has the following code:
import scrapy
from decimal import Decimal

class someItem(scrapy.Item):
    title: str = scrapy.Field(serializer=str)
    bid: Decimal = scrapy.Field(serializer=Decimal)
And the exampleSpider.py has the following code:
from __future__ import absolute_import

import scrapy
from scrapy.loader import ItemLoader
from scrapers.scrapers.items import someItem

class someSpider(scrapy.Spider):
    name = "some"

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._some_fields = someItem()

    def parse(self, response) -> None:
        some_loader = ItemLoader(item=self._some_fields, response=response)
        print(self._some_fields.keys())
The error I get is the following: runspider: error: Unable to load 'someSpider.py': No module named 'scrapers.scrapers'
I found Scrapy: ImportError: No module named items and tried all three solutions by renaming and adding from __future__ import absolute_import. Nothing helps. Please advise.
The command that I execute is scrapy runspider exampleSpider.py. I tried it from the spiders and rosen directories.

I do not see any virtualenv inside your directory, so I recommend you create one, e.g. under rosen.
You can try this:
try:
    from scrapers.items import someItem
except ImportError:
    from scrapers.scrapers.items import someItem
Then call it with:
scrapy crawl NameOfSpider
or:
scrapy runspider path/to/spider.py
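For reference, here is a minimal sketch of a layout-consistent fix, assuming you run scrapy crawl some from rosen/scrapers (the directory that contains scrapy.cfg). With the project root on the path, the item import becomes scrapers.items rather than scrapers.scrapers.items; the CSS selector below is hypothetical:
import scrapy
from scrapy.loader import ItemLoader
from scrapers.items import someItem  # resolves when run from the scrapy.cfg directory

class someSpider(scrapy.Spider):
    name = "some"

    def parse(self, response):
        loader = ItemLoader(item=someItem(), response=response)
        loader.add_css("title", "title::text")  # hypothetical selector
        yield loader.load_item()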

Related

Scrapy: twisted.internet.error.ReactorNotRestartable from running CrawlProcess()

I am trying to run my scrapy from script.
I am using CrawlerProcess and I only have one spider to run.
I've been stuck on this error for a while now, and I've tried a lot of things with the settings, but every time I run the spider, I get
twisted.internet.error.ReactorNotRestartable
I've been searching for a solution to this error, and I believe you should only get it when you call process.start() more than once. But I didn't.
Here's my code:
import scrapy
from scrapy.utils.log import configure_logging
from scrapyprefect.items import ScrapyprefectItem
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

class SpiderSpider(scrapy.Spider):
    name = 'spider'
    start_urls = ['http://www.nigeria-law.org/A.A.%20Macaulay%20v.%20NAL%20Merchant%20Bank%20Ltd..htm']

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    def parse(self, response):
        item = ScrapyprefectItem()
        ...
        yield item

process = CrawlerProcess(settings=get_project_settings())
process.crawl('spider')
process.start()
Error:
Traceback (most recent call last):
  File "/Users/pluggle/Documents/Upwork/scrapyprefect/scrapyprefect/spiders/spider.py", line 59, in <module>
    process.start()
  File "/Users/pluggle/Documents/Upwork/scrapyprefect/venv/lib/python3.7/site-packages/scrapy/crawler.py", line 309, in start
    reactor.run(installSignalHandlers=False)  # blocking call
  File "/Users/pluggle/Documents/Upwork/scrapyprefect/venv/lib/python3.7/site-packages/twisted/internet/base.py", line 1282, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "/Users/pluggle/Documents/Upwork/scrapyprefect/venv/lib/python3.7/site-packages/twisted/internet/base.py", line 1262, in startRunning
    ReactorBase.startRunning(self)
  File "/Users/pluggle/Documents/Upwork/scrapyprefect/venv/lib/python3.7/site-packages/twisted/internet/base.py", line 765, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
I notice that this only happens when I'm trying to save my items to mongodb.
pipeline.py:
import logging
import pymongo

class ScrapyprefectPipeline(object):
    collection_name = 'SupremeCourt'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        # pull in information from settings.py
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE')
        )

    def open_spider(self, spider):
        # initializing spider: open the db connection
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        # clean up when the spider is closed
        self.client.close()

    def process_item(self, item, spider):
        # how to handle each post
        self.db[self.collection_name].insert(dict(item))
        logging.debug("Post added to MongoDB")
        return item
If I change the pipeline.py to the default, which is...
import logging
import pymongo

class ScrapyprefectPipeline(object):
    def process_item(self, item, spider):
        return item
...the script runs fine.
I'm thinking this has something to do with how I set up the PyCharm settings to run the code.
So for reference, I'm also including my PyCharm settings.
I hope someone can help me. Let me know if you need more details.
Reynaldo,
thanks a lot - you saved my project!
And you pushed me to the idea that this possibly occurs because the script starting the process lives in the same file as your spider definition. As a result, it is executed each time Scrapy imports your spider definition. I am not a big expert in Scrapy, but it possibly does this a few times internally, and thus we run into this error.
Your suggestion obviously solves the problem!
Another approach is to separate the spider class definition from the script running it. Possibly this is the approach Scrapy assumes, and that is why its Running spiders from a script documentation does not even mention this __name__ check.
So what I did is the following:
In the project root I have a sites folder, and in it a site_spec.py file. This is just a utility file with some target-site-specific information (URL structure, etc.). I mention it here just to show how you can import your various utility modules into your spider class definition.
In the project root I have a spiders folder with the my_spider.py class definition in it. In that file I import site_spec.py with the directive:
from sites import site_spec
It is important to mention that the script running the spider (the one you presented) IS REMOVED from the my_spider.py class definition file. Also note that I import site_spec.py with a path relative to the run.py file (see below), not relative to the class definition file where the directive is issued, as one could expect (Python relative imports, I guess).
Finally, in the project root I have a run.py file, running Scrapy from a script:
from scrapy.crawler import CrawlerProcess
from spiders.my_spider import MySpider # this is our friend in subfolder **spiders**
from scrapy.utils.project import get_project_settings
# Run that thing!
process = CrawlerProcess(get_project_settings())
process.crawl(MySpider)
process.start() # the script will block here until the crawling is finished
With this setup I was finally able to get rid of the twisted.internet.error.ReactorNotRestartable error.
Thank you very much!!!
Okay, I solved it.
I think that in the pipeline, when the scraper enters open_spider, it runs spider.py again and calls process.start() a second time.
To solve the problem, I added this to the spider so process.start() is only executed when you run the spider directly:
if __name__ == '__main__':
    process = CrawlerProcess(settings=get_project_settings())
    process.crawl('spider')
    process.start()
Try changing the Scrapy and Twisted versions. It isn't a real solution, but it worked for me:
pip install Twisted==22.1.0
pip install Scrapy==2.5.1

ValueError: attempted relative import beyond top-level package (Scrapy)

I've been trying to write a Python file to scrape the whole content of a page of a website. Now, everything seems to be fine in my code, until I run it.
I've made sure to link the items from the items.py file. I shouldn't get any errors, yet I keep getting "ValueError: attempted relative import beyond top-level package".
Here is my code from my main python file:
import scrapy
from ..items import AnalogicScrapeItem

class AnalogicSpider(scrapy.Spider):
    name = 'analogic'
    start_urls = ['https://www.analogic.com/about/']

    def parse(self, response):
        items = AnalogicScrapeItem()
        body1 = response.css('body').css('::text').extract()
        items['body1'] = body1
        yield items
Here is my code from items.py file:
import scrapy

class AnalogicScrapeItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    body1 = scrapy.Field()
After running the code, here is the error I get:
Traceback (most recent call last):
  File "C:/Users/Kev/PycharmProjects/whole_page_extract3/analogic_scrape/analogic_scrape/spiders/analogic.py", line 3, in <module>
    from ..items import AnalogicScrapeItem
ValueError: attempted relative import beyond top-level package
Any help resolving this issue would be greatly appreciated, thank you!
from analogic_scrape.items import AnalogicScrapeItem
would do the job. When you use .., you are importing from a relative path.
However, if you run the spider from the command line with scrapy crawl analogic, relative imports are not a problem.
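For reference, a minimal sketch of the spider with the absolute import in place (the package name analogic_scrape is taken from the traceback; the rest mirrors the question's code):
import scrapy
from analogic_scrape.items import AnalogicScrapeItem  # absolute import

class AnalogicSpider(scrapy.Spider):
    name = 'analogic'
    start_urls = ['https://www.analogic.com/about/']

    def parse(self, response):
        items = AnalogicScrapeItem()
        items['body1'] = response.css('body').css('::text').extract()
        yield items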

Using scrapy command "crawl" from django

I am trying to crawl a Scrapy spider from Django, and the problem is that the spider can only be crawled when we are in the top directory (the one with scrapy.cfg). So how can that be achieved?
.../polls/managements/commands/mycommand.py
from django.core.management.base import BaseCommand
from scrapy.cmdline import execute
import os

class Command(BaseCommand):
    def run_from_argv(self, argv):
        print('In run_from_argv')
        self._argv = argv
        return self.execute()

    def handle(self, *args, **options):
        #os.environ['SCRAPY_SETTINGS_MODULE'] = '/home/nabin/scraptut/newscrawler'
        execute(self._argv[1:])
And if I try:
python manage.py mycommands crawl myspider
then it won't work, because to use crawl I need to be in the top directory with the scrapy.cfg file. So I want to know how that is possible.
You don't need to change the working directory, unless you want to use the .cfg which can include default options for the deploy command.
In your first approach you forgot to add the crawler path to the Python path and to set the Scrapy settings module correctly:
# file: myapp/management/commands/bot.py
import os
import sys

from django.core.management.base import BaseCommand
from scrapy import cmdline

class Command(BaseCommand):
    help = "Run scrapy"

    def handle(self, *args, **options):
        sys.path.insert(0, '/home/user/mybot')
        os.environ['SCRAPY_SETTINGS_MODULE'] = 'mybot.settings'
        # execute() expects args[1:] to be the actual command arguments
        cmdline.execute(['bot'] + list(args))
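With that in place, the command should be invocable through manage.py as usual, e.g. (assuming a spider named myspider):
python manage.py bot crawl myspider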
Ok, I have found the solution myself.
In settings.py I defined:
CRAWLER_PATH = os.path.join(os.path.dirname(BASE_DIR), 'required path')
And did the following:
import os
from django.conf import settings

os.chdir(settings.CRAWLER_PATH)
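Putting the pieces together, the management command might look roughly like this; CRAWLER_PATH and the crawl invocation come from the answers above, while the command file name and the spider name myspider are hypothetical:
# file: myapp/management/commands/mycommand.py -- a sketch, not a verified implementation
import os

from django.conf import settings
from django.core.management.base import BaseCommand
from scrapy import cmdline

class Command(BaseCommand):
    help = "Run a scrapy spider from Django"

    def handle(self, *args, **options):
        # change into the scrapy project directory (the one containing scrapy.cfg)
        os.chdir(settings.CRAWLER_PATH)
        cmdline.execute(['scrapy', 'crawl', 'myspider'])  # hypothetical spider name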

adding project to python path doesn't work

I want to test my Scrapy spider. I want to import the spider into a test file, make a test spider, and override start_urls, but I have a problem importing it. Here is the project structure:
...product-scraper\test_spider.py
...product-scraper\oxygen\oxygen\spiders\oxygen_spider.py
...product-scraper\oxygen\oxygen\items.py
The problem is that the spider imports the Product class from items.py:
from oxygen.items import Product
ImportError: No module named items
The command scrapy crawl oxygen_spider works.
I tried changing sys.path or using site.addsitedir in all possible ways:
basedir = os.path.abspath(os.path.dirname(__file__))
module_path = os.path.join(basedir, "oxygen\\oxygen")
sys.path.append(basedir)  # also tried module_path
No success :(
I use Python 2.7 on Windows.
Do you really get the error "No module named items"? Or is it something like "No module named oxygen.items"?
Also, I'm not really sure why you would want to use os.path commands. Wouldn't this just work:
from items import Product
So without the "oxygen." prefix. This would, however, as far as I know, only work if Product is a class in your items.py. If it's not a class, I would suggest just using:
import items
If that does not work, please specify what Product is in your items.py.
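For what it's worth, the usual fix in a layout like this is to put the directory that contains the importable oxygen package (the outer product-scraper\oxygen folder) on sys.path before importing, so that from oxygen.items import Product resolves. A minimal sketch, assuming the test file sits in product-scraper as shown; the spider class name OxygenSpider and the override URL are hypothetical:
# test_spider.py -- a sketch; the paths follow the structure in the question
import os
import sys

basedir = os.path.abspath(os.path.dirname(__file__))
# the outer "oxygen" folder is the one that contains the importable oxygen package
sys.path.insert(0, os.path.join(basedir, "oxygen"))

from oxygen.items import Product
from oxygen.spiders.oxygen_spider import OxygenSpider  # hypothetical class name

class TestSpider(OxygenSpider):
    # override start_urls for testing
    start_urls = ["http://example.com/test-page"]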

Unable to import items in scrapy

I have a very basic spider, following the instructions in the getting started guide, but for some reason, trying to import my items into my spider returns an error. Spider and items code is shown below:
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from myProject.items import item

class MyProject(BaseSpider):
    name = "spider"
    allowed_domains = ["website.com"]
    start_urls = [
        "website.com/start"
    ]

    def parse(self, response):
        print response.body
from scrapy.item import Item, Field

class ProjectItem(Item):
    title = Field()
When I run this code, Scrapy either can't find my spider or can't import my items file. What's going on here? This should be a really simple example to run, right?
I also ran into this several times while working with Scrapy. You could add this line at the beginning of your Python modules:
from __future__ import absolute_import
More info here:
http://www.python.org/dev/peps/pep-0328/#rationale-for-absolute-imports
http://pythonquirks.blogspot.ru/2010/07/absolutely-relative-import.html
You are importing a field; you must import a class from items.py,
like from myproject.items import class_name.
So, this was a problem that I came across the other day and was able to fix through some trial and error, but I wasn't able to find any documentation of it, so I thought I'd put this up in case anyone happens to run into the same problem I did.
This isn't so much an issue with Scrapy as it is an issue with naming files and how Python deals with importing modules. Basically, the problem is that if you name your spider file the same thing as the project, your imports are going to break. Python will try to import from the directory closest to your current position, which means it's going to try to import from the spider's directory, and that isn't going to work.
Basically, just change the name of your spider file to something else and it'll all be up and running just fine.
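To illustrate the collision (the file names here are hypothetical): with a layout like the one below, import myProject inside the spider file resolves to the spider module itself rather than the project package, so myProject.items can never be found.
myProject/
    items.py
    spiders/
        myProject.py   <- shadows the myProject package; rename it to e.g. my_spider.py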
If the structure is like this:
package/
    __init__.py
    subpackage1/
        __init__.py
        moduleX.py
        moduleY.py
    subpackage2/
        __init__.py
        moduleZ.py
    moduleA.py
and if you are in moduleX.py, the ways to import the other modules are:
from .moduleY import *
from ..moduleA import *
from ..subpackage2.moduleZ import *
Refer to PEP 328: Imports: Multi-Line and Absolute/Relative.
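For comparison, the absolute-import equivalents, assuming the directory containing package is on sys.path, would be:
from package.subpackage1.moduleY import *
from package.moduleA import *
from package.subpackage2.moduleZ import *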
