Scraping certain parts of a website [Python]

Let's say we have a website, www.example.com,
and I need 5 specific elements from it. I have found every element and declared them using BeautifulSoup:
g_data1 = soup.find_all("td", {"class": "title"})
for item in g_data1:
    try:
        print item.****[3].text
    except:
        pass
Now I have to save this information in a CSV file named ****.csv
This is my code for trying to save it in the CSV file:
def save_csv(f, tvseries):
    '''
    Output a CSV file containing highest ranking TV-series.
    '''
    import urllib2

    url = *example url*
    response = urllib2.urlopen(url)
    with open('****.csv', 'w') as f:
        f.write(response.read())
I'm getting the entire HTML of the website, because I've obviously told it to grab the whole URL, but can someone explain a different kind of approach to me? I don't really understand how to do this.
with kind regards,
1337

You should be using Python's csv module, specifically csv.writer.
Take the text items you grabbed using BeautifulSoup and write them into the CSV file.
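For example, a minimal sketch along those lines (Python 2, assuming soup is the BeautifulSoup object you already built from the page; 'output.csv' stands in for your real filename) could look like this:
import csv

# Minimal sketch: write the scraped cell text into a CSV file.
# Assumes 'soup' is the BeautifulSoup object already built from the page;
# 'output.csv' is a placeholder filename.
g_data1 = soup.find_all("td", {"class": "title"})

with open('output.csv', 'wb') as f:           # 'wb' avoids extra blank lines on Windows (Python 2)
    writer = csv.writer(f)
    writer.writerow(['title'])                # header row
    for item in g_data1:
        writer.writerow([item.text.strip()])  # one scraped value per row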

Related

Trying to write results of a set into a csv file in Python but only getting one line to print

Stuck on something and hoping to get some ideas on what I'm doing wrong here. I have written a web-scraping program that scrapes all web links from the census.gov website, but when I try to write my results out to a CSV file, I am only getting one of the links to write out instead of the full list. See the code below. To ensure my set was being built correctly, I added a line of code that first prints the results of the set and then goes on to write the results out to CSV. The initial results look correct, as I'm seeing a full list of links. However, I am not sure why I am only able to write out one row of data to the CSV file:
import requests
from bs4 import BeautifulSoup, SoupStrainer
import bs4, csv

search_link = "https://www.census.gov/programs-surveys/popest.html"
search = requests.get(search_link).text
raw_html = search
soup = BeautifulSoup(raw_html, 'html.parser')

import re
links = soup.find_all('a', {'class': re.compile('uscb*')})

urls_set = set()
for link in links:
    my_links = link.get("href")
    if my_links not in urls_set:
        urls_set.add(my_links)
        print(my_links)

with open("Current Estimate Result.csv", 'wb') as f:
    cw = csv.writer(f)
    cw.writerows(my_links)
    print(my_links)
    f.close()
The issue is that the my_links variable holds only the last URL read, so cw.writerows(my_links) writes out just that URL and not all of the URLs, which are actually stored in urls_set.
However, I'm not sure your usage of the writerows() method is entirely correct. This method expects an iterable of row objects (typically a list of lists), where each nested list represents a row in the CSV file.
So it may be better to hold the URLs in a list rather than a set, and then wrap each URL in its own list (row) before adding it. For example:
urls_list = []
for link in links:
    my_link = [link.get("href")]  # A row in the csv
    if my_link not in urls_list:
        urls_list.append(my_link)
...
cw.writerows(urls_list)  # Pass the overall list
Note I renamed my_links to my_link in the example above. Using a list would also ensure that the order is preserved.
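Putting it together, a complete sketch of the writing step might look like this (an illustration only; it reuses the links list from your code and keeps your output filename; note that in Python 3 the file should be opened in text mode with newline='' rather than 'wb'):
import csv

# Sketch of the corrected writing step (assumes 'links' from the question's code).
urls_list = []
for link in links:
    my_link = [link.get("href")]      # each row is a one-element list
    if my_link not in urls_list:
        urls_list.append(my_link)

# In Python 3, open CSV files in text mode with newline='' (not 'wb').
with open("Current Estimate Result.csv", 'w', newline='') as f:
    cw = csv.writer(f)
    cw.writerows(urls_list)           # write every collected URL, one per row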

Parsing xml in python - don't understand the DOM

I've been reading up on parsing XML with Python all day, but looking at the site I need to extract data from, I'm not sure if I'm barking up the wrong tree. Basically I want to get the 13-digit barcodes from a supermarket website (found in the names of the images). For example:
http://www.tesco.com/groceries/SpecialOffers/SpecialOfferDetail/Default.aspx?promoId=A31033985
has 11 items and 11 images; the barcode for the first item is 0000003235676. However, when I look at the page source (I assume this is the best way to extract all of the barcodes in one go with Python, urllib and BeautifulSoup), all of the barcodes are on one line (line 12), and the data doesn't seem to be structured as I would expect in terms of elements and attributes:
new TESCO.sites.UI.entities.Product({name:"Lb Mens Mattifying Dust 7G",xsiType:"QuantityOnlyProduct",productId:"275303365",baseProductId:"72617958",quantity:1,isPermanentlyUnavailable:true,imageURL:"http://img.tesco.com/Groceries/pi/805/5021320051805/IDShot_90x90.jpg",maxQuantity:99,maxGroupQuantity:0,bulkBuyLimitGroupId:"",increment:1,price:2.5,abbr:"g",unitPrice:3.58,catchWeight:"0",shelfName:"Mens Styling",superdepartment:"Health & Beauty",superdepartmentID:"TO_1448953606"});
new TESCO.sites.UI.entities.Product({name:"Lb Mens Thickening Shampoo 250Ml",xsiType:"QuantityOnlyProduct",productId:"275301223",baseProductId:"72617751",quantity:1,isPermanentlyUnavailable:true,imageURL:"http://img.tesco.com/Groceries/pi/225/5021320051225/IDShot_90x90.jpg",maxQuantity:99,maxGroupQuantity:0,bulkBuyLimitGroupId:"",increment:1,price:2.5,abbr:"ml",unitPrice:1,catchWeight:"0",shelfName:"Mens Shampoo ",superdepartment:"Health & Beauty",superdepartmentID:"TO_1448953606"});
new TESCO.sites.UI.entities.Product({name:"Lb Mens Sculpting Puty 75Ml",xsiType:"QuantityOnlyProduct",productId:"275301557",baseProductId:"72617906",quantity:1,isPermanentlyUnavailable:true,imageURL:"http://img.tesco.com/Groceries/pi/287/5021320051287/IDShot_90x90.jpg",maxQuantity:99,maxGroupQuantity:0,bulkBuyLimitGroupId:"",increment:1,price:2.5,abbr:"ml",unitPrice:3.34,catchWeight:"0",shelfName:"Pastes, Putty, Gums, Pomades",superdepartment:"Health & Beauty",superdepartmentID:"TO_1448953606"});
Maybe something like BeautifulSoup is overkill? I understand the DOM tree is not the same thing as the raw source, but why are they so different - when I go to Inspect Element in Firefox, the data seems structured as I would expect.
Apologies if this comes across as totally stupid, thanks in advance.
Unfortunately, the barcode is not given in the HTML as structured data; it only appears embedded as part of a URL. So we'll need to isolate the URL and then pick off the barcode with string manipulation:
import urllib2
import bs4 as bs
import re
import urlparse

url = 'http://www.tesco.com/groceries/SpecialOffers/SpecialOfferDetail/Default.aspx?promoId=A31033985'
response = urllib2.urlopen(url)
content = response.read()
# with open('/tmp/test.html', 'w') as f:
#     f.write(content)

# Useful for debugging off-line:
# with open('/tmp/test.html', 'r') as f:
#     content = f.read()

soup = bs.BeautifulSoup(content)
barcodes = set()
for tag in soup.find_all('img', {'src': re.compile(r'/pi/')}):
    href = tag['src']
    scheme, netloc, path, query, fragment = urlparse.urlsplit(href)
    barcodes.add(path.split('\\')[1])
print(barcodes)
yields
set(['0000003222737', '0000010039670', '0000010036297', '0000010008393', '0000003050453', '0000010062951', '0000003239438', '0000010078402', '0000010016312', '0000003235676', '0000003203132'])
Since your site uses JavaScript to build its content, you might find it useful to switch from urllib to a tool like Selenium. That way you can crawl pages as they render for a real user in a web browser. This GitHub project seems to solve your task.
Another option would be to filter the JSON-like data out of the page's inline scripts and read the values directly from there.
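As a rough sketch of that second approach (the exact script format may vary from page to page; the pattern below is based only on the imageURL fields shown in the sample lines above, so adjust it to what you actually see in the source):
import re

# Sketch only: pull 13-digit codes out of the imageURL values embedded in the
# page's inline scripts. Assumes 'content' is the raw page source from above;
# the regex mirrors the sample "imageURL:..." lines and may need tweaking.
codes = set(re.findall(r'imageURL:"[^"]*/pi/\d+/(\d{13})/', content))
print(codes)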

searching beautiful soup output without html tags

I'm working on a project that requires input from data displayed in a live Flash graph (a data-logging chart at http://137.205.144.34/flash/index.html#menuIndex=1&accordionIndex=2&menuId=mimic1&menuStruct=S1R2M3C1H1). As the HTML couldn't be accessed directly, I used Firebug to monitor my activity, and found the data I wanted stored at http://137.205.144.34/services/unload.cmd?format=csvx&sched=&start=-240:00:00&id=75631&step=864. However, when I try to access this URL, it automatically saves a file (containing the data) to my PC, so I can't access the HTML source code. Using the URL, I have used BeautifulSoup to import the data, but I can't search or manipulate it using HTML tags as they are unknown. The only data I actually want is the latest hourly reading - one of ~1300 lines. And of that line, I only need the last value. Is there a way I could find the HTML tags? If not, what would be the best way to extract the bit of data I need?
Any help would be greatly appreciated,
Thanks.
The file you are downloading has no HTML in it. It is a comma-separated file, and you should use the csv module to parse it.
This code will print the first item in each row (the item that contains the date and time):
import csv

with open('unload.cmd', 'r') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        print row[0]
This works assuming that you are using the file downloaded with the default name.
To first download the file programmatically, read it into a string, and then use that string as the source for csv.reader():
import urllib
import csv
import StringIO

url = 'http://137.205.144.34/services/unload.cmd?format=csvx&sched=&start=-240:00:00&id=75631&step=864'
f = urllib.urlopen(url)
data = f.read()

reader = csv.reader(StringIO.StringIO(data))
for row in reader:
    if row: print row[0]
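Since you only want the latest hourly reading and, from that row, only the last value, a small extension of the loop above could keep the final non-empty row and take its last field (a sketch; it assumes the newest reading is the last data row in the file and that the value you need is the last field of that row):
# Sketch: keep the last non-empty row and take its final field.
# Assumes the newest reading is the last data row in the downloaded file.
last_row = None
reader = csv.reader(StringIO.StringIO(data))
for row in reader:
    if row:
        last_row = row

if last_row is not None:
    latest_value = last_row[-1]   # the last value on the latest line
    print latest_value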

Download text from a URL in Python

I'm currently working on a school project whose goal is to analyze scam mails with the Natural Language Toolkit package. Basically what I want to do is compare scams from different years and try to find a trend - how has their structure changed over time?
I found a scam-database: http://www.419scam.org/emails/
I would like to download the content of the links with Python, but I am stuck.
My code so far:
from BeautifulSoup import BeautifulSoup
import urllib2, re
html = urllib2.urlopen('http://www.419scam.org/emails/').read()
soup = BeautifulSoup(html)
links = soup.findAll('a')
links2 = soup.findAll(href=re.compile("index"))
print links2
So I can fetch the links, but I don't know yet how I can download their content. Any ideas? Thanks a lot!
You've got a good start, but right now you're simply retrieving the index page and loading it into the BeautifulSoup parser. Now that you have the hrefs from the links, you essentially need to open all of those links and load their contents into data structures that you can then use for your analysis.
This essentially amounts to a very simple web crawler. If you can use other people's code, you may find something that fits by googling "python web crawler". I've looked at a few of those, and they are straightforward enough, but may be overkill for this task. Most web crawlers use recursion to traverse the full tree of a given site; it looks like something much simpler could suffice for your case.
Given my unfamiliarity with BeautifulSoup, this basic structure will hopefully get you on the right path, or give you a sense of how the web crawling is done:
from BeautifulSoup import BeautifulSoup
import urllib2, re

emailContents = []

def analyze_emails():
    # This function and any sub-routines would analyze the emails after they
    # are loaded into a data structure, e.g. emailContents.
    pass

def parse_email_page(link):
    print "opening " + link
    # Open, soup, and parse the page.
    # The email itself looks like it is in a "blockquote" tag, so that may be
    # the starting place. From there you'll need to build lists and/or
    # dictionaries of the emails' contents to do your analysis on,
    # e.g. emailContents.
    pass

def parse_list_page(link):
    print "opening " + link
    html = urllib2.urlopen(link).read()
    soup = BeautifulSoup(html)
    email_page_links = []  # add your own code here to filter the list-page soup
                           # down to the relevant links to actual email pages
    for link in email_page_links:
        parse_email_page(link['href'])

def main():
    html = urllib2.urlopen('http://www.419scam.org/emails/').read()
    soup = BeautifulSoup(html)
    # I use '20' to filter links since all the relevant links seem to have a
    # 20XX year in them. Seemed to work.
    links = soup.findAll(href=re.compile("20"))
    for link in links:
        parse_list_page(link['href'])
    analyze_emails()

if __name__ == "__main__":
    main()
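To fill in the downloading step the question asks about, parse_email_page might look roughly like this (a sketch under the assumption, noted in the skeleton above, that each email body sits inside a blockquote tag; verify the tag and structure against the actual pages):
def parse_email_page(link):
    # Sketch only: download one email page and collect the text inside its
    # blockquote tags (assumed to contain the email body).
    print "opening " + link
    html = urllib2.urlopen(link).read()
    soup = BeautifulSoup(html)
    for block in soup.findAll('blockquote'):
        text = ''.join(block.findAll(text=True))   # flatten the tag's text nodes
        emailContents.append(text)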

How to extract tables from websites in Python

Here,
http://www.ffiec.gov/census/report.aspx?year=2011&state=01&report=demographic&msa=11500
There is a table. My goal is to extract the table and save it to a CSV file. I wrote this code:
import urllib
import os
web = urllib.urlopen("http://www.ffiec.gov/census/report.aspx?year=2011&state=01&report=demographic&msa=11500")
s = web.read()
web.close()
ff = open(r"D:\ex\python_ex\urllib\output.txt", "w")
ff.write(s)
ff.close()
I'm lost from here. Can anyone help with this? Thanks!
Pandas can do this right out of the box, saving you from having to parse the html yourself. read_html() extracts all tables from your html and puts them in a list of dataframes. to_csv() can be used to convert each dataframe to a csv file. For the web page in your example, the relevant table is the last one, which is why I used df_list[-1] in the code below.
import requests
import pandas as pd
url = 'http://www.ffiec.gov/census/report.aspx?year=2011&state=01&report=demographic&msa=11500'
html = requests.get(url).content
df_list = pd.read_html(html)
df = df_list[-1]
print(df)
df.to_csv('my data.csv')
It's simple enough to do in one line, if you prefer:
pd.read_html(requests.get(<url>).content)[-1].to_csv(<csv file>)
P.S. Just make sure you have lxml, html5lib, and BeautifulSoup4 packages installed in advance.
So essentially you want to parse an HTML file to get elements out of it. You can use BeautifulSoup or lxml for this task.
You already have solutions using BeautifulSoup; I'll post a solution using lxml:
from lxml import etree
import urllib.request
web = urllib.request.urlopen("http://www.ffiec.gov/census/report.aspx?year=2011&state=01&report=demographic&msa=11500")
s = web.read()
html = etree.HTML(s)
## Get all 'tr'
tr_nodes = html.xpath('//table[@id="Report1_dgReportDemographic"]/tr')
## 'th' is inside first 'tr'
header = [i[0].text for i in tr_nodes[0].xpath("th")]
## Get text from rest all 'tr'
td_content = [[td.text for td in tr.xpath('td')] for tr in tr_nodes[1:]]
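Since the goal is a CSV file, you could then write the header and rows out with the csv module, for example (a sketch that simply reuses header and td_content from above; 'output.csv' is a placeholder filename):
import csv

# Sketch: dump the extracted header and rows to a CSV file.
with open('output.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(header)        # column names from the first 'tr'
    writer.writerows(td_content)   # one CSV row per remaining 'tr'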
I would recommend BeautifulSoup as it has the most functionality. I modified a table parser that I found online that can extract all tables from a webpage, as long as there are no nested tables. Some of the code is specific to the problem I was trying to solve, but it should be pretty easy to modify for your usage. Here is the pastebin link:
http://pastebin.com/RPNbtX8Q
You could use it as follows:
from urllib2 import Request, urlopen, URLError
from TableParser import TableParser
import csv

url_addr = 'http://foo/bar'
req = Request(url_addr)
url = urlopen(req)
tp = TableParser()
tp.feed(url.read())

# NOTE: Here you need to know exactly how many tables are on the page and which one
# you want. Let's say it's the first table.
my_table = tp.get_tables()[0]

filename = 'table_as_csv.csv'
with open(filename, 'wb') as f:
    writer = csv.writer(f)
    for row in my_table:
        writer.writerow(row)
The code above is an outline, but if you use the table parser from the pastebin link you should be able to get to where you want to go.
You need to parse the table into an internal data structure and then output it in CSV form.
Use BeautifulSoup to parse the table. This question is about how to do that (the accepted answer uses version 3.0.8 which is out of date by now, but you can still use it, or convert the instructions to work with BeautifulSoup version 4).
Once you have the table in a data structure (probably a list of lists in this case) you can write it out with csv.writer.
Look at the BeautifulSoup module. In its documentation you will find many examples of parsing HTML.
For CSV you also have a ready-made solution: the csv module.
It should be quite easy.
Look at this answer on parsing a table with BeautifulSoup and writing it to a text file.
Also try searching Google for "python beautifulsoup".
