Unscriptable Int Error for String Slice - python

I'm writing a web scraper and I have a table full of links to .pdf files that I want to download, save, and later analyze. I was using Beautiful Soup and had soup find all the links. They are normally Beautiful Soup tag objects, but I've turned them into strings. The string is actually a bunch of junk with the link text buried in the middle. I want to cut out that junk and just leave the link. Then I will turn these into a list and have Python download them later. (My plan is for Python to keep a list of the pdf link names to keep track of what it's downloaded, and then it can name the files according to those link names or a portion thereof.)
But the .pdfs come in variable name-lengths, e.g.:
I_am_the_first_file.pdf
And_I_am_the_seond_file.pdf
and as they exist in the table, they have a bunch of junk text:
a href = ://blah/blah/blah/I_am_the_first_file.pdf[plus other annotation stuff that gets into my string accidentally]
a href = ://blah/blah/blah/And_I_am_the_seond_file.pdf[plus other annotation stuff that gets into my string accidentally]
So I want to cut ("slice") the front part and the last part off of the string and just leave the string that points to my url (so what follows is the desired output for my program):
://blah/blah/blah/I_am_the_first_file.pdf
://blah/blah/blah/And_I_am_the_seond_file.pdf
As you can see, though, the second file has more characters in the string than the first. So I can't do:
string[9:40]
or whatever because that would work for the first file but not for the second.
So i'm trying to come up with a variable for the end of the string slice, like so:
string[9:x]
wherein x is the location in the string that ends in '.pdf' (my thought was to use the string.index('.pdf') method to find it).
But it fails, because I get an error when I try to use a variable to do this
("TypeError: 'int' object is unsubscriptable")
Probably there's an easy answer and a better way to do this other than messing with strings, but you guys are way smarter than me and I figured you'd know straight off.
Here's my full code so far:
import urllib, urllib2
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("mywebsite.com")
soup = BeautifulSoup(page)
table_with_my_pdf_links = soup.find('table', id = 'searchResults')
# "searchResults" is just what the table I was looking for happened to be called.
for pdf_link in table_with_my_pdf_links.findAll('a'):
    # this says find all the links and loop over them
    pdf_link_string = str(pdf_link)
    # turn the links into strings (they are usually soup tag objects, which don't help me much that I know of)
    if 'pdf' in pdf_link_string:
        # some links in the table are .html and I don't want those, I just want the pdfs.
        end_of_link = pdf_link_string.index('.pdf')
        # I want to know where the .pdf file extension ends because that's the end of the link, so I'll slice backward from there
        just_the_link = end_of_link[9:end_of_link]
        # here, the first 9 characters are junk ("a href = yadda yadda yadda"). So I'm setting a variable that starts just after that junk and goes to the .pdf (I realize I will actually have to do .pdf + 3 or something to get to the true end of the string, but this makes it easier for now).
        print just_the_link
        # I debug by print statement because I'm an amateur
The line (second from the bottom) that reads:
just_the_link = end_of_link[9:end_of_link]
returns an error (TypeError: 'int' object is unsubscriptable)
also, the ":" should be hyper text transfer protocol colon, but it won't let me post that b/c newbs can't post more than 2 links so I took them out.

just_the_link = end_of_link[9:end_of_link]
This is your problem, just like the error message says. end_of_link is an integer -- the index of ".pdf" in pdf_link_string, which you calculated in the preceding line. So naturally you can't slice it. You want to slice pdf_link_string.
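A minimal sketch of the fix, dropped into your loop (assuming the 9-character offset from your comment is right for your markup; end_of_link + 4 keeps the ".pdf" inside the slice):

end_of_link = pdf_link_string.index('.pdf')
just_the_link = pdf_link_string[9:end_of_link + 4]  # slice the string, not the integer index
print just_the_link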

Sounds like a job for regular expressions:
import urllib, urllib2, re
from BeautifulSoup import BeautifulSoup

page = urllib2.urlopen("mywebsite.com")
soup = BeautifulSoup(page)
table_with_my_pdf_links = soup.find('table', id = 'searchResults')
# "searchResults" is just what the table I was looking for happened to be called.
for pdf_link in table_with_my_pdf_links.findAll('a'):
    # this says find all the links and loop over them
    pdf_link_string = str(pdf_link)
    # turn the links into strings (they are usually soup tag objects, which don't help me much that I know of)
    if 'pdf' in pdf_link_string:
        pdfURLPattern = re.compile(r"://(\w+/)+\S+\.pdf")
        pdfURLMatch = pdfURLPattern.search(pdf_link_string)
        # If there is no match then search() returns None; otherwise group(0) gives the whole match, i.e. the URL of interest.
        if pdfURLMatch:
            print pdfURLMatch.group(0)

Related

How do I get a specific string from a string between two specified pieces of information

I apologize for the confusing title. I looked around and I know how to get a string between two specified characters, but I am unsure how to get a string between a phrase and a character, such as src="the information I want". In this case I want my starting point to be src=", and the endpoint to be the first " after the start point. How would I go about specifying these parameters in the get method?
Below is the output of what I am asking for help with. Rather than have to manually copy and paste the second URL, I want to assign that string to a variable to automate the process.
>>> %Run myProject.py
enter URL
https://www.instagram.com/p/CAYGHWFFp-x/
<video class="tWeCl" playsinline="" poster="https://scontent-iad3-1.cdninstagram.com/v/t51.2885-15/e35/100101005_584997515466659_2719890114744519125_n.jpg?_nc_ht=scontent-iad3-1.cdninstagram.com&_nc_cat=111&_nc_ohc=DI3B3wg_vaQAX_MvEcQ&oh=06b611ef41299d4f0278467fb1d74e94&oe=5EC66079"
preload="none" src="https://scontent-iad3-1.cdninstagram.com/v/t50.2886-16/98205256_176119867089312_5443572653160790508_n.mp4?_nc_ht=scontent-iad3-1.cdninstagram.com&_nc_cat=100&_nc_ohc=JtZXc2HiQ9kAX_097NE&oe=5EC68ACC&oh=ac92032cb89fa1dfbcb5f2fa9016c9ba" type="video/mp4"></video>
enter the URL
Thank you so much!
You can use Beautiful Soup to parse this content. Then you can look for video elements, and read their src attribute.
from bs4 import BeautifulSoup

soup = BeautifulSoup(text, 'html.parser')
for video in soup.find_all('video'):
    print(video.get('src'))
Output
https://scontent-iad3-1.cdninstagram.com/v/t50.2886-16/98205256_176119867089312_5443572653160790508_n.mp4?_nc_ht=scontent-iad3-1.cdninstagram.com&_nc_cat=100&_nc_ohc=JtZXc2HiQ9kAX_097NE&oe=5EC68ACC&oh=ac92032cb89fa1dfbcb5f2fa9016c9ba
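Since the goal is to store that URL in a variable rather than just print it, here is a small variation on the snippet above (video_srcs and download_url are illustrative names, not anything the question defines):

video_srcs = [video.get('src') for video in soup.find_all('video')]
if video_srcs:
    download_url = video_srcs[0]  # src of the first <video> tag on the page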

Using regex to find something in the middle of a href while looping

For "extra credit" in a beginners class in Python that I am taking I wanted to extract data out of a URL using regex. I know that there are other ways I could probably do this, but my regex desperately needs work so...
Given a URL to start at, find the xth occurrence of a href on the page, and use that link to go down a level. Rinse and repeat until I have found the required link on the page at the requested depth on the site.
I am using Python 3.7 and Beautiful Soup 4.
At the beginning of the program, after all of the house-keeping is done, I have:
starting_url = 'http://blah_blah_blah_by_Joe.html'
extracted_name = re.findall('(?<=by_)([a-zA-Z0-9]+)[^.html]*', starting_url)
selected_names.append(extracted_name)
# Just for testing purposes
print(selected_names)
[['Joe']]
Hmm, a bit odd: I didn't expect a nested list, but I know how to flatten a list, so OK. Let's go on.
I work my way through a couple of loops, opening each url for the next level down by using:
html = urllib.request.urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, 'html.parser')
tags = soup('a')
Continue processing and, in the loop where the program should have found the href I want:
# Testing to check I have found the correct href
print(desired_link)
<a href="http://blah_blah_blah_by_Mary.html">blah blah</a>
type(desired_link)
bs4.element.Tag
Correct link, but a "type" new to me and not something I can use re.findall on. So more research and I have found:
for link in soup.find_all('a'):
    tags = link.get('href')
    # type(tags) is str
    print(tags)
http://blah_blah_blah_by_George.html
http://blah_blah_blah_by_Bill.html
http://blah_blah_blah_by_Mary.html
etc.
Right type, but when I look at what printed, I think what I am looking at is maybe just one long string? And I need a way to just assign the third href in the string to a variable that I can use in re.findall('regex expression', desired_link).
Time to ask for help, I think.
And, while we are at it, any ideas about why I get the nested list the first time I used re.findall with the regex?
Please let me know how to improve this question so it is clearer what I've done and what I'm looking for (I KNOW you guys will, without me even asking).
You've printed every link on the page. But each time in the loop tags contains only one of them (you can print len(tags) to validate it easily).
Also I suggest replacing [a-zA-Z0-9]+ with \w+ - it will catch letters, numbers and underscores and is much cleaner.
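If the aim is to grab, say, the third href as its own variable, one minimal sketch (hrefs and third_href are just illustrative names) is to collect the hrefs into a list and index into it. Note that re.findall always returns a list, even for a single match, which is also why appending its result produced the nested [['Joe']] earlier:

hrefs = [link.get('href') for link in soup.find_all('a')]
third_href = hrefs[2]  # lists are zero-indexed, so [2] is the third link
extracted_name = re.findall(r'(?<=by_)(\w+)', third_href)  # e.g. ['Mary']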

xpath how to format path

I would like to get the @src value '/pol_il_DECK-SANTA-CRUZ-STAR-WARS-EMPIRE-STRIKES-BACK-POSTER-8-25-20135.jpg' from a webpage:
from lxml import html
import requests
URL = 'http://systemsklep.pl/pol_m_Kategorie_Deskorolka_Deski-281.html'
session = requests.session()
page = session.get(URL)
HTMLn = html.fromstring(page.content)
print HTMLn.xpath('//html/body/div[1]/div/div/div[3]/div[19]/div/a[2]/div/div/img/@src')[0]
but I can't. No matter how I format the XPath, it doesn't work.
In the spirit of @pmuntima's answer, if you already know it's the 14th sourced image, but want to stay with lxml, then you can:
print HTMLn.xpath('//img/@data-src')[14]
To get that particular image. It similarly reports:
/pol_il_DECK-SANTA-CRUZ-STAR-WARS-EMPIRE-STRIKES-BACK-POSTER-8-25-20135.jpg
If you want to do your indexing in XPath (possibly more efficient in very large result sets), then:
print HTMLn.xpath('(//img/@data-src)[14]')[0]
It's a little bit uglier, given the need to parenthesize in the XPath, and then to index out the first element of the list that .xpath always returns.
Still, as discussed in the comments above, strictly numerical indexing is generally a fragile scraping pattern.
Update: So why is the XPath given by browser inspect tools not leading to the right element? Because the content seen by a browser, after a dynamic JavaScript-based update process, is different from the content seen by your request. Your request is not running JS, and is doing no such updates. Different content, different address needed--if the address is static and fragile, at any rate.
Part of the updates here seems to be taking src URIs, which initially point to an "I'm loading!" gif, and replacing them with the "real" src values, which are found in the data-src attribute to begin with.
So you need two changes:
a stronger way to address the content you want (a way that doesn't break when you move from browser inspect to program fetch) and
to fetch the URIs you want from data-src not src, because in your program fetch, the JS has not done its load-and-switch trick the way it did in the browser.
If you know text associated with the target image, that can be the trick. E.g.:
search_phrase = 'DECK SANTA CRUZ STAR WARS EMPIRE STRIKES BACK POSTER'
path = '//img[contains(@alt, "{}")]/@data-src'.format(search_phrase)
print HTMLn.xpath(path)[0]
This works because the alt attribute contains the target text. You look for images that have the search phrase contained in their alt attributes, then fetch the corresponding data-src values.
I used a combination of the requests and Beautiful Soup libraries. They are both wonderful and I would recommend them for scraping and parsing/extracting HTML. If you have a complex scraping job, Scrapy is really good.
So for your specific example, I can do
from bs4 import BeautifulSoup
import requests
URL = 'http://systemsklep.pl/pol_m_Kategorie_Deskorolka_Deski-281.html'
r = requests.get(URL)
soup = BeautifulSoup(r.text, "html.parser")
specific_element = soup.find_all('a', class_="product-icon")[14]
res = specific_element.find('img')["data-src"]
print(res)
It will print out
/pol_il_DECK-SANTA-CRUZ-STAR-WARS-EMPIRE-STRIKES-BACK-POSTER-8-25-20135.jpg

Data Scraping / Regex expression Error (python)

I'm trying to scrape data from a table in a website. I can pull the data in, in the form of source code. But in my program, I get the error: TypeError: replace_with() takes exactly 2 arguments (3 given)
import urllib2
import bs4
import re

page_content = ""
for i in range(1,11):
    page = urllib2.urlopen("http://b3.caspio.com/dp.asp?appSession=627360156923294&RecordID=&PageID=2&PrevPageID=2&CPIpage="+str(i)+"&CPISortType=&CPIorderBy=")
    page_content += page.read()

soup = bs4.BeautifulSoup(page_content)
tables = soup.find_all('tr')

file = open('crime_data.csv', 'w+')
for i in tables:
    i = i.replace_with('</td>' , (',')) # this is where I get the error
    i = re.sub(r'<.?td[^>]*>','',i)
    file.write(i + '\n')
Why is it giving me that error?
Also, in essence, I'm trying take the data from the table and put it into a csv file. Any and all help would be greatly appreciated!
That replace_with function does not do what it appears you want it to. The linked docs state that PageElement.replace_with() removes a tag or string from the tree, and replaces it with the tag or string of your choice.
From your code it looks more like you want to replace the whole end tag </td> with a , in an effort to get some sort of comma-separated data.
Perhaps you should instead just use the get_text method on your <td> elements, and format them from there:
for i in tables:
    file.write(i.get_text(',').strip() + '\n')
file.close() ####### <----- VERY IMPORTANT TO CLOSE FILES
Note
I tested your code out and you are not really scraping what you are after. I played around with it and came up with this:
import urllib2
import bs4

def scrape_crimes(html, write_headers):
    soup = bs4.BeautifulSoup(html)  # make the soup
    # search for the exact table you want; there are multiple nested tables on the pages you are scraping
    table = soup.find_all('table', class_=('cbResultSetTable',))
    if len(table) > 0:  # if the table is found
        table = table[0]  # set the table to the first result
    else:
        return  # no table found, no use scraping
    with open('crime_data.csv', 'a') as f:  # opens the file to append content
        trs = table.find_all('tr')  # get all the rows in the table
        if write_headers:  # if we request that headers are written
            for th in trs[0].find_all('th'):  # write each header followed by a comma
                f.write(th.get_text(strip=True).encode('utf-8') + ',')  # ensure data is writable by calling encode
            f.write('\n')  # write a newline
        for tr in trs:  # for each table row in the table
            tds = tr.find_all('td')  # get all the td elements
            if len(tds) > 0:  # if there are td elements (not true for header rows)
                for td in tds:  # for each td element
                    f.write(td.get_text(strip=True).encode('utf-8') + ',')  # add the data followed by a comma
                f.write('\n')  # finish the row off with a newline

open('crime_data.csv', 'w').close()  # clear the file before running
for i in range(1, 11):
    page = urllib2.urlopen("http://b3.caspio.com/dp.asp?appSession=627360156923294&RecordID=&PageID=2&PrevPageID=2&CPIpage="+str(i)+"&CPISortType=&CPIorderBy=")
    # offload the work to a function; the second argument is only true the first time,
    # which ensures that you will get headers only at the top of your output file
    scrape_crimes(page.read(), i == 1)
I removed the use of the re library because, in general, regex and HTML do not play nicely together; the short explanation being: HTML is not a regular language.
I also switched from using the coding pattern:
file = open('file_name','w')
# do stuff
file.close()
to this preferred pattern:
with open('file_name','w') as f:
    # do stuff
In the first example it is common to forget to close the file, which you did forget in your provided code. The second pattern will handle the close for you, so no worries there. Also, it is not good practice to name your variables with the same names as Python built-ins (such as file).
I changed your scripts pattern from combining all the pages html to scraping each page one by one because that is not a good idea. You could run into memory issues if you were doing this with large pages. Instead, it is usually better to handle the data in chunks.
The next thing I did was look at the HTML of the page you were scraping. You were pulling all <tr> elements, but had you closely inspected the page, you would have seen that the table you are after is actually contained in a <tr>, giving you some big nasty block of text as a "result". Using bs4's optional class_ argument to denote a specific class to look for in the table element leads to the data you are after.
The next thing I noticed was that the table headers would get pulled for every page, sprinkling your results with this redundant information. You would only want to pull this info the first time, so I added some logic for that.
I switched to using the .get_text method instead of the regex/replace_with combo you had because of the above explanations. The get_text method returns unicode however so I added the call to .encode('utf-8') which would ensure the data would be writable. I also specified the strip=True argument to get rid of any pesky white-space characters on the data. The reasoning behind this: you load the whole bs4 library, why not use it? The good people who write that library spent a lot of time taking care of parsing the text so you don't have to waste time doing it.
Hope this was helpful! Happy scraping!

Using Beautiful Soup Python module to replace tags with plain text

I am using Beautiful Soup to extract 'content' from web pages. I know some people have asked this question before and they were all pointed to Beautiful Soup and that's how I got started with it.
I was able to successfully get most of the content but I am running into some challenges with tags that are part of the content. (I am starting off with a basic strategy of: if there are more than x-chars in a node then it is content). Let's take the html code below as an example:
<div id="abc">
some long text goes here and hopefully it
will get picked up by the parser as content
</div>
results = soup.findAll(text=lambda(x): len(x) > 20)
When I use the above code to get at the long text, it breaks at the tags (the identified text will start from 'and hopefully...'). So I tried to replace the tag with plain text as follows:
anchors = soup.findAll('a')
for a in anchors:
    a.replaceWith('plain text')
The above does not work because Beautiful Soup inserts the string as a NavigableString and that causes the same problem when I use findAll with the len(x) > 20. I can use regular expressions to parse the html as plain text first, clear out all the unwanted tags and then call Beautiful Soup. But I would like to avoid processing the same content twice -- I am trying to parse these pages so I can show a snippet of content for a given link (very much like Facebook Share) -- and if everything is done with Beautiful Soup, I presume it will be faster.
So my question: is there a way to 'clear tags' and replace them with 'plain text' using Beautiful Soup. If not, what will be best way to do so?
Thanks for your suggestions!
Update: Alex's code worked very well for the sample example. I also tried various edge cases and they all worked fine (with the modification below). So I gave it a shot on a real-life website and I ran into issues that puzzle me.
import urllib
from BeautifulSoup import BeautifulSoup

page = urllib.urlopen('http://www.engadget.com/2010/01/12/kingston-ssdnow-v-dips-to-30gb-size-lower-price/')
soup = BeautifulSoup(page)

anchors = soup.findAll('a')
i = 0
for a in anchors:
    print str(i) + ":" + str(a)

for a in anchors:
    if (a.string is None): a.string = ''
    if (a.previousSibling is None and a.nextSibling is None):
        a.previousSibling = a.string
    elif (a.previousSibling is None and a.nextSibling is not None):
        a.nextSibling.replaceWith(a.string + a.nextSibling)
    elif (a.previousSibling is not None and a.nextSibling is None):
        a.previousSibling.replaceWith(a.previousSibling + a.string)
    else:
        a.previousSibling.replaceWith(a.previousSibling + a.string + a.nextSibling)
        a.nextSibling.extract()
    i = i+1
When I run the above code, I get the following error:
0:<a href="http://www.switched.com/category/ces-2010">Stay up to date with
Switched's CES 2010 coverage</a>
Traceback (most recent call last):
File "parselink.py", line 44, in <module>
a.previousSibling.replaceWith(a.previousSibling + a.string + a.nextSibling)
TypeError: unsupported operand type(s) for +: 'Tag' and 'NavigableString'
When I look at the HTML code, 'Stay up to date..' does not have any previous sibling (I did not know how previousSibling worked until I saw Alex's code, and based on my testing it looks like it is looking for 'text' before the tag). So, if there is no previous sibling, I am surprised that it is not going through the if logic of a.previousSibling is None and a.nextSibling is None.
Could you please let me know what I am doing wrong?
-ecognium
An approach that works for your specific example is:
from BeautifulSoup import BeautifulSoup

ht = '''
<div id="abc">
 some long text goes <a href="/">here </a> and hopefully it
 will get picked up by the parser as content
</div>
'''
soup = BeautifulSoup(ht)

anchors = soup.findAll('a')
for a in anchors:
    a.previousSibling.replaceWith(a.previousSibling + a.string)

results = soup.findAll(text=lambda(x): len(x) > 20)
print results
which emits
$ python bs.py
[u'\n some long text goes here ', u' and hopefully it \n will get picked up by the parser as content\n']
Of course, you'll probably need to take a bit more care, i.e., what if there's no a.string, or if a.previousSibling is None -- you'll need suitable if statements to take care of such corner cases. But I hope this general idea can help you. (In fact you may want to also merge the next sibling if it's a string -- not sure how that plays with your heuristic len(x) > 20, but say for example that you have two 9-character strings with an <a> containing a 5-character string in the middle; perhaps you'd want to pick up the lot as a "23-character string"? I can't tell because I don't understand the motivation for your heuristic.)
I imagine that besides <a> tags you'll also want to remove others, such as <b> or <strong>, maybe <p> and/or <br>, etc...? I guess this, too, depends on what the actual idea behind your heuristics is!
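For what it's worth, a rough, untested sketch of those corner-case checks, written against the BeautifulSoup 3 API used above (strip_anchors is just an illustrative name), might look like:

from BeautifulSoup import BeautifulSoup, NavigableString

def strip_anchors(soup):
    # Replace each <a> with its text, merging into adjacent text nodes when possible
    for a in soup.findAll('a'):
        text = a.string or ''
        prev, nxt = a.previousSibling, a.nextSibling
        if isinstance(prev, NavigableString):
            if isinstance(nxt, NavigableString):
                prev.replaceWith(prev + text + nxt)  # fold both neighbours into one string
                nxt.extract()
            else:
                prev.replaceWith(prev + text)
            a.extract()
        elif isinstance(nxt, NavigableString):
            nxt.replaceWith(text + nxt)
            a.extract()
        else:
            a.replaceWith(text)  # no text neighbours; just swap the tag for its text

The same idea extends to other inline tags by changing the argument to findAll.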
When I tried to flatten tags in the document in that way, so that a tag's entire content would be pulled up to its parent node in place (I wanted to reduce the content of a p tag with all sub-paragraphs, lists, div and span, etc. inside, but get rid of the style and font tags and some horrible word-to-html generator remnants), I found it rather complicated to do with BeautifulSoup itself, since extract() also removes the content and replaceWith() unfortunately doesn't accept None as an argument. After some wild recursion experiments, I finally decided to use regular expressions either before or after processing the document with BeautifulSoup, with the following method:
import re

def flatten_tags(s, tags):
    pattern = re.compile(r"<(( )*|/?)(%s)(([^<>]*=\\\".*\\\")*|[^<>]*)/?>" % (isinstance(tags, basestring) and tags or "|".join(tags)))
    return pattern.sub("", s)
The tags argument is either a single tag or a list of tags to be flattened.
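For example, a hypothetical call (assuming raw_html already holds the page markup as a string):

cleaned = flatten_tags(raw_html, ['font', 'span'])  # strips <font> and <span> tags but keeps their contents
soup = BeautifulSoup(cleaned)  # then parse the flattened markup as usual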
