BeautifulSoup get_text result as a string variable? - python

I need to collect text from a certain section of a wikipedia page, and put it into a single string variable. I can find the right text relatively easily, but I have no idea how to get it into a string variable.
My code so far:
from bs4 import BeautifulSoup

with open('uottawa_wiki.html', 'rb') as infile:
    html_content = infile.read()
soup = BeautifulSoup(html_content, 'html.parser')
soup = soup.body
campus_subsection = soup.find(id='Campus')
campus_subsection_siblings = campus_subsection.find_parent().find_next_siblings()
for sibling in campus_subsection_siblings:
    if sibling.name == 'p':
        print(sibling.get_text())
    else:
        break
And this is what it outputs, which is perfect:
The university's main campus is situated within the neighbourhood of
Sandy Hill (Côte-de-Sable). The main campus is bordered to the north
by the ByWard Market district, to the east by Sandy Hill's residential
area, and to the southwest by Nicholas Street, which runs adjacent to
the Rideau Canal on the western half of the university. As of the
2010–2011 academic year, the main campus occupied 35.3 ha (87 acres),
though the university owns and manages other properties throughout the
city, raising the university's total extent to 42.5 ha (105
acres).[32] The main campus moved two times before settling in its
final location in 1856. When the institution was first founded, the
campus was located next to the Notre-Dame Cathedral Basilica. With
space a major issue in 1852, the campus moved to a location that is
now across from the National Gallery of Canada. In 1856, the
institution moved to its present location.[18]
The buildings at the university vary in age from 100 Laurier (1893) to
120 University (Faculty of Social Sciences, 2012).[33] In 2011 the
average age of buildings was 63.[32] In the 2011–2012 academic year,
the university owned and managed 30 main buildings, 806 research
laboratories, 301 teaching laboratories and 257 classrooms and seminar
rooms.[4][32] The main campus is divided between its older Sandy Hill
campus and its Lees campus, purchased in 2007. While Lees Campus is
not adjacent to Sandy Hill, it is displayed as part of the main campus
on school maps.[34] Lees campus, within walking distance of Sandy
Hill, was originally a satellite campus owned by Algonquin
College.[35]
However, I need all of this text, exactly as it is (line breaks and all), in a single string variable. I have NO CLUE how to do that.

Just append to a list in a loop and then "\n".join() it:
paragraphs = []
for sibling in campus_subsection_siblings:
    if sibling.name == 'p':
        paragraphs.append(sibling.get_text())
    else:
        break
full_text = "\n".join(paragraphs)
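As an aside, the append-and-break loop is equivalent to itertools.takewhile, which stops at the first sibling that is not a <p>. A minimal stdlib-only sketch (the tag() helper here is a stand-in for bs4 elements, invented for illustration):

```python
from itertools import takewhile
from types import SimpleNamespace

# Stand-ins for bs4 Tag objects: each has .name and a get_text() method.
def tag(name, text):
    return SimpleNamespace(name=name, get_text=lambda: text)

siblings = [tag('p', 'First paragraph.'),
            tag('p', 'Second paragraph.'),
            tag('h2', 'Next section')]  # iteration should stop here

# takewhile yields consecutive <p> siblings and stops at the first
# non-<p>, matching the loop-and-break version exactly.
full_text = "\n".join(s.get_text()
                      for s in takewhile(lambda s: s.name == 'p', siblings))
print(full_text)
```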

Related

Get rid of weird indents when getting description in beautiful soup

I have a bs4 program where I collect the descriptions of links. It first checks whether there are any meta description tags, and if there aren't any it gets the description from the page's <p> tags instead.
This is the code:
from bs4 import BeautifulSoup
import requests

def find_title(url):
    page = requests.get(url)
    soup = BeautifulSoup(page.content, 'html.parser')
    with open('descrip.txt', 'a', encoding='utf-8') as f:
        description = soup.find('meta', attrs={'name': 'og:description'}) or soup.find('meta', attrs={'property': 'description'}) or soup.find('meta', attrs={'name': 'description'})
        if description:
            desc = description["content"]
        else:
            desc = soup.find_all('p')[0].getText()
            lengths = len(desc)
            index = 0
            while lengths == 1:
                index = index + 1
                desc = soup.find_all('p')[index].getText()
                lengths = len(desc)
            if lengths > 300:
                desc = soup.find_all('p')[index].getText()[0:300]
            elif lengths < 300:
                desc = soup.find_all('p')[index].getText()[0:lengths]
        print(desc)
        f.write(desc + '\n')
find_title('https://en.wikipedia.org/wiki/Portal:The_arts')
find_title('https://en.wikipedia.org/wiki/Portal:Biography')
find_title('https://en.wikipedia.org/wiki/Portal:Geography')
find_title('https://en.wikipedia.org/wiki/November_15')
find_title('https://en.wikipedia.org/wiki/November_16')
find_title('https://en.wikipedia.org/wiki/Wikipedia:Selected_anniversaries/November')
find_title('https://lists.wikimedia.org/mailman/listinfo/daily-article-l')
find_title('https://en.wikipedia.org/wiki/List_of_days_of_the_year')
find_title('https://en.wikipedia.org/wiki/File:Proclama%C3%A7%C3%A3o_da_Rep%C3%BAblica_by_Benedito_Calixto_1893.jpg')
find_title('https://en.wikipedia.org/wiki/First_Brazilian_Republic')
find_title('https://en.wikipedia.org/wiki/Empire_of_Brazil')
find_title('https://en.wikipedia.org/wiki/Pedro_II_of_Brazil')
find_title('https://en.wikipedia.org/wiki/Benedito_Calixto')
find_title('https://en.wikipedia.org/wiki/Rio_de_Janeiro')
find_title('https://en.wikipedia.org/wiki/Deodoro_da_Fonseca')
But the output in descrip.txt has some weird indents: some descriptions run on for multiple lines, and there are stray spaces between some of them.
This is the output:
The arts refers to the theory, human application and physical expression of creativity found in human cultures and societies through skills and imagination in order to produce objects, environments and experiences. Major constituents of the arts include visual arts (including architecture, ceramics,
A biography, or simply bio, is a detailed description of a person's life. It involves more than just the basic facts like education, work, relationships, and death; it portrays a person's experience of these life events. Unlike a profile or curriculum vitae (résumé), a biography presents a subject's
Geography (from Greek: γεωγραφία, geographia, literally "earth description") is a field of science devoted to the study of the lands, features, inhabitants, and phenomena of the Earth and planets. The first person to use the word γεωγραφία was Eratosthenes (276–194 BC). Geography is an all-encompass
November 15 is the 319th day of the year (320th in leap years) in the Gregorian calendar. 46 days remain until the end of the year.
November 16 is the 320th day of the year (321st in leap years) in the Gregorian calendar. 45 days remain until the end of the year.
The arts refers to the theory, human application and physical expression of creativity found in human cultures and societies through skills and imagination in order to produce objects, environments and experiences. Major constituents of the arts include visual arts (including architecture, ceramics,
A biography, or simply bio, is a detailed description of a person's life. It involves more than just the basic facts like education, work, relationships, and death; it portrays a person's experience of these life events. Unlike a profile or curriculum vitae (résumé), a biography presents a subject's
Geography (from Greek: γεωγραφία, geographia, literally "earth description") is a field of science devoted to the study of the lands, features, inhabitants, and phenomena of the Earth and planets. The first person to use the word γεωγραφία was Eratosthenes (276–194 BC). Geography is an all-encompass
November 15 is the 319th day of the year (320th in leap years) in the Gregorian calendar. 46 days remain until the end of the year.
November 16 is the 320th day of the year (321st in leap years) in the Gregorian calendar. 45 days remain until the end of the year.
Selected anniversaries / On this day archive
All · January · February · March · April · May · June · July · August · September · October · November · December
The sum of all human knowledge. Delivered to your inbox every day.
The following pages list the historical events, births, deaths, and holidays and observances of the specified day of the year:
Original file ‎(5,799 × 3,574 pixels, file size: 15.11 MB, MIME type: image/jpeg)
The First Brazilian Republic or República Velha (Portuguese pronunciation: [ʁeˈpublikɐ ˈvɛʎɐ], "Old Republic"), officially the Republic of the United States of Brazil, refers to the period of Brazilian history from 1889 to 1930. The República Velha ended with the Brazilian Revolution of 1930 that installed Getúlio Vargas as a new president.
The Empire of Brazil was a 19th-century state that broadly comprised the territories which form modern Brazil and (until 1828) Uruguay. Its government was a representative parliamentary constitutional monarchy under the rule of Emperors Dom Pedro I and his son Dom Pedro II. A colony of the Kingdom of Portugal, Brazil became the seat of the Portuguese colonial Empire in 1808, when the Portuguese Prince regent, later King Dom João VI, fled from Napoleon's invasion of Portugal and established himself and his government in the Brazilian city of Rio de Janeiro. João VI later returned to Portugal, leaving his eldest son and heir, Pedro, to rule the Kingdom of Brazil as regent. On 7 September 1822, Pedro declared the independence of Brazil and, after waging a successful war against his father's kingdom, was acclaimed on 12 October as Pedro I, the first Emperor of Brazil. The new country was huge, sparsely populated and ethnically diverse.
Early life (1825–40)
Consolidation (1840–53)
Growth (1853–64)
Paraguayan War (1864–70)
Apogee (1870–81)
Decline and fall (1881–89)
Exile and death (1889–91)
Legacy
Benedito Calixto de Jesus (14 October 1853 – 31 May 1927) was a Brazilian painter.[1] His works usually depicted figures from Brazil and Brazilian culture, including a famous portrait of the bandeirante Domingos Jorge Velho in 1923,[2] and scenes from the coastline of São Paulo.[3] Unlike many artis
Rio de Janeiro (/ˈriːoʊ di ʒəˈnɛəroʊ, - deɪ -, - də -/; Portuguese: [ˈʁi.u d(ʒi) ʒɐˈne(j)ɾu] (listen);[3]), or simply Rio,[4] is anchor to the Rio de Janeiro metropolitan area and the second-most populous municipality in Brazil and the sixth-most populous in the Americas. Rio de Janeiro is the capit
Manuel Deodoro da Fonseca (Portuguese pronunciation: [mɐnuˈɛw deoˈdɔɾu da fõˈsekɐ]; 5 August 1827 – 23 August 1892) was a Brazilian politician and military officer who served as the first President of Brazil. He took office after heading a military coup that deposed Emperor Pedro II and proclaimed t
is there any way to fix this problem?
Add strip=True to getText() (note: it's an alias of get_text()), and then pass a space as the separator. For example:
get_text(strip=True, separator=' ')
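For context, strip=True trims the whitespace around each text fragment and separator=' ' joins the fragments with single spaces. If you ever need the same normalization on a plain string (outside BeautifulSoup), str.split() with no arguments collapses every run of whitespace, which gives roughly the same effect:

```python
raw = """The arts refers to the theory,
        human application and physical
        expression of creativity"""

# split() with no arguments splits on any run of whitespace (newlines,
# tabs, the indents) and drops empty pieces; rejoining with single
# spaces removes the "weird indents".
clean = " ".join(raw.split())
print(clean)
```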

Can't Scrape All the UL Tag's text in python webscrape

I'm new to Python web scraping and am trying to scrape one of the Wikiquote pages for practice.
Link of the wikipedia Page
Code I Tried:
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
import re
req = Request('https://en.wikiquote.org/wiki/India',
              headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
html = BeautifulSoup(webpage, 'html.parser')
quotes = html.find('ul').findAll("b")
print(quotes)
I got the first quote, but I want all of the quotes on the page.
Can anyone provide a solution? TIA!
You have to use findAll to get all the ul elements, then extract the text from each one:
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
import re
req = Request('https://en.wikiquote.org/wiki/India',
              headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req).read()
html = BeautifulSoup(webpage, 'html.parser')
quotes = html.findAll('ul')
for quote in quotes:
    print(quote.get_text())
Result:
In India I found a race of mortals living upon the Earth, but not adhering to it. Inhabiting cities, but not being fixed to them, possessing everything but possessed by nothing.
Apollonius of Tyana, quoted in The Transition to a Global Society (1991) by Kishor Gandhi, p. 17, and in The Age of Elephants (2006) by Peter Moss, p. v
Apollonius of Tyana, quoted in The Transition to a Global Society (1991) by Kishor Gandhi, p. 17, and in The Age of Elephants (2006) by Peter Moss, p. v
This also is remarkable in India, that all Indians are free, and no Indian at all is a slave. In this the Indians agree with the Lacedaemonians. Yet the Lacedaemonians have Helots for slaves, who perform the duties of slaves; but the Indians have no slaves at all, much less is any Indian a slave.
Arrian, Anabasis Alexandri, Book VII : Indica, as translated by Edgar Iliff Robson (1929), p. 335
Arrian, Anabasis Alexandri, Book VII : Indica, as translated by Edgar Iliff Robson (1929), p. 335
No Indian ever went outside his own country on a warlike expedition, so righteous were they.
Arrian, Anabasis Alexandri, Book VII : Indica, as translated by Edgar Iliff Robson (1929), p. 18
Arrian, Anabasis Alexandri, Book VII : Indica, as translated by Edgar Iliff Robson (1929), p. 18
India of the ages is not dead nor has She spoken her last creative word; She lives and has still something to do for herself and the human peoples. And that which must seek now to awake is not an Anglicized oriental people, docile pupil of the West and doomed to repeat the cycle of the Occident's success and failure, but still the ancient immemorial Shakti recovering Her deepest self, lifting Her head higher toward the supreme source of light and strength and turning to discover the complete meaning and a vaster form of her Dharma.
Sri Aurobindo, in the last issue of Arya: A Philosophical Review (January 1921), as quoted in The Modern Review, Vol. 29 (1921), p. 626.
Sri Aurobindo, in the last issue of Arya: A Philosophical Review (January 1921), as quoted in The Modern Review, Vol. 29 (1921), p. 626.
For what is a nation? What is our mother-country? It is not a piece of earth, nor a figure of speech, nor a fiction of the mind. It is a mighty Shakti, composed of the Shaktis of all the millions of units that make up the nation, just as Bhawani Mahisha Mardini sprang into being from the Shaktis of all the millions of gods assembled in one mass of force and welded into unity. The Shakti we call India, Bhawani Bharati, is the living unity of the Shaktis of three hundred million people …
Sri Aurobindo (Bhawāni Mandir) quoted in Issues of Identity in Indian English Fiction: A Close Reading of Canonical Indian English Novels by H. S. Komalesha
Sri Aurobindo (Bhawāni Mandir) quoted in Issues of Identity in Indian English Fiction: A Close Reading of Canonical Indian English Novels by H. S. Komalesha
India is the guru of the nations, the physician of the human soul in its profounder maladies; she is destined once more to remould the life of the world and restore the peace of the human spirit. But Swaraj is the necessary condition of her work and before she can do the work , she must fulfil the condition.
Sri Aurobindo, Sri Aurobindo Mandir Annual (1947), p. 196
Sri Aurobindo, Sri Aurobindo Mandir Annual (1947), p. 196
...
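One caveat visible in the result above: the attribution lines appear twice. findAll('ul') returns nested <ul> elements as well as their parents, so the inner list's text is printed once on its own and once inside the outer list's get_text(). A quick post-processing sketch that drops consecutive duplicate lines (it assumes the duplicates are adjacent, as they are in this output):

```python
lines = [
    "In India I found a race of mortals living upon the Earth...",
    "Apollonius of Tyana, quoted in The Transition to a Global Society (1991)",
    "Apollonius of Tyana, quoted in The Transition to a Global Society (1991)",
]

# Keep a line only if it differs from the line immediately before it.
deduped = [ln for i, ln in enumerate(lines) if i == 0 or ln != lines[i - 1]]
for ln in deduped:
    print(ln)
```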

BS4: Parsing HTML, storing parsed elements & sending as text only when new information is published on the webpage

I am currently having trouble with the elems variable below. Essentially, I am trying to create a script that scrapes the webpage below and sends a text message containing the parsed HTML stored in the variable v. It currently does that, but I want it to grab and send the new data whenever the page is updated (eventually I will add code to have it run once a day). To get that working, I tried to break up the elems string by splitting at every paragraph-ending ']', building a list, and taking list[0], but this is just not working: when I run str(elems) it returns just '[]'. I am very stuck getting this code to send the most recently added paragraph.
import twilio
from twilio.rest import Client
import json
import bs4
import requests
from pprint import pprint

data = json.loads(open('secret.json', 'r').read())
# secret.json password storage

def get_elems_from_document(document):
    pass

res = requests.get('http://www.sharkresearchcommittee.com/pacific_coast_shark_news.htm')
res.raise_for_status()
soup = bs4.BeautifulSoup(res.text, 'html.parser')
for i in range(1, 100):  # attempting to grab the most recently added paragraph
    elems = soup.select('body > div > div > center > table > tr > td:nth-of-type(2) > p:nth-of-type({})'
                        .format(i))
    if '—' in str(elems):
        v = elems[0].text
        # print("{}th element: ".format(i))
        # pprint(elems)

# trying to take the elems variable, turn it into a string, split each
# paragraph up, then return the first in the list
x = str(elems)
y = x.split(']')
f = y[0]

# adding a set
accountSID = data['sid']
authToken = data['authToken']
twilioCli = Client(accountSID, authToken)
myTwilioNumber = data['twilioNumber']
myCellPhone = data['myNumber']
message = twilioCli.messages.create(body='Warning: Shark sighting off the coast of ' + v + ' Beach!',
                                    from_=myTwilioNumber, to=myCellPhone)
You can grab all the news with this script (the most recent item is stored in news[0]):
import bs4
import requests

res = requests.get('http://www.sharkresearchcommittee.com/pacific_coast_shark_news.htm')
res.raise_for_status()
soup = bs4.BeautifulSoup(res.text, 'html.parser')

news = [p.text.strip() for p in soup.select('h1 ~ p') if p.find('font')]
for p in news:
    print(p)
    print('-' * 80)

# most recent news is stored in news[0]
Will output:
Ventura   —   On July 10, 2018 Andy Kastenberg reported the following; "I took a paddle from North of Emma Wood State Beach in Ventura to the South part of the inner reef at about 1:00 PM PST. A small South swell mixed with some wind swell was showing. The West wind was just starting to puff but the water texture was still pretty glassy. Outside air temperature was an estimated 80+ degrees Fahrenheit and the water seemed to be nearing 70 degrees Fahrenheit. After catching a wave or two, a 6 foot shark appeared in the face of a set wave (four foot face). Without expertise, my guess is that it was a young Great White Shark. Another guy in the water said that he had seen one in the area for several days prior as did another surfer back up the beach where I had parked." Please report any shark sighting, encounter, or attack to the Shark Research Committee.
--------------------------------------------------------------------------------
Goleta   —   On July 2, 2018 Aaron Lauer reported the following; "I was working off Goleta on platform Holly, about 2 miles from the Santa Barbara Coast at Coal Oil Point. The platform is in 211 feet of water. I sighted a White Shark, approximately 12 feet long, a dark grey body and a white belly with a dorsal fin about 18 inches high.  There was also a small white tip on the tail fin. It circled the platform slowly once and then headed off to the South, following the coast toward Santa Barbara. A consensus of opinions by myself and co-workers estimated the weight to be in excess of 400 pounds. A number of seals reside on the platform which might be the reason the shark was attracted to it. None of the seals were interested in leaving the platform during this time." Please report any shark sighting, encounter, or attack to the Shark Research Committee.
--------------------------------------------------------------------------------
Oceanside   —   On June 25, 2018 Julie Wolfe was paddling her outrigger canoe 2 miles due West of Oceanside Harbor entrance. It was 6:00 PM and she had been on the water 25 – 30 minutes. The late afternoon sky was clear with an estimated temperature of 70 degrees Fahrenheit. The ocean was calm with an estimated temperature of 68 degrees Fahrenheit and a mild breeze from the West creating a bump to the sea surface. No marine mammals were observed in the area. Wolfe reported; "I was paddling by myself when my canoe was hit HARD from underneath. I immediately turned around and paddled as fast as I could toward shore. I never saw the shark and wasn't sure if it was following me or not until about a minute later it tugged at my paddle! I made it into the harbor safe but my carbon fiber canoe has bite marks through and through . My canoe took on water. Terrifying two mile sprint in!" 'Interspace' measurements of the tooth impressions in her outrigger canoe suggest a White Shark 11 – 12 feet in length. This is the first confirmed unprovoked shark attack reported in 2018 from the Pacific Coast of North America. Please report any shark sighting, encounter, or attack to the Shark Research Committee.
--------------------------------------------------------------------------------
...and so on

Cleaning Wikipedia content with python

Hi, I have made use of a Python library to collect the data of a topic. For example, I chose the topic of New York and retrieved the content with the following code:
import wikipedia
f2 = open('newyork', 'w')
ny = wikipedia.page("New York")
f2.write(ny.content.encode('utf8')+"\n")
I am able to extract the information in the format below:
New York is a state in the Northeastern United States and is the 27th-most extensive, fourth-most populous, and seventh-most densely populated U.S. state. New York is bordered by New Jersey and Pennsylvania to the south and Connecticut, Massachusetts, and Vermont to the east. The state has a maritime border in the Atlantic Ocean with Rhode Island, east of Long Island, as well as an international border with the Canadian provinces of Quebec to the north and Ontario to the west and north. The state of New York, with an estimated 19.8 million residents in 2015, is often referred to as New York State to distinguish it from New York City, the state's most populous city and its economic hub.
With an estimated population of 8.55 million in 2015, New York City is the most populous city in the United States and the premier gateway for legal immigration to the United States. The New York City Metropolitan Area is one of the most populous urban agglomerations in the world. New York City is a global city, exerting a significant impact upon commerce, finance, media, art, fashion, research, technology, education, and entertainment, its fast pace defining the term New York minute. The home of the United Nations Headquarters, New York City is an important center for international diplomacy and has been described as the cultural and financial capital of the world, as well as the world's most economically powerful city. New York City makes up over 40% of the population of New York State. Two-thirds of the state's population lives in the New York City Metropolitan Area, and nearly 40% live on Long Island. Both the state and New York City were named for the 17th century Duke of York, future King James II of England. The next four most populous cities in the state are Buffalo, Rochester, Yonkers, and Syracuse, while the state capital is Albany.
The earliest Europeans in New York were French colonists and Jesuit missionaries who arrived southward from settlements at Montreal for trade and proselytizing. New York had been inhabited by tribes of Algonquian and Iroquoian-speaking Native Americans for several hundred years by the time Dutch settlers moved into the region in the early 17th century. In 1609, the region was first claimed by Henry Hudson for the Dutch, who built Fort Nassau in 1614 at the confluence of the Hudson and Mohawk rivers, where the present-day capital of Albany later developed. The Dutch soon also settled New Amsterdam and parts of the Hudson Valley, establishing the colony of New Netherland, a multicultural community from its earliest days and a center of trade and immigration. The British annexed the colony from the Dutch in 1664. The borders of the British colony, the Province of New York, were similar to those of the present-day state.
Many landmarks in New York are well known to both international and domestic visitors, with New York State hosting four of the world's ten most-visited tourist attractions in 2013: Times Square, Central Park, Niagara Falls (shared with Ontario), and Grand Central Terminal. New York is home to the Statue of Liberty, a symbol of the United States and its ideals of freedom, democracy, and opportunity. In the 21st century, New York has emerged as a global node of creativity and entrepreneurship, social tolerance, and environmental sustainability. New York's higher education network comprises approximately 200 colleges and universities, including Columbia University, Cornell University, New York University, and Rockefeller University, which have been ranked among the top 35 in the world.
== History ==
=== 16th century ===
In 1524, Giovanni da Verrazzano, an Italian explorer in the service of the French crown, explored the Atlantic coast of North America between the Carolinas and Newfoundland, including New York Harbor and Narragansett Bay. On April 17, 1524 Verrazanno entered New York Bay, by way of the Strait now called the Narrows into the northern bay which he named Santa Margherita, in honour of the King of France's sister. Verrazzano described it as "a vast coastline with a deep delta in which every kind of ship could pass" and he adds: "that it extends inland for a league and opens up to form a beautiful lake. This vast sheet of water swarmed with native boats". He landed on the tip of Manhattan and perhaps on the furthest point of Long Island. Verrazanno's stay in this place was interrupted by a storm which pushed him north towards Martha's Vineyard.
In 1540 French traders from New France built a chateau on Castle Island, within present-day Albany; due to flooding, it was abandoned the next year. In 1614, the Dutch under the command of Hendrick Corstiaensen, rebuilt the French chateau, which they called Fort Nassau. Fort Nassau was the first Dutch settlement in North America, and was located along the Hudson River, also within present-day Albany. The small fort served as a trading post and warehouse. Located on the Hudson River flood plain, the rudimentary "fort" was washed away by flooding in 1617, and abandoned for good after Fort Orange (New Netherland) was built nearby in 1623.
=== 17th century ===
Henry Hudson's 1609 voyage marked the beginning of European involvement with the area. Sailing for the Dutch East India Company and looking for a passage to Asia, he entered the Upper New York Bay on September 11 of that year. Word of his findings encouraged Dutch merchants to explore the coast in search for profitable fur trading with local Native American tribes.
During the 17th century, Dutch trading posts established for the trade of pelts from the Lenape, Iroquois, and other tribes were founded in the colony of New Netherland. The first of these trading posts were Fort Nassau (1614, near present-day Albany); Fort Orange (1624, on the Hudson River just south of the current city of Albany and created to replace Fort Nassau), developing into settlement Beverwijck (1647), and into what became Albany; Fort Amsterdam (1625, to develop into the town New Amsterdam which is present-day New York City); and Esopus, (1653, now Kingston). The success of the patroonship of Rensselaerswyck (1630), which surrounded Albany and lasted until the mid-19th century, was also a key factor in the early success of the colony. The English captured the colony during the Second Anglo-Dutch War and governed it as the Province of New York. The city of New York was recaptured by the Dutch in 1673 during the Third Anglo-Dutch War (1672–1674) and renamed New Orange. It was returned to the English under the terms of the Treaty of Westminster a year later.
== References ==
== Further reading ==
French, John Homer (1860). Historical and statistical gazetteer of New York State. Syracuse, New York: R. Pearsall Smith. OCLC 224691273. (Full text via Google Books.)
New York State Historical Association (1940). New York: A Guide to the Empire State. New York City: Oxford University Press. ISBN 978-1-60354-031-5. OCLC 504264143. (Full text via Google Books.)
== External links ==
New York at DMOZ
Geographic data related to New York at OpenStreetMap
The Problems:
Problem 1:
I am having trouble trying to remove all the content from the "References" and "Further reading" sections.
For example:
== History ==
some text under the section History
=== 17th century ===
some text under the section 17 century
=== 19th century ===
some text under the section 19 century
== References ==
some references
== Further reading ==
some further reading sources
Desired Result:
== History ==
some text under the section History
=== 17th century ===
some text under the section 17 century
=== 19th century ===
some text under the section 19 century
Problem 1B:
I will be getting the content of many topics, so there will be many reference sections to delete. How can I do it?
For example, I would like to delete all sections that begin with "References" or "Further reading":
== New York ==
== References ==
== Further reading ==
== California ==
== References ==
== Further reading ==
== Florida ==
== References ==
== Further reading ==
Desired Result:
== New York ==
== California ==
== Florida ==
Sorry for the long post and please forgive me as I have very little knowledge of python.
All advice and help is greatly appreciated.
Thank you.
Edit
Current Problem
Hi osantana,
I have tried the code that you have provided as shown below:
import wikipedia
import re

f2 = open('osantana', 'w')
ny = wikipedia.page("New York")
section_title_re = re.compile("^=+\s+.*\s+=+$")
raw_content = ny.content
content = []
skip = False
for l in raw_content.splitlines():
    line = l.strip()
    if "== References ==" in line.lower():
        skip = True  # replace with break if this is the last section
        continue
    if "== Further reading ==" in line.lower():
        skip = True  # replace with break if this is the last section
        continue
    if "== External links ==" in line.lower():
        skip = True  # replace with break if this is the last section
        continue
    if section_title_re.match(line):
        skip = False
        continue
    if skip:
        continue
    content.append(line)
content = '\n'.join(content) + '\n'
f2.write(content.encode('utf8') + "\n")
It works fine for everything except these three sections:
Original File:
== References ==
Index of New York-related articles
Outline of New York – organized list of topics about New York
== Further reading ==
French, John Homer (1860). Historical and statistical gazetteer of New York State. Syracuse, New York: R. Pearsall Smith. OCLC 224691273. (Full text via Google Books.)
New York State Historical Association (1940). New York: A Guide to the Empire State. New York City: Oxford University Press. ISBN 978-1-60354-031-5. OCLC 504264143. (Full text via Google Books.)
Result of the code:
Index of New York-related articles
Outline of New York – organized list of topics about New York
French, John Homer (1860). Historical and statistical gazetteer of New York State. Syracuse, New York: R. Pearsall Smith. OCLC 224691273. (Full text via Google Books.)
New York State Historical Association (1940). New York: A Guide to the Empire State. New York City: Oxford University Press. ISBN 978-1-60354-031-5. OCLC 504264143. (Full text via Google Books.)
The headings were removed but the content is still intact.
I'll assume that References/Further reading are not the last sections on every page. If they are, replace the highlighted code below with a break statement. (Also note that in your edited code, "== References ==" in line.lower() can never match, because line.lower() is all lowercase while the search string contains capitals; the comparisons below use lowercase strings.)
import re

def parse(raw_content):
    section_title_re = re.compile("^=+\s+.*\s+=+$")
    content = []
    skip = False
    for l in raw_content.splitlines():
        line = l.strip()
        if "= references =" in line.lower():
            skip = True  # replace with break if this is the last section
            continue
        if "= further reading =" in line.lower():
            skip = True  # replace with break if this is the last section
            continue
        if section_title_re.match(line):
            skip = False
            continue
        if skip:
            continue
        content.append(line)
    return '\n'.join(content) + '\n'

print(parse(ny.content))
For problem 1B you could do something like this:
contents = re.sub('=+\s*.+\s*=+', '', contents)
Just remember to import re, the regular expressions module.
The method being used is re.sub(pattern, repl, string). pattern is a regular expression pattern (the re documentation provides an overview).
repl is what you want to replace all occurrences of the pattern with. In this case you want to remove the pattern, so just use an empty string as the replacement.
string is of course the string you're performing the substitution on. This method returns the final result, so if you want to overwrite the original string, just assign the returned value back to the input string.
Here's the pattern I used, explained just in case. r'=+\s*.+\s*=+' means any part of the string where there is one or more equals signs (=+), followed by zero or more whitespace characters (\s*), followed by one or more of any character (.+), followed again by zero or more whitespace characters (\s*), finally ending with one or more equals signs (=+).
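For example, with a made-up contents string (not taken from the actual page) the substitution blanks out the heading lines and leaves the body text untouched:

```python
import re

contents = "Intro paragraph.\n== Campus ==\nCampus text.\n== References ==\nRef list."
cleaned = re.sub(r'=+\s*.+\s*=+', '', contents)
# heading lines are now empty; "Campus text." and "Ref list." survive
```

Note that because . doesn't match newlines by default, each heading is matched within its own line only.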
For problem 1 I'd say you could probably accomplish what you want to using regular expressions as well, and the re module makes it pretty easy. The link I gave above should help.
import re

def clean_data(f):
    def inner(word):
        text = f(word)
        text = text.encode("utf-8", errors='ignore').decode("utf-8")
        text = re.sub(r"https?:.*(?=\s)", '', text)                 # drop URLs
        text = re.sub(r"[’‘\"]", "'", text)                         # normalise quotes
        text = re.sub(r"[^\x00-\x7f]+", '', text)                   # strip non-ASCII
        text = re.sub(r'[#&\\*+/<>#[\]^`{|}~ \t\n\r]', ' ', text)   # punctuation to spaces
        text = re.sub(r'\(.*?\)', '', text)                         # parenthesised asides
        text = re.sub(r'==.*?==', '', text)                         # wiki section headings
        text = re.sub(r' , ', ',', text)
        text = re.sub(r' \.', '.', text)
        text = re.sub(r' +', ' ', text)                             # collapse spaces
        text = re.sub(r';', 'and', text)
        return text.strip()
    return inner
@clean_data
def get_data(word):
    try:
        data = wikipedia.summary(word, sentences=300)
    except wikipedia.DisambiguationError as e:
        print("picking the data from:", e.options[:3])
        data = ''.join([wikipedia.summary(s, sentences=100) for s in e.options[:3]])
    return data

data = get_data("Orange")
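To see the decorator mechanics in isolation, here is a toy version with just two of the substitutions and a dummy source function (the names and sample string are illustrative, not from the original code):

```python
import re

def clean_data(f):
    def inner(word):
        text = f(word)
        text = re.sub(r'\(.*?\)', '', text)  # drop parenthesised asides
        text = re.sub(r' +', ' ', text)      # collapse repeated spaces
        return text.strip()
    return inner

@clean_data
def get_sample(word):
    return "Orange (disambiguation)  is a   fruit"

print(get_sample("orange"))  # -> "Orange is a fruit"
```

The decorated function returns cleaned text transparently; the caller never sees the raw string.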

Python Regular Expression Extract Lookahead

I have been trying to extract transport node names and location coordinate strings from a webpage scrape (one that I have permission to scrape). The names and locations are inside CDATA blocks of JavaScript. See here: http://pastebin.com/6Vtup2dE
Using regular expressions in python
re.findall("(?:\(new\sMicrosoft\.Maps\.Location\()(.+?(?=\)\,))(?:.+?(?=new\ssimpleInfo\(\\\'))(.+?(?=\\)))", test_str)
I get
[(u'55.86527,-4.2517133',
u"new simpleInfo('Buchanan Bus Station','Glasgow, Buchanan Bus Station - entrance to station is situated on Killermont Street. It is a short walk from George Square and within easy reach of Glasgow?s main shopping and leisure areas. Please check the bus station passenger displays for stance information for megabus services.'"),
(u'55.86068,-4.257852', u"new simpleInfo('Central Train Station',''"),
(u'51.492653,-0.14765126',
u"new simpleInfo('Victoria, Buckingham Palace Rd, Stop 10','London Victoria, Buckingham Palace Road - at the corner of Elizabeth Bridge and diagonally across from the main entrance to Victoria Coach Station. megabus Oxford Tube services leave from Stop 10.'"),
(u'51.492596,-0.14985295',
u"new simpleInfo('Victoria Coach Station','London Victoria Coach Station is situated on Buckingham Palace Rd at the junction with Elizabeth St. megabus services depart from Stands 15-20, located in the departures area of North West terminal '"),
(u'51.503437,-0.112076715',
u"new simpleInfo('Waterloo Train Station','London Waterloo - London Waterloo Station is located on Station Approach, SE1 London - just behind the London Eye. The station is a terminus for trains serving the south-west of England and Eurostar services. Waterloo is the largest station in the UK by area. Its spacious, curved concourse is lined with shops and all the modern amenities.\\n'"),
(u'51.53062,-0.12585254',
u"new simpleInfo('St Pancras International Train Station','For East Midlands Trains services only. London St Pancras International, London - St Pancras Station is located on Pancras Rd NW1 between the national Library and Kings Cross station. The station is the terminus for trains serving East Midlands and South Yorkshire. It is also the new London terminal for the Eurostar services to the continent. Kings Cross St Pancras tube station provides links via the London underground to other London destinations.'"),
(u'51.52678,-0.13297649',
u"new simpleInfo('Euston Train Station','For Virgin Trains Services Only. London Euston - The station is the main terminal for trains to London from the West Midlands and North West England. It is connected to Euston Tube Station for easy access to the London Underground network'"),
(u'51.52953,-0.12506014',
u"new simpleInfo('St Pancras, Coach Road','In some instances megabusplus services which operate as coach only will pick up from Coach Road, outside London St Pancras.'"),
(u'55.86527,-4.2517133',
u"new simpleInfo('Buchanan Bus Station','Glasgow, Buchanan Bus Station - entrance to station is situated on Killermont Street. It is a short walk from George Square and within easy reach of Glasgow?s main shopping and leisure areas. Please check the bus station passenger displays for stance information for megabus services.'"),
(u'55.86068,-4.257852', u"new simpleInfo('Central Train Station',''")]
But what I am trying to get is just:
[(u'55.86527,-4.2517133','Buchanan Bus Station'),
(u'55.86068,-4.257852', 'Central Train Station'),
(u'51.492653,-0.14765126','Victoria, Buckingham Palace Rd, Stop 10'),
(u'51.492596,-0.14985295','Victoria Coach Station')....etc]
I've written plenty of regex in my time but I've never had problems like this. I am trying (believe it or not) to skip everything up to and including "new simpleInfo('" and then grab the string up to the next "'", but I can't work it out. Help!
Try this:
re.findall(r"(?:\(new\sMicrosoft\.Maps\.Location\(([^)]+)\).+?new\ssimpleInfo\(\\?'(.+?)\\?')", test_str)
This regex finds all occurrences whether the text contains \'Buchanan Bus Station\' or 'Buchanan Bus Station'.
Here is the demo
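A quick way to check the pattern in Python is against a trimmed sample of the scraped string (the sample below is abbreviated from the question's pastebin; the literal \' appears as in the page source):

```python
import re

sample = (r"EmperorBing.addMarker(map, new Microsoft.Maps.Pushpin("
          r"new Microsoft.Maps.Location(55.86527,-4.2517133), {width:42}),"
          r"new simpleInfo(\'Buchanan Bus Station\',\'Glasgow stop info\'))")

pairs = re.findall(
    r"(?:\(new\sMicrosoft\.Maps\.Location\(([^)]+)\).+?new\ssimpleInfo\(\\?'(.+?)\\?')",
    sample)
print(pairs)  # -> [('55.86527,-4.2517133', 'Buchanan Bus Station')]
```

The first capture group ([^)]+) grabs the coordinates, and the lazy (.+?) stops at the first (optionally escaped) closing quote after simpleInfo(, which is exactly the station name.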
(?:\(new\sMicrosoft\.Maps\.Location\()(.+?(?=\)\,))(?:.+?).*?new\ssimpleInfo\(\\'([^'\\]+)
Try this. This should give you what you want.
import re
p = re.compile(ur'(?:\(new\sMicrosoft\.Maps\.Location\()(.+?(?=\)\,))(?:.+?).*?new\ssimpleInfo\(\\\'([^\'\\]+)')
test_str = u"jQuery(function(){ jQuery(\'#JourneyPlanner_txtOutboundDate\').datepicker({dateFormat: \'dd/mm/yy\', firstDay: 1, beforeShowDay: function(dte){ return [((dte >= new Date(2014,9,29) && dte <= new Date(2015,0,4)) || false)]; }, minDate: new Date(2014,9,29), maxDate: new Date(2015,0,4),buttonImage: \"/images/icon_calendar.gif\", showOn: \"both\", buttonImageOnly: true}); });\njQuery(function(){ jQuery(\'#JourneyPlanner_txtReturnDate\').datepicker({dateFormat: \'dd/mm/yy\', firstDay: 1,buttonImage: \"/images/icon_calendar.gif\", showOn: \"both\", buttonImageOnly: true}); });\nEmperorBing.addCallback(function(){ var map = new Microsoft.Maps.Map(document.getElementById(\'confirm1_Map1\'), {credentials:\'Aodb7Wd7D9Kq5gKNryfW6V29yf8aw2Sbu-tXAlkH7OLJtm8zG2bQzzhDKK5zM9FE\',height: 320,width: 299, zoom: 13, mapTypeId: Microsoft.Maps.MapTypeId.auto, enableClickableLogo: false , enableSearchLogo: false , showDashboard: true, showCopyright: true, showScalebar: true, showMapTypeSelector: true});\r\nEmperorBing.addMarker(map, new Microsoft.Maps.Pushpin(new Microsoft.Maps.Location(55.86527,-4.2517133), { undefined: undefined, icon:\'/images/mapmarker.gif\', width:42, height:42, anchor: new Microsoft.Maps.Point(21,21)}),new simpleInfo(\'Buchanan Bus Station\',\'Glasgow, Buchanan Bus Station - entrance to station is situated on Killermont Street. It is a short walk from George Square and within easy reach of Glasgow?s main shopping and leisure areas. Please check the bus station passenger displays for stance information "
re.findall(p, test_str)
See demo.
http://regex101.com/r/dP9rO4/9