I am new to scraping with Python. After working through a lot of useful resources I was able to scrape the content of a page. However, I am having trouble saving this data to a .csv file.
Python:
import mechanize
import time
import requests
import csv
from selenium import webdriver
from selenium.webdriver.common.by import By
driver = webdriver.Firefox(executable_path=r'C:\Users\geckodriver.exe')
driver.get("myUrl.jsp")
username = driver.find_element_by_name('USER')
password = driver.find_element_by_name('PASSWORD')
username.send_keys("U")
password.send_keys("P")
main_frame = driver.find_element_by_xpath('//*[@id="Frame"]')
src = driver.switch_to_frame(main_frame)
table = driver.find_element_by_xpath("/html/body/div/div[2]/div[5]/form/div[7]/div[3]/table")
rows = table.find_elements(By.TAG_NAME, "tr")
for tr in rows:
    outfile = open("C:/Users/Scripts/myfile.csv", "w")
    with outfile:
        writers = csv.writer(outfile)
        writers.writerows(tr.text)
Problem:
Only one of the rows gets written to the CSV file. However, when I print tr.text to the console, all the required rows show up. How can I get the text of all the tr elements written to the file?
Currently your code reopens the file in write mode for every row, which truncates it each time, so only the last row ends up in the file. Please consider the following code snippet:
# Use 'with' to open the file and close it automatically when done.
# Open the file once, then loop through the rows and write each one.
with open('C:/Users/Scripts/myfile.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    for tr in rows:
        # writerow() with a one-element list puts the row text in one cell;
        # writerows(tr.text) would write one character per row.
        writer.writerow([tr.text])
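If you would rather have each table cell in its own CSV column than the whole row text in a single cell, a minimal sketch (assuming the rows contain td cells, which the original markup may or may not have) could be:
# Sketch: one CSV column per table cell (assumes the rows contain <td> cells).
with open('C:/Users/Scripts/myfile.csv', 'w', newline='') as outfile:
    writer = csv.writer(outfile)
    for tr in rows:
        cells = tr.find_elements(By.TAG_NAME, "td")  # same Selenium call already used for the rows
        writer.writerow([cell.text for cell in cells])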
I've written a script in Python which fetches the titles of different posts from a webpage and writes them to a CSV file. As the site updates its content very frequently, I'd like to insert the new results at the top of that CSV file, above the list of old titles already there.
I've tried with:
import csv
import time
import requests
from bs4 import BeautifulSoup
url = "https://stackoverflow.com/questions/tagged/python"
def get_information(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, 'lxml')
    for title in soup.select(".summary .question-hyperlink"):
        yield title.text

if __name__ == '__main__':
    while True:
        with open("output.csv", "a", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(['posts'])
            for items in get_information(url):
                writer.writerow([items])
                print(items)
        time.sleep(300)
When the above script is run a second time, it appends the new results after the old results.
Old data are like:
A
F
G
T
New data are W, Q, U.
The CSV file should look like this when I rerun the script:
W
Q
U
A
F
G
T
How can I insert the new results at the top of an existing CSV file that already contains old data?
Inserting data anywhere in a file except at the end requires rewriting the whole thing. To do this without reading its entire contents into memory first, you could create a temporary csv file with the new data in it, append the data from the existing file to that, delete the old file and rename the new one.
Here's an example of what I mean (using a dummy get_information() function to simplify testing).
import csv
import os
from tempfile import NamedTemporaryFile
url = 'https://stackoverflow.com/questions/tagged/python'
csv_filepath = 'updated.csv'
# For testing, create an existing file.
if not os.path.exists(csv_filepath):
    with open(csv_filepath, 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerows([item] for item in 'AFGT')

# Dummy for testing.
def get_information(url):
    for item in 'WQU':
        yield item

if __name__ == '__main__':
    folder = os.path.abspath(os.path.dirname(csv_filepath))  # Get dir of existing file.
    with NamedTemporaryFile(mode='w', newline='', suffix='.csv',
                            dir=folder, delete=False) as newf:
        temp_filename = newf.name  # Save filename.
        # Put new data into the temporary file.
        writer = csv.writer(newf)
        for item in get_information(url):
            writer.writerow([item])
            print([item])
        # Append contents of existing file to new one.
        with open(csv_filepath, 'r', newline='') as oldf:
            reader = csv.reader(oldf)
            for row in reader:
                writer.writerow(row)
                print(row)

    os.remove(csv_filepath)  # Delete old file.
    os.rename(temp_filename, csv_filepath)  # Rename temporary file.
Since you intend to change the position of every row, you need to read the existing data into memory and rewrite the entire file, starting with the new rows.
You may find it easier to (1) write the new rows to a new file, (2) open the old file and append its contents to the new file, and (3) move the new file to the original (old) file name.
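If the existing file is small enough to hold in memory, a minimal sketch of the first approach (reusing the output.csv file name and get_information() function from the question) could look like this:
import csv

# Sketch: read the old rows into memory, then rewrite the file
# with the new rows first and the old rows after them.
with open("output.csv", "r", newline="") as f:
    old_rows = list(csv.reader(f))

with open("output.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for item in get_information(url):
        writer.writerow([item])   # new data first
    writer.writerows(old_rows)    # then the old data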
I have a folder with lots of .txt files. I want to merge all the .txt files into a single .csv file, line by line / row by row.
I have tried the following Python code. It works fine, but I have to change the .txt file name by hand each time to add its content as a new .csv row.
import re
import csv
from bs4 import BeautifulSoup
raw_html = open('/home/erdal/Dropbox/Marburg/LA/LT_CORPUS/fsdl.txt')
cleantext = BeautifulSoup(raw_html, "lxml").text
#print(cleantext)
print (re.sub('\s+',' ', cleantext))
#appending to csv as row
row = [re.sub('\s+',' ', cleantext)]
with open('LT_Corpus.csv', 'a') as csvFile:
    writer = csv.writer(csvFile)
    writer.writerow(row)
    csvFile.close()
I am hoping for a better and faster way to automate the process without changing file names. Any recommendation is welcome.
Accessing a list of filenames
The following should get you closer to what you want.
import os will give you access to the os.listdir() function that lists all the files in a directory. You may need to provide the path to your data folder, if the data files are not in the same folder as your script.
This should look something like:
os.listdir('/home/erdal/Dropbox/Marburg/LA/LT_CORPUS/')
Using all the filenames in that directory, you can then open each one individually, by parsing through them with a for loop.
import re
import csv
from bs4 import BeautifulSoup
import os
filenames = os.listdir('/home/erdal/Dropbox/Marburg/LA/LT_CORPUS/')
for file in filenames:
    raw_html = open('/home/erdal/Dropbox/Marburg/LA/LT_CORPUS/' + file)
    cleantext = BeautifulSoup(raw_html, "lxml").text
    output = re.sub(r'\s+', ' ', cleantext)  # save the result in a variable
    print(output)                            # so the variable can be reused
    row = [output]                           # as needed, in different contexts
    with open('LT_Corpus.csv', 'a') as csvFile:
        writer = csv.writer(csvFile)
        writer.writerow(row)
A couple of other nuances: I removed the csvFile.close() call at the end. When you use a with context manager, it automatically closes the file for you when execution leaves the with block (the indented section below the with statement). Having said this, there may be merit in opening the csv file once, leaving it open while you open the txt files one by one and write their content to it, and closing the csv only at the very end.
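A minimal sketch of that variant (same folder path and parsing as above, with the csv file opened once outside the loop) might look like this:
import os
import re
import csv
from bs4 import BeautifulSoup

folder = '/home/erdal/Dropbox/Marburg/LA/LT_CORPUS/'

# Sketch: open the csv once, then append one row per txt file.
with open('LT_Corpus.csv', 'a', newline='') as csvFile:
    writer = csv.writer(csvFile)
    for filename in os.listdir(folder):
        with open(os.path.join(folder, filename)) as raw_html:
            cleantext = BeautifulSoup(raw_html, "lxml").text
        writer.writerow([re.sub(r'\s+', ' ', cleantext)])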
I was able to write and run the program using BeautifulSoup. The idea is to capture details from the HTML source by parsing multiple URLs listed in a CSV file and to save the output as CSV.
The program runs fine, but the CSV keeps overwriting the values in the first row.
The input file has three URLs to parse.
I want the output to be stored in three different rows.
Below is my code
import csv
import requests
import pandas
from bs4 import BeautifulSoup
with open("input.csv", "r") as f:
reader = csv.reader(f)
for row in reader:
url = row[0]
print (url)
r=requests.get(url)
c=r.content
soup=BeautifulSoup(c, "html.parser")
all=soup.find_all("div", {"class":"biz-country-us"})
for br in soup.find_all("br"):
br.replace_with("\n")
l=[]
for item in all:
d={}
name=item.find("h1",{"class":"biz-page-title embossed-text-white shortenough"})
d["name"]=name.text.replace(" ","").replace("\n","")
claim=item.find("div", {"class":"u-nowrap claim-status_teaser js-claim-status-hover"})
d["claim"]=claim.text.replace(" ","").replace("\n","")
reviews=item.find("span", {"class":"review-count rating-qualifier"})
d["reviews"]=reviews.text.replace(" ","").replace("\n","")
l.append(d)
df=pandas.DataFrame(l)
df.to_csv("output.csv")
Please let me know if any part of my explanation is unclear.
Open the output file in append mode, as suggested in this post, with the modification that you add the header only the first time:
from os.path import isfile

if not isfile("output.csv"):
    df.to_csv("output.csv", header=True)
else:
    with open("output.csv", "a") as f:
        df.to_csv(f, header=False)
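Alternatively, pandas can append directly with to_csv(mode='a'); a minimal sketch (assuming l holds the scraped dicts for the current URL, as in your loop, and that you don't want the index column in output.csv) would be:
import os
import pandas

# Sketch: append one block of rows per URL, writing the header only once.
df = pandas.DataFrame(l)
write_header = not os.path.isfile("output.csv")
df.to_csv("output.csv", mode="a", header=write_header, index=False)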
I'm new to Python and scraping. I'm trying to run two loops. One goes and scrapes ids from one page. Then, using those ids, I call another API to get more info/properties.
But when I run this program, the first part runs fine (it gets the IDs), but then it exits and never runs the second part. I feel I'm missing something really basic about control flow in Python here. Why does Python stop after the first loop when I run it in the terminal?
import requests
import csv
import time
import json
from bs4 import BeautifulSoup, Tag
file = open('parcelids.csv','w')
writer = csv.writer(file)
writer.writerow(['parcelId'])
for x in range(1, 10):
    time.sleep(1)  # slowing it down
    url = 'http://apixyz/Parcel.aspx?Pid=' + str(x)
    source = requests.get(url)
    response = source.content
    soup = BeautifulSoup(response, 'html.parser')
    parcelId = soup.find("span", id="MainContent_lblMblu").text.strip()
    writer.writerow([parcelId])

out = open('mapdata.csv', 'w')

with open('parcelIds.csv', 'r') as in1:
    reader = csv.reader(in1)
    writer = csv.writer(out)
    next(reader, None)  # skip header
    for row in reader:
        row = ''.join(row[0].split())[:-2].upper().replace('/', '-')  # formatting
        url = "https://api.io/api/properties/"
        url1 = url + row
        time.sleep(1)  # slowing it down
        response = requests.get(url1)
        resp_json_payload = response.json()
        address = resp_json_payload['property']['address']
        writer.writerow([address])
If you are running on Windows (where filenames are not case sensitive), then the file you opened for writing (parcelids.csv) is still open when you reopen it to read from it, so the rows you wrote may still be sitting in the write buffer rather than in the file on disk.
Try closing the file before opening it to read from it.
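A minimal sketch of that fix, using a with block so the first file is closed (and its buffer flushed) before the second part runs:
import csv

# Sketch: make sure parcelids.csv is closed before it is read back.
with open('parcelids.csv', 'w', newline='') as file:
    writer = csv.writer(file)
    writer.writerow(['parcelId'])
    # ... the scraping loop from the question writes its rows here ...

# The file is closed at this point, so the second part can safely read it.
with open('parcelids.csv', 'r', newline='') as in1:
    reader = csv.reader(in1)
    next(reader, None)  # skip header
    # ... the API-lookup loop from the question goes here ...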
I've encountered an issue with the CSV-writing part of a web-scraping project.
I have data formatted like this:
table = {
    "UR": url,
    "DC": desc,
    "PR": price,
    "PU": picture,
    "SN": seller_name,
    "SU": seller_url
}
I get this table from a loop that analyzes an HTML page and returns the dictionary above; it changes on every iteration of the loop.
The problem is that when I write each table from that loop to my CSV file, the same thing gets written over and over again: the only element ever written is the first one the loop produces, repeated about 10 million times instead of about 45 times (the number of articles per page).
I tried to do it vanilla with the library 'csv' and then with pandas.
So here's my loop :
if os.path.isfile(file_path) is False:
    open(file_path, 'a').close()

file = open(file_path, "a", encoding="utf-8")

i = 1
while True:
    final_url = website + brand_formatted + "+handbags/?p=" + str(i)
    request = requests.get(final_url)
    soup = BeautifulSoup(request.content, "html.parser")
    articles = soup.find_all("div", {"class": "dui-card searchresultitem"})
    for article in articles:
        table = scrap_it(article)
        write_to_csv(table, file)
    if i == nb_page:
        break
    i += 1

file.close()
and here my method to write into a csv file :
def write_to_csv(table, file):
    import csv
    writer = csv.writer(file, delimiter=" ")
    writer.writerow(table["UR"])
    writer.writerow(table["DC"])
    writer.writerow(table["PR"])
    writer.writerow(table["PU"])
    writer.writerow(table["SN"])
    writer.writerow(table["SU"])
I'm pretty new to writing CSV files and to Python in general, but I can't figure out why this isn't working. I've followed many guides and ended up with more or less the same code for writing a CSV file.
Edit: in the resulting CSV file, every element is exactly the same, even though my table changes on each iteration.
EDIT: I fixed my problem by creating a separate file for each article I scrape. That's a lot of files, but apparently that is fine for my project.
This might be the solution you wanted:
import csv

fieldnames = ['UR', 'DC', 'PR', 'PU', 'SN', 'SU']

def write_to_csv(table, file):
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writerow(table)
Reference: https://docs.python.org/3/library/csv.html
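As a usage note (my own addition, not part of the original answer): you would typically write the header once before the loop, e.g. with writeheader(), and then write one row per scraped article:
# Sketch: header once, then one row per scraped article.
with open(file_path, "a", encoding="utf-8", newline="") as file:
    writer = csv.DictWriter(file, fieldnames=fieldnames)
    writer.writeheader()              # header row: UR, DC, PR, PU, SN, SU
    for article in articles:          # articles comes from the scraping loop above
        writer.writerow(scrap_it(article))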