Grab a URL from a column and paste it in Chrome - Python

I have an Excel file with a column of 4000+ URLs, each in a different cell. I need to use Python to open each URL in Chrome, scrape some of the data from the website, and paste the results into Excel.
Then do the same steps for the next URL. Could you please help me with that?

Export the Excel file to a CSV file and read the data from it like this:

def data_collector(url):
    # do your scraping here and return the data you want written in place of the url
    return url

with open("myfile.csv") as fobj:
    content = fobj.read()

# the line below gives you the urls in the form of a list
urls = content.replace(",", " ").split()

for url in urls:
    data_to_be_write = data_collector(url)
    # extra quotes added to prevent breaking the csv; it is recommended
    # to use the csv module to write csv files, but for ease of understanding
    # I did it like this, hoping you will correct it yourself
    content = content.replace(url, "\"" + data_to_be_write + "\"")

with open("new_file.csv", "wt") as fnew:
    fnew.write(content)

After running this code you will get new_file.csv; opening it with Excel, you will see your desired data in place of each URL.
If you want the URL together with its data, just join them into one string separated by a colon.
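As the comments in that snippet suggest, the csv module is the safer way to write the file, since it handles the quoting for you. A minimal sketch of that approach, assuming one URL per row in the first column of myfile.csv and the same placeholder data_collector:

import csv

def data_collector(url):
    # placeholder: do your scraping here and return the data to store
    return url

with open("myfile.csv", newline="") as fobj:
    rows = list(csv.reader(fobj))

with open("new_file.csv", "w", newline="") as fnew:
    writer = csv.writer(fnew)
    for row in rows:
        if not row:
            continue
        url = row[0]
        # csv.writer takes care of quoting, so no manual escaping is needed
        writer.writerow([url, data_collector(url)])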

Related

Search for a word in a webpage and save to TXT in Python

I am trying to: load links from a .txt file, search for a specific word, and, if the word exists on that webpage, save the link to another .txt file. But I am getting the error: No scheme supplied. Perhaps you meant http://<_io.TextIOWrapper name='import.txt' mode='r' encoding='cp1250'>?
Note: the links have https://
The code:
import requests

list_of_pages = open('import.txt', 'r+')
save = open('output.txt', 'a+')
word = "Word"
save.truncate(0)

for page_link in list_of_pages:
    res = requests.get(list_of_pages)
    if word in res.text:
        response = requests.request("POST", url)
        save.write(str(response) + "\n")
Can anyone explain why? Thank you in advance!
Try putting http:// in front of the links.
When you use res = requests.get(list_of_pages) you're creating an HTTP connection to list_of_pages. But requests.get takes a URL string as a parameter (e.g. http://localhost:8080/static/image01.jpg), and look at what list_of_pages is: it's an already opened file, not a string. You have to use either the requests library or the file IO API on a given object, not both.
If you have an already opened file, you don't need to create an HTTP request at all. You don't need requests.get(). Parse list_of_pages like a normal, local file.
Or, if you would like to go the other way, don't open the text file into list_of_pages; make it a string with the URL of that file.
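A minimal sketch of the corrected loop, iterating over the file line by line and requesting each URL string in turn (file names taken from the question; whitespace is stripped off each line):

import requests

word = "Word"

with open('import.txt') as list_of_pages, open('output.txt', 'w') as save:
    for page_link in list_of_pages:
        page_link = page_link.strip()  # drop the trailing newline
        if not page_link:
            continue
        res = requests.get(page_link)  # pass the URL string, not the file object
        if word in res.text:
            save.write(page_link + "\n")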

Download CSV Data by looping over a Pandas Data frame which consists of 47 URL

I am trying to develop a Python script for my data engineering project, and I want to loop over 47 URLs stored in a dataframe, downloading the CSV file behind each one and storing it on my local machine. Below is an example using one of the URLs:
import requests

test_url = "https://data.cdc.gov/api/views/pj7m-y5uh/rows.csv?accessType=DOWNLOAD"
req = requests.get(test_url)
url_content = req.content
csv_file = open('cdc6.csv', 'wb')
csv_file.write(url_content)
csv_file.close()
I have this for a single file, but instead of opening a CSV file and writing the data into it one URL at a time, I want to download all the files directly and save them to my local machine.
You want to iterate and then download each file to a folder. Iteration is easy using the .items() method, which yields (index, value) pairs when the URLs are stored in a pandas Series, and passing it into a loop. See the documentation here.
Then, you want to download each item. urllib.request has a urlretrieve(url, filename) function for downloading a hosted file to a local file, which is elaborated on in the urllib documentation here.
Your code may look like:
import urllib.request

for index, url in url_df.items():
    urllib.request.urlretrieve(url, "cdcData" + str(index) + ".csv")

or if you want to preserve the original names:

for index, url in url_df.items():
    name = url.split("/")[-1]
    urllib.request.urlretrieve(url, name)
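If you would rather stay with requests, as in your original snippet, an equivalent sketch (assuming url_df is a pandas Series of URL strings) could be:

import requests

for index, url in url_df.items():
    req = requests.get(url)
    with open("cdcData" + str(index) + ".csv", "wb") as csv_file:
        csv_file.write(req.content)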

Downloaded SharePoint Excel Not Opening with open()

I am re-framing an existing question for simplicity. I have the following code to download Excel files from a company SharePoint site.
import requests
import pandas as pd

def download_file(url):
    filename = url.split('/')[-1]
    r = requests.get(url)
    with open(filename, 'wb') as output_file:
        output_file.write(r.content)

df = pd.read_excel(r'O:\Procurement Planning\QA\VSAF_test_macro.xlsm')
df['Name'] = 'share_point_file_path_documentName'  # I'm appending the sp file path to the document name
file = df['Name']  # I only need the file path column, I don't need the rest of the dataframe

# for loop for download
for url in file:
    download_file(url)
The downloads happen and I don't get any errors in Python; however, when I try to open the files, Excel says it cannot open the file because the file format or extension is not valid. If I print the link in Jupyter Notebook it does open correctly, so the issue appears to be with the download.
Check r.status_code. It must be 200, or you have the wrong URL or no permission.
Open the downloaded file in a text editor; it might be an HTML file (Office Online).
If the URL contains a web=1 query parameter, remove it or replace it with web=0.
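One possible sketch of download_file with those checks folded in; the web=1 handling via urllib.parse is just one way to do it:

import requests
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def download_file(url):
    # drop a web=1 query parameter, which serves the Office Online page
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query.pop('web', None)
    url = urlunparse(parts._replace(query=urlencode(query, doseq=True)))

    r = requests.get(url)
    if r.status_code != 200:
        raise RuntimeError("Bad status " + str(r.status_code) + " for " + url)
    filename = url.split('/')[-1]
    with open(filename, 'wb') as output_file:
        output_file.write(r.content)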

Python Looping through urls in csv file returns \ufeffhttps://

I am new to Python and I am trying to loop through a list of URLs in a CSV file, grab the website title using BeautifulSoup, and then save those titles to a file Headlines.csv. But I am unable to grab the webpage title. If I use a variable with a single URL as follows:
import requests as req
from bs4 import BeautifulSoup

url = 'https://www.space.com/japan-hayabusa2-asteroid-samples-landing-date.html'
resp = req.get(url)
soup = BeautifulSoup(resp.text, 'lxml')
print(soup.title.text)
It works just fine and I get the title Japanese capsule carrying pieces of asteroid Ryugu will land on Earth Dec. 6 | Space
But when I use the loop,
import csv
import requests as req
from bs4 import BeautifulSoup

with open('urls_file2.csv', newline='', encoding='utf-8') as f:
    reader = csv.reader(f)
    for url in reader:
        print(url)
        resp = req.get(url)
        soup = BeautifulSoup(resp.text, 'lxml')
        print(soup.title.text)
I get the following
['\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16']
and an error message
InvalidSchema: No connection adapters were found for "['\ufeffhttps://www.foxnews.com/us/this-day-in-history-july-16']"
I am not sure what I am doing wrong.
You have a byte order mark \ufeff on the URL you parse from your file.
It looks like your file has a BOM signature, i.e. its encoding is utf-8-sig.
You need to read the file with encoding='utf-8-sig'.
Read more here.
As the previous answer has already mentioned, the "\ufeff" is a byte order mark, so you need to change the encoding.
The second issue is that when you read a CSV file, you get a list containing all the columns for each row. The keyword here is list: you are passing requests a list instead of a string.
Based on the example you have given, I assume your URLs are in the first column of the CSV. Python lists start at index 0, not 1, so to extract the URL you need to take index 0, which refers to the first column.
import csv

with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f:
    reader = csv.reader(f)
    for url in reader:
        print(url[0])
To read up more on lists, you can refer here.
You can add more columns to the CSV file and experiment to see how the results would appear.
If you would like to refer to the column name while reading each row, you can refer here.
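Putting both fixes together, a sketch of the full loop that fetches each title and writes it to Headlines.csv (file names taken from the question; error handling omitted):

import csv
import requests as req
from bs4 import BeautifulSoup

with open('urls_file2.csv', newline='', encoding='utf-8-sig') as f, \
        open('Headlines.csv', 'w', newline='', encoding='utf-8') as out:
    reader = csv.reader(f)
    writer = csv.writer(out)
    for row in reader:
        url = row[0]  # the URL string, not the whole row list
        resp = req.get(url)
        soup = BeautifulSoup(resp.text, 'lxml')
        writer.writerow([url, soup.title.text])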

Extract a table from a locally saved HTML file

I have a series of HTML files stored in a local folder ("destination folder"). These HTML files all contain a number of tables. What I'm looking to do is locate the tables I'm interested in using keywords, grab those tables in their entirety, paste them into a text file, and save that file to the same local folder ("destination folder").
This is what I have for now:
from bs4 import BeautifulSoup
import re

filename = open('filename.txt', 'r')
soup = BeautifulSoup(filename, "lxml")
data = []
for keyword in keywords.split(','):
    u = 1
    try:
        txtfile = destinationFolder + ticker + '_' + companyname[:10] + '_' + item[1] + '_' + item[3] + '_' + keyword + u + '.txt'
        mots = soup.find_all(string=re.compile(keyword))
        for mot in mots:
            for row in mot.find("table").find_all("tr"):
                data = [cell.get_text(strip=True) for cell in row.find_all("td")]
            data = data.get_string()
            with open(txtfile, 'wb') as t:
                t.write(data)
            txtfile.close()
            u = u + 1
    except:
        pass
filename.close()
Not sure what's happening in the background, but I don't get my txt file at the end like I'm supposed to. The process doesn't fail; it runs its course till the end, but the txt file is nowhere to be found in my local folder when it's done. I'm sure I'm looking in the correct folder: the same path is used elsewhere in my code and works fine.
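For reference, a minimal sketch of the intended flow, with hypothetical keywords and destinationFolder values standing in for the real ones. Note that the bare except: pass above silently swallows errors such as concatenating the integer u into the file name, which would explain why the script finishes without producing a file. find_parent("table") is used here to climb from the matched string to its enclosing table, since find("table") on a matched string does not do that:

from bs4 import BeautifulSoup
import re

keywords = "revenue,income"            # hypothetical keyword list
destinationFolder = "C:/destination/"  # hypothetical folder

with open('filename.txt', 'r') as f:
    soup = BeautifulSoup(f, "lxml")

for keyword in keywords.split(','):
    u = 1
    for mot in soup.find_all(string=re.compile(keyword)):
        table = mot.find_parent("table")  # the table containing the keyword
        if table is None:
            continue
        rows = []
        for row in table.find_all("tr"):
            cells = [cell.get_text(strip=True) for cell in row.find_all(["td", "th"])]
            rows.append("\t".join(cells))
        txtfile = destinationFolder + keyword + str(u) + '.txt'  # str(u), not u
        with open(txtfile, 'w', encoding='utf-8') as t:
            t.write("\n".join(rows))
        u = u + 1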
