I've written some web-scraping code that is currently working, but quite slow. Some background: I am using Selenium, as the site requires several stages of clicks and entry, along with BeautifulSoup. My code looks at a list of materials within subcategories on a website (image below) and scrapes them. If a material scraped from the website is one of the 30 I am interested in (lst below), it writes the number 1 to a dataframe which I later convert to an Excel sheet.
The reason it is so slow, I believe, is that there are a lot of exceptions, and I am not sure how to handle these besides try/except. The main bits of code are below, as the entire piece of code is quite lengthy. I have also attached an image of the website in question for reference.
lst = ["Household cleaner and detergent bottles", "Plastic milk bottles", "Toiletries and shampoo bottles", "Plastic drinks bottles",
       "Drinks cans", "Food tins", "Metal lids from glass jars", "Aerosols",
       "Food pots and tubs", "Margarine tubs", "Plastic trays", "Yoghurt pots", "Carrier bags",
       "Aluminium foil", "Foil trays",
       "Cardboard sleeves", "Cardboard egg boxes", "Cardboard fruit and veg punnets", "Cereal boxes", "Corrugated cardboard", "Toilet roll tubes", "Food and drink cartons",
       "Newspapers", "Window envelopes", "Magazines", "Junk mail", "Brown envelopes", "Shredded paper", "Yellow Pages", "Telephone directories",
       "Glass bottles and jars"]
def site_scraper(site):
    page_loc = '//*[@id="wrap-rlw"]/div/div[2]/div/div/div/div[2]/div/ol/li[{}]/div'.format(site)
    page = driver.find_element_by_xpath(page_loc)
    page.click()
    driver.execute_script("arguments[0].scrollIntoView(true);", page)
    soup = BeautifulSoup(driver.page_source, 'lxml')
    for i in x:
        for j in y:
            try:
                material = soup.find_all("div", class_="rlw-accordion-content")[i].find_all('li')[j].get_text(strip=True).encode('utf-8')
                if material in lst:
                    df.at[code_no, material] = 1
            except IndexError:
                continue
x = xrange(0,8)
y = xrange(0,9)
p = xrange(1,31)
for site in p:
site_scraper(site)
Specifically, the i's and j's rarely go to 6, 7 or 8, but when they do, it is important that I capture that information too. For context, the i's correspond to the number of different categories in the image below (Automotive, Building materials, etc.) whilst the j's represent the sub-list (Car batteries, Engine oil, etc.). Because these two loops are repeated for all 30 sites for each code, and I have 1500 codes, this is extremely slow. Currently it is taking 6.5 minutes for 10 codes.
Is there a way I could improve this process? I tried a list comprehension; however, it was difficult to handle errors like this and my results were no longer accurate. Could an "if" statement be a better choice here, and if so, how would I incorporate it? I would also be happy to attach the full code if required. Thank you!
EDIT:
by changing
except IndexError:
    continue
to
except IndexError:
    break
it is now running almost twice as fast! Obviously it is best to exit the loop after the first failure, as the later iterations will also fail. However, any other pythonic tips are still welcome :)
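Beyond switching continue to break, the IndexError can be designed out entirely: iterate over the elements BeautifulSoup actually found rather than over fixed index ranges, so there is nothing to catch. A minimal sketch of just that membership step, assuming the li texts have already been extracted into one list per category (the function name and data here are illustrative, not from the question):

```python
def matched_materials(categories, wanted):
    """categories: one inner list of material names per accordion section;
    returns every material that is also in `wanted`, in page order."""
    wanted = set(wanted)  # O(1) membership tests
    return [m for section in categories for m in section if m in wanted]

# stand-in data; on the real page each inner list would come from one
# "rlw-accordion-content" div's <li> texts
found = matched_materials(
    [["Drinks cans", "Car batteries"], ["Aerosols", "Engine oil"]],
    ["Drinks cans", "Aerosols"],
)
```

Each entry of `found` would then be written to the dataframe exactly as before.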
It sounds like you just need the text of those lis:
lis = driver.execute_script("return [...document.querySelectorAll('.rlw-accordion-content li')].map(li => li.innerText.trim())")
Now you can use those for your logic:
for material in lis:
    if material in lst:
        df.at[code_no, material] = 1
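Since only membership in lst matters, a set intersection gives every match in a single pass; a small sketch with stand-in data (on the real page, `lis` would be the list returned by the execute_script call and `lst` the full list of 30 materials):

```python
# stand-in data for illustration only
lst = ["Drinks cans", "Aerosols", "Glass bottles and jars"]
lis = ["Drinks cans", "Car batteries", "Glass bottles and jars"]

# every scraped material that is also a material of interest
matches = set(lis) & set(lst)
# each match would then be recorded with df.at[code_no, material] = 1
```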
I've created the following code, which pulls cryptocurrency prices from the CoinGecko API and parses the bits I need from the JSON:
btc = requests.get("https://api.coingecko.com/api/v3/coins/bitcoin")
btc.raise_for_status()
jsonResponse = btc.json()  # print(btc.json()) for debug
btc_marketcap = jsonResponse["market_data"]["market_cap"]["usd"]
This works fine, except I then need to duplicate the above 4 lines for every currency which is getting long/messy & repetitive.
After researching I felt an approach was to store the coins in an array, and loop through the array replacing bitcoin in the above example with each item from the array.
symbols = ["bitcoin", "ethereum", "sushi", "uniswap"]
for x in symbols:
    print(x)
This works as expected, but I'm having issues substituting bitcoin/btc for x successfully.
Any pointers appreciated, and whether this is the best approach for what I am trying to achieve
Something like this could work. Basically, just put the repeated part inside a function and call it with the changing argument (the currency). The substitution of the currency can be done, for example, with f-strings:
import requests

def get_data(currency):
    response = requests.get(f"https://api.coingecko.com/api/v3/coins/{currency}")
    response.raise_for_status()
    return response.json()["market_data"]["market_cap"]["usd"]

for currency in ["bitcoin", "ethereum", "sushi", "uniswap"]:
    print(get_data(currency))
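To keep the results instead of just printing them, the same loop can fill a dict keyed by coin id. The HTTP call is stubbed out with canned payloads here so the shaping logic is visible without network access (the figures are made up for the example; the real code would call the function above):

```python
def market_cap_usd(payload):
    """Extract the USD market cap from a CoinGecko /coins/{id} payload."""
    return payload["market_data"]["market_cap"]["usd"]

# canned payloads standing in for requests.get(...).json()
fake_payloads = {
    "bitcoin": {"market_data": {"market_cap": {"usd": 1_000_000}}},
    "ethereum": {"market_data": {"market_cap": {"usd": 500_000}}},
}
caps = {coin: market_cap_usd(p) for coin, p in fake_payloads.items()}
```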
I am trying to access the reviews and the star rating of each reviewer and append the values to a list. However, it doesn't let me return the output. Can anyone tell me what's wrong with my code?
l = []
for i in range(0, len(allrev)):
    try:
        l["stars"] = allrev[i].allrev.find("div", {"class": "lemon--div__373c0__1mboc i-stars__373c0__1T6rz i-stars--regular-4__373c0__2YrSK border-color--default__373c0__3-ifU overflow--hidden__373c0__2y4YK"}).get('aria-label')
    except:
        l["stars"] = None
    try:
        l["review"] = allrev[i].find("span", {"class": "lemon--span__373c0__3997G raw__373c0__3rKqk"}).text
    except:
        l["review"] = None
    u.append(l)
    l = {}
print({"data": u})
To get all the reviews you can try the following:
import requests
from bs4 import BeautifulSoup

URL = "https://www.yelp.com/biz/sushi-yasaka-new-york"
soup = BeautifulSoup(requests.get(URL).content, "html.parser")

for star, review in zip(
    soup.select(
        ".margin-b1__373c0__1khoT .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .overflow--hidden__373c0__2y4YK"
    ),
    soup.select(".comment__373c0__3EKjH .raw__373c0__3rcx7"),
):
    print(star.get("aria-label"))
    print(review.text)
    print("-" * 50)
Output:
5 star rating
I've been craving sushi for weeks now and Sushi Yasaka hit the spot for me. Their lunch prices are unbeatable. Their lunch specials seem to extend through weekends which is also amazing.I got the Miyabi lunch as take out and ate in along the benches near the MTA. It came with 4 nigiri, 7 sashimi and you get to pick the other roll (6 pieces). It also came with a side (choose salad or soup, add $1 for both). It was an incredible deal for only $20. I was so full and happy! The fish tasted very fresh with wonderful flavor. I ordered right as they opened and there were at least 10 people waiting outside when I picked up my food so I imagine there is high turnover, keeping the seafood fresh. This will be a regular splurge lunch spot for sure.
--------------------------------------------------
5 star rating
If you're looking for great sushi on Manhattan's upper west side, head over to Sushi Yakasa ! Best sushi lunch specials, especially for sashimi. I ordered the Miyabi - it included a fresh oyster ! The oyster was delicious, served raw on the half shell. The sashimi was delicious too. The portion size was very good for the area, which tends to be a pricey neighborhood. The restaurant is located on a busy street (west 72nd) & it was packed when I dropped by around lunchtimeStill, they handled my order with ease & had it ready quickly. Streamlined service & highly professional. It's a popular sushi place for a reason. Every piece of sashimi was perfect. The salmon avocado roll was delicious too. Very high quality for the price. Highly recommend! Update - I've ordered from Sushi Yasaka a few times since the pandemic & it's just as good as it was before. Fresh, and they always get my order correct. I like their takeout system - you can order over the phone (no app required) & they text you when it's ready. Home delivery is also available & very reliable. One of my favorite restaurants- I'm so glad they're still in business !
--------------------------------------------------
...
...
Edit: to get only the first 100 reviews:
import csv
import requests
from bs4 import BeautifulSoup

url = "https://www.yelp.com/biz/sushi-yasaka-new-york?start={}"
offset = 0
review_count = 0

with open("output.csv", "a", encoding="utf-8") as f:
    csv_writer = csv.writer(f, delimiter="\t")
    csv_writer.writerow(["rating", "review"])
    while True:
        resp = requests.get(url.format(offset))
        soup = BeautifulSoup(resp.content, "html.parser")
        for rating, review in zip(
            soup.select(
                ".margin-b1__373c0__1khoT .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .border-color--default__373c0__3-ifU .overflow--hidden__373c0__2y4YK"
            ),
            soup.select(".comment__373c0__3EKjH .raw__373c0__3rcx7"),
        ):
            print(f"review # {review_count}. link: {resp.url}")
            csv_writer.writerow([rating.get("aria-label"), review.text])
            review_count += 1
            if review_count > 100:
                raise Exception("Exceeded 100 reviews.")
        offset += 20
Here is my code
import json

data = []
with open("review.json") as f:
    for line in f:
        data.append(json.loads(line))

lst_string = []
lst_num = []
for i in range(len(data)):
    if data[i]["stars"] == 5.0:
        x = data[i]["text"]
        for word in x.split():
            if word in lst_string:
                lst_num[lst_string.index(word)] += 1
            else:
                lst_string.append(word)
                lst_num.append(1)

result = set(zip(lst_string, lst_num))
print(result)
with open("set.txt", "w") as g:
    g.write(str(result))
I'm trying to write out a set of all words in reviews that were given 5 stars, from a pulled-in JSON file formatted like:
{"review_id":"Q1sbwvVQXV2734tPgoKj4Q","user_id":"hG7b0MtEbXx5QzbzE6C_VA","business_id":"ujmEBvifdJM6h6RLv4wQIg","stars":1.0,"useful":6,"funny":1,"cool":0,"text":"Total bill for this horrible service? Over $8Gs. These crooks actually had the nerve to charge us $69 for 3 pills. I checked online the pills can be had for 19 cents EACH! Avoid Hospital ERs at all costs.","date":"2013-05-07 04:34:36"}
{"review_id":"GJXCdrto3ASJOqKeVWPi6Q","user_id":"yXQM5uF2jS6es16SJzNHfg","business_id":"NZnhc2sEQy3RmzKTZnqtwQ","stars":1.0,"useful":0,"funny":0,"cool":0,"text":"I *adore* Travis at the Hard Rock's new Kelly Cardenas Salon! I'm always a fan of a great blowout and no stranger to the chains that offer this service; however, Travis has taken the flawless blowout to a whole new level! \n\nTravis's greets you with his perfectly green swoosh in his otherwise perfectly styled black hair and a Vegas-worthy rockstar outfit. Next comes the most relaxing and incredible shampoo -- where you get a full head message that could cure even the very worst migraine in minutes --- and the scented shampoo room. Travis has freakishly strong fingers (in a good way) and use the perfect amount of pressure. That was superb! Then starts the glorious blowout... where not one, not two, but THREE people were involved in doing the best round-brush action my hair has ever seen. The team of stylists clearly gets along extremely well, as it's evident from the way they talk to and help one another that it's really genuine and not some corporate requirement. It was so much fun to be there! \n\nNext Travis started with the flat iron. The way he flipped his wrist to get volume all around without over-doing it and making me look like a Texas pagent girl was admirable. It's also worth noting that he didn't fry my hair -- something that I've had happen before with less skilled stylists. At the end of the blowout & style my hair was perfectly bouncey and looked terrific. The only thing better? That this awesome blowout lasted for days! \n\nTravis, I will see you every single time I'm out in Vegas. You make me feel beauuuutiful!","date":"2017-01-14 21:30:33"}
{"review_id":"2TzJjDVDEuAW6MR5Vuc1ug","user_id":"n6-Gk65cPZL6Uz8qRm3NYw","business_id":"WTqjgwHlXbSFevF32_DJVw","stars":1.0,"useful":3,"funny":0,"cool":0,"text":"I have to say that this office really has it together, they are so organized and friendly! Dr. J. Phillipp is a great dentist, very friendly and professional. The dental assistants that helped in my procedure were amazing, Jewel and Bailey helped me to feel comfortable! I don't have dental insurance, but they have this insurance through their office you can purchase for $80 something a year and this gave me 25% off all of my dental work, plus they helped me get signed up for care credit which I knew nothing about before this visit! I highly recommend this office for the nice synergy the whole office has!","date":"2016-11-09 20:09:03"}
{"review_id":"yi0R0Ugj_xUx_Nek0-_Qig","user_id":"dacAIZ6fTM6mqwW5uxkskg","business_id":"ikCg8xy5JIg_NGPx-MSIDA","stars":1.0,"useful":0,"funny":0,"cool":0,"text":"Went in for a lunch. Steak sandwich was delicious, and the Caesar salad had an absolutely delicious dressing, with a perfect amount of dressing, and distributed perfectly across each leaf. I know I'm going on about the salad ... But it was perfect.\n\nDrink prices were pretty good.\n\nThe Server, Dawn, was friendly and accommodating. Very happy with her.\n\nIn summation, a great pub experience. Would go again!","date":"2018-01-09 20:56:38"}
{"review_id":"yi0R0Ugj_xUx_Nek0-_Qig","user_id":"dacAIZ6fTM6mqwW5uxkskg","business_id":"ikCg8xy5JIg_NGPx-MSIDA","stars":5.0,"useful":0,"funny":0,"cool":0,"text":"a b aa bb a b","date":"2018-01-09 20:56:38"}
but it is using all the memory on my computer before it can write the output to a text file. How can I do this in a less memory-intensive way?
Only get text where stars == 5:
Data:
Based on the question, the data is a file containing rows of dicts.
Get the text into a list:
Given the data from the Yelp Challenge, getting the 5-star text into a list doesn't take that much memory.
The Windows resource manager showed an increase of about 1.3 GB, but the object size of text_list was only about 25 MB.
import json

text_list = list()
with open("review.json", encoding="utf8") as f:
    for line in f:
        line = json.loads(line)
        if line['stars'] == 5:
            text_list.append(line['text'])

print(text_list)
>>> ['Test text, example 1!', 'Test text, example 2!']
Extra:
Everything after loading the data seems to require a lot of memory that isn't being released.
When cleaning the text, the Windows resource manager went up by 16 GB, though the final size of clean_text was also only about 25 MB.
Interestingly, deleting clean_text does not release the 16GB of memory.
In Jupyter Lab, restarting the Kernel will release the memory
In PyCharm, stopping the process also releases the memory
I tried manually running the garbage collector, but that didn't release the memory
Clean text_list:
import string

def clean_string(value: str) -> list:
    value = value.lower()
    value = value.translate(str.maketrans('', '', string.punctuation))
    return value.split()

clean_text = [clean_string(item) for item in text_list]
print(clean_text)
>>> [['test', 'text', 'example', '1'], ['test', 'text', 'example', '2']]
Count words in clean_text:
from collections import Counter

words = Counter()
for item in clean_text:
    words.update(item)

print(words)
>>> Counter({'test': 2, 'text': 2, 'example': 2, '1': 1, '2': 1})
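The three steps above can also be fused into a single pass that never builds the intermediate lists at all, which should keep memory flat regardless of file size. A sketch, reusing the "review.json" layout from the question (the file is replaced by an in-memory StringIO here so the logic runs standalone):

```python
import io
import json
import string
from collections import Counter

def count_five_star_words(lines):
    """Read newline-delimited JSON records and count words in 5-star texts,
    one line at a time, with no intermediate list of texts."""
    words = Counter()
    table = str.maketrans('', '', string.punctuation)
    for line in lines:
        record = json.loads(line)
        if record["stars"] == 5.0:
            words.update(record["text"].lower().translate(table).split())
    return words

# stand-in for open("review.json"); same record shape as the question
sample = io.StringIO('{"stars": 5.0, "text": "a b aa bb a b"}\n')
counts = count_five_star_words(sample)
```

The real call would be `count_five_star_words(open("review.json", encoding="utf8"))`.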
Having already built a dynamic search for offers by company (it generates a link that is used to fetch the job reviews left by previous employees), I am now stuck on the part that should iterate through the job offers, job reviews and descriptions I have collected into lists, and print the ones that correspond to each other.
It all seems easy to do until you notice that the job offers list has a different size from the job reviews list, so I am at a standstill regarding the following situation.
I am trying the following code, which obviously gives me an error, since cargo_revisto_list is longer than nome_emprego_list; this tends to happen whenever there are more reviews than job offers, as well as the opposite.
The lists would be, for example, the following:
cargo_revisto_list = ["Business Leader","Sales Manager"]
nome_emprego_list = ["Business Leader","Sales Manager","Front-end Developer"]
opiniao_list = ["Excellent Job","Wonderful managing"]
It would be a matter of luck for them to be exactly the same size.
url = "https://www.indeed.pt/cmp/Novabase/reviews?fcountry=PT&floc=Lisboa"
comprimento_cargo_revisto = len(cargo_revisto_list)  # 19
comprimento_nome_emprego = len(nome_emprego_list)  # 10
descricoes_para_cargos_existentes = []

if comprimento_cargo_revisto > comprimento_nome_emprego:
    for i in range(len(cargo_revisto_list)):
        s = cargo_revisto_list[i]
        for z in range(len(nome_emprego_list)):
            a = nome_emprego_list[z]
            if s == a:  # Stopping here - needs a new way of comparing strings
                c = opiniao_list[i]
                descricoes_para_cargos_existentes.append(c)
elif comprimento_nome_emprego > comprimento_cargo_revisto:
    for i in range(comprimento_nome_emprego):
        s = nome_emprego_list[i]
        for z in range(len(cargo_revisto_list)):
            a = cargo_revisto_list[z]
            if s == a and a is not None:
                c = opiniao_list[z]
                descricoes_para_cargos_existentes.append(c)
else:
    for i in range(len(cargo_revisto_list)):
        s = cargo_revisto_list[i]
        for z in range(len(nome_emprego_list)):
            a = nome_emprego_list[z]
            if s == a:
                c = opiniao_list[i]
                descricoes_para_cargos_existentes.append(c)
After solving this issue, I would need the exact review description that corresponds to each job offer. To do that, I would take the index of the match in cargo_revisto_list and use it to print the entry of opiniao_list (the job description) for the job reviewed, since both were added to their lists in the same order by Beautiful Soup at scraping time.
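One hedged sketch of a way around the length mismatch: instead of branching on which list is longer, build a dict from reviewed job title to its description, then look each offer up in it. The variable names reuse the question's; the index pairing of cargo_revisto_list and opiniao_list is assumed, as the question describes:

```python
# the question's example lists
cargo_revisto_list = ["Business Leader", "Sales Manager"]
nome_emprego_list = ["Business Leader", "Sales Manager", "Front-end Developer"]
opiniao_list = ["Excellent Job", "Wonderful managing"]

# reviewed title -> its description (both lists share the same order)
review_por_cargo = dict(zip(cargo_revisto_list, opiniao_list))

# keep only the descriptions whose title also appears among the offers;
# list lengths no longer matter
descricoes_para_cargos_existentes = [
    review_por_cargo[cargo]
    for cargo in nome_emprego_list
    if cargo in review_por_cargo
]
```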
I have two lists, one with ids and one with corresponding comments for each id.
list_responseid = ['id1', 'id2', 'id3', 'id4']
list_paragraph = [['I like working and helping them reach their goals.'],
['The communication is broken.',
'Information that should have come to me is found out later.'],
['Try to promote from within.'],
['I would relax the required hours to be available outside.',
'We work a late night each week.']]
The ResponseID 'id1' is related to the paragraph ('I like working and helping them reach their goals.') and so on.
I can flatten the paragraphs into a single list of sentences using the following:
import itertools

list_sentence = list(itertools.chain(*list_paragraph))
What would be the syntax to get the end result, that is, a data frame (or list) with a separate entry for each sentence and the ID associated with that sentence (which is currently linked to the whole paragraph)? The final result would look like this (I will convert the list to a pandas data frame at the end).
id1 'I like working with students and helping them reach their goals.'
id2 'The communication from top to bottom is broken.'
id2 'Information that should have come to me is found out later and in some cases students know more about what is going on than we do!'
id3 'Try to promote from within.'
id4 'I would relax the required 10 hours to be available outside of 8 to 5 back to 9 to 5 like it used to be.'
id4 'We work a late night each week and rarely do students take advantage of those extended hours.'
Thanks.
If you do this often, it would be clearer, and probably more efficient depending on the size of the arrays, to make a dedicated function for it with two regular nested loops. But if you need a quick one-liner (it does just that):
id_sentence_tuples = [(list_responseid[id_list_idx], sentence) for id_list_idx in range(len(list_responseid)) for sentence in list_paragraph[id_list_idx]]
id_sentence_tuples will then be a list of tuples where each element is a pair like (paragraph_id, sentence), just as in the result you expect.
Also, I would advise you to check that both lists have the same length before doing it, so that you get a meaningful error in case they don't:
if len(list_responseid) != len(list_paragraph):
    raise IndexError('Lists must have same cardinality')
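The dedicated-function version mentioned above, sketched with two plain nested loops and the length check built in (the function name is illustrative):

```python
def pair_ids_with_sentences(ids, paragraphs):
    """Pair each response id with every sentence of its paragraph.

    ids and paragraphs must be parallel lists: paragraphs[i] is the list
    of sentences belonging to ids[i].
    """
    if len(ids) != len(paragraphs):
        raise IndexError('Lists must have same cardinality')
    pairs = []
    for response_id, paragraph in zip(ids, paragraphs):
        for sentence in paragraph:
            pairs.append((response_id, sentence))
    return pairs

pairs = pair_ids_with_sentences(["id1", "id2"], [["s1"], ["s2a", "s2b"]])
```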
I had a dataframe with an ID and a review (col = ['ID', 'Review']). If you can combine these lists into a dataframe, then you can use my approach. I split the reviews into sentences using nltk and then linked the IDs back within the loop. The following is the code that you can use.
## Breaking feedback into sentences
import nltk
import pandas as pd

count = 0
df_sentences = pd.DataFrame()
for index, row in df.iterrows():
    feedback = row['Reviews']
    sent_text = nltk.sent_tokenize(feedback)  # this gives us a list of sentences
    for j in range(0, len(sent_text)):
        # print(index, "-", sent_text[j])
        df_sentences = df_sentences.append({'ID': row['ID'], 'Count': int(count), 'Sentence': sent_text[j]}, ignore_index=True)
        count = count + 1
print(df_sentences)
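As a side note, calling DataFrame.append inside a loop copies the whole frame on every iteration, which gets slow as df_sentences grows. A sketch of the same linking step that collects plain dicts and builds the frame once at the end; a simple regex stands in for nltk.sent_tokenize so the example is self-contained, and the function name is illustrative:

```python
import re

def explode_reviews(rows):
    """rows: iterable of (ID, review-text) pairs; returns one dict per
    sentence, carrying the ID and a running sentence counter."""
    records = []
    count = 0
    for review_id, feedback in rows:
        # naive sentence split on ., ! or ? followed by whitespace;
        # nltk.sent_tokenize would be used on real data
        for sentence in re.split(r"(?<=[.!?])\s+", feedback.strip()):
            records.append({"ID": review_id, "Count": count, "Sentence": sentence})
            count += 1
    return records

records = explode_reviews([("id1", "First one. Second one!")])
```

`pd.DataFrame(records)` then produces the same columns in a single step.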