So I was trying to find something to code, and I decided to use Python to get Fortnite stats. I came across the fortnite_python library and it works, but it displays item codes for items in the shop when I want it to display the names. Does anyone know how to convert them, or just display the name in the first place? This is my code:
fortnite = Fortnite('c954ed23-756d-4843-8f99-cfe850d2ed0c')
store = fortnite.store()
fortnite.store()
It outputs something like this:
[<StoreItem 12511>, ...]
To print out the attributes of a Python object you can use __dict__, e.g.:
from fortnite_python import Fortnite
from json import dumps
fortnite = Fortnite('Your API Key')
# ninjas_account_id = fortnite.player('ninja')
# print(f'ninjas_account: {ninjas_account_id}') # ninjas_account: 4735ce91-3292-4caf-8a5b-17789b40f79c
store = fortnite.store()
example_store_item = store[0]
print(dumps(example_store_item.__dict__, indent=2))
Output:
{
  "_data": {
    "imageUrl": "https://trackercdn.com/legacycdn/fortnite/237112511_large.png",
    "manifestId": 12511,
    "name": "Dragacorn",
    "rarity": "marvel",
    "storeCategory": "BRSpecialFeatured",
    "vBucks": 0
  },
  "id": 12511,
  "image_url": "https://trackercdn.com/legacycdn/fortnite/237112511_large.png",
  "name": "Dragacorn",
  "rarity": "marvel",
  "store_category": "BRSpecialFeatured",
  "v_bucks": 0
}
So it looks like you want the name attribute of StoreItem:
for store_item in store:
    print(store_item.name)
Output:
Dragacorn
Hulk Smashers
Domino
Unstoppable Force
Scootin'
Captain America
Cable
Probability Dagger
Chimichanga!
Daywalker's Kata
Psi-blade
Snap
Psylocke
Psi-Rider
The Devil's Wings
Daredevil
Meaty Mallets
Silver Surfer
Dayflier
Silver Surfer's Surfboard
Ravenpool
Silver Surfer Pickaxe
Grand Salute
Cuddlepool
Blade
Daredevil's Billy Clubs
Mecha Team
Tricera Ops
Combo Cleaver
Mecha Team Leader
Dino
Triassic
Rex
Cap Kick
Skully
Gold Digger
Windmill Floss
Bold Stance
Jungle Scout
It seems that the library doesn't contain a function to get the names. Also, this is what the class of an item from the store looks like:
class StoreItem(Domain):
    """Object containing store items attributes"""
and that's it.
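Since StoreItem just exposes those attributes, building a lookup table from item id to display name is straightforward. The following is only a sketch: because fortnite_python may not be installed here, a hypothetical stand-in class with the same attributes seen in the __dict__ dump above is used in place of the real StoreItem.

```python
# Stand-in for fortnite_python's StoreItem, mirroring the attributes
# seen in the __dict__ dump above (id, name, v_bucks).
class StoreItem:
    def __init__(self, item_id, name, v_bucks):
        self.id = item_id      # e.g. 12511
        self.name = name       # e.g. "Dragacorn"
        self.v_bucks = v_bucks

# With the real library this would be: store = fortnite.store()
store = [StoreItem(12511, "Dragacorn", 0), StoreItem(12512, "Hulk Smashers", 1500)]

# Map item ids to display names, the same way store_item.name is used above.
id_to_name = {item.id: item.name for item in store}
print(id_to_name[12511])  # Dragacorn
```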
I have a string in R that I would like to pass to Python in order to compute something and return the result back into R.
I have the following, which "works", but not as I would like.
The code below passes a string from R to a Python function, uses OpenAI to generate the text data, and then loads it back into R.
library(reticulate)
computePythonFunction <- "
def print_openai_response():
    import openai
    openai.api_key = 'ej-powurjf___OpenAI_API_KEY___HGAJjswe' # you will need an API key
    prompt = 'write me a poem about the sea'
    response = openai.Completion.create(engine='text-davinci-003', prompt=prompt, max_tokens=1000)
    # response['choices'][0]['text']
    print(response)
"
py_run_string(computePythonFunction)
py$print_openai_response()
library("rjson")
fromJSON(as.character(py$print_openai_response()))
I would like to store the results in R objects. For example, here is one output from the Python script:
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nThe sea glitters like stars in the night \nWith hues, vibrant and bright\nThe waves flow gentle, serene, and divine \nLike the sun's most gentle shine\n\nAs the sea reaches, so wide, so vast \nAn adventure awaits, and a pleasure, not passed\nWhite sands, with seaweed green \nForms a kingdom of the sea\n\nConnecting different land and tide \nThe sea churns, dancing with the sun's pride\nAs a tempest forms, raging and wild \nThe sea turns, its colors so mild\n\nA raging storm, so wild and deep \nProtecting the creatures that no one can see \nThe sea is a living breathing soul \nA true and untouchable goal \n\nThe sea is a beauty that no one can describe \nAnd it's power, no one can deny \nAn ever-lasting bond, timeless and free \nThe love of the sea, is a love, to keep"
    }
  ],
  "created": 1670525403,
  "id": "cmpl-6LGG3hDNzeTZ5VFbkyjwfkHH7rDkE",
  "model": "text-davinci-003",
  "object": "text_completion",
  "usage": {
    "completion_tokens": 210,
    "prompt_tokens": 7,
    "total_tokens": 217
  }
}
I am interested in the text generated, but I am also interested in the completion_tokens, prompt_tokens and total_tokens.
I thought about saving the Python code as a script, then passing an argument to it, such as:
myPython.py arg1
How can I return the JSON output from the model to an R object? The only input which changes/varies in the Python code is the prompt variable.
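One way to get the full response back into R is to return it from the Python function instead of printing it: reticulate converts a returned Python dict into an R named list, so in R you could then do res <- py$openai_response('...') and read res$usage$total_tokens directly. The sketch below stubs out the actual openai.Completion.create() call (which needs a real API key and network access) with a hypothetical hard-coded response of the same shape:

```python
# Sketch: return the response as a plain dict instead of printing it.
# reticulate maps a returned dict to an R named list automatically.
def openai_response(prompt):
    # Hypothetical stub standing in for openai.Completion.create(...);
    # the dict mirrors the shape of the JSON output shown above.
    response = {
        "choices": [{"text": "\n\nThe sea glitters...", "finish_reason": "stop"}],
        "usage": {"completion_tokens": 210, "prompt_tokens": 7, "total_tokens": 217},
    }
    return response

res = openai_response("write me a poem about the sea")
print(res["usage"]["total_tokens"])  # 217
```

In R the only change to the reticulate code would be calling py$openai_response(prompt) and assigning the result, rather than calling a function that prints.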
I have written code like this:
# pip3 install google
from googlesearch import search

query = 'java'

for i in search(query,          # The query you want to run
                tld='com',      # The top level domain
                lang='en',      # The language
                num=10,         # Number of results per page
                start=0,        # First result to retrieve
                stop=None,      # Last result to retrieve
                pause=0,        # Lapse between HTTP requests
                safe='high'):
    print(i)
In the above, I am simply getting the URL link. How can I get Google-like excerpts for each URL, like the attached screenshot?
I don't think it is possible to do it with that package, but if you use the requests module in Python together with a web-scraping library (BeautifulSoup), you can get those descriptions from the HTML tag (<meta name="description" content="google description">).
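To illustrate the meta-tag approach without any third-party dependencies, here is a minimal sketch using only the standard library's html.parser; a static HTML string stands in for a page fetched with requests:

```python
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collects the content of <meta name="description" content="...">."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content")

# A static page stands in for requests.get(url).text
html = '<html><head><meta name="description" content="google description"></head></html>'
parser = MetaDescriptionParser()
parser.feed(html)
print(parser.description)  # google description
```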
You can scrape it using the requests and bs4 libraries plus a user-agent header.
Make sure you're setting a user-agent, because if you don't, your script will send a default user-agent (it could identify as a tablet or phone), which will be served different CSS classes, and because of that you will get an empty output.
Here's the code, plus a replit.com demo (java search result on repl.it):
from bs4 import BeautifulSoup
import requests
import lxml
import json

headers = {
    'User-agent':
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.19582"
}

html = requests.get('https://www.google.com/search?hl=en-US&q=best lasagna recipe ever', headers=headers).text
soup = BeautifulSoup(html, 'lxml')

summary = []

for container in soup.findAll('div', class_='tF2Cxc'):
    heading = container.find('h3', class_='LC20lb DKV0Md').text
    article_summary = container.find('span', class_='aCOpRe').text
    link = container.find('a')['href']

    summary.append({
        'Heading': heading,
        'Article Summary': article_summary,
        'Link': link,
    })

print(json.dumps(summary, indent=2, ensure_ascii=False))
JSON output:
[
  {
    "Heading": "World's Best Lasagna Recipe - Allrecipes.com",
    "Article Summary": "Ingredients. 1 pound sweet Italian sausage. ¾ pound lean ground beef. ½ cup minced onion. 2 cloves garlic, crushed. 1 (28 ounce) can crushed tomatoes. 2 (6 ounce) cans tomato paste. 2 (6.5 ounce) cans canned tomato sauce. ½ cup water.",
    "Link": "https://www.allrecipes.com/recipe/23600/worlds-best-lasagna/"
  },
  {
    "Heading": "The BEST Lasagna Recipe Ever! | The Recipe Critic",
    "Article Summary": "Dec 22, 2019 — The BEST Classic Lasagna Ever has layers of sautéed ground beef and Italian sausage that are cooked together, sweet tomato sauce, Italian ...",
    "Link": "https://therecipecritic.com/lasagna-recipe/"
  },
  {
    "Heading": "The Most Amazing Lasagna Recipe - The Stay At Home Chef",
    "Article Summary": "The Most Amazing Lasagna Recipe is the best recipe for homemade Italian-style lasagna. The balance ... This recipe is so good—it makes the kind of lasagna people write home about! ... Hands down absolutely the best lasagna recipe ever!",
    "Link": "https://thestayathomechef.com/amazing-lasagna-recipe/"
  },
  {
    "Heading": "Best Lasagna - Cafe Delites",
    "Article Summary": "My mama's famous lasagna recipe is hands down the best lasagna I have ever had in my life. She learnt her ways from her Italian friends when she lived in New ...",
    "Link": "https://cafedelites.com/best-lasagna/"
  },
  {
    "Heading": "The Best Lasagna Recipe EVER | Fail Proof Recipe | Lauren's ...",
    "Article Summary": "Start with a bit of meat sauce in the bottom of a large casserole dish or a plain 9×13 and line the bottom with pasta. Top with the cheese mixture and meat sauce.",
    "Link": "https://laurenslatest.com/best-lasagna-recipe/"
  },
  {
    "Heading": "Best Lasagna Recipe: How to Make It | Taste of Home",
    "Article Summary": "Want to know how to make lasagna for a casual holiday meal? You can't go wrong with this deliciously rich meat lasagna recipe. ... I made this lasagna for my fiance, he said this lasagna was the best he ever tasted, I will never buy frozen ...",
    "Link": "https://www.tasteofhome.com/recipes/best-lasagna/"
  },
  {
    "Heading": "Easy Homemade Lasagna {Classic Dinner!} - Spend With ...",
    "Article Summary": "May 19, 2020 — noodles – sauce (bake) – cheese. Spread about a cup of meat sauce into a 9×13 pan. Add a layer of noodles. Top the noodles with some of ...",
    "Link": "https://www.spendwithpennies.com/easy-homemade-lasagna/"
  },
  {
    "Heading": "The Best Lasagna Recipe {Simple & Classic} - Simply Recipes",
    "Article Summary": "Feb 19, 2019 — Ingredients · 1 pound lean ground beef (chuck); 1/2 onion, diced (about 3/4 cup); 1/2 large bell pepper (green, red, or yellow), diced (about 3/4 cup) ...",
    "Link": "https://www.simplyrecipes.com/recipes/lasagna/"
  },
  {
    "Heading": "Best Lasagna Recipe - How to Make Lasagna From Scratch",
    "Article Summary": "Dec 15, 2020 — The Best Lasagna. Ever. · Bring a large pot of water to a boil. · Meanwhile, in a large skillet or saucepan, combine ground beef, sausage, and garlic ...",
    "Link": "https://www.thepioneerwoman.com/food-cooking/recipes/a11728/best-lasagna-recipe/"
  }
]
Alternatively, you can use Google Search Engine Results API from SerpApi.
Part of the JSON response:
"organic_results": [
  {
    "position": 1,
    "title": "World's Best Lasagna Recipe - Allrecipes.com",
    "link": "https://www.allrecipes.com/recipe/23600/worlds-best-lasagna/",
    "displayed_link": "https://www.allrecipes.com › ... › European › Italian"
  }
]
Code to integrate:
import os
from serpapi import GoogleSearch

params = {
    "engine": "google",
    "q": "best lasagna recipe ever",
    "api_key": os.getenv("API_KEY"),
}

search = GoogleSearch(params)
results = search.get_dict()

for result in results["organic_results"]:
    print(f"Title: {result['title']}\nLink: {result['link']}")
Output:
Title: World's Best Lasagna Recipe - Allrecipes.com
Link: https://www.allrecipes.com/recipe/23600/worlds-best-lasagna/
Title: Best Lasagna - Cafe Delites
Link: https://cafedelites.com/best-lasagna/
Title: The Most Amazing Lasagna Recipe - The Stay At Home Chef
Link: https://thestayathomechef.com/amazing-lasagna-recipe/
Title: The BEST Lasagna Recipe Ever! | The Recipe Critic
Link: https://therecipecritic.com/lasagna-recipe/
Title: The Best Lasagna Recipe EVER | Fail Proof Recipe | Lauren's ...
Link: https://laurenslatest.com/best-lasagna-recipe/
Title: Best Lasagna Recipe - How to Make Lasagna From Scratch
Link: https://www.thepioneerwoman.com/food-cooking/recipes/a11728/best-lasagna-recipe/
Title: Best Lasagna Recipe: How to Make It | Taste of Home
Link: https://www.tasteofhome.com/recipes/best-lasagna/
Title: Easy Homemade Lasagna {Classic Dinner!} - Spend With ...
Link: https://www.spendwithpennies.com/easy-homemade-lasagna/
Title: The Best Lasagna Recipe {Simple & Classic} - Simply Recipes
Link: https://www.simplyrecipes.com/recipes/lasagna/
Disclaimer: I work for SerpApi.
I am writing a script that scrapes information from a large travel agency. My code closely follows the tutorial at https://python.gotrained.com/selenium-scraping-booking-com/.
However, I would like to be able to navigate to the next page as I'm now limited to n_results = 25. Where do I add this in the code? I know that I can target the pagination button with driver.find_element_by_class_name('paging-next').click(), but I don't know where to incorporate it.
I have tried to put it in the for loop within the scrape_results function, which I have copied below. However, it doesn't seem to work.
def scrape_results(driver, n_results):
    '''Returns the data from n_results amount of results.'''

    accommodations_urls = list()
    accommodations_data = list()

    for accomodation_title in driver.find_elements_by_class_name('sr-hotel__title'):
        accommodations_urls.append(accomodation_title.find_element_by_class_name(
            'hotel_name_link').get_attribute('href'))

    for url in range(0, n_results):
        if url == n_results:
            break
        url_data = scrape_accommodation_data(driver, accommodations_urls[url])
        accommodations_data.append(url_data)

    return accommodations_data
EDIT
I have added some more code to clarify my input and output. Again, I mostly just used code from the GoTrained tutorial and added some code of my own. How I understand it: the scraper first collects all URLs and then scrapes the info of the individual pages one by one. I need to add the pagination loop in that first part – I think.
if __name__ == '__main__':
    try:
        driver = prepare_driver(domain)
        fill_form(driver, 'Waterberg, South Africa')  # my search argument
        accommodations_data = scrape_results(driver, 25)  # 25 is the maximum of results; higher makes the scraper crash due to the pagination problem
        accommodations_data = json.dumps(accommodations_data, indent=4)
        with open('booking_data.json', 'w') as f:
            f.write(accommodations_data)
    finally:
        driver.quit()
Below is the JSON output for one search result.
[
    {
        "name": "Lodge Entabeni Safari Conservancy",
        "score": "8.4",
        "no_reviews": "41",
        "location": "Vosdal Plaas, R520 Marken Road, 0510 Golders Green, South Africa",
        "room_types": [
            "Tented Chalet - Wildside Safari Camp with 1 game drive",
            "Double or Twin Room - Hanglip Mountain Lodge with 1 game drive",
            "Tented Family Room - Wildside Safari Camp with 1 game drive"
        ],
        "room_prices": [
            "\u20ac 480",
            "\u20ac 214",
            "\u20ac 650",
            "\u20ac 290",
            "\u20ac 693"
        ],
        "popular_facilities": [
            "1 swimming pool",
            "Bar",
            "Very Good Breakfast"
        ]
    },
    ...
]
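One way to add the pagination the question asks about is to keep collecting URLs from the current results page and then click the 'paging-next' button until it no longer exists, before scraping the individual accommodation pages. The sketch below shows only that loop's shape: a hypothetical stub object stands in for the Selenium driver, since a real run needs a live browser, driver.find_elements_by_class_name('sr-hotel__title') for the URLs, and driver.find_element_by_class_name('paging-next').click() wrapped in a NoSuchElementException check for the next button.

```python
class StubDriver:
    """Hypothetical stand-in for a Selenium driver serving several result pages."""
    def __init__(self, pages):
        self.pages = pages   # list of pages, each a list of accommodation URLs
        self.current = 0

    def urls_on_page(self):
        # Real version: extract hrefs from the 'sr-hotel__title' elements.
        return self.pages[self.current]

    def click_next(self):
        # Real version: find_element_by_class_name('paging-next').click(),
        # catching NoSuchElementException on the last page.
        if self.current + 1 < len(self.pages):
            self.current += 1
            return True
        return False

def collect_all_urls(driver):
    """Collect URLs page by page until there is no next page."""
    urls = []
    while True:
        urls.extend(driver.urls_on_page())
        if not driver.click_next():
            return urls

driver = StubDriver([["u1", "u2"], ["u3"], ["u4", "u5"]])
print(collect_all_urls(driver))  # ['u1', 'u2', 'u3', 'u4', 'u5']
```

With the real driver, scrape_results would call collect_all_urls first and then loop over the full URL list, removing the need for the n_results = 25 cap.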
I'm using scrapy to scrape reviews from seek.com.au. I have found this link https://company-profiles-api.cloud.seek.com.au/v1/companies/432306/reviews?page=1 which has data I need encoded in JSON.
The data looks like this:
{
  "paging": {
    "page": 1,
    "perPage": 20,
    "total": 825
  },
  "data": [
    {
      "timeAgoText": null,
      "id": 5330561,
      "companyName": "Commonwealth Bank of Australia",
      "companyRecommended": false,
      "salarySummary": "fair",
      "salarySummaryDisplayText": "Average",
      "jobTitle": "Financial Planning Role",
      "title": "Run away, don't walk!",
      "pros": "Staff benefits, the programs are very good however IT support is atrocious. There is a procedure for absolutely everything so you aren't left wondering how to do things in the branch network.",
      "cons": "Sell, sell, sell! Everything at CBA is about selling. Don't believe the reports that things have changed and performance is based on customer service. They may have on paper but sales numbers are still tracked.",
      "yearLeft": "left_2019",
      "yearLeftEmploymentStatusText": "former employee",
      "yearsWorkedWith": "1_2_years",
      "yearsWorkedWithText": "1 to 2 years",
      "workLocation": "New South Wales, Australia",
      "ratingCompanyOverall": 2,
      "ratingBenefitsAndPerks": 3,
      "ratingCareerOpportunity": 3,
      "ratingExecutiveManagement": 1,
      "ratingWorkEnvironment": 2,
      "ratingWorkLifeBalance": 1,
      "ratingStressLevel": null,
      "ratingDiversity": 3,
      "reviewCreatedAt": "2019-09-11T11:41:10Z",
      "reviewCreatedTimeAgoText": "1 month ago",
      "reviewResponse": "Thank you for your feedback. At CommBank, we are continually working to ensure our performance metrics are realistic and achievable, so we appreciate your insights, which we will pass on to the Human Resources & Remuneration team. If you have any other feedback that you would like to share, we also encourage you to speak to HR Direct on 1800 989 696.",
      "reviewResponseBy": "Employer Brand",
      "reviewResponseForeignUserId": 1,
      "reviewResponseCreatedAt": "2019-10-17T05:13:52Z",
      "reviewResponseCreatedTimeAgoText": "a few days ago",
      "crowdflowerScore": 3.0,
      "isAnonymized": false,
      "normalizedCfScore": 2000.0,
      "score": 3.0483236,
      "roleProximityScore": 0.002
    },
    {
      "timeAgoText": null,
      "id": 5327368,
      "companyName": "Commonwealth Bank of Australia",
      "companyRecommended": true,
      "salarySummary": "below",
      "salarySummaryDisplayText": "Low",
      "jobTitle": "Customer Service Role",
      "title": "Great to start your career in banking; not so great to stay for more than a few years",
      "pros": "- Great work culture\n- Amazing colleagues\n- good career progress",
      "cons": "- hard to get leave approved\n- no full-time opportunities\n- no staff benefits of real value",
      "yearLeft": "still_work_here",
      "yearLeftEmploymentStatusText": "current employee",
      "yearsWorkedWith": "0_1_year",
      "yearsWorkedWithText": "Less than 1 year",
      "workLocation": "Melbourne VIC, Australia",
      "ratingCompanyOverall": 3,
      "ratingBenefitsAndPerks": 1,
      "ratingCareerOpportunity": 3,
      "ratingExecutiveManagement": 2,
      "ratingWorkEnvironment": 5,
      "ratingWorkLifeBalance": 3,
      "ratingStressLevel": null,
      "ratingDiversity": 5,
      "reviewCreatedAt": "2019-09-11T07:05:26Z",
      "reviewCreatedTimeAgoText": "1 month ago",
      "reviewResponse": "",
      "reviewResponseBy": "",
      "reviewResponseForeignUserId": null,
      "reviewResponseCreatedAt": null,
      "reviewResponseCreatedTimeAgoText": "",
      "crowdflowerScore": 3.0,
      "isAnonymized": false,
      "normalizedCfScore": 2000.0,
      "score": 3.0483236,
      "roleProximityScore": 0.002
    },
I have created a dictionary and then tried returning the data, but only one value gets returned:
name = 'seek-spider'
allowed_domains = ['seek.com.au']
start_urls = [
    'https://www.seek.com.au/companies/commonwealth-bank-of-australia-432306/reviews']

s = str(start_urls)
res = re.findall(r'\d+', s)
res = str(res)
string = (res[res.find("[")+1:res.find("]")])
string_replaced = string.replace("'", "")
start_urls = [
    'https://company-profiles-api.cloud.seek.com.au/v1/companies/'+string_replaced+'/reviews?page=1']

def parse(self, response):
    result = json.loads(response.body)
    detail = {}
    for i in result['data']:
        detail['ID'] = i['id']
        detail['Title'] = i['title']
        detail['Pros'] = i['pros']
        detail['Cons'] = i['cons']
    return detail
I expect the output to have all the data, but only this is returned:
{'ID': 135413, 'Title': 'Great place to work!', 'Pros': 'All of the above.', 'Cons': 'None that I can think of'}
The dictionary I was creating was overwriting my previous data on every iteration. I created a list before looping, appended a fresh dictionary each time, and the problem was solved:
def parse(self, response):
    result = json.loads(response.body)
    res = []
    for i in result['data']:
        detail = {}
        detail['id'] = i['id']
        res.append(detail)
    return res
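The bug can be reduced to a few lines of plain Python: reusing one dict means the list ends up holding several references to the same object, so by the end every entry shows the last row.

```python
data = [{"id": 1}, {"id": 2}]

# Buggy version: one dict reused, so the list holds the same object twice.
detail = {}
buggy = []
for i in data:
    detail["id"] = i["id"]
    buggy.append(detail)
print(buggy)  # [{'id': 2}, {'id': 2}] -- both entries are the last row

# Fixed version: a fresh dict per iteration.
fixed = []
for i in data:
    fixed.append({"id": i["id"]})
print(fixed)  # [{'id': 1}, {'id': 2}]
```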
Just a heads-up: I'm completely new to the coding scene and I'm having some issues using a JSON file.
I've got the JSON to open using
json_queue = json.load(open('customer.json'))
but I just can't find the right code that allows me to make use of the info in the JSON. I think it's because the JSON is an array, not an object (probably completely wrong). My JSON currently looks like this:
[
    ["James", "VW"],
    ["Katherine", "BMW"],
    ["Deborah", "renault"],
    ["Marguerite", "ford"],
    ["Kenneth", "VW"],
    ["Ronald", "Mercedes"],
    ["Donald", "BMW"],
    ["Al", "vauxhall"],
    ["Max", "porsche"],
    ["Carlos", "BMW"],
    ["Barry", "ford"],
    ["Donald", "renault"]
]
What I'm trying to do is take the person's name and the car type they are looking for and compare it to another JSON file that has the stock of cars in a shop, but I'm currently stuck on how to get Python to actually use the information in that JSON.
I think I might have overexplained my problem. My issue is that I am just starting a project using .json files, and I can get Python to open the file, but then I am unsure how to get Python to read that "James" wants a "VW" and then go check the stock JSON to see if it is in stock. The stock JSON looks like this:
{
    "VW": 4,
    "BMW": 2,
    "renault": 0,
    "ford": 1,
    "mercedes": 2,
    "vauxhall": 1,
    "porsche": 0
}
(note: the trailing comma after the last entry had to be removed, as it is invalid JSON and would make json.load fail)
What you have after the json.load() call is a plain Python list of lists:
whishlist = [
    ["James", "VW"],
    ["Katherine", "BMW"],
    ["Deborah", "renault"],
    ["Marguerite", "ford"],
    ["Kenneth", "VW"],
    ["Ronald", "Mercedes"],
    ["Donald", "BMW"],
    ["Al", "vauxhall"],
    ["Max", "porsche"],
    ["Carlos", "BMW"],
    ["Barry", "ford"],
    ["Donald", "renault"]
]
where each sublist is a (name, car) pair. You can iterate over this list:
for name, car in whishlist:
    print("name : {} - car : {}".format(name, car))
Now with your "other json file", what you have is a dict:
stock = {
    "VW": 4,
    "BMW": 2,
    "renault": 0,
    "ford": 1,
    "mercedes": 2,
    "vauxhall": 1,
    "porsche": 0,
}
so all you have to do is iterate over the whishlist list, check whether the car is in stock, and print (or do anything else with) the result:
for name, car in whishlist:
    in_stock = stock.get(car, 0)
    print("for {} : car {} in stock : {}".format(name, car, in_stock))
Output:
for James : car VW in stock : 4
for Katherine : car BMW in stock : 2
for Deborah : car renault in stock : 0
for Marguerite : car ford in stock : 1
for Kenneth : car VW in stock : 4
for Ronald : car Mercedes in stock : 0
for Donald : car BMW in stock : 2
for Al : car vauxhall in stock : 1
for Max : car porsche in stock : 0
for Carlos : car BMW in stock : 2
for Barry : car ford in stock : 1
for Donald : car renault in stock : 0
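One detail worth noting in the output above: dict lookups are case-sensitive, so Ronald's "Mercedes" does not match the "mercedes" key and reports 0 even though two are in stock. If that is unwanted, a sketch of a case-insensitive lookup:

```python
stock = {"VW": 4, "BMW": 2, "renault": 0, "mercedes": 2}

# Lower-case the keys once, then lower-case each query before looking it up.
stock_ci = {car.lower(): count for car, count in stock.items()}

def in_stock(car):
    return stock_ci.get(car.lower(), 0)

print(in_stock("Mercedes"))  # 2 (matches the "mercedes" key)
print(in_stock("porsche"))   # 0 (not in this stock dict at all)
```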