How to grab specific items from an entire JSON API response in Python

I want to grab only the Symbol and Company Name items from the JSON data, but I am getting all of the data. How can I get the above-mentioned fields and store them in a pandas DataFrame?
My code:
import requests
import pandas as pd

params = {
    'sectorID': 'All',
    '_': '1630217365368'}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'}

def main(url):
    with requests.Session() as req:
        req.headers.update(headers)
        sym = []
        name = []
        r = req.get(url, params=params, headers=headers)
        for item in r.json()['data']:
            print(item)
            # sym.append(item['symbol'])
            # name.append(item['lonaName'])
        # df = pd.DataFrame(sym, name, columns=[["Symble","Company name"]])
        # print(df)

main('https://www.saudiexchange.sa/wps/portal/tadawul/market-participants/issuers/issuers-directory/!ut/p/z1/04_Sj9CPykssy0xPLMnMz0vMAfIjo8zi_Tx8nD0MLIy8DTyMXAwczVy9vV2cTY0MnEz1w8EKjIycLQwtTQx8DHzMDYEK3A08A31NjA0CjfWjSNLv7ulnbuAY6OgR5hYWYgzUQpl-AxPi9BvgAI4GhPVHgZXgCwFUBVi8iFcByA9gBXgcWZAbGhoaYZDpma6oCABqndOv/p0/IZ7_NHLCH082KOAG20A6BDUU6K3082=CZ6_NHLCH082K0H2D0A6EKKDC520B5=N/')

You need to fix the way you are creating the DataFrame. pd.DataFrame(sym, name, columns=[["Symble","Company name"]]) passes sym as the data and name as the index, which is not what you want; build the frame from a dict that maps column names to lists instead:
import requests
import pandas as pd

params = {
    'sectorID': 'All',
    '_': '1630217365368'}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'}

def main(url):
    with requests.Session() as req:
        req.headers.update(headers)
        sym = []
        name = []
        r = req.get(url, params=params, headers=headers)
        for item in r.json()['data']:
            # print(item)
            sym.append(item['symbol'])
            name.append(item['lonaName'])
        df = pd.DataFrame({'symbol': sym, 'longName': name})
        print(df)

main('https://www.saudiexchange.sa/wps/portal/tadawul/market-participants/issuers/issuers-directory/!ut/p/z1/04_Sj9CPykssy0xPLMnMz0vMAfIjo8zi_Tx8nD0MLIy8DTyMXAwczVy9vV2cTY0MnEz1w8EKjIycLQwtTQx8DHzMDYEK3A08A31NjA0CjfWjSNLv7ulnbuAY6OgR5hYWYgzUQpl-AxPi9BvgAI4GhPVHgZXgCwFUBVi8iFcByA9gBXgcWZAbGhoaYZDpma6oCABqndOv/p0/IZ7_NHLCH082KOAG20A6BDUU6K3082=CZ6_NHLCH082K0H2D0A6EKKDC520B5=N/')
symbol longName
0 1330 Abdullah A. M. Al-Khodari Sons Co.
1 4001 Abdullah Al Othaim Markets Co.
2 4191 Abdullah Saad Mohammed Abo Moati for Bookstore...
3 1820 Abdulmohsen Alhokair Group for Tourism and Dev...
4 2330 Advanced Petrochemical Co.
.. ... ...
199 3020 Yamama Cement Co.
200 3060 Yanbu Cement Co.
201 2290 Yanbu National Petrochemical Co.
202 3007 Zahrat Al Waha for Trading Co.
203 2240 Zamil Industrial Investment Co.

To get all data from the site, you can use their API:
import requests
import pandas as pd
url = "https://www.saudiexchange.sa/tadawul.eportal.theme.helper/TickerServlet"
data = requests.get(url).json()
# print(json.dumps(data, indent=4))
df = pd.json_normalize(data["stockData"])
print(df)
Prints:
pk_rf_company companyShortNameEn companyShortNameAr companyLongNameEn companyLongNameAr highPrice lowPrice noOfTrades previousClosePrice todaysOpen transactionDate turnOver volumeTraded aveTradeSize change changePercent lastTradePrice transactionDateStr
0 4700 Alkhabeer Income الخبير للدخل Al Khabeer Diversified Income Traded Fund صندوق الخبير للدخل المتنوع المتداول None None 308 None None None 1.293560e+06 142791 463.61 0.01 0.11 9.07 None
1 2030 SARCO المصافي Saudi Arabia Refineries Co. شركة المصافي العربية السعودية None None 877 None None None 1.352797e+07 83391 95.09 -0.40 -0.25 162.20 None
2 2222 SAUDI ARAMCO أرامكو السعودية Saudi Arabian Oil Co. شركة الزيت العربية السعودية None None 4054 None None None 6.034732e+07 1731463 427.10 0.05 0.14 34.90 None
...and so on.
To get only symbol/company name:
print(df[["pk_rf_company", "companyLongNameEn"]])
pk_rf_company companyLongNameEn
0 4700 Al Khabeer Diversified Income Traded Fund
1 2030 Saudi Arabia Refineries Co.
2 2222 Saudi Arabian Oil Co.
...and so on.
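If you also want the columns labelled the way the question asks (Symbol / Company name) rather than with the API's field names, you can rename them. A small follow-up sketch, assuming df is the normalized frame from above:

# Select the two fields and give them friendlier column headers.
out = df[["pk_rf_company", "companyLongNameEn"]].rename(
    columns={"pk_rf_company": "Symbol", "companyLongNameEn": "Company name"})
print(out)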

It will be much faster if you store the data in a pandas DataFrame and process it later.
Example Code:
import requests
import pandas as pd

params = {
    'sectorID': 'All',
    '_': '1630217365368'}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.107 Safari/537.36'}

def main(url):
    with requests.Session() as req:
        req.headers.update(headers)
        r = req.get(url, params=params, headers=headers)
        data = r.json()['data']
        df_main = pd.DataFrame(data)
        df_min = df_main.iloc[:, 0:2]
        df_min.columns = ['Symbol', 'Company name']
        print(df_min)

main('https://www.saudiexchange.sa/wps/portal/tadawul/market-participants/issuers/issuers-directory/!ut/p/z1/04_Sj9CPykssy0xPLMnMz0vMAfIjo8zi_Tx8nD0MLIy8DTyMXAwczVy9vV2cTY0MnEz1w8EKjIycLQwtTQx8DHzMDYEK3A08A31NjA0CjfWjSNLv7ulnbuAY6OgR5hYWYgzUQpl-AxPi9BvgAI4GhPVHgZXgCwFUBVi8iFcByA9gBXgcWZAbGhoaYZDpma6oCABqndOv/p0/IZ7_NHLCH082KOAG20A6BDUU6K3082=CZ6_NHLCH082K0H2D0A6EKKDC520B5=N/')
Output:

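Note that df_main.iloc[:, 0:2] depends on the symbol and name being the first two fields in the JSON; if the field order ever changes, it is safer to select them by name. A minimal variation, assuming the same 'symbol' and 'lonaName' keys used in the first answer:

df_min = df_main[['symbol', 'lonaName']]
df_min.columns = ['Symbol', 'Company name']
print(df_min)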
Related

Beautiful soup doesn't get all elements

I'm trying to get all of the street addresses on the right side of the page (https://www.zillow.com/homes/San-Francisco,-CA_rb/), but instead of getting all of them I only get 9.
from bs4 import BeautifulSoup
import requests

header = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Safari/537.36",
    "Accept-Language": "en-GB,en-US;q=0.9,en;q=0.8"
}

response = requests.get(
    "https://www.zillow.com/homes/San-Francisco,-CA_rb/",
    headers=header)
data = response.text
soup = BeautifulSoup(data, "html.parser")
tag_adress = soup.find_all('address')
for x in tag_adress:
    print(x)
The site uses an API to access the data; I got the URL from the dev tools. The script displays 500 addresses (500 agent listings, as the page states).
import requests
import json

useragent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.5026.0 Safari/537.36 Edg/103.0.1254.0"

# obtained url from dev tools
url = "https://www.zillow.com/search/GetSearchPageState.htm?searchQueryState=%7B%22pagination%22%3A%7B%7D%2C%22usersSearchTerm%22%3A%22San%20Francisco%2C%20CA%22%2C%22mapBounds%22%3A%7B%22west%22%3A-122.63417331103516%2C%22east%22%3A-122.23248568896484%2C%22south%22%3A37.70660374673871%2C%22north%22%3A37.84391640339095%7D%2C%22mapZoom%22%3A12%2C%22regionSelection%22%3A%5B%7B%22regionId%22%3A20330%2C%22regionType%22%3A6%7D%5D%2C%22isMapVisible%22%3Atrue%2C%22filterState%22%3A%7B%22isAllHomes%22%3A%7B%22value%22%3Atrue%7D%2C%22sortSelection%22%3A%7B%22value%22%3A%22days%22%7D%7D%2C%22isListVisible%22%3Atrue%7D&wants={%22cat1%22:[%22mapResults%22]}&requestId=2"

page = requests.get(url, headers={"User-Agent": useragent})
page.raise_for_status()
data = json.loads(page.content)
results = data["cat1"]["searchResults"]["mapResults"]
print(f"found {len(results)} results")
for item in results:
    address = item["address"]
    if address != "--":
        print(address)
Outputs:
found 500 results
1160 Mission St, San Francisco, CA
1000 N Point St, San Francisco, CA
750 Van Ness Ave, San Francisco, CA
3131 Pierce St, San Francisco, CA
2655 Bush St, San Francisco, CA
1288 Howard St, San Francisco, CA
765 Market St, San Francisco, CA
10 Innes Ct, San Francisco, CA
51 Innes Ct, San Francisco, CA
...
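If you would rather collect the results into a pandas DataFrame than print them, the same list of dicts can be flattened directly. A minimal sketch, assuming results is the list parsed above (columns other than "address" depend on what the API returns):

import pandas as pd

# Flatten the list of result dicts; nested keys become dotted columns.
df = pd.json_normalize(results)
# Drop the "--" placeholder rows and keep the address column.
addresses = df.loc[df["address"] != "--", "address"]
print(addresses.head())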

Create Rows and Columns in BeautifulSoup

Below is my Python code and its output. I want the output as rows and columns in a DataFrame:
import requests
from bs4 import BeautifulSoup

response = requests.get(source_data)
soup = BeautifulSoup(response.text, "html.parser")
States = soup.find_all('div', class_='card bg-darker p-3 mb-3')
for item in States:
    state_name = item.find(class_='fw-bold fs-5 mb-2').text
    vaccinated_per = item.find(class_='col-3 text-end fs-5 ff-s text-success').text
    print(state_name, vaccinated_per)
Output:
Flanders 80.24%
Wallonia 70.00%
Brussels 56.73%
Ostbelgien 65.11%
Collect your information in a list of dicts and then simply create a data frame from it:
data = []
for item in States:
    data.append({
        'state_name': item.find(class_='fw-bold fs-5 mb-2').text,
        'vaccinated_per': item.find(class_='col-3 text-end fs-5 ff-s text-success').text
    })
pd.DataFrame(data)
Example
from bs4 import BeautifulSoup
import requests
import pandas as pd

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}
response = requests.get('https://covid-vaccinatie.be/en', headers=headers)
soup = BeautifulSoup(response.text, "html.parser")
States = soup.find_all('div', class_='card bg-darker p-3 mb-3')

data = []
for item in States:
    data.append({
        'state_name': item.find(class_='fw-bold fs-5 mb-2').text,
        'vaccinated_per': item.find(class_='col-3 text-end fs-5 ff-s text-success').text
    })
pd.DataFrame(data)
Output
state_name vaccinated_per
0 Flanders 80.24%
1 Wallonia 70.00%
2 Brussels 56.73%
3 Ostbelgien 65.11%
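If you later want to work with the percentages numerically (sorting, plotting), the "%" suffix has to go first. A small follow-up sketch on the same frame:

df = pd.DataFrame(data)
# Strip the trailing "%" and cast to float, e.g. "80.24%" -> 80.24.
df['vaccinated_per'] = df['vaccinated_per'].str.rstrip('%').astype(float)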

How to extract data using beautiful soup

import requests
from bs4 import BeautifulSoup
import pandas as pd

baseurl = 'https://locations.atipt.com/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36'
}

r = requests.get('https://locations.atipt.com/al')
soup = BeautifulSoup(r.content, 'html.parser')
tra = soup.find_all('ul', class_='list-unstyled')
productlinks = []
for links in tra:
    for link in links.find_all('a', href=True):
        comp = baseurl + link['href']
        productlinks.append(comp)

for link in productlinks:
    r = requests.get(link, headers=headers)
    soup = BeautifulSoup(r.content, 'html.parser')
    tag = soup.find_all('div', class_='listing content-card')
    for pro in tag:
        tup = pro.find('a', class_='name').find_all('p')
        for i in tup:
            print(i.get_text())
I am trying to extract data from the p tags, but my code returns nothing. This is the page I am trying to extract from: https://locations.atipt.com/al/alabaster
A working solution so far, using CSS selectors to get the data from the p tags, is as follows:
import requests
from bs4 import BeautifulSoup
import pandas as pd

baseurl = 'https://locations.atipt.com/'
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36'
}

r = requests.get('https://locations.atipt.com/al')
soup = BeautifulSoup(r.content, 'html.parser')
tra = soup.find_all('ul', class_='list-unstyled')
productlinks = []
for links in tra:
    for link in links.find_all('a', href=True):
        comp = baseurl + link['href']
        productlinks.append(comp)

for link in productlinks:
    r = requests.get(link, headers=headers)
    soup = BeautifulSoup(r.content, 'html.parser')
    tag = ''.join([x.get_text(strip=True).replace('\xa0', '') for x in soup.select('div.listing.content-card div:nth-child(2)>p')])
    print(tag)
Output:
634 1st Street NSte 100Alabaster, AL35007
9256 Parkway ESte ABirmingham, AL352061940 28th Ave SBirmingham, AL352095431 Patrick WaySte 101Birmingham, AL35235833 St. Vincent's DrSte 100Birmingham, AL352051401 Doug Baker BlvdSte 104Birmingham, AL35242
1877 Cherokee Ave SWCullman, AL350551301-A Bridge Creek Dr NECullman, AL35055
1821 Beltline Rd SWSte BDecatur, AL35601
4825 Montgomery HwySte 103Dothan, AL36303
550 Fieldstown RdGardendale, AL35071323 Fieldstown Rd, Ste 105Gardendale, AL35071
2804 John Hawkins PkwySte 104Hoover, AL35244
700 Pelham Rd NorthJacksonville, AL36265
1811 Hwy 78 ESte 108 & 109Jasper, AL35501-4081
76359 AL-77Ste CLincoln, AL35096
1 College DriveStation #14Livingston, AL35470
106 6th Street SouthSte AOneonta, AL35121-1823
50 Commons WaySte DOxford, AL36203
301 Huntley PkwyPelham, AL35124
41 Eminence WaySte BPell City, AL35128
124 W Grand AveSte A-4Rainbow City, AL35906
1147 US-231Ste 9 & 10Troy, AL36081
7201 Happy Hollow RdTrussville, AL35173
100 Rice Mine Road LoopSte 102Tuscaloosa, AL354061451 Dr. Edward Hillard DrSte 130Tuscaloosa, AL35401
3735 Corporate Woods DrSte 109Vestavia, AL35242-2296
636 Montgomery HwyVestavia Hills, AL352161539 Montgomery HwySte 111Vestavia Hills, AL35216
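The address parts run together ("634 1st Street NSte 100...") because get_text(strip=True) discards the whitespace between child tags. BeautifulSoup's get_text takes a separator as its first argument, so a small variation of the loop keeps the parts readable; same selector assumption as above:

for link in productlinks:
    r = requests.get(link, headers=headers)
    soup = BeautifulSoup(r.content, 'html.parser')
    # Join each <p> with a space so street, suite and city stay separated.
    parts = [x.get_text(' ', strip=True).replace('\xa0', ' ')
             for x in soup.select('div.listing.content-card div:nth-child(2)>p')]
    print(' | '.join(parts))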

map JSON to CSV

I am trying to request data from a site's XHR endpoint and then save it as a CSV. The response is JSON, and one part of it is nested.
I have two issues:
First: it doesn't iterate. It returns data only for the first object, but in two rows: the same data on both rows, except for the last column, where the first row has floorPlan:altText and the second row floorPlan:url.
Second: Python doesn't like some of my characters: "'charmap' codec can't encode character '\u2560' in position 147:...". It seems to be some UTF-8 problem.
The format of the response is (it is shortened to be more readable here):
[{
    "id": "67d3686f-848b-e911-a971",
    "name": "1302",
    "url": "/site/",
    "residenceType": "3",
    "objectStatus": "1",
    "price": 570000.0,
    "fee": 245.0,
    "apartmentNumber": "1302",
    "address": "Major street 8",
    "rooms": 4.0,
    "floor": 3.0,
    "primaryArea": 92.0,
    "inhabitDate": "2022-02-28T23:00:00Z",
    "floorPlan": {"url": "/externalfiles/image/1.jpg", "altText": "Drawing"}},
 {
    "id": "69d3686f-848b-e911-a971-000d3ab795ed",
    "name": "1303",
    "url": "/site2/",
    "residenceType": "3",
    "objectStatus": "1",
    "price": 320000.0,
    "fee": 113.0,
    "apartmentNumber": "1303",
    "address": "Major Street 8",
    "rooms": 2.0,
    "floor": 3.0,
    "primaryArea": 47.0,
    "inhabitDate": "2022-02-28T23:00:00Z",
    "floorPlan": {"url": "/externalfiles/image/2.jpg", "altText": "Drawing"}},
And my code is:
import requests
import pandas as pd
import csv
import json

h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
    'X-Requested-With': 'XMLHttpRequest',
}
u = "https://cdn-search-standard-prod.azureedge.net/api/v1/search/getstageobjects/23d8dbc1-005a-e911-a961-000d3aba65fd"
x = requests.get(u, headers=h).json()

f = csv.writer(open("test.csv", "w+"))
f.writerow(["id", "name", "residenceType", "objectStatus", "price", "fee", "apartmentNumber"])
for x in x:
    f.writerow([x["id"],
                x["name"],
                x["residenceType"],
                x["objectStatus"],
                x["price"],
                x["fee"],
                x["apartmentNumber"],
                x["floorPlan"]["url"]])
df = pd.DataFrame(x)
df.to_csv(r'C:\Users\abc\Documents\Python Scripts\file_20200627.csv', index=False, sep=';', encoding='utf-8')
If you want to convert x to CSV, use pandas. You are trying to convert x to CSV, but x is not a csv writer object, and you are also writing rows at the same time:
import requests
import pandas as pd
import csv
import json

h = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36',
    'X-Requested-With': 'XMLHttpRequest',
}
u = "https://cdn-search-standard-prod.azureedge.net/api/v1/search/getstageobjects/23d8dbc1-005a-e911-a961-000d3aba65fd"
x = requests.get(u, headers=h).json()

f = csv.writer(open("test.csv", "w+"))
f.writerow(["id", "name", "residenceType", "objectStatus", "price", "fee", "apartmentNumber"])
print(x)
for x in x:
    f.writerow([x["id"],
                x["name"],
                x["residenceType"],
                x["objectStatus"],
                x["price"],
                x["fee"],
                x["apartmentNumber"],
                x["floorPlan"]["url"]])
df = pd.DataFrame(x)
df.to_csv(r'C:\Users\abc\Documents\file.csv', index=False, sep=';', encoding='utf-8')
Output:
id name url residenceType objectStatus price fee secondaryArea apartmentNumber address rooms floor maxFloor concept primaryArea plotArea inhabitDate inhabitDateEnd inhabitDateTimeStamp inhabitDateDetermined swanLabel parkingSpaceStatus floorPlan
altText 75d3686f-848b-e911-a971-000d3ab795ed 1504 /stockholm-lan/jarfalla-kommun/bolinder-strand... 3 1 3840000.0 1974.0 0.0 1504 Fabriksvägen 8 3.0 5.0 15.0 782ffe43-f9ef-40eb-8d92-cfd90aa6b147 73.0 0.0 2022-02-28T23:00:00Z 2022-04-30T22:00:00Z 1646089200000 False True 1 Planritning
url 75d3686f-848b-e911-a971-000d3ab795ed 1504 /stockholm-lan/jarfalla-kommun/bolinder-strand... 3 1 3840000.0 1974.0 0.0 1504 Fabriksvägen 8 3.0 5.0 15.0 782ffe43-f9ef-40eb-8d92-cfd90aa6b147 73.0 0.0 2022-02-28T23:00:00Z 2022-04-30T22:00:00Z 1646089200000 False True 1 /externalfiles/image/23d8dbc1-005a-e911-a961-0...
output csv file
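A side note on the asker's two original issues: pandas can flatten the nested floorPlan dict itself, and writing with an explicit UTF-8 encoding avoids the 'charmap' codec error on Windows. A minimal sketch, reusing u and h from above; note the loop above rebinds x to its last element, so the list is fetched again here:

import requests
import pandas as pd

# Re-fetch the full list (the loop above overwrote x).
data = requests.get(u, headers=h).json()
# json_normalize flattens floorPlan into floorPlan.url / floorPlan.altText
# columns, giving one row per object instead of two.
df = pd.json_normalize(data)
# An explicit utf-8 encoding handles characters such as '\u2560'.
df.to_csv('test.csv', index=False, sep=';', encoding='utf-8')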

How to get a string formatted JSON into a table

I have the following string formatted JSON data. How can I convert data into a table format in R or Python?
I've tried df = pd.DataFrame(data), but that doesn't work, because data is a string.
data = '{"Id":"048f7de7-81a4-464d-bd6d-df3be3b1e7e8","RecordType":20, "CreationTime":"2019-10-08T12:12:32","Operation":"SetScheduledRefresh", "OrganizationId":"39b03722-b836-496a-85ec-850f0957ca6b","UserType":0, "UserAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36", "ItemName":"ASO Daily Statistics","Schedules":{"RefreshFrequency":"Daily", "TimeZone":"E. South America Standard Time","Days":["All"], "Time":["07:30:00","10:30:00","13:30:00","16:30:00","19:30:00","22:30:00"]}, "IsSuccess":true,"ActivityId":"4e8b4514-24be-4ba5-a7d3-a69e8cb8229e"}'
Desired Output:
output =
------------------------------------------------------------------
ID | RecordType | CreationTime
048f7de7-81a4-464d-bd6d-df3be3b1e7e8 | 20 | 2019-10-08T12:12:32
Error:
ValueError Traceback (most recent call last)
<ipython-input-26-039b238b38ef> in <module>
----> 1 df = pd.DataFrame(data)
e:\Anaconda3\lib\site-packages\pandas\core\frame.py in __init__(self, data, index, columns, dtype, copy)
483 )
484 else:
--> 485 raise ValueError("DataFrame constructor not properly called!")
486
487 NDFrame.__init__(self, mgr, fastpath=True)
ValueError: DataFrame constructor not properly called!
In Python:
Given data:
str.replace true with True
Use ast.literal_eval to convert data from a str to a dict
Use pandas.io.json.json_normalize to convert the JSON to a pandas DataFrame
import pandas as pd
from ast import literal_eval
from pandas.io.json import json_normalize
data = '{"Id":"048f7de7-81a4-464d-bd6d-df3be3b1e7e8","RecordType":20, "CreationTime":"2019-10-08T12:12:32","Operation":"SetScheduledRefresh", "OrganizationId":"39b03722-b836-496a-85ec-850f0957ca6b","UserType":0, "UserAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36", "ItemName":"ASO Daily Statistics","Schedules":{"RefreshFrequency":"Daily", "TimeZone":"E. South America Standard Time","Days":["All"], "Time":["07:30:00","10:30:00","13:30:00","16:30:00","19:30:00","22:30:00"]}, "IsSuccess":true,"ActivityId":"4e8b4514-24be-4ba5-a7d3-a69e8cb8229e"}'
data = data.replace('true', 'True')
data = literal_eval(data)
{'ActivityId': '4e8b4514-24be-4ba5-a7d3-a69e8cb8229e',
'CreationTime': '2019-10-08T12:12:32',
'Id': '048f7de7-81a4-464d-bd6d-df3be3b1e7e8',
'IsSuccess': True,
'ItemName': 'ASO Daily Statistics',
'Operation': 'SetScheduledRefresh',
'OrganizationId': '39b03722-b836-496a-85ec-850f0957ca6b',
'RecordType': 20,
'Schedules': {'Days': ['All'],
'RefreshFrequency': 'Daily',
'Time': ['07:30:00',
'10:30:00',
'13:30:00',
'16:30:00',
'19:30:00',
'22:30:00'],
'TimeZone': 'E. South America Standard Time'},
'UserAgent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
'(KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36',
'UserType': 0}
Create the dataframe:
df = json_normalize(data)
Id RecordType CreationTime Operation OrganizationId UserType UserAgent ItemName IsSuccess ActivityId Schedules.RefreshFrequency Schedules.TimeZone Schedules.Days Schedules.Time
048f7de7-81a4-464d-bd6d-df3be3b1e7e8 20 2019-10-08T12:12:32 SetScheduledRefresh 39b03722-b836-496a-85ec-850f0957ca6b 0 Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36 ASO Daily Statistics True 4e8b4514-24be-4ba5-a7d3-a69e8cb8229e Daily E. South America Standard Time [All] [07:30:00, 10:30:00, 13:30:00, 16:30:00, 19:30:00, 22:30:00]
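A side note: the replace/literal_eval step can be skipped entirely, because the original string is already valid JSON and json.loads converts true to True natively. A minimal sketch, starting from the original data string:

import json

# json.loads parses the JSON string directly; no replacement needed.
record = json.loads(data)
df = json_normalize(record)
print(df[['Id', 'RecordType', 'CreationTime']])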
In R, you will need the reticulate library. You will also need to change all true to True. Look at the code below:
a <- 'string = {"Id":"048f7de7-81a4-464d-bd6d-df3be3b1e7e8","RecordType":20,
"CreationTime":"2019-10-08T12:12:32","Operation":"SetScheduledRefresh",
"OrganizationId":"39b03722-b836-496a-85ec-850f0957ca6b","UserType":0,
"UserAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36",
"ItemName":"ASO Daily Statistics","Schedules":{"RefreshFrequency":"Daily",
"TimeZone":"E. South America Standard Time","Days":["All"],
"Time":["07:30:00","10:30:00","13:30:00","16:30:00","19:30:00","22:30:00"]},
"IsSuccess":true,"ActivityId":"4e8b4514-24be-4ba5-a7d3-a69e8cb8229e"}'
data.frame(reticulate::py_eval(gsub('true','True',sub('.*=\\s+','',a))))
