I'm fairly new to Python and am hoping to get some help transforming an XML file into a pandas DataFrame. I have searched other resources but am still stuck. I'm looking to get all the fields between the <RECORDING> tags into a table. Any help is greatly appreciated! Thank you.
Below is the code I tried, but it is not working properly.
import xml.etree.ElementTree as ET
import pandas as pd

xml_data = open('5249009-08-34-59-126029.xml', 'r').read()
root = ET.XML(xml_data)

data = []
cols = []
for i, child in enumerate(root):
    data.append([subchild.text for subchild in child])
    cols.append(child.tag)

df = pd.DataFrame(data).T
df.columns = cols
print(df)
Below is sample input data:
<?xml version="1.0"?>
<RECORDING>
<IDENT>0</IDENT>
<DEVICEID>133242232</DEVICEID>
<DEVICEALIAS>52232009</DEVICEALIAS>
<GROUP>1823481655</GROUP>
<GATE>1011655</GATE>
<ANI>7777777777</ANI>
<DNIS>777777777</DNIS>
<USER1>00:07:53.2322691,00:03:21.34232761</USER1>
<USER2>text</USER2>
<USER3/>
<USER4/>
<USER5>34fc0a8d-d5632c9b1</USER5>
<USER6>000dfsdf98701596638094</USER6>
<USER7>97</USER7>
<USER8>00701596638094</USER8>
<USER9>10155</USER9>
<USER10/>
<USER11/>
<USER12/>
<USER13>Text</USER13>
<USER14>4</USER14>
<USER15>10</USER15>
<CALLSEGMENTID/>
<CALLID>9870</CALLID>
<FILENAME>\\folderpath\folderpath\folderpath\folderpath\2020\Aug\05\5249009\52343109-234234-34-59-1234234029</FILENAME>
<DURATION>201</DURATION>
<STARTYEAR>2020</STARTYEAR>
<STARTMONTH>08</STARTMONTH>
<STARTMONTHNAME>August</STARTMONTHNAME>
<STARTDAY>05</STARTDAY>
<STARTDAYNAME>Wednesday</STARTDAYNAME>
<STARTHOUR>08</STARTHOUR>
<STARTMINUTE>34</STARTMINUTE>
<STARTSECOND>59</STARTSECOND>
<PRIORITY>50</PRIORITY>
<RECORDINGTYPE>S</RECORDINGTYPE>
<CALLDIRECTION>I</CALLDIRECTION>
<SCREENCAPTURE>7</SCREENCAPTURE>
<KEEPCALLFORDAYS>90</KEEPCALLFORDAYS>
<BLACKOUTREMOTEAUDIO>false</BLACKOUTREMOTEAUDIO>
<BLACKOUTS/>
</RECORDING>
One possible solution for parsing the file:
import pandas as pd
from bs4 import BeautifulSoup

soup = BeautifulSoup(open("your_file.xml", "r"), "xml")

d = {}
for tag in soup.RECORDING.find_all(recursive=False):
    d[tag.name] = tag.get_text(strip=True)

df = pd.DataFrame([d])
print(df)
Prints:
IDENT DEVICEID DEVICEALIAS GROUP GATE ANI DNIS USER1 USER2 USER3 USER4 USER5 USER6 USER7 USER8 USER9 USER10 USER11 USER12 USER13 USER14 USER15 CALLSEGMENTID CALLID FILENAME DURATION STARTYEAR STARTMONTH STARTMONTHNAME STARTDAY STARTDAYNAME STARTHOUR STARTMINUTE STARTSECOND PRIORITY RECORDINGTYPE CALLDIRECTION SCREENCAPTURE KEEPCALLFORDAYS BLACKOUTREMOTEAUDIO BLACKOUTS
0 0 133242232 52232009 1823481655 1011655 7777777777 777777777 00:07:53.2322691,00:03:21.34232761 text 34fc0a8d-d5632c9b1 000dfsdf98701596638094 97 00701596638094 10155 Text 4 10 9870 \\folderpath\folderpath\folderpath\folderpath\... 201 2020 08 August 05 Wednesday 08 34 59 50 S I 7 90 false
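As an editorial aside (not part of the original answer): recent pandas versions (1.3 and later) ship pd.read_xml, which can likely build the same one-row frame directly. A minimal sketch, assuming the file name from the question and the default lxml-backed parser:

import pandas as pd

# xpath="/RECORDING" matches the root element; each matched node becomes a row
# and its child elements become the columns (requires pandas >= 1.3)
df = pd.read_xml("5249009-08-34-59-126029.xml", xpath="/RECORDING")
print(df)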
Humble greetings and welcome to anyone willing to spend time here. I shall introduce myself as a very green student of data science and also Python. This thread is meant to gain insight from more fortunate minds capable of deeper understanding within the realm of Python.
As we can see, the value for each row can be found easily via page inspection. But it seems that they all use the same class name. As of now, I'm afraid I couldn't even find the right keyword to search for any working method on Google.
These are the codes that I've tried. They don't work, and it is embarrassing, but I have to show them anyway. I've tried fiddling by adding .content, .text, find, and find_all, but I understand that my failure lies at an even deeper fundamental core.
from bs4 import BeautifulSoup
import requests
from csv import writer
import pandas as pd

url = 'https://m4.mobilelegends.com/stats'
page = requests.get(url)
soup = BeautifulSoup(page.text, 'html.parser')
lists = soup.find('div', class_="m4-team-stats-scroll")

with open('m4stats_team.csv', 'w', encoding='utf8', newline='') as f:
    thewriter = writer(f)
    header = ['Team', 'Win Rate', 'Average KDA', 'Average Kills', 'average Deaths', 'Average Assists', 'Average Game Time', 'Average Lord Kills', 'Average Tortoise Kills', 'Average Towers Destroy', 'First Blood Rate', 'Hero Pool']
    thewriter.writerow(header)

    for list in lists:
        team = list.find_all('p', class_="h3 pl-5 whitespace-nowrap hidden xl:block")
        awr = list.find_all('p', class_="h4")
        akda = list.find('p', class_="h4").text
        akill = list.find('p', class_="h4").text
        adeath = list.find('p', class_="h4").text
        aassist = list.find('p', class_="h4").text
        atime = list.find('p', class_="h4").text
        aalord = list.find('p', class_="h4").text
        atortoise = list.find('p', class_="h4").text
        atower = list.find('p', class_="h4").text
        firstblood = list.find('p', class_="h4").text
        hrpool = list.find('p', class_="h4").text

        info = [team, awr, akda, akill, adeath, aassist, atime, aalord, atortoise, atower, firstblood, hrpool]
        thewriter.writerow(info)

pd.read_csv('m4stats_team.csv').head()
What I am expecting:
Any kind of insight. Whether it's a clue, a keyword, or a code snippet, I appreciate and am most thankful for any kind of guidance. I am not asking to somehow be handed the complete scraped CSV, as I could have done that manually. At this point I want to be able to do basic web scraping myself.
You can iterate over the rows in the table and their items.
from bs4 import BeautifulSoup
import requests

page = requests.get('https://m4.mobilelegends.com/stats')
page.raise_for_status()

page = BeautifulSoup(page.content, 'html.parser')
table = page.find("div", class_="m4-team-stats-scroll")

with open("table.csv", "w") as file:
    for row in table.find_all("div", class_="m4-team-stats"):
        items = row.find_all("div", class_="col-span-1")
        # write into file in csv format, use map to extract text from items
        file.write(",".join(map(lambda item: item.text, items)) + "\n")
Display output:
import pandas as pd
df = pd.read_csv("table.csv")
print(df)
# Outputs:
"""
Team ↓Win Rate ... ↓First Blood Rate ↓Hero pool
0 echo 72.0% ... 48.0% 37
1 rrq 60.9% ... 60.9% 37
2 tv 60.0% ... 60.0% 29
3 fcon 55.0% ... 85.0% 32
4 inc 53.3% ... 26.7% 31
5 onic 52.9% ... 47.1% 39
6 blck 52.2% ... 47.8% 31
7 rrq-br 46.2% ... 30.8% 32
8 thq 45.5% ... 63.6% 27
9 s11 42.9% ... 28.6% 26
10 tdk 37.5% ... 62.5% 24
11 ot 28.6% ... 28.6% 21
12 mvg 20.0% ... 20.0% 15
13 rsg-sg 20.0% ... 60.0% 17
14 burn 0.0% ... 20.0% 21
15 mdh 0.0% ... 40.0% 18
[16 rows x 12 columns]
"""
I have folders full of XML files which I want to parse into a dataframe. The following functions iterate through an XML tree recursively and return a dataframe with three columns: path, attributes and text.
from lxml import etree
import pandas as pd

def XML2DF(filename, df1, MAX_DEPTH=20):
    # read bytes so lxml also accepts files with an encoding declaration
    with open(filename, "rb") as f:
        xml_str = f.read()
    tree = etree.fromstring(xml_str)
    df1 = recursive_parseXML2DF(tree, df1, MAX_DEPTH=MAX_DEPTH)
    return df1

def recursive_parseXML2DF(element, df1, depth=0, MAX_DEPTH=20):
    if depth > MAX_DEPTH:
        return df1
    df2 = pd.DataFrame([[element.getroottree().getpath(element), element.attrib, element.text]],
                       columns=["path", "attrib", "text"])
    #print(df2)
    df1 = pd.concat([df1, df2])
    for child in element:
        df1 = recursive_parseXML2DF(child, df1, depth=depth + 1, MAX_DEPTH=MAX_DEPTH)
    return df1
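For reference, a hedged usage sketch of these helpers; the file name is a placeholder:

import pandas as pd

df = pd.DataFrame(columns=["path", "attrib", "text"])
df = XML2DF("example.xml", df)  # "example.xml" stands in for one of your files
print(df.head())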
The code for the function was adapted from this post.
Most of the time the function works fine and returns the entire path, but for some documents the returned path looks like this:
/*/*[1]/*[3]
/*/*[1]/*[3]/*[1]
The text tag entry remains valid and correct.
The only difference I can make out between the XML of the working-path and wildcard-path documents is that the XML tags are written in all caps.
Working example:
<?xml version="1.0" encoding="utf-8"?>
<root>
<Header>
<ReceivingApplication>ReceivingApplication</ReceivingApplication>
<SendingApplication>SendingApplication</SendingApplication>
<MessageControlID>12345</MessageControlID>
<ReceivingApplication>ReceivingApplication</ReceivingApplication>
<FileCreationDate>2000-01-01T00:00:00</FileCreationDate>
</Header>
<Einsendung>
<Patient>
<PatientName>Name</PatientName>
<PatientVorname>FirstName</PatientVorname>
<PatientGebDat>2000-01-01T00:00:00</PatientGebDat>
<PatientSex>4</PatientSex>
<PatientPWID>123456</PatientPWID>
</Patient>
<Visit>
<VisitNumber>A2000.0001</VisitNumber>
<PatientPLZ>1234</PatientPLZ>
<PatientOrt>PatientOrt</PatientOrt>
<PatientAdr2>
</PatientAdr2>
<PatientStrasse>PatientStrasse 01</PatientStrasse>
<VisitEinsID>1234</VisitEinsID>
<VisitBefund>VisitBefund</VisitBefund>
<Befunddatum>2000-01-01T00:00:00</Befunddatum>
</Visit>
</Einsendung>
</root>
Nonsensical example:
<?xml version="1.0"?>
<KRSCHWEIZ xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="krSCHWEIZ">
<KEY_VS>abcdefg</KEY_VS>
<KEY_KLR>abcdefg</KEY_KLR>
<ABSENDER>
<ABSENDER_MELDER_ID>123456</ABSENDER_MELDER_ID>
<MELDER>
<MELDER_ID>123456</MELDER_ID>
<QUELLSYSTEM>ABCDEF</QUELLSYSTEM>
<PATIENT>
<REFERENZNR>987654</REFERENZNR>
<NACHNAME>my name</NACHNAME>
<VORNAMEN>my first name</VORNAMEN>
<GEBURTSNAME />
<GEBURTSDATUM>my dob</GEBURTSDATUM>
<GESCHLECHT>XX</GESCHLECHT>
<PLZ>9999</PLZ>
<WOHNORT>Mycity</WOHNORT>
<STRASSE>mystreet</STRASSE>
<HAUSNR>99</HAUSNR>
<VERSICHERTENNR>999999999</VERSICHERTENNR>
<DATEIEN>
<DATEI>
<DATEINAME>my_attached_document.html</DATEINAME>
<DATEIBASE64>mybase_64_encoded_document</DATEIBASE64>
</DATEI>
</DATEIEN>
</PATIENT>
</MELDER>
</ABSENDER>
</KRSCHWEIZ>
How do I get correct explicit path information also for this case?
The presence of namespaces changes the output of .getpath(). You can use .getelementpath() instead, which will include the namespace prefix instead of using wildcards.
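For illustration (my addition, not the original answer's code), a minimal sketch of the .getelementpath() route; paths come back with the namespace in Clark notation, e.g. {krSCHWEIZ}KEY_VS:

import lxml.etree

tree = lxml.etree.parse("broken.xml")
for node in tree.iter():
    if isinstance(node.tag, str):  # skip comments and processing instructions
        # unlike getpath, getelementpath keeps the namespace instead of wildcards
        print(tree.getelementpath(node))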
If the prefixes should be discarded completely, you can strip them out before using .getpath():
import lxml.etree
import pandas as pd

rows = []
tree = lxml.etree.parse("broken.xml")
for node in tree.iter():
    try:
        node.tag = lxml.etree.QName(node).localname
    except ValueError:
        # skip tags with no name (comments, processing instructions)
        continue
    rows.append([tree.getpath(node), node.attrib, node.text])

df = pd.DataFrame(rows, columns=["path", "attrib", "text"])
Resulting dataframe:
>>> df
path attrib text
0 /KRSCHWEIZ [] \n
1 /KRSCHWEIZ/KEY_VS [] abcdefg
2 /KRSCHWEIZ/KEY_KLR [] abcdefg
3 /KRSCHWEIZ/ABSENDER [] \n
4 /KRSCHWEIZ/ABSENDER/ABSENDER_MELDER_ID [] 123456
5 /KRSCHWEIZ/ABSENDER/MELDER [] \n
6 /KRSCHWEIZ/ABSENDER/MELDER/MELDER_ID [] 123456
7 /KRSCHWEIZ/ABSENDER/MELDER/QUELLSYSTEM [] ABCDEF
8 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT [] \n
9 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/REFERENZNR [] 987654
10 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/NACHNAME [] my name
11 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/VORNAMEN [] my first name
12 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/GEBURTSNAME [] None
13 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/GEBURTSDATUM [] my dob
14 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/GESCHLECHT [] XX
15 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/PLZ [] 9999
16 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/WOHNORT [] Mycity
17 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/STRASSE [] mystreet
18 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/HAUSNR [] 99
19 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/VERSICHERTENNR [] 999999999
20 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/DATEIEN [] \n
21 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/DATEIEN/DATEI [] \n
22 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/DATEIEN/DAT... [] my_attached_document.html
23 /KRSCHWEIZ/ABSENDER/MELDER/PATIENT/DATEIEN/DAT... [] mybase_64_encoded_document
I connect to an API that provides COVID-19 data for Brazil, organized by state and city, as follows:
# Libraries
import pandas as pd
from pandas import Series, DataFrame  # Panel was removed in pandas 0.25
import matplotlib.pyplot as plt
from matplotlib.pyplot import plot_date, axis, show, gcf
import numpy as np
from urllib.request import Request, urlopen
import urllib
from http.cookiejar import CookieJar
from datetime import datetime, timedelta
cj = CookieJar()
url_Bso = "https://brasil.io/api/dataset/covid19/caso_full/data?state=MG&city=Barroso"
req_Bso = urllib.request.Request(url_Bso, None, {"User-Agent": "python-urllib"})
opener_Bso = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
response_Bso = opener_Bso.open(req_Bso)
raw_response_Bso = response_Bso.read()
json_Bso = pd.read_json(raw_response_Bso)
results_Bso = json_Bso['results']
results_Bso = results_Bso.to_dict().values()
df_Bso = pd.DataFrame(results_Bso)
df_Bso.head(5)
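(An added note, hedged: the to_dict().values() round-trip is probably unnecessary, since the 'results' column already holds one dict per row, e.g.:)

df_Bso = pd.DataFrame(json_Bso['results'].tolist())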
This API compiles the data released by the state health departments. However, there is a difference between the records of the state and city health departments, and the state records are out of date relative to those of the cities. I would like to update the Thursday and Saturday figures (the days on which the epidemiological week closes). I'm trying the following:
saturday = datetime.today() + timedelta(days=-5)
yesterday = datetime.today() + timedelta(days=-1)
last_available_confirmed_day_Bso_saturday = 51
last_available_confirmed_day_Bso_yesterday = 54
df_Bso = df_Bso.loc[df_Bso['date'] == saturday, ['last_available_confirmed']] = last_available_confirmed_day_Bso_saturday
df_Bso = df_Bso.loc[df_Bso['date'] == yesterday, ['last_available_confirmed']] = last_available_confirmed_day_Bso_yesterday
df_Bso
However, I get the error:
> AttributeError: 'int' object has no attribute 'loc'
I need another dataframe with the values for those days updated. Can anyone help?
You have to adjust the dates you compare against: your data frame's date column is a string, so either format your dates as matching strings (as below) or convert the column to datetime.
today = datetime.now()
last_sat_num = (today.weekday() + 2) % 7
last_thu_num = (today.weekday() + 4) % 7
last_sat = today - timedelta(last_sat_num)
last_thu = today - timedelta(last_thu_num)
last_sat_str = last_sat.strftime('%Y-%m-%d')
last_thu_str = last_thu.strftime('%Y-%m-%d')
last_available_confirmed_day_Bso_sat = 51
last_available_confirmed_day_Bso_thu = 54
df_Bso2 = df_Bso.copy()
df_Bso2.loc[df_Bso2['date'] == last_sat_str, ['last_available_confirmed']] = last_available_confirmed_day_Bso_sat
df_Bso2.loc[df_Bso2['date'] == last_thu_str, ['last_available_confirmed']] = last_available_confirmed_day_Bso_thu
df_Bso2[['date', 'last_available_confirmed']].head(10)
Output
date last_available_confirmed
0 2020-07-15 44
1 2020-07-14 43
2 2020-07-13 40
3 2020-07-12 40
4 2020-07-11 51
5 2020-07-10 39
6 2020-07-09 36
7 2020-07-08 36
8 2020-07-07 27
9 2020-07-06 27
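Alternatively (an added sketch, not part of the original answer), the date column can be converted to real datetimes so that no string formatting is needed:

import pandas as pd
from datetime import datetime, timedelta

df_Bso2 = df_Bso.copy()
df_Bso2['date'] = pd.to_datetime(df_Bso2['date'])  # string -> datetime64

today = datetime.now()
last_sat = today - timedelta((today.weekday() + 2) % 7)
df_Bso2.loc[df_Bso2['date'].dt.date == last_sat.date(), 'last_available_confirmed'] = 51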
I have a small project web-scraping Google search results for a list of keywords. I have built a nested for loop for scraping the search results. The problem is that the loop over the keyword list does not work as I intended: instead of scraping the data from each search result, it only returns the result of the last keyword, and nothing for the first two searches.
Here is the code:
browser = webdriver.Chrome(r"C:\...\chromedriver.exe")
df = pd.DataFrame(columns=['ceo', 'value'])
baseUrl = 'https://www.google.com/search?q='
html = browser.page_source
soup = BeautifulSoup(html)
ceo_list = ["Bill Gates", "Elon Musk", "Warren Buffet"]
values = []

for ceo in ceo_list:
    browser.get(baseUrl + ceo)
    r = soup.select('div.g.rhsvw.kno-kp.mnr-c.g-blk')
    df = pd.DataFrame()
    for i in r:
        value = i.select_one('div.Z1hOCe').text
        ceo = i.select_one('.kno-ecr-pt.PZPZlf.gsmt.i8lZMc').text
        values = [ceo, value]
        s = pd.Series(values)
        df = df.append(s, ignore_index=True)
print(df)
The output:
0 1
0 Warren Buffet Born: October 28, 1955 (age 64 years), Seattle...
The output that I am expecting is as this:
0 1
0 Bill Gates Born:..........
1 Elon Musk Born:...........
2 Warren Buffett Born: August 30, 1930 (age 89 years), Omaha, N...
Any suggestions or comments are welcome here.
Declare df = pd.DataFrame() outside the for loop.
Since you have currently defined it inside the loop, a new data frame is initialized for each keyword in your list and the old one is replaced. That's why you are only getting the result for the last keyword.
Try this:
browser = webdriver.Chrome(r"C:\...\chromedriver.exe")
baseUrl = 'https://www.google.com/search?q='
ceo_list = ["Bill Gates", "Elon Musk", "Warren Buffet"]

df = pd.DataFrame()
for ceo in ceo_list:
    browser.get(baseUrl + ceo)
    # re-read the page source after each navigation so the soup is not stale
    html = browser.page_source
    soup = BeautifulSoup(html, 'html.parser')
    r = soup.select('div.g.rhsvw.kno-kp.mnr-c.g-blk')
    for i in r:
        value = i.select_one('div.Z1hOCe').text
        ceo = i.select_one('.kno-ecr-pt.PZPZlf.gsmt.i8lZMc').text
        s = pd.Series([ceo, value])
        df = df.append(s, ignore_index=True)
print(df)
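A hedged side note: DataFrame.append is deprecated (and removed in pandas 2.0). The usual replacement is collecting rows in a list and building the frame once, e.g. for the inner loop above (reusing r from the snippet):

rows = []
for i in r:
    value = i.select_one('div.Z1hOCe').text
    name = i.select_one('.kno-ecr-pt.PZPZlf.gsmt.i8lZMc').text
    rows.append({'ceo': name, 'value': value})
df = pd.DataFrame(rows)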
From these commands:
from stackapi import StackAPI
import pandas as pd

lst = ['11786778', '12370060']
df = pd.DataFrame(lst)
SITE = StackAPI('stackoverflow', key="xxxx")
results = []
for i in range(1, len(df)):
    SITE.max_pages = 10000000
    SITE.page_size = 100
    post = SITE.fetch('/users/{ids}/reputation-history', ids=lst[i])
    results.append(post)
The results variable holds the results in JSON format.
How is it possible to convert the results variable to a dataframe with five columns:
reputation_history_type, reputation_change, post_id, creation_date,
user_id
Here, try this:
from stackapi import StackAPI
import pandas as pd

lst = ['11786778', '12370060']
SITE = StackAPI('stackoverflow')
results = []
SITE.max_pages = 10000000
SITE.page_size = 100
for i in lst:
    post = SITE.fetch('/users/{ids}/reputation-history', ids=[i]).get('items')
    results.extend([list(j.values()) for j in post])

df = pd.DataFrame(results, columns=['reputation_history_type', 'reputation_change', 'post_id', 'creation_date', 'user_id'])
Output:
print(df.head()) gives:
reputation_history_type reputation_change post_id creation_date user_id
0 asker_accepts_answer 2 59126012 1575207944 11786778.0
1 post_undownvoted 2 59118819 1575139301 11786778.0
2 post_upvoted 10 59118819 1575139301 11786778.0
3 post_downvoted -2 59118819 1575139299 11786778.0
4 post_upvoted 10 59110166 1575094452 11786778.0
print(df.tail()) gives:
reputation_history_type reputation_change post_id creation_date user_id
170 post_upvoted 10 58906292 1574036540 12370060.0
171 answer_accepted 15 58896536 1573990105 12370060.0
172 post_upvoted 10 58896044 1573972834 12370060.0
173 post_downvoted 0 58896299 1573948372 12370060.0
174 post_downvoted 0 58896158 1573947435 12370060.0
NOTE:
You can just create a dataframe directly from the result, which will be a list of lists.
You don't need to set SITE.max_pages and SITE.page_size on every iteration of the loop.
from stackapi import StackAPI
import pandas as pd

lst = ['11786778', '12370060']
SITE = StackAPI('stackoverflow', key="xxxx")
SITE.max_pages = 10000000
SITE.page_size = 100

results = []
for user_id in lst:
    post = SITE.fetch('/users/{ids}/reputation-history', ids=[user_id])
    results.append(post)

data = []
for item in results:
    # each response carries the actual records in its 'items' list
    data.extend(item['items'])

df = pd.DataFrame(data, columns=['reputation_history_type', 'reputation_change', 'post_id', 'creation_date', 'user_id'])
print(df)
Kind of flying blind since I maxed out my Stack Overflow API quota, but this should work:
from stackapi import StackAPI
from pandas.io.json import json_normalize
import pandas as pd

lst = ['11786778', '12370060']
SITE = StackAPI('stackoverflow', key="xxx")
SITE.max_pages = 10000000
SITE.page_size = 100

results = []
for ids in lst:
    post = SITE.fetch('/users/{ids}/reputation-history', ids=ids)
    results.append(json_normalize(post, 'items'))

df = pd.concat(results, ignore_index=True)
json_normalize converts the JSON to a dataframe.
pd.concat concatenates the dataframes together into a single frame.
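To make the two steps concrete, a small illustration with invented data shaped like the real response (the field values are made up for the example):

import pandas as pd
from pandas.io.json import json_normalize  # pd.json_normalize in pandas >= 1.0

# a fake API response shaped like the real one
post = {"items": [{"reputation_history_type": "post_upvoted",
                   "reputation_change": 10,
                   "post_id": 59118819,
                   "creation_date": 1575139301,
                   "user_id": 11786778}]}

frame = json_normalize(post, "items")              # one row per entry in 'items'
df = pd.concat([frame, frame], ignore_index=True)  # stitch per-user frames together
print(df)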