I am a beginner at machine learning and am exploring datasets for my NLP project. I got the data from http://www.cs.jhu.edu/~mdredze/datasets/sentiment/index2.html. I am trying to create a pandas DataFrame by parsing the XML data, and I also want to add a label (1) to the positive reviews. Can someone please help me with the code? A sample output is given below.
from bs4 import BeautifulSoup
positive_reviews = BeautifulSoup(open('/content/drive/MyDrive/sorted_data_acl/electronics/positive.review', encoding='utf-8').read())
positive_reviews = positive_reviews.findAll('review_text')
positive_reviews[0]
<review_text>
I purchased this unit due to frequent blackouts in my area and 2 power supplies going bad. It will run my cable modem, router, PC, and LCD monitor for 5 minutes. This is more than enough time to save work and shut down. Equally important, I know that my electronics are receiving clean power.
I feel that this investment is minor compared to the loss of valuable data or the failure of equipment due to a power spike or an irregular power supply.
As always, Amazon had it to me in <2 business days
</review_text>
The main issue, as noted, is that the files are pseudo XML.
Download the tar.gz file and unzip/untar it.
Build a dictionary of all files.
As a workaround for the pseudo XML, insert a root document element into the string representation of each document.
Then it is a simple case of using list/dict comprehensions to build the format the pandas constructor expects.
dfs is a dictionary of DataFrames ready to be used.
import requests
from pathlib import Path
from tarfile import TarFile
from bs4 import BeautifulSoup
import io
import pandas as pd
# download tar with pseudo XML...
url = "http://www.cs.jhu.edu/%7Emdredze/datasets/sentiment/domain_sentiment_data.tar.gz"
fn = Path.cwd().joinpath(url.split("/")[-1])
if not fn.exists():
    r = requests.get(url, stream=True)
    with open(fn, 'wb') as f:
        for chunk in r.raw.stream(1024, decode_content=False):
            if chunk:
                f.write(chunk)

# untar downloaded file and generate a dictionary of all files
TarFile.open(fn, "r:gz").extractall()
files = {f"{p.parent.name}/{p.name}": p for p in Path.cwd().joinpath("sorted_data_acl").glob("**/*") if p.is_file()}

# convert all files into dataframes in a dict
dfs = {}
for file in files.keys():
    with open(files[file]) as f:
        text = f.read()
    # pseudo XML with no root element stops it from being well formed
    # force one in...
    soup = BeautifulSoup(f"<root>{text}</root>", "xml")
    # simple case: each review is a row and each child element is a column
    dfs[file] = pd.DataFrame([{c.name: c.text.strip("\n") for c in r.children if c.name} for r in soup.find_all("review")])
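Since the question also asks for a label of 1 on the positive reviews, here is a minimal sketch of how you could label and combine the per-file DataFrames afterwards. It assumes the dfs keys keep the category/filename pattern built above (e.g. electronics/positive.review) and simply skips anything that is not a positive or negative review file.
# A minimal sketch: keep only the positive/negative review files, label them
# 1 (positive) or 0 (negative) based on the file name, and stack everything
# into a single DataFrame.
labelled = []
for name, df in dfs.items():
    if name.endswith("positive.review"):
        labelled.append(df.assign(label=1, source=name))
    elif name.endswith("negative.review"):
        labelled.append(df.assign(label=0, source=name))

all_reviews = pd.concat(labelled, ignore_index=True)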
I am new here and trying to solve an interesting question about World of Tanks. I heard that the data for every battle is saved on the client's disk in the Wargaming.net folder, and I want to do a batch data analysis of our clan's battle performances.
It is said that these .dat files are a kind of JSON file, so I tried a couple of lines of Python code to read one, but it failed.
import json
f = open('ex.dat', 'r', encoding='unicode_escape')
content = f.read()
a = json.loads(content)
print(type(a))
print(a)
f.close()
The code is very simple and it obviously fails. Could anyone tell me what is actually going on with these files?
Added on Feb. 9th, 2022
After trying another piece of code in a Jupyter Notebook, it seems like something can be read from the .dat files:
import struct
import numpy as np
import matplotlib.pyplot as plt
import io
with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    fbuff = io.BufferedReader(f)
    N = len(fbuff.read())
    print('byte length: ', N)

with open('C:/Users/xukun/Desktop/br/ex.dat', 'rb') as f:
    data = struct.unpack('b' * N, f.read(1 * N))
The result is a tuple of integer byte values, but I have no idea how to deal with it now.
Here's how you can parse some parts of it.
import pickle
import zlib
file = '4402905758116487.dat'
cache_file = open(file, 'rb')  # This can be improved to not keep the file open.
# When converting pickled items from Python 2 to Python 3 you need to use the "bytes" or "latin1" encoding.
legacyBattleResultVersion, brAllDataRaw = pickle.load(cache_file, encoding='bytes', errors='ignore')
arenaUniqueID, brAccount, brVehicleRaw, brOtherDataRaw = brAllDataRaw
# The data stored inside the pickled file will be a compressed pickle again.
vehicle_data = pickle.loads(zlib.decompress(brVehicleRaw), encoding='latin1')
account_data = pickle.loads(zlib.decompress(brAccount), encoding='latin1')
brCommon, brPlayersInfo, brPlayersVehicle, brPlayersResult = pickle.loads(zlib.decompress(brOtherDataRaw), encoding='latin1')
# Lastly you can print all of these and see a lot of data inside.
The result contains a mixture of more binary data as well as some data captured from the replays.
This is not a complete solution but it's a decent start to parsing these files.
First you can look at the replay file itself in a text editor, though it won't show the code at the beginning of the file, which has to be cleaned out. Then there is a ton of info that you have to read in and figure out, but it is the stats for each player in the game. Only after that comes the part that covers the actual replay, and you don't need that stuff.
You can grab the player IDs and tank IDs from the WoT developer area API if you want.
After loading the pickle files as gabzo mentioned, you will see that it is simply a list of values, and without knowing what each value refers to, it's hard to make sense of it. The identifiers for the values can be extracted from your game installation:
import zipfile
WOT_PKG_PATH = "Your/Game/Path/res/packages/scripts.pkg"
BATTLE_RESULTS_PATH = "scripts/common/battle_results/"
archive = zipfile.ZipFile(WOT_PKG_PATH, 'r')
for file in archive.namelist():
    if file.startswith(BATTLE_RESULTS_PATH):
        archive.extract(file)
You can then decompile the Python files (with uncompyle6) and go through the code to see the identifiers for the values.
One thing to note is that the list of values for the main pickle objects (like brAccount from gabzo's code) always has a checksum as the first value. You can use this to check whether you have the right order and the correct identifiers for the values. The way these checksums are generated can be seen in the decompiled python files.
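To make that concrete, here is a minimal sketch of pairing identifiers with values once you have them; FIELD_NAMES below is a made-up placeholder for whatever ordered identifiers you extract from the decompiled battle_results modules, not the game's real schema.
# FIELD_NAMES is hypothetical; replace it with the ordered identifiers you
# extracted from the decompiled battle_results code.
FIELD_NAMES = ["fieldA", "fieldB", "fieldC"]

def label_values(values, field_names):
    # The first element of each main pickled list is a checksum, so skip it.
    checksum, payload = values[0], values[1:]
    return dict(zip(field_names, payload))

# e.g. labelled_account = label_values(account_data, FIELD_NAMES)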
I have been tackling this problem for some time (albeit in Rust): https://github.com/dacite/wot-battle-results-parser/tree/main/datfile_parser.
This link contains CSV files for daily reports of COVID-19.
https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports
What is the best way to get all of the CSV files into one DataFrame?
I tried the code below from other questions, but it doesn't work.
from pathlib import Path
import pandas as pd
files = Path('https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports')
csv_only = files.rglob('*.csv')
combo = [pd.read_csv(f)
           .assign(f.stem)
           .fillna(0)
         for f in csv_only]
one_df = pd.concat(combo,ignore_index=True)
one_df = one_df.drop_duplicates('date')
print(one_df)
How could I use requests to read all the files?
You can simply use the requests module to get the names of all the .csv files present, which eliminates the need for glob:
import requests
url = "https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_daily_reports"
csv_only = [i.split("=")[1][1:-1] for i in requests.get(url).text.split(" ") if '.csv' in i and 'title' in i]
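From there, one way (a sketch, not part of the original answer) to get everything into a single DataFrame is to read each file from the repository's raw-content host; this assumes the names collected in csv_only above are valid and that raw.githubusercontent.com serves them from the master branch.
import pandas as pd

# pd.read_csv can read straight from a URL, so concatenate all daily reports.
raw_base = ("https://raw.githubusercontent.com/CSSEGISandData/COVID-19/"
            "master/csse_covid_19_data/csse_covid_19_daily_reports/")

frames = [pd.read_csv(raw_base + name).assign(source=name) for name in csv_only]
one_df = pd.concat(frames, ignore_index=True)
print(one_df.shape)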
pathlib only works with filesystems, so this won't do: csv_only will be an empty generator because there is no such location on your disk. You need to fetch the data from GitHub with actual HTTP requests. I did something similar for some personal stuff a while ago; you can have a look and adapt it (it uses the GitHub API, so you'll need an API key).
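For completeness, here is a sketch of the API route (my addition, not the answerer's code). The public GitHub contents endpoint lists a directory along with a download_url for each file; unauthenticated calls are rate-limited and directory listings are capped at roughly 1,000 entries, so very large folders may need the git trees API instead.
import pandas as pd
import requests

# List the directory via the GitHub contents API, then read each CSV directly
# from its download_url.
api_url = ("https://api.github.com/repos/CSSEGISandData/COVID-19/contents/"
           "csse_covid_19_data/csse_covid_19_daily_reports")

entries = requests.get(api_url).json()
csv_urls = [e["download_url"] for e in entries if e["name"].endswith(".csv")]

one_df = pd.concat((pd.read_csv(u) for u in csv_urls), ignore_index=True)
print(one_df.shape)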
I want to fetch the npm package metadata. I found this endpoint which gives me all the metadata needed.
My plan is to select some specific keys and add that data to a database (I could also store it in a JSON file, but the data is huge). I made the following script to fetch the data:
import requests
import json
import sys
db = 'https://replicate.npmjs.com';
r = requests.get('https://replicate.npmjs.com/_all_docs', headers={"include_docs" : "true"})
for line in r.iter_lines():
    # filter out keep-alive new lines
    if line:
        print(line)
        decoded_line = line.decode('utf-8')
        print(json.loads(decoded_line))
Notice that I don't even include the full docs, but it still gets stuck in an infinite loop. I think this is because the data is huge.
A look at the head of the output from https://replicate.npmjs.com/_all_docs gives me the following:
{"total_rows":1017703,"offset":0,"rows":[
{"id":"0","key":"0","value":{"rev":"1-5fbff37e48e1dd03ce6e7ffd17b98998"}},
{"id":"0-","key":"0-","value":{"rev":"1-420c8f16ec6584c7387b19ef401765a4"}},
{"id":"0----","key":"0----","value":{"rev":"1-55f4221814913f0e8f861b1aa42b02e4"}},
{"id":"0-1-project","key":"0-1-project","value":{"rev":"1-3cc19950252463c69a5e717d9f8f0f39"}},
{"id":"0-100","key":"0-100","value":{"rev":"1-c4f41a37883e1289f469d5de2a7b505a"}},
{"id":"0-24","key":"0-24","value":{"rev":"1-e595ec3444bc1039f10c062dd86912a2"}},
{"id":"0-60","key":"0-60","value":{"rev":"2-32c17752acfe363fa1be7dbd38212b0a"}},
{"id":"0-9","key":"0-9","value":{"rev":"1-898c1d89f7064e58f052ff492e94c753"}},
{"id":"0-_-0","key":"0-_-0","value":{"rev":"1-d47c142e9460c815c19c4ed3355d648d"}},
{"id":"0.","key":"0.","value":{"rev":"1-11c33605f2e3fd88b5416106fcdbb435"}},
{"id":"0.0","key":"0.0","value":{"rev":"1-5e541d4358c255cbcdba501f45a66e82"}},
{"id":"0.0.1","key":"0.0.1","value":{"rev":"1-ce856c27d0e16438a5849a97f8e9671d"}},
{"id":"0.0.168","key":"0.0.168","value":{"rev":"1-96ab3047e57ca1573405d0c89dd7f3f2"}},
{"id":"0.0.250","key":"0.0.250","value":{"rev":"1-c07ad0ffb7e2dc51bfeae2838b8d8bd6"}},
Notice that all the documents start from the second line (i.e. all the documents are part of the "rows" key's value). My question is how to get only the values of the "rows" key (i.e. all the documents). I found this repository for a similar purpose, but I can't use or convert it as I am a total beginner in JavaScript.
If there is no stream=True among the arguments of get() then the whole data will be downloaded into memory before the loop over the lines even starts.
Then there is the problem that the individual lines themselves are not valid JSON. You'll need an incremental JSON parser like ijson for this. ijson in turn wants a file-like object, which isn't easily obtained from a requests.Response, so I will use urllib from the Python standard library here:
#!/usr/bin/env python3
from urllib.request import urlopen

import ijson


def main():
    with urlopen('https://replicate.npmjs.com/_all_docs') as json_file:
        for row in ijson.items(json_file, 'rows.item'):
            print(row)


if __name__ == '__main__':
    main()
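That said, the raw urllib3 response underneath requests can also serve as the file-like object ijson wants. A minimal sketch of that variant (my addition), assuming you stream the response and let urllib3 decode any compressed content:
import ijson
import requests

with requests.get('https://replicate.npmjs.com/_all_docs', stream=True) as r:
    # Let urllib3 transparently decompress the body if the server gzipped it.
    r.raw.decode_content = True
    for row in ijson.items(r.raw, 'rows.item'):
        print(row['id'], row['value']['rev'])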
Is there a reason why you aren't decoding the json before iterating over the lines?
Can you try this:
import requests
import json

r = requests.get('https://replicate.npmjs.com/_all_docs', headers={"include_docs": "true"})
decoded_r = r.content.decode('utf-8')
data = json.loads(decoded_r)
for row in data['rows']:
    print(row['key'])
I'm trying to download about 500k small CSV files (5 KB to 1 MB) from a list of URLs, but it has been taking too long to get this done. With the code below, I am lucky if I get 10k files a day.
I have tried using the multiprocessing package and a pool to download multiple files simultaneously. This seems to be effective for the first few thousand downloads, but eventually the overall speed goes down. I am no expert, but I assume the decreasing speed indicates that the server I am trying to download from cannot keep up with this number of requests. Does that make sense?
To be honest, I am quite lost here and was wondering if there is any piece of advice on how to speed this up.
import urllib2
import pandas as pd
import csv
from multiprocessing import Pool
#import url file
df = pd.read_csv("url_list.csv")
#select only part of the total list to download
list=pd.Series(df[0:10000])
#define a job and set file name as the id string under urls
def job(url):
    file_name = str("test/" + url[46:61]) + ".csv"
    u = urllib2.urlopen(url)
    f = open(file_name, 'wb')
    f.write(u.read())
    f.close()
#run job
pool = Pool()
url = [ "http://" + str(file_path) for file_path in list]
pool.map(job, url)
You are reinventing the wheel!
How about this:
parallel -a urls.file axel
Of course you'll have to install parallel and axel for your distribution.
axel is a multi-connection download accelerator, a counterpart to wget.
parallel lets you run many such downloads concurrently.
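If you would rather stay in Python, an alternative (not from the original answer) is a thread pool, which is usually enough for I/O-bound downloads like these. A minimal sketch, assuming the URLs sit in the first column of url_list.csv and that the file-naming scheme below is adapted to your own:
import os
from concurrent.futures import ThreadPoolExecutor

import pandas as pd
import requests

# Assumes the URLs are in the first column of url_list.csv.
urls = ["http://" + str(p) for p in pd.read_csv("url_list.csv").iloc[:10000, 0]]
os.makedirs("test", exist_ok=True)

def download(url):
    # Derive a file name from the URL; adjust this to your real naming scheme.
    file_name = os.path.join("test", url.rsplit("/", 1)[-1] or "index.csv")
    try:
        r = requests.get(url, timeout=30)
        r.raise_for_status()
        with open(file_name, "wb") as f:
            f.write(r.content)
    except requests.RequestException as exc:
        print("failed: {} ({})".format(url, exc))

# Keep the worker count modest so the server is not overwhelmed.
with ThreadPoolExecutor(max_workers=20) as pool:
    pool.map(download, urls)
If the server still throttles you, lowering max_workers or adding a small delay between requests usually helps more than adding extra processes.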
I'm working on a project that requires input from data that's displayed in a live Flash graph (a data logging chart at http://137.205.144.34/flash/index.html#menuIndex=1&accordionIndex=2&menuId=mimic1&menuStruct=S1R2M3C1H1). As the HTML couldn't be accessed directly, I used Firebug to monitor my activity and found the data I wanted at http://137.205.144.34/services/unload.cmd?format=csvx&sched=&start=-240:00:00&id=75631&step=864. However, when I try to access this URL, it automatically saves a file (containing the data) to my PC, so I can't access the HTML source code. Using the URL, I have used BeautifulSoup to import the data, but I can't search or manipulate it using HTML tags as they are unknown. The only data I actually want is the latest hourly reading, one of ~1300 lines, and of that line I only need the last value. Is there a way I could find the HTML tags? If not, what would be the best way to extract the bit of data I need?
Any help would be greatly appreciated,
Thanks.
The file you are downloading has no HTML in it. It is a comma-separated file and you should use the csv module to parse it.
This code will print the first item in each row (the item that contains the date and time):
import csv
with open('unload.cmd', 'r') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        print row[0]
This works assuming that you are using the file downloaded with the default name.
To download the file programmatically instead, read it into a string and then use that as the source for csv.reader():
import urllib
import csv
import StringIO
url = 'http://137.205.144.34/services/unload.cmd?format=csvx&sched=&start=-240:00:00&id=75631&step=864'
f = urllib.urlopen(url)
data = f.read()
reader = csv.reader(StringIO.StringIO(data))
for row in reader:
    if row:
        print row[0]
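Since only the last value of the most recent reading is needed, you could replace the printing loop above with something like this (a sketch that assumes the newest hourly reading is the last non-empty row of the file):
# Assumes the newest hourly reading is the last non-empty row.
rows = [row for row in csv.reader(StringIO.StringIO(data)) if row]
latest = rows[-1]
print(latest[-1])  # the last value on the latest line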