Converting unicode in list to dataframe - python

I am using an API to get some data. The data returned is in Unicode (not a dictionary / json object).
# get data
import requests

data = []
for url in api_call_list:
    data.append(requests.get(url))
the data looks like this:
>>> data[0].text
u'Country;Celebrity;Song Volume;CPP;Index\r\nus;Taylor Swift;33100;0.83;0.20\r\n'
>>> data[1].text
u'Country;Celebrity;Song Volume;CPP;Index\r\nus;Rihanna;28100;0.76;0.33\r\n'
I want to put this in a DataFrame with Country, Celebrity, Song Volume, CPP and Index as the column names.
First I tried to split it on \r\n like this:
x = [i.text.split('\r\n') for i in data]
and got:
[[u'Country;Celebrity;Song Volume;CPP;Index',
  u'us;Taylor Swift;33100;0.83;0.20',
  u''],
 [u'Country;Celebrity;Song Volume;CPP;Index',
  u'us;Rihanna;28100;0.76;0.33',
  u'']]
Not sure where to go from here . . .

You can use pandas.read_csv to read the data as a list of data frames and then concatenate them:
# on Python 2, use "from io import BytesIO" and BytesIO instead
from io import StringIO

import pandas as pd

pd.concat([pd.read_csv(StringIO(d), sep=";") for d in data])
Since your actual data is a list of responses, you may need to access the text attribute first:
pd.concat([pd.read_csv(StringIO(d.text), sep=";") for d in data])
The snippets above assume data is the list of raw strings:
data = [u'Country;Celebrity;Song Volume;CPP;Index\r\nus;Taylor Swift;33100;0.83;0.20\r\n',
        u'Country;Celebrity;Song Volume;CPP;Index\r\nus;Rihanna;28100;0.76;0.33\r\n']
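For reference, concatenating the two sample strings above should produce something like this (a sketch; ignore_index=True gives a fresh 0..n index instead of each piece keeping its own):
import pandas as pd
from io import StringIO

df = pd.concat([pd.read_csv(StringIO(d), sep=";") for d in data], ignore_index=True)
print(df)
#   Country     Celebrity  Song Volume   CPP  Index
# 0      us  Taylor Swift        33100  0.83   0.20
# 1      us       Rihanna        28100  0.76   0.33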

Related

Skip first line in import statement using gc.open_by_url from gspread (i.e. add header=0)

What is the gspread equivalent of header=0 in pandas, which recognises the first line as the header?
pandas import statement (correct)
import pandas as pd
# gcp / google sheets URL
df_URL = "https://docs.google.com/spreadsheets/d/1wKtvNfWSjPNC1fNmTfUHm7sXiaPyOZMchjzQBt1y_f8/edit?usp=sharing"
raw_dataset = pd.read_csv(df_URL, na_values='?', sep=';',
                          skipinitialspace=True, header=0, index_col=None)
Using gspread, so far I import the data, promote the first row to the header, and then delete that row, but this leaves everything in the DataFrame as strings. I would like the import itself to recognise the first row as the header.
gspread import statement that needs header=True equivalent
import pandas as pd
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
# gcp / google sheets url
df_URL = "https://docs.google.com/spreadsheets/d/1wKtvNfWSjPNC1fNmTfUHm7sXiaPyOZMchjzQBt1y_f8/edit?usp=sharing"
# importing the data from Google Drive setup
gc = gspread.authorize(GoogleCredentials.get_application_default())
# read data and put it in dataframe
g_sheets = gc.open_by_url(df_URL)
df = pd.DataFrame(g_sheets.get_worksheet(0).get_all_values())
# change first row to header
df = df.rename(columns=df.iloc[0])
# drop first row
df.drop(index=df.index[0], axis=0, inplace=True)
Looking at the API documentation, you probably want to use:
df = pd.DataFrame(g_sheets.get_worksheet(0).get_all_records(head=1))
The .get_all_records() method returns a list of dictionaries, one per row, with the column headers as the keys and the cell values as the dictionary values. The argument head=<int> determines which row to use as the keys; rows start from 1 and follow the numeration of the spreadsheet.
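For illustration, if the first row of the sheet held the headers Name and Score (made-up headers and values), the call would come back roughly as:
records = g_sheets.get_worksheet(0).get_all_records(head=1)
# records == [{'Name': 'a', 'Score': '1.5'}, {'Name': 'b', 'Score': '-'}]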
Since the values returned by .get_all_records() are strings, the data frame constructor, pd.DataFrame, will return a data frame that is all strings. To convert it to floats, we need to replace the empty strings and the dash-only strings ('-') with NA-type values, then convert to float.
Luckily pandas.DataFrame has a convenient method for replacing values, .replace. We can pass it a mapping from the strings we want treated as NAs to None, which gets converted to NaN.
import pandas as pd

data = g_sheets.get_worksheet(0).get_all_records(head=1)

na_strings_map = {
    '-': None,
    '': None,
}

df = pd.DataFrame(data).replace(na_strings_map).astype(float)

Reading Json file and converting it to columns in python

I am trying to read this json file in python using this code (I want to have all the data in a data frame):
import numpy as np
import pandas as pd
import json
from pandas.io.json import json_normalize
df = pd.read_json('short_desc.json')
df.head()
[screenshot: data frame head]
Using this code I am able to convert only the first row into separate columns:
json_normalize(df.short_desc.iloc[0])
[screenshot: first row normalized]
I want to do the same for whole df using this code:
df.apply(lambda x : json_normalize(x.iloc[0]))
but I get this error:
ValueError: If using all scalar values, you must pass an index
What am I doing wrong?
Thank you in advance
After reading the json file with json.load, you can use pd.DataFrame.from_records. This should create the DataFrame you are looking for.
import json
import pandas as pd

with open('short_desc.json') as f:
    d = json.load(f)

df = pd.DataFrame.from_records(d)
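If the goal is to flatten the nested short_desc column for every row at once (what the apply attempt was after), a sketch using pd.json_normalize (pandas >= 1.0; older versions expose it as pandas.io.json.json_normalize):
import pandas as pd

df = pd.read_json('short_desc.json')
# normalize the whole column of dicts in one call, then rejoin on the index
flat = pd.json_normalize(df['short_desc'].tolist())
df = df.drop(columns=['short_desc']).join(flat)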

How do I save my list to a dataframe keeping empty rows?

I'm trying to extract subject-verb-object triplets and then attach an ID. I am using a loop, so my list of extracted triplets keeps empty results for the rows where no triplet was found. It looks like:
[]
[trump,carried,energy]
[]
[clinton,doesn't,trust]
When I print mylist it looks as expected.
However, when I try to create a dataframe from mylist I get an error caused by the empty rows:
IndexError: list index out of range
I tried to include an if statement to avoid this, but the problem is the same. I also tried using reindex instead, but the df2 came out empty.
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import spacy
import textacy
import csv, string, re
import numpy as np
import pandas as pd

# Import csv file with pre-processing already carried out
df = pd.read_csv("pre-processed_file_1.csv", sep=",")

# Prepare dataframe to be relevant columns and unicode
df1 = df[['text_1', 'id']].copy()

import StringIO
s = StringIO.StringIO()
tweets = df1.to_csv(encoding='utf-8')

nlp = spacy.load('en')
count = 0
df2 = pd.DataFrame()
for row in df1.iterrows():
    doc = nlp(unicode(row))
    text_ext = textacy.extract.subject_verb_object_triples(doc)
    tweetID = df['id'].tolist()
    mylist = list(text_ext)
    count = count + 1
    if mylist:
        df2 = df2.append(mylist, ignore_index=True)
    else:
        df2 = df2.append('0', '0', '0')
Any help would be very appreciated. Thank you!
You're supposed to pass a DataFrame-shaped object to append; passing the raw data doesn't work. So: df2 = df2.append([['0', '0', '0']], ignore_index=True)
You can also wrap your processing in a function process_row, then do df2 = pd.DataFrame([process_row(row) for row in df1.iterrows()]). Note that while append won't work with empty rows, the DataFrame constructor just fills them in with None. If you want empty rows to be ['0','0','0'] instead, you have several options (see the sketch after this list):
- Have your processing function return ['0','0','0'] for empty rows
- Change the list comprehension to [process_row(row) if process_row(row) else ['0','0','0'] for row in df1.iterrows()]
- Do df2 = df2.fillna('0')
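A minimal sketch of the process_row approach, assuming the spaCy/textacy setup from the question (the '0' placeholders and keeping only the first triple are illustrative choices):
def process_row(row):
    # row is the (index, Series) pair yielded by df1.iterrows()
    doc = nlp(unicode(row))
    triples = list(textacy.extract.subject_verb_object_triples(doc))
    if triples:
        # a triple is a (subject, verb, object) tuple of spans
        return [unicode(part) for part in triples[0]]
    return ['0', '0', '0']

df2 = pd.DataFrame([process_row(row) for row in df1.iterrows()],
                   columns=['subject', 'verb', 'object'])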

Read json file as pandas dataframe?

I am using python 3.6 and trying to download json file (350 MB) as pandas dataframe using the code below. However, I get the following error:
data_json_str = "[" + ",".join(data) + "]"
TypeError: sequence item 0: expected str instance, bytes found
How can I fix the error?
import pandas as pd
# read the entire file into a python array
with open('C:/Users/Alberto/nutrients.json', 'rb') as f:
    data = f.readlines()
# remove the trailing "\n" from each line
data = map(lambda x: x.rstrip(), data)
# each element of 'data' is an individual JSON object.
# i want to convert it into an *array* of JSON objects
# which, in and of itself, is one large JSON object
# basically... add square brackets to the beginning
# and end, and have all the individual business JSON objects
# separated by a comma
data_json_str = "[" + ",".join(data) + "]"
# now, load it into pandas
data_df = pd.read_json(data_json_str)
From your code, it looks like you're loading a JSON file which has JSON data on each separate line. read_json supports a lines argument for data like this:
data_df = pd.read_json('C:/Users/Alberto/nutrients.json', lines=True)
Note
Remove lines=True if you have a single JSON object instead of individual JSON objects on each line.
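For instance, a file in the line-delimited layout that lines=True expects looks like this (contents made up): one complete JSON object per line, with no enclosing brackets or separating commas.
{"name": "butter", "calories": 717}
{"name": "flour", "calories": 364}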
Using the json module you can parse the json into a python object, then create a dataframe from that:
import json
import pandas as pd
with open('C:/Users/Alberto/nutrients.json', 'r') as f:
    data = json.load(f)
df = pd.DataFrame(data)
If you open the file as binary ('rb'), you will get bytes. How about:
with open('C:/Users/Alberto/nutrients.json', 'rU') as f:
Also as noted in this answer you can also use pandas directly like:
df = pd.read_json('C:/Users/Alberto/nutrients.json', lines=True)
If you want to convert it into an array of JSON objects, I think this one will do what you want:
import json

data = []
with open('nutrients.json', errors='ignore') as f:
    for line in f:
        data.append(json.loads(line))
print(data[0])
The easiest way to read a json file using pandas is:
pd.read_json("sample.json", lines=True, orient='columns')
To deal with nested json like this:
[[{"Value1": 1}, {"value2": 2}], [{"value3": 3}, {"value4": 4}], ...]
use Python basics:
value1 = df['column_name'][0][0].get('Value1')
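A self-contained sketch of that kind of access (the column name and values are made up):
import pandas as pd

nested = [[{"Value1": 1}, {"value2": 2}], [{"value3": 3}, {"value4": 4}]]
df = pd.DataFrame({'column_name': nested})

# first row of the column -> list of dicts, first dict in it, then the key
value1 = df['column_name'][0][0].get('Value1')
print(value1)  # 1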
Please see the code below.
#call the pandas library
import pandas as pd
#set the file location as URL or filepath of the json file
url = 'https://www.something.com/data.json'
#load the json data from the file to a pandas dataframe
df = pd.read_json(url, orient='columns')
#display the top 10 rows from the dataframe (this is to test only)
df.head(10)
Please review the code and modify based on your need. I have added comments to explain each line of code. Hope this helps!

Return index values of a sorted row using pandas?

I just recently discovered the power of pandas. (Thanks Wes McKinney!) I have a csv that contains the following information:
RUN_START_DATE,PUSHUP_START_DATE,SITUP_START_DATE,PULLUP_START_DATE
2013-01-24,2013-01-02,2013-01-30,2013-02-03
2013-01-30,2013-01-21,2013-01-13,2013-01-06
2013-01-29,2013-01-28,2013-01-01,2013-01-29
2013-02-16,2013-02-12,2013-01-04,2013-02-11
2013-01-06,2013-02-07,2013-02-25,2013-02-12
2013-01-26,2013-01-28,2013-02-12,2013-01-10
2013-01-26,2013-02-10,2013-01-12,2013-01-30
2013-01-03,2013-01-24,2013-01-19,2013-01-02
2013-01-22,2013-01-13,2013-02-03,2013-02-05
2013-02-06,2013-01-16,2013-02-07,2013-01-11
Normally, I do not use pandas for this process. I use the csv library to generate lists, convert them using the datetime library, and then loop through each line and run something like the following to get the sorted index of each row:
'"' + ','.join(map(str, sorted(range(len(dates)), key=lambda k: dates[k]))) + '"'
It then returns something like this for each line:
Out[40]: '"1,0,2,3"'
I then add it at the end of each line as a new field in my csv.
I can read the csv into pandas and convert the items to the date dtype. I am just unsure how to get the sorted index values using pandas, flatten them into a string, and put them into a column. Any help most appreciated!
You can use numpy.argsort() to get the sort index:
from StringIO import StringIO
import numpy as np
import pandas as pd
txt = """RUN_START_DATE,PUSHUP_START_DATE,SITUP_START_DATE,PULLUP_START_DATE
2013-01-24,2013-01-02,2013-01-30,2013-02-03
2013-01-30,2013-01-21,2013-01-13,2013-01-06
2013-01-29,2013-01-28,2013-01-01,2013-01-29
2013-02-16,2013-02-12,2013-01-04,2013-02-11
2013-01-06,2013-02-07,2013-02-25,2013-02-12
2013-01-26,2013-01-28,2013-02-12,2013-01-10
2013-01-26,2013-02-10,2013-01-12,2013-01-30
2013-01-03,2013-01-24,2013-01-19,2013-01-02
2013-01-22,2013-01-13,2013-02-03,2013-02-05
2013-02-06,2013-01-16,2013-02-07,2013-01-11"""
df = pd.read_csv(StringIO(txt))
idx = np.argsort(df, axis=1)
buf = StringIO()
idx.to_csv(buf, index=False, header=False)
print buf.getvalue()
the output:
1,0,2,3
3,2,1,0
2,1,0,3
2,3,1,0
0,1,3,2
3,0,1,2
2,0,3,1
3,0,2,1
1,0,2,3
3,1,0,2
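To get the quoted, comma-joined form from the question into a new column, one way is to join each row of idx back into a string (a sketch in the same Python 2 style; it assumes idx came back as a DataFrame, as in the code above):
# join each row of sort indices into a quoted string and store it as a new column
df['SORT_ORDER'] = idx.apply(
    lambda row: '"' + ','.join(map(str, row)) + '"', axis=1)
print df['SORT_ORDER'].head()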
