Not reading all rows while importing csv into pandas dataframe - python
I am trying the kaggle challenge here, and unfortunately I am stuck at a very basic step.
I am trying to read the datasets into a pandas dataframe by executing following command:
test = pd.DataFrame.from_csv("C:/Name/DataMining/hillary/data/output/emails.csv")
The problem is that this file, as you will find out, has over 300,000 records, but I am reading only 7945.
print (test.shape)
(7945, 21)
Now I have double checked the file and I cannot find anything special about line number 7945. Any pointers why this could be happening?
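One way to check whether the parser is genuinely stopping early is to compare the raw line count of the file with the parsed row count — a minimal sketch, reusing the asker's path:

import pandas as pd

path = "C:/Name/DataMining/hillary/data/output/emails.csv"

# Count physical lines in the file
with open(path, encoding="utf-8") as f:
    raw_lines = sum(1 for _ in f)

parsed = pd.read_csv(path)
# If raw_lines is far larger than len(parsed), the extra lines are newlines
# embedded inside quoted fields (e.g. email bodies), not separate records.
print(raw_lines, len(parsed))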
I think it is better to use the function read_csv with the parameters quoting=csv.QUOTE_NONE and error_bad_lines=False (see the read_csv documentation).
import pandas as pd
import csv
test = pd.read_csv("output/Emails.csv", quoting=csv.QUOTE_NONE, error_bad_lines=False)
print (test.shape)
#(381422, 22)
But some problematic rows will be skipped.
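A side note if you are on a recent pandas version: error_bad_lines was deprecated in pandas 1.3 and later removed; on_bad_lines='skip' is the modern equivalent. A minimal sketch:

import csv
import pandas as pd

# on_bad_lines='skip' replaces error_bad_lines=False on pandas >= 1.3
test = pd.read_csv("output/Emails.csv", quoting=csv.QUOTE_NONE, on_bad_lines="skip")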
If you want to skip the email body data, you can use:
import pandas as pd
import csv
test = pd.read_csv(
    "output/Emails.csv",
    quoting=csv.QUOTE_NONE,
    sep=',',
    error_bad_lines=False,
    header=None,
    names=[
        "Id", "DocNumber", "MetadataSubject", "MetadataTo", "MetadataFrom",
        "SenderPersonId", "MetadataDateSent", "MetadataDateReleased",
        "MetadataPdfLink", "MetadataCaseNumber", "MetadataDocumentClass",
        "ExtractedSubject", "ExtractedTo", "ExtractedFrom", "ExtractedCc",
        "ExtractedDateSent", "ExtractedCaseNumber", "ExtractedDocNumber",
        "ExtractedDateReleased", "ExtractedReleaseInPartOrFull",
        "ExtractedBodyText", "RawText"])
print (test.shape)
# delete rows with NaN in column MetadataFrom
test = test.dropna(subset=['MetadataFrom'])

# delete repeated header rows that were read as data (because header=None)
test = test[test.MetadataFrom != 'MetadataFrom']
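A quick sanity check after the cleanup, as a sketch:

# No repeated header rows should survive the filter above
assert not (test.MetadataFrom == 'MetadataFrom').any()
print(test.shape)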
Related
CSV dataframe doesn't match with dataframe generated from URL
I have a file that I download from the NREL API. When I try to compare it with an older CSV I get a difference using the .equals command in pandas, but both files are 100% the same. The only difference is that one DataFrame is generated from the CSV and the other directly from the API URL. Below is my code; why is there a difference?

import pandas as pd

NERL_url = "https://developer.nrel.gov/api/alt-fuel-stations/v1.csv?api_key=DEMO_KEY&fuel_type=ELEC&country=all"
outputPath = r"D:\<myPCPath>\nerl.csv"

urlDF = pd.read_csv(NERL_url, low_memory=False)
urlDF.to_csv(outputPath, header=True, index=None, encoding='utf-8-sig')
csv_df = pd.read_csv(outputPath, low_memory=False)

if csv_df.equals(urlDF):
    print("Same")
else:
    print("Different")

My output comes out as "Different". How do I fix this, and why is there a difference?
The problem is precision in read_csv: set float_precision='round_trip'. The NaN values are then compared, so they need to be replaced by the same placeholder on both sides:

NERL_url = "https://developer.nrel.gov/api/alt-fuel-stations/v1.csv?api_key=DEMO_KEY&fuel_type=ELEC&country=all"
outputPath = r"nerl.csv"

urlDF = pd.read_csv(NERL_url, low_memory=False)
urlDF.to_csv(outputPath, header=True, index=None, encoding='utf-8-sig')
csv_df = pd.read_csv(outputPath, low_memory=False, float_precision='round_trip')

if csv_df.fillna('same').equals(urlDF.fillna('same')):
    print("Same")
else:
    print("Different")

Output:

Same
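To illustrate why float_precision matters, here is a self-contained sketch with made-up numbers: the default float converter in the C parser can be off by one bit from Python's own float(), while 'round_trip' guarantees the written value is read back exactly.

import io
import pandas as pd

df = pd.DataFrame({'x': [0.1 + 0.2, 1 / 3, 2 / 3]})
buf = io.StringIO()
df.to_csv(buf, index=False)

buf.seek(0)
default_read = pd.read_csv(buf)
buf.seek(0)
exact_read = pd.read_csv(buf, float_precision='round_trip')

print(df.equals(default_read))   # may be False for some values/platforms
print(df.equals(exact_read))     # True: values survive the text round trip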
Using pandas on a modified CSV file?
Is it possible to use pandas after rewriting data into a CSV using something like this?

import csv

headers = []
cleaned_data = open('cleaned_data.csv', 'w')
writer = csv.writer(cleaned_data)

for row in open("house_prices.csv"):
    # <-- Some body code here to filter out the headers

This is where I want to continue with my cleaning of the data and get rid of rows that contain missing values. I've been told that using pandas is the way to go, but I'm not sure if it's OK to do that, since the first step would be to write this code:

import pandas as pd

df = pd.read_csv('house_prices.csv')

which conflicts with my first code, right? So is it possible to remove rows with missing values this way, or is there another way without importing anything? Or would it be possible to combine both? I.e.:

import csv
import pandas as pd

headers = []
cleaned_data = open('cleaned_data.csv', 'w')
writer = csv.writer(cleaned_data)

df = pd.read_csv('house_prices.csv')
df.dropna(axis=0, how='any', thresh=None, subset=None, inplace=False)

for row in open("house_prices.csv"):
    # <-- Some body code here to filter out the headers

Would that work? This is the first time I'm seeing pandas.
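For reference, a minimal pandas-only sketch of the whole task (no csv module needed); note that dropna returns a new DataFrame unless you reassign it or pass inplace=True, which the snippet above overlooks:

import pandas as pd

df = pd.read_csv('house_prices.csv')
df = df.dropna()                       # drop rows containing any missing value
df.to_csv('cleaned_data.csv', index=False)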
Pandas read_csv pulling CSV columns as strings instead of integers and putting them into the first column as a single value
I have been trying to build a script in Python that pulls the info from a set of CSV files. The format of the CSV is as follows and has no header: ['Day','Hour','Seconds','Microsecods','x_accel','y_accel']. Instead of putting the values into the corresponding columns, pandas is pulling the values and making them one string, like "9,40,19,65664,-0.527,-0.333", in the first column. I tried using dtype and sep=',' but it did not work. I don't understand why it does not fit them properly into the right columns. This is my script:

import numpy as np
import os
import pandas as pd

os.chdir('C:/Users/pc/Desktop/41x/Learning_set/Bearing1_1')
path = os.getcwd()
files = os.listdir(path)

df = pd.DataFrame()
columns = ['Day','Hour','Seconds','Microsecods','x_accel','y_accel']
for f in files:
    data = pd.read_csv(f, 'Sheet1', header=None, engine='python', names=columns)
    df = df.append(data)

print(df)

This is the pandas output (screenshot omitted). This is a snapshot of the CSV (screenshot omitted).
You're using the read_csv function, but in your arguments you are implying that the separator value is 'Sheet1':

pd.read_csv(f, 'Sheet1', header=None, engine='python', names=columns)

Is it a CSV, or is it from an Excel file? If it is a CSV, then most likely you can just remove this argument and it will work as expected.
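A sketch of the corrected call, assuming the files really are plain comma-separated text (the file name here is hypothetical, standing in for each f in the question's loop):

import pandas as pd

columns = ['Day', 'Hour', 'Seconds', 'Microsecods', 'x_accel', 'y_accel']
# Dropping the stray 'Sheet1' argument lets the default ',' separator apply
data = pd.read_csv('acc_00001.csv', header=None, names=columns)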
How to subtract one column's data from the 2nd row to the 1st row in CSV files using Python
I have a CSV file with data like:

data,data,10.00
data,data,11.00
data,data,12.00

I need to update this as:

data,data,10.00
data,data,11.00,1.00 (11.00-10.00)
data,data,12.30,1.30 (12.30-11.00)

Could you help me update the CSV file using Python?
You can use pandas and numpy: pandas reads/writes the CSV and numpy does the calculations.

import pandas as pd
import numpy as np

data = pd.read_csv('test.csv', header=None)
col_data = data[2].values
diff = np.diff(col_data)
diff = np.insert(diff, 0, 0)

data['diff'] = diff

# write data to file
data.to_csv('test1.csv', header=False, index=False)

When you open test1.csv you will find the correct results as described above, with the addition of a zero next to the first data point. For more info see the following docs:

http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
http://pandas.pydata.org/pandas-docs/version/0.18.1/generated/pandas.DataFrame.to_csv.html
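An alternative sketch using pandas' own diff(), which avoids numpy entirely; the first row gets NaN instead of 0:

import pandas as pd

data = pd.read_csv('test.csv', header=None)
data['diff'] = data[2].diff()   # row-to-row difference of the third column
data.to_csv('test1.csv', header=False, index=False)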
Read specific columns with pandas or other python module
I have a CSV file from this webpage. I want to read some of the columns in the downloaded file (the CSV version can be downloaded in the upper right corner). Let's say I want 2 columns:

59, which in the header is star_name
60, which in the header is ra

However, for some reason the authors of the webpage sometimes decide to move the columns around. In the end I want something like this, keeping in mind that values can be missing:

data = # read data in a clever way
names = data['star_name']
ras = data['ra']

This will prevent my program from malfunctioning when the columns are changed again in the future, as long as the names stay correct. Until now I have tried various ways using the csv module and recently the pandas module, both without any luck.

EDIT (added two lines + the header of my datafile. Sorry, but it's extremely long.)

# name, mass, mass_error_min, mass_error_max, radius, radius_error_min, radius_error_max, orbital_period, orbital_period_err_min, orbital_period_err_max, semi_major_axis, semi_major_axis_error_min, semi_major_axis_error_max, eccentricity, eccentricity_error_min, eccentricity_error_max, angular_distance, inclination, inclination_error_min, inclination_error_max, tzero_tr, tzero_tr_error_min, tzero_tr_error_max, tzero_tr_sec, tzero_tr_sec_error_min, tzero_tr_sec_error_max, lambda_angle, lambda_angle_error_min, lambda_angle_error_max, impact_parameter, impact_parameter_error_min, impact_parameter_error_max, tzero_vr, tzero_vr_error_min, tzero_vr_error_max, K, K_error_min, K_error_max, temp_calculated, temp_measured, hot_point_lon, albedo, albedo_error_min, albedo_error_max, log_g, publication_status, discovered, updated, omega, omega_error_min, omega_error_max, tperi, tperi_error_min, tperi_error_max, detection_type, mass_detection_type, radius_detection_type, alternate_names, molecules, star_name, ra, dec, mag_v, mag_i, mag_j, mag_h, mag_k, star_distance, star_metallicity, star_mass, star_radius, star_sp_type, star_age, star_teff, star_detected_disc, star_magnetic_field
11 Com b,19.4,1.5,1.5,,,,326.03,0.32,0.32,1.29,0.05,0.05,0.231,0.005,0.005,0.011664,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1,2008,2011-12-23,94.8,1.5,1.5,2452899.6,1.6,1.6,Radial Velocity,,,,,11 Com,185.1791667,17.7927778,4.74,,,,,110.6,-0.35,2.7,19.0,G8 III,,4742.0,,
11 UMi b,10.5,2.47,2.47,,,,516.22,3.25,3.25,1.54,0.07,0.07,0.08,0.03,0.03,0.012887,,,,,,,,,,,,,,,,,,,,,,,,,,,,,1,2009,2009-08-13,117.63,21.06,21.06,2452861.05,2.06,2.06,Radial Velocity,,,,,11 UMi,229.275,71.8238889,5.02,,,,,119.5,0.04,1.8,24.08,K4III,1.56,4340.0,,
An easy way to do this is using the pandas library:

import pandas as pd

fields = ['star_name', 'ra']
df = pd.read_csv('data.csv', skipinitialspace=True, usecols=fields)

# See the keys
print df.keys()

# See content in 'star_name'
print df.star_name

The key here was skipinitialspace, which removes the spaces after the delimiters in the header, so ' star_name' becomes 'star_name'.
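If the concern is that columns may be renamed or removed in future downloads, a defensive sketch: read just the header first, then request only the columns that actually exist (usecols raises a ValueError for missing names):

import pandas as pd

wanted = ['star_name', 'ra']
# nrows=0 reads only the header row
header = pd.read_csv('data.csv', skipinitialspace=True, nrows=0).columns
present = [c for c in wanted if c in header]
df = pd.read_csv('data.csv', skipinitialspace=True, usecols=present)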
According to the latest pandas documentation you can read a CSV file selecting only the columns you want to read:

import pandas as pd

df = pd.read_csv('some_data.csv', usecols=['col1', 'col2'], low_memory=True)

Here usecols reads only the selected columns into the DataFrame, and low_memory makes pandas process the file internally in chunks.
The above answers are for Python 2, so for Python 3 users I am giving this answer. You can use the code below:

import pandas as pd

fields = ['star_name', 'ra']
df = pd.read_csv('data.csv', skipinitialspace=True, usecols=fields)

# See the keys
print(df.keys())

# See content in 'star_name'
print(df.star_name)
Here is a solution to the above problem in a different way: although it reads the entire CSV file, it tweaks the display part to show only the desired content.

import pandas as pd

df = pd.read_csv('data.csv', skipinitialspace=True)
print df[['star_name', 'ra']]

This can help in some scenarios when learning the basics and filtering data on the basis of DataFrame columns.
I think you need to try this method:

import pandas as pd

data_df = pd.read_csv('data.csv')
print(data_df['star_name'])
print(data_df['ra'])