I am trying to download data from a CSV file on GitHub into a DataFrame.
Code:
import pandas as pd

url = "https://github.com/lazyprogrammer/machine_learning_examples/blob/master/linear_regression_class/data_2d.csv"
df = pd.read_csv(url, names=['X1', 'X2', 'y'])
Instead of the file's contents, the DataFrame ends up holding the HTML of the GitHub page.
Try this, using the raw file URL (the blob URL returns the GitHub HTML page rather than the CSV):
import pandas as pd

# Point pandas at the raw file, not the HTML "blob" page
url = "https://raw.githubusercontent.com/lazyprogrammer/machine_learning_examples/master/linear_regression_class/data_2d.csv"
df = pd.read_csv(url, names=['X1', 'X2', 'y'])
print(df.head())
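If you only have the blob link, a small helper can rewrite it to the raw form (a sketch; to_raw_url is a hypothetical name, and it assumes the usual github.com/<user>/<repo>/blob/<branch>/<path> layout):

def to_raw_url(blob_url):
    # github.com/<user>/<repo>/blob/<branch>/<path>
    #   -> raw.githubusercontent.com/<user>/<repo>/<branch>/<path>
    return blob_url.replace(
        "https://github.com/", "https://raw.githubusercontent.com/"
    ).replace("/blob/", "/", 1)

raw = to_raw_url("https://github.com/lazyprogrammer/machine_learning_examples/blob/master/linear_regression_class/data_2d.csv")
# -> https://raw.githubusercontent.com/lazyprogrammer/machine_learning_examples/master/linear_regression_class/data_2d.csv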
I want to download the file to a local directory. The code below works for CSV but not for XLSX: it writes a file, but that file cannot be opened in Excel.
Any help will be appreciated.
import requests

url = 'https://some_url'
resp = requests.get(url)
with open('some_filename.xlsx', 'wb') as f:
    f.write(resp.content)
You could create a DataFrame from the response data and then use DataFrame.to_excel() to obtain the xlsx file. This is a tested solution that worked for me.
import requests
import pandas as pd
import io

url = 'https://some_url'  # a URL that returns CSV data
urlData = requests.get(url).content  # raw bytes of the response
dataframe = pd.read_csv(io.StringIO(urlData.decode('latin-1')))

filename = "data.xlsx"
dataframe.to_excel(filename)  # writing xlsx requires openpyxl or xlsxwriter
In pandas you could just do:
import pandas as pd

url = 'https://some_url'
df = pd.read_csv(url)
df.to_excel('some_filename.xlsx')  # requires openpyxl or xlsxwriter
I found a library that lets me get data from Yahoo Finance very efficiently. It's a wonderful library.
The problem is that I can't save the data into a CSV file.
I've tried converting the data to a pandas DataFrame, but I think I'm doing it incorrectly and I'm getting a bunch of NaNs.
I tried using NumPy to save directly to a CSV file, and that's not working either.
import yfinance as yf
import csv
import numpy as np

urls = [
    'voo',
    'msft'
]

for url in urls:
    tickerTag = yf.Ticker(url)
    print(tickerTag.actions)
    np.savetxt('DivGrabberTest.csv', tickerTag.actions, delimiter='|')
I can print the data to the console and it looks fine. Please help me save it to a CSV. Thank you!
If you want to store the ticker results for each url in different csv files you can do:
for url in urls:
    tickerTag = yf.Ticker(url)
    tickerTag.actions.to_csv("tickertag{}.csv".format(url))
If you want them all in the same csv file, you can do:
import pandas as pd

tickerlist = [yf.Ticker(url).actions for url in urls]
pd.concat(tickerlist).to_csv("tickersconcat.csv")
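If the combined file should also record which ticker each row came from, pd.concat accepts a keys argument that adds an outer index level (a sketch reusing the urls list from the question):

import pandas as pd
import yfinance as yf

urls = ['voo', 'msft']
tickerlist = [yf.Ticker(url).actions for url in urls]
# keys=urls labels each block of rows with its ticker symbol
pd.concat(tickerlist, keys=urls).to_csv("tickersconcat.csv")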
I want to read into pandas the csv generated by this URL:
https://www.alphavantage.co/query?function=FX_DAILY&from_symbol=EUR&to_symbol=USD&apikey=demo&datatype=csv
How should this be done?
I believe you can just read it with pd.read_csv
import pandas as pd
URL = 'https://www.alphavantage.co/query?function=FX_DAILY&from_symbol=EUR&to_symbol=USD&apikey=demo&datatype=csv'
df = pd.read_csv(URL)
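If you prefer to keep the query string readable, you can also let requests assemble the same URL from a dict of parameters and then parse the response text (a sketch using the same demo key):

import io
import requests
import pandas as pd

params = {
    'function': 'FX_DAILY',
    'from_symbol': 'EUR',
    'to_symbol': 'USD',
    'apikey': 'demo',
    'datatype': 'csv',
}
r = requests.get('https://www.alphavantage.co/query', params=params)
df = pd.read_csv(io.StringIO(r.text))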
Using requests, I am creating an object whose content is in .csv format. How can I then load that object into a DataFrame with pandas?
To get the requests object in text format:
import requests
import pandas as pd
url = r'http://test.url'
r = requests.get(url)
r.text  # this will return the data as text in csv format
I tried (doesn't work):
pd.read_csv(r.text)
pd.DataFrame.from_csv(r.text)
Try this:
import requests
import pandas as pd
import io

urlData = requests.get(url).content  # raw bytes of the response
rawData = pd.read_csv(io.StringIO(urlData.decode('utf-8')))  # decode, then parse
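If you would rather not pick an encoding yourself, you can hand pandas the raw bytes and let it decode (a sketch reusing url from the question; pd.read_csv accepts binary buffers):

import requests
import pandas as pd
import io

urlData = requests.get(url).content  # raw bytes, url as defined in the question
rawData = pd.read_csv(io.BytesIO(urlData))  # pandas decodes (utf-8 by default)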
I think you can use read_csv with url:
pd.read_csv(url)
filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.csv
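For instance, the file scheme lets you read a local file the same way (a sketch; the path is hypothetical):

import pandas as pd

# Hypothetical local path, using the file:// scheme from the docs above
df = pd.read_csv('file://localhost/path/to/table.csv')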
import pandas as pd

url = r'http://...'
df = pd.read_csv(url)
If that doesn't work, fetch the text with requests and wrap it in StringIO:
import pandas as pd
import io
import requests
url = r'http://...'
r = requests.get(url)
df = pd.read_csv(io.StringIO(r.text))
Using "read_csv with url" worked:
import pandas as pd
url = 'https://arte.folha.uol.com.br/ciencia/2020/coronavirus/csv/mundo/dados-bra.csv'
corona_bra = pd.read_csv(url)
print(corona_bra.head())
If the URL requires no authentication, you can use read_csv(url) directly.
If it requires authentication, use requests to fetch the response with your credentials, check that the result really is CSV, and then parse it with pandas.
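For example, with HTTP basic auth (a sketch; the URL and credentials are placeholders):

import io
import requests
import pandas as pd

# Placeholder URL and credentials; basic auth assumed
r = requests.get('https://some_url', auth=('user', 'password'))
r.raise_for_status()
print(r.headers.get('Content-Type'))  # confirm the body really is CSV, not an error page
df = pd.read_csv(io.StringIO(r.text))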
Alternatively, you can work with the built-in csv module directly:
import csv
I am looking to gather all the data from the penultimate worksheet in this Excel file, along with all the data in the last worksheet from "Maturity Years" of 5.5 onward. The code I have below currently grabs data solely from the last worksheet, and I was wondering what the necessary alterations would be.
import urllib.request  # urllib2 in the original Python 2 code
import pandas as pd

url = 'http://www.bankofengland.co.uk/statistics/Documents/yieldcurve/uknom05_mdaily.xls'
socket = urllib.request.urlopen(url)
xd = pd.ExcelFile(socket)
df = xd.parse(xd.sheet_names[-1], header=None)
print(df)
I was thinking of using glob, but I haven't seen it applied to an online Excel file.
Edit: I think the following lets me combine two worksheets of data into a single DataFrame. However, if there is a better answer, please feel free to show it.
import urllib.request
import pandas as pd

url = 'http://www.bankofengland.co.uk/statistics/Documents/yieldcurve/uknom05_mdaily.xls'
socket = urllib.request.urlopen(url)
xd = pd.ExcelFile(socket)
df1 = xd.parse(xd.sheet_names[-1], header=None)
df2 = xd.parse(xd.sheet_names[-2], header=None)
bigdata = pd.concat([df1, df2], ignore_index=True)
print(bigdata)
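The edit above concatenates the two sheets but does not yet apply the "Maturity Years" filter of 5.5 onward. A hedged sketch of that step, assuming the maturity years sit in the first row of the last sheet (the real workbook layout may differ):

import urllib.request
import pandas as pd

url = 'http://www.bankofengland.co.uk/statistics/Documents/yieldcurve/uknom05_mdaily.xls'
xd = pd.ExcelFile(urllib.request.urlopen(url))

penultimate = xd.parse(xd.sheet_names[-2], header=None)
last = xd.parse(xd.sheet_names[-1], header=None)

# Assumption: maturity years run across the first row of the last sheet;
# adjust the row index if the workbook is laid out differently.
maturities = pd.to_numeric(last.iloc[0], errors='coerce')
last_from_5_5 = last.loc[:, maturities >= 5.5]

combined = pd.concat([penultimate, last_from_5_5], ignore_index=True)
print(combined)

Because the filtered sheet keeps only a subset of columns, pd.concat fills the missing columns of the other sheet with NaN.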