I just converted something I did in JS (Node) to Python (a Flask web server): connecting to a secured FTP service and reading and parsing CSV files from there, because I know it is faster in Python.
I managed to do almost everything, but I'm having a hard time parsing the CSV file properly.
So this is my code:
import urllib.request
import csv
import json
import pysftp
import pandas as pd

cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
name = 'username'
password = 'pass'
host = 'hostURL'
path = ""
with pysftp.Connection(host=host, username=name, password=password, cnopts=cnopts) as sftp:
    for filename in sftp.listdir():
        if filename.endswith('.csv'):
            file = sftp.open(filename)
            csvFile = file.read()
I got to the part where I can see the content of the CSV file, but I can't parse it into the format I need (an array of objects).
I tried to parse it with:
with open(csvFile, 'rb') as csv_file:
    print(csv_file)
    cr = csv.reader(csv_file, delimiter=",")  # , is the default
    rows = list(cr)
and with this:
Past=pd.read_csv(csvFile,encoding='cp1252')
print(Past)
but I got errors like:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb0 in position 748: invalid start byte
and
OSError: Expected file path name or file-like object, got <class 'bytes'> type
I'm really kinda stuck right now.
(One more question - not important but just wanted to know if I can retrieve a file from ftp based on the latest date - because sometimes there can be more than 1 file in a repository.)
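On the side question: if the server exposes modification times, one way (a sketch, not a definitive implementation) is to pick the newest .csv via listdir_attr(), which pysftp provides. The helper below works on any objects with .filename and .st_mtime attributes, so it is demonstrated here with stand-in objects instead of a live connection:

```python
from types import SimpleNamespace


def newest_csv_name(attrs):
    """Return the filename of the most recently modified .csv entry.

    Works on any objects exposing .filename and .st_mtime, like the
    SFTPAttributes objects returned by pysftp's listdir_attr();
    returns None if no .csv is present.
    """
    csvs = [a for a in attrs if a.filename.endswith('.csv')]
    return max(csvs, key=lambda a: a.st_mtime).filename if csvs else None


# Quick demonstration with stand-in attribute objects:
demo = [SimpleNamespace(filename='a.csv', st_mtime=10),
        SimpleNamespace(filename='b.txt', st_mtime=99),
        SimpleNamespace(filename='c.csv', st_mtime=50)]
print(newest_csv_name(demo))  # c.csv

# Against the real server it would look roughly like:
# with pysftp.Connection(host=host, username=name,
#                        password=password, cnopts=cnopts) as sftp:
#     latest = newest_csv_name(sftp.listdir_attr())
```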
If you don't mind using Pandas (and Numpy)
Pandas' read_csv accepts a file path or a file object (docs). More specifically, it mentions:
By file-like object, we refer to objects with a read() method, such as a file handler (e.g. via builtin open function) or StringIO.
In that sense, using either filename or file from your example should work.
Given this, if using pandas option, try replacing your code with:
df = pd.read_csv(filename, encoding='cp1252') # assuming this is the correct encoding
print(df.head()) # optional, prints top 5 entries
df is now a Pandas DataFrame. To transform a DataFrame into an array of objects, try the to_numpy method (docs):
arr = df.to_numpy() # returns numpy array from DataFrame
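Since the goal was an "array of objects", a DataFrame can also be turned into a list of dicts with to_dict. A minimal sketch with stand-in data (in the question the real df would come from read_csv):

```python
import pandas as pd

# Stand-in data; in the question this DataFrame comes from read_csv.
df = pd.DataFrame({'City': ['Bucuresti'], 'Total': [32.57]})

records = df.to_dict(orient='records')  # one dict per row
print(records)  # [{'City': 'Bucuresti', 'Total': 32.57}]
```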
Related: Reading CSV file into Pandas from SFTP server via Paramiko fails with "'utf-8' codec can't decode byte ... in position ....: invalid start byte"
I'm trying to work around paramiko's strict utf-8 decoding functionality. I want to open the file in binary mode and read into a dataframe line by line. How can I do that?
remote_file = sftp.open(remoteName, "rb")
for line in remote_file:
    print(line.decode("utf8", "ignore"))
I tested this on my server and found the following.
This code
remote_file = sftp.open(remoteName)
print(remote_file.read())
reads the data as bytes, even if I don't set bytes mode (rb).
This code
remote_file = sftp.open(remoteName)
print(remote_file.readlines())
normally reads the data as strings, but can read bytes when I set bytes mode (rb).
It seems that when I use read_csv(remote_file), it uses some inner wrapper that automatically decodes with utf-8, even if I set bytes mode (rb), and setting encoding in read_csv can't change that.
But I can use read() with io.StringIO to decode it manually with e.g. latin1:
import io
remote_file = sftp.open(remoteName)
bytes = remote_file.read()
text = bytes.decode('latin1')
#text = remote_file.read().decode('latin1')
file_obj = io.StringIO(text)
df = pd.read_csv(file_obj)
#df = pd.read_csv(io.StringIO(text))
EDIT:
Based on the answer to the previous question, it also works with io.BytesIO and the encoding argument in read_csv.
import io
remote_file = sftp.open(remoteName)
bytes = remote_file.read()
file_obj = io.BytesIO(bytes)
df = pd.read_csv(file_obj, encoding='latin1')
#df = pd.read_csv(io.BytesIO(bytes), encoding='latin1')
I am unable to grab the data from a CSV file to be able to put it into a pandas dataframe. I am able to get into the directory and see all of the files there, but I haven't been able to access the document.
Here is my code:
from ftplib import FTP_TLS
import socket
import pandas as pd
server = FTP_TLS('server', certfile=r'C:/')
server.login(user,pw)
# get into respective directory
server.cwd('Banana')
server.prot_p()
# This piece here is needed in order to see what is in my directory, I don't understand why.
# Something about the server not being set up correctly?
server.af = socket.AF_INET6
# check location
server.pwd()
# check files
server.dir()
# Get CSV file data
import io
download_file = io.BytesIO()
download_file.seek(0)
server.retrbinary('RETR ' + str('file.csv'), download_file.write)
download_file.seek(0)
file_to_process = pd.read_csv(download_file, engine='python')
The problem I get is that the last chunk of code, from import io down to file_to_process, just sits there and does nothing. Maybe it times out? I'm unsure of the issue.
New error is this:
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8d in position 3376: character maps to <undefined>
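The 'charmap' error suggests the file isn't in the default Windows encoding. One hedged workaround is to pass an explicit single-byte encoding such as latin-1 (which maps every possible byte) to read_csv. A self-contained sketch with stand-in bytes in place of the downloaded buffer:

```python
import io

import pandas as pd

# Stand-in for the bytes written into download_file by retrbinary;
# note the 0x8d byte that the 'charmap' codec cannot decode.
raw = b'col1,col2\r\nabc\x8d,1\r\n'

# latin-1 decodes every byte, so this cannot raise a decode error:
df = pd.read_csv(io.BytesIO(raw), encoding='latin-1')
print(df.loc[0, 'col2'])  # 1
```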
Edit: Now I'm trying to save to disk. But this code deletes the contents of the file. Do I not understand how write works?
import os

filematch = 'Try20.csv'
target_dir = r'\\server'
for filename in server.nlst(filematch):
    target_file_name = os.path.join(target_dir, os.path.basename(filename))
    with open(target_file_name, 'wb') as fhandle:
        server.retrbinary('RETR %s' % filename, fhandle.write)
Secondarily, I don't understand how to write the contents of fhandle into a dataframe now.
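On that last point: fhandle is only the write target during the transfer; once the with-block ends, the file is complete on disk and can be read like any local CSV. A self-contained sketch (a temp file stands in for the downloaded one):

```python
import os
import tempfile

import pandas as pd

# Stand-in: write a small file the way retrbinary + fhandle.write would.
target_file_name = os.path.join(tempfile.gettempdir(), 'Try20_demo.csv')
with open(target_file_name, 'wb') as fhandle:
    fhandle.write(b'a,b\n1,2\n')

# After the download completes, just read the file from disk:
df = pd.read_csv(target_file_name)
print(df.shape)  # (1, 2)
```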
I have a tar file with several files compressed in it. I need to read one specific file (it is in csv format) using pandas. I tried to use the following code:
import tarfile
tar = tarfile.open('my_files.tar', 'r:gz')
f = tar.extractfile('some_files/need_to_be_read.csv')
import pandas as pd
df = pd.read_csv(f.read())
but it throws up the following error:
OSError: Expected file path name or file-like object, got <class 'bytes'> type
on the last line of the code. How do I go about this to read this file?
When you call pandas.read_csv(), you need to give it a filename or file-like object. tar.extractfile() returns a file-like object. Instead of reading the file into memory, pass the file to Pandas.
So remove the .read() part:
import tarfile

import pandas as pd

tar = tarfile.open('my_files.tar', 'r:gz')
f = tar.extractfile('some_files/need_to_be_read.csv')
df = pd.read_csv(f)
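One caveat worth noting: extractfile() returns None when the member isn't a regular file, and a context manager closes the archive cleanly. A self-contained sketch that builds a tiny in-memory tar (standing in for my_files.tar) to demonstrate:

```python
import io
import tarfile

import pandas as pd

# Build a small gzipped tar in memory as a stand-in for my_files.tar.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode='w:gz') as tar:
    data = b'x,y\n1,2\n'
    info = tarfile.TarInfo('some_files/need_to_be_read.csv')
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
buf.seek(0)

with tarfile.open(fileobj=buf, mode='r:gz') as tar:
    f = tar.extractfile('some_files/need_to_be_read.csv')
    if f is None:  # non-regular members (e.g. directories) yield None
        raise FileNotFoundError('member is not a regular file')
    df = pd.read_csv(f)

print(df.shape)  # (1, 2)
```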
I've been struggling with this simple problem for too long, so I thought I'd ask for help. I am trying to read a list of journal articles from National Library of Medicine ftp site into Python 3.3.2 (on Windows 7). The journal articles are in a .csv file.
I have tried the following code:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream)
data = [row for row in csvfile]
It results in the following error:
Traceback (most recent call last):
  File "<pyshell#4>", line 1, in <module>
    data = [row for row in csvfile]
  File "<pyshell#4>", line 1, in <listcomp>
    data = [row for row in csvfile]
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
I presume I should be working with strings not bytes? Any help with the simple problem, and an explanation as to what is going wrong would be greatly appreciated.
The problem is that urllib returns bytes. As proof, you can download the csv file with your browser and open it as a regular file, and the problem is gone.
A similar problem was addressed here.
It can be solved by decoding the bytes to strings with the appropriate encoding. For example:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream.read().decode('utf-8').splitlines())  # use the appropriate encoding; splitlines() yields one string per line, as csv.reader expects
data = [row for row in csvfile]
The last line could also be: data = list(csvfile) which can be easier to read.
By the way, since the csv file is very big, it can slow and memory-consuming. Maybe it would be preferable to use a generator.
EDIT:
Using codecs as proposed by Steven Rumbalski, it's not necessary to read the whole file before decoding. Memory consumption is reduced and speed increased.
import csv
import urllib.request
import codecs
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(codecs.iterdecode(ftpstream, 'utf-8'))
for line in csvfile:
    print(line)  # do something with line
Note that the list is not created either for the same reason.
Even though there is already an accepted answer, I thought I'd add to the body of knowledge by showing how I achieved something similar using the requests package (which is sometimes seen as an alternative to urllib.request).
The basis of using codecs.iterdecode() to solve the original problem is the same as in the accepted answer.
import codecs
from contextlib import closing
import csv
import requests
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
with closing(requests.get(url, stream=True)) as r:
    reader = csv.reader(codecs.iterdecode(r.iter_lines(), 'utf-8'))
    for row in reader:
        print(row)
Here we also see the use of streaming provided through the requests package in order to avoid having to load the entire file over the network into memory first (which could take long if the file is large).
I thought it might be useful since it helped me, as I was using requests rather than urllib.request in Python 3.6.
Some of the ideas (e.g. using closing()) are taken from this similar post.
I had a similar problem using requests package and csv.
The response from post request was type bytes.
In order to use the csv library, I first stored the content as a string file in memory (in my case the size was small), decoded as utf-8.
import io
import csv
import requests
response = requests.post(url, data)
# response.content is something like:
# b'"City","Awb","Total"\r\n"Bucuresti","6733338850003","32.57"\r\n'
csv_bytes = response.content
# write in-memory string file from bytes, decoded (utf-8)
str_file = io.StringIO(csv_bytes.decode('utf-8'), newline='\n')
reader = csv.reader(str_file)
for row_list in reader:
    print(row_list)

# Once the file is closed,
# any operation on the file (e.g. reading or writing) will raise a ValueError
str_file.close()
Printed something like:
['City', 'Awb', 'Total']
['Bucuresti', '6733338850003', '32.57']
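An equivalent without the explicit StringIO is to let requests decode the body (via response.text) and split it into lines, since csv.reader accepts any iterable of strings. A sketch with a stand-in string in place of the response:

```python
import csv

# Stand-in for response.text, i.e. response.content already decoded:
text = '"City","Awb","Total"\r\n"Bucuresti","6733338850003","32.57"\r\n'

rows = list(csv.reader(text.splitlines()))
print(rows)  # [['City', 'Awb', 'Total'], ['Bucuresti', '6733338850003', '32.57']]
```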
urlopen will return a urllib.response.addinfourl instance for an ftp request.
For ftp, file, and data urls and requests explicity handled by legacy
URLopener and FancyURLopener classes, this function returns a
urllib.response.addinfourl object which can work as context manager...
>>> urllib2.urlopen(url)
<addinfourl at 48868168L whose fp = <addclosehook at 48777416L whose fp = <socket._fileobject object at 0x0000000002E52B88>>>
At this point ftpstream is a file-like object; calling .read() would return the contents, but csv.reader requires an iterable of lines in this case.
Defining a generator like so:
def to_lines(f):
    line = f.readline()
    while line:
        yield line
        line = f.readline()
We can create our csv reader like so:
reader = csv.reader(to_lines(ftps))
And with a url
url = "http://pic.dhe.ibm.com/infocenter/tivihelp/v41r1/topic/com.ibm.ismsaas.doc/reference/CIsImportMinimumSample.csv"
The code:
for row in reader: print(row)
Prints
>>>
['simpleci']
['SCI.APPSERVER']
['SRM_SaaS_ES', 'MXCIImport', 'AddChange', 'EN']
['CI_CINUM']
['unique_identifier1']
['unique_identifier2']
I have this website that requires log in to access data.
import pandas as pd
import requests
r = requests.get(my_url, cookies=my_cookies) # my_cookies are imported from a selenium session.
df = pd.io.excel.read_excel(r.content, sheetname=0)
Response:
IOError: [Errno 2] No such file or directory: 'Ticker\tAction\tName\tShares\tPrice\...
Apparently, the str is processed as a filename. Is there a way to process it as a file? Alternatively can we pass cookies to pd.get_html?
EDIT: After further processing we can now see that this is actually a csv file. The content of the downloaded file is:
In [201]: r.content
Out [201]: 'Ticker\tAction\tName\tShares\tPrice\tCommission\tAmount\tTarget Weight\nBRSS\tSELL\tGlobal Brass and Copper Holdings Inc\t400.0\t17.85\t-1.00\t7,140\t0.00\nCOHU\tSELL\tCohu Inc\t700.0\t12.79\t-1.00\t8,953\t0.00\nUNTD\tBUY\tUnited Online Inc\t560.0\t15.15\t-1.00\t-8,484\t0.00\nFLXS\tBUY\tFlexsteel Industries Inc\t210.0\t40.31\t-1.00\t-8,465\t0.00\nUPRO\tCOVER\tProShares UltraPro S&P500\t17.0\t71.02\t-0.00\t-1,207\t0.00\n'
Notice that it is tab delimited. Still, trying:
# csv version 1
df = pd.read_csv(r.content)
# Returns error, file does not exist. Apparently read_csv() is also trying to read it as a file.
# csv version 2
fh = io.BytesIO(r.content)
df = pd.read_csv(fh) # ValueError: No columns to parse from file.
# csv version 3
s = StringIO(r.content)
df = pd.read_csv(s)
# No error, but the resulting df is not parsed properly; \t's show up in the text of the dataframe.
Simply wrap the file contents in a BytesIO:
with io.BytesIO(r.content) as fh:
    df = pd.io.excel.read_excel(fh, sheetname=0)
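As for the CSV attempts above: they left literal \t's in the frame because read_csv defaults to a comma separator. Passing sep='\t' parses the tab-delimited content directly. A sketch with stand-in bytes in place of r.content:

```python
import io

import pandas as pd

# Stand-in for r.content, which is tab-delimited:
content = b'Ticker\tAction\tName\nBRSS\tSELL\tGlobal Brass\n'

df = pd.read_csv(io.BytesIO(content), sep='\t')
print(list(df.columns))  # ['Ticker', 'Action', 'Name']
```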
This functionality was included in an update from 2014. According to the documentation it is as simple as providing the url:
The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/workbook.xlsx
Based on the code you've provided, it looks like you are using pandas 0.13.x? If you can upgrade to a newer version (code below is tested with 0.16.x) you can get this to work without the additional utilization of the requests library. This was added in 0.14.1
data2 = pd.read_excel(data_url)
As an example of a full script (with the example XLS document taken from the original bug report stating the read_excel didn't accept a URL):
import pandas as pd
data_url = "http://www.eia.gov/dnav/pet/xls/PET_PRI_ALLMG_A_EPM0_PTC_DPGAL_M.xls"
data = pd.read_excel(data_url, "Data 1", skiprows=2)