I am trying to find a way to connect to a VDB5 file in Python.
I tried a python-odbc connection using a file connection string, but it did not work.
Is there a way to get a specific file from a specific node when running under spark-submit?
My first approach was to get the list of every node in my cluster from a spark-submit job over a socket; that was the first part. Now I want to connect directly to a specific node to get a specific file. The file is not an HDFS file; it is a local file on that remote node.
I cannot use FTP because I do not have those credentials; they use a direct connection.
textFile is not working; I would like to specify the node name and the path of the file.
E.g.:
sc.textFile("remoteNodeConnectedToMyCluster:///path/file.txt")
I hope that is clear. Thanks in advance.
There is no way to accomplish that, short of installing a server (e.g. FTP, HTTP) on the node to serve the file or running a script on the node to copy it to a distributed file system (e.g. HDFS).
Note that a properly specified URL would have the form protocol://host/path/to/file.txt.
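The script-on-node approach mentioned above can be sketched as follows; this is a minimal outline, and the HDFS target directory and file path are hypothetical:

```python
import subprocess

def hdfs_put_command(local_path, hdfs_dir):
    # Build the `hdfs dfs -put -f` invocation; -f overwrites any existing copy
    return ["hdfs", "dfs", "-put", "-f", local_path, hdfs_dir]

def copy_to_hdfs(local_path, hdfs_dir):
    # Run this on the node that owns the file (e.g. via cron or a one-off ssh)
    subprocess.run(hdfs_put_command(local_path, hdfs_dir), check=True)

if __name__ == "__main__":
    copy_to_hdfs("/path/file.txt", "/user/shared/")
```

Once the file lands in HDFS, `sc.textFile("hdfs:///user/shared/file.txt")` can read it from any executor.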
My script takes data from the messages.log file within macOS's Library folder so I can get the contents of a message and who it's from. It works perfectly when the script and the message log file are on the same computer. I'm using SQLite to capture the data, and Python prints it to my console.
I'm now trying to get this to work so that the script runs on my computer but the log file is on another computer on the same Wi-Fi network. I should point out I'm using a Mac.
So I can see the message log file on the other computer within Finder. In fact, if I write a Python script that 'opens' that message log file and prints it, it does print it successfully.
But when I try to read that log file with SQLite I get:
sqlite3.OperationalError: unable to open database file
I've set the path to be:
/Volumes/admin/Library/Messages/chat.db
And like I said, it works when I do a standard 'open' with 'r', but not when I use SQLite to parse it.
Does anyone know how to get around this? Or possibly suggest alternative ways of parsing the data from the log file on a local networked machine? I've not worked much with networking so maybe there is an easier way?
Would appreciate any help anyone can offer!
import sqlite3

conn = sqlite3.connect('/Volumes/admin/Library/Messages/chat.db')
cur = conn.cursor()
cur.execute(
    "select message.text, handle.id from message "
    "inner join handle on message.handle_id = handle.ROWID "
    "order by date desc limit ?",
    (look_back,))
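For what it's worth, SQLite's file locking is known to be unreliable on network filesystems, which is a common cause of "unable to open database file" on a mounted volume even when a plain open() works. One workaround sketch (paths hypothetical): copy the database to local disk first and open the copy:

```python
import os
import shutil
import sqlite3
import tempfile

def open_local_copy(db_path):
    # SQLite locking misbehaves over network mounts, so snapshot the
    # database to local disk and open the copy instead.
    fd, local_path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    shutil.copy2(db_path, local_path)
    return sqlite3.connect(local_path)
```

Usage would then be `conn = open_local_copy('/Volumes/admin/Library/Messages/chat.db')`, with the rest of the query code unchanged.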
Is there any effective way to import a CSV/text file from an Amazon S3 bucket into MS SQL Server 2012/2014 by BCP or Python (without using SSIS)?
I have found this answer: Read a file line by line from S3 using boto?, but I am not sure whether it will be as effective and secure as downloading the file and then using BCP.
Thanks.
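For reference, the download-then-BCP route the question mentions can be sketched roughly like this; the bucket, table, and server names are placeholders, and boto3 is assumed to be installed:

```python
import subprocess

def bcp_import_command(table, data_file, server):
    # Build a `bcp ... in ...` invocation: -T trusted auth,
    # -c character mode, -t, comma field terminator
    return ["bcp", table, "in", data_file, "-S", server, "-T", "-c", "-t,"]

def import_s3_csv(bucket, key, local_path, table, server):
    import boto3  # third-party: pip install boto3
    boto3.client("s3").download_file(bucket, key, local_path)
    subprocess.run(bcp_import_command(table, local_path, server), check=True)
```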
There is a netCDF file on a remote server. What I want to do is extract data / crop the file (I need only a specific variable for a specific period) and then move the result into my local directory.
With Python, I've used the 'paramiko' module to access the remote server; is there any way to use the 'Dataset' command to open the netCDF file after ssh.connect? Any solution with Python is welcome, thanks.
You can accomplish this using pydap. Once connected to the remote server, you can access the data subset as you would in numpy, e.g. temp_slice = temp[0,1,0], and the data for the corresponding subset will be downloaded on the fly from the server.
Solved: not with Python modules/functions, but by executing a 'common' netCDF tool on the remote server's command line to extract the subset files (i.e. myssh.exec_command("ncea -v %s %s %s" % (varname, remoteDBpath, remotesubsetpath))) and then bringing the files to the local machine (i.e. myftp.get(remotesubsetpath, localsubsetpath)).
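That solved approach can be sketched end-to-end with paramiko; host, user, and paths here are hypothetical, and `ncea` is assumed to be available from the NCO tools on the remote box:

```python
def ncea_subset_command(varname, remote_db_path, remote_subset_path):
    # Build the NCO `ncea -v` command the remote shell will run
    return "ncea -v %s %s %s" % (varname, remote_db_path, remote_subset_path)

def fetch_subset(host, user, varname, remote_db_path, remote_subset_path, local_path):
    import paramiko  # third-party: pip install paramiko
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user)
    _, stdout, _ = ssh.exec_command(
        ncea_subset_command(varname, remote_db_path, remote_subset_path))
    stdout.channel.recv_exit_status()  # block until ncea finishes
    sftp = ssh.open_sftp()             # SFTP reuses the same SSH session
    sftp.get(remote_subset_path, local_path)
    sftp.close()
    ssh.close()
```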
I've noticed that Python's FTP library doesn't seem to have a method or function for straight-up downloading a file from an FTP server. The only function I've come across for downloading a file is ftp.retrbinary, and in order to transfer the file contents, you essentially have to write them to a pre-existing file on the local computer where the Python script is located.
Is there a way to download the file as-is without having to create a local file first?
Edit: I think the better question to ask is: do I need to have a pre-existing file in order to download an FTP server file's contents?
To download a file from FTP, this code will do the job:

import urllib
urllib.urlretrieve('ftp://server/path/to/file', 'file')
# if you need to pass credentials:
# urllib.urlretrieve('ftp://username:password@server/path/to/file', 'file')
# (on Python 3, use urllib.request.urlretrieve)
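To answer the edited question directly: no pre-existing file is needed. retrbinary just takes a callback, so you can point it at an in-memory buffer instead of a file object's write method. A sketch, with a hypothetical host and path:

```python
import io
from ftplib import FTP

def download_to_buffer(ftp, remote_path):
    # retrbinary streams chunks to any callable; writing into a BytesIO
    # keeps the whole download in memory, so no local file is created.
    buf = io.BytesIO()
    ftp.retrbinary("RETR " + remote_path, buf.write)
    buf.seek(0)
    return buf

if __name__ == "__main__":
    ftp = FTP("ftp.example.com")   # hypothetical server
    ftp.login("user", "password")
    data = download_to_buffer(ftp, "/path/to/file").read()
    ftp.quit()
```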