I've googled, but I could only find how to upload one file... and I'm trying to upload all files from a local directory to a remote FTP directory. Any ideas how to achieve this? With a loop?
edit: in the universal case, uploading only the files would look like this:
import os

for root, dirs, files in os.walk('path/to/local/dir'):
    for fname in files:
        full_fname = os.path.join(root, fname)
        with open(full_fname, 'rb') as f:
            ftp.storbinary('STOR remote/dir/' + fname, f)
Obviously, you need to look out for name collisions if you're just preserving file names like this.
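If you also want to mirror the local directory layout on the server (and thereby sidestep those collisions), a minimal sketch along these lines should work; the upload_tree name, the remote base path handling, and the use of os.path.relpath are my additions, not part of the original answer:

import ftplib
import os

def upload_tree(ftp, local_dir, remote_dir):
    # Walk the local tree and recreate each directory on the server.
    for root, dirs, files in os.walk(local_dir):
        rel = os.path.relpath(root, local_dir)
        remote_path = remote_dir if rel == '.' else remote_dir + '/' + rel.replace(os.sep, '/')
        try:
            ftp.mkd(remote_path)
        except ftplib.error_perm:
            pass  # directory probably exists already
        for fname in files:
            with open(os.path.join(root, fname), 'rb') as f:
                ftp.storbinary('STOR ' + remote_path + '/' + fname, f)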
Look at these two questions: "Python-scriptlines required to make upload-files from JSON-Call" and "next FTPlib-operation: why some uploads, but others not?".
Although they start from a different position than your question, the answer to the first one shows an example construction that uploads a JSON file plus an XML file via ftplib: look at script line 024 and onward. The second one covers some other aspects of uploading multiple files.
This also applies to file types other than JSON and XML, obviously with a different 'entry' before the two final sections that define and invoke the FTP_Upload function.
Create an FTP batch file (with a list of the files that you need to transfer). Use Python to execute ftp.exe with the "-s" option and pass in the list of files.
This is kludgy, but apparently ftplib does not accept multiple files in its STOR command.
Here is a sample ftp batch file.
OPEN inetxxx
myuser mypasswd
binary
prompt off
cd ~/my_reg/cronjobs/k_load/incoming
mput *.csv
bye
If the above contents were in a file called "abc.ftp", then my ftp command would be
ftp -s abc.ftp
Hope that helps.
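To drive that batch file from Python, a minimal sketch could shell out to the ftp client with subprocess, reusing the abc.ftp file from the example above (on Windows the flag is written -s:abc.ftp):

import subprocess

# Run the command-line ftp client with the batch file from above.
result = subprocess.run(["ftp", "-s", "abc.ftp"], capture_output=True, text=True)
print(result.stdout)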
I am pretty new to Python, but would like to use it to do some tasks on an FTP server. It feels like it should be fairly easy, but I am having some issues when trying to delete multiple files (hundreds) from an FTP folder. I have the file names I want to delete as strings from a SQL table, which I can copy and paste if needed.
My code so far:
import os
import ftplib
ftpHost = 'ftp.myhost.com'
ftpPort = 21
ftpUsername = 'myuser'
ftpPassword = 'mypassword'
ftp = ftplib.FTP(timeout=30)
ftp.connect(ftpHost, ftpPort)
ftp.login(ftpUsername, ftpPassword)
ftp.cwd("/myftpfolder/January2023")
ftp.delete("1234myfile.mp4")
ftp.quit()
print("Execution complete...")
As above, I can delete the files one at a time, but is there a practical way to delete about 800 files from the folder above if I paste them somewhere, or put them in a text file and have Python read through it and execute the deletes? I suppose this isn't necessarily an FTP- or ftplib-specific question, but it could help me get a better general understanding of lists, tuples, etc. Using Python 3.10, by the way.
Thanks!
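A minimal sketch of that approach, assuming the 800 names are pasted one per line into a text file (the name files_to_delete.txt is mine):

import ftplib

with open("files_to_delete.txt") as f:
    # One file name per line; skip blank lines.
    filenames = [line.strip() for line in f if line.strip()]

ftp = ftplib.FTP(timeout=30)
ftp.connect('ftp.myhost.com', 21)
ftp.login('myuser', 'mypassword')
ftp.cwd("/myftpfolder/January2023")

for name in filenames:
    try:
        ftp.delete(name)
    except ftplib.error_perm as e:
        print("Could not delete", name, "-", e)  # e.g. file already gone; keep going

ftp.quit()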
I need to put the contents of a folder into a ZIP archive (in my case a JAR, but they're the same thing).
I can't just add every file individually because, for one, they change each time depending on what the user put in, and it's something like 1,000 files anyway!
I'm using ZipFile; it's the only really viable option.
Basically my code makes a temporary folder and puts all the files in there, but I can't find a way to add the contents of the folder and not just the folder itself.
Python already supports ZIP and RAR archives. Personally, I learned from here: https://docs.python.org/3/library/zipfile.html
And to get all files from that folder, try something like:
import os

for r, d, f in os.walk(path):
    for file in f:
        # enter code here
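Putting the two together, a minimal sketch that zips the contents of a folder rather than the folder itself (the names folder and archive.jar are placeholders of mine; the key detail is the arcname argument, which stores each file relative to the folder so the top-level folder name is left out):

import os
import zipfile

folder = "path/to/temp/folder"  # the temporary folder holding the files

with zipfile.ZipFile("archive.jar", "w", zipfile.ZIP_DEFLATED) as zf:
    for root, dirs, files in os.walk(folder):
        for name in files:
            full_path = os.path.join(root, name)
            # Store the path relative to the folder, not the folder itself.
            zf.write(full_path, arcname=os.path.relpath(full_path, folder))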
I am working on the listener portion of a backdoor program (for an ETHICAL hacking course), and I would like to be able to read files from any part of my Linux system, not just from within the directory where my listener Python script is located. However, this has not proven to be as simple as specifying a typical absolute path such as "~/Desktop/test.txt".
So far my code is able to read files and upload them to the virtual machine where my reverse backdoor script is actively running, but only when I read and upload files that are in the same directory as my listener script (aptly named listener.py). Code shown below.
import base64

def read_file(self, path):
    with open(path, "rb") as file:
        return base64.b64encode(file.read())
As mentioned previously, the above function only works when I open and read a file that is in the same directory as the script the code belongs to, meaning that path above is a simple file name such as "picture.jpg".
I would like to be able to read a file from any part of my filesystem while maintaining the same functionality.
For example, I would love to be able to specify "~/Desktop/another_picture.jpg" as the path so that the contents of "another_picture.jpg" from my "~/Desktop" directory are base64 encoded for further processing and eventual upload.
Any and all help is much appreciated.
Edit 1:
My script where all the code is contained, "listener.py", is located in /root/PycharmProjects/virus_related/reverse_backdoor/. Within this directory is a file that, for simplicity's sake, we can call "picture.jpg". The same file, "picture.jpg", is also located on my desktop, absolute path = "/root/Desktop/picture.jpg".
When I try read_file("picture.jpg"), there are no problems, the file is read.
When I try read_file("/root/Desktop/picture.jpg"), the file is not read and my terminal becomes stuck.
Edit 2:
I forgot to note that I am using the latest version of Kali Linux and PyCharm.
I have run "realpath picture.jpg" and it has yielded the path "/root/Desktop/picture.jpg"
Upon running read_file("/root/Desktop/picture.jpg"), I encounter the same problem where my terminal becomes stuck.
[FINAL EDIT aka Problem solved]:
Based on the answer suggesting reading a file like "../file", I realized that the code was fully functional: read_file("../file") worked without any flaws, indicating that my Python script had no trouble locating the given path. Once the file was read, it was uploaded to the machine running my backdoor where, curiously, it landed in the parent directory of the script. It was then that I realized the problem lay in the handling of paths in the backdoor script rather than in my listener.py.
Credit is also due to the commenter who pointed out that "~" does not count as a valid path element. Once I reached the conclusion mentioned just above, I attempted read_file("~/Desktop/picture.jpg"), which failed. But with a quick modification, read_file("/root/Desktop/picture.jpg") executed successfully, and the file was uploaded to the same directory as my backdoor script on my target machine once I implemented some quick-fix code.
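For reference, open() never expands "~" on its own; that is normally the shell's job. If you want to accept paths like "~/Desktop/picture.jpg", a small sketch using os.path.expanduser (my addition, not from the original posts):

import os

def resolve_path(path):
    # Expand "~" to the user's home directory, then make the path absolute.
    return os.path.abspath(os.path.expanduser(path))

print(resolve_path("~/Desktop/picture.jpg"))  # e.g. /root/Desktop/picture.jpg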
My apologies for not being so specific; efforts to aid were certainly confounded by the unmentioned complexity of my situation and I would like to personally thank everyone who chipped in.
This was my first whole-hearted attempt to reach out to the stackoverflow community for help and I have not been disappointed. Cheers!
A solution I found is putting "../" before the filename if the file is right outside of the directory.
test.py (in some directory right inside the directory "Desktop", i.e. /Desktop/test):
with open("../test.txt", "r") as test:
    print(test.readlines())
test.txt (in directory "/Desktop"):
Hi!
Hello!
Result:
["Hi!", "Hello!"]
This is likely the simplest solution. I found this solution because I always use "cd ../" on the terminal.
This approach not only lets you access the current file, but every other file in the same directory as the script you are running:
import os
from PIL import Image

# Directory containing this script
path = os.path.dirname(os.path.abspath(__file__))
for filename in os.listdir(path):
    full_path = os.path.join(path, filename)
    with open(full_path, 'rb') as f:
        content = f.read()
    print(filename, len(content))
    try:
        im = Image.open(full_path)
        im.show()
    except IOError:
        print('The following file is not an image type:', filename)
I'm trying to download a large number of files that all share a common string (DEM) from an FTP server. These files are nested inside multiple directories, for example Adair/DEM* and Adams/DEM*.
The FTP server is located at ftp://ftp.igsb.uiowa.edu/gis_library/counties/ and requires no username or password.
So, I'd like to go through each county and download the files containing the string DEM.
I've read many questions here on Stack Overflow and the documentation from Python, but cannot figure out how to use ftplib.FTP() to get into the site without a username and password (which are not required), and I can't figure out how to grep or use glob.glob inside of ftplib or urllib.
Thanks in advance for your help
OK, this seems to work. There may be issues when trying to download a directory or scan a file; exception handling may come in handy to trap wrong file types and skip them.
glob.glob cannot work since you're on a remote filesystem, but you can use fnmatch to match the names.
Here's the code: it downloads all files matching *DEM* into the TEMP directory, sorted by directory.
import ftplib, sys, fnmatch, os

output_root = os.getenv("TEMP")

fc = ftplib.FTP("ftp.igsb.uiowa.edu")
fc.login()
fc.cwd("/gis_library/counties")

root_dirs = fc.nlst()
for l in root_dirs:
    sys.stderr.write(l + " ...\n")
    #print(fc.size(l))
    dir_files = fc.nlst(l)
    local_dir = os.path.join(output_root, l)
    if not os.path.exists(local_dir):
        os.mkdir(local_dir)
    for f in dir_files:
        if fnmatch.fnmatch(f, "*DEM*"):  # cannot use glob.glob on a remote listing
            sys.stderr.write("downloading " + l + "/" + f + " ...\n")
            local_filename = os.path.join(local_dir, f)
            with open(local_filename, 'wb') as fh:
                fc.retrbinary('RETR ' + l + "/" + f, fh.write)
fc.close()
The answer by @Jean with the local pattern matching is the correct portable solution adhering to FTP standards.
Though, as most FTP servers do support non-standard wildcards in file-listing commands, you can almost always use a simpler and mainly more efficient solution like:
files = ftp.nlst("*DEM*")
for f in files:
    with open(f, 'wb') as fh:
        ftp.retrbinary('RETR ' + f, fh.write)
You can use fsspec's FTPFileSystem for convenient globbing on an FTP server:
import fsspec.implementations.ftp
ftpfs = fsspec.implementations.ftp.FTPFileSystem("ftp.ncdc.noaa.gov")
files = ftpfs.glob("/pub/data/swdi/stormevents/csvfiles/*1985*")
print(files)
contents = ftpfs.cat(files[0])
print(contents[:100])
Result:
['/pub/data/swdi/stormevents/csvfiles/StormEvents_details-ftp_v1.0_d1985_c20160223.csv.gz', '/pub/data/swdi/stormevents/csvfiles/StormEvents_fatalities-ftp_v1.0_d1985_c20160223.csv.gz', '/pub/data/swdi/stormevents/csvfiles/StormEvents_locations-ftp_v1.0_d1985_c20160223.csv.gz']
b'\x1f\x8b\x08\x08\xcb\xd8\xccV\x00\x03StormEvents_details-ftp_v1.0_d1985_c20160223.csv\x00\xd4\xfd[\x93\x1c;r\xe7\x8b\xbe\x9fOA\xe3\xd39f\xb1h\x81[\\\xf8\x16U\x95\xac\xca\xc5\xacL*3\x8b\xd5\xd4\x8bL\xd2\xb4\x9d'
A nested search also works, for example, nested_files = ftpfs.glob("/pub/data/swdi/stormevents/**1985*"), but it can be quite slow.
Does anyone know how to compare the number of files and the sizes of the files in archiwum.rar with its extracted contents in a folder?
The reason I want to do this is that the server I'm working on was restarted a couple of times during extraction, and I am not sure whether all the files were extracted correctly.
The .rar files are more than 100 GB each and the server is not that fast.
Any ideas?
PS: If the solution is some code instead of a standalone program, my preference is Python.
Thanks!
In Python you can use the rarfile module. Its usage is similar to the built-in zipfile module.
import rarfile
import os.path

extracted_dir_name = "samples/sample"  # directory with the extracted files

file = rarfile.RarFile("samples/sample.rar", "r")

# List file information and compare each entry with its extracted counterpart
for info in file.infolist():
    print(info.filename, info.date_time, info.file_size)
    extracted_file = os.path.join(extracted_dir_name, info.filename)
    if info.file_size != os.path.getsize(extracted_file):
        print("Different size!")