I have a Python script that makes backups to my QNAP. From time to time the QNAP changes the default name (visible via FTP) of the external drive; with the new firmware it changed to Seagate Expansion Drive. Now I need to correct the path in the config file:
serverlocation = Seagate Expansion Drive/www03/config
I tried:
serverlocation = "Seagate Expansion Drive"/www03/config
and
serverlocation = 'Seagate Expansion Drive'/www03/config
but it still says that the directory was not found on the FTP server.
The script uses that variable in this way:
try: ftp.mkd(self.config.get(jn, "serverlocation"))
except: pass
try: ftp.mkd("%s/%s" % (self.config.get(jn, "serverlocation"), self.date))
except: pass
syslog.syslog("Uploading %s..." % os.path.basename(location))
try:
    th = FTPKeepalive(ftp)
    th.start()
    ftp.storbinary("STOR %s/%s/%s" % (self.config.get(jn, "serverlocation"), self.date, os.p>
    th.alive = False
I am not a Python developer; I am wondering how I can fix that path with spaces.
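For reference, ConfigParser's get() returns the stored value verbatim, so any quotes added in the config file become part of the path, and FTP commands such as MKD and STOR take the rest of the command line as the path, spaces included, so quoting is normally not needed. A minimal sketch (with a hypothetical host, section name and login) of using the unquoted value:

from ConfigParser import ConfigParser  # configparser on Python 3
from ftplib import FTP

config = ConfigParser()
config.read("backup.cfg")  # hypothetical config file name
# serverlocation = Seagate Expansion Drive/www03/config   (no quotes)
serverlocation = config.get("job1", "serverlocation")  # "job1" stands in for jn

ftp = FTP("qnap.local")    # hypothetical host
ftp.login("user", "password")
ftp.mkd(serverlocation)    # spaces in the path are fine as-is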
I had a script that was working. I made one small change and now it stopped working. The top version works, while the bottom one fails.
def makelocalconfig(file="TEXT"):
host = env.host_string
filename = file
conf = open('/home/myuser/verify_yslog_conf/%s/%s' % (host, filename), 'r')
comment = open('/home/myuser/verify_yslog_conf/%s/localconfig.txt' % host, 'w')
for line in conf:
comment.write(line)
comment.close()
conf.close()
def makelocalconfig(file="TEXT"):
host = env.host_string
filename = file
path = host + "/" + filename
pwd = local("pwd")
conf = open('%s/%s' % (pwd, path), 'r')
comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
for line in conf:
comment.write(line)
comment.close()
conf.close()
For troubleshooting purposes I added print pwd and print path lines to make sure the variables were being filled correctly. pwd comes up empty. Why isn't this variable being set correctly? I use this same format of
var = sudo("cmd")
all the time. Is local different from sudo and run?
In short, you may need to add capture=True:
pwd = local("pwd", capture=True)
local runs a command locally:
a convenience wrapper around the use of the builtin Python subprocess
module with shell=True activated.
run runs a command on a remote server and sudo runs a remote command as super-user.
There is also a note in the documentation:
local is not currently capable of simultaneously printing and capturing output, as run/sudo do. The capture kwarg allows you to switch between printing and capturing as necessary, and defaults to False.
When capture=False, the local subprocess’ stdout and stderr streams are hooked up directly to your terminal, though you may use the global output controls output.stdout and output.stderr to hide one or both if desired. In this mode, the return value’s stdout/stderr values are always empty.
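Applied to the failing version above, a minimal sketch of the fix (assuming Fabric 1.x, where local and env live in fabric.api) would be:

from fabric.api import env, local

def makelocalconfig(file="TEXT"):
    host = env.host_string
    filename = file
    path = host + "/" + filename
    pwd = local("pwd", capture=True)  # capture stdout instead of just printing it
    conf = open('%s/%s' % (pwd, path), 'r')
    comment = open('%s/%s/localconfig.txt' % (pwd, host), 'w')
    for line in conf:
        comment.write(line)
    comment.close()
    conf.close()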
I'm currently an intern at an IT service, and I've been asked to build a web-based app using Python that will run on a Linux environment. This web app has to be WSGI-compliant and I cannot use any framework.
My current issue is that I want a variable set to the list of files in a given remote directory, so that I can then list those files by printing a table with one row per file.
I am aware of os.listdir() but can't find a way to use it on a remote server (which, judging by what Google searches showed me, should supposedly be possible...).
I tried an os.system(ssh root@someip:/path/to/dir/), but as the Python docs state, I can't get the output I want, since it only returns an integer exit status...
Below is a piece of my script.
# ip is set to the IP of the server I want to list
ip = 192..............
directory = "/var/lib/libvirt/images/"
command = "ssh root@" + ip + " ls " + directory
dirs = os.system(command)
files = ""
table_open = "<table>"
table_close = "</table>"
table_title_open = "<th>Server: "
table_title_close = "</th>"
tr_open = "<tr>"
tr_close = "</tr>"
td_open = "<td>"
td_close = "</td>"
input_open = "<input type='checkbox' name='choice' value='"
input_close = "'>"
# If I don't put dirs in brackets it raises an error (dirs not being iterable)
for file in [dirs]:
    files = files + tr_open+td_open+file+td_close+td_open+input_open+file+input_close+td_close+tr_close
table = table_open+table_title_open+str(num_server)+table_title_close+files+table_close
I've tried this with a local directory (with os.listdir) and it works perfectly. I am having trouble only with remote directory listing...
I do hope that my question is crystal clear, if not I'll do my best to be more accurate.
Thanks in advance,
-Karink.
You can use the subprocess module; here is an example:
import subprocess
ls = subprocess.Popen(['ssh', 'user@xx.xx.xx.xx', 'ls'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = ls.communicate()
print out
print err
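If you want the listing as an actual Python list (which is what the question's table loop needs), splitting the captured output line by line is enough; a small sketch building on the example above, with the directory path taken from the question:

import subprocess

ls = subprocess.Popen(['ssh', 'user@xx.xx.xx.xx', 'ls', '/var/lib/libvirt/images/'],
                      stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = ls.communicate()
dirs = out.splitlines()  # a real list of file names, one per line of ls output
for file in dirs:        # no need for the [dirs] workaround any more
    print file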
You may also use pysftp.
First install it using pip install pysftp; then the code below can list the files on a remote Linux machine (it works from Windows as well):
import pysftp
cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
with pysftp.Connection('ipLinuxmachine', username='username', password='passwd', cnopts=cnopts) as sftp:
    out = sftp.execute('cd path of directory ; ls')
    print out
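pysftp also exposes a listdir method, which may be a cleaner fit than shelling out to ls; a small sketch using the same placeholder connection details and the directory from the question:

import pysftp

cnopts = pysftp.CnOpts()
cnopts.hostkeys = None
with pysftp.Connection('ipLinuxmachine', username='username',
                       password='passwd', cnopts=cnopts) as sftp:
    files = sftp.listdir('/var/lib/libvirt/images/')  # returns a list of file names
    print files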
Could I unblock a file in Windows (7), which is automatically blocked by Windows (because it was downloaded from the Internet), from a Python script? A WindowsError is raised when such a file is encountered. I thought of catching this exception and running a PowerShell script that goes something like:
Parameter Set: ByPath
Unblock-File [-Path] <String[]> [-Confirm] [-WhatIf] [ <CommonParameters>]
Parameter Set: ByLiteralPath
Unblock-File -LiteralPath <String[]> [-Confirm] [-WhatIf] [ <CommonParameters>]
I don't know PowerShell scripting, but if I had such a script I could call it from Python. Could you folks help?
Yes, all you have to do is call the following command line from Python:
powershell.exe -Command Unblock-File -Path "c:\path\to\blocked file.ps1"
From this page about the Unblock-File command: https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/unblock-file?view=powershell-7.2
Internally, the Unblock-File cmdlet removes the Zone.Identifier alternate data stream, which has a value of 3 to indicate that it was downloaded from the internet.
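Since the question asks about calling this from Python, a minimal, untested sketch of invoking that command with subprocess (assuming powershell.exe is on the PATH; the file path is the placeholder from the answer above):

import subprocess

blocked_file = r'c:\path\to\blocked file.ps1'  # placeholder path
subprocess.call('powershell.exe -Command Unblock-File -Path "%s"' % blocked_file)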
To remove an alternate data stream ads_name from a file path\to\file.ext, simply delete path\to\file.ext:ads_name:
try:
    os.remove(your_file_path + ':Zone.Identifier')
except FileNotFoundError:
    # The ADS did not exist; it was already unblocked or
    # was never blocked in the first place
    pass
# No need to open up a PowerShell subprocess!
(And similarly, to check if a file is blocked you can use os.path.isfile(your_file_path + ':Zone.Identifier'))
In a PowerShell script, you can use Unblock-File for this, or simply Remove-Item -Path $your_file_path':Zone.Identifier'.
Remove-Item also has a specific flag for alternate data streams: Remove-Item -Stream Zone.Identifier (you can pipe multiple files into it, or give it a single -Path).
Late to the party . . . .
I have found that the blocked status is simply an extra 'file' (stream) attached to the file in NTFS, and it can actually be accessed and somewhat manipulated by ordinary means. These are called Alternate Data Streams.
The ADS for file blocking (internet zone designation) is called ':Zone.Identifier' and contains, I think, some useful information:
[ZoneTransfer]
ZoneId=3
ReferrerUrl=https://www.google.com/
HostUrl=https://imgs.somewhere.com/product/1969297/some-pic.jpg
All the other info I have found says to just delete this extra stream.... But, personally, I want to keep this info.... So I tried changing the ZoneId to 0, but it still shows as Blocked in Windows File Properties.
I settled on moving it to another stream name so I can still find it later.
The below script originated from a more generic script called pyADS. I only care about deleting / changing the Zone.Identifier attached stream -- which can all be done with simple Python commands. So this is a stripped-down version. It has several really nice background references listed. I am currently running the latest Windows 10 and Python 3.8+; I make no guarantees this works on older versions.
import os
'''
References:
Accessing alternative data-streams of files on an NTFS volume https://www.codeproject.com/Articles/2670/Accessing-alternative-data-streams-of-files-on-an
Original ADS class (pyADS) https://github.com/RobinDavid/pyADS
SysInternal streams applet https://learn.microsoft.com/en-us/sysinternals/downloads/streams
Windows: killing the Zone.Identifier NTFS alternate data stream https://wiert.me/2011/11/25/windows-killing-the-zone-identifier-ntfs-alternate-data-stream-from-a-file-to-prevent-security-warning-popup/
Zone.Information https://learn.microsoft.com/en-us/openspecs/windows_protocols/ms-fscc/6e3f7352-d11c-4d76-8c39-2516a9df36e8
About URL Security Zones https://learn.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/platform-apis/ms537183(v=vs.85)?redirectedfrom=MSDN
GREAT info: How Windows Determines That the File.... http://woshub.com/how-windows-determines-that-the-file-has-been-downloaded-from-the-internet/
Dixin's Blog: Understanding File Blocking and Unblocking https://weblogs.asp.net/dixin/understanding-the-internet-file-blocking-and-unblocking
'''
class ADS2():
    def __init__(self, filename):
        self.filename = filename

    def full_filename(self, stream):
        return "%s:%s" % (self.filename, stream)

    def add_stream_from_file(self, filename):
        if os.path.exists(filename):
            with open(filename, "rb") as f: content = f.read()
            return self.add_stream_from_string(filename, content)
        else:
            print("Could not find file: {0}".format(filename))
            return False

    def add_stream_from_string(self, stream_name, bytes):
        fullname = self.full_filename(os.path.basename(stream_name))
        if os.path.exists(fullname):
            print("Stream name already exists")
            return False
        else:
            fd = open(fullname, "wb")
            fd.write(bytes)
            fd.close()
            return True

    def delete_stream(self, stream):
        try:
            os.remove(self.full_filename(stream))
            return True
        except:
            return False

    def get_stream_content(self, stream):
        fd = open(self.full_filename(stream), "rb")
        content = fd.read()
        fd.close()
        return content


def UnBlockFile(file, retainInfo=True):
    ads = ADS2(file)
    if zi := ads.get_stream_content("Zone.Identifier"):
        ads.delete_stream("Zone.Identifier")
        if retainInfo: ads.add_stream_from_string("Download.Info", zi)
### Usage:
from unblock_files import UnBlockFile
UnBlockFile(r"D:\Downloads\some-pic.jpg")
Before:
D:\downloads>dir /r
Volume in drive D is foo
Directory of D:\downloads
11/09/2021 10:05 AM 8 some-pic.jpg
126 some-pic.jpg:Zone.Identifier:$DATA
1 File(s) 8 bytes
D:\downloads>more <some-pic.jpg:Zone.Identifier:$DATA
[ZoneTransfer]
ZoneId=3
ReferrerUrl=https://www.google.com/
HostUrl=https://imgs.somewhere.com/product/1969297/some-pic.jpg
After:
D:\downloads>dir /r
Volume in drive D is foo
Directory of D:\downloads
11/09/2021 10:08 AM 8 some-pic.jpg
126 some-pic.jpg:Download.Info:$DATA
1 File(s) 8 bytes
I have an issue putting files whose names contain hyphens ("-") to a server, and I think it may be because of how Linux treats the files, but I am in no way sure. The script scans a folder for pictures/items, puts them in a list and then transfers all items to the server.
This is a part of the script:
def _transferContent(locale):
    ## Transferring images to server
    now = datetime.datetime.now()
    localImages = '/home/bcns/Pictures/upload/'
    localList = os.listdir(localImages)
    print("Found local items: ")
    print(localList)
    fname = "/tmp/backup_images_file_list"
    f = open(fname, 'r')
    remoteList = f.read()
    remoteImageLocation = "/var/www/bcns-site/pics/photos/backup_" + locale + "-" + `now.year` + `now.month` + `now.day` + "/"
    print("Server image location: " + remoteImageLocation)
    ## Checking local list against remote list (from the server)
    for localItem in localList:
        localItem_fullpath = localImages + localItem
        if os.path.exists(localItem_fullpath):
            if localItem in remoteList:
                print("Already exists: " + localItem)
            else:
                put(localItem_fullpath, remoteImageLocation)
        else:
            print("File not found: " + localItem)
And this is the output:
Directory created successfully
/tmp/bcns_deploy/backup_images_file_list
[<server>] download: /tmp/backup_images_file_list <- /tmp/bcns_deploy/backup_images_file_list
Warning: Local file /tmp/backup_images_file_list already exists and is being overwritten.
Found local items:
['darth-vader-mug.jpg', 'gun-dog-leash.jpg', 'think-safety-first-sign.jpg', 'hzmp.jpg', 'cy-happ-short-arms.gif', 'Hogwarts-Crest-Pumpkin.jpg']
Server image location: /var/www/bcns-site/pics/photos/backup_fujitsu-20131031/
[<server>] put: /home/bcns/Pictures/upload/darth-vader-mug.jpg -> /var/www/bcns-site/pics/photos/backup_fujitsu-20131031/
Fatal error: put() encountered an exception while uploading '/home/bcns/Pictures/upload/darth-vader-mug.jpg'
Underlying exception:
Failure
I have tried removing the hyphens, and then the transfer works just fine.
Server runs Ubuntu 12.04 and client runs Debian 7.1 on ext3 disks.
An irritating error, but does anyone out here have a clue about what might cause it?
Dashes in command-line options in Linux matter, but dashes in the middle of filenames are fine.
Check file permissions -- it's possible that when transferring one file manually, the permissions end up set differently than when Fabric transfers it.
I suggest using put() to transfer a directory at a time. This will help to make sure all the files (and permissions) are what they should be.
Example (untested):
def _transferContent(locale):
    ## Transferring images to server
    now = datetime.datetime.now()
    localImageDir = '/home/bcns/Pictures/upload/'
    remoteImageDir = "/var/www/bcns-site/pics/photos/backup_" + locale + "-" + `now.year` + `now.month` + `now.day` + "/"
    print("Server image location: " + remoteImageDir)
    put(localImageDir, remoteImageDir)
I have an upload script done, but I need to figure out how to write a script that I can run as a daemon in Python to handle the conversion part and move the converted file to its final resting place. Here's what I have so far for the directory-watcher script:
#!/usr/bin/python
import os
from pyinotify import WatchManager, Notifier, ThreadedNotifier, ProcessEvent, EventCodes, IN_DELETE, IN_CREATE
import sys, time, syslog, config, ConfigParser
from os import system
from daemon import Daemon
class myLog(ProcessEvent):
    def process_IN_CREATE(self, event):
        syslog.syslog("creating: " + event.pathname)

    def process_IN_DELETE(self, event):
        syslog.syslog("deleting: " + event.pathname)

    def process_default(self, event):
        syslog.syslog("default: " + event.pathname)
class MyDaemon(Daemon):
    def loadConfig(self):
        """Load user configuration file"""
        self.config = {}
        self.parser = ConfigParser.ConfigParser()
        if not os.path.isfile(self.configfile):
            self.parser.write(open(self.configfile, 'w'))
        self.parser.readfp(open(self.configfile, 'r'))
        variables = { \
            'mplayer': ['paths', self.findProgram("mplayer")], \
            'mencoder': ['paths', self.findProgram("mencoder")], \
            'tcprobe': ['paths', self.findProgram("tcprobe")], \
            'transcode': ['paths', self.findProgram("transcode")], \
            'ogmmerge': ['paths', self.findProgram("ogmmerge")], \
            'outputdir': ['paths', os.path.expanduser("~")], \
        }
        for key in variables.keys():
            self.cautiousLoad(variables[key][0], key, variables[key][1])

    def cautiousLoad(self, section, var, default):
        """Load a configurable variable within an exception clause,
        in case variable is not in configuration file"""
        try:
            self.config[var] = int(self.parser.get(section, var))
        except:
            self.config[var] = default
            try:
                self.parser.set(section, var, default)
            except:
                self.parser.add_section(section)
                self.parser.set(section, var, default)
            self.parser.write(open(self.configfile, 'w'))

    def findProgram(self, program):
        """Looks for program in path, and returns full path if found"""
        for path in config.paths:
            if os.path.isfile(os.path.join(path, program)):
                return os.path.join(path, program)
        self.ui_configError(program)

    def run(self):
        syslog.openlog('mediaConvertor', syslog.LOG_PID, syslog.LOG_DAEMON)
        syslog.syslog('daemon started, entering loop')
        wm = WatchManager()
        mask = IN_DELETE | IN_CREATE
        notifier = ThreadedNotifier(wm, myLog())
        notifier.start()
        wdd = wm.add_watch(self.config['outputdir'], mask, rec=True)
        while True:
            time.sleep(1)
        wm.rm_watch(wdd.values())
        notifier.stop()
        syslog.syslog('exiting media convertor')
        syslog.closelog()
if __name__ == "__main__":
    daemon = MyDaemon('/tmp/mediaconvertor.pid')
    if len(sys.argv) == 2:
        if 'start' == sys.argv[1]:
            daemon.run()
        elif 'stop' == sys.argv[1]:
            daemon.stop()
        elif 'restart' == sys.argv[1]:
            daemon.restart()
        else:
            print "Unknown Command"
            sys.exit(2)
        sys.exit(0)
    else:
        print "Usage: %s start|stop|restart" % sys.argv[0]
        sys.exit(2)
Not sure where to go from here.
I don't run on Linux and have never used the inotify capabilities you are using here. I'll describe how I would do things generically.
In the simplest case, you need to check whether there's a new file in the upload directory and, when there is one, start the conversion.
To check if there are new files you can do something like:
import os
import time
def watch_directory(dirname="."):
old_files = set(os.listdir(dirname))
while 1:
time.sleep(1)
new_files = set(os.listdir(dirname))
diff = new_files - old_files
if diff:
print "New files", diff
old_files = new_files
watch_directory()
You may be able to minimize some filesystem overhead by first stat'ing the directory to see if there are any changes.
def watch_directory(dirname="."):
old_files = set(os.listdir(dirname))
old_stat = os.stat(dirname)
while 1:
time.sleep(1)
new_stat = os.stat(dirname)
if new_stat == old_stat:
continue
new_files = set(os.listdir(dirname))
diff = new_files - old_files
if diff:
print "New files", diff
old_stat = new_stat
old_files = new_files
With inotify I think this is all handled for you, and you put your code into process_IN_CREATE() which gets called when a new file is available.
One bit of trickiness - how does the watcher know that the upload is complete? What happens if the upload is canceled part-way through uploading? This could be as simple as having the web server do a rename() to use one extension during upload and another extension when done.
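To make that concrete, here is a small sketch of the idea; the ".part" in-progress extension and the upload path are assumptions for illustration, not something from the original script. The watcher simply skips files that still carry the in-progress extension:

import os

UPLOAD_SUFFIX = ".part"  # hypothetical extension the uploader uses while writing

def is_ready(filename):
    # A file is only picked up once the uploader has renamed it,
    # i.e. it no longer ends with the in-progress suffix.
    return not filename.endswith(UPLOAD_SUFFIX)

ready = [f for f in os.listdir("/path/to/uploads") if is_ready(f)]  # hypothetical path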
Once you know the file, use subprocess.Popen([conversion_program, new_filename]) or os.system("conversion_program new_filename &") to spawn off the conversion in a new process. You'll need to handle things like error reporting, as when the input isn't in the right format. It should also clean up, meaning that once the conversion is done it should remove the input file from consideration. This might be as easy as deleting the file.
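A minimal sketch of that spawn-and-clean-up step ("conversion_program" is a placeholder for the real converter command, not the actual tool):

import os
import subprocess
import syslog

def convert(new_filename):
    # Run the (placeholder) converter and wait for it to finish.
    proc = subprocess.Popen(["conversion_program", new_filename])
    if proc.wait() != 0:
        syslog.syslog("conversion failed for %s" % new_filename)
        return False
    os.remove(new_filename)  # clean up so the file is not converted again
    return True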
You'll also need to worry about restarting any conversions that were killed. If the machine goes down, how does the restarted watcher know which conversions were also killed and need to be restarted? This might be doable as a manual step.