copying logs in python using the command line function - python

I have a doubt about my code which I think I can verify here. My requirement is to copy the Apache log and the error log from two different servers. I've written a Python program using a for loop.
My code:
def copylogs(Appache, Errorlog, folder_prefix):
    root_path = '/home/tza/Desktop/LOGS/'
    folders = ['Appache', 'Errorlog']
    for folder in folders:
        folder_name = folder_prefix + "_" + folder + str(int(time.time()))
        mkdircmd = "mkdir -p " + root_path + "/" + folder_name
        os.system(mkdircmd)
        filePath = root_path + folder_name
        serverPath = "/var/log/apache/*"
        cmd = "scp " + "symentic#60.62.1.164:" + serverPath + " " + filePath
        cmd = cmd.replace("60.62.1.164", myip1)
        cmd = os.system(cmd)
        print "Logs are at:", root_path + folder_name
        time.sleep(10)
        filePath = root_path + folder
        serverPath = "/var/log/errorlog/*"
        cmd = "scp " + "symentic#10.95.21.129:" + serverPath + " " + filePath
        cmd = cmd.replace("10.95.21.129", myip2)
        cmd = os.system(cmd)
        print "Logs are at:", root_path + folder_name
Now I'm calling the function at the end of my program:
folder_prefix = "Fail Case-1"
copylogs(Appache,Errorlog, folder_prefix)
I have an issue here. The program executes successfully, but the logs get overwritten. What I mean is: first the Appache folder gets created and the logs are copied, and then it gets overwritten again.
What I require is: create a folder for the Appache logs (with the timestamp as defined), copy the logs from machine one, next copy the error logs from machine two, and continue the program.
How can this be achieved?

scp by default overwrites a file if one with the same name already exists on the target computer.
I would suggest naming the copied logs with a combination of the file name and a timestamp. It's always a good convention for logs to have a timestamp in the name, and that also prevents the overwriting problem you are experiencing.
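A minimal sketch of that naming scheme, reusing the question's folder_prefix and folder variables and assuming time is imported:

import time

stamp = time.strftime('%Y%m%d-%H%M%S')
folder_name = folder_prefix + "_" + folder + "_" + stamp
# e.g. "Fail Case-1_Errorlog_20240101-120000" - unique per run, so nothing is overwritten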

Consider using rsync instead of scp
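For example, a hedged equivalent of the question's scp call using rsync (myip1 and filePath are the question's variables; rsync, like scp, expects user@host):

import subprocess

subprocess.call(["rsync", "-av", "symentic@" + myip1 + ":/var/log/apache/", filePath])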

Do your logs have the same filenames on both machines? scp will overwrite them if they do.
Personally I would have two directories, one for each machine. Or use a timestamp as suggested by Sylar in his answer.
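A rough sketch of the per-machine layout, using hypothetical labels for the two servers (root_path, folder_prefix, myip1 and myip2 are the question's variables):

import os, time

machines = {'apache_server': myip1, 'errorlog_server': myip2}   # hypothetical labels
for label, ip in machines.items():
    dest = os.path.join(root_path, folder_prefix + "_" + label + "_" + str(int(time.time())))
    os.system("mkdir -p " + dest)   # each machine gets its own destination folder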

Related

Why doesn't the famous backup Python script work when the target folder name contains spaces?

Hello everyone. This is the famous code for backing up files. It runs fine and gives me the message (successful backup), and the target directory is created, but it is empty.
When I remove the spaces from the target folder name it works very well. So what is the problem, and how can I use target folders with spaces?
import os
import time

source = [r'D:\python35']
target_dir = r"D:\Backup to harddisk 2016"
target = target_dir + os.sep + time.strftime('%Y%m%d%H%M%S') + '.zip'

if not os.path.exists(target_dir):
    os.mkdir(target_dir)

zip_command = "zip -r {0} {1}".format(target, ' '.join(source))
print('zip command is:')
print(zip_command)
print('Running:')
if os.system(zip_command) == 0:
    print('Successful backup to', target)
else:
    print('Backup FAILED')
The problem is that Windows doesn't know that those spaces are part of the file name. Use subprocess.call, which takes a list of parameters instead. On Windows, it escapes the spaces for you before calling CreateProcess.
import subprocess as subp

zip_command = ["zip", "-r", target] + source
if subp.call(zip_command) == 0:
    print('Successful backup to', target)
It looks like it uses the command line "zip -r {0} {1}".format(target, ' '.join(source)).
Spaces are used to separate arguments on the command line. If there's a space within a name, the shell believes it's the start of another argument.
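As an alternative sketch, keeping os.system but quoting the paths so the shell does not split them on the spaces (target and source are the question's variables):

zip_command = 'zip -r "{0}" {1}'.format(target, ' '.join('"%s"' % s for s in source))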

Using the "net use" command in a loop

I am writing a simple Python script that connects to several remote Windows machines, reads the contents of a remote folder on these machines, and then zips and copies all files that have been modified after a given date.
The problem is that after it connects to the first computer, it does not connect to the second computer: the "net use" command does not work.
If I do it manually through the Windows command line of my computer, it does work, but not through my Python script.
I have not found any topic that could help me, and I am kind of stuck now... Do you guys have any idea of what I could be doing wrong?
Below is my code (I apologize if it does not look very neat, I am just starting out in Python).
import os, subprocess, datetime, shutil, zipfile

# the local destination of log files on my computer
ROOT_folder = 'C:\\Logs'

for i, IP in enumerate(list_of_IPs):
    # list of names corresponding to the IPs
    location = list_of_locs[i]
    # create a local repository in my ROOT folder for storing the logs of this remote station
    try:
        os.chdir(ROOT_folder + '\\' + location)
    except:
        os.mkdir(ROOT_folder + '\\' + location)
    # The path to the logs that are stored on the remote Windows machines
    remote_path_to_logs = '_Temporary\\Logs'
    print location
    # Mapping a drive m: >>> Here I get the error at the 2nd iteration
    subprocess.call(r'net use m: \\' + IP + '\c$ Password /user:Username', shell=True)
    # The modification date of the most recent file I downloaded is stored on my local computer in a txt file - here I read the date
    try:
        with open(ROOT_folder + '\\' + location + '\\last_mod.txt', 'r') as myFile:
            last_file_downloaded = datetime.datetime.strptime(myFile.read(), '%Y-%m-%d %H:%M:%S')
    except:
        last_file_downloaded = datetime.datetime(1970, 1, 1)
    os.chdir('M:\\' + remote_path_to_logs)
    # I sort the list of files from oldest to newest
    list_files = os.listdir('M:\\' + remote_path_to_logs)
    list_sorted = sorted([(fl, os.path.getmtime(fl)) for fl in list_files], key=lambda x: x[1])
    for log, logtime in list_sorted:
        date_file = datetime.datetime(1970, 1, 1) + datetime.timedelta(seconds=logtime)
        # I zip and move the file to my computer if it was modified after the date I stored on my computer
        if date_file > last_file_downloaded:
            print log + ': zipping and moving to local directory... '
            with zipfile.ZipFile(date_file.strftime('%Y-%m-%d_%H-%M-%S') + '.zip', 'w', zipfile.ZIP_DEFLATED) as z:
                z.write(log)
            shutil.move(date_file.strftime('%Y-%m-%d_%H-%M-%S') + '.zip', ROOT_folder + '\\' + location)
            # I overwrite the modification date in my file
            with open(ROOT_folder + '\\' + location + '\\last_mod.txt', 'w') as myFile:
                myFile.write(date_file.strftime('%Y-%m-%d %H:%M:%S'))
    # Disconnecting the drive m:
    subprocess.call('net use m: /delete /yes', shell=True)
    # I tried to put a time.sleep(5) here but it does not help
Actually, if I don't use a drive letter at all for the mapped drive, the script works:
I use the command 'net use \\' + IP + '\c$ Password /user:Username' instead of 'net use m: \\' + IP + '\c$ Password /user:Username'.
It works, although I still can't explain why it fails when reusing the same letter.
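A hedged sketch of that approach: authenticate against the share without assigning a letter, then address the remote folder through its UNC path directly (IP, the credentials and the log path are the question's placeholders):

import os, subprocess

unc = r'\\' + IP + r'\c$'
subprocess.call('net use ' + unc + ' Password /user:Username', shell=True)
list_files = os.listdir(unc + '\\_Temporary\\Logs')   # no M: mapping needed
subprocess.call('net use ' + unc + ' /delete /yes', shell=True)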

python script windows path

I have a Python script that doesn't seem to be opening the files.
The folders in the script are defined like this:
logdir = "C:\\Programs\\CommuniGate Files\\SystemLogs\\"
submitdir = "C:\\Programs\\CommuniGate Files\\Submitted\\"
This is how the paths are being used:
filenames = os.listdir(logdir)
fnamewithpath = logdir + fname
I'm running this script on Windows 7 SP1.
Does this look correct?
Is there something I can put into the code to debug it to make sure the files are opening?
Thank you,
Docfxit
Edited to provide more clarification:
The actual code to open and close the file is here:
# read all the log parts
for fname in logfilenames:
    fnamewithpath = logdir + fname
    try:
        inputFile = open(fnamewithpath, "r")
    except IOError as reason:
        print("Error: " + str(reason))
        return
    if testing:
        print("Reading file '%s'" % (fname))
    reporter.munchFile(inputFile)
    inputFile.close()

# open output file
if testing:
    outfilename = fullLognameWithPath + ".summary"
    fullOutfilename = outfilename
else:
    outfilename = submitdir + "ls" + str(time.time()) + "-" + str(os.getpid())
    fullOutfilename = outfilename + ".tmp"
try:
    outfile = open(fullOutfilename, "w")
except IOError:
    print("Can't open output file " + fullOutfilename)
    return

if not testing:
    # add the mail headers first
    outfile.write("To: " + reportToAddress + "\n")
    outfile.write("From: " + reportFromAddress + "\n")
    outfile.write("Subject: CGP Log Summary new for " + logname + "\n")
    if useBase64:
        outfile.write("Content-Transfer-Encoding: base64\n")
    outfile.write("\n")

# save all this as a string so that we can base64 encode it
outstring = ""
outstring += "Summary of file: " + fullLogname + partAddendum + "\n"
outstring += "Generated " + time.asctime() + "\n"
outstring += reporter.generateReport()
if useBase64:
    outstring = base64.encodestring(outstring)
outfile.write(outstring)
outfile.close()

if not testing:
    # rename output file to submit it
    try:
        os.rename(outfilename + ".tmp", outfilename + ".sub")
    except OSError:
        print("Can't rename mail file to " + outfilename + ".sub")
I was originally wondering if the double back slashes included in the path were correct.
I can't figure out why it isn't producing the output correctly.
Just in case someone would like to see the entire script I posted it:
The first half is at:
http://pastebin.ws/7ipf3
The second half is at:
http://pastebin.ws/2fuu3n
It was too large to post all in one.
This is being run in Python 3.2.2
Thank you very much for looking at it,
Docfxit
The code as written above does not actually open either file.
os.listdir returns a list of the names (files as well as subdirectories) in the specified path, but does not actually open anything. For that you would need to call the open function on one of the paths.
If you wanted to open all the files in filenames for write, you could do something like this:
fileList = []
for f in filenames:
    fullPath = os.path.join(logdir, f)
    if os.path.isfile(fullPath):
        fileList.append(open(fullPath, 'w'))
After this, the list fileList would contain the open file handles for all of the files, which could then be iterated over (and, for example, all written to if you wanted to multiplex the output).
Note that when done, you should loop through the list and explicitly close them all (the with syntax that automatically closes them has additional complexities/limitations when it comes to dynamically sized lists, and is best avoided here, IMO).
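For example, once you are finished with them:

for f in fileList:
    f.close()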
For more info on files, see:
Reading and Writing Files
Also, it's best to use os.path.join to combine components of a path. That way it can be portable across supported platforms, and will automatically use the correct path separators, and such.
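For instance, the question's string concatenation could be written as:

fnamewithpath = os.path.join(logdir, fname)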
Edit in response to comment from OP:
I would recommend you step through the code using a debugger to see exactly what's going wrong. I use pudb, which is an enhanced command-line debugger, and find it invaluable. You can install it via pip into your system/virtualenv Python environment.
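A possible way to drop into pudb at the point of interest, assuming it has been installed with pip install pudb:

import pudb; pudb.set_trace()   # execution pauses here in the console debugger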

Python CIM_DataFile search for file by full path

So, I am trying to write a script that will be able to connect to remote systems and query CIM_DataFile, among other things.
For the sake of testing, I wrote the following code to run against my local machine. I have two files (ns.txt and dns.txt) in the root of my C: drive; however, the queries by Name= (which is the full path) are not working correctly.
import wmi

wmiService = wmi.WMI()

for f in wmiService.CIM_DataFile(Name="c:\ns.txt"):
    print "NAME '" + f.Name + "'"
for f in wmiService.CIM_DataFile(Name="c:\dns.txt"):
    print "NAME '" + f.Name + "'"
for f in wmiService.CIM_DataFile(FileName="ns", Extension="txt", Drive="c:"):
    print "FILENAME '" + f.Name + "'"
for f in wmiService.CIM_DataFile(FileName="dns", Extension="txt", Drive="c:"):
    print "FILENAME '" + f.Name + "'"
The output of the above code is:
NAME 'c:\ns.txt'
FILENAME 'c:\ns.txt'
FILENAME 'c:\dns.txt'
Why is it not showing c:\dns.txt for the Name= query? I have also tested on other files located in different places on my system and most of them do not show up for the Name= query.
The reason was the file wmi.py inside the path Python27\Lib\site-packages.
I changed this file, and my problem was resolved.
In fact, the problem was with the installed library.

Python Windows CMD mklink, stops working without error message

I want to create symlinks for each file in a nested directory structure, where all symlinks will be put in one large flat folder, and I have written the following code so far:
# loop over directory structure:
#   for all items in current directory,
#   if item is directory, recurse into it;
#   else it's a file, then create a symlink for it
def makelinks(folder, targetfolder, cmdprocess=None):
    if not cmdprocess:
        cmdprocess = subprocess.Popen("cmd",
                                      stdin=subprocess.PIPE,
                                      stdout=subprocess.PIPE,
                                      stderr=subprocess.PIPE)
    print(folder)
    for name in os.listdir(folder):
        fullname = os.path.join(folder, name)
        if os.path.isdir(fullname):
            makelinks(fullname, targetfolder, cmdprocess)
        else:
            makelink(fullname, targetfolder, cmdprocess)

# for a given file, create one symlink in the target folder
def makelink(fullname, targetfolder, cmdprocess):
    linkname = os.path.join(targetfolder, re.sub(r"[\/\\\:\*\?\"\<\>\|]", "-", fullname))
    if not os.path.exists(linkname):
        try:
            os.remove(linkname)
            print("Invalid symlink removed:", linkname)
        except: pass
    if not os.path.exists(linkname):
        cmdprocess.stdin.write("mklink " + linkname + " " + fullname + "\r\n")
So this is a top-down recursion where first the folder name is printed, then the subdirectories are processed. If I run this now over some folder, the whole thing just stops after 10 or so symbolic links.
The program still seems to run, but no new output is generated. It created 9 symlinks for some files in the '# tag & reencode' folder and for the first three files in the ChillOutMix folder. The cmd.exe window is still open and empty, and shows in its title bar that it is currently processing the mklink command for the third file in ChillOutMix.
I tried to insert a time.sleep(2) after each cmdprocess.stdin.write in case Python is just too fast for the cmd process, but it doesn't help.
Does anyone know what the problem might be?
Why not just execute mklink directly?
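A hedged sketch of that suggestion: run mklink once per file instead of feeding a long-lived cmd pipe (linkname and fullname as built in the question):

import subprocess

subprocess.call('mklink "%s" "%s"' % (linkname, fullname), shell=True)   # shell=True runs it via cmd, where mklink is a built-in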
Try this at the end:
if not os.path.exists(linkname):
    fullcmd = "mklink " + linkname + " " + fullname + "\r\n"
    print fullcmd
    cmdprocess.stdin.write(fullcmd)
See what commands it prints. You may see a problem.
It may need double quotes around mklink's arguments, since they sometimes contain spaces.
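For instance, a quoted variant of the same write (same variables as above):

fullcmd = 'mklink "' + linkname + '" "' + fullname + '"\r\n'
print fullcmd
cmdprocess.stdin.write(fullcmd)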
