I'm creating a script that needs to go through all the files on a server, run each of those file names followed by the command "ll", and then write the output of that command to a .txt file.
example:
folder/filename.txt ll
output: SoMETHINGSomethingSomethingother - This is sent to an output.txt file
folder/subfolder/filename3.txt ll
output: SoMETHINGSomethingSomethingother - This is sent to an output.txt file
This is what I have so far:
import os

with open("output.txt", "w") as a:
    for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
        for filename in files:
            f = os.path.join(filename)
            m = f + ' ll'
            a.write(str(m) + os.linesep)
What I'm trying to figure out now is how to run each of those file names with the "ll" command. So far this code just writes the names of all the files in that folder and its subfolders into my output.txt file.
Does anybody have any ideas?
Use os.system():
import os

for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
    for filename in files:
        f = os.path.join(path, filename)
        # ">>" appends, so every file's output ends up in output.txt
        m = f + ' ll >> output.txt'
        os.system(m)
This appends only the standard output of each command to the output.txt file. If you want to send error messages to output.txt as well, use m = f + ' ll >> output.txt 2>&1' instead.
Explanation: os.system(command_string) executes command_string exactly as if you had typed it into a terminal. The > and >> operators are standard on both Windows and Linux for redirecting a command's standard output into a file (> overwrites the file, >> appends to it). The 2>&1 part is the only not-so-obvious piece: it redirects standard error to wherever standard output is currently going.
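If you would rather keep the redirection on the Python side instead of in the shell, a minimal sketch using subprocess (assuming, as in the question, that a file path followed by ll is a valid command on your system) looks like this:
import os
import subprocess

with open("output.txt", "w") as out:
    for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
        for filename in files:
            full = os.path.join(path, filename)
            # equivalent of "<file> ll >> output.txt 2>&1": stdout goes to the
            # open file and stderr is folded into stdout
            subprocess.run(full + ' ll', shell=True, stdout=out,
                           stderr=subprocess.STDOUT)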
In order to run the files with the "ll" command you can use the subprocess module available in Python.
Your revised code will be:
import os
import subprocess
import shlex

with open("output.txt", "w") as a:
    for path, subdirs, files in os.walk(r'C:\Users\user\Desktop\Test_Py'):
        for filename in files:
            f = os.path.join(path, filename)
            m = f + ' ll'
            # posix=False keeps the Windows backslashes in the path intact
            cmd_args = shlex.split(m, posix=False)
            # check_output returns bytes, so decode before writing to a text file
            output = subprocess.check_output(cmd_args).decode()
            a.write(output + os.linesep)
There is a bash command whose logic I am trying to convert into Python, but I don't know how to do it and need some help.
The bash command is this:
ls
ls *
TODAY=`date +%Y-%m-%d`
cd xx/xx/autotests/
grep -R "TEST_F(" | sed s/"TEST_F("/"#"/g | cut -f2 -d "#" | while read LINE
The logic is: inside a directory, read the files one by one (including all subfolders), then list the matching lines. Any help here will be much appreciated.
I tried something like the following, but it is not what I would like to have. There are some subfolders inside, and the code is not reading the files in them.
import fnmatch
import os
from datetime import datetime

time = datetime.now()
dir_path = "/xx/xx/autotests"
dirs = os.listdir(dir_path)
TODAY = time.strftime("%Y-%m-%d")
filesOfDirectory = os.listdir(dir_path)
print(filesOfDirectory)
pattern = "TEST_F("
for file in filesOfDirectory:
    if fnmatch.fnmatch(file, pattern):
        print(file)
Use os.walk() to scan the directory recursively.
Open each file and loop through its lines looking for "TEST_F(". Then extract the part of the line after that marker (that's what sed and cut are doing).
for root, dirs, files in os.walk(dir_path):
    for file in files:
        with open(os.path.join(root, file)) as f:
            for line in f:
                if "TEST_F(" in line:
                    data = line.split("TEST_F(")[1]
                    print(data)
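If you also want the date-stamped part of the bash script, here is a rough sketch that collects the matches into a dated file; the output file name is only an assumption, since the original script doesn't show what $TODAY is used for:
import os
from datetime import datetime

dir_path = "/xx/xx/autotests"
today = datetime.now().strftime("%Y-%m-%d")  # mirrors TODAY=`date +%Y-%m-%d`

# "tests_<date>.txt" is a hypothetical name; adjust it to whatever you need
with open("tests_%s.txt" % today, "w") as out:
    for root, dirs, files in os.walk(dir_path):
        for name in files:
            # errors="ignore" skips undecodable bytes in non-text files
            with open(os.path.join(root, name), errors="ignore") as f:
                for line in f:
                    if "TEST_F(" in line:
                        out.write(line.split("TEST_F(")[1])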
This manual command is working:
!antiword "test" > "test.docx"
but the following script converts the files to empty .docx files:
import os
import subprocess

for file in os.listdir(directory):
    subprocess.run(["bash", "-c", "antiword \"$1\" > \"$1\".docx", "_", file])
It also stores the .docx file in the parent directory, e.g. if the file is in \a\b this command stores the output in \a.
I have tried many different ways, including running directly in the terminal and bash loops; only the manual way works.
Something like this should work (adjust dest_path etc. accordingly).
import os
import shlex

for filename in os.listdir(directory):
    if ".doc" not in filename:
        continue
    path = os.path.join(directory, filename)
    dest_path = os.path.splitext(path)[0] + ".txt"
    cmd = "antiword %s > %s" % (shlex.quote(path), shlex.quote(dest_path))
    print(cmd)
    # If the above seems to print correct commands, add:
    # os.system(cmd)
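If you prefer to avoid the shell and its quoting entirely, a sketch using subprocess (assuming antiword is on your PATH and prints the converted text to standard output, and reusing the same directory variable) lets Python write the output file itself:
import os
import subprocess

for filename in os.listdir(directory):
    if not filename.endswith(".doc"):
        continue
    path = os.path.join(directory, filename)
    dest_path = os.path.splitext(path)[0] + ".txt"
    # capture antiword's stdout directly into the destination file
    with open(dest_path, "w") as out:
        subprocess.run(["antiword", path], stdout=out, check=True)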
I have 200 .pyc files I need to convert in a folder. I know I can convert a single file with uncompyle6 -o . 31.pyc, but with so many .pyc files that would take a long time. I've found lots of documentation, but not much on bulk-converting to .py files; uncompyle6 -o . *.pyc was not supported.
Any idea on how I can achieve this?
Might not be perfect but it worked great for me.
import os
import uncompyle6

your_directory = ''
for dirpath, b, filenames in os.walk(your_directory):
    for filename in filenames:
        if not filename.endswith('.pyc'):
            continue
        filepath = dirpath + '/' + filename
        original_filename = filename.split('.')[0]
        original_filepath = dirpath + '/' + original_filename + '.py'
        with open(original_filepath, 'w') as f:
            uncompyle6.decompile_file(filepath, f)
This is natively supported by uncompyle6:
uncompyle6 -ro <output_directory> <python_directory>
-r tells the tool to recurse into subdirectories.
-o tells the tool to output to the given directory.
In operating systems with shell filename expansion, you might be able to use the shell's file expansion ability. For example:
uncompyle6 -o /tmp/unc6 myfiles/*.pyc
If you need something fancier or more control, you could always write some code that does the fancier expansion. Here is the above done in POSIX shell filtering out the single file myfiles/huge.pyc:
cd myfiles
for pyc in *.pyc; do
    if [[ $pyc != huge.pyc ]]; then
        uncompyle6 -o /tmp/unc $pyc
    fi
done
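If you would rather drive the same filtering from Python, a sketch along these lines (assuming the uncompyle6 console script is on your PATH) mirrors the shell loop above:
import glob
import subprocess

for pyc in glob.glob("myfiles/*.pyc"):
    if pyc.endswith("huge.pyc"):
        continue  # skip the one big file, like the [[ ... ]] test above
    subprocess.run(["uncompyle6", "-o", "/tmp/unc", pyc])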
Note: It seems this question was also asked in Issue on output directory while executing commands with windows batch command "FOR /R"
Thank you for the code. I extended it to call recursively into nested subdirectories. Save it as uncompile.py in the directory to be converted and run python uncompile.py from a command prompt; it converts the .pyc files to .py in the current working directory, with error handling, and on a rerun it skips (recovers) files that already have a matching .py.
import os
import uncompyle6

# Use the current working directory
your_directory = os.getcwd()

# Function processing the given directory
def uncompilepath(mydir):
    # Note: os.walk() already descends into subdirectories, so the explicit
    # recursion below is redundant but harmless here, because files that
    # already have a .py are skipped.
    for dirpath, b, filenames in os.walk(mydir):
        for d in b:
            folderpath = dirpath + '/' + d
            print(folderpath)
            # Recursive subdirectory call
            uncompilepath(folderpath)
        for filename in filenames:
            if not filename.endswith('.pyc'):
                continue
            filepath = dirpath + '/' + filename
            original_filename = filename.split('.')[0]
            original_filepath = dirpath + '/' + original_filename + '.py'
            # Ignore if already uncompiled
            if os.path.exists(original_filepath):
                continue
            with open(original_filepath, 'w') as f:
                print(filepath)
                # Error handling
                try:
                    uncompyle6.decompile_file(filepath, f)
                except Exception:
                    print("Error")

uncompilepath(your_directory)
I have a directory (c:\temp) with some files:
a.txt
b.py
c.html
I need to read all of the file names in the directory and output them to a text file. I've got that part handled (I think):
WD = "c:\\temp"
import glob
files = glob.glob('*.*')
with open('dirList.txt', 'w') as in_files:
    for eachfile in files:
        in_files.write(eachfile + '\n')
I need the output to look like:
a|a.txt
b|b.py
c|c.html
I'm not quite sure where to look next.
I'd split the file name by . and take the first part:
for eachfile in files:
    in_files.write('%s|%s\n' % (eachfile.split('.')[0], eachfile))
You have almost solved your problem. I am not quite sure where you are getting stuck. If all you need to write is the file name (without the extension) followed by |, then you just need to update your code like this:
import glob

files = glob.glob('*.*')
with open('dirList.txt', 'w') as in_files:
    for eachfile in files:
        file_name_without_extension = eachfile.split(".")[0]
        in_files.write(file_name_without_extension + "|" + eachfile + '\n')
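One caveat with splitting on ".": a name like backup.tar.gz becomes backup rather than backup.tar. If that matters, os.path.splitext strips only the final extension; a small variant (same output for the single-extension names in the question):
import glob
import os

files = glob.glob('*.*')
with open('dirList.txt', 'w') as in_files:
    for eachfile in files:
        # splitext("a.txt") -> ("a", ".txt"); splitext("x.tar.gz") -> ("x.tar", ".gz")
        name = os.path.splitext(eachfile)[0]
        in_files.write(name + '|' + eachfile + '\n')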
I have a Python script and a directory of .bat files. I loop through them and run each one from the command line, then save the result of each batch script to a file. So far I have this:
import os
import subprocess

for _, _, files in os.walk(directory):
    for f in files:
        fullpath = directory + os.path.basename(f)
        params = [fullpath]
        result = subprocess.list2cmdline(params)
However, this sets the result variable to the path of the .bat file, when I need the result of running the code in the .bat file. Does anyone have any suggestions?
Why are you calling list2cmdline? This doesn't actually call the subprocess.
Use subprocess.check_output instead:
import os
import subprocess

output = []
for _, _, files in os.walk(directory):
    for f in files:
        fullpath = os.path.join(directory, os.path.basename(f))
        # check_output returns bytes, so decode to join the results as text
        output.append(subprocess.check_output([fullpath]).decode())
print('\n'.join(output))
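On Python 3.7+ you can do the same with subprocess.run, which also makes it easy to capture stderr; a sketch, assuming the .bat files can be launched directly as in the answer above:
import os
import subprocess

results = []
for _, _, files in os.walk(directory):
    for f in files:
        fullpath = os.path.join(directory, f)
        # capture_output collects stdout and stderr; text=True decodes them to str
        proc = subprocess.run([fullpath], capture_output=True, text=True)
        results.append(proc.stdout)
print('\n'.join(results))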
To write the result (output) of a command to a file, you could use the stdout parameter:
import os
from glob import glob
from subprocess import check_call

for path in glob(os.path.join(directory, "*.bat")):
    name, _ = os.path.splitext(path)
    with open(name + ".result", "wb") as outfile:
        check_call(path, stdout=outfile)
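If you also want any error messages from the batch file in the same result file, stderr can be folded into the same handle (a small variation, not part of the original answer):
import os
from glob import glob
from subprocess import check_call, STDOUT

for path in glob(os.path.join(directory, "*.bat")):
    name, _ = os.path.splitext(path)
    with open(name + ".result", "wb") as outfile:
        # send both stdout and stderr of the batch file into the .result file
        check_call(path, stdout=outfile, stderr=STDOUT)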