I want to cat the txt files from a folder, and the cat results should be shown in the terminal (obviously). I have tried using listdir() but it doesn't work. I need some help!
A simple implementation uses glob to generate the paths of the files with a .txt extension, in a loop which reads each file and prints it on standard output:
import glob
import sys

for filepath in sorted(glob.glob("path/to/directory/*.txt")):
    with open(filepath) as f:
        sys.stdout.write(f.read())
Using fileinput allows reading all the files line by line, which is probably less memory-intensive and shorter:
import fileinput
import glob
import sys

for line in fileinput.input(sorted(glob.glob("path/to/directory/*.txt"))):
    sys.stdout.write(line)
Note that sorted ensures the files are processed in a deterministic order (glob does not guarantee any particular order, and some filesystems do not return entries sorted).
sys.stdout.write still issues one write per line in the fileinput version, but as suggested in the comments you could improve performance and use less memory in the first version by replacing f.read() with shutil.copyfileobj(f, sys.stdout), which copies the file in fixed-size chunks.
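A minimal sketch of that copyfileobj variant, assuming the same placeholder directory as above:
import glob
import shutil
import sys

# Stream each .txt file to stdout in fixed-size chunks instead of
# loading it fully into memory with f.read()
for filepath in sorted(glob.glob("path/to/directory/*.txt")):
    with open(filepath) as f:
        shutil.copyfileobj(f, sys.stdout)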
Just use the command cat * in the terminal.
Or if you want to do it in python:
import os

allTextFileContents = os.popen("cat *").read()
print(allTextFileContents)
You can also do other things with the contents, since it's stored in a variable!
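For example, a small sketch of doing something with that variable (counting lines and words here is purely illustrative):
import os

# Assumes a Unix-like shell where "cat *" is available
allTextFileContents = os.popen("cat *").read()

print(allTextFileContents)
print(len(allTextFileContents.splitlines()), "lines total")
print(len(allTextFileContents.split()), "words total")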
I was working on a script that does something with the files in a given folder. I put some checks in place to ensure that the path given (as a string) is a directory path and is absolute.
if not (os.path.isdir(dirpath) and os.path.isabs(dirpath)):
    error_message = f"Path given: \"{dirpath}\" was inappropriate for required uses."
    main_logger.error(error_message)
    raise Exception(error_message)
But while testing it on a folder, I got some unexpected results. The folder contained some PDFs, and the function only extracts the "files" in the folder, ignoring any subfolders.
file_list: list[str] = [os.path.join(dirpath, file) for file in os.listdir(dirpath) if os.path.isfile(file)]
But it ignored all the PDFs and marked them as "not files". So, how can I check whether a path is any file and not just a regular file? I could check whether it has an extension, but I don't have a good method of doing that.
I checked some other ways to do it, for example:
pathlib.Path(filepath).is_file()
But that didn't work either.
I have now settled on checking that it is not a directory path, but it would be useful to know about any other ways.
So, is there any way to do it?
Edit: Difference between any file and a regular file:
Any file: a file with any extension(s), e.g. main.py, test.h, etc.
Regular file: I used the term "regular file" as that is how they are described in the official documentation.
A possible definition could be here.
And saying the pathlib.Path(filepath).is_file() method "didn't work" meant that it produced the exact same result as the os.path.isfile() method.
Also, I don't want to check for a specific extension either. I want it to work for any file.
You could try joining the directory path before the check; os.listdir returns bare file names, so os.path.isfile(file) is evaluated relative to the current working directory rather than dirpath:
file_list = [os.path.join(dirpath, file) for file in os.listdir(dirpath) if os.path.isfile(os.path.join(dirpath, file))]
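Alternatively, a hedged sketch using os.scandir, whose entries already know whether they are files (dirpath here stands in for the same directory variable as in the question):
import os

dirpath = "."  # replace with your directory path

# Keep the full path of every regular file; entry.is_file() follows
# symlinks by default, so symlinks pointing at files count as files too
file_list = [entry.path for entry in os.scandir(dirpath) if entry.is_file()]
print(file_list)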
You can simply use the .endswith() method:
if file.endswith('.py'):
    ...
else:
    ...
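If the goal is only to check whether a name has any extension at all (rather than a specific one), a small sketch with os.path.splitext might look like this (the example name is hypothetical):
import os

name = "main.py"  # hypothetical example name

root, ext = os.path.splitext(name)
if ext:
    print(f"{name} has the extension {ext}")
else:
    print(f"{name} has no extension")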
I am currently somewhat new to Python; usually I program in Node.js. I wanted to know if there is a way to delete all files that have a specific string in their names.
For example, let's say I have the following files in a directory:
PDforeg.txt
PDahvn.txt
AHgme.txt
Ronra.txt
I want to be able to delete all files that include the word "PD" in their names. How do I do this?
import glob
import os

# Collect all .txt files, then delete those whose path contains "PD"
# (note: this checks the whole path, so a directory name containing "PD" would also match)
files = glob.glob("PATH_to_directory/*.txt")
for file in files:
    if "PD" in file:
        os.remove(file)
import subprocess
subprocess.run('rm *PD*.txt', shell=True)
Or you could import os and use os.remove(), as sketched below.
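A minimal sketch of that os.remove() route, assuming the files live in the current working directory:
import os

# Delete every regular file in the current directory whose name contains "PD"
for name in os.listdir("."):
    if "PD" in name and os.path.isfile(name):
        os.remove(name)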
For a multi-platform solution you can use the following script:
import glob
import os
folder = '/home/user/Documents/'
text_to_look_for = 'PD'
for f in glob.glob(folder + '*' + text_to_look_for + '*.txt'):
    os.remove(f)
import pathlib
path = pathlib.Path('D:/dev/stackoverflow') # This is your folder
[p.unlink() for p in path.glob('*PD*')]
# Another way of writing the above line
for p in path.glob('*PD*'):
    p.unlink()
*PD* matches all files that have PD anywhere in their name, not just at the start. If you only want to remove files whose names start with PD, use PD* instead.
In bash, just:
[user ~]$ rm *.txt
This command deletes all files with the .txt extension in the current directory.
I have a directory with thousands of files, and I want to rename them with respect to their sizes, i.e. the biggest file becomes 0001.txt, the next 0002.txt, and so on. Iterating over the whole directory with
for filename in files:
    print(filename)
is very costly. Is there an easier and quicker way to do this?
You have to iterate over all the files. You can collect the file names together with their sizes, sort, then rename. Thousands of files isn't much data in the grand scheme of things.
import os

# "files" is the list of file names from the question, assumed to live
# in the current working directory
sorting_data = []
for filename in files:
    sorting_data.append((filename, os.path.getsize(filename)))

# Sort data by size, largest first
sorting_data.sort(key=lambda x: x[1], reverse=True)

# Rename files: the biggest file becomes 0001.txt, the next 0002.txt, ...
for i, (name, _size) in enumerate(sorting_data, start=1):
    os.rename(name, f"{i:04d}.txt")
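As a usage note, the size lookup can also be passed directly to sorted as the key; a minimal sketch, assuming files holds the file names as above:
import os

# Biggest file first; zero-padded names so they sort naturally afterwards
for i, name in enumerate(sorted(files, key=os.path.getsize, reverse=True), start=1):
    os.rename(name, f"{i:04d}.txt")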
I have an assignment which requires me to combine 4 separate .py files into a zip file. Do you guys know how to do this?
This is the part of the instructions that tells me to compress it:
Programs: Name your programs q1.py, q2.py, q3.py, q4.py and submit all as a zip file (named A1_my_upi.zip) to the assignment drop box before the deadline (No late submission accepted).
I have read on the internet that I have to import zipfile. Can someone please clarify?
Let's say that your files are in /users/matthew/documents/school/assignments/a1. Here are a couple of ways in which you can do this:
Python:
import os
from zipfile import ZipFile

dirpath = '/users/matthew/documents/school/assignments/a1'

with ZipFile("A1_my_upi.zip", 'w') as outfile:
    for fname in os.listdir(dirpath):
        if not fname.endswith('.py'):
            continue
        # arcname stores just the file name inside the archive
        outfile.write(os.path.join(dirpath, fname), arcname=fname)
Now you have an A1_my_upi.zip that you can submit.
If you're on a linux/mac computer, you can do this:
$ cd /users/matthew/documents/school/assignments/a1
$ cd ..
$ zip A1_my_upi.zip a1/*.py
Now you have an A1_my_upi.zip that you can submit. It exists at /users/matthew/documents/school/assignments/A1_my_upi.zip
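If zipping the whole folder (not just the .py files) is acceptable, shutil.make_archive is another option; a sketch assuming the same dirpath as above:
import shutil

dirpath = '/users/matthew/documents/school/assignments/a1'

# Creates A1_my_upi.zip in the current directory with everything in dirpath
shutil.make_archive("A1_my_upi", "zip", root_dir=dirpath)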
I use Python on openSUSE. My problem is that I need to process a large number of data files in my folder.
For example:
python myprogram.py 20140101.txt
I still need to run it for a lot of data files with names like that (20140101.txt, 20140204.txt, etc.).
My question is: how do I make my program run automatically for all the data files together?
Use bash, like this:
for file in /dir/*.txt
do
    python myprogram.py "$file"
done
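If you would rather drive the same loop from Python without changing myprogram.py, here is a hedged sketch with glob and subprocess (the directory path is a placeholder):
import glob
import subprocess

# Run myprogram.py once per matching data file
for path in sorted(glob.glob("/dir/*.txt")):
    subprocess.run(["python", "myprogram.py", path], check=True)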
For a pure python solution, have a look at fileinput.
It is part of the Python standard library and lets you loop over the lines of files given via standard input or via a list of files, e.g.:
import fileinput

for line in fileinput.input():
    process(line)
With no arguments, fileinput.input() reads the files named in sys.argv[1:], so you could run:
python myprogram.py 2014*.txt
and let the shell expand the pattern to all matching file names.
The glob module is useful for processing multiple files with different names.
https://docs.python.org/2/library/glob.html
A Python solution would be to use glob. It helps you create lists of file names based on a certain pattern. You can then loop through those file names and execute your commands on them. See the example below.
import glob

txt_files = glob.glob("201401*.txt")
for txt in txt_files:
    print(txt)
    # open each matching file for further processing
    my_txt_file = open(txt, "r")
For further reference:
https://docs.python.org/3/library/glob.html