Bash to Python conversion [closed] - python

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I want to convert the shell code below into Python.
line="ddtest-7.0"
find . -name "*.json" -exec grep -l "project_name.*\"$line\"" {} \; | grep -vw project
This code does the following:
1). It searches for all .json files in the current directory (including subdirectories).
2). It opens each .json file and greps for project_name.*\"$line\" (e.g. "project_name": "ddtest-7.0",); if a match is present, it prints the file name with its path.
3). It filters out any path containing the word project (grep -vw project).
Output:
./product/ddtest/7.0/product-info.json
Can someone help convert this into Python (version 2.7)?

From Python 3.5 onwards you can list the files recursively like this (note that glob matches file paths only, not file contents, so the project_name check still has to be done separately):
import glob
list(glob.iglob('**/*.json', recursive=True))
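Since the question asks for Python 2.7 (where recursive glob is unavailable), a minimal sketch of the full find-and-grep pipeline using os.walk and re might look like the following. The function name is hypothetical; the project_name pattern and the whole-word project exclusion are taken from the question.

```python
import os
import re

def find_project_files(root, line):
    """Walk `root` recursively and return paths of .json files whose
    contents match project_name.*"<line>", skipping any path that
    contains 'project' as a whole word (like grep -vw project)."""
    pattern = re.compile(r'project_name.*"%s"' % re.escape(line))
    word_project = re.compile(r'\bproject\b')
    matches = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.json'):
                continue
            path = os.path.join(dirpath, name)
            if word_project.search(path):
                continue  # same effect as the grep -vw project filter
            with open(path) as handle:
                if pattern.search(handle.read()):
                    matches.append(path)
    return matches

# usage: find_project_files('.', 'ddtest-7.0')
```

This works unchanged on both Python 2.7 and Python 3.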

Related

convert 400 PDF into JPG with command line in Linux with a batch command [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 days ago.
Well, I have an entire folder of nearly 400 PDF files that accidentally got scanned as PDFs instead of JPGs. I have been looking for a way to correct this, and everyone says to use Linux. All I need is a simple command to convert the contents of the entire folder; all the PDF files are in just one folder.
I think that this might be appropriate:
find -type f -name '*.pdf' -exec pdftoppm -jpeg {} {} \;
and subsequently I use this to separate the files and put them into their own folder:
mkdir jpg_files && mv *.jpg jpg_files/
And besides this, we can do it with Python:
#!/usr/bin/python3
import sys
from pdf2image import convert_from_bytes

# convert one PDF (given as the first command-line argument) to JPGs
images = convert_from_bytes(open(sys.argv[1], "rb").read())
for i in range(len(images)):
    images[i].save(f"page-{i}.jpg")
Well, I guess we can go like this in pyvips:
#!/usr/bin/python3
import sys
import pyvips

# open once to read the page count, then load and save each page
image = pyvips.Image.new_from_file(sys.argv[1])
for i in range(image.get('n-pages')):
    page = pyvips.Image.new_from_file(sys.argv[1], page=i)
    page.write_to_file(f"page-{i}.jpg")

Iterating through images in folder path [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 1 year ago.
If I have:
flags = ['australia.png', 'canada.png', 'newzealand.png', 'uk.png', 'usa.png']
and if these images are in some folder called "flags", how can I set the path to that folder to iterate through them?
You need to include the path to the flags folder, which can be done using the os module. This is more robust and portable than hard-coding the full paths as strings.
import os
flag_folder = '/path/to/flags'
flags = ['australia.png', 'canada.png', 'newzealand.png', 'uk.png', 'usa.png']
for filename in flags:
    flag_path = os.path.join(flag_folder, filename)
    # do something with flag_path
In case you want something to happen only when an image is actually present in the flags folder:
import os
path = '.../flags/'
for flag in flags:
    if os.path.exists(path + flag):
        # do something
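If you'd rather not hard-code the file names at all, a small sketch using glob can build the list from whatever .png files are in the folder (the function name and the '/path/to/flags' placeholder are hypothetical):

```python
import glob
import os

def list_flags(folder):
    """Return the sorted paths of all .png files in `folder`."""
    return sorted(glob.glob(os.path.join(folder, '*.png')))

# for flag_path in list_flags('/path/to/flags'):
#     ...do something with flag_path
```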

How to get the current working directory with os library and write a .txt file on it? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 3 years ago.
I want to find the current working directory (cwd) with the os library and write a .txt file to it.
Something like this:
import os
data = ["somedatahere"]
#get_the_current_directory
#if this_is_the_current_directory:
new_file = open("a_data.txt", "a")
new_file.write(data[0])
new_file.close()
It can be done using the os library, but the new pathlib is more convenient if you are using Python 3.4 or later:
import pathlib
data_filename = pathlib.Path(__file__).with_name('a_data.txt')
with open(data_filename, 'a') as file_handle:
    file_handle.write('Hello, world\n')
Basically, the with_name function says, "same directory as the script, but with this name".
import os
You're halfway there!
The rest of it is:
print(os.getcwd())
Of course, you don't need to know that value,
as a_data.txt or ./a_data.txt suffices.
As a side note, you'd be better off opening the file with a with handler:
with open('a_data.txt', 'a') as new_file:
    new_file.write(data[0])
Habitually using a context manager means
never having to say, "sorry, forgot to close it!"
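Putting the two ideas together, here is a minimal sketch that builds the path explicitly from os.getcwd() before writing. The function name is hypothetical; the a_data.txt name is from the question.

```python
import os

def write_to_cwd(name, text):
    """Append `text` to a file called `name` in the current working
    directory, and return the full path that was written."""
    path = os.path.join(os.getcwd(), name)
    with open(path, 'a') as handle:
        handle.write(text)
    return path

# write_to_cwd('a_data.txt', 'somedatahere\n')
```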

What is the meaning of "./" in os.path? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm trying to search for a file in my directories. I saw an example and couldn't understand it:
import os
import re
import glob
file_glob = './rpts/folderA/*_fileA.txt'
magic_check = re.compile('[*?[]')
if magic_check.search(file_glob):
    file_list = glob.glob(os.path.expanduser(file_glob))
What does the ./ part mean? I understand that ../ means the parent directory.
What I think it does:
expand the wildcard to get a list of files that match the pattern
The files are stored in a list called file_list
Magic check regex, [*?[]: what is the [ inside [ ] for?
As Martijn said, this is UNIX shell notation for the current location (the cwd). For glob it makes little practical difference: a relative pattern like rpts/folderA/*_fileA.txt is resolved against the current working directory anyway, so the leading "./" mainly makes that starting point explicit. (The $PATH variable only affects how the shell finds commands to execute, not how file patterns are expanded.)
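As for the [ inside the brackets: the character class [*?[] matches any one of the three glob metacharacters *, ? or [; the second [ is just a literal member of the class (inside a class it needs no escaping). A quick sketch:

```python
import re

# glob's "magic check": matches if the string contains any of the
# metacharacters *, ? or [ -- the second [ is a literal class member
magic_check = re.compile('[*?[]')

# './rpts/folderA/*_fileA.txt' is "magic" because it contains a *
print(bool(magic_check.search('./rpts/folderA/*_fileA.txt')))
```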

Running Python script over multiple files in a folder [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
I use Python on openSUSE; my problem is that I need to process a lot of data files in my folder.
For example:
python myprogram.py 20140101.txt
I still need to run it for many files with that naming scheme (20140101.txt, 20140204.txt, etc.).
My question is: how do I make my program run automatically over all the data files together?
use bash like this:
for file in /dir/*.txt
do
    python myprogram.py "$file"
done
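A pure-Python equivalent of that shell loop could use glob and subprocess. This is just a sketch: the function name and the dry_run flag are made up here, and myprogram.py is the script name from the question (dry_run returns the commands without executing them).

```python
import glob
import subprocess
import sys

def run_script_on_files(script, pattern, dry_run=False):
    """Run `script` once per file matching the glob `pattern`."""
    commands = [[sys.executable, script, path]
                for path in sorted(glob.glob(pattern))]
    if not dry_run:
        for command in commands:
            subprocess.check_call(command)
    return commands

# run_script_on_files('myprogram.py', '2014*.txt')
```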
For a pure python solution, have a look at fileinput.
It is part of the Python standard library and lets you loop over files given via standard input or via a list of files, e.g.:
import fileinput
for line in fileinput.input():
    process(line)
So you could do:
python myprogram.py 2014*.txt
The glob module is useful for processing multiple files with different names.
https://docs.python.org/2/library/glob.html
A Python solution would be to use glob. It helps you create lists of file names based on a certain pattern. You can then loop through those file names to run your commands on them. See the example below.
import glob
txt_files = glob.glob("201401*.txt")
for txt in txt_files:
    print txt
    my_txt_file = open(txt, "r")
For further reference:
https://docs.python.org/3/library/glob.html
