loop in bash/python to call a command including name fixation - python

I have a line of code that works on a single file:
sextractor ABC347583_ima.fits -CATALOG_NAME ABC347583_ima.cat
This command takes the .fits file and creates a catalog file with the same name but a .cat extension.
Now, I have over 100 .fits files (all my files start with ABC, then numbers, and end with _ima) and I would like a bash/python script that reads the .fits files one by one and executes the above command with the corresponding file names as input and output.
Basically, ABC347583_ima.fits, ABC57334_ima.fits, etc. go in, and ABC347583_ima.cat, ABC57334_ima.cat, etc. are created.
This is beyond my limited knowledge; all I know is how to loop with
for i in `cat files`; do
    echo $i
done
However, this does not quite match the command line above, because of both the input and the output. Any suggestions on how to get past this will be appreciated.

To iterate in Python over all files in a directory, use os.listdir().
Then you can loop over the filenames.
In the "callthecommandhere" function you can parse the filenames, read the file contents, and write a new file. I hope I understood you correctly and that this helps.
Like so:
import os
for filename in os.listdir('dirname'):
    callthecommandhere(blablahbla, filename, foo)
Br christoph
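Building on that, here is a minimal runnable sketch for the sextractor case, assuming all the FITS files sit in the current directory and sextractor is on the PATH; the helper name cat_name_for is mine, not from the original answer:

```python
import glob
import os
import subprocess

def cat_name_for(fits_name):
    # Swap the .fits extension for .cat, keeping the rest of the name.
    return os.path.splitext(fits_name)[0] + '.cat'

# Run sextractor once per matching file, pairing input and output names.
for fits_name in glob.glob('ABC*_ima.fits'):
    subprocess.call(['sextractor', fits_name, '-CATALOG_NAME', cat_name_for(fits_name)])
```

Passing the arguments as a list avoids any shell quoting issues with the file names.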

Related

Collecting every txt file from computer in python and printing it

I am trying to collect every txt file from my computer and write its contents to the terminal when I run the script. I do not know how to do it. Is there a way to read every txt file on the computer and then print the contents? (Not a certain folder or directory.)
In Python, the glob module would give you a list of filenames matching a given string. In your case, glob.glob('dir/*.txt') would give you a list of filenames in directory dir that end in .txt. You can then open each file and print() it to the terminal. Depending on your OS, you might be able to do it in your terminal without writing a separate script.
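As a sketch of that idea — the 'dir' path is a placeholder, and the recursive '**' pattern (with recursive=True) is my addition to descend into subfolders:

```python
import glob

def txt_files(directory):
    # Return every .txt path under the given directory, including subfolders.
    return glob.glob(directory + '/**/*.txt', recursive=True)

# Print the contents of each file found under 'dir'.
for path in txt_files('dir'):
    with open(path) as f:
        print(f.read())
```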

Running shell command in a loop on every files to convert every file from scalar to corresponding csv

I need to achieve the following task, help will be appreciated.
I run a shell command as
scavetool x A.sca -o A.csv
This scavetool command converts a scalar file into the corresponding csv file on my Ubuntu terminal. For example, it turns A.sca into A.csv: the same name, only the extension changed.
I have 100 files named like A-#0.sca, A-#1.sca, A-#2.sca, and so on up to 100, and I want to convert them into their corresponding csv files A-#0.csv, A-#1.csv, A-#2.csv, and so on.
I need to do this in Python, and I know how to run a terminal command inside a python script, via os.system:
os.system("command")
So far my code looks like this.
#!/usr/bin/python3
import csv, os, glob
for file in glob.glob('*.sca'):
    os.system('scavetool x *sca -o *.csv')
However, the problem is that it converts all my scalar files into one single .csv file, and I know that is because of the * sign. I have tried to loop through every file as well, but I do not get the desired output: inside the loop the scavetool command complains, and I cannot get a separate csv file for each input file.
Please help to achieve this.
You search your files with glob.glob("*.sca"), so you know that all of them end in .sca. The trick is just to drop the last three characters from each file name and add the correct extension. By the way, format will happily repeat the same replacement many times, provided you give the number inside the curly braces ({0}). The code could become
import glob, subprocess
for file in glob.glob('*.sca'):
    command = "scavetool x {0}sca -o {0}csv".format(file[:-3])
    print(command)  # to check the command
    # subprocess.call(command, shell=True)  # and execute once it looks correct
Try this :
import subprocess, glob
for file in glob.glob('*.sca'):
    command = 'scavetool x {}.sca -o {}.csv'.format(file[:-4], file[:-4])
    subprocess.call(command, shell=True)
Gl Hf :)
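A small design note: the file[:-4] slicing works only because every match ends in .sca. A sketch using pathlib's with_suffix, and a list of arguments instead of a shell string, is a bit more robust — assuming the same scavetool invocation, with csv_name as a helper name of my own:

```python
import glob
import subprocess
from pathlib import Path

def csv_name(sca_name):
    # Replace whatever extension the file has with .csv.
    return str(Path(sca_name).with_suffix('.csv'))

for sca_name in glob.glob('*.sca'):
    # A list of arguments avoids the shell, so names like A-#0.sca pass through intact.
    subprocess.call(['scavetool', 'x', sca_name, '-o', csv_name(sca_name)])
```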

How to execute a python file using txt file as input (to parse data)

I tried searching Stack Overflow and other sources, but could not find a solution.
I am trying to run/execute a Python script (that parses the data) using a text file as input.
How do I go about doing it?
Thanks in advance.
These basics can be found using google :)
http://pythoncentral.io/execute-python-script-file-shell/
http://www.python-course.eu/python3_execute_script.php
Since you are new to Python make sure that you read Python For Beginners
Sample code Read.py:
import sys
with open(sys.argv[1], 'r') as f:
    contents = f.read()
print(contents)
To execute this program in Windows:
C:\Users\Username\Desktop>python Read.py sample.txt
You can also save the input in the desired (line-wise) format in a file, and then feed it to the script on standard input using shell redirection:
python source.py < input.txt
Note: to use input or source files from any other directory, use the complete file location instead of just the file name.
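If you take the redirection route, here is a minimal sketch of a source.py that reads everything from standard input; read_input is a hypothetical helper name, not part of any library:

```python
import sys

def read_input(stream=sys.stdin):
    # Return everything available on the given stream (stdin by default).
    # With `python source.py < input.txt`, stdin holds the whole of input.txt.
    return stream.read()

# Typical use from a script:
# data = read_input()
```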

Python: How to get the URL to a file when the file is received from a pipe?

I created, in Python, an executable whose input is the URL to a file and whose output is the file, e.g.,
file:///C:/example/folder/test.txt --> url2file --> the file
Actually, the URL is stored in a file (url.txt) and I run it from a DOS command line using a pipe:
type url.txt | url2file
That works great.
I want to create, in Python, an executable whose input is a file and whose output is the URL to the file, e.g.,
a file --> file2url --> URL
Again, I am using DOS and connecting executables via pipes:
type url.txt | url2file | file2url
Question: file2url is receiving a file. How do I get the file's URL (or path)?
In general, you probably can't.
If the url is not stored in the file, it seems very difficult to recover it. Imagine someone reads a text to you: without further information you have no way to know what book it comes from.
However, there are certain use cases where you can do it.
Pipe the url together with the file.
If you need the url and you can do that, try to keep the url together with the file. Make url2file pipe your url first and then the file.
Restructure your pipeline
Maybe you don't need to find the url for the file, if you restructure your pipeline.
Index your files
If only certain files could potentially be piped into file2url, you could precalculate a hash for every file and store it in your program together with the url. In python you would do this using a dict where the key is the file's hash (as a string) and the value is the url. You could use pickle to write the dict object to a file and load it at the start of your program.
Then you could simply lookup the url from this dict.
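A minimal sketch of that indexing idea, using a content hash as the dict key; the example bytes and URL here are placeholders, and pickle.dumps stands in for writing the index file:

```python
import hashlib
import pickle

def digest(data):
    # Hash raw bytes so identical file content always yields the same key.
    return hashlib.sha256(data).hexdigest()

# Offline step: build {digest: url} for every known file, then pickle it.
index = {digest(b'example file contents'): 'file:///C:/example/folder/test.txt'}
blob = pickle.dumps(index)

# Inside file2url: load the index and look up the piped-in data.
loaded = pickle.loads(blob)
url = loaded.get(digest(b'example file contents'))
```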
You might want to research how databases or search functions in explorers handle indexing or alternative solutions.
Searching for the file
You could use one significant line of the file and use something like grep or head on linux to search all files of your computer for this line. Note that grep and head are programs, not python functions. For DOS, you might need to google the equivalent programs.
FYI: grep searches for one line of text inside a file.
head prints the first few lines of a file. I suggest comparing only the first few lines of each file, to avoid searching through huge files.
Searching all files on the computer might take very long.
You could limit the search to files with the same size as your piped input.
Use url.txt
If file2url knows the location of the file url.txt, then you could go through all the files listed in url.txt until you find one identical to the file that was piped into your program. You could combine this with the hashing/indexing solution.
'file2url' receives the data via standard input (like the keyboard).
The data is transferred by the kernel and does not necessarily have any file-system representation. So if there is no file, there is no URL or path for you to get.
Let's try it the obvious way. test.py:
import sys
print(''.join(sys.stdin.readlines()))
print(sys.stdin.name)
Running it with its own source piped in:
$ cat test.py | python test.py
import sys
print(''.join(sys.stdin.readlines()))
print(sys.stdin.name)

<stdin>
So the filename is "<stdin>", because for python there is no filename - only input.
Another way is system-dependent: you could try to find the command line that was used to produce the input, for example, but there is no guarantee that it will work.

File names have a `hidden' m character prepended

I have a simple python script which produces some data for a neutron star model. I use it to automate file names so I don't later forget the inputs. The script successfully saves the file as
some_parameters.txt
but when I then list the files in terminal I see
msome_parameters.txt
The file name without the "m" is still valid, and trying to refer to the file with the m returns
$ ls m*
No such file or directory
So I think the "m" has some special meaning, about which numerous google searches do not yield answers. While I can carry on without worrying, I would like to know the cause. Here is how I create the file in python
# chi,epsI etc are all floats. Make a string for the file name
file_name = "chi_%s_epsI_%s_epsA_%s_omega0_%s_eta_%s.txt" % (chi,epsI,epsA,omega0,eta)
# a.out is the compiled c file which outputs data
os.system("./a.out > %s" % (file_name) )
Any advice would be much appreciated; usually I can find the answer already posted on stackoverflow, but this time I'm really confused.
You have a file with some special characters in the name, which is confusing the terminal output. What happens if you do ls -l or (if possible) use a graphical file manager? Basically, find a different way of listing the files so you can see what's going on. Another possibility would be to do ls > some_other_filename and then look at that file with a hex editor.
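Separately, one way to rule out the shell as the source of the stray character is to skip it entirely and let Python open the output file itself; a sketch assuming the same a.out and parameters, where output_name and run_model are hypothetical names of my own:

```python
import subprocess

def output_name(chi, epsI, epsA, omega0, eta):
    # Encode all model inputs in the file name, as in the original script.
    return "chi_%s_epsI_%s_epsA_%s_omega0_%s_eta_%s.txt" % (chi, epsI, epsA, omega0, eta)

def run_model(chi, epsI, epsA, omega0, eta, exe='./a.out'):
    file_name = output_name(chi, epsI, epsA, omega0, eta)
    # Redirect the executable's stdout into the file from Python;
    # no shell parses the name, so stray characters cannot sneak in.
    with open(file_name, 'w') as out:
        subprocess.call([exe], stdout=out)
    return file_name
```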
