I need to run a Python script to do the following, which I am doing manually right now for testing:
cat /dev/pts/5
And then I need to echo this back to /dev/pts/6:
echo <DATA_RECEIVED_FROM_5> > /dev/pts/6
I can't seem to get Python to actually read what is coming in from /dev/pts/5, save it to a list, and then output it one by one to /dev/pts/6 using echo:
#!/bin/python
import sys
import subprocess

seq = []
count = 1
while True:
    term = subprocess.call(['cat', '/dev/pts/5'])
    seq.append(term)
    if len(seq) == count:
        for i in seq:
            subprocess.call(['echo', i, '/dev/pts/6'])
        seq = []
        count = count + 1
I'm not sure I understand your problem and desired outcome, but to generate a list of filenames within /dev/pts/5 and save it as a .txt file in /dev/pts/6, you can use the os module that comes standard with Python:
import os

list_of_files = []
for dirpath, dirnames, filenames in os.walk('/dev/pts/5'):
    list_of_files.append([dirpath, dirnames, filenames])

with open('/dev/pts/6/output.txt', 'w+') as file:
    for file_info in list_of_files:
        file.write("{} -> {} -> {}".format(file_info[0], file_info[1], file_info[2]))
The output from this will likely be a bit much, but you can just apply some logic to filter out what you're looking for.
os.walk() documentation
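For example, a minimal sketch that keeps only .txt entries (reusing the placeholder paths from above):
import os

# Sketch: collect only .txt filenames from the walk; the root path is the
# placeholder carried over from the example above.
txt_files = []
for dirpath, dirnames, filenames in os.walk('/dev/pts/5'):
    for name in filenames:
        if name.endswith('.txt'):
            txt_files.append(os.path.join(dirpath, name))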
Update
To read the data from an arbitrary file and write it to another arbitrary file (no extensions) is, if I understand correctly, pretty easy to do in Python:
with open('/dev/pts/5', 'rb') as file:  # use 'rb' to handle arbitrary data formats
    data = file.read()

with open('/dev/pts/6', 'wb+') as file:
    # 'wb+' will handle arbitrary data and create the file if it doesn't exist.
    # If it does exist, it will be overwritten!! To append instead, use 'ab+'.
    file.write(data)
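One caveat: read() on a terminal device blocks until EOF, which may never arrive. A line-by-line sketch, assuming the incoming data is newline-delimited, relays data as it comes:
# Sketch: relay lines from one pty to the other as they arrive.
# Assumes the incoming data is newline-delimited text.
with open('/dev/pts/5', 'r') as src, open('/dev/pts/6', 'w') as dst:
    for line in src:
        dst.write(line)
        dst.flush()  # push each line out immediately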
Third time is the charm
Following the example here, it looks like you need to run:
term = subprocess.run(['cat','/dev/pts/5'], capture_output=True)
print(term.stdout)
The important bit is capture_output=True, and then you have to access the .stdout attribute of the CompletedProcess object.
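Putting that together with your original goal, here's a minimal sketch (Python 3.7+ for capture_output; it assumes cat eventually exits, e.g. when the writing side closes the device):
import subprocess

# Sketch: capture whatever cat reads from pts/5, then write it to pts/6.
# Assumes cat terminates (e.g. the writing side closes the device).
term = subprocess.run(['cat', '/dev/pts/5'], capture_output=True)

with open('/dev/pts/6', 'wb') as dst:
    dst.write(term.stdout)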
I have a list of .txt files in one folder, with names like "image1.txt", "image2.txt", "image3.txt", etc.
I need to perform some operations for each file.
I was trying like this:
import glob

for each_file in glob.glob("C:\...\image\d+\.txt"):
    print(each_file)  # or whatever
But it seems it doesn't work. How can I solve this?
I think you are looking for something like this:
import os

for file in os.listdir('parent_folder'):
    with open(os.path.join('parent_folder', file), 'r') as f:
        data = f.read()
        # operation on data

# Alternatively
for i in range(10):
    with open(f'image{i}.txt', 'r') as f:
        data = f.read()
        # operation on data
The with statement takes care of everything to do with the file, so you don't need to worry about it after it goes out of scope.
If you want to read and also write to the file in the same operation, open it with open(file, 'r+') and then do the following:
with open(f'image{i}.txt', 'r+') as f:
    data = f.read()
    # operation on data
    f.seek(0)
    f.write(data)
    f.truncate()
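On the glob side of your question: glob matches shell-style wildcards, not regular expressions, so \d+ won't work there. A sketch using a plain wildcard instead (keeping the truncated C:\... placeholder from your post):
import glob

# Sketch: glob uses shell wildcards (*, ?), not regex, so use * in place
# of \d+. The "C:\..." fragment is the truncated path from the question.
for each_file in glob.glob(r"C:\...\image*.txt"):
    print(each_file)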
Take this answer that I wrote.
Path objects have the read_text method. As long as the file can be decoded, it will be read; you shouldn't have a problem with text files. Also, since you are using Windows paths, make sure to put an r before the string, like this: r"C:\...\image\d+\.txt", or change the direction of the slashes. A quick example:
from pathlib import Path

# rglob is already recursive, so '*.txt' is enough; note a raw string
# cannot end in a backslash, hence no trailing '\' on the path
for f in Path(r"C:\...\image\d+").rglob('*.txt'):
    print(f.read_text())
I have a folder with CSV-formatted documents with a .arw extension. The files are named 1.arw, 2.arw, 3.arw, etc.
I would like to write code that reads all the files, checks for and replaces each forward slash / with a dash -, and finally creates new files with the replaced character.
The code I wrote is as follows:
for i in range(1,6):
    my_file = open("/path/"+str(i)+".arw", "r+")
    str = my_file.read()
    if "/" not in str:
        print("There is no forwardslash")
    else:
        str_new = str.replace("/","-")
        print(str_new)
        f = open("/path/new"+str(i)+".arw", "w")
        f.write(str_new)
    my_file.close()
But I get an error saying:
'str' object is not callable.
How can I make it work for all the files in a folder? Apparently my for loop does not work.
The actual error is that you are shadowing the built-in str with your own variable of the same name, and then trying to call the built-in str() after that.
Simply renaming the variable fixes the immediate problem, but you really want to refactor the code to avoid reading each entire file into memory:
import logging
import os

for i in range(1, 6):
    seen_slash = False
    input_filename = "/path/" + str(i) + ".arw"
    output_filename = "/path/new" + str(i) + ".arw"
    # avoid calling variables 'input'/'output': 'input' shadows a built-in
    with open(input_filename, "r") as infile, open(output_filename, "w") as outfile:
        for line in infile:
            if not seen_slash and "/" in line:
                seen_slash = True
            line_new = line.replace("/", "-")
            print(line_new.rstrip('\n'))  # don't duplicate the newline print adds
            outfile.write(line_new)
    if not seen_slash:
        logging.warning("{0}: No slash found".format(input_filename))  # logging.warn is deprecated
        os.unlink(output_filename)
Using logging instead of print for error messages helps because you keep standard output (the print output) separate from the diagnostics (the logging output). Notice also how the diagnostic message includes the name of the file we found the problem in.
Going back and deleting the output file when you have examined the entire input file without finding any slashes is a mild wart, but it should typically be more efficient than scanning each file twice (once to check for slashes, once to convert).
This is how I would do it:
for i in range(1, 6):
    with open(str(i) + '.arw', 'r') as f:
        data = f.readlines()
    # str.replace returns a new string rather than modifying in place,
    # so collect the results
    data = [element.replace('/', '-') for element in data]
    with open(str(i) + '.arw', 'w') as f:
        for element in data:
            f.write(element)
This assumes, per your post, that you know how many files you have (note that range(1, 6) covers files 1 through 5).
If you don't know how many files you have, you can use the os module to find the files in the directory, as sketched below.
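A minimal sketch of that, assuming all the files carry the .arw extension and sit in the current directory:
import os

# Sketch: pick up every .arw file in the current directory rather than
# hard-coding how many there are.
arw_files = [name for name in os.listdir('.') if name.endswith('.arw')]
for filename in arw_files:
    with open(filename, 'r') as f:
        data = f.readlines()
    data = [line.replace('/', '-') for line in data]
    with open(filename, 'w') as f:
        f.writelines(data)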
I am trying to pull out file names from a specifically formatted document, and put them into a list. The document contains a large amount of information, but the lines I am concerned about look like the following with "File Name: " always at the start of the line:
File Name: C:\windows\system32\cmd.exe
I tried the following:
import re

xmlfile = open('my_file.xml', 'r')
filetext = xmlfile.read()
file_list = []
file_list.append(re.findall(r'\bFile Name:\s+.*\\.*(?=\n)', filetext))
This makes file_list look like:
[['File Name: c:\\windows\\system32\\file1.exe',
'File Name: c:\\windows\\system32\\file2.exe',
'File Name: c:\\windows\\system32\\file3.exe']]
I'm looking for my output to simply be:
(file1.exe, file2.exe, file3.exe)
I also tried using ntpath.basename on my above output, but it looks like it wants a string as input and not a list.
I'm very new to Python and scripting in general, so any suggestions would be appreciated.
You can get the expected output with the following regular expression:
file_list = re.findall(r'\bFile Name:\s+.*\\([^\\]*)(?=\n)', filetext)
([^\\]*) will capture everything except a backslash after the final path separator, until \n is encountered; see the online example. Since findall already returns a list, there's no need to append the return value to an existing list.
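A quick runnable sketch with made-up sample lines to show the effect:
import re

# Sketch: demonstrate the capturing group on made-up sample lines.
filetext = ("File Name: c:\\windows\\system32\\file1.exe\n"
            "File Name: c:\\windows\\system32\\file2.exe\n")
file_list = re.findall(r'\bFile Name:\s+.*\\([^\\]*)(?=\n)', filetext)
print(file_list)  # ['file1.exe', 'file2.exe']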
You can do it in a more declarative style. It ensures fewer bugs and high memory efficiency.
import os.path
import re

pat = re.compile(r'\bFile Name:\s+.*\\.*(?=\n)')
with open('my_file.xml') as f:
    ms = (pat.match(line) for line in f)
    # skip non-matching lines; basename needs the matched text, not the match object
    ns = (os.path.basename(m.group(0)) for m in ms if m)
    # the iterator ns emits names such as 'foo.txt'
    # (note: os.path.basename splits on '\' only on Windows; use ntpath.basename elsewhere)
    for n in ns:
        print(n)  # or do something else with each name
If you change the regex slightly, i.e. the grouping, you don't even need os.path; see the sketch below.
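That would look something like this, reusing the capturing-group pattern from the answer above:
import re

# Sketch: let the regex capture just the basename, so no path module is needed.
pat = re.compile(r'\bFile Name:\s+.*\\([^\\]*)(?=\n)')
with open('my_file.xml') as f:
    for line in f:
        m = pat.match(line)
        if m:
            print(m.group(1))  # e.g. 'cmd.exe'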
I would change this up a bit to make it clearer to read and to separate the steps. Clearly it can be done in one step, but I think your code is going to be tough to manage later:
import re
import os

with open('my_file.xml', 'r') as xmlfile:
    filetext = xmlfile.read()  # this way the file handle goes away - you left the file open

file_list = []
my_pattern = re.compile(r'\bFile Name:\s+.*\\.*(?=\n)')
for filename in my_pattern.findall(filetext):
    cleaned_name = filename.split(os.sep)[-1]  # note: os.sep is '\\' only on Windows
    file_list.append(cleaned_name)
You're on the right track. The reason basename wasn't working is that re.findall() returns a list, which was being put into yet another list. Here's a fix that iterates through the returned list and builds another with just the base file names:
import re
import os

with open('my_file.xml', 'r') as xmlfile:  # 'rU' mode was removed in Python 3.11; plain 'r' works
    file_text = xmlfile.read()

file_list = [os.path.basename(fn)
             for fn in re.findall(r'\bFile Name:\s+.*\\.*(?=\n)', file_text)]
I am a newbie to Python. I have code in which I must write the contents back to the same file, but when I do, it clears my content. Please help me fix it.
How should I modify my code so that the contents are written back to the same file?
My code:
import re

numbers = {}
with open('1.txt') as f, open('11.txt', 'w') as f1:
    for line in f:
        row = re.split(r'(\d+)', line.strip())
        words = tuple(row[::2])
        if words not in numbers:
            numbers[words] = [int(n) for n in row[1::2]]
        numbers[words] = [n+1 for n in numbers[words]]
        row[1::2] = map(str, numbers[words])
        indentation = re.match(r"\s*", line).group()
        print(indentation + ''.join(row))
        f1.write(indentation + ''.join(row) + '\n')
In general, it's a bad idea to write over a file you're still processing (or change a data structure over which you are iterating). It can be done...but it requires much care, and there is little safety or restart-ability should something go wrong in the middle (an error, a power failure, etc.)
A better approach is to write a clean new file, then rename it to the old name. For example:
import re
import os

filename = '1.txt'
tempname = "temp{0}_{1}".format(os.getpid(), filename)
numbers = {}
with open(filename) as f, open(tempname, 'w') as f1:
    ...  # file processing as before
os.rename(tempname, filename)
Here I've dropped filenames (both original and temporary) into variables, so they can be easily referred to multiple times or changed. This also prepares for the moment when you hoist this code into a function (as part of a larger program), as opposed to making it the main line of your program.
You don't strictly need the temporary name to embed the process id, but it's a standard way of making sure the temp file is uniquely named (temp32939_1.txt vs temp_1.txt or tempfile.txt, say).
It may also be helpful to create backups of the files as they were before processing. In which case, before the os.rename(tempname, filename) you can drop in code to move the original data to a safer location or a backup name. E.g.:
backupname = filename + ".bak"
os.rename(filename, backupname)
os.rename(tempname, filename)
While beyond the scope of this question, if you used a read-process-overwrite strategy frequently, it would be possible to create a separate module that abstracts these file-handling details away from your processing code. Here is an example.
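As a minimal sketch of what such a helper might look like (the name atomic_overwrite is made up for illustration):
import contextlib
import os

@contextlib.contextmanager
def atomic_overwrite(filename):
    """Yield (readable original, writable temp); swap them in on success."""
    tempname = "temp{0}_{1}".format(os.getpid(), filename)
    with open(filename) as original, open(tempname, 'w') as temp:
        yield original, temp
    # Only reached if the with-body raised no exception
    os.rename(tempname, filename)

# Usage sketch:
# with atomic_overwrite('1.txt') as (f, f1):
#     for line in f:
#         f1.write(line.upper())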
Use
open('11.txt', 'a')
to append to the file, instead of 'w', which creates a new file or overwrites an existing one.
If you want to read and modify a file in one pass, use 'r+' mode:
f = open('/path/to/file.txt', 'r+')
content = f.read()
content = content.replace('oldstring', 'newstring')  # for example, change some substring in the whole file
f.seek(0)  # move to the beginning of the file
f.write(content)
f.truncate()  # clear the leftover file content "tail" on disk if the new content is shorter than the old
f.close()
I'm sorting a text file from Python using a custom unix command that takes a filename as input (or reads from stdin) and writes to stdout. I'd like to sort myfile and keep the sorted version in its place. Is the best way to do this from Python to make a temporary file? My current solution is:
inputfile = "myfile"
# inputfile: filename to be sorted
tmpfile = "%s.tmp_file" %(inputfile)
cmd = "mysort %s > %s" %(inputfile, tmpfile)
# rename sorted file to be originally sorted filename
os.rename(tmpfile, inputfile)
Is this the best solution? thanks.
If you don't want to create temporary files, you can use subprocess, as in:
import sys
import subprocess

fname = sys.argv[1]
proc = subprocess.Popen(['sort', fname], stdout=subprocess.PIPE)
stdout, _ = proc.communicate()
with open(fname, 'wb') as f:  # communicate() returns bytes, so write in binary mode
    f.write(stdout)
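On Python 3.7+, the same idea reads a bit more directly with subprocess.run (a sketch, assuming the sort command exits cleanly):
import sys
import subprocess

fname = sys.argv[1]
# Sketch: capture sorted output, then overwrite the original file.
result = subprocess.run(['sort', fname], capture_output=True, check=True)
with open(fname, 'wb') as f:
    f.write(result.stdout)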
You either create a temporary file, or you'll have to read the whole file into memory and pipe it to your command.
The best solution is to use os.replace because it would work on Windows too.
This is not really what I regard as "in-place sorting", though. Usually, in-place sorting means that you actually exchange single elements in the list without making copies. You are making a copy, since the sorted list has to be completely built before you can overwrite the original. If your files get very large, this obviously won't work anymore. You'd probably need to choose between atomicity and in-place-ity at that point.
If your Python is too old to have os.replace, there are lots of workarounds listed in the bug report that added os.replace.
For other uses of temporary files, you can consider using the tempfile module, but I don't think it would gain you much in this case.
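For completeness, though, a sketch of the tempfile route, reusing the mysort command from the question:
import os
import subprocess
import tempfile

inputfile = "myfile"
# Sketch: write sorted output to an auto-named temp file in the same
# directory (so the final rename stays on one filesystem), then swap it
# into place. 'mysort' is the custom command from the question.
fd, tmpname = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(inputfile)))
with os.fdopen(fd, 'wb') as tmp:
    subprocess.run(['mysort', inputfile], stdout=tmp, check=True)
os.replace(tmpname, inputfile)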
You could try a truncate-write pattern:
with open(filename, 'r') as f:
    model.read(f)

model.process()

with open(filename, 'w') as f:
    model.write(f)
Note that this is non-atomic.
This entry describes some pros/cons of updating files in Python:
http://blog.gocept.com/2013/07/15/reliable-file-updates-with-python/