Sorry, I'm very new to Python. I need a script that searches a file for a pattern and replaces each matching line entirely. I've included the whole script below; the problem is the part starting with fileinput...
#!/usr/bin/env python3
import json
import requests
import sys
import fileinput
url = 'http://169.254.169.254/latest/meta-data/iam/security-credentials/test'
r = requests.get(url)
accesskey = json.loads(r.content.decode('utf-8'))['AccessKeyId']
secretkey = json.loads(r.content.decode('utf-8'))['SecretAccessKey']
with fileinput.input(files=('./envFile.sh')) as envfile:
    for line in envfile:
        if line.strip().startswith('export AWS_ACCESS_KEY='):
            line = 'AWS_ACCESS_KEY="%s"\n' % (accesskey)
        if line.strip().startswith('export AWS_SECRET_KEY='):
            line = 'AWS_SECRET_KEY="%s"\n' % (secretkey)
        sys.stdout.write(line)
The output is:
AWS_ACCESS_KEY="xxxxxxx"
AWS_SECRET_KEY="xxxxxxxxxxxxxxxxxxxxxxxxxx"
Now the output is correct, but I need to overwrite the file. How can I do that?
Use inplace=True
Ex:
import fileinput
with fileinput.input(files='./envFile.sh', inplace=True) as envfile:
    for line in envfile:
        # each line already ends with '\n', so suppress print's own
        # newline with end='' to avoid double-spacing the file
        if line.strip().startswith('export AWS_ACCESS_KEY='):
            print(line.replace(line.strip(), 'AWS_ACCESS_KEY="%s"' % (accesskey)), end='')
        elif line.strip().startswith('export AWS_SECRET_KEY='):
            print(line.replace(line.strip(), 'AWS_SECRET_KEY="%s"' % (secretkey)), end='')
        else:
            print(line, end='')
You can store all of your results in one list, then iterate over that list and write it to the file using a with statement, as shown below:
a = []  # results list
temp = 'AWS_ACCESS_KEY="{}"\n'.format(accesskey)
a.append(temp)
temp = 'AWS_SECRET_KEY="{}"\n'.format(secretkey)
a.append(temp)
with open(file_name, 'w') as stream:
    for i in a:
        stream.write(i)
I am fairly new to Python, and I am trying to capture the last line of a syslog file, but I am unable to do so. It is a huge log file, so I want to avoid loading the complete file into memory; I just want to read the last line and capture its timestamp for further analysis.
I have the code below, which captures all of the timestamps. It takes a really long time to run to the last timestamp; once it completed, my plan was to reverse the list and capture the first object at index [0].
The lastFile function uses the glob module and gives me the name of the most recent log file, which is fed into recentEdit in the main function.
Is there a better way of doing this?
Script1:
#!/usr/bin/python
import glob
import os
import re
def main():
    syslogDir = (r'Location/*')
    listOfFiles = glob.glob(syslogDir)
    recentEdit = lastFile(syslogDir)
    print(recentEdit)
    astack = []
    with open(recentEdit, "r") as f:
        for line in f:
            result = [re.findall(r'\d{4}.\d{2}.\d{2}T\d{2}.\d{2}.\d{2}.\d+.\d{2}.\d{2}', line)]
            print(result)
def lastFile(i):
    listOfFiles = glob.glob(i)
    latestFile = max(listOfFiles, key=os.path.getctime)
    return(latestFile)
if __name__ == '__main__': main()
Script2:
###############################################################################
###############################################################################
# readline() gives me the first line of the log file, which is also not what I am looking for:
#!/usr/bin/python
import glob
import os
import re
def main():
    syslogDir = (r'Location/*')
    listOfFiles = glob.glob(syslogDir)
    recentEdit = lastFile(syslogDir)
    print(recentEdit)
    with open(recentEdit, "r") as f:
        fLastLine = f.readline()
    print(fLastLine)
    # astack = []
    # with open(recentEdit, "r") as f:
    #     for line in f:
    #         result = [re.findall(r'\d{4}.\d{2}.\d{2}T\d{2}.\d{2}.\d{2}.\d+.\d{2}.\d{2}', line)]
    #         print(result)
def lastFile(i):
    listOfFiles = glob.glob(i)
    latestFile = max(listOfFiles, key=os.path.getctime)
    return(latestFile)
if __name__ == '__main__': main()
I really appreciate your help!!
Sincerely.
If you want to go directly to the end of the file, follow these steps:
1. Every time your program runs, persist (store) the index of the last '\n'.
2. If you have a persisted index of the last '\n', you can seek directly to it with file.seek(yourpersistedindex).
3. After this, calling file.readline() gives you the lines starting from yourpersistedindex.
4. Store this index every time you run your script.
For Example:
you file log.txt has content like:
timestamp1 \n
timestamp2 \n
timestamp3 \n
import pickle

lastNewLineIndex = None
# here trying to read the persisted lastNewLineIndex
try:
    rfile = open('pickledfile', 'rb')
    lastNewLineIndex = pickle.load(rfile)
    rfile.close()
except:
    pass

logfile = open('log.txt', 'r')
newLastNewLineIndex = None
if lastNewLineIndex:
    # seek(index) takes the file pointer to that index; +1 skips
    # past the newline itself so readline() returns the next line
    logfile.seek(lastNewLineIndex + 1)
    # reads the line starting at the position we seeked to
    lastLine = logfile.readline()
    print(lastLine)
    # tell() gives you the current index
    newLastNewLineIndex = logfile.tell()
    logfile.close()
else:
    counter = 0
    text = logfile.read()
    for c in text:
        if c == '\n':
            newLastNewLineIndex = counter
        counter += 1

# here saving the new lastNewLineIndex
wfile = open('pickledfile', 'wb')
pickle.dump(newLastNewLineIndex, wfile)
wfile.close()
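For completeness, another common approach skips the persisted index entirely: open the file in binary mode, seek near the end, and take whatever follows the last newline in the final block. This is a sketch, not from the question; the helper name and block_size are my own choices, and a last line longer than block_size would be truncated:

```python
import os

def last_line(path, block_size=4096):
    # Read only the tail of the file, so a huge log is never fully loaded.
    with open(path, 'rb') as f:
        f.seek(0, os.SEEK_END)
        size = f.tell()
        offset = min(size, block_size)
        f.seek(size - offset)
        tail = f.read(offset)
        # Drop a trailing newline, then keep what follows the last remaining one
        lines = tail.rstrip(b'\n').split(b'\n')
        return lines[-1].decode('utf-8', errors='replace')
```

The timestamp regex from the question can then be applied to just that one returned line.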
Can you help me identify what's wrong with this code? I want to redirect all of the print output on the cmd to a txt file, but this code only writes the last line.
import urllib.request
fhand = urllib.request.urlopen('http://data.pr4e.org/romeo.txt')
for line in fhand:
    z = line.decode().strip()
    with open('romeo.txt', 'w') as f:
        print(z, file=f)
You are creating and overwriting the 'romeo.txt' file for every line of the content. Swap the for loop and the file opening, something like this:
import urllib.request
fhand = urllib.request.urlopen('http://data.pr4e.org/romeo.txt')
with open('romeo.txt', 'w') as f:
    for line in fhand:
        z = line.decode().strip()
        print(z, file=f)
tiedosto = input("Anna luettavan tiedoston nimi: ") # here I take the file name from the user
sana = input("Sana joka korvataan: ") # here is the word that I want to replace
korvaa = input("Sana jolla korvataan: ") # here is the word to replace it with
td = open(tiedosto, "r+") # here we open the file
for line in td:
    muutos = td.read().replace(sana, korvaa) # here it replaces the words
    td.write(muutos) # doesn't work?
    print(muutos)
td.close()
So why doesn't td.write(muutos) save to the file?
td is a file object; you want to iterate over its lines.
What you should do is:
for line in td.readlines():
Also, for replacing words, try using fileinput.
from this post:
fileinput already supports in-place editing. It redirects stdout to the file in this case:
#!/usr/bin/env python3
import fileinput
with fileinput.FileInput(filename, inplace=True, backup='.bak') as file:
    for line in file:
        print(line.replace(text_to_search, replacement_text), end='')
My program recursively processes a string to reverse it. I would like it to pull data directly from a website instead of from a text file, as it currently does, but I can't get it to pull the data from the website.
import urllib.request
def reverse(alist):
    #print(alist)
    if alist == []:
        return []
    else:
        return reverse(alist[1:]) + [alist[0]]
def main():
    #file1 = urllib.request.urlopen('http://devel.cs.stolaf.edu/parallel/data/cathat.txt').read()
    file1 = open('cat.txt','r')
    for line in file1:
        stulist = line.split()
        x = reverse(stulist)
        print(' '.join(x))
    file1.close()
main()
The commented-out lines are to show what I have tried.
You can use the URL like a normal file:
import urllib
...
f = urllib.urlopen(url)
for line in f:
    ...
f.close()
What you did was call read() on the opened URL, so you read all the content into the file1 variable and file1 became a string.
For python 3:
import urllib.request
...
f = urllib.request.urlopen(url)
for line in f:
    ...
f.close()
Also, you need to convert each line to the correct encoding. If the encoding is UTF-8, you can do the following:
for line in f:
    line = line.decode("utf-8")
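Putting the decode step together with the question's recursive reverse gives a Python 3 sketch like this (assuming the file served at the URL is plain UTF-8 text):

```python
import urllib.request

def reverse(alist):
    # Recursively reverse a list, as in the question's program
    if alist == []:
        return []
    else:
        return reverse(alist[1:]) + [alist[0]]

def main(url):
    # urlopen() yields bytes, so decode each line before splitting
    f = urllib.request.urlopen(url)
    for line in f:
        stulist = line.decode("utf-8").split()
        print(' '.join(reverse(stulist)))
    f.close()

# Example call (requires network access):
# main('http://devel.cs.stolaf.edu/parallel/data/cathat.txt')
```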
import urllib2
def reverse(alist):
    if alist == []:
        return []
    else:
        return reverse(alist[1:]) + [alist[0]]
def main():
    lines = [line.strip() for line in urllib2.urlopen('http://devel.cs.stolaf.edu/parallel/data/cathat.txt')]
    print lines
    print lines[::-1]
main()
Output
['The cat in the party hat', 'wore the hat', 'to the cat hat party.']
['to the cat hat party.', 'wore the hat', 'The cat in the party hat']
Thanks to Stack Overflow, I am able to read and copy a file. However, I need to read a picture file one line at a time, and the buffer array can't exceed 3,000 integers. How would I separate the lines, read them, and then copy them? Is that the best way to do this?
Here is my code, courtesy of @Chayim:
import os
import sys
import shutil
import readline
source = raw_input("Enter source file path: ")
dest = raw_input("Enter destination path: ")
file1 = open(source,'r')
if not os.path.isfile(source):
    print "Source file %s does not exist." % source
    sys.exit(3)
file_line = infile.readline()
try:
    shutil.copy(source, dest)
    infile = open(source,'r')
    outfile = open(dest,'r')
    file_contents = infile.read()
    file_contents2 = outfile.read()
    print(file_contents)
    print(file_contents2)
    infile.close()
    outfile.close()
except IOError, e:
    print "Could not copy file %s to destination %s" % (source, dest)
    print e
    sys.exit(3)
I added
file_line = infile.readline()
but I'm concerned that infile.readline() will return a string instead of integers. Also, how do I limit the number of integers it processes?
I think you want to do something like this:
infile = open(source, 'r')
file_contents_lines = infile.readlines()
for line in file_contents_lines:
    print line
This will get you all the lines in the file, as a list with each line as an element.
Take a look at the documentation here.
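Since the question mentions a buffer limit, note that a large or binary file (such as a picture) can also be copied in fixed-size chunks rather than line by line. This sketch uses a 3000-byte chunk to mirror the question's limit; the function name and default size are my own choices:

```python
def copy_in_chunks(source, dest, chunk_size=3000):
    # Copy a (possibly binary) file without ever holding more
    # than chunk_size bytes in memory at once.
    with open(source, 'rb') as infile, open(dest, 'wb') as outfile:
        while True:
            chunk = infile.read(chunk_size)
            if not chunk:
                break
            outfile.write(chunk)
```

For binary data like images, chunked binary reads are generally safer than readline(), since the bytes have no meaningful line structure.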