Python Serial readline() vs readlines()

I need to send some AT commands and read the responses. I am using the commands below; the response I get spans multiple lines:
serialPort.write(b"AT+CMD\r\n")
time.sleep(1)
response = serialPort.readlines()
If I use only readline() I don't get the full expected response, but if I use readlines() I do get the full data, though some lines are sometimes skipped. I need to know the difference between these two methods, and also how the
timeout flag
affects their functionality.

readline(): reads one line per call; successive calls return successive lines until the end of the stream is reached or the file is closed.
readlines(): reads all remaining lines and returns them as a list.
If you want the whole contents at once, you can use read() instead.
About the timeout flag: there is no general timeout flag for file I/O in Python, but if you mean the one in pySerial, it specifies the maximum time to wait for serial data before a read call returns with whatever it has received so far.
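The difference between the two calls can be demonstrated without a device at all. In this sketch, io.BytesIO stands in for the serial port, and the byte string is a made-up multi-line AT response:

```python
import io

# io.BytesIO stands in for the serial port; the byte string below
# is a made-up multi-line AT response
buf = io.BytesIO(b"AT+CMD\r\nOK\r\nDATA:1\r\n")

first = buf.readline()   # one line per call
rest = buf.readlines()   # everything remaining, as a list of lines

print(first)  # b'AT+CMD\r\n'
print(rest)   # [b'OK\r\n', b'DATA:1\r\n']
```

With pySerial specifically, readlines() only works sensibly when a timeout is set, because it keeps reading until a read times out; readline() with a timeout returns whatever it has collected when the timeout expires, which is why a too-short timeout can give you a truncated response.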

Related

Write x lines of subprocess stdout to a file

I'm writing a Python script that runs a command using the subprocess module and then writes the output to a file. Since the output is too big, I want to write just the last x lines of the output that contain the wanted information.
import subprocess
outFile=open('output.txt', 'w')
proc=subprocess.Popen(command, cwd=rundir, stdout=outFile)
The above code writes the whole output to a file (which is very very large), but what I want is just an x number of lines from the end of the output.
EDIT:
I know that I can post-process the file afterward, but what I want really is to write just the lines I need from the beginning without handling all those data.
I would suggest storing the output in a variable and then doing some processing. Note that this keeps the whole output in memory, so it is only practical when the output fits comfortably in RAM; for truly huge outputs you would need to process the stream line by line instead.
CODE
with open('output.txt', 'w') as fobj:
    out = subprocess.check_output(command).decode().rstrip().splitlines()
    fobj.write('\n'.join(out[-MAX_LINES:]))
EXPLANATION
The function subprocess.check_output returns the command's console output (as bytes in Python 3, so decode it to get a string).
The method str.rstrip() returns the string with trailing whitespace removed, so MAX_LINES counts only the last non-empty lines.
The method str.splitlines() returns a list of strings, each represents one line.
out[-MAX_LINES:]
When MAX_LINES > len(out), this will return the whole output as list.
COMMENT
Always use context managers (with ...)! They are safer for file management.
You can either truncate the file after it has been fully written, or give an io.StringIO to your process, which you can getvalue() and then write only the lines you want.
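If you really want to avoid holding the whole output in memory, as the EDIT asks, a bounded collections.deque over the process's stdout keeps only the trailing lines while the rest is discarded as it streams past. This is a sketch, not the answers' code: seq and MAX_LINES are stand-ins for the real command and limit.

```python
import subprocess
from collections import deque

MAX_LINES = 100  # hypothetical cap on how many trailing lines to keep

# 'seq' stands in here for the real long-running command
proc = subprocess.Popen(["seq", "1", "1000"], stdout=subprocess.PIPE, text=True)

# iterating the pipe yields one line at a time; a deque with maxlen
# keeps only the last MAX_LINES lines, so memory use stays bounded
last_lines = deque(proc.stdout, maxlen=MAX_LINES)
proc.wait()

with open("output.txt", "w") as fobj:
    fobj.writelines(last_lines)
```

The deque drops its oldest element automatically whenever a new line pushes it past maxlen, which is exactly the "last x lines" behaviour without ever buffering the full output.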

Python conditional statement based on text file string

Noob question here. I'm scheduling a cron job for a Python script every 2 hours, but I want the script to stop running after 48 hours, which is not a feature of cron. To work around this, I record the number of executions at the end of the script by appending a tally mark x to a text file, and at the beginning of the script I open that file and only run if the count is less than n.
However, my script seems to always run regardless of the conditions. Here's an example of what I've tried:
with open("curl-output.txt", "a+") as myfile:
    data = myfile.read()
    finalrun = "xxxxx"
    if data != finalrun:
        [CURL CODE]
        with open("curl-output.txt", "a") as text_file:
            text_file.write("x")
            text_file.close()
I think I'm missing something simple here. Please advise if there is a better way of achieving this. Thanks in advance.
The problem with your original code is that you're opening the file in a+ mode, which seems to set the seek position to the end of the file (try print(data) right after you read the file). If you use r instead, it works. (I'm not sure that's how it's supposed to be. This answer states it should write at the end, but read from the beginning. The documentation isn't terribly clear).
Some suggestions: Instead of comparing against the "xxxxx" string, you could just check the length of the data (if len(data) < 5). Or alternatively, as was suggested, use pickle to store a number, which might look like this:
import pickle

try:
    with open("curl-output.txt", "rb") as myfile:
        num = pickle.load(myfile)
except FileNotFoundError:
    num = 0

if num < 5:
    do_curl_stuff()
    num += 1
    with open("curl-output.txt", "wb") as myfile:
        pickle.dump(num, myfile)
Two more things concerning your original code: You're making the first with block bigger than it needs to be. Once you've read the string into data, you don't need the file object anymore, so you can remove one level of indentation from everything except data = myfile.read().
Also, you don't need to close text_file manually. with will do that for you (that's the point).
This sounds more like a job for scheduling with the at command?
See http://www.ibm.com/developerworks/library/l-job-scheduling/ for different job scheduling mechanisms.
The first bug that is immediately obvious to me is that you are appending to the file even if data == finalrun. So when data == finalrun, you don't run curl but you do append another 'x' to the file. On the next run, data will be not equal to finalrun again so it will continue to execute the curl code.
The solution is of course to nest the code that appends to the file under the if statement.
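Putting the fixes from these answers together (read mode instead of a+, and the tally appended only inside the if), a corrected sketch of the original approach might look like this; the limit of 5 and the file name are just the question's examples:

```python
def run_if_under_limit(limit=5, path="curl-output.txt"):
    """Run the job only if it has executed fewer than `limit` times,
    tracked by tally marks in `path`."""
    try:
        with open(path, "r") as f:   # 'r' reads from the start of the file
            data = f.read()
    except FileNotFoundError:        # first run: no tally file yet
        data = ""
    if data.count("x") < limit:
        # [CURL CODE] would go here
        with open(path, "a") as f:   # append the tally only on a real run
            f.write("x")
        return True
    return False
```

After `limit` calls the function stops returning True, so the cron job keeps firing but the curl code no longer runs.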
Well, there is probably a trailing newline (\n) character, which means your file contains something like xx\n and not simply xx. That is probably why your condition does not work :)
EDIT
If, at the Python command line, you type
open('filename.txt', 'r').read() # where filename is the name of your file
you will be able to see whether there is a \n or not.
Try using this condition in the if clause instead:
if data.count('x') == 24:
The data string may contain extraneous characters like newlines. Check repr(data) to see whether it actually is 24 x's.

Creating a program which counts words number in a row of a text file (Python)

I am trying to create a program which takes an input file, counts the number of words in each row, and writes that count as a line in another output file. I managed to develop this code:
in_file = "our_input.txt"
out_file = "output.txt"

f = open(in_file)
g = open(out_file, "w")

for line in f:
    if line == "\n":
        g.write("0\n")
    else:
        g.write(str(line.count(" ") + 1) + "\n")
Now, this works well, except that only part of the output appears. If my input file has 8000 lines, only about the first 6800 are written; with 6000 lines, the output is similarly cut short (all numbers are approximate).
I tried creating another program, which splits each line to a list, and then counting the length of it, but the problem remains just the same.
Any idea what could cause this?
You need to close each file after you're done with it. The safest way to do this is by using the with statement:
with open(in_file) as f, open(out_file, "w") as g:
    for line in f:
        if line == "\n":
            g.write("0\n")
        else:
            g.write(str(line.count(" ") + 1) + "\n")
When reaching the end of a with block, all files you opened in the with line will be closed.
The reason for the behavior you see is that for performance reasons, reading and writing to/from files is buffered. Because of the way hard drives are constructed, data is read/written in blocks rather than in individual bytes - so even if you attempt to read/write a single byte, you have to read/write an entire block. Therefore, most programming languages' built-in file IO functions actually read (at least) one block at a time into memory and feed you data from that in-memory block until it needs to read another block. Similarly, writing is performed by actually writing into a memory block first, and only writing the block to disk when it is full. If you don't close the file writer, whatever is in the last in-memory block won't be written.
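You can watch this buffering happen from Python itself. The sketch below (the file name and contents are illustrative) writes a short line and checks the on-disk size before and after flush(); with the default block buffering, nothing typically reaches the disk until the buffer is flushed or the file is closed:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.txt")

f = open(path, "w")
f.write("hello\n")                         # lands in the in-memory buffer
size_before_flush = os.path.getsize(path)  # typically still 0 bytes on disk
f.flush()                                  # force the buffer out to disk
size_after_flush = os.path.getsize(path)   # now the 6 bytes are there
f.close()
```

This is exactly why the word counts for the last ~1200 lines were missing: they were sitting in the unwritten buffer when the script exited without closing the file.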

Python: how to constantly update text lines

I have a text file of 20 lines, each either a 0 or a 2. Right now I rewrite all 20 lines of the text file based on ping results. A separate program reads those 20 lines, but it generates errors whenever there are not 20 lines present (i.e. while the text file is being written). How can I edit each individual text line without rewriting the document?
ping ip
if ping == 0:
    f = open("status", 'ab')
    f.write("0\n")
    f.close
That's one condition for how it writes. I do wipe the document before this executes.
If I understand the use of "constantly" in the title correctly, you're trying to pass real-time data here. Programs should not communicate in real time through files: it is fragile as well as slow. If that's not the case, you may want to rewrite the whole file each time, opening it with w (write) instead of a (append).
if ping == 0:
    with open("status", 'wb') as f:
        # write all 20 lines
Read more about modes.
Note: to actually close a file you must call its close method, i.e. f.close() and not f.close. If you use with as a context manager like I suggest, the file is closed automatically once the context is over (when indentation returns to the with level).
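One way to keep the reader from ever seeing a partially written file, rather than editing lines in place, is to write the new contents to a temporary file and atomically rename it over the old one. This is a sketch; write_status and the file name are made up for illustration:

```python
import os
import tempfile

def write_status(lines, path="status"):
    """Write all lines to a temporary file, then atomically swap it into
    place, so a concurrent reader never sees a half-written file."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)   # temp file on the same filesystem
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(str(x) for x in lines) + "\n")
    os.replace(tmp, path)                     # atomic rename on POSIX and Windows

write_status([0, 2] * 10)  # 20 lines of 0s and 2s, as in the question
```

The reading program then always opens either the complete old file or the complete new one, never a file with fewer than 20 lines.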

Overwriting a file in python

I'm trying to overwrite a file in Python so it only keeps the most up-to-date information read from a serial port. I've tried several different methods and read quite a few different posts, but the file keeps appending the information over and over instead of overwriting the previous entry.
import serial

ser = serial.Serial('/dev/ttyUSB0', 57600)
target = open('wxdata', 'w+')

with ser as port, target as outf:
    while 1:
        target.truncate()
        outf.write(ser.read())
        outf.flush()
I have a weather station sending data wirelessly to a raspberry pi, I just want the file to keep one line of current data received. right now it just keeps looping and adding over and over. Any help would be greatly appreciated..
I would change your code to look like:
from serial import Serial

with Serial('/dev/ttyUSB0', 57600) as port:
    while True:
        with open('wxdata', 'w') as file:
            file.write(port.read())
This will make sure it gets truncated, flushed, etc. Why do work you don't have to? :)
By default, truncate() only truncates the file to the current position. Which, with your loop, is only at 0 the first time through. Change your loop to:
while 1:
    outf.seek(0)
    outf.truncate()
    outf.write(ser.read())
    outf.flush()
Note that truncate() does accept an optional size argument, which you could pass 0 for, but you'd still need to seek back to the beginning before writing the next part anyway.
Before you start writing the file, add the following line:
outf.seek(0)
outf.truncate()
This will make it so that whatever you write next will overwrite the file
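Putting seek(0), truncate() and flush() together, here is a minimal runnable sketch of the corrected loop; a list of fake readings stands in for ser.read():

```python
# fake readings standing in for ser.read() from the weather station
readings = ["t=20.1", "t=20.5", "t=21.0"]

with open("wxdata", "w") as outf:
    for chunk in readings:
        outf.seek(0)     # go back to the start of the file
        outf.truncate()  # drop whatever was written before
        outf.write(chunk)
        outf.flush()     # push it out of the buffer immediately

print(open("wxdata").read())  # -> t=21.0, only the latest reading survives
```

Each pass through the loop replaces the file's contents entirely, so the file always holds exactly one line of current data.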
