File is created but cannot be written in Python

I am trying to write some results I get from a function, for a range of inputs, but I don't understand why the file ends up empty. The function is working fine, because I can see the results in the console when I use print. First, I create the file, and that part works because the file does get created; the output file name is taken from a string, and that part works too. The following creates the file in the given path:
report_strategy = open(output_path+strategy.partition("strategy(")[2].partition(",")[0]+".txt", "w")
It creates a text file whose name is taken from a string named "strategy". For example, with:
strategy = "strategy(abstraction,Ent_parent)"
a file called "abstraction.txt" is created in the output path folder. So far so good. But I can't manage to write anything to this file. I have a range of a few integers:
maps = (175,178,185)
This is the function:
def strategy_count(map_path,map_id):
The following loop does the counting for each item in the range "maps" to return an integer:
for i in maps:
    report_strategy.write(str(i), ",", str(strategy_count(maps_path,str(i))))
and the file is closed at the end:
report_strategy.close()
Now the following:
for i in maps:
    print str(i), "," , strategy_count(maps_path,str(i))
does give me what I want in the console:
175 , 3
178 , 0
185 , 1
What am I missing?! The function works, the file is created, I see the output in the console as I want, but I can't write the same thing to the file. And of course, I close the file at the end.
This is a part of a program that reads text files (actually Prolog files) and runs an Answer Set Programming solver called Clingo. Then the output is read to find instances of occurring strategies (a series of actions with specific rules). The whole code:
import pmaps
import strategies
import generalization
# select the strategy to count:
strategy = strategies.abstraction_strategy
import subprocess
def strategy_count(path,name):
    p = subprocess.Popen([pmaps.clingo_path,"0",""],
                         stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
    #
    ## write input facts and rules to clingo
    with open(path+name+".txt","r") as source:
        for line in source:
            p.stdin.write(line)
        source.close()
    # some generalization rules added
    p.stdin.write(generalization.parent_of)
    p.stdin.write(generalization.chain_parent_of)
    # add the strategy
    p.stdin.write(strategy)
    p.stdin.write("#hide.")
    p.stdin.write("#show strategy(_,_).")
    #p.stdin.write("#show parent_of(_,_,_).")
    # close the input to clingo
    p.stdin.close()
    lines = []
    for line in p.stdout.readlines():
        lines.append(line)
    counter = 0
    for line in lines:
        if line.startswith('Answer'):
            answer = lines[counter+1]
            break
        if line.startswith('UNSATISFIABLE'):
            answer = ''
            break
        counter += 1
    strategies = answer.count('strategy')
    return strategies
# select which data set (from the "pmaps" file) to count strategies for:
report_strategy = open(pmaps.hw3_output_path+strategy.partition("strategy(")[2].partition(",")[0]+".txt", "w")
for i in pmaps.pmaps_hw3_fall14:
    report_strategy.write(str(i), ",", str(strategy_count(pmaps.path_hw3_fall14,str(i))))
report_strategy.close()
# the following is for testing the code. It is working and there is the right output in the console
#for i in pmaps.pmaps_hw3_fall14:
#    print str(i), "," , strategy_count(pmaps.path_hw3_fall14,str(i))

write takes one argument, which must be a string. It doesn't take multiple arguments like print, and it doesn't add a line terminator.
If you want the behavior of print, there's a "print to file" option:
print >>whateverfile, stuff, to, print
Looks weird, doesn't it? The function version of print, active by default in Python 3 and enabled with from __future__ import print_function in Python 2, has nicer syntax for it:
print(stuff, to, print, file=whateverfile)
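For example, a minimal sketch using the sample values from the question (175 and 3 are just the numbers from the console output, and "abstraction.txt" is the file name from the question):
from __future__ import print_function   # only needed on Python 2

with open("abstraction.txt", "w") as out:
    print(175, ",", 3, file=out)   # print inserts spaces between arguments and appends a newline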

The problem was with write, which, as @user2357112 mentioned, takes only one argument. The solution can also be joining the strings with + or join():
for i in maps:
    report.write(str(i)+ ","+str(strategy_count(pmaps.path_hw3_fall14,str(i)))+"\n")
@user2357112, your answer has the advantage that if your test/debug output in the console already shows the right answer, you just need to write that same thing to the file. Thanks.
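For completeness, a small sketch of the single-string forms that write() accepts (the values are the sample ones from the question, and report_strategy is the question's file handle):
i, count = 175, 3
report_strategy.write(str(i) + "," + str(count) + "\n")        # concatenation
report_strategy.write(",".join([str(i), str(count)]) + "\n")   # join()
report_strategy.write("{},{}\n".format(i, count))              # str.format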

Related

Loop an existing script

I'm using a script from a third party that I can't modify or show (let's call it original.py), which takes a file and produces some calculations. At the end it outputs a result (using the print statement).
Since I have many files, I decided to make a second script that gets all the wanted files and runs each of them through original.py:
1st get list of all files to run
2nd run each file through the original.py
3rd obtain results from each file
I have the 1st and 2nd steps working. However, the end result only saves the calculations from the last file it read.
import sys
import original
import glob
import os
fn = str(sys.argv[1])
for filename in sys.argv[1:]:
    print(filename)

ficheiros = [f for f in glob.glob(fn)]
for ficheiro in ficheiros:
    original.file = bytes(ficheiro, 'utf-8')
    original.function()
To summarize:
Knowing I can't change the original script (which produces its output with a print statement), how can I obtain the results for each loop? Is there a better way than using a for loop?
The first script can be invoked with python original.py
It requires the file to be changed manually inside the script in the original.file line.
This script outputs the result in the console and I redirect it with: python original.py > result.txt
At the moment when I try to run my script, it reads all the correct files in the folder but only returns the results for the last file.
(I tried to reformulate the question; hopefully it's easier to understand now.)
The problem was due to a mistake in `ficheiros = [f for f in glob.glob(fn)]`: it was only reading one file, hence only outputting one result.
Thanks for the time.sleep() trick in the comments.
Solved:
I changed the initial part to:
fn = str(sys.argv[1])
ficheiros = []
for filename in sys.argv[1:]:
    ficheiros.append(filename)
    #print(filename)
and now it correctly reads all the files and it outputs all the results
Depending on your operating system, there are different ways to take what is printed to the console and append it to a file.
For example, on Linux, you could run the script that calls original.py for every file as python yourfile.py >> outputfile.txt, which will effectively append everything that is printed to outputfile.txt.
The syntax is similar for Windows.
I'm not quite sure what you're asking, but you could try one of these:
Either redirecting all output to a file for later use, by running the script like so: python secondscript.py > outfilename.txt
Or, and this might or might not work for you, redefining the print command as a function that outputs the result the way you want, e.g.:
def print(x):
    # 'a' appends, so each call adds to the file instead of overwriting it
    with open('outfile.txt', 'a') as f:
        f.write('example: ' + x + '\n')
If you choose the second option, I recommend saving the old print function (oldprint = print) so you can restore and use the regular print later.
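If you are on Python 3.4+, a related sketch (assuming the original.file and original.function names from the question, and that original.py does all its output via print; the .result.txt suffix is just a placeholder) is to redirect stdout only while each file is processed:
import contextlib
import glob
import sys
import original

for ficheiro in glob.glob(sys.argv[1]):
    original.file = bytes(ficheiro, 'utf-8')
    # everything original.function() prints goes into one result file per input
    with open(ficheiro + '.result.txt', 'w') as out, contextlib.redirect_stdout(out):
        original.function()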
I don't know if I got exactly what you want. You have a first script named original.py which takes some arguments, returns things in the form of print statements, and you would like to grab those print statements in your script to do things with them?
If so, a solution could be the subprocess module:
Let's say that this is original.py:
print("Hi, I'm original.py")
print("print me!")
And this is main.py:
import subprocess
script_path = "original.py"
print("Executing ", script_path)
process = subprocess.Popen(["python3", script_path], stdout=subprocess.PIPE)
for line in process.stdout:
    print(line.decode("utf8"))
You can easily add more arguments to the Popen call, like ["arg1", "arg2"], etc.
Output:
Executing original.py
Hi, I'm original.py
print me!
and you can grab the lines in the main.py to do what you want with them.

Log of everything typed via input('foo') in Python

I want to log what is typed in my program (a home assistant) to a sort of file/directory/log of what is typed.
I only want to log it when I call foo = input('::'), not just random typing that has nothing to do with the program whatsoever.
I have looked into this question with Google, and it is all about defining your own functions and that sort of thing.
I would like to end up with something like this: Joe/chat/day/good, where each time 'Joe' is typed in, a new file/directory/log (or whatever you want to call it) is started.
Should I make my own version of input, or something else?
The solution to your problem is to make your own input function.
import os

logdir = os.getcwd()                      # Path to the log directory
activation = ["joe", "Joe"]               # Start a new log when such a word is first in the input
thelog = open(os.path.join(logdir, "0 - startlog.log"), "w")
lognum = 0
_input = input                            # Save the original input()

def input(prompt):
    global thelog, lognum
    i = _input(prompt)
    check = i.split(None, 1)              # Split into the first word and the rest of the sentence
    if not check or check[0] not in activation:
        thelog.write(i + "\n")            # Log the entry in the current log file
        return i
    # Now change the log:
    thelog.close()
    lognum += 1
    thelog = open(os.path.join(logdir, ("%i - " % lognum) + check[0] + ".log"), "w")
    thelog.write(i + "\n")                # write() instead of the nonexistent writeline()
    return i
Or something similar.
After this code, you use input() as usual, but anything entered also goes into a file.
Entries are written in lines. Empty inputs will reflect as empty lines.
When an activation word is first in the entered line, a new log is started in a new file.
The files are numbered 0, 1, 2, ... and the activation word follows in their name. I understood you wish to achieve something like that.
If you want to store everything the user ever types, then you make a wrapper around sys.stdin whose read() method reads from sys.stdin, writes the result to the log, and then returns the result.
And you swap sys.stdin with the wrapper object.
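A rough sketch of that wrapper idea (untested; it relies on input() falling back to sys.stdin.readline() once stdin is no longer the real console object, and typed.log is just a placeholder name):
import sys

class LoggingStdin:
    def __init__(self, real, logpath):
        self._real = real
        self._log = open(logpath, "a")
    def readline(self, *args):
        line = self._real.readline(*args)   # what input() reads
        self._log.write(line)               # copy it to the log
        self._log.flush()
        return line
    def __getattr__(self, name):
        return getattr(self._real, name)    # delegate everything else to the real stdin

sys.stdin = LoggingStdin(sys.stdin, "typed.log")   # swap in the wrapper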

Python how to execute code from a txt file

I want to keep a piece of code in a txt file and execute it from my Python script.
For example in the txt file there is
print("ok")
I want my programme to print ok, not print print("ok"). How can I do this?
Doing what you want is usually a security risk, but not necessarily so.
You'll definitely have to notify the user about the potential risk.
More than one program uses execfile() or compile() and the exec statement to provide a plug-in system.
There is nothing so ugly about it; you just have to know what you are doing, when, and where.
execfile(), eval() and the exec statement all allow you to specify a scope in which your code will be executed/evaluated.
myscope = {}
execfile("myfile.txt", myscope)
This will prevent the new code from being mixed with the old one. All variables, classes, functions and modules from myfile.txt will be contained in the myscope dictionary.
This, however, does not prevent malicious code from deleting every file it can reach on your disk, or something similar.
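For reference, a minimal sketch of the same scoped execution in Python 3, where execfile() no longer exists (same myfile.txt and myscope idea as above):
myscope = {}
with open("myfile.txt") as f:
    code = compile(f.read(), "myfile.txt", "exec")
exec(code, myscope)   # names defined by the file end up in the myscope dictionary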
Python 2 has a nice module called rexec, but from Python 2.2 it doesn't work any more.
It implements execfile(), the exec statement and eval() in a restricted environment.
Though it no longer works, it is still there, and you can dig through its code to see how it is done.
So, you see, it is possible to allow only secure code to execute (well, as secure as it can be) from an external source.
There is another way too.
You can load the file, compile the code from it, and then inspect it to see what it does, and only then decide: yes, I'll execute it, or no, I won't. This is, however, a little bit more work and a lot of complications.
But I don't think it will be necessary to go through all that. Please elaborate some more on your problem. What exactly do you mean by level editor?
I don't think external code is a solution for that.
You are looking for the eval function.
eval is a function that takes a Python expression represented as a string and evaluates it at run time.
Example
>>> x = 1
>>> y = eval('x+1')
>>> print(y)
2
It works in both Python 2.x and 3.x
Check the documentation : https://docs.python.org/2.7/library/functions.html#eval
So I wanted to do the same thing. I came across this: GeeksforGeeks.
With this we could make a text file. Let's say it's called myfile.txt; in the first line we will add print("ok") and in the second line add A += 1.
Now let's move over to the script editor.
# open the txt file
f = open("myfile.txt", "r")
# read the file's lines into a variable
data = f.readlines()
# remove unwanted \n for new lines and '' left over by the code above
# readlines() returns a list, so we need to convert each entry to str
# in data[] pick the line you wish to read from
info = str(data[0]).strip("\n").strip("'")
# run the code
exec(info)
# running the A += 1 in the txt file
# run this
A = 0
info = str(data[1]).strip("\n").strip("'")
while A == 0:
    print(A)
    exec(info)
print(A)
# if you wanted to, you can even define a variable this way
alist = ["B = 0", "B += 1", "print(B)"]
runner = [0, 1, 2, 1, 2, 0, 2]
for i in range(len(runner)):
    exec(alist[int(runner[i])])

Python: how to capture output to a text file? (only 25 of 530 lines captured now)

I've done a fair amount of lurking on SO and a fair amount of searching and reading, but I must also confess to being a relative noob at programming in general. I am trying to learn as I go, and so I have been playing with Python's NLTK. In the script below, I can get everything to work, except it only writes what would be the first screen of a multi-screen output, at least that's how I am thinking about it.
Here's the script:
#! /usr/bin/env python
import nltk
# First we have to open and read the file:
thefile = open('all_no_id.txt')
raw = thefile.read()
# Second we have to process it with nltk functions to do what we want
tokens = nltk.wordpunct_tokenize(raw)
text = nltk.Text(tokens)
# Now we can actually do stuff with it:
concord = text.concordance("cultural")
# Now to save this to a file
fileconcord = open('ccord-cultural.txt', 'w')
fileconcord.writelines(concord)
fileconcord.close()
And here's the beginning of the output file:
Building index...
Displaying 25 of 530 matches:
y .   The Baobab Tree : Stories of Cultural Continuity The continuity evident
regardless of ethnicity , and the cultural legacy of Africa as well . This Af
What am I missing here to get the entire 530 matches written to the file?
text.concordance(self, word, width=79, lines=25) seems to have other parameters, as per the manual.
I see no way to extract the size of the concordance index; however, the concordance printing code seems to have this part: lines = min(lines, len(offsets)), so you can simply pass sys.maxint as the last argument:
concord = text.concordance("cultural", 75, sys.maxint)
Added:
Looking at your original code now, I can't see a way it could have worked before. text.concordance does not return anything, but outputs everything to stdout using print. Therefore, the easy option would be redirecting stdout to your file, like this:
import sys
....
# Open the file
fileconcord = open('ccord-cultural.txt', 'w')
# Save old stdout stream
tmpout = sys.stdout
# Redirect all "print" calls to that file
sys.stdout = fileconcord
# Init the method
text.concordance("cultural", 200, sys.maxint)
# Close file
fileconcord.close()
# Reset stdout in case you need something else to print
sys.stdout = tmpout
Another option would be to use the respective classes directly and omit the Text wrapper. Just copy bits from here and combine them with bits from here and you are done.
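As a rough sketch of that direct route, assuming nltk.text.ConcordanceIndex and its offsets() method (the window of 8 tokens on each side is arbitrary here, and tokens is the list from the question's script):
from nltk.text import ConcordanceIndex

ci = ConcordanceIndex(tokens, key=lambda s: s.lower())
with open('ccord-cultural.txt', 'w') as out:
    for offset in ci.offsets('cultural'):                # every match, not just the first 25
        left = ' '.join(tokens[max(0, offset - 8):offset])
        right = ' '.join(tokens[offset + 1:offset + 9])
        out.write('%s %s %s\n' % (left, tokens[offset], right))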
Update:
I found this thread, "write text.concordance output to a file Options",
from the nltk usergroup. It's from 2010, and it states:
Documentation for the Text class says: "is intended to support
initial exploration of texts (via the interactive console). ... If you
wish to write a program which makes use of these analyses, then you
should bypass the Text class, and use the appropriate analysis
function or class directly instead."
If nothing has changed in the package since then, this may be the source of your problem.
--- previously ---
I don't see a problem with writing to the file using writelines():
file.writelines(sequence)
Write a sequence of strings to the file. The sequence can be any
iterable object producing strings, typically a list of strings. There
is no return value. (The name is intended to match readlines();
writelines() does not add line separators.)
Note the italicized part: did you examine the output file in different editors? Perhaps the data is there, but not being rendered correctly due to missing end-of-line separators?
Are you sure this part is generating the data you want to output?
concord = text.concordance("cultural")
I'm not familiar with nltk, so I'm just asking as part of eliminating possible sources for the problem.

awk in python: How to use awk scripts in a python class?

I am trying to run an awk script using python, so I can process some data.
Is there any way to get an awk script to run in a Python class without using the system class to invoke it as a shell process? The framework where I run these Python scripts does not allow the use of a subprocess call, so I am stuck either figuring out a way to convert my awk script to Python or, if it is possible, running the awk script within Python.
Any suggestions? My awk script basically reads a text file and isolates blocks of proteins that contain a specific chemical compound (the output is generated by our framework; I've added an example of what it looks like below), and then writes those blocks out to a different file.
buildProtein compoundA compoundB
begin fusion
Calculate : (lots of text here on multiple lines)
(more lines)
Final result - H20: value CO2: value Compound: value
Other Compounds X: Value Y: value Z:value
[...another similar block]
So, for example, if I build a protein I need to see whether the compounds in the "Final result" line include CH3COOH. If they do, I have to take the whole block, starting from the command "buildProtein" until the beginning of the next block, and save it to a file; then I move on to the next block and check again whether it has the compound I am looking for. If it does not have it, I skip to the next one, and so on until the end of the file (the file has multiple occurrences of the compound I search for; sometimes they are contiguous, while other times they alternate with blocks that do not have the compound).
Any help is more than welcome; I've been banging my head on this for weeks now, and after finding this site I decided to ask for some help.
Thanks in advance for your kindness!
If you can't use the subprocess module, the best bet is to recode your AWK script in Python. To that end, the fileinput module is a great transition tool with an AWK-like feel.
Python's re module can help, or, if you can't be bothered with regular expressions and just need some quick field separation, you can use the built-in str .split() and .find() methods.
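For instance, a minimal AWK-flavoured sketch with fileinput and str methods (the file names framework_output.txt and matches.txt are placeholders, and the CH3COOH marker comes from the question's description; adjust them to the real format):
import fileinput

blocks = []          # all blocks found in the framework output
current = []
for line in fileinput.input('framework_output.txt'):    # read line by line, awk-style
    if line.startswith('buildProtein') and current:     # a new block starts here
        blocks.append(current)
        current = []
    current.append(line)
if current:
    blocks.append(current)                               # don't forget the last block

with open('matches.txt', 'w') as out:
    for block in blocks:
        if any('CH3COOH' in line for line in block):     # keep only blocks with the compound
            out.writelines(block)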
I have barely started learning AWK, so I can't offer any advice on that front. However, for some python code that does what you need:
class ProteinIterator():
    def __init__(self, file):
        self.file = open(file, 'r')
        self.first_line = self.file.readline()
    def __iter__(self):
        return self
    def __next__(self):
        "returns the next protein build"
        if not self.first_line:   # reached end of file
            raise StopIteration
        file = self.file
        protein_data = [self.first_line]
        while True:
            line = file.readline()
            if line.startswith('buildProtein ') or not line:
                self.first_line = line
                break
            protein_data.append(line)
        return Protein(protein_data)

class Protein():
    def __init__(self, data):
        self._data = data
        for line in data:
            if line.startswith('buildProtein '):
                self.initial_compounds = tuple(line[13:].split())
            elif line.startswith('Final result - '):
                pieces = line[15:].split()[::2]   # every other piece is a name
                self.final_compounds = tuple([p[:-1] for p in pieces])
            elif line.startswith('Other Compounds '):
                pieces = line[16:].split()[::2]   # every other piece is a name
                self.other_compounds = tuple([p[:-1] for p in pieces])
    def __repr__(self):
        return ("Protein(%s)" % self._data[0])
    @property
    def data(self):
        return ''.join(self._data)
What we have here is an iterator for the buildProtein text file, which returns one protein at a time as a Protein object. This Protein object is smart enough to know its inputs, final results, and other results. You may have to modify some of the code if the actual text in the file is not exactly as represented in the question. Following is a short test of the code with example usage:
if __name__ == '__main__':
    test_data = """\
buildProtein compoundA compoundB
begin fusion
Calculate : (lots of text here on multiple lines)
(more lines)
Final result - H20: value CO2: value Compound: value
Other Compounds X: Value Y: value Z: value"""
    open('testPI.txt', 'w').write(test_data)
    for protein in ProteinIterator('testPI.txt'):
        print(protein.initial_compounds)
        print(protein.final_compounds)
        print(protein.other_compounds)
        print()
        if 'CO2' in protein.final_compounds:
            print(protein.data)
I didn't bother saving values, but you can add that in if you like. Hopefully this will get you going.
