A function that can be stored in memory - python

Suppose we have two Python programs, calculate.py and show_results.py.
When calculate.py runs in the terminal, it leaves a variable (say, a list called result) in the computer's memory. When we then run show_results.py in the terminal, it prints the result from the program that ran before it.
Suppose the result of calculate.py is the list A = [83, 22]. The terminal session would look like this:
$:~ python3 calculate.py
-------Calculation Done--------
$:~ python3 show_results.py
83, 22
Any suggestions?
Any response will be appreciated.

As #blue_note suggests, it is not possible at the kernel level.
1) You can store the first script's result in the filesystem or a database and retrieve it later.
2) You can write a single Python script that handles all of this functionality in one run.

I think you can store that data in a .json file. For that, you can use the json module:
import json

with open('data.json', 'w') as outfile:
    json.dump(data, outfile)
And read it back in show_results.py:
import json

with open('data.json') as f:
    data = json.load(f)
Here is the documentation for the json module:
https://docs.python.org/3/library/json.html

You can't do that, and there's no way around it: it's not in Python's hands, the operating system decides. When you run python my_script.py, you create a process. The process has its own memory space for as long as it runs. When the program terminates, that memory is cleared. When you run the second script, the execution of the first script has never happened, as far as the OS is concerned.
You could get around it by keeping the first process running and using some interprocess communication method, but that's difficult and has no real benefit here. Just create a script that gets the results from the first script and passes them to the second, if you only want one-off results. Or store them in a file or database if you care about the results long-term.
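For a one-off run, here is a minimal sketch of that wrapper idea, assuming you can refactor the two scripts to expose functions (calc and show are hypothetical names):
import calculate       # assumes calculate.py defines a function calc()
import show_results    # assumes show_results.py defines a function show(result)

result = calculate.calc()     # e.g. [83, 22]; lives in this single process's memory
show_results.show(result)     # prints: 83, 22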

Related

Loop an existing script

I'm using a script from a third party that I can't modify or show (let's call it original.py), which takes a file and produces some calculations. At the end it outputs a result (using the print statement).
Since I have many files, I decided to make a second script that gathers all the wanted files and runs each of them through original.py:
1st get list of all files to run
2nd run each file through the original.py
3rd obtain results from each file
I have the 1st and 2nd step. However, the end result only saves the calculations from the last file it read.
import sys
import glob
import os

import original

fn = str(sys.argv[1])
for filename in sys.argv[1:]:
    print(filename)

ficheiros = [f for f in glob.glob(fn)]
for ficheiro in ficheiros:
    original.file = bytes(ficheiro, 'utf-8')
    original.function()
To summarize:
Knowing that I can't change the original script (which reports via a print statement), how can I obtain the results for each loop iteration? Is there a better way than using a for loop?
The original script can be invoked with python original.py.
It requires the file to be changed manually inside the script, in the original.file line.
The script outputs the result to the console, and I redirect it with: python original.py > result.txt
At the moment, when I try to run my script, it reads all the correct files in the folder but only returns the results for the last file.
#
(I tried to reformulate the question; hopefully it's easier to understand.)
#
The problem was due to a mistake in `ficheiros = [f for f in glob.glob(fn)]`: it was only reading one file, hence only outputting one result.
Thanks for the time.sleep() trick in the comments.
Solved:
I changed the initial part to:
fn = str(sys.argv[1])
ficheiros = []
for filename in sys.argv[1:]:
    ficheiros.append(filename)
    # print(filename)
and now it correctly reads all the files and it outputs all the results
Depending on your operating system, there are different ways to take what is printed to the console and append it to a file.
For example, on Linux you could run the file that calls original.py for every input as python yourfile.py >> outputfile.txt, which effectively saves everything that is printed into outputfile.txt.
The syntax is similar on Windows.
I'm not quite sure what you're asking, but you could try one of these:
Either redirect all output to a file for later use, by running the script like so: python secondscript.py > outfilename.txt
Or, and this might or might not work for you, redefine the print command as a function that outputs the result however you want, e.g.:
def print(x):
    with open('outfile.txt', 'a') as f:  # 'a' appends; 'w' would overwrite on every call
        f.write('example: ' + x)
If you choose the second option, I recommend saving the old print function (oldprint = print) so you can restore and use the regular print later.
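One sketch of that save-and-restore pattern, assuming Python 3 (where print is a built-in function) and that original.function() is the call whose output you want captured; overriding builtins.print, rather than just the local name, is what makes an imported module's print calls go through the replacement:
import builtins
import original   # the third-party module from the question

oldprint = builtins.print                # keep a reference to the real print

def fileprint(x):                        # simplified: handles one argument only
    with open('outfile.txt', 'a') as f:  # 'a' so repeated calls append
        f.write(str(x) + '\n')

builtins.print = fileprint               # print calls everywhere now hit the file
original.function()                      # its printed results land in outfile.txt
builtins.print = oldprint                # restore normal printing afterwards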
I'm not sure I've got exactly what you want: you have a first script named original.py which takes some arguments and returns things in the form of print statements, and you would like to grab these print statements in your script and do things with them?
If so, a solution could be the subprocess module.
Let's say that this is original.py:
print("Hi, I'm original.py")
print("print me!")
And this is main.py:
import subprocess

script_path = "original.py"
print("Executing", script_path)
process = subprocess.Popen(["python3", script_path], stdout=subprocess.PIPE)
for line in process.stdout:
    print(line.decode("utf8"))
You can easily pass more arguments in the Popen list, e.g. ["python3", script_path, "arg1", "arg2"] etc.
Output:
Executing original.py
Hi, I'm original.py
print me!
and you can grab the lines in main.py to do whatever you want with them.
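For example, to keep the results instead of re-printing them, the loop can collect the lines into a list (same hypothetical file name as above):
import subprocess

process = subprocess.Popen(["python3", "original.py"], stdout=subprocess.PIPE)
results = [line.decode("utf8").rstrip("\n") for line in process.stdout]
process.wait()    # reap the child process
# results is now e.g. ["Hi, I'm original.py", "print me!"]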

Python reading a file line by line and sending it to another Python script

Good day.
Today I was trying to practice Python. I'm trying to make a script that reads the lines of a file containing only numbers and uses those numbers as a parameter for another Python script.
Here at work I sometimes need to execute a Python script called Suspend.py. Every time I execute this script I must type the following:
Suspend.py suspend telefoneNumber
I have to do this procedure many times during the day, once for every number on the list, and it is usually a very long list. So I thought I'd try to make things a little bit faster and create a Python script myself.
Thing is, I just started learning Python on my own and I kinda suck at it, so I have no idea how to do this.
In one file I have the following numbers:
87475899
87727856
87781681
87794922
87824499
88063188
88179211
88196532
88244043
88280924
88319531
88421427
88491113
I want Python to read the file line by line and send each number, together with the word "suspend", to the previously mentioned Python script.
If I understand you correctly:
import subprocess

with open("file_with_numbers.txt") as f:
    for line in f:
        subprocess.call(["python", "Suspend.py", "suspend", line.strip()])
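On Python 3.5+ a slightly more defensive variant might use subprocess.run (same hypothetical file names; this is a sketch, not a drop-in requirement):
import subprocess

with open("file_with_numbers.txt") as f:
    for line in f:
        number = line.strip()
        if not number:    # skip blank lines
            continue
        # check=True raises CalledProcessError if Suspend.py exits non-zero
        subprocess.run(["python", "Suspend.py", "suspend", number], check=True)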

Using a text file as stdin in Python under Windows 7

I'm a Windows 7 user.
I happened to read about redirections (like command1 < infile > outfile) on *nix systems, and then I discovered that something similar can be done on Windows (link). Python can apparently also do something like this with pipes(?) or stdin/stdout(?).
I don't understand how this works on Windows, so I have a question.
I use a proprietary Windows program (.exe) which is able to append data to a file.
For simplicity, let's assume that it is the equivalent of something like:
from time import ctime, sleep

while True:
    f = open('textfile.txt', 'a')
    f.write(repr(ctime()) + '\n')
    f.close()
    sleep(100)
The question:
Can I use this file (textfile.txt) as stdin?
I mean that the script (while it runs) should always (not just once) handle all new data, i.e. in a never-ending cycle:
The program (.exe) writes something.
The Python script captures the data and processes it.
Could you please show how to do this in Python, or maybe in Windows cmd/.bat, or some other way?
This is an insanely cool thing and I want to learn how to do it! :D
If I am reading your question correctly, then you want to pipe output from one command to another.
This is normally done as such:
cmd1 | cmd2
However, you say that your program only writes to files. I would double-check the documentation to see if there isn't a way to get the command to write to stdout instead of a file.
If this is not possible, then you can create what is known as a named pipe. It appears as a file on your filesystem, but it is really just a buffer of data that can be written to and read from (the data is a stream and can only be read once), meaning the program reading it will not finish until the program writing to the pipe stops writing and closes the "file". I don't have experience with named pipes on Windows, so you'll need to ask a new question for that. One downside of pipes is that they have a limited buffer size, so if no program is reading from the pipe, then once the buffer is full the writing program can't continue and just waits indefinitely until a program starts reading from the pipe.
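On Unix, creating the pipe itself is one call; this sketch is only to make the concept concrete (as noted above, Windows named pipes work differently):
import os

os.mkfifo("mypipe")    # the named pipe now shows up on the filesystem like a file
# In one shell:   some_program > mypipe      (the writer blocks until a reader opens it)
# In another:     python3 myscript.py < mypipe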
An alternative is that on Unix there is a program called tail, which can be set up to continuously monitor a file for changes and output any data as it is appended (with a short delay):
tail --follow=name --retry textfile.txt | mycmd
# wait for data to be appended to the file and pipe the new data to mycmd
cmd1 >> textfile.txt  # append output to the file
One thing to note is that tail won't stop just because the first command has stopped writing to the file. tail will continue to listen for changes on that file forever, until mycmd stops listening to tail or until tail is killed (or sigint-ed).
This question has various answers on how to get a version of tail onto a Windows machine.
import sys

sys.stdin = open('textfile.txt', 'r')
for line in sys.stdin:
    process(line)  # process() is a placeholder for your own handling
If the program writes to textfile.txt, you can't redirect that to the stdin of your Python script unless you recompile the program to do so.
If you could edit the program, you'd make it write to stdout rather than to a file on the filesystem. That way you could use the redirection operators to feed it into your Python script (in your case the | operator).
Assuming you can't do that, you could write a program that polls the text file for changes and consumes only the newly written data, by keeping track of how much it read the last time the file was updated.
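A sketch of that polling approach (textfile.txt and the one-second interval are placeholders):
import time

def follow(path, interval=1.0):
    # Yield lines as they are appended to path, tail -f style.
    with open(path, 'r') as f:
        f.seek(0, 2)                  # jump to the current end of the file
        while True:
            line = f.readline()
            if line:
                yield line            # new data arrived
            else:
                time.sleep(interval)  # nothing new yet; wait and poll again

for line in follow('textfile.txt'):
    print(line, end='')               # replace with your own processing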
When you use < to redirect the contents of a file to a Python script, that script receives the data on its stdin stream.
Simply read from sys.stdin to get that data:
import sys

for line in sys.stdin:
    # do something with line
    pass

Python code not writing to file unless run in interpreter

I have written a few lines of code in Python to see if I can make it read a text file, turn it into a list of lists (one list per line), and then join everything back into a string and write it to a different output file. This may sound silly, but the idea is to shuffle the items once they are listed, and I need to make sure I can do the reading and writing correctly first. This is the code:
import csv, StringIO

datalist = open('tmp/lista.txt', 'r')
leyendo = datalist.read()
separando = csv.reader(StringIO.StringIO(leyendo), delimiter='\t')
macrolist = list(separando)
almosthere = ('\t'.join(i) for i in macrolist)
justonemore = list(almosthere)
arewedoneyet = '\n'.join(justonemore)
with open('tmp/randolista.txt', 'w') as newdoc:
    newdoc.write(arewedoneyet)
datalist.close()
This seems to work just fine when I run it line by line in the interpreter, but when I save it as a separate Python script and run it (myscript.py), nothing happens: the output file is not even created. After having a look at similar issues raised here, I have introduced the with statement (before that, I opened the output file through output = open()), and I have tried flushing as well as closing the file. Nothing seems to work. The standalone script does not seem to do much, but the code can't be too wrong if it works in the interpreter, right?
Thanks in advance!
P.S.: I'm new to Python and fairly new to programming, so I apologise if this is due to a shallow understanding of a basic issue.
Where is the input file, and where do you want to save the output file? For this kind of script I think it's better to use absolute paths.
Use:
open('/tmp/lista.txt', 'r')
instead of:
open('tmp/lista.txt', 'r')
I think the error may be related to this.
It may have something to do with where you start your interpreter.
Try using an absolute path (/tmp/randolista.txt) instead of a relative path (tmp/randolista.txt) to isolate the problem.
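Alternatively, a sketch of sidestepping the working-directory problem by building the paths from the script's own location (file names as in the question):
import os

# Resolve paths relative to this script file, not the current working
# directory, so it behaves the same from the interpreter and the shell.
here = os.path.dirname(os.path.abspath(__file__))
infile = os.path.join(here, 'tmp', 'lista.txt')
outfile = os.path.join(here, 'tmp', 'randolista.txt')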

Embed pickle (or arbitrary) data in python script

In Perl, the interpreter essentially stops when it encounters a line with
__END__
in it. This is often used to embed arbitrary data at the end of a Perl script. In this way the script can fetch and store data that it keeps 'in itself', which allows for quite nice opportunities.
In my case I have a pickled object that I want to store somewhere. While I can use a file.pickle file just fine, I was looking for a more compact approach (to distribute the script more easily).
Is there a mechanism that allows embedding arbitrary data inside a Python script?
With pickle you can also work directly on strings:
import pickle

s = pickle.dumps(obj)    # serialize the object to a string payload
obj = pickle.loads(s)    # restore it
If you combine that with """ (triple-quoted strings) you can easily store any pickled data in your file.
If the data is not particularly large (many KB), I would base64-encode it (.encode('base64') on Python 2; the base64 module's b64encode on Python 3) and include that in a triple-quoted string, with the matching decode to get back the binary data and a pickle.loads() call around it.
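A sketch of that round trip, shown with the Python 3 spelling (base64 keeps the pickled bytes printable, so the payload can live inside a triple-quoted string):
import base64
import pickle

obj = {'result': [83, 22]}    # whatever you want to embed

# Step 1: run once to produce the line you paste into your script.
payload = base64.b64encode(pickle.dumps(obj)).decode('ascii')
print('PAYLOAD = """%s"""' % payload)

# Step 2: the distributed script recovers the object from the pasted string.
restored = pickle.loads(base64.b64decode(payload))
assert restored == obj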
In Python, you can use """ (triple-quoted strings) to embed long runs of text data in your program.
In your case, however, don't waste time on this.
If you have an object you've pickled, you'd be much, much happier dumping that object as Python source and simply including the source.
The repr function, applied to most objects, will emit a Python source-code version of the object. If you implement __repr__ for all of your custom classes, you can trivially dump your structure as Python source.
If, on the other hand, your pickled structure started out as Python code, just leave it as Python code.
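For built-in types, that repr round trip is trivial; a small sketch:
data = {'label': 'demo', 'result': [83, 22]}
print(repr(data))    # emits valid Python source for built-in types
# Paste the output straight back into a script:
# data = {'label': 'demo', 'result': [83, 22]}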
I made this code. You run something like python comp.py foofile.tar.gz, and it creates decomp.py with foofile.tar.gz's contents embedded in it. I don't think it is really portable to Windows, though, because of the Popen calls.
import base64
import subprocess
import sys

inf = open(sys.argv[1], "r+b").read()
outs = base64.b64encode(inf)

decomppy = '''#!/usr/bin/python
import base64

def decomp(data):
    fname = "%s"
    outf = open(fname, "w+b")
    outf.write(base64.b64decode(data))
    outf.close()
    # You can put the rest of your code here.
    # Like this, to unzip the archive:
    # import subprocess
    # subprocess.Popen("tar xzf " + fname, shell=True)
    # subprocess.Popen("rm " + fname, shell=True)
''' % (sys.argv[1])

taildata = '''uudata = """%s"""
decomp(uudata)
''' % (outs)

outpy = open("decomp.py", "w+b")
outpy.write(decomppy)
outpy.write(taildata)
outpy.close()
subprocess.Popen("chmod +x decomp.py", shell=True)
