I'm working with https://github.com/JsBergbau/MiTemperature2 on a Raspberry Pi 3 Model B. The script runs properly in its own infinite loop, but I am not able to capture its output from the terminal. How can I access the output from Python?
Here is the part that does the printing:
measurement_time = datetime.datetime.fromtimestamp(measurement.timestamp)
print(measurement_time)
humidity=int.from_bytes(data[2:3],byteorder='little')
print("Temperature: " + str(temp))
print("Humidity: " + str(humidity))
voltage=int.from_bytes(data[3:5],byteorder='little') / 1000.
print("Battery voltage:",voltage,"V")
measurement.temperature = temp
measurement.humidity = humidity
measurement.voltage = voltage
measurement.sensorname = args.name
batteryLevel = min(int(round((voltage - 2.1),2) * 100), 100) #3.1 or above --> 100% 2.1 --> 0 %
measurement.battery = batteryLevel
print("Battery level:",batteryLevel)
Here is the script I run on terminal:
python3 LYWSD03MMC.py -d AA:BB:CC:DD:EE:FF
And here is the output:
2021-08-05 11:21:24
Temperature: 24.79
Humidity: 47
Battery voltage: 3.092 V
Battery level: 99
That is the run command and sample output. Thanks for your help, best regards.
Change your code so it returns the information instead of just printing it. If you have code which looks like
something = some_function_call(123)
print(something)
other_one = different_function("some data here?").strip()
print(other_one)
probably refactor to
def get_something(number):
    return some_function_call(number)

def get_other_one():
    return different_function("some data here?").strip()

if __name__ == '__main__':
    print(get_something(123))
    print(get_other_one())
Now, you can create additional code which retrieves these values without printing them, and does whatever it wants with them. Put them on a web site? Upload them to a database? Rot13 encrypt them and send an email to Bill Gates? Your imagination is the limit.
How exactly you design your code is a broad topic where many books have been written, and more will be. A common arrangement is to make sure the useful parts are in modular functions which do one thing only (ideally without any side effects) so you can import this code and use it from other programs. (That's why the if __name__ part is useful. It makes sure code inside the block doesn't run when you import this file.)
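For instance, if the refactored code above were saved as sensor.py (a hypothetical file name, not from the original), another program could reuse it without triggering any printing:
# hypothetical consumer script; sensor.py is an assumed name for the refactored module
from sensor import get_something, get_other_one

value = get_something(123)   # nothing is printed on import, thanks to the __main__ guard
other = get_other_one()
print("use the values however you like:", value, other)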
Have you had a closer look at the code? There is a callback option, which is the easiest way to get values from this script. Or is this question more about the general problem of how to capture Python output?
If not, this should help you:
Documentation where callback is described:
https://github.com/JsBergbau/MiTemperature2#callback-for-processing-the-data
Accessing the single values:
sendToInflux.sh (https://github.com/JsBergbau/MiTemperature2/blob/master/sendToInflux.sh) is an example in which the values, such as temperature, are passed as arguments.
Or, when using sendToFile.sh, it gives the data line by line in this field order:
sensorname,temperature,humidity,voltage,humidityCalibrated,timestamp
For example:
MySensor 20.61 54 2.944 49 1582120122
That data should be easy to process with Python or awk.
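For example, a minimal Python sketch for reading such a file (the file name readings.txt and the assumption that every line carries the six whitespace-separated fields shown above are mine, not from the repository):
# parse sendToFile.sh-style output; readings.txt is an assumed file name
with open('readings.txt') as f:
    for line in f:
        fields = line.split()
        if len(fields) != 6:
            continue  # skip header or partial lines
        name, temp, hum, volt, hum_cal, ts = fields
        print(name, float(temp), int(hum), float(volt))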
Add this to the command line:
2>&1 | tee result.txt
This saves the command's output to result.txt while still printing it to the terminal.
If you are running a command from Python, you can use subprocess.check_output to capture its output. This won't work if the called script runs forever, though, because check_output only returns once the process exits.
Like this:
output = subprocess.check_output([sys.executable, 'LYWSD03MMC.py', '-d', 'AA:BB:CC:DD:EE:FF']).decode()
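If the script runs forever (as in the question), a line-by-line read via subprocess.Popen is one workaround. A sketch, where handle() is a placeholder for whatever processing you want to do:
import subprocess
import sys

def handle(line):
    # placeholder for your own processing
    print("got:", line)

proc = subprocess.Popen(
    [sys.executable, 'LYWSD03MMC.py', '-d', 'AA:BB:CC:DD:EE:FF'],
    stdout=subprocess.PIPE, text=True)
for line in proc.stdout:
    handle(line.rstrip())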
I'm not sure if it's allowed to ask for help (if not, I don't mind not getting an answer until the competition period is over).
I was solving the interactive problem (Dat Bae) on Code Jam. Locally, I can run the judge (testing_tool.py) and my program (<name>.py) separately and copy-paste the I/O manually. However, I assume I need to find a way to do this automatically.
Edit: To make it clear, I want every output of file x to be fed as input to file y and vice versa.
Some details:
I've used sys.stdout.write / sys.stdin.readline instead of print / input throughout my program
I tried running interactive_runner.py, but I can't figure out how to use it.
I tried running it on their server, with my program in the first tab and the judge file in the second. It always throws a TLE error.
I can't find any tutorial on this either; any help will be appreciated! :/
The usage is documented in comments inside the scripts:
interactive_runner.py
# Run this as:
# python interactive_runner.py <cmd_line_judge> -- <cmd_line_solution>
#
# For example:
# python interactive_runner.py python judge.py 0 -- ./my_binary
#
# This will run the first test set of a python judge called "judge.py" that
# receives the test set number (starting from 0) via command line parameter
# with a solution compiled into a binary called "my_binary".
testing_tool.py
# Usage: `testing_tool.py test_number`, where the argument test_number
# is 0 for Test Set 1 or 1 for Test Set 2.
So use them like this:
python interactive_runner.py python testing_tool.py 0 -- ./dat_bae.py
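For reference, the idea the runner automates is roughly to cross-wire the two processes' standard streams. A hand-rolled sketch of that idea (not how interactive_runner.py is actually implemented; the -u flag keeps both sides unbuffered so neither stalls on buffered output):
import subprocess
import sys

# the judge writes to a pipe that becomes the solution's stdin, and vice versa
judge = subprocess.Popen([sys.executable, '-u', 'testing_tool.py', '0'],
                         stdin=subprocess.PIPE, stdout=subprocess.PIPE)
solution = subprocess.Popen([sys.executable, '-u', 'dat_bae.py'],
                            stdin=judge.stdout, stdout=judge.stdin)
# drop the parent's copies of the pipe ends so EOF propagates correctly
judge.stdout.close()
judge.stdin.close()
solution.wait()
judge.wait()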
I am using Jupyter Notebook and Python 3.0.
I have a block of code that takes a while to execute in Jupyter Notebook and to identify its current status, I would like to make a counter of what loop it is on, something like this:
large_number = 1000
for i in range(large_number):
    print('{} / {} complete.'.format(i, large_number))
The problem with this is that it will print a new line for each iteration, which I do not want; instead I just want to update the value in place.
Is there any way I can do this in Jupyter Notebook?
The de facto standard for this functionality in Jupyter is tqdm, specifically tqdm_notebook. It is simple to use, provides informative output, and has a multitude of options. You can also use it for CLI work.
from tqdm import tqdm_notebook
from time import sleep
for i in tqdm_notebook(range(100)):
    sleep(.05)
The output is an animated progress bar showing the percentage complete, the iteration count, the elapsed and estimated remaining time, and the iteration rate.
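For the CLI use mentioned above, the plain (non-notebook) wrapper works the same way and renders a text progress bar in the terminal:
from tqdm import tqdm
from time import sleep

for i in tqdm(range(100)):   # prints a text progress bar to the terminal
    sleep(.05)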
I'm fond of making an ascii status bar. Say you need to run 1000 iterations and want to see 20 updates before it is done:
num_iter = 1000
num_updates = 20
update_per = num_iter // num_updates # make sure it's an integer
print('|{}|'.format(' ' * (num_updates - 2))) # gives you a reference
for i in range(num_iter):
    # code stuff
    if i % update_per == 0:
        print('*', end='', flush=True)
Gives you an update that looks like:
|                  |
*******
as it runs.
import sys
large_number = 1000
for i in range(large_number):
    print('{} / {} complete.'.format(i, large_number), end='\r')
    sys.stdout.flush()
I always find that Alexander Kukushkin's Jupyter Widget is the best for these sorts of things. It creates a nice looking progress bar, can work on generators and you can set how often it updates the progress value.
To use it, copy the code from the github link into a cell, then run your code like so:
large_number = 1000
for i in log_progress(range(large_number)):
    pass  # your code here - no need to print anything manually!
I am developing a program in Python, and one element shows the user how much bandwidth they have used since the program opened (not just within the program, but all traffic, e.g. regular web browsing, while the program has been open). The output should be displayed in GTK.
Is there anything in existence that does this? If not, can you point me in the right direction? It seems like I would have to adapt an existing proxy script like pythonproxy, but I can't see how I would use it.
Thanks,
For my task I wrote a very simple solution using psutil:
import time
import psutil
def main():
    old_value = 0
    while True:
        new_value = psutil.net_io_counters().bytes_sent + psutil.net_io_counters().bytes_recv
        if old_value:
            send_stat(new_value - old_value)
        old_value = new_value
        time.sleep(1)

def convert_to_gbit(value):
    return value / 1024. / 1024. / 1024. * 8

def send_stat(value):
    print("%0.3f" % convert_to_gbit(value))

main()
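If per-interface numbers are wanted instead of the machine-wide total, psutil can also break the counters down by NIC via pernic=True. A small sketch (interface names are just examples):
import psutil

# counters keyed by interface name, e.g. 'eth0' or 'wlan0'
for name, counters in psutil.net_io_counters(pernic=True).items():
    print(name, counters.bytes_sent, counters.bytes_recv)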
import time

def get_bytes(t, iface='wlan0'):
    with open('/sys/class/net/' + iface + '/statistics/' + t + '_bytes', 'r') as f:
        data = f.read()
    return int(data)

while True:
    tx1 = get_bytes('tx')
    rx1 = get_bytes('rx')
    time.sleep(1)
    tx2 = get_bytes('tx')
    rx2 = get_bytes('rx')
    tx_speed = round((tx2 - tx1) / 1000000.0, 4)
    rx_speed = round((rx2 - rx1) / 1000000.0, 4)
    print("TX: %fMbps RX: %fMbps" % (tx_speed, rx_speed))
This should work.
Well, I'm not quite sure if there is something in existence (written in Python), but you may want to have a look at the following:
Bandwidth Monitoring (not really an active project, but it may give you an idea)
Munin Monitoring (a Perl-based network monitoring project)
ntop (written in C/C++, based on libpcap)
Also, just to give you a pointer if you are looking to do something on your own: one way could be to read and store the per-interface counters from /proc/net/dev (e.g. sudo cat /proc/net/dev); a rough sketch follows.
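A minimal Python sketch of that approach (Linux only; the interface name eth0 is just an example, and the field indices follow the /proc/net/dev layout):
def read_counters(iface='eth0'):
    # each data line in /proc/net/dev looks like "  eth0: <rx fields> <tx fields>"
    with open('/proc/net/dev') as f:
        for line in f:
            if line.strip().startswith(iface + ':'):
                fields = line.split(':', 1)[1].split()
                return int(fields[0]), int(fields[8])  # rx_bytes, tx_bytes
    raise ValueError('interface not found: ' + iface)

print(read_counters('eth0'))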
A proxy would only cover network applications that were configured to use it. You could set, e.g. a web browser to use a proxy, but what happens when your proxy exits?
I think the best thing to do is to hook in lower down the stack. There is a program that does this already, iftop. http://en.wikipedia.org/wiki/Iftop
You could start by reading the source code of iftop, perhaps wrap that into a Python C extension. Or rewrite iftop to log data to disk and read it from Python.
Would something like Wireshark (https://wiki.wireshark.org/FrontPage) do the trick? I am tackling a similar problem now, and am inclined to use pyshark, a Wireshark/TShark wrapper, for the task. That way you can get capture file info readily.
I am using some Fortran code in Python via f2py. I would like to redirect the Fortran output to a variable I can play with. There is this question, which I found helpful:
Redirecting FORTRAN (called via F2PY) output in Python
However, I would also like the option of having the Fortran code write to the terminal as well as recording its output. Is this possible?
I have the following silly class which I cobbled together from the question above and also from
http://websrv.cs.umt.edu/isis/index.php/F2py_example.
import os

class captureTTY:
    '''
    Class to capture the terminal content. It is necessary when you want to
    grab the output from a module created using f2py.
    '''
    def __init__(self, tmpFile='/tmp/out.tmp.dat'):
        '''
        Set everything up
        '''
        self.tmpFile = tmpFile
        self.ttyData = []
        self.outfile = False
        self.save = False

    def start(self):
        '''
        Start grabbing TTY data.
        '''
        # open output file
        self.outfile = os.open(self.tmpFile, os.O_RDWR | os.O_CREAT)
        # save the current stdout file descriptor
        self.save = os.dup(1)
        # put outfile on fd 1
        os.dup2(self.outfile, 1)
        return

    def stop(self):
        '''
        Stop recording TTY data
        '''
        if not self.save:
            # Probably not started
            return
        # restore the standard output file descriptor
        os.dup2(self.save, 1)
        # parse the temporary file
        self.ttyData = open(self.tmpFile).readlines()
        # close the output file
        os.close(self.outfile)
        # delete the temporary file
        os.remove(self.tmpFile)
My code currently looks something like this:
from fortranModule import fortFunction
grabber = captureTTY()
grabber.start()
fortFunction()
grabber.stop()
My idea is to have a flag called silent that I could use to control whether the Fortran output is displayed or not. This would then be passed to captureTTY when I construct it, i.e.
from fortranModule import fortFunction
silent = False
grabber = captureTTY(silent)
grabber.start()
fortFunction()
grabber.stop()
I am not really sure how to go about implementing this. The obvious thing to do is:
from fortranModule import fortFunction
silent = False
grabber = captureTTY()
grabber.start()
fortFunction()
grabber.stop()
if not silent:
    for i in grabber.ttyData:
        print(i)
I am not a big fan of this, as my Fortran method takes a long time to run, and it would be nice to see the output updated in real time and not just at the end.
Any ideas? The code will be run on Linux and Mac machines, not Windows. I've had a look around the web, but haven't found a solution. If there is one, I am sure it will be painfully obvious!
Cheers,
G
Clarification:
From the comments I realise that the above isn't the clearest. What I currently have is the capability to record the output from the Fortran method; however, this prevents it from printing to the screen. I can have it print to the screen, but then I cannot record it. I want the option to do both simultaneously, i.e. record the output and have it print to the screen in real time.
Just as an aside, the Fortran code is a fitting algorithm, and the actual output I am interested in is the parameters for each iteration.
Have you tried something like this in the Fortran subroutine? (Assuming foo is what you want to print, and 52 is the unit number of your log file)
write(52,*) foo
write(*,*) foo
This should print foo to the log file and to the screen.
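If changing the Fortran source isn't an option, one Python-side possibility (a sketch of mine, not from the original answer: the class name, the silent flag, and the tailing thread are all assumptions) is to keep redirecting fd 1 into the temporary file, as captureTTY does, but tail that file from a background thread and echo anything new to the saved terminal descriptor:
import os
import threading
import time

class captureTTYAndEcho:
    '''
    Hypothetical variant of captureTTY: stdout still goes to a temporary file,
    but a background thread tails that file and echoes new data to the real
    terminal, so the output is recorded and shown in (near) real time.
    '''
    def __init__(self, tmpFile='/tmp/out.tmp.dat', silent=False):
        self.tmpFile = tmpFile
        self.silent = silent
        self.ttyData = []
        self.outfile = None
        self.save = None
        self._stop = False
        self._thread = None

    def start(self):
        self.outfile = os.open(self.tmpFile, os.O_RDWR | os.O_CREAT | os.O_TRUNC)
        self.save = os.dup(1)          # remember the real stdout
        os.dup2(self.outfile, 1)       # redirect fd 1 into the temp file
        if not self.silent:
            self._thread = threading.Thread(target=self._echo)
            self._thread.start()

    def _echo(self):
        # tail the temp file and copy anything new to the saved terminal fd
        with open(self.tmpFile, 'rb') as f:
            while not self._stop:
                chunk = f.read()
                if chunk:
                    os.write(self.save, chunk)
                else:
                    time.sleep(0.1)
            os.write(self.save, f.read())   # flush whatever arrived last

    def stop(self):
        if self.save is None:
            return
        os.dup2(self.save, 1)          # restore the real stdout
        if self._thread is not None:
            self._stop = True
            self._thread.join()
        self.ttyData = open(self.tmpFile).readlines()
        os.close(self.outfile)
        os.remove(self.tmpFile)
Note that how promptly the echo appears still depends on when the Fortran runtime flushes its output to the file.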
I am trying to run a very simple Python script via Hive and Hadoop.
This is my script:
#!/usr/bin/env python
import sys

for line in sys.stdin:
    line = line.strip()
    nums = line.split()
    i = nums[0]
    print(i)
And I want to run it on the following table:
hive> select * from test;
OK
1 3
2 2
3 1
Time taken: 0.071 seconds
hive> desc test;
OK
col1 int
col2 string
Time taken: 0.215 seconds
I am running:
hive> select transform (col1, col2) using './proba.py' from test;
But always get something like:
...
2011-11-18 12:23:32,646 Stage-1 map = 0%, reduce = 0%
2011-11-18 12:23:58,792 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201110270917_20215 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
I have tried many different modifications of this procedure but I constantly fail. :(
Am I doing something wrong, or is there a problem with my Hive/Hadoop installation?
A few things I'd check for if I were debugging this:
1) Is the Python file set to be executable (chmod +x file.py)?
2) Make sure the Python file is in the same place on all machines. Probably better: put the file in HDFS, then you can use "using 'hdfs://path/to/file.py'" instead of a local path.
3) Take a look at your job on the Hadoop dashboard (http://master-node:9100); if you click on a failed task it will give you the actual Java error and stack trace, so you can see what actually went wrong with the execution.
4) Make sure Python is installed on all the slave nodes! (I always overlook this one.)
Hope that helps...
Check hive.log and/or the log from the Hadoop job (job_201110270917_20215 in your example) for a more detailed error message.
"FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask" is a generic error that Hive returns when something goes wrong in the underlying map/reduce task. You need to go to the Hive log files (located on the HiveServer2 machine) and find the actual exception stack trace.