How to save Iperf result in an output file - python

I am running iperf between a set of hosts that are read from a txt file. Here is how I am running it:
h1, h2 = net.getNodeByName(node_id_1, node_id_2)
net.iperf((h1, h2))
It runs well and displays the results, but I want to save the iperf output to a separate txt file. Does anyone know how I can do that with the above code?

In order to store the results of an iperf test in a file, add | tee followed by a filename to your command line, for example:
iperf -c ipaddress -u -t 10 -i 1 | tee result.txt

Did you already try:
--output test.log
(in newer versions --logfile)
or using
youriperfexpr > test.log

I had this problem as well. Although the manpage specifies "-o" or "--output" to save your output to a file, this does not actually work.
It seems that this was marked as "WontFix":
https://code.google.com/p/iperf/issues/detail?id=24:
Looks like -o/--output existed in a previous version but is not in the current version. The consensus in yesterday's meeting was that if --output existed then we should fix it, otherwise people should just use shell redirection and we'll mark this WontFix. So, WontFix.
So maybe just use typescript or ">test.log" as suggested by Paolo.

I think the answer is given by Chiara Contoli here: iperf result in output file
In summary:
h1.cmd('iperf -s > server_output.txt &')
h2.cmd('iperf -t 5 -c ' + h1.IP() + ' > client_output.txt &')
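Tying that back to your snippet, a minimal sketch could look like the following (assuming net is an already-started Mininet network and node_id_1/node_id_2 are valid host names from your txt file; the kill line assumes the hosts' default bash shell, where %iperf names the background job):
h1, h2 = net.getNodeByName(node_id_1, node_id_2)
h1.cmd('iperf -s > server_output.txt &')              # server, output to file
h2.cmd('iperf -t 5 -c %s > client_output.txt' % h1.IP())  # client, blocks ~5 s
h1.cmd('kill %iperf')                                 # stop the background server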

Since you are running it from Python, another way to save the result is to use popen:
popen('<command> > <filename>', shell=True)
For example:
popen('iperf -s -u -i 1 > outtest.txt', shell=True)
You can check this for further information:
https://github.com/mininet/mininet/wiki/Introduction-to-Mininet#popen
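A variant that avoids shell redirection entirely is to hand popen a file object for stdout. A rough sketch, assuming h1 and h2 are Mininet hosts from your topology (Mininet's node popen forwards keyword arguments such as stdout to Python's subprocess.Popen):
with open('outtest.txt', 'w') as log:
    server = h1.popen('iperf -s -u -i 1', stdout=log, stderr=log)
    h2.cmd('iperf -u -c %s -t 10' % h1.IP())  # client runs to completion
    server.terminate()                        # then stop the server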

If you need to save the output as a txt file on Windows:
On the client machine, run cmd as administrator and enter:
cd c:\iperf3
iperf3.exe -c "your server address" -p "port" -P 10 -w 32000 -t 0 >> c:\iperf3\text.txt
(-t 0 means run indefinitely)
On the client machine you will see a black screen in cmd; that is normal. You will see all the activity on the server machine. After your test, press Ctrl+C in the client's cmd window and then answer (y).
After that, the file c:\iperf3\text.txt will contain all the information collected during this period.
If you instead just close the cmd window, text.txt will be empty.
It is recommended to open this file in Notepad or WordPad for the correct view.

Server
iperf3 -s -p <port> -B <server_ip> >> <logfile> &
Client
iperf3 -p <port> -c <server_ip> -B <client_ip> -t 5 >> <logfile>
Make sure you kill the iperf process on the server when done.

Related

Empty file when output is written to a file (python gsutil run from cmd)

I use gsutil for uploading a file to Google Cloud Storage. I would like to write the output to a file.
I have created a shortcut with this command in it:
%windir%\system32\cmd.exe /k python2 c:\gsutil\gsutil -m rsync -r -n -d "XX" gs://xx/XX > C:\myoutput.txt
I run the cmd as admin. The myoutput.txt file is created but it's empty after the script exits.
Any idea how I can solve this?
Old question:
I have tried adding /myoutput.txt (cf. here) after gs://xx/XX but it doesn't work: I get an "Access is denied." message.
I guess the output went to the error stream; to merge it with normal output, append 2>&1.
To redirect to a file and also see it on screen you need a tee or t-pipe tool.
There is one contained in GNU utilities for Win32, or one from Bill Stewart's Site.
So your command could look like (untested):
%windir%\system32\cmd.exe /k python2 c:\gsutil\gsutil -m rsync -r -n -d "XX" gs://xx/XX 2>&1|tee "%USERPROFILE%\Desktop\myoutput.txt"
Try writing to a different place or running cmd as admin; that would be my guess. Hope this helps.

Copy one tar.gz file without scp (using echo or cat)

Is it possible to copy the contents of a .tar.gz file using the echo command?
I am using telnet (through telnetlib in Python) to execute commands on a server. I need to copy a few files onto the server. However, scp just hangs after authentication. The server is a busybox server. Another team is looking into that issue for now. The scp command I used is this:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
I sidestepped the problem by reading the contents of each file and writing them with the echo command on the remote side. However, when I try to read a tar.gz file, it fails. I cannot untar the file and copy the files within it individually, as the tar file has nearly 500 files in it, including a few tar files.
So is there any way to copy a tar file's contents (read through the open command in Python) without scp?
Or is it possible to copy a file using telnetlib in Python, using the Telnet class?
To be more clear: I need to upload a tar.gz file from the local machine to the remote machine, but without the help of scp. It would be most helpful if it is a Python solution; if bash is the way to go, I could run os.system too. So a Python/shell scripting solution is what I am looking for.
If you need any more information, please ask away in the comments.
You can cat and redirect, for example:
ssh user@server cat file.tar.gz > file.tar.gz
Note that cat will happen on the server side, but the redirection happens locally, to a local file.
You could also directly gunzip + untar to the local filesystem:
ssh user@server cat file.tar.gz | tar zxv
To do it the other way around, copying from local to the server:
ssh user@server 'cat > file.tar.gz' < file.tar.gz
And gzip + tar to the server:
tar zc . | ssh user@server 'cat > file.tar.gz'
If you try to run the command outside of the Python script, it will ask you for a password:
scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
To pass the password to a Unix scp/ssh command you need to feed it to the command as input, like:
myPass > scp -i /key/private.pem /home/tempuser/file.tar.gz tempuser@remote1:/tmp/
There is an alternative method using the base64 utility. By base64-encoding the file you wish to transfer, you avoid issues with any escape characters, etc. that may trip up echo. For example:
some_var="$( base64 -w 0 path_to_file )"
ssh user@server "echo $some_var | base64 -d > path_to_remote_file"
Option -w 0 is important to prevent base64 from inserting line breaks (after 76 characters by default).
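Since the question asks for a Python solution over telnetlib, here is a rough sketch of that base64 idea driven through an existing Telnet session. The helper name, chunk size, and prompt handling are my own assumptions, not from the original answers; it also assumes the remote busybox provides base64 -d:
import base64
from telnetlib import Telnet  # the module the question already uses

def push_file(tn, local_path, remote_path, chunk=4000):
    # Hypothetical helper: tn is an already-logged-in Telnet session whose
    # shell prompt is assumed to end in '$ '.
    with open(local_path, 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('ascii')
    tn.write(('> %s\n' % remote_path).encode('ascii'))  # truncate target file
    tn.read_until(b'$ ')
    for i in range(0, len(encoded), chunk):   # 4000 is a multiple of 4, so
        part = encoded[i:i + chunk]           # each chunk decodes independently
        cmd = "printf '%s' | base64 -d >> %s\n" % (part, remote_path)
        tn.write(cmd.encode('ascii'))
        tn.read_until(b'$ ')                  # wait for the prompt to return
Chunking keeps each command line below the shell's length limit, which matters for a 500-file archive.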

How do I pipe the command prompt output from iperf to a text file

I am trying to start an iperf server (it's a program similar to ping) and pipe its output to a .txt file.
On another PC, the client is sending traffic to this IP address.
Here is the command:
start "LocalFLServer" iperf -s -w 1024k -i 2 -B 10.42.113.120 -p5003
I want to pipe its output to a txt file, so I tried both
start "LocalFLServer" iperf -s -w 1024k -i 2 -B 10.42.113.120 -p5003 -o dl_tcp.txt
and
start "LocalFLServer" iperf -s -w 1024k -i 2 -B 10.42.113.120 -p5003 > dl_tcp.txt
but neither is able to pipe the results to a .txt file.
The problem here is that I am starting this server in a separate command prompt using the 'start' command. I know it will write a .txt file if I remove the 'start', but unfortunately, in the Perl script, soon after I send this command using system(), I need to run another instruction, or the program would remain stuck, which I don't want.
Help.
Edit: In Perl, I send the instruction like this:
system('start "LocalFLServer" iperf -s -w 1024k -i 2 -B 10.42.113.120 -p5003 -o dl_tcp.txt');
I'm not 100% sure, but I think there might be an issue with iperf's logging-to-file feature. From this link: https://code.google.com/p/iperf/issues/detail?id=24 , it appears it was an unresolved issue up until at least Dec 23, 2013.
I also took a look at iperf's source code, and in this file: https://github.com/esnet/iperf/blob/master/src/main.c , the arguments are passed to the function "iperf_parse_arguments", which is located in the file https://github.com/esnet/iperf/blob/master/src/iperf_api.c , and when I look at that function, I do not see anything which handles "-o" or "--output".
I am not sure why using ">" is not working.
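A likely explanation for the ">" behavior: cmd.exe applies the redirection to the start command itself (which prints nothing), not to the iperf process launched in the new window. Since the real goal is just to launch the server without blocking the rest of the script, a non-blocking launch with explicit output redirection avoids start entirely. The question is about Perl, but here is a rough Python sketch of the same idea (the flags simply mirror the question's command line):
import subprocess

# Rough sketch: start the iperf server without blocking the caller and
# send its output to a file; stop it later with server.terminate().
with open('dl_tcp.txt', 'w') as log:
    server = subprocess.Popen(
        ['iperf', '-s', '-w', '1024k', '-i', '2',
         '-B', '10.42.113.120', '-p', '5003'],
        stdout=log, stderr=subprocess.STDOUT)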

How to use Fabric with dtach/screen, is there some example?

I have googled a lot, and the Fabric FAQ also says to use screen/dtach with it, but I didn't find how to implement it.
Below is my non-working code; the sh script will not execute as expected because it is a nohup task:
def dispatch():
    run("cd /export/workspace/build/ && if [ -f spider-fetcher.zip ];then mv spider-fetcher.zip spider-fetcher.zip.bak;fi")
    put("/root/build/spider-fetcher.zip","/export/workspace/build/")
    run("cd /export/script/ && sh ./restartCrawl.sh && echo 'finished'")
I've managed to do it in two steps:
Start a tmux session on the remote server in detached mode:
run("tmux new -d -s foo")
Send the command to the detached tmux session:
run("tmux send -t foo.0 ls ENTER")
Here '-t' determines the target session ('foo') and 'foo.0' gives the number of the pane the 'ls' command is to be executed in.
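Wrapped up as a reusable helper, the two steps might look like this (a sketch; the helper name and session name are made up, and it assumes Fabric 1.x's run as in the question):
from fabric.api import run

def run_in_tmux(cmd, session='foo'):
    # Create a detached session, then type the command into pane 0.
    # Naive quoting: fine for simple commands without single quotes.
    run("tmux new -d -s %s" % session)
    run("tmux send -t %s.0 '%s' ENTER" % (session, cmd))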
You can just prepend screen to the command you want to run:
run("screen long running command")
Fabric, though, doesn't keep state like something like expect would, as each run/sudo/etc. is its own separate command run without knowing the state of the last command. E.g. run("cd /var"); run("pwd") will not print /var but the home dir of the user who has logged into the box.

How to start a background process with nohup using Fabric?

Through Fabric, I am trying to start a celerycam process using the below nohup command. Unfortunately, nothing is happening. Manually using the same command, I could start the process, but not through Fabric. Any advice on how I can solve this?
def start_celerycam():
    '''Start celerycam daemon'''
    with cd(env.project_dir):
        virtualenv('nohup bash -c "python manage.py celerycam --logfile=%scelerycam.log --pidfile=%scelerycam.pid &> %scelerycam.nohup &> %scelerycam.err" &' % (env.celery_log_dir, env.celery_log_dir, env.celery_log_dir, env.celery_log_dir))
I'm using Erich Heine's suggestion to use 'dtach' and it's working pretty well for me:
def runbg(cmd, sockname="dtach"):
    return run('dtach -n `mktemp -u /tmp/%s.XXXX` %s' % (sockname, cmd))
This was found here.
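For example, a hypothetical call inside a task (the command and log path are placeholders):
runbg('python manage.py celerycam --logfile=/tmp/celerycam.log')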
In my experiments, the solution is a combination of two factors:
run the process as a daemon: nohup ./command &> /dev/null &
use pty=False for the Fabric run
So your function should look like this:
def background_run(command):
    command = 'nohup %s &> /dev/null &' % command
    run(command, pty=False)
And you can launch it with:
execute(background_run, your_command)
This is an instance of this issue. Background processes will be killed when the command ends. Unfortunately, CentOS 6 doesn't support pty-less sudo commands.
The final entry in the issue mentions using sudo('set -m; service servicename start'). This turns on job control, and therefore background processes are put in their own process group. As a result, they are not terminated when the command ends.
For even more information see this link.
You just need to run
run("(nohup yourcommand >& /dev/null < /dev/null &) && sleep 1")
dtach is the way to go. It's software you need to install, like a lite version of screen.
This is a better version of the "dtach" method found above; it will install dtach if necessary. It's to be found here, where you can also learn how to get the output of the process which is running in the background:
from fabric.api import run
from fabric.api import sudo
from fabric.contrib.files import exists

def run_bg(cmd, before=None, sockname="dtach", use_sudo=False):
    """Run a command in the background using dtach

    :param cmd: The command to run
    :param before: The command to run before the dtach. E.g. exporting
        environment variable
    :param sockname: The socket name to use for the temp file
    :param use_sudo: Whether or not to use sudo
    """
    if not exists("/usr/bin/dtach"):
        sudo("apt-get install dtach")
    if before:
        cmd = "{}; dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(
            before, sockname, cmd)
    else:
        cmd = "dtach -n `mktemp -u /tmp/{}.XXXX` {}".format(sockname, cmd)
    if use_sudo:
        return sudo(cmd)
    else:
        return run(cmd)
May this help you, like it helped me to run omxplayer via Fabric on a remote Raspberry Pi!
You can use:
run('nohup /home/ubuntu/spider/bin/python3 /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py > /home/ubuntu/spider/Desktop/baidu_index/baidu_index.py.log 2>&1 &', pty=False)
nohup did not work for me, and I did not have tmux or dtach installed on all the boxes I wanted to use this on, so I ended up using screen like so:
run("screen -d -m bash -c '{}'".format(command), pty=False)
This tells screen to start a bash shell in a detached terminal that runs your command.
You could be running into this issue
Try adding 'pty=False' to the sudo command (I assume virtualenv is calling sudo or run somewhere?)
This worked for me:
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
Edit: I had to make sure the pid file was removed first, so this was the full code:
# Create new celerycam
sudo('rm celerycam.pid', warn_only=True)
sudo('python %s/manage.py celerycam --detach --pidfile=celerycam.pid' % siteDir)
I was able to circumvent this issue by running nohup ... & over ssh in a separate local shell script. In fabfile.py:
@task
def startup():
    local('./do-stuff-in-background.sh {0}'.format(env.host))
and in do-stuff-in-background.sh:
#!/bin/sh
set -e
set -o nounset
HOST=$1
ssh $HOST -T << HERE
nohup df -h 1>>~/df.log 2>>~/df.err &
HERE
Of course, you could also pass in the command and the standard output / error log files as arguments to make this script more generally useful.
(In my case, I didn't have admin rights to install dtach, and neither screen -d -m nor pty=False / sleep 1 worked properly for me. YMMV, especially as I have no idea why this works...)
