I need to create a sniffer in Python using the sniff command, to collect packets entering several interfaces.
When I do it specifying the interfaces by their names, with the following command:
sniff(iface=["s1-cpu-eth1","s2-cpu-eth1","s3-cpu-eth1","s4-cpu-eth1"], prn=self.recv)
It works. But if I try to use a variable instead (this is needed because the interfaces can change depending on the context, and they are collected by a for loop into a variable), such as:
if_to_sniff="\"s1-cpu-eth1\",\"s2-cpu-eth1\",\"s3-cpu-eth1\",\"s4-cpu-eth1\""
sniff(iface=[if_to_sniff], prn=self.recv)
It doesn't work. I have tried several variations, but I always get an error saying that the device doesn't exist. How can I do this?
if_to_sniff="\"s1-cpu-eth1\",\"s2-cpu-eth1\",\"s3-cpu-eth1\",\"s4-cpu-eth1\""
This string looks like CSV format, in which case we can use Python's csv reader to parse it for us:
import csv

if_to_sniff = "\"s1-cpu-eth1\",\"s2-cpu-eth1\",\"s3-cpu-eth1\",\"s4-cpu-eth1\""
# csv.reader expects a file or a list of strings (like lines in a csv file)
reader = csv.reader([if_to_sniff])
# Get the first row from our 'csv'; the interfaces will be the 'columns'
# of that row (i.e. the string split into a list of substrings).
interfaces = next(reader)
sniff(iface=interfaces, prn=self.recv)
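Alternatively, since the interface names are collected in a loop anyway, it may be simpler to skip the quoted string entirely and build a plain Python list. A minimal sketch, assuming the loop yields the names directly (the lambda callback just stands in for self.recv):

from scapy.all import sniff

# Build the list of interface names directly instead of packing them
# into a single quoted string.
if_to_sniff = ["s%d-cpu-eth1" % i for i in range(1, 5)]

sniff(iface=if_to_sniff, prn=lambda pkt: pkt.summary())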
I am currently trying to do some data-driven testing with Robot Framework from a csv file, using a custom Python library. I am running into some problems though, and would be grateful if someone could point me in the right direction.
This is the error I am getting:
Resolving variable '${Tlogdata.0}' failed: SyntaxError: unexpected EOF while parsing (, line 1)
The csv I want to process currently has two records (I tried without quotes, with single quotes, and with double quotes):
1-KR8P27,11.0,1000
1-KR8P27,12.0,1001
I suspect the problem is with the custom library. I have tried a lot of tweaks to my code, but with what I found and my admittedly very basic Python knowledge I cannot find the issue. This is what I currently have:
import csv

def read_csv_file(filename):
    data = []
    with open(filename) as csvfile:
        reader = csv.reader(csvfile)
        for row in reader:
            data.append(row)
    return data
I am using some more keywords in Robot Framework to fetch data from my csv through this custom library. While I suspect my Python code is the problem, I have double-checked everything, so I might be overlooking something here instead:
In a datamanager keyword file I created the following keyword:
Get CSV Data
    [Arguments]    ${FilePath}
    ${Data} =    read csv file    ${FilePath}
    [Return]    ${Data}
Then I created a 'looping' keyword with a for loop:
Check multiple results
    [Arguments]    ${tlogdatas}
    FOR    ${tlogdata}    IN    ${tlogdatas}
        Check result TLOG3    ${tlogdata}
    END
The keyword I call in my loop is already used in a test case without the data-driven setup, and it works there. Only the variables are named differently to make it work with the data-driven approach. The keyword looks like this:
Check result TLOG3
    [Arguments]    ${Tlogdata}
    ${queryResults} =    query    select x_ord_pts_earn, total_amt from siebel.s_order where contact_id = ${Tlogdata.0} and total_amt = ${Tlogdata.1} and X_ORD_PTS_earn = ${Tlogdata.2}
    # log    ${queryResults[0][1]}
    ${dbvalue} =    set variable    ${queryResults}
    ${DB ordptsearn} =    set variable    ${queryResults[0][0]}
    ${DB contact_id} =    set variable    ${queryResults[0][1]}
    should be equal as integers    ${DB ordptsearn}    ${Tlogdata.2}
    should be equal as strings    ${DB contact_id}    ${Tlogdata.1}
Then in my test case I define a variable which fetches its results from my datamanager keyword, and I use the looping keyword to go through the csv values:
Check TLOG results from CSVFile
    ${Tlogdata} =    DataManager.Get CSV Data    ${TLOG_RESULTS_CSVPath}
    TLOG.Check multiple results    ${Tlogdata}
It might also be worth showing the values that are fetched from the csv, according to the report file:
${Tlogdata} = [["'1-KR8P27'", "'11.0'", "'1000'"], ["'1-KR8P27'", "'12.0'", "'1001'"]]
I hope this is somewhat clear; I understand it is quite some text. But I am not 100% sure where the problem is in my scripts. I hope someone can point me in the right direction.
You are indexing your list wrong. Instead of ${Tlogdata.0} you should have ${Tlogdata[0]}, etc.
Here is a quick example:
*** Test Cases ***
Test
    ${Tlogdata}=    Evaluate    [["'1-KR8P27'", "'11.0'", "'1000'"], ["'1-KR8P27'", "'12.0'", "'1001'"]]
    Log    ${Tlogdata[0]}
    Log    ${Tlogdata[1]}
    Log    ${Tlogdata[0][1]}
    Log    ${Tlogdata[1][1]}
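The bracket form works because Robot Framework essentially resolves ${Tlogdata[0]} as plain Python list indexing, whereas the dot form is evaluated as Python source, and something like Tlogdata.0 is not valid Python, which is consistent with the SyntaxError in the question. A quick illustration in plain Python:

Tlogdata = [["'1-KR8P27'", "'11.0'", "'1000'"],
            ["'1-KR8P27'", "'12.0'", "'1001'"]]

print(Tlogdata[0])     # first row
print(Tlogdata[0][1])  # second column of the first row
# "Tlogdata.0" would be a SyntaxError if evaluated as Python code.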
I'm trying to use the Tableau Server Client with Python to generate a csv file from a particular view, which has a filter with multiple options, as shown in the image below.
Is it possible to specify multiple values in the CSVRequestOptions for the same filter?
I've tried calling the vf method multiple times with the same filter name (client) as the first parameter, but it only returns the data for the last one.
def view_populate_csv(view_item):
    csv_req_option = TSC.CSVRequestOptions()
    csv_req_option.vf("client", "client1")
    csv_req_option.vf("client", "client2")
    csv_req_option.vf("client", "client3")
    server.views.populate_csv(view_item, csv_req_option)
    with open("./view_data.csv", "wb") as f:
        f.write(b"".join(view_item.csv))
I also tried adding only the "(All)" option, but then it doesn't return anything:
csv_req_option.vf("client", "(all)")
csv_req_option.vf("client", "client1,client2,client3")
I have a list in my program and a function to append to it. Unfortunately, when you close the program, whatever you added goes away and the list reverts to its initial state. Is there any way to store the data so that the user can re-open the program and find the list complete?
You can try the pickle module to store the in-memory data on disk. Here is an example:
store data:
import pickle
dataset = ['hello','test']
outputFile = 'test.data'
fw = open(outputFile, 'wb')
pickle.dump(dataset, fw)
fw.close()
load data:
import pickle
inputFile = 'test.data'
fd = open(inputFile, 'rb')
dataset = pickle.load(fd)
print(dataset)
You can save the data to a database, with SQLite for instance, or simply to a .txt file. For example:
with open("mylist.txt","w") as f: #in write mode
f.write("{}".format(mylist))
Your list goes into the format() function. This creates a .txt file named mylist.txt and saves your list data into it.
After that, when you want to access your data again, you can do:
with open("mylist.txt") as f: #in read mode, not in write mode, careful
rd=f.readlines()
print (rd)
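Note that readlines() gives the saved text back as strings, not as a list object. To turn the stored text back into an actual list, one option is the standard library's ast.literal_eval; a minimal sketch:

import ast

with open("mylist.txt") as f:
    mylist = ast.literal_eval(f.read())  # parse the stored repr back into a list

print(mylist)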
The built-in pickle module provides some basic functionality for serialization, which is a term for turning arbitrary objects into something suitable to be written to disk. Check out the docs for Python 2 or Python 3.
Pickle isn't very robust though, and for more complex data you'll likely want to look into a database module like the built-in sqlite3 or a full-fledged object-relational mapping (ORM) like SQLAlchemy.
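For example, a minimal sqlite3 sketch (the items.db file and the items table are names made up for illustration):

import sqlite3

conn = sqlite3.connect("items.db")  # created on first use
conn.execute("CREATE TABLE IF NOT EXISTS items (value TEXT)")
conn.execute("INSERT INTO items (value) VALUES (?)", ("hello",))
conn.commit()

# The data survives program restarts; read it back with a query.
rows = [row[0] for row in conn.execute("SELECT value FROM items")]
print(rows)
conn.close()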
For storing big data, the HDF5 library is suitable; in Python it is available through h5py.
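A minimal h5py sketch (file and dataset names are illustrative; note that HDF5 is geared toward numeric arrays rather than arbitrary Python objects):

import h5py
import numpy as np

# Write a numeric dataset to disk.
with h5py.File("data.h5", "w") as f:
    f.create_dataset("values", data=np.arange(1000))

# Read it back in a later run.
with h5py.File("data.h5", "r") as f:
    values = f["values"][:]
print(values[:5])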
I'm almost an absolute beginner in Python, but I have been asked to manage a difficult task. I have read many tutorials and found some very useful tips on this website, but I think this question has not been asked before, at least not in the way I phrased it in the search engine.
I have managed to write some urls into a csv file. Now I would like to write a script able to open this file, open the urls, and write their content into a dictionary. But I have failed: my script can print the addresses, but cannot process the file.
Interestingly, my script did not give the same error message each time. Here is the last one:
req.timeout = timeout
AttributeError: 'list' object has no attribute 'timeout'
So I think my script faces several problems:
1. Is my method of opening the urls the right one?
2. What is wrong in the way I build the dictionary?
Here is my attempt below. Thanks in advance to those who would help me!
import csv
import urllib

dict = {}
test = csv.reader(open("read.csv", "rb"))
for z in test:
    sock = urllib.urlopen(z)
    source = sock.read()
    dict[z] = source
    sock.close()
print dict
First thing, don't shadow built-ins: rename your dictionary to something else, since dict is used to create new dictionaries.
Secondly, the csv reader creates a list per line containing all the columns. Either reference the column explicitly with urllib.urlopen(z[0])  # first column in the line, or open the file with a normal open() and iterate through it.
Apart from that, it works for me.
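For anyone on Python 3, where urlopen lives in urllib.request, a fixed version of the script might look like this (read.csv as in the question; pages is my own name for the renamed dictionary):

import csv
from urllib.request import urlopen

pages = {}  # renamed so the dict built-in is not shadowed
with open("read.csv", newline="") as csvfile:
    for row in csv.reader(csvfile):
        url = row[0]  # csv.reader yields a list per line; take the first column
        with urlopen(url) as sock:
            pages[url] = sock.read()

print(pages)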
I'm creating a script to convert a whole lot of data into CSV format. It runs on Google App Engine using the mapreduce API, which is only relevant in that it means each row of data is formatted and output separately, in a callback function.
I want to take advantage of the logic that already exists in the csv module to convert my data into the correct format, but because the CSV writer expects a file-like object, I have to instantiate a StringIO for each row, write the row to it, and then return its contents, every time.
This seems silly, and I'm wondering if there is any way to access the internal CSV formatting logic of the csv module without the writing part.
The csv module wraps the _csv module, which is written in C. You could grab the source for it and modify it to not require the file-like object, but poking around in the module, I don't see any clear way to do it without recompiling.
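Short of recompiling, one way to at least avoid a new StringIO per row is to reuse a single buffer, rewinding and truncating it between rows. A sketch (row_to_csv is my own name):

import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)

def row_to_csv(row):
    # Rewind and clear the shared buffer, then let csv format one row.
    buf.seek(0)
    buf.truncate(0)
    writer.writerow(row)
    return buf.getvalue()  # includes the trailing "\r\n" terminator

print(row_to_csv(["a", "b,c", 'd "e"']))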
One option could be having your own "file-like" object. Actually, csv.writer only requires the object to have a write method, so:
class PseudoFile(object):
    def write(self, string):
        # Do whatever you need with your string here
        pass

csv.writer(PseudoFile()).writerow(row)
You're skipping a couple of steps in there, but maybe that's just what you want.
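For the per-row callback described in the question, the write method could simply capture what csv hands it. A minimal sketch (RowCollector and format_row are my own names):

import csv

class RowCollector(object):
    # File-like object that accumulates whatever is written to it.
    def __init__(self):
        self.value = ""

    def write(self, string):
        self.value += string

def format_row(row):
    # Reuse csv's quoting and escaping logic without a real file.
    collector = RowCollector()
    csv.writer(collector).writerow(row)
    return collector.value  # includes the trailing "\r\n" terminator

print(format_row(["a", 'b "quoted"', "c,d"]))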