How to get the complete response from a telnet command using Python?

I am using Python 3.10.9 on Windows to create a Telnet session with telnetlib, but I have trouble reading the complete response.
I create a telnet session like
session = telnetlib.Telnet(host, port, timeout)
and then I write a command like
session.write(command + b"\n")
and then I wait a fairly long time (about 5 seconds) before I try to read the response using
session.read_some()
but I only get half of the response back!
The complete response is e.g.
Invalid arguments
Usage: $IMU,START,<SAMPLING_RATE>,<OUTPUT_RATE>
where SAMPLING_RATE = [1 : 1000] in Hz
OUTPUT_RATE = [1 : SAMPLING_RATE] in Hz
but all I read is the following:
b'\x1b[0GInvalid arguments\r\n\r\nUsage: $IMU,START,<'
More than half of the response is missing! How can I read the complete response in a non-blocking way?
The other read methods behave strangely too:
read_all: blocking
read_eager: same issue
read_very_eager: sometimes works, sometimes not. Seems to contain a repetition of the message ...
read_lazy: does not read anything
read_very_lazy: does not read anything
I have not the slightest idea what all these different read methods are for, and the documentation is not much help.
read_very_eager seems to work sometimes, but other times I get a response like
F
FI
FIL
FILT
FILTE
FILTER
and so on, even though I am reading only once and not concatenating the output myself!
Maybe there is a simpler module I can use instead of telnetlib?

Have you tried read_all(), or any of the other read_* options available?
Available functions here.
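If the device never closes the connection (which is why read_all() blocks), one approach is to poll read_very_eager() in a loop and stop once the output goes quiet for a moment. A minimal sketch, assuming the host, port, and quiet window are placeholders you would tune:
import telnetlib
import time

def read_until_quiet(session, quiet=0.5, overall_timeout=10.0):
    # Collect buffered output until no new data arrives for `quiet` seconds.
    buf = b""
    last_data = time.monotonic()
    deadline = time.monotonic() + overall_timeout
    while time.monotonic() < deadline:
        chunk = session.read_very_eager()  # non-blocking: returns b'' if nothing is buffered
        if chunk:
            buf += chunk
            last_data = time.monotonic()
        elif time.monotonic() - last_data > quiet:
            break  # no new data for a while; assume the response is complete
        else:
            time.sleep(0.05)
    return buf

session = telnetlib.Telnet("192.0.2.1", 23, timeout=5)  # placeholder host/port
session.write(b"$IMU,START,100,10\n")
print(read_until_quiet(session).decode(errors="replace"))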

Cannot Print output for Fortigate Command with Python [duplicate]

This question already has answers here:
Get output from a Paramiko SSH exec_command continuously
(6 answers)
Closed 11 months ago.
I am having an issue getting my Python script to return or print any output for the following command run on my FortiGate firewall:
'diagnose sniffer packet any "host X.X.X.X and port 53" 4 0 a'
from netmiko import ConnectHandler

def dns_pcap():
    device = ConnectHandler(device_type="fortinet", ip="X.X.X.X", username="xxxxxxxx", password="xxxxxxxxx")
    lines = []
    gi_pcap = device.send_command('diagnose sniffer packet GI "host X.X.X.X and port 53" 4 0 a')
    output = device.read_channel()
    print(output)

dns_pcap()
The script outputs nothing to my terminal. Does anyone have any idea how to get the output of this command to print to my screen?
(Also, please note I am using Python 2.7.)
I have many scripts running against both Fortinet and Cisco devices, and they all print output from variable-assigned commands to the screen when I execute them, but not in this case.
I am assuming it is because the output is not static like a 'show system interface', but I am not sure how to handle dynamic output from a command.
The output is the result of the method device.send_command. output = device.read_channel() is not necessary. Replace print(output) with print(gi_pcap) and you'll be on the right track.
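In other words, a sketch of the corrected function from the question:
def dns_pcap():
    device = ConnectHandler(device_type="fortinet", ip="X.X.X.X", username="xxxxxxxx", password="xxxxxxxxx")
    gi_pcap = device.send_command('diagnose sniffer packet GI "host X.X.X.X and port 53" 4 0 a')
    print(gi_pcap)  # send_command already returns the command output

dns_pcap()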
If the command requires any sort of interaction, that can also cause it to freeze up because the script is waiting for a prompt that will never come. You can see if this is happening by saving the session log output. You can do this by adding session_log="device.txt" to your session arguments, with device.txt being any filename relevant to you. The contents of that file will be whatever is being sent back and forth in the SSH session to the device.
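For example, a minimal sketch (the connection details are the placeholders from the question):
device = ConnectHandler(
    device_type="fortinet",
    ip="X.X.X.X",
    username="xxxxxxxx",
    password="xxxxxxxxx",
    session_log="device.txt",  # logs everything sent and received in the session
)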
The answer was indeed found in this post
Get output from a Paramiko SSH exec_command continuously
Using Paramiko and the get_pty=True argument in the function allowed me to print the pseudo-terminal output.
Thanks very much to Richard Dodson for the help.
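For reference, a minimal sketch of that Paramiko approach; the host, credentials, and command are placeholders from the question:
import sys
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("X.X.X.X", username="xxxxxxxx", password="xxxxxxxxx")
# get_pty=True allocates a pseudo-terminal, which is what allowed the
# streamed sniffer output to be read line by line
stdin, stdout, stderr = client.exec_command(
    'diagnose sniffer packet any "host X.X.X.X and port 53" 4 0 a',
    get_pty=True,
)
for line in iter(stdout.readline, ""):
    sys.stdout.write(line)  # print each line as it arrives
client.close()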
Can you share the interval at which your FortiGate device produces output for this command? Depending on the interval, Netmiko should also have worked.
I also tested Netmiko against a different vendor with dynamic output, fetching data at a 10-second interval (100 seconds in total), and it worked without issue; I was able to get the output properly. However, when I increased the interval to 11 or 12 seconds, I got errors and it did not work.
Can you also try Netmiko's "timing" method to get your data, as sketched below? If the interval is shorter than 1-2 seconds, this method should help. (For example, I can get ping output with a count of 100 without an error using the timing method. Did you also try getting ping output on your network box to see if that works?)
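A minimal sketch of that timing approach (the command is from the question; exact argument support depends on your Netmiko version):
output = device.send_command_timing('diagnose sniffer packet any "host X.X.X.X and port 53" 4 0 a')
print(output)  # send_command_timing waits based on timing rather than a prompt pattern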
I think the size of the data you expect from your output is also important in your case. If it is too big to be shown on a CLI screen, that may also cause a problem. In that case, saving your data to a file and reading it from there might be another option.
It would help if you could advise whether this command produces output at a regular interval, along with the expected size of the data.
In addition, I have also figured out that if the output takes more than 100 seconds, that can cause the problem. Here is a sample of how to set this option:
{
    "device_type": "device_type",
    "ip": "135.243.92.119",
    "username": "admin",
    "password": "admin",
    "port": port,
    "fast_cli": False,  # fast_cli
    "global_delay_factor": 2,  # if the output takes more than 100 seconds
}
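That dictionary would then be unpacked into the connection call, e.g. (assuming it is stored in a variable named device_params):
device = ConnectHandler(**device_params)  # device_params being the dictionary above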

Python Parallel SSH get only command output

I'm new to Python and I'm looking to run multiple parallel SSH connections and commands to devices. I'm using pssh for it.
The issue is that the device returns a big header after the connection, around 20-30 lines.
When I use the code below, what is printed out is the result of the command, but at the top there is also the big header that is printed after login.
hosts = ['XX.XXX.XX.XXX']
client = ParallelSSHClient(hosts, user='XXXX', password='XXXXX')
output = client.run_command('command')
for host in output:
    for line in output[host]['stdout']:
        print line
Is there any way I can get JUST the command output?
Not sure I understand what you mean.
I'm using pssh as well, and it seems like I'm using the same method as you are to print my command's output; see below:
client = pssh.ParallelSSHClient(nodes, pool_size=args.batch, timeout=10, num_retries=1)
output = client.run_command(command, sudo=True)
for node in output:
    for line in output[node]['stdout']:
        print '[{0}] {1}'.format(node, line)
Could you elaborate a bit more? Maybe provide an example of a command you run and the output you get?
Check out pssh.
This tool uses multiple threads and performs quickly.
You can read more about it here.

Python Django PDFKIT - [Errno 9] Bad file descriptor

I use pdfkit and wkhtmltopdf to generate PDF documents. When I generate the first PDF, all is well. When I quickly (within 5 seconds) generate another, I get the error [Errno 9] Bad file descriptor. If I close the error (step back in the browser) and open again, it will create the PDF.
my views.py
config = pdfkit.configuration(wkhtmltopdf='C:/wkhtmltopdf/bin/wkhtmltopdf.exe')
pdfgen = pdfkit.from_url(url, printname, configuration=config)
pdf = open(printname, 'rb')
response = HttpResponse(pdf.read())
response['Content-Type'] = 'application/pdf'
response['Content-disposition'] = 'attachment ; filename =' + filename
pdf.close()
return response
A possibly important note: I run this site on IIS 8; when running from the command line (python manage.py runserver) the error is not present.
Any guidelines on how to handle this error would be great.
When I quickly (within 5 seconds) generate another
This point suggests that your code is flawless and the problem lies with your browser rejecting the URL, as Peter suggests.
Most probably the cause of the error lies with the file buffer flush. Consider flushing the buffer at appropriate places.
With no further information forth-coming, I'll convert my comment to an answer...
Most likely the issues are that your URL is being rejected by the web server when you try the quick reload (via from_url) or that you are having problems accessing the local file you are trying to create.
You could try to eliminate the latter by just writing straight to a variable by passing False as your output file name - e.g. pdf = pdfkit.from_url('google.com', False).
If that doesn't solve it, your issue is almost certainly with the server rejecting the URL - and so you need to look at the diagnostics on that server.
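A minimal sketch of that in-memory variant, with placeholder url and filename values standing in for the ones in your view:
import pdfkit
from django.http import HttpResponse

def pdf_view(request):  # hypothetical view name
    url = 'https://example.com'  # placeholder for your source URL
    filename = 'document.pdf'    # placeholder for your download name
    config = pdfkit.configuration(wkhtmltopdf='C:/wkhtmltopdf/bin/wkhtmltopdf.exe')
    # Passing False as the output path makes pdfkit return the PDF bytes
    # instead of writing a file, so no local file access is involved.
    pdf_bytes = pdfkit.from_url(url, False, configuration=config)
    response = HttpResponse(pdf_bytes, content_type='application/pdf')
    response['Content-Disposition'] = 'attachment; filename=' + filename
    return response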

How do I get the snapshot length of a .pcap file using dpkt?

I am trying to get the snapshot length of a .pcap file. I have gone to the man page for pcap and pcap_snapshot but have not been able to get the function to work.
I am running a Fedora 20 VM, and the code is written in Python.
First I try to import the file that the man page says to include, but I get a syntax error on the import and on the pcap_snapshot() call.
I am new to Python, so I imagine it's something simple, but I am not sure what it is. Any help is much appreciated!
import <pcap/pcap.h>
import dpkt
myPcap = open('mycapture.pcap')
myFile = dpkt.pcap.Reader(myPcap)
print "Snapshot length = ", myFile.pcap_snapshot()
Don't read the man page first unless you're writing code in C, C++, or Objective-C.
If you're not using a C-flavored language, you'll need to use a wrapper for libpcap, and should read the documentation for the wrapper first, as you won't be calling the C functions from libpcap, you'll be calling functions from the wrapper. If you try to import a C-language header file, such as pcap/pcap.h, in Python, that will not work. If you try to directly call a C-language function, such as pcap_snapshot(), that won't work, either.
Dpkt is not a wrapper; it is, instead, a library to parse packets and to read pcap files, with the code to read pcap files being independent of libpcap. Therefore, it won't offer wrappers for libpcap APIs such as pcap_snapshot().
Dpkt's documentation is, well, rather limited. A quick look at its pcap.py module seems to suggest that
print "Snapshot length = ", myFile.snaplen
would work; give that a try.
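Putting that together, a minimal sketch of the corrected script (Python 2 syntax, to match the question):
import dpkt

# open in binary mode so the pcap file header is read correctly
myPcap = open('mycapture.pcap', 'rb')
myFile = dpkt.pcap.Reader(myPcap)
print "Snapshot length = ", myFile.snaplen
myPcap.close()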

Pycurl redirect option ignored and assertion failed trying to read video from the web?

I am trying to write a program that reads a webpage looking for file links, which it then attempts to download using curl/libcurl/pycurl. I have everything up to the pycurl part working correctly, and when I use a curl command in the terminal, I can get the file to download. The curl command looks like the following:
curl -LO https://archive.org/download/TheThreeStooges/TheThreeStooges-001-WomanHaters1934moeLarryCurleydivxdabaron19m20s.mp4
This results in one redirect (a file that reads as all 0s on the output) and then it correctly downloads the file. When I remove the -L flag (so the command is just -O) it only reaches the first line, where it doesn't find a file, and stops.
But when I try to do the same operation using pycurl in a Python script, I am unable to successfully set [Curl object].FOLLOWLOCATION to 1, which is supposed to be the equivalent of the -L flag. The python code looks like the following:
c = pycurl.Curl()  # get a Curl object
fp = open(file_name, 'wb')
c.setopt(c.URL, full_url)  # set the url
c.setopt(c.FOLLOWLOCATION, 1)
c.setopt(c.WRITEDATA, fp)
c.perform()
When this runs, it gets to c.perform() and shows the following:
python2.7: src/pycurl.c:272: get_thread_state: Assertion `self->ob_type == p_Curl_Type' failed.
Is it missing the redirect, or am I missing something else earlier? I am relatively new to cURL.
When I enabled verbose output for the c.perform() step, I was able to uncover what I believe was/is the underlying problem that my program had. The first line, which was effectively flagged, indicated that an open connection was being reused.
I had originally packaged the code into an object-oriented setup, as opposed to a script, so the Curl object had been reused without being closed. Therefore, after the first connection attempt, which failed because I didn't set the options correctly, it was reusing the connection to the website/server (which presumably had the wrong connection settings).
The problem was resolved by having the script close any existing Curl objects, and create a new one before the file download.
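A minimal sketch of that fix, creating and closing a fresh handle per download (the function name is illustrative):
import pycurl

def download(full_url, file_name):
    c = pycurl.Curl()  # fresh handle per download, never reused
    with open(file_name, 'wb') as fp:
        c.setopt(c.URL, full_url)
        c.setopt(c.FOLLOWLOCATION, 1)  # equivalent of curl's -L flag
        c.setopt(c.WRITEDATA, fp)
        c.perform()
    c.close()  # release the handle and its cached connection state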
