Python skips broken JSON output

I'm using masscan and the example script, but when I print the results only the command line is printed, because the JSON output for the scan is broken (at least one issue mentions that). How can I fix that, or still access the JSON data?
The example code:
import masscan
mas = masscan.PortScanner()
mas.scan('172.0.8.78/24', ports='22,80,8080', arguments='--max-rate 1000')
print(mas.scan_result)
Masscan should return the scan results.
Instead it returns: {"command_line": "masscan -oJ - 0.0.0.0/24 -p 25565 --rate=1000", "scan":{}
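If what comes back is the raw, truncated JSON string shown above (an assumption about where the break happens; in python-masscan `scan_result` is normally already a parsed dict), one workaround is to balance the braces before parsing:

```python
import json

def repair_truncated_json(raw):
    """Append missing closing braces so a truncated JSON object parses.
    Naive sketch: assumes the string was cut off before the final braces,
    as in the output above, and contains no braces inside string values."""
    missing = raw.count('{') - raw.count('}')
    return raw + '}' * max(missing, 0)

raw = '{"command_line": "masscan -oJ - 0.0.0.0/24 -p 25565 --rate=1000", "scan":{}'
data = json.loads(repair_truncated_json(raw))
print(data['scan'])  # empty here: no hosts answered on the scanned ports
```

An empty `scan` object is also worth distinguishing from broken JSON: masscan typically needs root privileges to send packets, so an empty result often just means the scan itself produced no hosts.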

Related

How to combine boto3 python code and get one csv output

I have written two scripts to get some details about EC2 instances. The reason for two scripts is that I am not able to get the 'ComputerName' from EC2 describe_instances, so I created a separate script that uses the boto3 SSM client to get the 'ComputerName'. Now I am trying to combine both into a single script that writes one CSV with separate columns and rows. Could someone help me with the code below to get a single CSV output? Please also find the sample output.
import boto3
import csv

profiles = ['Dev_Databases','Dev_App','Prod_Database','Prod_App']

########################EC2-Details################################
csv_ob=open("EC2-Inventory.csv","w",newline='')
csv_w=csv.writer(csv_ob)
csv_w.writerow(["S_NO","profile","Instance_Id",'Instance_Type','Platform','State','LaunchTime','Privat_Ip'])
cnt=1
for ec2 in profiles:
    aws_mag_con=boto3.session.Session(profile_name=ec2)
    ec2_con_re=aws_mag_con.resource(service_name="ec2",region_name="ap-southeast-1")
    for each in ec2_con_re.instances.all():
        print(cnt,ec2,each.instance_id,each.instance_type,each.platform,each.state,each.launch_time.strftime("%Y-%m-%d"),each.private_ip_address)
        csv_w.writerow([cnt,ec2,each.instance_id,each.instance_type,each.platform,each.state,each.launch_time.strftime("%Y-%m-%d"),each.private_ip_address])
        cnt+=1
csv_ob.close()

#######################HostName-Details###########################
csv_ob1=open("Hostname-Inventory.csv","w",newline='')
csv_w1=csv.writer(csv_ob1)
csv_w1.writerow(["S_NO",'Profile','InstanceId','ComputerName','PlatformName'])
cnt1=1
for ssm in profiles:
    session = boto3.Session(profile_name=ssm)
    ssm_client=session.client('ssm', region_name='ap-southeast-1')
    paginator = ssm_client.get_paginator('describe_instance_information')
    response_iterator = paginator.paginate(Filters=[{'Key': 'PingStatus','Values': ['Online']}])
    for item in response_iterator:
        for instance in item['InstanceInformationList']:
            if instance.get('PingStatus') == 'Online':
                InstanceId = instance.get('InstanceId')
                ComputerName = instance.get('ComputerName')#.replace(".WORKGROUP", "")
                PlatformName = instance.get('PlatformName')
                print(InstanceId,ComputerName,PlatformName)
                csv_w1.writerow([cnt1,ssm,InstanceId,ComputerName,PlatformName])
                cnt1+=1
csv_ob1.close()
Sample Output Below:
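One way to combine the two loops above (a sketch; `merge_inventory` and the sample dicts are hypothetical names standing in for the boto3 results, not boto3 API): fetch the SSM inventory first into a dict keyed by `InstanceId`, then look each EC2 instance up while writing a single CSV.

```python
import csv

def merge_inventory(ec2_rows, ssm_by_id):
    """Join EC2 rows with SSM data on InstanceId; instances not managed
    by SSM get empty ComputerName/PlatformName columns."""
    merged = []
    for row in ec2_rows:
        ssm = ssm_by_id.get(row['InstanceId'], {})
        merged.append({**row,
                       'ComputerName': ssm.get('ComputerName', ''),
                       'PlatformName': ssm.get('PlatformName', '')})
    return merged

# hypothetical sample data standing in for the two boto3 loops above
ec2_rows = [{'InstanceId': 'i-0abc', 'Instance_Type': 't3.micro', 'State': 'running'}]
ssm_by_id = {'i-0abc': {'ComputerName': 'WEBSRV01', 'PlatformName': 'Ubuntu'}}

rows = merge_inventory(ec2_rows, ssm_by_id)
with open('EC2-Inventory.csv', 'w', newline='') as f:
    w = csv.DictWriter(f, fieldnames=list(rows[0]))
    w.writeheader()
    w.writerows(rows)
```

Building the lookup dict first means only one pass over each API, and one CSV at the end instead of two.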

Python writing to file and json returns None/null instead of value

I'm trying to write data to a file with the following code
#!/usr/bin/python37all
print('Content-type: text/html\n\n')
import cgi
from Alarm import *
import json
htmldata = cgi.FieldStorage()
alarm_time = htmldata.getvalue('alarm_time')
alarm_date = htmldata.getvalue('alarm_date')
print(alarm_time,alarm_date)
data = {'time':alarm_time,'date':alarm_date}
# print(data['time'],data['date'])
with open('alarm_data.txt','w') as f:
    json.dump(data,f)
...
but when opening the file, I get the following output:
{'time':null,'date':null}
The print statement returns what I expect it to: 14:26 2020-12-12.
I've tried the same method with f.write(), but it returns both values as None. This is being run on a Raspberry Pi. Why aren't the correct values being written?
--EDIT--
The json string I expect to see is the following:{'time':'14:26','date':'2020-12-12'}
Perhaps you meant:
data = {'time':str(alarm_time), 'date':str(alarm_date)}
I would expect to see your file contents like this:
{"time":"14:26","date":"2020-12-12"}
Note the double quotes: ". JSON is very strict about these things, so don't fool yourself into putting single quotes ' in a file and expecting json to parse it.
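A quick demonstration of both points: `json.dump` writes Python `None` as `null` and always emits double quotes, so `null` in the file means `getvalue()` returned `None` (the form fields never arrived at the script), not that the dump itself went wrong:

```python
import json

# None becomes null on the way out
print(json.dumps({'time': None, 'date': None}))       # {"time": null, "date": null}
# strings come out with double quotes, as valid JSON requires
print(json.dumps({'time': '14:26', 'date': '2020-12-12'}))

# and null becomes None again on the way back in
assert json.loads('{"time": null}')['time'] is None
```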

Changing output of speedtest.py and speedtest-cli to include IP address in output .csv file

I added a line to the Python code "speedtest.py" that I found at pimylifeup.com. I hoped it would let me track the internet provider and IP address along with all the other speed information the code provides. But when I execute it, the code only grabs the next word after the findall call. I would also like it to return the IP address that appears after the provider. I have attached the code below. Can you help me modify it to return what I am looking for?
Here is an example what is returned by speedtest-cli
$ speedtest-cli
Retrieving speedtest.net configuration...
Testing from Biglobe (111.111.111.111)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by GLBB Japan (Naha) [51.24 km]: 118.566 ms
Testing download speed................................................................................
Download: 4.00 Mbit/s
Testing upload speed......................................................................................................
Upload: 13.19 Mbit/s
$
And this is an example of what is being returned by speedtest.py to my .csv file
Date,Time,Ping,Download (Mbit/s),Upload(Mbit/s),myip
05/30/20,12:47,76.391,12.28,19.43,Biglobe
This is what I want it to return.
Date,Time,Ping,Download (Mbit/s),Upload (Mbit/s),myip
05/30/20,12:31,75.158,14.29,19.54,Biglobe 111.111.111.111
Or may be,
05/30/20,12:31,75.158,14.29,19.54,Biglobe,111.111.111.111
Here is the code that I am using. And thank you for any help you can provide.
import os
import re
import subprocess
import time
response = subprocess.Popen('/usr/local/bin/speedtest-cli', shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')
ping = re.findall('km]:\s(.*?)\s', response, re.MULTILINE)
download = re.findall('Download:\s(.*?)\s', response, re.MULTILINE)
upload = re.findall('Upload:\s(.*?)\s', response, re.MULTILINE)
myip = re.findall('from\s(.*?)\s', response, re.MULTILINE)
ping = ping[0].replace(',', '.')
download = download[0].replace(',', '.')
upload = upload[0].replace(',', '.')
myip = myip[0]
try:
    f = open('/home/pi/speedtest/speedtestz.csv', 'a+')
    if os.stat('/home/pi/speedtest/speedtestz.csv').st_size == 0:
        f.write('Date,Time,Ping,Download (Mbit/s),Upload (Mbit/s),myip\r\n')
except:
    pass
f.write('{},{},{},{},{},{}\r\n'.format(time.strftime('%m/%d/%y'), time.strftime('%H:%M'), ping, download, upload, myip))
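A minimal change to the original regex approach (assuming the `Testing from Biglobe (111.111.111.111)...` line shown above) is to capture the provider and the parenthesised IP in one pattern, instead of stopping at the first whitespace:

```python
import re

# sample output as shown in the question
response = ("Retrieving speedtest.net configuration...\n"
            "Testing from Biglobe (111.111.111.111)...\n")

# two capture groups: provider name, then the IP inside the parentheses
m = re.search(r'Testing from (.+?) \((.+?)\)', response)
if m:
    provider, ip = m.groups()
    myip = '{} {}'.format(provider, ip)   # "Biglobe 111.111.111.111"
    # or write provider and ip as two separate csv columns
print(myip)
```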
Let me know if this works for you, it should do everything you're looking for
#!/usr/bin/env python
import os
import csv
import time
import subprocess
from decimal import Decimal, ROUND_UP

file_path = '/home/pi/speedtest/speedtestz.csv'

def format_speed(bits_string):
    """ changes string bits/s to megabits/s and rounds to two decimal places """
    return (Decimal(bits_string) / 1000000).quantize(Decimal('.01'), rounding=ROUND_UP)

def write_csv(row):
    """ writes a header row if one does not exist, then the test result row """
    # straight from the csv docs
    # see: https://docs.python.org/3/library/csv.html
    with open(file_path, 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile, delimiter=',', quotechar='"')
        if os.stat(file_path).st_size == 0:
            writer.writerow(['Date','Time','Ping','Download (Mbit/s)','Upload (Mbit/s)','myip'])
        writer.writerow(row)

response = subprocess.run(['/usr/local/bin/speedtest-cli', '--csv'], capture_output=True, encoding='utf-8')
# if speedtest-cli exited with no errors / ran successfully
if response.returncode == 0:
    # the csv module doesn't directly support parsing strings, "but it can easily be done"
    # csv.reader handles quotes and spaces, unlike a plain string split on ','
    # csv.reader returns an iterator, so we turn that into a list
    cols = list(csv.reader([response.stdout]))[0]
    # rounds e.g. a 13.45 ping to 13
    ping = Decimal(cols[5]).quantize(Decimal('1.'))
    # speedtest-cli --csv returns speed in bits/s, convert to Mbit/s
    download = format_speed(cols[6])
    upload = format_speed(cols[7])
    ip = cols[9]
    date = time.strftime('%m/%d/%y')
    clock = time.strftime('%H:%M')  # named clock to avoid shadowing the time module
    write_csv([date,clock,ping,download,upload,ip])
else:
    print('speedtest-cli returned error: %s' % response.stderr)
$/usr/local/bin/speedtest-cli --csv-header > speedtestz.csv
$/usr/local/bin/speedtest-cli --csv >> speedtestz.csv
output:
Server ID,Sponsor,Server Name,Timestamp,Distance,Ping,Download,Upload,Share,IP Address
Does that not get you what you're looking for? Run the first command once to create the csv with a header row. Then subsequent runs are done with the append `>>` operator, and that will add a test result row each time you run it.
Doing all of those regexes will bite you if speedtest-cli, or a library it depends on, decides to change its output format.
Plenty of ways to do it though. Hope this helps

For loop outputting one character per line

I'm writing a quick Python wrapper script to query our CrashPlan server so I can gather data from multiple sites, then convert it to JSON for a migration, and I've got most of it done. It's probably a bit ugly, but I'm one step away from getting the data I need to pass on to the json module so I can format the data I need for reports.
The script should query ldap, get a list of names from a list of sites, then create a command (which works).
But when printing the list in a for loop, it prints out each character instead of each name; if I just print the list, it prints each name on its own line. This obviously munges up the REST call, as the username isn't right.
'''
Crashplan query script
Queries the crashplan server using subprocess calls and formats the output
'''
import subprocess
import json
password = raw_input("What password do you want to use: ")
sitelist = ['US - DC - Washington', 'US - FL - Miami', 'US - GA - Atlanta', 'CA - Toronto']
cmdsites = ""
for each in sitelist:
    cmdsites = cmdsites + '(OfficeLocation={})'.format(each)
ldap_cmd = "ldapsearch -xLLL -S OfficeLocation -h ldap.local.x.com -b cn=users,dc=x,dc=com '(&(!(gidNumber=1088))(|%s))' | grep -w 'uid:' | awk {'print $2'}" % cmdsites
users = subprocess.check_output([ldap_cmd], shell=True)
##### EVERYTHING WORKS UP TO THIS POINT #####
for each in users:
    # subprocess.call(['curl -X GET -k -u "admin:'+password+'" "https://crashplan.x.com:4285/api/User?username='+each+'#x.com&incBackupUsage=true&strKey=lastBackup"'], shell=True) ### THIS COMMAND WORKS IT JUST GETS PASSED THE WRONG USERNAME
    print each #### THIS PRINTS OUT ONE LETTER PER LINE ####
print users #### THIS PRINTS OUT ONE NAME PER LINE ####
You get the output as a string which, when iterated, produces one character per iteration.
You should split it by line breaks:
for each in users.splitlines():
    print each
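To see the difference (shown in Python 3 syntax, though the original script is Python 2): iterating a string directly yields characters, while `splitlines()` yields one entry per line:

```python
# stand-in for the ldapsearch output: one username per line
users = "alice\nbob\ncarol\n"

chars = [c for c in users]      # iterating the raw string: characters
names = users.splitlines()      # one username per entry

print(chars[:5])   # ['a', 'l', 'i', 'c', 'e']
print(names)       # ['alice', 'bob', 'carol']
```

Note that on Python 3, `subprocess.check_output` returns bytes, so you would need `users.decode().splitlines()` there.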

Using ExtractMsg in a loop?

I am trying to write a script that will extract details from Outlook .msg files and append them to a .csv file. ExtractMsg (https://github.com/mattgwwalker/msg-extractor) will process the messages one at a time, at the command line with 'python ExtractMsg.py message', but I can't work out how to use it to loop through all the messages in a directory.
I have tried:
import ExtractMsg
import glob
for message in glob.glob('*.msg'):
    print 'Reading', message
    ExtractMsg(message)
This gives "'module' object is not callable". I have tried to look at the ExtractMsg module but the structure of it is beyond me at the moment. How can I make the module callable?
ExtractMsg(message)
You are trying to call a module object - exactly what the error message is telling you.
Perhaps you need to use the ExtractMsg.Message class instead:
msg = ExtractMsg.Message(message)
At the very bottom of the following link you will find an example of usage:
https://github.com/mattgwwalker/msg-extractor/blob/master/ExtractMsg.py
Thanks all - the following sorted it:
import ExtractMsg
import glob
for message in glob.glob('*.msg'):
    print 'Reading', message
    msg = ExtractMsg.Message(message)
    body = msg._getStringStream('__substg1.0_1000')
    sender = msg._getStringStream('__substg1.0_0C1F')
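To get from there to the CSV the question mentions, one way (a sketch in Python 3 syntax; `append_rows` and the sample rows are hypothetical, standing in for values pulled out of `ExtractMsg.Message`) is to collect one row per message and append with the csv module:

```python
import csv
import os

def append_rows(path, rows, header=('message', 'sender', 'body')):
    """Append one row per .msg file; write the header only if the file is new/empty."""
    new_file = not os.path.exists(path) or os.path.getsize(path) == 0
    with open(path, 'a', newline='') as f:
        w = csv.writer(f)
        if new_file:
            w.writerow(header)
        w.writerows(rows)

# hypothetical rows standing in for (message, sender, body) per file
rows = [('a.msg', 'alice@example.com', 'hello'),
        ('b.msg', 'bob@example.com', 'hi')]
append_rows('messages.csv', rows)
```

Using append mode means the script can be re-run on new batches of messages without clobbering earlier results.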
