I added a line to the Python script "speedtest.py" that I found at pimylifeup.com. I hoped it would let me record the internet provider and IP address along with all of the other speed information the author's code collects. But when I execute it, the code only grabs the first word after the findall match. I would also like it to return the IP address that appears after the provider. I have attached the code below. Can you help me modify it to return what I am looking for?
Here is an example of what speedtest-cli returns:
$ speedtest-cli
Retrieving speedtest.net configuration...
Testing from Biglobe (111.111.111.111)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by GLBB Japan (Naha) [51.24 km]: 118.566 ms
Testing download speed................................................................................
Download: 4.00 Mbit/s
Testing upload speed......................................................................................................
Upload: 13.19 Mbit/s
$
And this is an example of what is being returned by speedtest.py to my .csv file:
Date,Time,Ping,Download (Mbit/s),Upload(Mbit/s),myip
05/30/20,12:47,76.391,12.28,19.43,Biglobe
This is what I want it to return.
Date,Time,Ping,Download (Mbit/s),Upload (Mbit/s),myip
05/30/20,12:31,75.158,14.29,19.54,Biglobe 111.111.111.111
Or maybe:
05/30/20,12:31,75.158,14.29,19.54,Biglobe,111.111.111.111
Here is the code that I am using. Thank you for any help you can provide.
import os
import re
import subprocess
import time

response = subprocess.Popen('/usr/local/bin/speedtest-cli', shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')

ping = re.findall(r'km\]:\s(.*?)\s', response, re.MULTILINE)
download = re.findall(r'Download:\s(.*?)\s', response, re.MULTILINE)
upload = re.findall(r'Upload:\s(.*?)\s', response, re.MULTILINE)
myip = re.findall(r'from\s(.*?)\s', response, re.MULTILINE)

ping = ping[0].replace(',', '.')
download = download[0].replace(',', '.')
upload = upload[0].replace(',', '.')
myip = myip[0]

try:
    f = open('/home/pi/speedtest/speedtestz.csv', 'a+')
    if os.stat('/home/pi/speedtest/speedtestz.csv').st_size == 0:
        f.write('Date,Time,Ping,Download (Mbit/s),Upload (Mbit/s),myip\r\n')
except:
    pass

f.write('{},{},{},{},{},{}\r\n'.format(time.strftime('%m/%d/%y'), time.strftime('%H:%M'), ping, download, upload, myip))
Let me know if this works for you; it should do everything you're looking for:
#!/usr/bin/env python
import os
import csv
import time
import subprocess
from decimal import Decimal, ROUND_UP

file_path = '/home/pi/speedtest/speedtestz.csv'

def format_speed(bits_string):
    """Converts a bit/s string to Mbit/s, rounded to two decimal places."""
    return (Decimal(bits_string) / 1000000).quantize(Decimal('.01'), rounding=ROUND_UP)

def write_csv(row):
    """Writes a header row if one does not exist, then the test result row."""
    # straight from the csv docs
    # see: https://docs.python.org/3/library/csv.html
    with open(file_path, 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile, delimiter=',', quotechar='"')
        if os.stat(file_path).st_size == 0:
            writer.writerow(['Date', 'Time', 'Ping', 'Download (Mbit/s)', 'Upload (Mbit/s)', 'myip'])
        writer.writerow(row)

response = subprocess.run(['/usr/local/bin/speedtest-cli', '--csv'], capture_output=True, encoding='utf-8')

# if speedtest-cli exited with no errors / ran successfully
if response.returncode == 0:
    # from the csv docs:
    # "And while the module doesn't directly support parsing strings, it can easily be done"
    # this removes quotes and spaces, unlike a plain string split on ','
    # csv.reader returns an iterator, so we turn that into a list
    cols = list(csv.reader([response.stdout]))[0]
    # rounds e.g. a 13.45 ping down to 13
    ping = Decimal(cols[5]).quantize(Decimal('1.'))
    # speedtest-cli --csv returns speeds in bit/s; convert to Mbit/s
    download = format_speed(cols[6])
    upload = format_speed(cols[7])
    ip = cols[9]
    date = time.strftime('%m/%d/%y')
    now = time.strftime('%H:%M')  # named 'now' to avoid shadowing the time module
    write_csv([date, now, ping, download, upload, ip])
else:
    print('speedtest-cli returned error: %s' % response.stderr)
$/usr/local/bin/speedtest-cli --csv-header > speedtestz.csv
$/usr/local/bin/speedtest-cli --csv >> speedtestz.csv
output:
Server ID,Sponsor,Server Name,Timestamp,Distance,Ping,Download,Upload,Share,IP Address
Does that not get you what you're looking for? Run the first command once to create the csv with the header row. Subsequent runs use the append operator '>>', which adds a test result row each time you run it.
Relying on all of those regexes will bite you if speedtest-cli (or a library it depends on) ever decides to change its output format.
Plenty of ways to do it though. Hope this helps
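That said, if you want to stay with the original regex approach, a single pattern can capture both the provider and the IP, since the IP appears in parentheses right after the provider on the "Testing from ..." line. A minimal sketch, using the sample line from the question's output:

```python
import re

# Sample line from speedtest-cli's human-readable output (from the question)
response = "Testing from Biglobe (111.111.111.111)...\n"

# One pattern captures both fields:
# group 1 is the provider name, group 2 is the dotted-quad IP address
match = re.search(r'Testing from (.+?) \((\d{1,3}(?:\.\d{1,3}){3})\)', response)
provider, ip = match.groups() if match else ('', '')

# e.g. for the second CSV layout shown in the question
myip = f'{provider},{ip}'
```

Dropping this pattern in place of the `from\s(.*?)\s` findall would give you both values instead of only the first word after "from".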
I have successfully worked out adding a value to a key in YAML with Python, and have started to work on the reverse, using the addition code as a reference. Here is how I intend the code to work:
connected_guilds:
- 1
- 2
After the code runs, the YAML file should be changed to:
connected_guilds:
- 1
Here is my code. However, it didn't work: it ended up wiping the file, and all that remained was the bare "- 1" entry from the first YAML example I enclosed.
with open('guilds.yaml', 'r+') as guild_remove:
    loader = yaml.safe_load(guild_remove)
    content = loader['connected_guilds']
    for server in content:
        if server != guild_id:
            continue
        else:
            content.remove(guild_id)
    guild_remove.seek(0)
    yaml.dump(content, guild_remove)
    guild_remove.truncate()
I'd be grateful if anyone could help me out :D
Don't try to reimplement searching for the item to remove when Python already provides this to you:
with open('guilds.yaml', 'r+') as guild_remove:
    content = yaml.safe_load(guild_remove)
    content["connected_guilds"].remove(guild_id)
    guild_remove.seek(0)
    yaml.dump(content, guild_remove)
    guild_remove.truncate()
Here is the solution (adapted from the addition code):
with open('guilds.yaml', 'r+') as guild_remove:
    loader = yaml.safe_load(guild_remove)
    content = loader['connected_guilds']
    for server in content:
        if server != guild_id:
            continue
        else:
            content.remove(guild_id)
    guild_remove.seek(0)
    yaml.dump({'connected_guilds': content}, guild_remove)
    guild_remove.truncate()
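The reason the file ended up holding only a bare list entry is that yaml.dump writes exactly the object it is given: dumping the inner list drops the connected_guilds key, while dumping the whole mapping preserves it. A minimal round-trip sketch (assumes PyYAML is installed; uses strings instead of a file to stay self-contained):

```python
import yaml

# Starting document, as in the question
doc = "connected_guilds:\n- 1\n- 2\n"
content = yaml.safe_load(doc)

# Remove the guild directly; no manual search loop is needed
guild_id = 2
content['connected_guilds'].remove(guild_id)

# Dump the whole mapping, not just the inner list,
# so the 'connected_guilds' key survives the rewrite
result = yaml.safe_dump(content)
```

Dumping `content['connected_guilds']` instead would have produced only "- 1", which matches the wiped file described in the question.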
I've written a short script to download and rename files from a specific folder in my Outlook account. The code works great; the only problem is that I typically need to run it several times to actually download all of the messages. It seems the code is simply failing to acknowledge some of the messages, and there are no errors when I run it.
I've tried a few things like walking through each line step by step in the python window, running the code with outlook closed or opened, and trying to print the files after they're successfully saved to see if there are specific messages that are causing the problem.
Here's my code
#! python3
# downloadAttachments.py - Downloads all of the weight tickets from Bucky
# Currently saves to desktop due to instability of I: drive connection
import win32com.client, os, re

# This line opens the Outlook application
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
# 6 is the MAPI constant for the default Inbox folder (olFolderInbox)
inbox = outlook.GetDefaultFolder(6)
# folder where the messages to save live
TicketSave = inbox.Folders('WDE').Folders('SAVE').Folders('TicketSave')
# folder where the messages are moved to
done = inbox.Folders('WDE').Folders('CHES').Folders('Weight Tickets')
ticketMessages = TicketSave.Items

# Key is used to verify the subject line is correct. This script only works if the person
# sends their emails with a consistent subject line (can be altered for other cases)
key = re.compile(r'wde load \d{3}')  # requires regular expressions (i.e. 'import re')

for message in ticketMessages:
    # skip any message that does not match the subject line format (case-insensitive)
    check = str(message.Subject).lower()
    if key.search(check) == None:
        continue
    attachments = message.Attachments
    tic = attachments.Item(1)
    ticnum = str(message.Subject).split()[2]
    name = str(tic).split()[0] + ' ticket ' + ticnum + '.pdf'  # builds the filename
    tic.SaveAsFile('C:\\Users\\bhalvorson\\Desktop\\Attachments' + os.sep + str(name))
    if message.UnRead == True:
        message.UnRead = False
    message.Move(done)
    print('Ticket pdf: ' + name + ' saved successfully')
Alright I found the answer to my own question. I'll post it here in case any other youngster runs into the same problem as me.
The main problem is the "message.Move(done)" line, second from the bottom.
Apparently the Move call mutates the collection being iterated, which changes the number of loops the for loop will go through. So, the way it's written above, the code only ever processes half of the items in the folder.
An easy workaround is to change the main line of the for loop to "for message in list(ticketMessages):". The list snapshot is not affected by the Move call, so you'll be able to loop through every message.
Hope this helps someone.
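The skipping behavior is easy to reproduce with a plain Python list, which serves as an analogue (not the actual Outlook COM collection) of what Move does to the folder's Items:

```python
# Buggy version: removing items from the sequence while iterating
# over it shifts the remaining items left, so every other one is skipped
source = [1, 2, 3, 4]
moved = []
for x in source:
    source.remove(x)
    moved.append(x)
# only half the items get processed, just like the Outlook folder

# Fixed version: iterate over a snapshot, mutate the original freely
source2 = [1, 2, 3, 4]
moved2 = []
for x in list(source2):
    source2.remove(x)
    moved2.append(x)
# every item is processed exactly once
```

The same snapshot trick is why `for message in list(ticketMessages):` fixes the Outlook loop.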
I've built a script that runs two parallelized processes over the same text files, each one saving its results into a new text file with an appropriate name. One of these processes works fine, while the other gives the error message from the question title.
I thought it was because I collect results in a buffer and then write it to the file all at once, so I changed that part of the code so that each new result line is saved to the file immediately. Nonetheless, the error message still appears, and I can't get the results saved to the file.
I'm now testing a version of the script with the processes unparallelized, but how could I solve this problem while keeping the processes parallelized?
Here's the sample code below:
from concurrent.futures import ProcessPoolExecutor, as_completed

def process():
    counter_object_results = make_data_processing()
    txt_file_name = f'file_name.txt'
    with open(txt_file_name, 'a') as txt_file:
        for count in counter_object_results.items():
            txt_file_content = f'{count[0]}\t{count[1]}\n'
            txt_file.write(txt_file_content)

def process_2():
    counter_object_results = make_data_processing()
    txt_file_name = f'file_name.txt'
    with open(txt_file_name, 'a') as txt_file:
        for count in counter_object_results.items():
            txt_file_content = f'{count[0]}\t{count[1]}\n'
            txt_file.write(txt_file_content)

with ProcessPoolExecutor() as executor:
    worker_a = executor.submit(process)
    worker_b = executor.submit(process_2)
    futures = [worker_a, worker_b]
    for worker in as_completed(futures):
        resp = worker.result()
        print(f'Results saved on {resp}')
I am trying to take a Twitter stream, save it to a file, and then analyze the contents. However, I am having an issue with files generated by the program, as opposed to files created from the CLI.
Twitter Analysis program:
import json
import pandas as pd
import matplotlib.pyplot as plt

tweets_data = []
tweets_file = open("test.txt", "r")
for line in tweets_file:
    try:
        tweet = json.loads(line)
        tweets_data.append(tweet)
    except:
        continue

tweets = pd.DataFrame()
tweets['text'] = map(lambda tweet: tweet['text'], tweets_data)
However, on the last line I keep getting "KeyError: 'text'", which I understand means it can't find the key.
When I run the Twitter search program and redirect its output to a file from the CLI, the analysis works fine with no issues. But if I save the output to a file from inside the program, it gives me the error.
Twitter Search program:
class Output(StreamListener):
    def on_data(self, data):
        with open("test.txt", "a") as tf:
            tf.write(data)

    def on_error(self, status):
        print status

L = Output()
auth = OAuthHandler(consKey, consSecret)
auth.set_access_token(Tok1, Tok2)
stream = Stream(auth, L)
stream.filter(track=['cyber'])
If I run the above as is, analyzing the test.txt will give me the error. But if I remove the line and instead run the program as:
python TwitterSearch.py > test.txt
then it works with no problem when running test.txt through the analysis program.
I have tried changing the file handling from append to write which was of no help.
I also added the line:
print tweet['text']
just before:
tweets['text'] = map(lambda tweet: tweet['text'], tweets_data)
This worked, showing that the program can see a value for the text key. I also compared the output file from the program with the one from the CLI and could not see any difference. Please help me understand and resolve the problem.
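One plausible cause (an assumption, since the raw files aren't shown): a streamed file often contains non-tweet messages, such as limit or delete notices, which are valid JSON but have no 'text' key. The try/except around json.loads doesn't catch these, and the KeyError from the lambda inside map is raised later, at the DataFrame assignment. Filtering such records out before building the DataFrame avoids the error:

```python
import json
import pandas as pd

# Simulated lines from a streamed file: a real tweet, a limit notice
# (valid JSON with no 'text' key), and a corrupt line
lines = [
    '{"text": "hello"}',
    '{"limit": {"track": 5}}',
    'not json at all',
]

tweets_data = []
for line in lines:
    try:
        tweet = json.loads(line)
    except ValueError:
        continue
    if 'text' in tweet:          # skip limit/delete notices
        tweets_data.append(tweet)

tweets = pd.DataFrame()
tweets['text'] = [t['text'] for t in tweets_data]
```

A list comprehension is used instead of map so any remaining KeyError would surface immediately at this line rather than being deferred.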