I'm trying to create a program that asks the user for the link of any website and blocks it (in the hosts file).
import requests

print('Time to block some websites')
ask = input('> Give me the link... ') ; link = {}  # Ask user
try:
    r = requests.get(ask)  # Try the url
    print('Protocol :', r.url[:r.url.find(":")])  # Check protocol of the url
    url = r.url
    print('your url :', url)
except requests.exceptions.RequestException:
    raise ValueError('Give the url with https/http', ask)
url = url.split('/') ; urlist = list(url)  # Split url
link.update({'linko': urlist[2]}) ; link.update({'host': '127.0.0.1 '})  # Add host and url to link
x = link['linko'] ; y = link['host']  # Transform value of dict in string
z = str(y + x)  # Assign host and link
f = open('hosts', 'a')  # Open document in append mode
f.write(z) ; f.write('\n')  # Write z and newline for future use
f.close()  # Always close after use
f = open('hosts', 'r')  # Open document in read mode
print(f.read())  # Read document
f.close()  # Always close after use
Traceback :
File "C:\Windows\System32\drivers\etc\block.py", line 17, in <module>
f = open('hosts', 'a') # Open document in append mode
PermissionError: [Errno 13] Permission denied: 'hosts'
When I tried to execute the program with runas as administrator:
RUNAS ERROR: Unable to run - C:\Windows\System32\drivers\etc\block.py
193: C:\Windows\System32\drivers\etc\block.py is not a valid Win32 application.
How do I get the program to have permissions to add sites to hosts?
Thanks for your answers, I found a 'manual' solution:
Run cmd.exe as admin, then move to C:\Windows\System32\drivers\etc and execute block.py.
Command prompt :
C:\Windows\system32>cd drivers
C:\Windows\System32\drivers>cd etc
C:\Windows\System32\drivers\etc>python block.py
Time to block some websites
> Give me the link... https://stackoverflow.com/questions/64506935/run-python-file-as-admin
Protocol : https
https://stackoverflow.com/questions/64506935/run-python-file-as-admin
# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
# 102.54.94.97 rhino.acme.com # source server
# 38.25.63.10 x.acme.com # x client host
# localhost name resolution is handled within DNS itself.
# 127.0.0.1 localhost
# ::1 localhost
127.0.0.1 stackoverflow.com
C:\Windows\System32\drivers\etc>
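For anyone hitting the same PermissionError: besides running the console as admin, another option is to open the hosts file by its full path and have the script check for (or request) elevation itself. The sketch below is not part of the original program and makes assumptions about a standard Windows setup; IsUserAnAdmin and ShellExecuteW are standard Win32 calls reached through ctypes.

import ctypes
import sys

# Full path, so the script does not depend on being run from drivers\etc
HOSTS = r"C:\Windows\System32\drivers\etc\hosts"

if not ctypes.windll.shell32.IsUserAnAdmin():
    # Relaunch the current script through an elevated interpreter (UAC prompt)
    ctypes.windll.shell32.ShellExecuteW(
        None, "runas", sys.executable, " ".join(sys.argv), None, 1)
    sys.exit(0)

# From here on the process has admin rights and can append to hosts
with open(HOSTS, "a") as f:
    f.write("127.0.0.1 example.com\n")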
I have the following Python script that gets values from a txt file and constructs a file with some needed values:
f = open("url-threatview.txt")
data = f.read().strip().split("\n")
f.close()
for line in data:
protocol,null,hostport,*url = line.split("/")
uri = "/".join(url)
host,*port = hostport.split(":")
def makerule(host,port,uri):
return 'alert http any %s -> any any (http.host;content:"%s";http.uri;content:"%s";classtype:bad-url;)' % (port,host,uri)
What I want to do is insert a new field into each alert rule that is created. This field must be a number that is different for every rule generated when the script runs. For example:
alert http any 8080 -> any any (http.host;content:"www.sermifleksiks.com";http.uri;content:"tab_home_active.js";classtype:bad-url;id:20)
alert http any 8080 -> any any (http.host;content:"www.sermifleksiks.com";http.uri;content:"tab_home_active.js";classtype:bad-url;id:21)
I tried to use a count function, but I cannot make that work because I am not a Python expert.
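One possible approach (a sketch built around the makerule function above; the extra rule_id parameter and the starting value 20 are my assumptions): let itertools.count hand out a number that is different for every rule.

from itertools import count

rule_ids = count(start=20)  # hypothetical starting id, pick whatever you need

def makerule(host, port, uri, rule_id):
    # same rule text as above, with an extra id:<number> field appended
    return ('alert http any %s -> any any (http.host;content:"%s";'
            'http.uri;content:"%s";classtype:bad-url;id:%d)'
            % (port, host, uri, rule_id))

# inside the existing loop over lines:
#     print(makerule(host, port, uri, next(rule_ids)))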
I'm trying to extract pertinent information from a large text file (1000+ lines), most of which isn't important:
ID: 67108866 Virtual-system: root, VPN Name: VPN-NAME-XYZ
Local Gateway: 1.1.1.1, Remote Gateway: 2.2.2.2
Traffic Selector Name: TS-1
Local Identity: ipv4(10.10.10.0-10.10.10.255)
Remote Identity: ipv4(10.20.10.0-10.20.10.255)
Version: IKEv2
DF-bit: clear, Copy-Outer-DSCP Disabled, Bind-interface: st0.287
Port: 500, Nego#: 0, Fail#: 0, Def-Del#: 0 Flag: 0x2c608b29
Multi-sa, Configured SAs# 1, Negotiated SAs#: 1
Tunnel events:
From this I need to extract only certain bits; an example of the output would be something like:
VPN Name: VPN-NAME-XYZ, Local Gateway: 1.1.1.1, Remote Gateway: 2.2.2.2
I've tried a couple of different ways to get this; however, my code keeps stopping on the first match. I need the code to match one line, then move on to the following line and match that:
with open('/path/to/vpn.txt', 'r') as file:
    for vpn in file:
        vpn = vpn.strip().lower()
        name = "xyz"
        if name in vpn:
            print(vpn)
            if "1.1.1.1" in vpn:
                print(vpn)
I'm able to print both if I move the 2nd if in line (to the same indentation level as the first):
with open('/path/to/vpn.txt', 'r') as file:
    for vpn in file:
        vpn = vpn.strip().lower()
        name = "xyz"
        if name in vpn:
            print(vpn)
        if "1.1.1.1" in vpn:
            print(vpn)
Is it possible to match clauses on both lines?
I've tried a few different ways with my indents and matches but can't get it. Also, the problem with print(vpn) is that it prints the entire line.
Use regex to match the regions you need and then collect all matches from the entire text. You don't need to do this line by line. An example is below.
import re

found_text = []
with open('/path/to/vpn.txt', 'r') as file:
    file_text = file.read()

# Grab every "VPN Name:", "Local Gateway:" and "Remote Gateway:" line
for found in re.finditer(r"((VPN Name|Local Gateway|Remote Gateway):.*)", file_text):
    # split by comma, if you want it to be split further
    found_text.extend(found.group(0).split(","))

print(found_text)
This will yield an output like
['VPN Name: VPN-NAME-XYZ', 'Local Gateway: 1.1.1.1', ' Remote Gateway: 2.2.2.2']
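If you then want the single summary line from the question instead of a list, a small follow-up sketch using the found_text list from above:

# Re-join the extracted pieces into one line, trimming stray whitespace
summary = ", ".join(part.strip() for part in found_text)
print(summary)
# VPN Name: VPN-NAME-XYZ, Local Gateway: 1.1.1.1, Remote Gateway: 2.2.2.2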
I added a line to the Python code "speedtest.py" that I found at pimylifeup.com. I hoped it would allow me to track the internet provider and IP address along with all the other speed information his code provides, but when I execute it, the code only grabs the next word after the findall call. I would also like it to return the IP address that appears after the provider. I have attached the code below. Can you help me modify it to return what I am looking for?
Here is an example of what is returned by speedtest-cli:
$ speedtest-cli
Retrieving speedtest.net configuration...
Testing from Biglobe (111.111.111.111)...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by GLBB Japan (Naha) [51.24 km]: 118.566 ms
Testing download speed................................................................................
Download: 4.00 Mbit/s
Testing upload speed......................................................................................................
Upload: 13.19 Mbit/s
$
And this is an example of what is being returned by speedtest.py to my .csv file:
Date,Time,Ping,Download (Mbit/s),Upload(Mbit/s),myip
05/30/20,12:47,76.391,12.28,19.43,Biglobe
This is what I want it to return.
Date,Time,Ping,Download (Mbit/s),Upload (Mbit/s),myip
05/30/20,12:31,75.158,14.29,19.54,Biglobe 111.111.111.111
Or maybe:
05/30/20,12:31,75.158,14.29,19.54,Biglobe,111.111.111.111
Here is the code that I am using. And thank you for any help you can provide.
import os
import re
import subprocess
import time

response = subprocess.Popen('/usr/local/bin/speedtest-cli', shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')

ping = re.findall(r'km]:\s(.*?)\s', response, re.MULTILINE)
download = re.findall(r'Download:\s(.*?)\s', response, re.MULTILINE)
upload = re.findall(r'Upload:\s(.*?)\s', response, re.MULTILINE)
myip = re.findall(r'from\s(.*?)\s', response, re.MULTILINE)

ping = ping[0].replace(',', '.')
download = download[0].replace(',', '.')
upload = upload[0].replace(',', '.')
myip = myip[0]

try:
    f = open('/home/pi/speedtest/speedtestz.csv', 'a+')
    if os.stat('/home/pi/speedtest/speedtestz.csv').st_size == 0:
        f.write('Date,Time,Ping,Download (Mbit/s),Upload (Mbit/s),myip\r\n')
except:
    pass

f.write('{},{},{},{},{},{}\r\n'.format(time.strftime('%m/%d/%y'), time.strftime('%H:%M'), ping, download, upload, myip))
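(Side note, a sketch rather than a tested fix: the from\s(.*?)\s pattern stops at the first space, which is why only the provider name is captured. Widening it so it runs up to the trailing "..." of the "Testing from ..." line should also pick up the IP shown in parentheses; the exact pattern is an assumption based on the sample output above.)

# Capture everything between "Testing from " and the trailing "...",
# e.g. "Biglobe (111.111.111.111)"
myip = re.findall(r'Testing from\s+(.*?)\.\.\.', response, re.MULTILINE)
myip = myip[0] if myip else ''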
Let me know if this works for you; it should do everything you're looking for.
#!/usr/bin/env python
import os
import csv
import time
import subprocess
from decimal import Decimal, ROUND_UP

file_path = '/home/pi/speedtest/speedtestz.csv'

def format_speed(bits_string):
    """ changes string bit/s to megabits/s and rounds to two decimal places """
    return (Decimal(bits_string) / 1000000).quantize(Decimal('.01'), rounding=ROUND_UP)

def write_csv(row):
    """ writes a header row if one does not exist and a test result row """
    # straight from the csv man page
    # see: https://docs.python.org/3/library/csv.html
    with open(file_path, 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile, delimiter=',', quotechar='"')
        if os.stat(file_path).st_size == 0:
            writer.writerow(['Date', 'Time', 'Ping', 'Download (Mbit/s)', 'Upload (Mbit/s)', 'myip'])
        writer.writerow(row)

response = subprocess.run(['/usr/local/bin/speedtest-cli', '--csv'], capture_output=True, encoding='utf-8')

# if speedtest-cli exited with no errors / ran successfully
if response.returncode == 0:
    # from the csv man page:
    # "And while the module doesn't directly support parsing strings, it can easily be done"
    # this will remove quotes and spaces vs doing a string split on ','
    # csv.reader returns an iterator, so we turn that into a list
    cols = list(csv.reader([response.stdout]))[0]
    # turns 13.45 ping into 13
    ping = Decimal(cols[5]).quantize(Decimal('1.'))
    # speedtest-cli --csv returns speed in bits/s, convert to Mbit/s
    download = format_speed(cols[6])
    upload = format_speed(cols[7])
    ip = cols[9]
    date = time.strftime('%m/%d/%y')
    timestamp = time.strftime('%H:%M')
    write_csv([date, timestamp, ping, download, upload, ip])
else:
    print('speedtest-cli returned error: %s' % response.stderr)
$ /usr/local/bin/speedtest-cli --csv-header > speedtestz.csv
$ /usr/local/bin/speedtest-cli --csv >> speedtestz.csv
output:
Server ID,Sponsor,Server Name,Timestamp,Distance,Ping,Download,Upload,Share,IP Address
Does that not get you what you're looking for? Run the first command once to create the csv with a header row. Then subsequent runs are done with the append '>>' operator, and that will add a test result row each time you run it.
Doing all of those regexes will bite you if the tool, or a library it depends on, decides to change its output format.
Plenty of ways to do it though. Hope this helps
I have a config file which contains network configuration, something like what is given below.
LISTEN=192.168.180.1 #the network which listen the traffic
NETMASK=255.255.0.0
DOMAIN =test.com
I need to grep the values from the config. The following is my current code:
import re

with open('config.txt') as f:
    data = f.read()

listen = re.findall('LISTEN=(.*)', data)
print listen
The variable listen contains:
192.168.180.1 #the network which listen the traffic
but I don't need the commented information; on the other hand, sometimes a comment may not exist at all, as with "NETMASK".
If you really want to do this using regular expressions, I would suggest changing the pattern to LISTEN=([^#$]+), which should match anything up to the pound sign that opens the comment.
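A quick sketch of that pattern applied to the sample line from the question:

import re

data = "LISTEN=192.168.180.1 #the network which listen the traffic"
listen = re.findall(r'LISTEN=([^#$]+)', data)
print(listen[0].strip())  # 192.168.180.1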
I came up with a solution which uses a common regex and then strips the "#" part.
import re

data = '''
LISTEN=192.168.180.1 #the network which listen the traffic
NETMASK=255.255.0.0
DOMAIN =test.com
'''

# Common regex to get all values
match = re.findall(r'.*=(.*)#*', data)
print "Total match found"
print match

# Remove # part if any
for index, val in enumerate(match):
    if "#" in val:
        val = (val.split("#")[0]).strip()
        match[index] = val

print "Match after removing #"
print match
Output :
Total match found
['192.168.180.1 #the network which listen the traffic', '255.255.0.0', 'test.com']
Match after removing #
['192.168.180.1', '255.255.0.0', 'test.com']
data = """LISTEN=192.168.180.1 #the network which listen the traffic"""
import re
print(re.search(r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}', data).group())
>>>192.168.180.1
print(re.search(r'[0-9]+(?:\.[0-9]+){3}', data).group())
>>>192.168.180.1
In my experience regex is slow at runtime and not very readable. I would do:
with open('config.txt') as f:
    for line in f:
        if not line.startswith("LISTEN="):
            continue
        rest = line.split("=", 1)[1]
        nocomment = rest.split("#", 1)[0]
        print nocomment
I think the better approach is to read the whole file in the format it is given in. I wrote a couple of tutorials, e.g. for YAML, CSV, and JSON.
It looks as if this is an INI file.
Example Code
Example INI file
INI files need a section header. I assume it is [network]:
[network]
LISTEN=192.168.180.1 #the network which listen the traffic
NETMASK=255.255.0.0
DOMAIN =test.com
Python 2
#!/usr/bin/env python
import ConfigParser
import io

# Load the configuration file
with open("config.ini") as f:
    sample_config = f.read()
config = ConfigParser.RawConfigParser(allow_no_value=True)
config.readfp(io.BytesIO(sample_config))

# List all contents
print("List all contents")
for section in config.sections():
    print("Section: %s" % section)
    for options in config.options(section):
        print("x %s:::%s:::%s" % (options,
                                  config.get(section, options),
                                  str(type(options))))

# Print some contents
print("\nPrint some contents")
print(config.get('network', 'LISTEN'))  # Just get the value
Python 3
Look at configparser:
#!/usr/bin/env python
import configparser

# Load the configuration file
config = configparser.RawConfigParser(allow_no_value=True)
with open("config.ini") as f:
    config.read_file(f)

# Print some contents
print(config.get('network', 'LISTEN'))
gives:
192.168.180.1 #the network which listen the traffic
Hence you need to parse that value further yourself, since the INI parser does not strip #-comments by default.
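As a side note (worth verifying against your Python version, since it is not part of the answer above): Python 3's configparser can strip inline # comments itself via the inline_comment_prefixes argument, which avoids the extra parsing step:

import configparser

# inline_comment_prefixes makes the parser drop everything after '#' in a value
config = configparser.ConfigParser(allow_no_value=True,
                                   inline_comment_prefixes=('#',))
with open("config.ini") as f:
    config.read_file(f)

print(config.get('network', 'LISTEN'))  # 192.168.180.1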
I'm trying to retrieve IDs for a specific set of keywords from PubMed using the following standard code:
import os
from Bio import Entrez
from Bio import Medline

# Defining keyword file
keywords_file = r"D:\keywords.txt"

# Splitting keyword file into a list
keyword_list = []
keyword_list = open(keywords_file).read().split(',')
#print keyword_list

# Creating folders by keywords and creating a text file of the same keyword in each folder
for item in keyword_list:
    create_file = item + '.txt'
    path = r"D:\Thesis" + '\\' + item
    #print path
    if not os.path.exists(path):
        os.makedirs(path)
    #print os.getcwd()
    os.chdir(path)
    f = open(item + '.txt', 'a')
    f.close()

# Using biopython to fetch ids of the keyword searches
limit = 10

def fetch_ids(keyword, limit):
    for item in keyword:
        print item
        print "Fetching search for " + item + "\n"
        #os.environ['http_proxy'] = '127.0.0.1:13828'
        Entrez.email = 'A.N.Other@example.com'
        term = '"' + item + '"'
        search = Entrez.esearch(db='pubmed', retmax=limit, term=term)
        print term
        result = Entrez.read(search)
        ids = result['IdList']
        #print len(ids)
        return ids

print fetch_ids(keyword_list, limit)
id_res = fetch_ids(keyword_list, limit)
print id_res

def write_ids_in_file(id_res):
    with open(item + '.txt', 'w') as temp_file:
        temp_file.write('\n'.join(id_res))
        temp_file.close()

write_ids_in_file(id_res)
In a nutshell, what I'm trying to do is create folders with the same name as each of the keywords, create a text file within each folder, fetch the IDs from PubMed through the code, and save the IDs in the text files. My program worked fine when I initially tested it; however, after a couple of tries it started throwing the "target machine actively refused connection" error. Some more details that could be useful:
header information
Host = 'eutils.ncbi.nlm.nih.gov'
Connection = 'close'
User-Agent = 'Python-urllib/2.7'
URL
http://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?term=%22inflammasome%22&retmax=10&db=pubmed&tool=biopython&email=A.N.Other%40example.com
host = '127.0.0.1:13828'
I know that this question has been asked many times, with the response that the port is not listening. What I want to know is: if this is my issue as well, how do I get the application to work on this specific port? I've already gone to my firewall settings and opened port 13828, but I'm not sure what to do beyond this. If this is not the case, what could potentially be a workaround?
Thanks!
You need to call search.close() after result = Entrez.read(search). Check the official documentation here: http://biopython.org/DIST/docs/api/Bio.Entrez-module.html
Shutting down a port or TCP connection because of too many open connections is normal behavior for a public website.
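A minimal sketch of how the suggested change could look, applied to the loop from the question (variable names reused from there; collecting the results per keyword and moving the return out of the loop are my assumptions about the intent):

def fetch_ids(keyword, limit):
    ids_per_keyword = {}
    for item in keyword:
        Entrez.email = 'A.N.Other@example.com'
        search = Entrez.esearch(db='pubmed', retmax=limit, term='"' + item + '"')
        result = Entrez.read(search)
        # release the connection before the next request
        search.close()
        ids_per_keyword[item] = result['IdList']
    return ids_per_keyword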