I'm trying to use pyspeedtest to get the upload/download speed of my connection, but I keep getting the following error, which I couldn't resolve:
import pyspeedtest
st = pyspeedtest.SpeedTest()
st.download()
Exception: Cannot find a test server
Any suggestions/insights would be welcome!
It actually does work if you change the URL in the pyspeedtest.py file from www.speedtest.net to c.speedtest.net on line 186 in v1.2.7 of the script.
Edit: added an example of how to get it to work
You can edit the pyspeedtest.py script (located at /usr/local/lib/python2.7/dist-packages/pyspeedtest.py on my Raspberry Pi 3) using vi, e.g.:
sudo vi /usr/local/lib/python2.7/dist-packages/pyspeedtest.py
Go to line 186 and change the following line:
connection = self.connect('www.speedtest.net')
to:
connection = self.connect('c.speedtest.net')
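If you'd rather not edit the file by hand, the same one-line change can be scripted. This is just a sketch: the path is the one from my Pi above, so adjust it to your installation, back up the file first, and run it with sudo since the file is root-owned.

# Rewrites the hardcoded speedtest host in pyspeedtest.py (v1.2.7).
# The path below is an assumption based on my install; adjust as needed.
path = "/usr/local/lib/python2.7/dist-packages/pyspeedtest.py"

with open(path) as f:
    src = f.read()

patched = src.replace("self.connect('www.speedtest.net')",
                      "self.connect('c.speedtest.net')")

with open(path, "w") as f:
    f.write(patched)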
Then run pyspeedtest using the wrapper in /usr/local/bin:
/usr/local/bin/pyspeedtest
Using server: speedtest.wilkes.net
Ping: 41 ms
Download speed: 46.06 Mbps
Upload speed: 11.58 Mbps
Or use the Python interpreter:
>>> import pyspeedtest
>>> st = pyspeedtest.SpeedTest()
>>> st.ping()
41.70024394989014
>>> st.download()
44821310.72337018
>>> st.upload()
14433296.732646577
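Note that download() and upload() return raw values (they appear to be bits per second), while the command-line wrapper formats them for you. A small helper (my own, not part of pyspeedtest) makes the interpreter output comparable:

# Convert the raw interpreter values to Mbps to match the wrapper's output.
# (The helper is mine, not part of the pyspeedtest API.)
def to_mbps(bits_per_second):
    return bits_per_second / 1000000.0

print("Download: %.2f Mbps" % to_mbps(44821310.72337018))  # Download: 44.82 Mbps
print("Upload: %.2f Mbps" % to_mbps(14433296.732646577))   # Upload: 14.43 Mbps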
The project hasn't been updated since mid-2016, and the last commit was "updated user-agent to prevent SpeedTest block"... And if you skim the code, there are [comments like this](https://github.com/fopina/pyspeedtest/blob/master/pyspeedtest.py#L188):
# really contribute to speedtest.net OS statistics
# maybe they won't block us again...
And there have been bugs posted to GitHub about the project not working, with no response.
So, my guess is: This project probably violates SpeedTest.net's terms of service, so they blocked it. The author tried to get around the block, they blocked it again, and the author gave up. In the intervening two years, any other servers it used as backups either blocked it, or shut down (e.g., speedtest.serv.pt, mentioned in the docs, no longer exists).
There is a pull request from another user that might fix it, although it appears to be failing the CI tests; if you want, you can try applying it yourself.
But otherwise, you can't use this library, and there's no way anyone can help you use it; it just doesn't work. You'll have to find another way to do the same thing.
I'm trying to sniff a network adapter (TP-LINK, 0bda:b711) with Python 3, but I get an OSError: Could not activate the pcap handler
from scapy.all import *
from scapy.config import conf
from scapy.layers.dot11 import Dot11

conf.use_pcap = True

def callBack(pkg):
    if pkg.haslayer(Dot11):
        if pkg.type == 0 and pkg.subtype == 8:  # management frame, subtype beacon
            print("dBm_AntSignal=", pkg.dBm_AntSignal)
            print("dBm_AntNoise=", pkg.dBm_AntNoise)

sniff(iface='wlp1s1', monitor=True, prn=callBack)
I think there is something wrong with libpcap. I want to get dBm_AntSignal and dBm_AntNoise from sniff; the code runs on a MacBook according to other people (you can browse my last question). Can somebody solve this issue?
If you posted issue #1136 on the libpcap issues list, then you presumably somehow managed to determine that pcap_activate() returned PCAP_ERROR. If you did that by modifying the Scapy code, try modifying it further to, if pcap_activate() returns PCAP_ERROR, report the result of pcap_geterr(), in order to try to find out why, in this particular instance, pcap_activate() returned PCAP_ERROR. The problem is that PCAP_ERROR can be returned for a number of different reasons, and it's difficult if not impossible to guess which one it was.
(And then file an issue on Scapy's issue list indicating that the error message for pcap_activate() failing should be based on both the return value from pcap_activate() and, for certain errors, the result of pcap_geterr(). They should also distinguish between error returns from pcap_activate(), which are negative numbers, and warning returns from pcap_activate(), which indicate that the "pcap handler" could be activated, but something unexpected happened, and are positive numbers.)
Update:
No need to file the Scapy issue; I've already submitted a pull request for the change to fix the error reporting, and it's been merged. Apply the changes from that pull request to Scapy and try again.
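In the meantime, if you want to see libpcap's own error string directly, here is a minimal sketch that performs the same activation steps against libpcap via ctypes. It is illustrative only, not Scapy's actual code path; the interface name is the one from the question.

# Reproduce the pcap_create/pcap_activate sequence and report pcap_geterr().
import ctypes, ctypes.util

libpcap = ctypes.CDLL(ctypes.util.find_library("pcap"))
libpcap.pcap_create.restype = ctypes.c_void_p
libpcap.pcap_create.argtypes = [ctypes.c_char_p, ctypes.c_char_p]
libpcap.pcap_set_rfmon.argtypes = [ctypes.c_void_p, ctypes.c_int]
libpcap.pcap_activate.argtypes = [ctypes.c_void_p]
libpcap.pcap_geterr.restype = ctypes.c_char_p
libpcap.pcap_geterr.argtypes = [ctypes.c_void_p]

errbuf = ctypes.create_string_buffer(256)        # PCAP_ERRBUF_SIZE
handle = libpcap.pcap_create(b"wlp1s1", errbuf)  # interface from the question
if not handle:
    raise OSError(errbuf.value.decode())

libpcap.pcap_set_rfmon(handle, 1)                # monitor mode, as sniff(monitor=True) requests
rc = libpcap.pcap_activate(handle)
if rc < 0:                                       # negative returns are errors
    print("pcap_activate error:", libpcap.pcap_geterr(handle).decode())
elif rc > 0:                                     # positive returns are warnings
    print("activated with warning code:", rc)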
I am trying to change the user of a print job in the queue, as I want to create it on a service account but send the job to another user's follow-me printing queue. I'm using the win32print module from pywin32. Here is an example of my code:
from win32 import win32print

JOB_INFO_LEVEL = 2

pclExample = open("sample.pcl", "rb")  # open in binary mode for RAW data

printer_name = win32print.GetDefaultPrinter()
hPrinter = win32print.OpenPrinter(printer_name)
try:
    jobID = win32print.StartDocPrinter(hPrinter, 1, ("PCL Data test", None, "RAW"))
    # Here we try to change the user by extracting the job and then setting it again
    jobInfoDict = win32print.GetJob(hPrinter, jobID, JOB_INFO_LEVEL)
    jobInfoDict["pUserName"] = "exampleUser"
    win32print.SetJob(hPrinter, jobID, JOB_INFO_LEVEL, jobInfoDict, win32print.JOB_CONTROL_RESUME)
    try:
        win32print.StartPagePrinter(hPrinter)
        win32print.WritePrinter(hPrinter, pclExample.read())  # WritePrinter expects bytes, not a file object
        win32print.EndPagePrinter(hPrinter)
    finally:
        win32print.EndDocPrinter(hPrinter)
finally:
    win32print.ClosePrinter(hPrinter)
The problem is I get an error at the win32print.SetJob() line. If JOB_INFO_LEVEL is set to 1, then I get the following error:
(1804, 'SetJob', 'The specified datatype is invalid.')
This is a known bug in how the underlying C++ code handles the call (Issue here).
If JOB_INFO_LEVEL is set to 2, then I get the following error:
(1798, 'SetJob', 'The print processor is unknown.')
However, this is the processor that came from win32print.GetJob(). Without trying to change the user this prints fine, so I'm not sure what is wrong.
Any help would be hugely appreciated! :)
EDIT:
Using Python 3.8.5 and Pywin32 303
At the beginning I thought it was a misunderstanding (I was also a bit skeptical about the bug report), mainly because of the following paragraph (which apparently turns out to be wrong) from [MS.Docs]: SetJob function (emphasis is mine):
The following members of a JOB_INFO_1, JOB_INFO_2, or JOB_INFO_4 structure are ignored on a call to SetJob: JobId, pPrinterName, pMachineName, pUserName, pDrivername, Size, Submitted, Time, and TotalPages.
But I did some tests and ran into the problem. The problem is as described in the bug: filling JOB_INFO_* string members (which are LPTSTRs) with char* data.
Submitted [GitHub]: mhammond/pywin32 - Fix: win32print.SetJob sending ANSI to UNICODE API (with the patch applied, neither of the two errors pops up). It was merged into main on 2022-03-31.
When testing the fix, I was able to change various properties of an existing job. I was amazed that the data didn't even have to be valid (as below); I'm a bit curious what would happen when the job is executed (right now I don't have a connection to a printer).
I changed pUserName to str(random.randint(0, 10000)) to make sure it changes on each script run (screenshots omitted).
Ways to go further:
- Wait for a new PyWin32 version (containing this fix) to be released. This is the recommended approach, but it will also take more time (and it's unclear when it will happen). A quick version check is sketched after this list.
- Get the sources, either:
  - from main, or
  - from b303 (the last stable branch), and apply the (above) patch (1)

  then build the module (.pyd) and copy it into PyWin32's site-packages directory on your Python installation(s). Faster, but it requires some deeper knowledge, and maintenance might become a nightmare.
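For the first option, here is a quick way to check whether the PyWin32 build you have installed already contains the fix. This is a sketch: builds newer than 303 should include the merged patch, and importlib.metadata requires Python 3.8+, which matches the setup in the question.

# Check the installed pywin32 build; the fix was merged after build 303,
# so anything newer should contain it.
import importlib.metadata

print(importlib.metadata.version("pywin32"))  # e.g. '303'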
Footnotes
#1: Check [SO]: Run / Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition? (#CristiFati's answer) (Patching UTRunner section) for how to apply patches (on Windows).
I'm trying to download a Bing Ads report using the Python SDK, but I keep getting an error that says: "Type not found: 'Aggregation'" after submitting a report request. I've tried all 4 options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
The authentication process prior to the request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)
reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # set this True if you want to overwrite the same file
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)
output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After careful debugging, it seems that the program fails when trying to execute a command within reporting_service_manager.py. Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the _ServiceCall.__init__ function within service_client.py, returning the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light? Thanks.
Please be sure to set Aggregation, e.g., as shown here:
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
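For reference, here is roughly how the Microsoft example sets it. This is sketched from the linked sample code: the report type below is just an illustration, and reporting_service is the ServiceClient instance from that example.

# Sketch based on the linked Microsoft example; 'CampaignPerformanceReportRequest'
# is illustrative -- use whichever report type you actually request.
report_request = reporting_service.factory.create('CampaignPerformanceReportRequest')
report_request.Aggregation = 'Daily'  # the missing piece behind "Type not found: 'Aggregation'"
report_request.Format = 'Csv'
report_request.ReportName = 'My Campaign Performance Report'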
Does this help?
This may be a bit late, 2 months after the fact, but maybe it will help someone else. I had the same error (though I suppose it may not be the same issue). It looks like you did what I did (and I'm sure others will as well): copy-paste the Microsoft example code and try to run it, only to find that it doesn't work.
I spent quite some time trying to debug the issue, and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time, so I tried suds-community, and everything just worked after that.
I also re-read the Bing Ads API getting-started walkthrough and found that they recommend suds-jurko instead.
Long story short: if you want to use the bingads API, don't use suds-py3. Use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (which is the one recommended by Microsoft).
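If you're not sure which suds fork an environment ended up with, a quick sanity check is sketched below. All three forks import as the suds package, so the distribution name is what matters (importlib.metadata requires Python 3.8+).

# Report which of the suds forks is installed (names are PyPI package names).
import importlib.metadata

for dist in ("suds-py3", "suds-community", "suds-jurko"):
    try:
        print(dist, importlib.metadata.version(dist))
    except importlib.metadata.PackageNotFoundError:
        pass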
I have the following case:
- Two servers, each running its own name server (NS)
- Each of these servers registers the same object under a different URI; the URI includes the local server's hostname to make it unique
- A third server, the client, tries to target the right server to query information available only on that server
Please note that all of these 3 servers can communicate with each other.
My questions and issues:
- The requests from the third server always go to the first server, no matter what, except when I shut down the first NS. Is there something definitely wrong with what I'm doing? I guess there is, but I can't figure it out...
- Is running separate name servers the root cause? What would be the alternative if this is not allowed? I run multiple name servers for redundancy, as some other upcoming operations can run on either of the first two servers. When I list the content of each name server (locally on each server), I get the right registration (which includes the hostname).
- Is the use of the Pyro4.config.NS_HOST parameter wrong (see its usage in the code below)? What would be the alternative?
My configuration:
- Pyro 4-4.63-1.1
- Python 2.7.13
- Linux OpenSuse (kernel version 4.4.92)
The test code is listed below. I omitted details like try blocks, imports, etc.
My server skeleton code (which runs on the first two servers):
daemon = Pyro4.Daemon(local_ip_address)  # bind the daemon to this server's IP
ns = Pyro4.locateNS()
uri = daemon.register(TestObject())
ns.register("test.object#%s" % socket.gethostname(), uri)  # hostname keeps the name unique
daemon.requestLoop()
The local_ip_address is the one supplied by the user (see below) to contact the correct name server on the correct server.
The name server is started on each of the first two servers as follows:
python -m Pyro4.naming -n local_ip_address
The local_ip_address is the same as above.
My client skeleton code (which runs on the third server):
target_server_hostname = user_provided_hostname
target_server_ip = user_provided_ip
Pyro4.config.NS_HOST = target_server_ip
uri = "test.object#%s" % target_server_hostname
proxy = Pyro4.Proxy(uri)
proxy._pyroTimeout = my_timeout
proxy._pyroMaxRetries = my_retries
rc, reason = proxy.testAction(target_server_hostname)
if rc != 0:
    print reason
else:
    print "Hostname matches"
If more information is required, please let me know.
Djurdjura.
I think I figured it out. Hope this will be useful to anyone else looking at a similar use case.
You just need to specify where to look for the name server itself. The client code becomes something like the following:
target_server_hostname = user_provided_hostname
target_server_ip = user_provided_ip
# The following statement finds the correct name server
ns = Pyro4.locateNS(host=target_server_ip)
name = "test.object#%s" % target_server_hostname
uri = ns.lookup(name)
proxy = Pyro4.Proxy(uri)
proxy._pyroTimeout = my_timeout
proxy._pyroMaxRetries = my_retries
rc, reason = proxy.testAction(target_server_hostname)
if rc != 0:
    print reason
else:
    print "Hostname matches"
In my case, I guess the alternative would be using a single common name server running on... the third server (the client server). This server is always on and ready before the other ones. I didn't try this approach yet.
Regards.
D.
PS. Thanks Irmen for your answer.
I'm slowly building a web browser in PyQt4 and I like the speed I'm getting out of it. However, I want to combine easylist.txt with it. I believe Adblock uses this list to block HTTP requests made by the browser.
How would you go about it using python/PyQt4?
[edit1] OK, I think I've set up Privoxy. I haven't set up any additional filters and it seems to work. The PyQt4 code I've tried looks like this:
self.proxyIP = "127.0.0.1"
self.proxyPORT= 8118
proxy = QNetworkProxy()
proxy.setType(QNetworkProxy.HttpProxy)
proxy.setHostName(self.proxyIP)
proxy.setPort(self.proxyPORT)
QNetworkProxy.setApplicationProxy(proxy)
However, this does absolutely nothing; I cannot make sense of the docs and cannot find any examples.
[edit2] I've just noticed that if I change self.proxyIP to my actual local IP rather than 127.0.0.1, the page doesn't load. So something is happening.
I know this is an old question, but I thought I'd try giving an answer for anyone who happens to stumble upon it. You could create a subclass of QNetworkAccessManager and combine it with https://github.com/atereshkin/abpy. Something kind of like this:
from PyQt4.QtCore import QUrl
from PyQt4.QtNetwork import QNetworkAccessManager, QNetworkRequest
from abpy import Filter

adblockFilter = Filter(open("easylist.txt"))

class MyNetworkAccessManager(QNetworkAccessManager):
    def createRequest(self, op, request, device=None):
        url = request.url().toString()
        doFilter = adblockFilter.match(url)
        if doFilter:
            # Swallow blocked requests by fetching an empty URL instead
            return QNetworkAccessManager.createRequest(self, self.GetOperation, QNetworkRequest(QUrl()))
        else:
            return QNetworkAccessManager.createRequest(self, op, request, device)

myNetworkAccessManager = MyNetworkAccessManager()
After that, set the following on all your QWebView instances, or make a subclass of QWebView:
QWebView.page().setNetworkAccessManager(myNetworkAccessManager)
Hope this helps!
Is this question about web filtering?
Then try using an external web proxy, for example Privoxy (http://en.wikipedia.org/wiki/Privoxy).
The easylist.txt file is simply plain text, as demonstrated here: http://adblockplus.mozdev.org/easylist/easylist.txt
Lines beginning with [ and also ! appear to be comments, so it is simply a case of reading through the file and searching for the matching patterns in the URL/request, depending on the starting character of each line; a naive sketch of that idea follows.
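Here is a minimal sketch of that approach. It only handles plain substring rules; real Adblock syntax has many more rule types (||, ##, $options, ...), so treat this as a starting point rather than a full matcher.

# Naive easylist matcher: skip header/comment lines ([ and !) and treat
# every remaining line as a plain substring pattern.
def load_rules(path="easylist.txt"):
    rules = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line[0] in "[!":
                continue
            rules.append(line)
    return rules

def should_block(url, rules):
    return any(rule in url for rule in rules)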
Privoxy is solid. If you want something completely API-based though, check out the BrightCloud web filtering API as well.