I'm making a simple update-checker function that runs every time before the bulk of my code executes. It notifies the user when a new version is available for download.
Here's an MWE:
import urllib

def updater(__version__):
    try:
        # Get latest version number from master repo at Github.
        f = urllib.urlopen("https://raw.githubusercontent.com/chrisglass"
                           "/django_polymorphic/master/polymorphic/__version__.py")
        s = f.read().split('"')
        if s[-2] != __version__:
            print "New version {} is available.".format(s[-2])
    except:
        pass

# Call function to check if new version is available.
__version__ = '0.1'
updater(__version__)
(that repo isn't mine; I'm using it in this example because my project uses a similar __version__.py file)
This works fine, but I'm concerned that GitHub could eventually take too long to respond, which would hold back the execution of the rest of the code.
Is there a way to jump out of the try block after, say, 5 seconds have elapsed? Is this the recommended way to go about this?
Use urllib2; its urlopen has a timeout parameter.
urllib2.urlopen(url[, data[, timeout[, cafile[, capath[, cadefault[, context]]]]]])
https://docs.python.org/2/library/urllib2.html
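For concreteness, here is a minimal sketch of the updater from the question with a 5-second timeout (assuming Python 2, to match the original code):

import urllib2

def updater(__version__):
    try:
        # Get latest version number from master repo at Github,
        # giving up after 5 seconds instead of blocking indefinitely.
        f = urllib2.urlopen(
            "https://raw.githubusercontent.com/chrisglass"
            "/django_polymorphic/master/polymorphic/__version__.py",
            timeout=5)
        s = f.read().split('"')
        if s[-2] != __version__:
            print "New version {} is available.".format(s[-2])
    except Exception:
        # A timed-out request raises an exception, so the bare pass
        # means a slow GitHub response simply skips the check.
        pass

A timeout raises urllib2.URLError (or socket.timeout), which the except clause swallows, so execution continues normally.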
I am trying to change the user of a print job in the queue, as I want to create it on a service account but send the job to another user's follow-me printing queue. I'm using the win32 module in python. Here is an example of my code:
from win32 import win32print

JOB_INFO_LEVEL = 2

pclExample = open("sample.pcl", "rb")  # open in binary mode so raw PCL bytes are sent

printer_name = win32print.GetDefaultPrinter()
hPrinter = win32print.OpenPrinter(printer_name)
try:
    jobID = win32print.StartDocPrinter(hPrinter, 1, ("PCL Data test", None, "RAW"))
    # Here we try to change the user by extracting the job and then setting it again
    jobInfoDict = win32print.GetJob(hPrinter, jobID, JOB_INFO_LEVEL)
    jobInfoDict["pUserName"] = "exampleUser"
    win32print.SetJob(hPrinter, jobID, JOB_INFO_LEVEL, jobInfoDict, win32print.JOB_CONTROL_RESUME)
    try:
        win32print.StartPagePrinter(hPrinter)
        win32print.WritePrinter(hPrinter, pclExample.read())  # WritePrinter expects the data, not the file object
        win32print.EndPagePrinter(hPrinter)
    finally:
        win32print.EndDocPrinter(hPrinter)
finally:
    win32print.ClosePrinter(hPrinter)
The problem is I get an error at the win32print.SetJob() line. If JOB_INFO_LEVEL is set to 1, then I get the following error:
(1804, 'SetJob', 'The specified datatype is invalid.')
This is a known bug related to how the underlying C++ code works (Issue here).
If JOB_INFO_LEVEL is set to 2, then I get the following error:
(1798, 'SetJob', 'The print processor is unknown.')
However, this is the processor that came from win32print.GetJob(). Without trying to change the user this prints fine, so I'm not sure what is wrong.
Any help would be hugely appreciated! :)
EDIT:
Using Python 3.8.5 and Pywin32 303
At first I thought it was a misunderstanding (I was also a bit skeptical about the bug report), mainly because of the following paragraph (which apparently is wrong) from [MS.Docs]: SetJob function (emphasis is mine):
The following members of a JOB_INFO_1, JOB_INFO_2, or JOB_INFO_4 structure are ignored on a call to SetJob: JobId, pPrinterName, pMachineName, pUserName, pDrivername, Size, Submitted, Time, and TotalPages.
But I did some tests and ran into the problem. The problem is as described in the bug: filling JOB_INFO_* string members (which are LPTSTRs) with char* data.
I submitted [GitHub]: mhammond/pywin32 - Fix: win32print.SetJob sending ANSI to UNICODE API (with the fix applied, neither of the two errors pops up). It was merged into main on 220331.
When testing the fix, I was able to change various properties of an existing job. I was amazed that the data didn't even have to be valid (like below); I'm a bit curious what would happen when the job is executed (as I currently don't have a connection to a printer):
Change pUserName to str(random.randint(0, 10000)) to make sure it changes on each script run (PrintScreens taken separately and assembled in Paint):
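A rough sketch of that kind of test (this is a reconstruction, not the exact script from the PrintScreens; it assumes a pywin32 build containing the fix and at least one job sitting in the default printer's queue):

import random
from win32 import win32print

hPrinter = win32print.OpenPrinter(win32print.GetDefaultPrinter())
try:
    jobs = win32print.EnumJobs(hPrinter, 0, -1, 1)  # level-1 info for all queued jobs
    if jobs:
        job_id = jobs[0]["JobId"]
        info = win32print.GetJob(hPrinter, job_id, 2)
        info["pUserName"] = str(random.randint(0, 10000))  # deliberately arbitrary value
        win32print.SetJob(hPrinter, job_id, 2, info, win32print.JOB_CONTROL_RESUME)
finally:
    win32print.ClosePrinter(hPrinter)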
Ways to go further:

- Wait for a new PyWin32 version (containing this fix) to be released. This is the recommended approach, but it will also take more time (and it's unclear when it will happen)
- Get the sources, either:
  - from main
  - from b303 (last stable branch), and apply the (above) patch (#1)

  Then build the module (.pyd) and copy it into PyWin32's site-packages directory on your Python installation(s). Faster, but it requires some deeper knowledge, and maintenance might become a nightmare
Footnotes
#1: Check [SO]: Run / Debug a Django application's UnitTests from the mouse right click context menu in PyCharm Community Edition? (#CristiFati's answer) (Patching UTRunner section) for how to apply patches (on Win).
I'm trying to download a Bing Ads report using the Python SDK, but I keep getting an error that says "Type not found: 'Aggregation'" after submitting a report request. I've tried all 4 options mentioned in the following link:
https://github.com/BingAds/BingAds-Python-SDK/blob/master/examples/v13/report_requests.py
The authentication process prior to the request works just fine.
I execute the following:
report_request = get_report_request(authorization_data.account_id)
reporting_download_parameters = ReportingDownloadParameters(
    report_request=report_request,
    result_file_directory=FILE_DIRECTORY,
    result_file_name=RESULT_FILE_NAME,
    overwrite_result_file=True,  # Set this value true if you want to overwrite the same file.
    timeout_in_milliseconds=TIMEOUT_IN_MILLISECONDS
)
output_status_message("-----\nAwaiting download_report...")
download_report(reporting_download_parameters)
After some careful debugging, it seems that the program fails when trying to execute a command within "reporting_service_manager.py". Here is the workflow:
def download_report(self, download_parameters):
    report_file_path = self.download_file(download_parameters)
then:
def download_file(self, download_parameters):
    operation = self.submit_download(download_parameters.report_request)
then:
def submit_download(self, report_request):
    self.normalize_request(report_request)
    response = self.service_client.SubmitGenerateReport(report_request)
SubmitGenerateReport starts a sequence of events ending with a call to the "_ServiceCall.init" function within "service_client.py", returning the exception "Type not found: 'Aggregation'":
try:
    response = self.service_client.soap_client.service.__getattr__(self.name)(*args, **kwargs)
    return response
except Exception as ex:
    if need_to_refresh_token is False \
            and self.service_client.refresh_oauth_tokens_automatically \
            and self.service_client._is_expired_token_exception(ex):
        need_to_refresh_token = True
    else:
        raise ex
Can anyone shed some light?
Thanks
Please be sure to set Aggregation, e.g., as shown here.
aggregation = 'Daily'
If the report type does not use aggregation, you can set Aggregation=None.
Does this help?
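In case it helps to see it in context, here is a minimal sketch of where that assignment goes (assuming the reporting_service factory pattern used in the SDK examples; CampaignPerformanceReportRequest is just one report type, substitute yours):

report_request = reporting_service.factory.create('CampaignPerformanceReportRequest')
report_request.Aggregation = 'Daily'  # the field the error complains about; use None for report types without aggregation
report_request.Format = 'Csv'
report_request.ReportName = 'My Campaign Performance Report'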
This may be a bit late, 2 months after the fact, but maybe it will help someone else. I had the same error (though I suppose it may not be the same issue). It does look like you did what I did (and I'm sure others will as well): copy-pasted the Microsoft example code and tried to run it, only to find that it didn't work.
I spent quite some time trying to debug the issue and it looked to me like the XML wasn't being searched correctly. I was using suds-py3 for the script at the time so I tried suds-community and everything just worked after that.
I also re-read the Bing Ads API walkthrough for getting started again and found that they recommend suds-jurko instead.
Long story short: if you want to use the bingads API, don't use suds-py3; use either suds-community (which I can confirm works for everything I've used the API for) or suds-jurko (which is the one recommended by Microsoft).
Has anyone used the python wrapper for the Five9 API?
I'm trying to run one of our custom reports through the API but am unsure how to pass the time criteria.
If the report is already built:
Is there a way to run it without passing criteria?
It seems like this should be enough (since the report is already built):
client.configuration.runReport(folderName='Shared Reports', reportName='Test report')
But it didn't work as expected.
What can I do to solve this?
This works for me. You can probably get fancier with a while loop checking if the report is done (a sketch of that is below the code), but this isn't time-sensitive.
from five9 import Five9
import time
client = Five9('username','password')
start = '2019-08-01T00:00:00.000'
end = '2019-08-29T00:00:00.000'
criteria = {'time':{'end':end, 'start':start}}
identifier = client.configuration.runReport(folderName='SharedReports',reportName='reportname',criteria=criteria)
time.sleep(15)
get_results = client.configuration.getReportResult(identifier)
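If you do want to poll instead of sleeping a fixed 15 seconds, something like the following should work. Note that isReportRunning is an assumption on my part that your version of the five9 wrapper exposes that Five9 SOAP method; check before relying on it:

from five9 import Five9
import time

client = Five9('username', 'password')

criteria = {'time': {'start': '2019-08-01T00:00:00.000',
                     'end': '2019-08-29T00:00:00.000'}}
identifier = client.configuration.runReport(
    folderName='SharedReports', reportName='reportname', criteria=criteria)

# Poll until the report has finished instead of guessing a wait time.
while client.configuration.isReportRunning(identifier):
    time.sleep(5)
get_results = client.configuration.getReportResult(identifier)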
I'm trying to use pyspeedtest to get the upload/download speed of my connection, but I keep getting the following error, which I couldn't resolve:
import pyspeedtest
st = pyspeedtest.SpeedTest()
st.download()
Exception: Cannot find a test server
Any suggestions/insights would be welcome!
It actually does work if you change the url in the pyspeedtest.py file from www.speedtest.net to c.speedtest.net on line 186 in v1.2.7 of the script.
Edit: added an example of how to get it to work
You can edit the pyspeedtest.py script (located at /usr/local/lib/python2.7/dist-packages/pyspeedtest.py on my raspberry pi 3) by using vi, e.g.:
sudo vi /usr/local/lib/python2.7/dist-packages/pyspeedtest.py
Go to line 186 and change the following line:
connection = self.connect('www.speedtest.net')
to:
connection = self.connect('c.speedtest.net')
Then run pyspeedtest using the wrapper in /usr/local/bin:
/usr/local/bin/pyspeedtest
Using server: speedtest.wilkes.net
Ping: 41 ms
Download speed: 46.06 Mbps
Upload speed: 11.58 Mbps
Or use the python interpreter:
>>> import pyspeedtest
>>> st = pyspeedtest.SpeedTest()
>>> st.ping()
41.70024394989014
>>> st.download()
44821310.72337018
>>> st.upload()
14433296.732646577
The project hasn't been updated since mid-2016, and the last update was "updated user-agent to prevent SpeedTest block"... And if you skim the code, there are [comments like this](https://github.com/fopina/pyspeedtest/blob/master/pyspeedtest.py#L188):
# really contribute to speedtest.net OS statistics
# maybe they won't block us again...
And there have been bugs posted to GitHub about the project not working, with no response.
So, my guess is: This project probably violates SpeedTest.net's terms of service, so they blocked it. The author tried to get around the block, they blocked it again, and the author gave up. In the intervening two years, any other servers it used as backups either blocked it, or shut down (e.g., speedtest.serv.pt, mentioned in the docs, no longer exists).
There is a pull request from another user that might fix it, although it appears to be failing the CI test. If you want to try it yourself, you can.
But otherwise, you can't use this library, and there's no way anyone can help you use it; it just doesn't work. You'll have to find another way to do the same thing.
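If it's any use, one alternative I know of is the speedtest-cli package (pip install speedtest-cli), which exposes a similar API; whether it fits your use case is for you to judge:

import speedtest

st = speedtest.Speedtest()
st.get_best_server()   # pick the lowest-latency test server
print(st.download())   # download speed in bits per second
print(st.upload())     # upload speed in bits per second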
I have a very odd bug. I'm writing some code in Python 3 to check a URL for changes by comparing SHA-256 hashes. The relevant part of the code is as follows:
from copy import deepcopy  # used by check_url below

from requests import get
from hashlib import sha256

def fetch_and_hash(url):
    file = get(url, stream=True)
    f = file.raw.read()
    hash = sha256()
    hash.update(f)
    return hash.hexdigest()

def check_url(target):  # passed a dict containing hash from previous examination of url
    t = deepcopy(target)
    old_hash = t["hash"]
    new_hash = fetch_and_hash(t["url"])
    if old_hash == new_hash:
        t["justchanged"] = False
        return t
    else:
        t["justchanged"] = True
        return handle_changes(t, new_hash)  # records the changes
So I was testing this on an old webpage of mine. I ran the check, recorded the hash, and then changed the page. Then I re-ran it a few times, and the code above did not reflect a new hash (i.e., it followed the old_hash == new_hash branch).
Then I waited maybe 5 minutes and ran it again without changing the code at all except to add a couple of debugging calls to print(). And this time, it worked.
Naturally, my first thought was "huh, requests must be keeping a cache for a few seconds." But then I googled around and learned that requests doesn't cache.
Again, I changed no code except for print calls. You might not believe me. You might think "he must have changed something else." But I didn't! I can prove it! Here's the commit!
So what gives? Does anyone know why this is going on? If it matters, the webpage is hosted on a standard commercial hosting service, IIRC using Apache, and I'm on a lousy local phone company DSL connection. I don't know if there are any server-side caching settings going on, but it's not on any kind of CDN.
So I'm trying to figure out whether there is some mysterious ISP cache thing going on, or I'm somehow misusing requests... the former I can handle; the latter I need to fix!
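One thing I could try, to test the cache theory: send a Cache-Control request header so any well-behaved intermediary cache bypasses its stored copy (whether a given cache actually honors it is another matter):

from requests import get
from hashlib import sha256

def fetch_and_hash(url):
    # Ask any caches along the path not to serve a stored copy.
    file = get(url, stream=True, headers={'Cache-Control': 'no-cache'})
    f = file.raw.read()
    hash = sha256()
    hash.update(f)
    return hash.hexdigest()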