Testing a function based on a third-party service - Python

I'm trying to figure out how to write unit tests for a function whose behavior depends on a third-party service.
Suppose a function like this:
import datetime
import requests

def sync_check():
    delta_secs = 90
    now = datetime.datetime.utcnow()
    res = requests.get('<url>')
    alert = SlackAlert()
    last_value = res.json()[-1]['date']  # Last element of the array is the most recent
    secs = (now - last_value).seconds
    if secs >= delta_secs:
        alert.notify("out of sync. Delay: {} seconds".format(secs))
    else:
        alert.notify('in sync')
What's the best practice for writing unit tests for this function? I need to test both the if and the else branch, but that depends on the third-party service.
The first thing that comes to mind is to create a fake webserver and point to that one (changing the url), but this way the codebase would include testing logic, like:
if test:
    url = <mock_web_server_url>
else:
    url = <third_party_service_url>
Moreover, unit testing would trigger Slack alerts, which shouldn't happen.
So I would have to change the codebase again, like:
if secs >= delta_secs:
    if test:
        logging.debug("out of sync alert sent - testing mode")
    else:
        alert.notify("out of sync. Delay: {} seconds".format(secs))
else:
    if test:
        logging.debug("in sync alert sent - testing mode")
    else:
        alert.notify('in sync')
Which I don't really like.
Am I missing a design pattern that solves this problem?

Check out Dependency Injection to test code that depends on third party services, without having to check whether you're running in test mode, like in your example. The basic idea is to have the slack alert service be an argument of your function, so for unit testing you can use a fake service that acts the way you want it to for each test.
Your code would end up looking something like this:
def sync_check(alert):
    delta_secs = 90
    now = datetime.datetime.utcnow()
    res = requests.get('<url>')
    last_value = res.json()[-1]['date']  # Last element of the array is the most recent
    secs = (now - last_value).seconds
    if secs >= delta_secs:
        alert.notify("out of sync. Delay: {} seconds".format(secs))
    else:
        alert.notify('in sync')
and in a test case, you could have your alert object be something as simple as:
class TestAlert:
    def __init__(self):
        self.message = None

    def notify(self, message):
        self.message = message
You could then test your function by passing in an instance of your TestAlert class, and check the logged output if you want to by accessing the message attribute. This way the test never sends a real Slack alert.
def test_sync_check():
    alert = TestAlert()
    sync_check(alert)
    assert alert.message == 'in sync'
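To exercise both branches without hitting the real HTTP endpoint either, the requests.get call can be faked as well, for example with unittest.mock.patch. Here is a minimal sketch, assuming sync_check and TestAlert live in a hypothetical module called sync_module and the endpoint returns a JSON array of entries with a 'date' field:
import datetime
from unittest import mock

def test_sync_check_out_of_sync():
    fake_response = mock.Mock()
    # Pretend the last sample is 120 seconds old, which should trigger the out-of-sync branch.
    fake_response.json.return_value = [
        {'date': datetime.datetime.utcnow() - datetime.timedelta(seconds=120)}
    ]
    alert = TestAlert()
    # 'sync_module' is a placeholder for wherever sync_check is defined.
    with mock.patch('sync_module.requests.get', return_value=fake_response):
        sync_check(alert)
    assert alert.message.startswith('out of sync')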

Related

Add webserver to existing python service

I have a script that continually runs, processing data that it gets from an external device. The core logic follows something like:
from external_module import process_data, get_data, load_interesting_things

class MyService:
    def __init__(self):
        self.interesting_items = load_interesting_things()
        self.run()

    def run(self):
        try:
            while True:
                data = get_data()
                for item in self.interesting_items:
                    item.add_datapoint(process_data(data, item))
        except KeyboardInterrupt:
            pass
I would like to add the ability to request information for the various interesting things via a RESTful API.
Is there a way in which I can add something like a Flask web service to the program such that the web service can get a stat from the interesting_items list to return? For example something along the lines of:
@app.route("/item/<idx>/average")
def average(idx: int):
    avg = interesting_items[idx].getAverage()
    return jsonify({"average": avg})
Assuming there is the necessary idx bounds checking and any appropriate locking implemented.
It does not have to be Flask, but it should be lightweight. I want to avoid using a database. I would prefer a web service, but if that is not possible without completely restructuring the code base I can use a socket instead, though that is less preferable.
The server would be running on a local network only, usually handling a single user, though sometimes it may have a few.
I needed to move the call to run() out of __init__(), so that I could keep a global reference to the service and start the run method in a separate thread. Something along the lines of:
service = MyService()
service_thread = threading.Thread(target=service.run, daemon=True)
service_thread.start()

app = flask.Flask("appname")

...

@app.route("/item/<idx>/average")
def average(idx: int):
    avg = service.interesting_items[idx].getAverage()
    return jsonify({"average": avg})

...

app.run()
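Since the service thread mutates interesting_items while Flask handlers read from it, a shared lock is worth adding. Below is a minimal sketch of that idea only; it assumes the same external_module functions, the app/service globals, jsonify, and the getAverage method shown above, and the lock attribute is something you would add yourself:
import threading

class MyService:
    def __init__(self):
        self.lock = threading.Lock()  # guards interesting_items
        self.interesting_items = load_interesting_things()

    def run(self):
        while True:
            data = get_data()
            with self.lock:  # hold the lock only while mutating
                for item in self.interesting_items:
                    item.add_datapoint(process_data(data, item))

@app.route("/item/<int:idx>/average")
def average(idx):
    with service.lock:  # consistent read of the shared list
        if idx < 0 or idx >= len(service.interesting_items):
            return jsonify({"error": "index out of range"}), 404
        avg = service.interesting_items[idx].getAverage()
    return jsonify({"average": avg})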

How do I handle different errors when working with an API?

I am working on a project in Python where I am gathering rain data, and in this case temperature data, from Netatmo weather stations (it's basically just a private weather station you can set up in your garden, and it will collect rain data, temperature, wind, etc.).
When using the patatmo API you need a user with credentials, a client. This client then has 500 requests per hour, which can be used on different requests, among them the client.Getpublicdata request and the client.Getmeasure request. The Getmeasure request requires a station ID and a module ID, which I get from the Getpublicdata request. If I run out of requests I catch that error with except ApiResponseError: and then change the client credentials, since I am not alone in this project and have credentials from two other people. My issue is:
If a station or module ID is not found, the Getmeasure request raises a different kind of ApiResponseError, which in my current code is also caught by the previously mentioned except, and this results in an endless loop where the code just changes credentials all the time.
The code looks like this:
from patatmo.api.errors import ApiResponseError

...
...
...

a_test1 = 0
while a_test1 == 0:
    try:
        Outdoor_data = client.Getmeasure(
            device_id=stations_id,
            module_id=modul_ID,
            type=typ,
            real_time=True,
            date_begin=Start,
            date_end=End
        )
        time.sleep(p)
        a_test1 = 1
    except ApiResponseError:
        credentials = cred_dict[next(ns)]
        client = patatmo.api.client.NetatmoClient()
        client.authentication.credentials = credentials
        client.authentication.tmpfile = 'temp_auth.json'
        print('Changing credentials')
        credError = credError + 1
        if credError > 4:
            time.sleep(600)
            credError = 0
        pass
    except:
        print('Request Error')
        time.sleep(p)
The errors.py module, which was written by someone else, looks like this:
class ApiResponseError(BaseException):
    pass

class InvalidCredentialsError(ApiResponseError):
    pass

class InvalidApiInputError(BaseException):
    pass

class InvalidRegionError(InvalidApiInputError):
    def __init__(self):
        message = \
            ("'region' required keys: "
             "lat_ne [-85;85], lat_sw [-85;85], "
             "lon_ne [-180;180] and lon_sw [-180;180] "
             "with lat_ne > lat_sw and lon_ne > lon_sw")
        super().__init__(message)

class InvalidRequiredDataError(InvalidApiInputError):
    def __init__(self):
        message = "'required_data' must be None or in {}".format(
            GETPUBLICDATA_ALLOWED_REQUIRED_DATA)
        super().__init__(message)

class InvalidApiRequestInputError(BaseException):
    pass

class InvalidPayloadError(InvalidApiRequestInputError):
    pass

API_ERRORS = {
    "invalid_client": InvalidCredentialsError("wrong credentials"),
    "invalid_request": ApiResponseError("invalid request"),
    "invalid_grant": ApiResponseError("invalid grant"),
    "Device not found": ApiResponseError("Device not found - Check Device ID "
                                         "and permissions")
}
What I want to do is catch the error depending on what type of error I get, and I just don't seem to have any luck doing so.
Which of those exception subclasses do you get on a request overrun? Use only that one to drive the auth swap. If it isn't a distinct subclass but is shared with other failure modes, you will need to examine the exception's attributes. You will also need to figure out which exceptions to work around: a bad station ID might mean someone is offline, so ignore it and try later, versus logic flaws in your program, where you should abort, fix, and retry.
Exception handling is on a first-catch, first-handled basis. Your most specific classes have to appear in an except clause first to match first, or a generic one will grab the exception and handle it. Watch your API's exception hierarchy carefully!
maxexc = 1000
countexc = 1
while ...
    # slow your loop
    time.sleep(p)
    try:
        ... what you normally do
    # don't use the too-generic ApiResponseError here yet
    except requestoverrunexception as e:
        ... swap credentials
        countexc += 1
    except regionexception as e:
        ... ignore this region for a while
        countexc += 1
    # whoops, abend and fix
    except ApiResponseError as e:
        print(vars(e))
        raise
    except Exception as e:
        print(e)
        raise
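Given the errors.py shown above, that specific-to-generic ordering might look something like the sketch below. Which class (or message) the quota overrun actually raises still has to be confirmed, e.g. by printing vars(e) once as suggested; the "Device not found" check relies on the message defined in API_ERRORS, and the client/credential variables are the ones from the question:
from patatmo.api.errors import ApiResponseError, InvalidCredentialsError

try:
    Outdoor_data = client.Getmeasure(
        device_id=stations_id,
        module_id=modul_ID,
        type=typ,
        real_time=True,
        date_begin=Start,
        date_end=End,
    )
except InvalidCredentialsError:
    # Most specific class first: only credential problems trigger the swap.
    credentials = cred_dict[next(ns)]
    client = patatmo.api.client.NetatmoClient()
    client.authentication.credentials = credentials
    client.authentication.tmpfile = 'temp_auth.json'
except ApiResponseError as e:
    # Generic API errors: distinguish them by their message.
    if 'Device not found' in str(e):
        print('Unknown station/module ID - skipping this station')
    else:
        raise  # unexpected API error: fail loudly and fix the code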
I would bail out after too many exceptions; that's what countexc is for.
Also, 500 requests an hour seems generous. Don't try to fudge that unless you have a real need. Some providers have watchdogs and may get rid of you if you abuse them.

Python API Rate Limiting - How to Limit API Calls Globally

I'm trying to restrict the API calls in my code. I already found a nice Python library, ratelimiter==1.0.2.post0:
https://pypi.python.org/pypi/ratelimiter
However, this library can only limit the rate in a local scope, i.e. within a function or a loop:
# Decorator
@RateLimiter(max_calls=10, period=1)
def do_something():
    pass

# Context Manager
rate_limiter = RateLimiter(max_calls=10, period=1)
for i in range(100):
    with rate_limiter:
        do_something()
Because I have several functions, which make API calls, in different places, I want to limit the API calls in global scope.
For example, suppose I want to limit the API calls to one per second, and suppose I have functions x and y in which two API calls are made.
@rate(...)
def x():
    ...

@rate(...)
def y():
    ...
By decorating the functions with the limiter, I'm able to limit the rate against the two functions.
However, if I execute the above two functions sequentially, it loses track of the number of API calls in global scope because they are unaware of each other. So y will be called right after the execution of x without waiting another second, and this will violate the one-call-per-second restriction.
Is there any way or library that I can use to limit the rate globally in python?
I had the same problem: I had a bunch of different functions that call the same API, and I wanted rate limiting to work globally. What I ended up doing was creating an empty function with rate limiting enabled.
PS: I use a different rate limiting library found here: https://pypi.org/project/ratelimit/
from ratelimit import limits, sleep_and_retry

# 30 calls per minute
CALLS = 30
RATE_LIMIT = 60

@sleep_and_retry
@limits(calls=CALLS, period=RATE_LIMIT)
def check_limit():
    ''' Empty function just to check for calls to API '''
    return
Then I just call that function at the beginning of every function that calls the API:
def get_something_from_api(http_session, url):
    check_limit()
    response = http_session.get(url)
    return response
If the limit is reached, the program will sleep until the (in my case) 60 seconds have passed, and then resume normally.
In the end, I implemented my own Throttler class. By proxying every API request through the request method, we can keep track of all API requests. Taking advantage of passing a function as the request method's parameter, it also caches the result in order to reduce API calls.
class TooManyRequestsError(Exception):
    def __str__(self):
        return "More than 30 requests have been made in the last five seconds."


class Throttler(object):
    cache = {}

    def __init__(self, max_rate, window, throttle_stop=False, cache_age=1800):
        # Dict of max number of requests of the API rate limit for each source
        self.max_rate = max_rate
        # Dict of duration of the API rate limit for each source
        self.window = window
        # Whether to throw an error (when True) if the limit is reached, or wait until another request
        self.throttle_stop = throttle_stop
        # The time, in seconds, for which to cache a response
        self.cache_age = cache_age
        # Initialization
        self.next_reset_at = dict()
        self.num_requests = dict()
        now = datetime.datetime.now()
        for source in self.max_rate:
            self.next_reset_at[source] = now + datetime.timedelta(seconds=self.window.get(source))
            self.num_requests[source] = 0

    def request(self, source, method, do_cache=False):
        now = datetime.datetime.now()

        # if cache exists, no need to make api call
        key = source + method.func_name
        if do_cache and key in self.cache:
            timestamp, data = self.cache.get(key)
            logging.info('{} exists in cached # {}'.format(key, timestamp))
            if (now - timestamp).seconds < self.cache_age:
                logging.info('retrieved cache for {}'.format(key))
                return data

        # <--- MAKE API CALLS ---> #

        # reset the count if the period passed
        if now > self.next_reset_at.get(source):
            self.num_requests[source] = 0
            self.next_reset_at[source] = now + datetime.timedelta(seconds=self.window.get(source))

        # throttle request
        def halt(wait_time):
            if self.throttle_stop:
                raise TooManyRequestsError()
            else:
                # Wait the required time, plus a bit of extra padding time.
                time.sleep(wait_time + 0.1)

        # if exceed max rate, need to wait
        if self.num_requests.get(source) >= self.max_rate.get(source):
            logging.info('back off: {} until {}'.format(source, self.next_reset_at.get(source)))
            halt((self.next_reset_at.get(source) - now).seconds)

        self.num_requests[source] += 1
        response = method()  # potential exception raise

        # cache the response
        if do_cache:
            self.cache[key] = (now, response)
            logging.info('cached instance for {}, {}'.format(source, method))

        return response
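For reference, a hypothetical way of using this class could look like the snippet below. The source name 'weather_api' and the fetch function are made up, datetime/time/logging/requests are assumed to be imported, and note that method.func_name is the Python 2 spelling of method.__name__:
import requests

# At most 30 requests per 5-second window for this (made-up) source.
throttler = Throttler(max_rate={'weather_api': 30}, window={'weather_api': 5})

def fetch_stations():
    return requests.get('https://example.com/stations')

# Every call goes through the throttler, which counts it and optionally caches the response.
response = throttler.request('weather_api', fetch_stations, do_cache=True)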
Many API providers constrain developers from making too many API calls.
The Python ratelimit package introduces a function decorator that prevents a function from being called more often than the API provider allows.
from ratelimit import limits
import requests

TIME_PERIOD = 900  # time period in seconds

@limits(calls=15, period=TIME_PERIOD)
def call_api(url):
    response = requests.get(url)
    if response.status_code != 200:
        raise Exception('API response: {}'.format(response.status_code))
    return response
Note: This function will not be able to make more than 15 API calls within a 15-minute time period.
Adding to Sunil's answer: you need to add the @sleep_and_retry decorator, otherwise your code will break when it reaches the rate limit:
@sleep_and_retry
@limits(calls=0.05, period=1)
def api_call(url, api_key):
    r = requests.get(
        url,
        headers={'X-Riot-Token': api_key}
    )
    if r.status_code != 200:
        raise Exception('API Response: {}'.format(r.status_code))
    return r
There are lots of fancy libraries that will provide nice decorators, and special safety features, but the below should work with django.core.cache or any other cache with a get and set method:
def hit_rate_limit(key, max_hits, max_hits_interval):
    '''Implement a basic rate throttler. Prevent more than max_hits occurring
    within max_hits_interval time period (seconds).'''
    # Use the django cache, but can be any object with get/set
    from django.core.cache import cache
    hit_count = cache.get(key) or 0
    logging.info("Rate Limit: %s --> %s", key, hit_count)
    if hit_count > max_hits:
        return True
    cache.set(key, hit_count + 1, max_hits_interval)
    return False
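A hypothetical call site for this helper could look like the following (the key name and limits are illustrative): the function returns True when the caller should back off, so the API call is only made when it returns False.
def call_external_api(url):
    # Allow at most 10 hits per 60-second window for this key.
    if hit_rate_limit('external-api', max_hits=10, max_hits_interval=60):
        return None  # over the limit: skip or retry later
    return requests.get(url)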
Using the Python standard library:
import threading
from time import time, sleep

b = threading.Barrier(2)

def belay(s=1):
    """Block the main thread for `s` seconds."""
    while True:
        b.wait()
        sleep(s)

def request_something():
    b.wait()
    print(f'something at {time()}')

def request_other():
    b.wait()
    print(f'or other at {time()}')

if __name__ == '__main__':
    thread = threading.Thread(target=belay)
    thread.daemon = True
    thread.start()

    # request a lot of things
    i = 0
    while (i := i+1) < 5:
        request_something()
        request_other()
There are about s seconds between each timestamp printed. Because the main thread waits rather than sleeps, the time it spends responding to requests is unrelated to the (minimum) time between requests.

How do I log multiple very similar events gracefully in python?

With Python's logging module, is there a way to collect multiple events into one log entry? An ideal solution would be an extension of Python's logging module, or a custom formatter/filter for it, so that collecting logging events of the same kind happens in the background and nothing needs to be added in the code body (e.g. at every call of a logging function).
Here is an example that generates a large number of the same or very similar logging events:
import logging

for i in range(99999):
    try:
        asdf[i]  # not defined!
    except NameError:
        logging.exception('foo')  # generates large number of logging events
    else:
        pass

# ... more code with more logging ...
for i in range(88888):
    logging.info('more of the same %d' % i)
# ... and so on ...
So we have the same exception 99999 times and log it. It would be nice if the log just said something like:
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
  File "./exceptionlogging.py", line 10, in <module>
    asdf[i] # not defined!
NameError: name 'asdf' is not defined
INFO:root:foo more of the same (occured 88888 times with various values)
You should probably be writing a message aggregate/statistics class rather than trying to hook onto the logging system's singletons but I guess you may have an existing code base that uses logging.
I'd also suggest that you should instantiate your loggers rather than always using the default root. The Python Logging Cookbook has extensive explanation and examples.
The following class should do what you are asking.
import logging
import atexit
import pprint


class Aggregator(object):
    logs = {}

    @classmethod
    def _aggregate(cls, record):
        id = '{0[levelname]}:{0[name]}:{0[msg]}'.format(record.__dict__)
        if id not in cls.logs:  # first occurrence
            cls.logs[id] = [1, record]
        else:  # subsequent occurrence
            cls.logs[id][0] += 1

    @classmethod
    def _output(cls):
        for count, record in cls.logs.values():
            record.__dict__['msg'] += ' (occured {} times)'.format(count)
            logging.getLogger(record.__dict__['name']).handle(record)

    @staticmethod
    def filter(record):
        # pprint.pprint(record)
        Aggregator._aggregate(record)
        return False

    @staticmethod
    def exit():
        Aggregator._output()


logging.getLogger().addFilter(Aggregator)
atexit.register(Aggregator.exit)

for i in range(99999):
    try:
        asdf[i]  # not defined!
    except NameError:
        logging.exception('foo')  # generates large number of logging events
    else:
        pass

# ... more code with more logging ...
for i in range(88888):
    logging.error('more of the same')
# ... and so on ...
Note that you don't get any logs until the program exits.
The result of running it is:
ERROR:root:foo (occured 99999 times)
Traceback (most recent call last):
  File "C:\work\VEMS\python\logcount.py", line 38, in <module>
    asdf[i] # not defined!
NameError: name 'asdf' is not defined
ERROR:root:more of the same (occured 88888 times)
Your question hides an implicit assumption about how "very similar" is defined.
Log records can either be const-only (whose instances are strictly identical), or a mix of consts and variables (no consts at all is also considered a mix).
An aggregator for const-only log records is a piece of cake. You just need to decide whether process/thread will fork your aggregation or not.
For log records which include both consts and variables you'll need to decide whether to split your aggregation based on the variables you have in your record.
A dictionary-style counter (from collections import Counter) can serve as a cache, counting your instances in O(1), but you may need some higher-level structure if you also want to write the variables down. Additionally, you'll have to handle writing the cache to a file yourself, either every X seconds (binning) or once the program has exited (risky: you may lose all in-memory data if something gets stuck).
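For the const-only case, a minimal sketch of that counter-plus-atexit idea could look like this (the helper name and logger are illustrative, not part of any framework):
import atexit
import logging
from collections import Counter

_counts = Counter()

def count_log(message):
    # O(1) aggregation; nothing is written until the program exits.
    _counts[message] += 1

@atexit.register
def _flush_counts():
    for message, count in _counts.items():
        logging.getLogger(__name__).error('%s (occurred %d times)', message, count)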
A framework for aggregation would look something like this (tested on Python v3.4):
import logging
from logging import Handler
from threading import RLock, Timer
from collections import defaultdict


class LogAggregatorHandler(Handler):

    _default_flush_timer = 300  # Number of seconds between flushes
    _default_separator = "\t"  # Separator char between metadata strings
    _default_metadata = ["filename", "name", "funcName", "lineno", "levelname"]  # metadata defining unique log records

    class LogAggregatorCache(object):
        """ Keeps whatever is interesting in log records aggregation. """
        def __init__(self, record=None):
            self.message = None
            self.counter = 0
            self.timestamp = list()
            self.args = list()
            if record is not None:
                self.cache(record)

        def cache(self, record):
            if self.message is None:  # Only the first message is kept
                self.message = record.msg
            assert self.message == record.msg, "Non-matching log record"  # note: will not work with string formatting for log records; e.g. "blah {}".format(i)
            self.timestamp.append(record.created)
            self.args.append(record.args)
            self.counter += 1

        def __str__(self):
            """ The string of this object is used as the default output of log records aggregation. For example: record message with occurrences. """
            return self.message + "\t (occurred {} times)".format(self.counter)

    def __init__(self, flush_timer=None, separator=None, add_process_thread=False):
        """
        Log record metadata will be concatenated to a unique string, separated by self._separator.
        Process and thread IDs will be added to the metadata if set to True; otherwise log records across processes/threads will be aggregated together.
        :param separator: str
        :param add_process_thread: bool
        """
        super().__init__()
        self._flush_timer = flush_timer or self._default_flush_timer
        self._cache = self.cache_factory()
        self._separator = separator or self._default_separator
        self._metadata = self._default_metadata
        if add_process_thread is True:
            self._metadata += ["process", "thread"]
        self._aggregation_lock = RLock()
        self._store_aggregation_timer = self.flush_timer_factory()
        self._store_aggregation_timer.start()

        # Demo logger which outputs aggregations through a StreamHandler:
        self.agg_log = logging.getLogger("aggregation_logger")
        self.agg_log.addHandler(logging.StreamHandler())
        self.agg_log.setLevel(logging.DEBUG)
        self.agg_log.propagate = False

    def cache_factory(self):
        """ Returns an instance of a new caching object. """
        return defaultdict(self.LogAggregatorCache)

    def flush_timer_factory(self):
        """ Returns a threading.Timer daemon object which flushes the Handler aggregations. """
        timer = Timer(self._flush_timer, self.flush)
        timer.daemon = True
        return timer

    def find_unique(self, record):
        """ Extracts a unique metadata string from log records. """
        metadata = ""
        for single_metadata in self._metadata:
            value = getattr(record, single_metadata, "missing " + str(single_metadata))
            metadata += str(value) + self._separator
        return metadata[:-len(self._separator)]

    def emit(self, record):
        try:
            with self._aggregation_lock:
                metadata = self.find_unique(record)
                self._cache[metadata].cache(record)
        except Exception:
            self.handleError(record)

    def flush(self):
        self.store_aggregation()

    def store_aggregation(self):
        """ Write the aggregation data to file. """
        self._store_aggregation_timer.cancel()
        del self._store_aggregation_timer
        with self._aggregation_lock:
            temp_aggregation = self._cache
            self._cache = self.cache_factory()

        # ---> handle temp_aggregation and write to file <--- #
        for key, value in sorted(temp_aggregation.items()):
            self.agg_log.info("{}\t{}".format(key, value))

        # ---> re-create the store_aggregation Timer object <--- #
        self._store_aggregation_timer = self.flush_timer_factory()
        self._store_aggregation_timer.start()
Testing this Handler class with random log severity in a for-loop:
if __name__ == "__main__":
    import random
    import logging

    logger = logging.getLogger()
    handler = LogAggregatorHandler()

    logger.addHandler(handler)
    logger.addHandler(logging.StreamHandler())
    logger.setLevel(logging.DEBUG)

    logger.info("entering logging loop")
    for i in range(25):
        # Randomly choose log severity:
        severity = random.choice([logging.DEBUG, logging.INFO, logging.WARN, logging.ERROR, logging.CRITICAL])
        logger.log(severity, "test message number %s", i)
    logger.info("end of test code")
If you want to add more stuff, this is what a Python log record looks like:
{'args': ['()'],
'created': ['1413747902.18'],
'exc_info': ['None'],
'exc_text': ['None'],
'filename': ['push_socket_log.py'],
'funcName': ['<module>'],
'levelname': ['DEBUG'],
'levelno': ['10'],
'lineno': ['17'],
'module': ['push_socket_log'],
'msecs': ['181.387901306'],
'msg': ['Test message.'],
'name': ['__main__'],
'pathname': ['./push_socket_log.py'],
'process': ['65486'],
'processName': ['MainProcess'],
'relativeCreated': ['12.6709938049'],
'thread': ['140735262810896'],
'threadName': ['MainThread']}
One more thing to think about:
Most features you run depend on a flow of several consecutive commands (which will ideally report log records accordingly); e.g. a client-server communication will typically depend on receiving a request, processing it, reading some data from the DB (which requires a connection and some read commands), some kind of parsing/processing, constructing the response packet and reporting the response code.
This highlights one of the main disadvantages of using an aggregation approach: by aggregating log records you lose track of the time and order of the actions that took place. It will be extremely difficult to figure out what request was incorrectly structured if you only have the aggregation at hand.
My advice in this case is that you keep both the raw data and the aggregation (using two file handlers or something similar), so that you can investigate a macro-level (aggregation) and a micro-level (normal logging).
However, you are still left with the responsibility of finding out that things have gone wrong, and then manually investigating what caused it. When developing on your PC this is an easy enough task, but deploying your code on several production servers makes these tasks cumbersome, wasting a lot of your time.
Accordingly, there are several companies developing products specifically for log management. Most aggregate similar log records together, but others incorporate machine learning algorithms for automatic aggregation and learning your software's behavior. Outsourcing your log handling can then enable you to focus on your product, instead of on your bugs.
Disclaimer: I work for Coralogix, one such solution.
You can subclass the logger class and override the exception method to put your error types in a cache until they reach a certain counter before they are emitted to the log.
import logging
from collections import defaultdict

MAX_COUNT = 99999

class MyLogger(logging.getLoggerClass()):
    def __init__(self, name):
        super(MyLogger, self).__init__(name)
        self.cache = defaultdict(int)

    def exception(self, msg, *args, **kwargs):
        err = msg.__class__.__name__
        self.cache[err] += 1
        if self.cache[err] > MAX_COUNT:
            new_msg = "{err} occurred {count} times.\n{msg}"
            new_msg = new_msg.format(err=err, count=MAX_COUNT, msg=msg)
            self.log(logging.ERROR, new_msg, *args, **kwargs)
            self.cache[err] = 0

log = MyLogger('main')

try:
    raise TypeError("Useful error message")
except TypeError as err:
    log.exception(err)
Please note this isn't copy paste code.
You need to add your handlers (I recommend formatter, too) yourself.
https://docs.python.org/2/howto/logging.html#handlers
Have fun.
Create a counter and only log it for count=1, then increment thereafter and write out in a finally block (to ensure it gets logged no matter how badly the application crashes and burns). This could of course pose an issue if you have the same exception for different reasons, but you could always search for the line number to verify it's the same issue, or something similar. A minimal example:
name_error_exception_count = 0
try:
    for i in range(99999):
        try:
            asdf[i]  # not defined!
        except NameError:
            name_error_exception_count += 1
            if name_error_exception_count == 1:
                logging.exception('foo')
        else:
            pass
except Exception:
    pass  # this is just to get the finally block, handle exceptions here too, maybe
finally:
    if name_error_exception_count > 0:
        logging.exception('NameError exception occurred {} times.'.format(name_error_exception_count))

Using pyUSB to read data from ELM327 OBDII to USB device

I am having problems using the pyUSB library to read data from an ELM327 OBDII to USB device. I know that I need to write a command to the device on the write endpoint and read the received data back on the read endpoint. It doesn't seem to want to work for me though.
I wrote my own class obdusb for this:
import sys
from time import sleep

import usb.core


class obdusb:

    def __init__(self, _vend, _prod):
        '''Handle to USB device'''
        self.idVendor = _vend
        self.idProduct = _prod
        self._dev = usb.core.find(idVendor=_vend, idProduct=_prod)
        return None

    def GetDevice(self):
        '''Must be called after constructor'''
        return self._dev

    def SetupEndpoint(self):
        '''Must be called after constructor'''
        try:
            self._dev.set_configuration()
        except usb.core.USBError as e:
            sys.exit("Could not set configuration")

        self._endpointWrite = self._dev[0][(0,0)][1]
        self._endpointRead = self._dev[0][(0,0)][0]

        # Resetting device and setting vehicle protocol (Auto)
        # 20ms is required as a delay between each written command
        # ATZ resets device
        self._dev.write(self._endpointWrite.bEndpointAddress, 'ATZ', 0)
        sleep(0.002)
        # ATSP 0 should set vehicle protocol automatically
        self._dev.write(self._endpointWrite.bEndpointAddress, 'ATSP 0', 0)
        sleep(0.02)
        return self._endpointRead

    def GetData(self, strCommand):
        data = []
        self._dev.write(self._endpointWrite.bEndpointAddress, strCommand, 0)
        sleep(0.002)
        data = self._dev.read(self._endpointRead.bEndpointAddress, self._endpointRead.wMaxPacketSize)
        return data
So I then use this class and call the GetData method using this code:
import obdusb

def PrintResults(arr):
    size = len(arr)
    print "Data currently in buffer:"
    for i in range(0, size):
        print "[" + str(i) + "]: " + str(arr[i])

# Setting up library, device and endpoint
lib = obdusb.obdusb(0x0403, 0x6001)
myDev = lib.GetDevice()
endp = lib.SetupEndpoint()

# Testing GetData function with random OBD command
# 0902 is VIN number of vehicle being requested
dataArr = lib.GetData('0902')
PrintResults(dataArr)
raw_input("Press any key")
This only ever prints the numbers 1 and 60, from the [0] and [1] elements of the array. No other data has been returned from the command. This is the case whether the device is connected to a car or not. I don't know what these 2 pieces of information are. I am expecting it to return a string of hexadecimal numbers. Does anyone know what I am doing wrong here?
If you don't use ATST or ATAT, you have to expect a timeout of 200ms at start, between every write/read combination.
Are you sending a '\r' after each command? It looks like you don't, so it's forever waiting for a Carriage Return.
And a hint: test with 010D or 010C or something; with 09xx it might be difficult to know what to expect.
UPDATE:
You can do that both ways, as long as you 'separate' each command with a carriage return.
http://elmelectronics.com/ELM327/AT_Commands.pdf
http://elmelectronics.com/DSheets/ELM327DS.pdf (Expanded list).
That command list was quite useful to me.
ATAT can be used to adjust the timeout.
When you send 010D, the ELM chip will normally wait 200 ms to get all possible reactions. Sometimes you can get more returns, so it waits the 200 ms.
What you can also do, and it's a mystery why only scan tools tend to implement this:
'010D1\r'
The 1 after the command specifies that the ELM should report back as soon as it has 1 reply from the bus. So it reduces the delay quite efficiently, at the cost of not being able to get more values from the address 010D. (Which is speed!)
Sorry for my English; I hope this sends you in the right direction.
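Applying that advice to the question's obdusb class would mean terminating every command with a carriage return before writing it. A small hedged sketch of what GetData might then look like; the longer sleep is only a guess based on the 200 ms timeout mentioned above:
def GetData(self, strCommand):
    # Terminate the command with a carriage return, as the ELM327 expects.
    self._dev.write(self._endpointWrite.bEndpointAddress, strCommand + '\r', 0)
    # Give the ELM327 time to collect replies (it waits ~200 ms by default).
    sleep(0.3)
    return self._dev.read(self._endpointRead.bEndpointAddress,
                          self._endpointRead.wMaxPacketSize)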
