I have a device connected over USB and I'm using PyUSB to read its data.
This is what my code currently looks like:
import usb.core
import usb.util

def main():
    device = usb.core.find(idVendor=0x072F, idProduct=0x2200)
    # use the first/default configuration
    device.set_configuration()
    # first endpoint
    endpoint = device[0][(0, 0)][0]

    # read a data packet
    data = None
    while True:
        try:
            data = device.read(endpoint.bEndpointAddress,
                               endpoint.wMaxPacketSize)
            print data
        except usb.core.USBError as e:
            data = None
            if e.args == ('Operation timed out',):
                continue

if __name__ == '__main__':
    main()
It is based on the mouse reader example, but the data that I'm getting doesn't make sense to me:
array('B', [80, 3])
array('B', [80, 2])
array('B', [80, 3])
array('B', [80, 2])
My guess is that it's reading only a portion of what's actually being provided. I've tried setting the max packet size to be bigger, but nothing changed.
PyUSB returns the data as an array of byte values. The data you are receiving is a sequence of ASCII codes. You need to add the following lines to read the data properly:
data = device.read(endpoint.bEndpointAddress,
                   endpoint.wMaxPacketSize)
RxData = ''.join([chr(x) for x in data])
print RxData
The function chr(x) converts an ASCII code to the corresponding character. This should resolve your problem.
I'm only an occasional Python user, so beware. If your Python script cannot keep up with the amount of data being sampled, this is what works for me. I'm sending blocks of 64 bytes from a uC to the PC. I use a buffer to hold my samples and later save them to a file or plot them. I adjust the number multiplying 64 (10 in the example below) until I receive all the samples I was expecting.
import array

# Initialization: room for 10 blocks of 64 bytes
rxBytes = array.array('B', [0]) * (64 * 10)
rxBuffer = array.array('B')
Within a loop, I get the new samples and store them in the buffer:
# Get new samples
hid_dev.read(endpoint.bEndpointAddress, rxBytes)
rxBuffer.extend(rxBytes)
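Putting the two snippets together, a minimal end-to-end sketch might look like this (the vendor/product IDs are taken from the question above; the block count, iteration limit, and output filename are placeholders to adjust):

import array
import usb.core

device = usb.core.find(idVendor=0x072F, idProduct=0x2200)
device.set_configuration()
endpoint = device[0][(0, 0)][0]

BLOCKS = 10                                     # tune until no samples are lost
rxBytes = array.array('B', [0]) * (64 * BLOCKS)
rxBuffer = array.array('B')

for _ in range(100):                            # read 100 buffers, then stop
    # passing an array makes read() fill it in place
    device.read(endpoint.bEndpointAddress, rxBytes)
    rxBuffer.extend(rxBytes)

# save the collected samples for post-processing
with open('samples.bin', 'wb') as f:
    rxBuffer.tofile(f)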
Hope this helps.
I am working on a project where I am trying to save the video from a drone on my computer and also show it live. I thought about converting the video into images, about 30 per second, and updating my frontend with these pictures so that it looks like a video.
Since it is the first time I am working with video and image strings, I need some help. As far as I can figure out, I am receiving a byte string.
I cannot use the libh264 decoder because I am unable to integrate it into Python 3.7; it only works with Python 2.
Here some strings:
b'\x00\x00\x00\x01A\xe2\x82_\xc1^\xa9y\xae\xa3\xf2G\x1a \x89z\' \x8c\xa6\xe7I}\xf3F\x07t\xf4*b\xd8\xc7\xff\x82\xb32\xb7\x07\x9b\xf5r0\xa0\x1e\x10\x8e\x80\x07\xc1\xdd\xb8g\xba)\x02\xee\x9f\x16][\xe4\xc6\x16\xc8\x17y\x02\xdb\x974\x13Mfn\xcc6TB\xadC\xb3Y\xadB~"\xd5\xdb\xdbg\xbaQ:{\xbftV\xc4s\xb8\xa3\x1cC\xe9?\xca\xf6\xef\x84]?\xbd`\x94\xf6+\xa8\xb4]\xc3\xe9\xa8-I\xd1\x180\xed\xc9\xee\xf4\x93\xd2\r\x00p1\xb3\x1d\xa2~\xfa\xe8\xf4\x97\x08\xc1\x18\x8ad,\xb6\x80\x86\xc6\x05V\x0ba\xcb\x7f\x82\xf2\x03\x9a)\xd6\xd9\x11\x92\x7f\xb5\x8a)R\xaa\xa0 \x85$\xd82(\xee\xd2\x8b\x94N\xacg\x07\x98n\x95OJ\xa4\xcc_\\-\x17\x13\xf3V\x96_\xb5\x97\xe2\xa2;\x03q\xce\x9b\x9e,\xe37{Z\x00\xce|\\\xf9\xdb\xa7\xba\xf3\'c\xee\xc9\xe7I\xfadZ\xb2\xfb\t\xb6\x03\x03\xfe\x9dM!!k\xec\xe0t{\xfeig\xcbL\xf6\x0bOP\r\x97\t\x95Hb\xd81\xb5\xbfVLZ#\x16s\xb6\x1adf\xb5\xe2\xb5\xb7\xccI\x82l\x05\xe9\x85\xd3\'x\x14C\xeb\xc4\xcb\xa5\xc7\xb6=\x7f\\m4\xa4\x00~\xdb\x97\xe4\xbb\xf3A\x86 Mm\xc7\x9a\x90\xda&\xc5\xf2wY\nr.1\xb9\x0c\xb4\xb1\xb2!\x03)\xb3\x19\x1d\xba\xfb)\xb0\xd2LS\x93\xe3\xb4t\x91\xed\xa7\xfe\xceV\x10\xa7Vcd\xcbIt\xdf\xff0\xcb9Q\xef(\x11&W0|p\x13\xfe\xd6\x93A\xa7\xc2(f\xde\xcc[\x8f#P\x07\x1f\xb0\\.\xd0\xa07\xab\xd5\xce\xb1N\xfb\xd3\xcc\x0f\x89+gm1p4\x87_\xf6\xfe\x13\xe8\xec\xa3vd,\xb3jW\x96\xe2\x937\xcb\xc5\xc4\xdb\xd9(wj\xa85y\xccE \xf8\xe4\x83\xd5\xcf\xe5A\xf9\x18T;v\x00\xbc\xac\xd1a\xed\tK\xd6\xd4\xd4\xc4W\xe4F7L\xfc\xb4\xeb3\x937\x94\x02i\xf3\x85\xbe\x05B\xf5\xb8\xccO\x84\xfb]M\x0c\xd8k\x00va\x0f\x91M\xd9\x9f9\xfc\x0f6\xa4f\xc5\xbe\xd9GItD\xdf7*\x93Kv)~[\xf1%\xeb(o\xef;\xc0\xb4,\xa1\xc2V\x8a\xff\xe1\x86\x17\xe7\xf17\xe81l&\x14<j\xb0AS\xf92\xb1C;\x81\x8a\x06D\xab\x11j\xcd\xb1q\x9e\xefm\x0ei7\x15\x8d\x03\xdd6B\xd9qg*X\x0f\xe6F\xdc\xb6\x93N\xbe\x12\xc9#I\xe3\xd4\x80j\xe8z\xd5t\x05,Y\xd7\xec\xd1\x9a\x97\xae\x16\xb0\xdfi\xb2\xb8\xb5J-\xde9&\x1ai\x19\xb7\x81\xa3\'\xccf]\xeeK#\x8bk3\x11\x97\\T\x88\xfb\xee\xd3El:\x16\x13\xafi\xc0\xf9\xef\xefe7\xe4w\x14\xdf76g^\xd02J\x96Z\xedl\x19\x8eG\xb7\xc6\xebHj\x86\x84/:R{+co\xa0\xaa\xeb.\xbb\x0e\xc9\xf3\xa8\x1e\xd4\x1a\x010\x87;\xef\xbe\xaf.\x87\x9a5\xfdG\x82\xd5\xb2\x01\x1e\xf2\xd3l\xef\tb\xe7=1\x03\x8f\xae\x83\x84:0\x9bE;x\x03UB\x87\xbco\xb2\x80xZ\x96\x1a\x0e?i\xe51^\x9b\x1d\xb4\\|\xccH\xdf3G\x83\xbd/\rhS0;\x9a\xdb\xf6NG\x16 ?\xf3\x13<\xcf!p\xd5\n\xb1\xf2\x0e\xcc\xdc\x0b\xe6\xe8\xcb#\x85\x17s#\x87\xb4\xf8f\xc7\x9fi\xcc\xe4b\xca\xc0\x1eh\xc1u\xad\x98\x92\x12\x00\xb5`\xfa!~{\xac\xc0\x14:\xce\xfc\xa4\x90\x12\xc4K\xa5\xb9\x83\xd1\x03\x1a\xd8z\xf6A\xe9\xfbb\x07\x99\xf80\x9b,\x17\x8d /ZXb]\xb2P\\\'\xcb\n\xae\x82\x99X\xf5\t\xd1\xc9p\x11\x8d\xcaD\xf2\x8b\x8bc%\x17] \x89b\xa9kF\x93\xc0\xe1{INUg\xec\xb4\x1b`{\xd1:\xb3\xa4\x7f\t\x9b\xde\xb0V\x1f\xd7\x85>\xbeT\xbb\xe5\xf0u\x96\x98\xad\x9a\xc3N\xf8A\x91\xd95h\x1ef\xbc\xf2\x08B\xe0\x9f\xe0\x1d+\xb6$\xafA\xca\xf6\xc5MX\x88\x9e\xf1\xbawZ\x87\xe7\xf7\xf4\xcd\xe4\x92|L\x1ep69\x81\x8f\xc6\'\xc1q\xe3\x98\x1ev\x94\xa3\xd5\xb8g\xee\x82\xd3Y\xccs\x81\x06\x97\x02\xf0\xd8S\xf1\x1b!\x8emp\x02w\x97\x11t]5?\x16\xfa\xf2\xfb\xf7\xef\xdf\xe4\x82V\x07?F`\xcf\xee\xef\xe7\xae\x18\xef\x83a\x87\xb1zh\xe7\xaez]\x1e\xc5\xd9\xe7&\x9a\xf0\xd0\xa4!\x05\x07\xff\xca\x10\xfa\xb7\x01\x9aU\x8b(\xb5#\x11\x95\x98\x8b\xe3\x84\x9b\x13\xecw\x0e\xc9\xad<X\xde\x11\tuo\xd2\xfd\xb6\xc2\x1c\xfb\x82 
\xb2\xa6\x02\x8c0\x19\xadP\x1b\xc3C\x08\xc9-\xaa\xd0\x15\xb3\xd2g\x07\x980:u\r\xfc\xf4&\xf9\x06$#\x85\xe1l\x16\x8a\x9f\xedX\xa0b\x1a^\x90#256\xc0z\xc7\xfax\xde\xa2\x0fKHY\xed8\xc6`\xa7^#\x0b.\xc4\x1a\r\x938\x17\xe2|\xb0\x95-\xce\xaa}\xc3\xb5\x0bS\xbb\xc6\x0cA\x00`\xe5:\x00\xc6\x0b\x93(1]\xb1\xb6\xc0\xc0de;]~\xa1\xc6d\xf7\x12\xc9\x0f\xfc\xd4\xd0\xfcJ\xb9\xd5\nE\x9a\x7f\x12\xbd\x83\x87\xff\xb8\x15\x0fm\x14p\xba\xc0\xef\x87v\x9e\\\xfd\x8f;\xe3\xb5\x03\x94\xd6t\xa5\xc2\xe9\x92\xd1\xcd9cS\x15\x9c}\xdd\x9f\xf4\xe1\xd2\xb6cR\xb1\x18\x83\xe7\n\xde\xfeUM\x90\xf9\xbf\xf6\xd8J\xc7\x1a:z\x0bGL\x00l\xf6\xa5\x1f$\x86O6\xfa\x13\x04G\x0e\xfe\xca\xbe\xaf\xe1\xb6\xfa\x91\x9b\xb5\x9f]\x12N\x9c\xcf4b}E\x07\xa6B\xd2\x10\xe0Xjxi\x93\x92w\x1d \xd5\xd1\x87,5\xa0\xd3\x18\x8e\xe0\xad9o\x92\x8d\xb1\x95o\x0c"\xb4\xadW\xf9\xc9\xa0\xe5i\xdb\x17\xea\xd6o$Y\xfb\xb5\x9c\x93\x16\xf7\xc0\x1cz\x00\xfc$\x08\x9ay38Y\xe1_8\xb2\xe2\xd1\t\xcdfmcpSEt\x86\xa6'
I would appreciate it if you could help me understand where every picture starts and where it ends. I assume that there has to be some kind of parity bits.
How is it possible to make a picture out of it?
Here is my code and what I've tried so far:
def videoLogging(self):
    logging.info("-----------[Tello] Video Thread: started------------------")
    INTERVAL = 0.2
    index = 0
    while True:
        try:
            packet_data = None
            index += 1
            res_string, ip = self.video_socket.recvfrom(2048)
            packet_data = res_string
            print(packet_data)
            self.createImg(packet_data)
            time.sleep(5)
            # videoResponse = self.video_socket.recv(2048)
            # mv = memoryview(videoResponse).cast('H')
            # if mv is not None:
            #     self.createImg(mv)
            #     print("image created")
            # print('VIDEO %s' % videoResponse)
            # time.sleep(3)
        except Exception as ex:
            logging.error("Error in listening to tello\t\t %s" % ex)

def createImg(self, data):
    with open('image.jpg', 'wb') as f:
        f.write(data)
Unfortunately, the image can't be opened.
Thanks in advance.
This looks like an Annex B stream. There are no parity bits. You can read about the bitstream format here: Possible Locations for Sequence/Picture Parameter Set(s) for H.264 Stream
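To illustrate, here is a minimal sketch (Python 3) that scans a buffer for Annex B start codes (00 00 01, often preceded by an extra 00) and reports each NAL unit's type. This only locates NAL unit boundaries; to turn them into pictures you still need an actual H.264 decoder such as PyAV or OpenCV:

def iter_nal_units(buf):
    # yield (offset, nal_type) for every NAL unit found in an Annex B buffer
    i = 0
    while True:
        i = buf.find(b'\x00\x00\x01', i)
        if i == -1 or i + 3 >= len(buf):
            return
        nal_type = buf[i + 3] & 0x1F   # low 5 bits of the NAL header = unit type
        yield i, nal_type
        i += 3

packet = b'\x00\x00\x00\x01A\xe2\x82...'   # your received bytes (truncated here)
for offset, nal_type in iter_nal_units(packet):
    # H.264 types: 1 = non-IDR slice, 5 = IDR slice (start of a decodable
    # picture), 7 = SPS, 8 = PPS
    print(offset, nal_type)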
I have an acceleration sensor which continuously outputs readings at 400 Hz (like [0.21511 0.1451 0.2122]). I want to store them and post-process them. Right now I'm only able to store the first entry of the reading, not all of them.
How do I make this happen?
Thanks
from altimu10v5.lsm6ds33 import LSM6DS33
from time import sleep
import numpy as np

lsm6ds33 = LSM6DS33()
lsm6ds33.enable()

accel = lsm6ds33.get_accelerometer_g_forces()

while True:
    DataOut = np.column_stack(accel)
    np.savetxt('output.dat', np.expand_dims(accel, axis=0), fmt='%2.2f %2.2f %2.2f')
    sleep(1)
The actual problem is that you are calling get_accelerometer_g_forces() only once.
Just move it inside the while loop.
Updated:
while True:
    accel = lsm6ds33.get_accelerometer_g_forces()
    with open('output.dat', 'ab') as f:
        np.savetxt(f, np.expand_dims(accel, axis=0), fmt='%2.2f %2.2f %2.2f')
    sleep(1)
Here is a reference: How to write a numpy array to a csv file?
Make sure that reading the data is enclosed within the loop!
You don't need numpy here yet:
with open("output.dat", "w") as f:
    while True:
        f.write("%.5f, %.5f, %.5f\n" % tuple(lsm6ds33.get_accelerometer_g_forces()))
Note that there is no condition to stop outputting the data.
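Also note that at 400 Hz, a loop with sleep(1) will miss most samples. Below is a minimal sketch that polls as fast as possible and writes in batches; the batch size, filename, and the assumption that get_accelerometer_g_forces() returns an [x, y, z] triple are illustrative choices:

from altimu10v5.lsm6ds33 import LSM6DS33

lsm6ds33 = LSM6DS33()
lsm6ds33.enable()

buffer = []
try:
    while True:
        buffer.append(lsm6ds33.get_accelerometer_g_forces())
        if len(buffer) >= 400:               # flush roughly once per second
            with open('output.dat', 'a') as f:
                for x, y, z in buffer:
                    f.write('%2.2f %2.2f %2.2f\n' % (x, y, z))
            buffer = []
finally:
    # write whatever is left when the script is stopped (e.g. Ctrl-C)
    with open('output.dat', 'a') as f:
        for x, y, z in buffer:
            f.write('%2.2f %2.2f %2.2f\n' % (x, y, z))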
I was just playing around with sound input and output on a raspberry pi using python.
My plan was to read the input of a microphone, manipulate it, and play back the manipulated audio. For the moment I've just tried to read and play back the audio.
The reading seems to work, since I wrote the read data into a wave file in the last step, and the wave file seemed fine.
But the playback is only noise.
Playing the wave file worked as well, so the headset is fine.
I think maybe I got some problem in my settings or the output format.
The code:
import alsaaudio as audio
import time
import audioop

# Input & output settings
periodsize = 1024
audioformat = audio.PCM_FORMAT_FLOAT_LE
channels = 16
framerate = 8000

# Input device
inp = audio.PCM(audio.PCM_CAPTURE, audio.PCM_NONBLOCK, device='hw:1,0')
inp.setchannels(channels)
inp.setrate(framerate)
inp.setformat(audioformat)
inp.setperiodsize(periodsize)

# Output device
out = audio.PCM(audio.PCM_PLAYBACK, device='hw:0,0')
out.setchannels(channels)
out.setrate(framerate)
out.setformat(audioformat)
out.setperiodsize(periodsize)

# Reading the input
allData = bytearray()
count = 0
while True:
    # read the input into one long bytearray
    l, data = inp.read()
    for b in data:
        allData.append(b)
    # just an ending condition
    count += 1
    if count == 4000:
        break
    time.sleep(.001)

# split the bytearray into period-sized chunks
list1 = [allData[i:i+periodsize] for i in range(0, len(allData), periodsize)]

# Writing the output
for arr in list1:
    # I tested writing the arr's to a wave file at this point
    # and the wave file was fine
    out.write(arr)
Edit: Maybe I should mention that I am using Python 3.
I just found the answer. The format audio.PCM_FORMAT_FLOAT_LE isn't the one used by my headset (I just copied and pasted it without a second thought).
I found out about my microphone's format (and additional information) by running speaker-test in the console.
Since my speaker's format is S16_LE, the code works fine with audioformat = audio.PCM_FORMAT_S16_LE.
Consider using plughw (the ALSA plug layer, which supports resampling/conversion) for at least the sink part of the chain:
# Output device
out = audio.PCM(audio.PCM_PLAYBACK, device='plughw:0,0')
This should help to negotiate the sampling rate as well as the data format.
The periodsize is better estimated as a fraction of the sample rate, for example:
periodsize = framerate / 8  # 1/8 of a second at an 8000 Hz sampling rate
and the sleep time is better estimated as half of the time needed to play one period:
sleeptime = 1.0 / 16  # seconds, i.e. half of periodsize / framerate
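For example, a sketch of those rules of thumb applied to the question's 8000 Hz setup (the divisors are heuristics and the device names are placeholders, so tune both):

import time
import alsaaudio as audio

framerate = 8000
periodsize = framerate // 8                      # 1000 frames = 1/8 s of audio
sleeptime = (periodsize / float(framerate)) / 2  # half a period = 0.0625 s

inp = audio.PCM(audio.PCM_CAPTURE, audio.PCM_NONBLOCK, device='plughw:1,0')
inp.setrate(framerate)
inp.setperiodsize(periodsize)

out = audio.PCM(audio.PCM_PLAYBACK, device='plughw:0,0')
out.setrate(framerate)
out.setperiodsize(periodsize)

while True:
    l, data = inp.read()   # non-blocking: l is 0 when no full period is ready
    if l > 0:
        out.write(data)
    time.sleep(sleeptime)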
I feel like this is a fairly common problem, but I haven't yet found a suitable answer. I have many audio files of human speech that I would like to break on words, which can be done heuristically by looking at pauses in the waveform. Can anyone point me to a function/library in Python that does this automatically?
An easier way to do this is using the pydub module. The recently added silence utilities do all the heavy lifting, such as setting up the silence threshold and the silence length, and simplify the code significantly compared to the other methods mentioned.
Here is a demo implementation, inspiration from here
Setup:
I had an audio file with the spoken English letters from A to Z in the file "a-z.wav". A sub-directory splitAudio was created in the current working directory. Upon executing the demo code, the files were split into 26 separate files, each audio file storing one syllable.
Observations:
Some of the syllables were cut off, possibly needing modification of the following parameters:
min_silence_len=500
silence_thresh=-16
One may want to tune these to one's own requirements.
Demo Code:
from pydub import AudioSegment
from pydub.silence import split_on_silence

sound_file = AudioSegment.from_wav("a-z.wav")
audio_chunks = split_on_silence(sound_file,
    # must be silent for at least half a second
    min_silence_len=500,
    # consider it silent if quieter than -16 dBFS
    silence_thresh=-16
)

for i, chunk in enumerate(audio_chunks):
    out_file = ".//splitAudio//chunk{0}.wav".format(i)
    print "exporting", out_file
    chunk.export(out_file, format="wav")
Output:
Python 2.7.9 (default, Dec 10 2014, 12:24:55) [MSC v.1500 32 bit (Intel)] on win32
Type "copyright", "credits" or "license()" for more information.
>>> ================================ RESTART ================================
>>>
exporting .//splitAudio//chunk0.wav
exporting .//splitAudio//chunk1.wav
exporting .//splitAudio//chunk2.wav
exporting .//splitAudio//chunk3.wav
exporting .//splitAudio//chunk4.wav
exporting .//splitAudio//chunk5.wav
exporting .//splitAudio//chunk6.wav
exporting .//splitAudio//chunk7.wav
exporting .//splitAudio//chunk8.wav
exporting .//splitAudio//chunk9.wav
exporting .//splitAudio//chunk10.wav
exporting .//splitAudio//chunk11.wav
exporting .//splitAudio//chunk12.wav
exporting .//splitAudio//chunk13.wav
exporting .//splitAudio//chunk14.wav
exporting .//splitAudio//chunk15.wav
exporting .//splitAudio//chunk16.wav
exporting .//splitAudio//chunk17.wav
exporting .//splitAudio//chunk18.wav
exporting .//splitAudio//chunk19.wav
exporting .//splitAudio//chunk20.wav
exporting .//splitAudio//chunk21.wav
exporting .//splitAudio//chunk22.wav
exporting .//splitAudio//chunk23.wav
exporting .//splitAudio//chunk24.wav
exporting .//splitAudio//chunk25.wav
exporting .//splitAudio//chunk26.wav
>>>
You could look at Audiolab. It provides a decent API to convert the voice samples into numpy arrays.
The Audiolab module uses the libsndfile C library to do the heavy lifting.
You can then parse the arrays to find the low values and locate the pauses.
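As a rough illustration of that idea, here is a sketch that marks quiet stretches using a simple amplitude threshold; the threshold and minimum pause length are arbitrary values that will need tuning for real recordings:

import numpy as np

def find_pauses(samples, rate, threshold=0.02, min_pause=0.2):
    # return (start, end) sample indices of stretches quieter than
    # threshold * peak that last at least min_pause seconds
    quiet = np.abs(samples) < threshold * np.abs(samples).max()
    pauses, start = [], None
    for i, q in enumerate(quiet):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if (i - start) / float(rate) >= min_pause:
                pauses.append((start, i))
            start = None
    return pauses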
Use IBM STT. Using timestamps=true you will get the word breakup along with when the system detects each word to have been spoken.
There are a lot of other cool features, like word_alternatives_threshold to get other possibilities for words, and word_confidence to get the confidence with which the system predicts the word. Set word_alternatives_threshold to between 0.01 and 0.1 to get a real idea.
This needs a sign-up, following which you can use the generated username and password.
The IBM STT is already part of the speechrecognition module mentioned, but to get the word timestamps you will need to modify the function.
An extracted and modified form looks like:
import base64
import json
import speech_recognition as sr
from urllib.parse import urlencode
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def extracted_from_sr_recognize_ibm(audio_data, username=IBM_USERNAME, password=IBM_PASSWORD, language="en-US",
                                    show_all=False, timestamps=False,
                                    word_confidence=False, word_alternatives_threshold=0.1):
    assert isinstance(username, str), "``username`` must be a string"
    assert isinstance(password, str), "``password`` must be a string"

    flac_data = audio_data.get_flac_data(
        convert_rate=None if audio_data.sample_rate >= 16000 else 16000,  # audio samples should be at least 16 kHz
        convert_width=None if audio_data.sample_width >= 2 else 2         # audio samples should be at least 16-bit
    )
    url = "https://stream-fra.watsonplatform.net/speech-to-text/api/v1/recognize?{}".format(urlencode({
        "profanity_filter": "false",
        "continuous": "true",
        "model": "{}_BroadbandModel".format(language),
        "timestamps": "{}".format(str(timestamps).lower()),
        "word_confidence": "{}".format(str(word_confidence).lower()),
        "word_alternatives_threshold": "{}".format(word_alternatives_threshold)
    }))
    request = Request(url, data=flac_data, headers={
        "Content-Type": "audio/x-flac",
        "X-Watson-Learning-Opt-Out": "true",  # prevent requests from being logged, for improved privacy
    })
    authorization_value = base64.standard_b64encode("{}:{}".format(username, password).encode("utf-8")).decode("utf-8")
    request.add_header("Authorization", "Basic {}".format(authorization_value))

    try:
        response = urlopen(request, timeout=None)
    except HTTPError as e:
        raise sr.RequestError("recognition request failed: {}".format(e.reason))
    except URLError as e:
        raise sr.RequestError("recognition connection failed: {}".format(e.reason))
    response_text = response.read().decode("utf-8")
    result = json.loads(response_text)

    # return results
    if show_all:
        return result
    if "results" not in result or len(result["results"]) < 1 or "alternatives" not in result["results"][0]:
        raise Exception("Unknown Value Exception")

    transcription = []
    for utterance in result["results"]:
        if "alternatives" not in utterance:
            raise Exception("Unknown Value Exception. No Alternatives returned")
        for hypothesis in utterance["alternatives"]:
            if "transcript" in hypothesis:
                transcription.append(hypothesis["transcript"])
    return "\n".join(transcription)
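A usage sketch (the credentials are placeholders; audio_data comes from the speech_recognition package, which the function above was extracted from):

import speech_recognition as sr

r = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    audio_data = r.record(source)

# show_all=True returns the raw JSON, which contains the per-word timestamps
result = extracted_from_sr_recognize_ibm(audio_data, username="xxx", password="yyy",
                                         timestamps=True, show_all=True)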
pyAudioAnalysis can segment an audio file if the words are clearly separated (this is rarely the case in natural speech). The package is relatively easy to use:
python pyAudioAnalysis/pyAudioAnalysis/audioAnalysis.py silenceRemoval -i SPEECH_AUDIO_FILE_TO_SPLIT.mp3 --smoothing 1.0 --weight 0.3
More details on my blog.
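The same segmentation can also be done from Python. In recent releases the call looks roughly like the sketch below, but the function names and signatures have changed between versions, so treat this as an approximation and check your installed version:

from pyAudioAnalysis import audioBasicIO
from pyAudioAnalysis import audioSegmentation as aS

sampling_rate, signal = audioBasicIO.read_audio_file("SPEECH_AUDIO_FILE_TO_SPLIT.mp3")
# returns a list of [start_sec, end_sec] pairs for the non-silent segments
segments = aS.silence_removal(signal, sampling_rate, 0.020, 0.020,
                              smooth_window=1.0, weight=0.3)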
My variant of the function, which will probably be easier to modify for your needs:
from scipy.io.wavfile import write as write_wav
import numpy as np
import librosa


def zero_runs(a):
    iszero = np.concatenate(([0], np.equal(a, 0).view(np.int8), [0]))
    absdiff = np.abs(np.diff(iszero))
    ranges = np.where(absdiff == 1)[0].reshape(-1, 2)
    return ranges


def split_in_parts(audio_path, out_dir):
    # Some constants
    min_length_for_silence = 0.01  # seconds
    percentage_for_silence = 0.01  # eps value for silence
    required_length_of_chunk_in_seconds = 60  # chunk will be around this value, not exact
    sample_rate = 16000  # set to None to use default

    # Load audio
    waveform, sampling_rate = librosa.load(audio_path, sr=sample_rate)

    # Create mask of silence
    eps = waveform.max() * percentage_for_silence
    silence_mask = (np.abs(waveform) < eps).astype(np.uint8)

    # Find where silence starts and ends
    runs = zero_runs(silence_mask)
    lengths = runs[:, 1] - runs[:, 0]

    # Keep only large silence ranges
    min_length_for_silence = min_length_for_silence * sampling_rate
    large_runs = runs[lengths > min_length_for_silence]
    lengths = lengths[lengths > min_length_for_silence]

    # Mark only the center of each silence
    silence_mask[...] = 0
    for start, end in large_runs:
        center = (start + end) // 2
        silence_mask[center] = 1

    min_required_length = required_length_of_chunk_in_seconds * sampling_rate
    chunks = []
    prev_pos = 0
    for i in range(min_required_length, len(waveform), min_required_length):
        start = i
        end = i + min_required_length
        next_pos = start + silence_mask[start:end].argmax()
        part = waveform[prev_pos:next_pos].copy()
        prev_pos = next_pos
        if len(part) > 0:
            chunks.append(part)

    # Add the last part of the waveform
    part = waveform[prev_pos:].copy()
    chunks.append(part)
    print('Total chunks: {}'.format(len(chunks)))

    new_files = []
    for i, chunk in enumerate(chunks):
        out_file = out_dir + "chunk_{}.wav".format(i)
        print("exporting", out_file)
        write_wav(out_file, sampling_rate, chunk)
        new_files.append(out_file)

    return new_files
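A minimal usage sketch (the paths are examples; note that out_dir is concatenated directly, so it should end with a separator):

files = split_in_parts("long_recording.wav", "chunks/")
print(files)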
I have three ATmega32s sending three sets of sensor values (accelerometer, gyroscope, magnetometer) to a Bluetooth module using the SPI protocol. I've received the Bluetooth data, but it's coming in like this:
0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,54.00
0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,51.00
0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,0.00,53.00
It's being received as CSV data and I am unable to separate the individual components. I want to store them in separate variables (like x1, y1, z1, ...).
Here's my code:
# Author: P. Vinod Ranganath
# Description: Receiving data from ATmega32 via bluetooth
# Date: 27/04/2015
import bluetooth
import sys

# address of the bluetooth device
addr = "00:06:66:61:1E:76"
# port number
port = 1

# create socket and connect to it
socket = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
socket.connect((addr, port))

# if there is an incoming transmission, enter loop
while True:
    # receive transmitted data
    data = socket.recv(1024)
    # print data
    sys.stdout.write(data)
Python has a csv module that does all the CSV parsing you'd typically want.
In your case, it's even simpler: strings in Python have the split(char) method, so
data.split(",")
should give you a list of the substrings:
["0.0", "0.0", ....
Now, you want to have floating-point numbers, right?
Stick to your favourite Python tutorial (really, read one!) and do
values = [float(substring) for substring in data.split(",")]
so that values is a list of floating-point numbers.
You can then do something like
x1 = values[10]
But you typically don't want to do that; keeping this kind of data in a list is usually more useful.
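One complication the snippet above glosses over: socket.recv(1024) can return partial lines, so it is safer to buffer the incoming bytes and split only on complete lines. A sketch, assuming Python 3 and that the sender terminates each reading with a newline:

buffer = b""
while True:
    buffer += socket.recv(1024)
    # process only complete lines; keep any trailing fragment in the buffer
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        fields = line.decode("ascii").strip()
        if fields:
            values = [float(s) for s in fields.split(",")]
            print(values)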
def get_words(data):
    l = []
    w = ''
    for c in data.lower():
        if c in '\r \n ,':
            if w != '':
                l.append(w)
            w = ''
        else:
            w = w + c
    if w != '':
        l.append(w)
    return l
I found this on Stack Overflow.
It separates the different elements, and the float conversion proceeds without errors. But the list isn't getting filled in order (the last item ends up in second place, and so on, seemingly at random).