I am working on a project where we want to read data from a Universal Robots arm, such as position and force data, and then store that data in a text file for later reference. We can receive the data just fine, but turning it into readable coordinates is the issue. An example data string is below:
b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80\xbf\x00\x00\x80\xbf\x00\x00\x80\xbf\x00\x00\x80\xbf\x00\x00\x80\xbf\x00\x00\x80\xbf\x00\x00\xc0?\x00\x00\x16C\x00\x00\xc0?\x00\x00\x16C\x00\x00\x00?\xcd\xcc\xcc>\x00\x00\x96C\x00\x00\xc8A\x1e\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x88\xfb\x7f?\xd0M><\xc0G\x9e:tNT?\r\x11\x07\xbc\xb9\xfd\x7f?~\xa0\xa1:\x03\x02+?\x16\xeb\x7f\xbf#\xce\xcc\xbc9\xdfl\xbbq\xc3\x8a>i\x19T<\xf3\xf9\x7f\xbf\xb4k\x87\xbb->\xc2>\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x80?\xdb\x0f\xc9#\xa7\xdcU#\xa7\xdcU#\xa7\xdcU#\xa7\xdcU#\xa7\xdcU#\xa7\xdcU#\xfe\xff\xff\xff\xfe\xff\xff\xff\xfe\xff\xff\xff\xfe\xff\xff\xff\xfe\xff\xff\xff\xff\xff\xff\xff\xecb\xc7#\xecb\xc7#\xecb\xc7#\
*not entire string received
At first I thought it was hex so I tried the code:
packet_12 = packet_12.encode('hex')
x = str(packet_12)
x = struct.unpack('!d', packet_12.decode('hex'))[0]
all_data.write("X=", x * 1000)
But to no avail. I tried several different decoding methods using codecs and .encode, but none worked. I found on a different post here the two code blocks below:
y = codecs.decode(packet_12, 'utf-8', errors='ignore')
packet_12 = s.recv(8)
z = str(packet_12)
x = ''.join('%02x' % ord(c) for c in packet_12)
Neither worked for my application. Finally I tried saving the entire string in a .txt file, opening it with Python, and decoding it with the code below, but again nothing seemed to happen.
with io.open('C:\\Users\\myuser\\Desktop\\decode.txt', 'r', encoding='utf8') as f:
    text = f.read()
with io.open('C:\\Users\\myuser\\Desktop\\decode', 'w', encoding='utf8') as f:
    f.write(text)
I am aware I might be missing something incredibly simple, such as using the wrong decoding type, or the robot output might even be gibberish, but any help is appreciated.
The easiest way to receive data from the robot with Python is to use Universal Robots' Real-Time Data Exchange (RTDE) interface. They offer some Python examples for receiving and sending data.
Check out my GitHub repo for an example code which is based on the official code from UR:
https://github.com/jonenfabian/Read_Data_From_Universal_Robots
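Whichever interface you use, the raw bytes are decoded with struct.unpack rather than string codecs. Below is a minimal sketch; the format string and the stand-in packet are assumptions (whether your controller streams 4-byte floats or 8-byte doubles, and at which offsets, depends on the interface version, so check its data sheet):

```python
import struct

# Stand-in packet: six big-endian doubles, as the real-time interface
# typically streams a TCP pose (assumption; verify against your
# controller's documentation).
packet = struct.pack('!6d', 0.1, -0.2, 0.3, 1.0, 0.0, 0.0)

pose = struct.unpack('!6d', packet)  # tuple of six floats
x_mm = pose[0] * 1000                # metres to millimetres
print(pose, x_mm)
```

The key point is that the data is not hex text at all; it is packed binary, so struct is the right tool.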
I am having a problem importing a set of keywords in Russian into a script I'm writing for extracting and counting those keywords in a corpus of historical texts that I'm working on.
My code looks like this:
f = open('keyword_rayoni.txt', 'r', 'utf-8')
allKeywords = f.read().lower().split("\n")
f.close()
print(allKeywords)
I get a TypeError: an integer is required (got type str)
I used the same code on an English set of keywords and it worked. I also tried to set the encoding for the Russian keywords to UTF-8, but it didn't solve the problem. Could you please help?
You are using the open function incorrectly. Enter help(open) in a Python console to get the documentation for open. If you read it, you will see that the third positional argument is buffering, a different parameter that takes an int (but you are passing it the string 'utf-8', see?)
Try:
f = open('blah.txt', 'r', encoding='utf-8')
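For completeness, here is a minimal corrected version of the snippet, assuming the keyword file really is UTF-8 encoded (the sample file written first is just a stand-in for the real keyword list):

```python
# A tiny sample file standing in for the real keyword list.
with open('keyword_rayoni.txt', 'w', encoding='utf-8') as f:
    f.write('Москва\nРоссия')

# 'encoding' must be passed by keyword; the third positional
# argument of open() is 'buffering', which expects an int.
with open('keyword_rayoni.txt', 'r', encoding='utf-8') as f:
    allKeywords = f.read().lower().split('\n')
print(allKeywords)  # ['москва', 'россия']
```

Using a with block also closes the file for you, so the explicit f.close() is no longer needed.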
(I apologize for my limited English, which may make the question unclear.)
I'm using Python to write data sent from an Arduino into a CSV file. I want around 200 values in a group, one group per row, with each value in its own column. The data from my Arduino comes as a number followed by a comma (for example: 123,144,135,...), but in the CSV file each number gets split into separate columns (1, 2, 3 in different columns instead of 123 in one column), and when I open the file in a plain text editor the data looks like "1,2,3","1,4,4",...
I tried different delimiters like \t and space. \t looks fine when I view the file in Excel, but it still doesn't work in the text editor (a tab between every two digits).
I also tried deleting the "," in the Arduino code, but that doesn't help either.
In the writerows() call I tried data, str(data), and str(data)+",", with not much difference.
I even changed my laptop's list-separator setting from "," to "\t", but that doesn't help either.
The arduino part:
Serial.print(value);
Serial.print(",");
The python part:
while True:
    try:
        ser_bytes = ser.readline()
        decoded_bytes = ser_bytes.decode('utf-8')
        print(decoded_bytes)
        #decoded_bytes = decoded_bytes.strip('|')
        with open("test_data.csv", "a", newline="") as f:
            writer = csv.writer(f, delimiter=",")
            writer.writerows([str(decoded_bytes),])
    except KeyboardInterrupt:
        break
I searched a lot about the csv format but I still can't get the point why the code doesn't work.
Thank you for the help.
You're right, I don't think I totally got your question at first, but here are some ideas. To get correct CSV output, you have to change your code to something like this:
while True:
    try:
        ser_bytes = ser.readline()
        # split the line you got from your Arduino into a list of values
        decoded_bytes = ser_bytes.decode('utf-8').strip().split(",")
        print(decoded_bytes)
        with open("test_data.csv", "a") as f:
            writer = csv.writer(f, delimiter=",")
            writer.writerow(decoded_bytes)
    except KeyboardInterrupt:
        break
This way you should get correct CSV output, and every line you get from the Arduino is written as one row in the file. The original code failed because writerows() was given a single string, and csv treats a string as a sequence of characters, so each digit became its own column.
Some additional thoughts: since you're already getting a CSV-style line from your Arduino, you could write it to the file directly, without splitting and using the csv writer. The csv module is actually a little overkill here, but it probably doesn't matter much ;)
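A minimal sketch of that direct approach, assuming each line from the Arduino is already comma-separated (the hard-coded bytes stand in for ser.readline() on an open pyserial connection):

```python
# Stand-in for ser_bytes = ser.readline() on an open serial.Serial
# connection (hypothetical; pyserial is not imported here).
ser_bytes = b"123,144,135\r\n"

line = ser_bytes.decode('utf-8').strip()
with open("test_data.csv", "a") as f:
    f.write(line + "\n")  # the line is already valid CSV
```

strip() removes the trailing \r\n the Arduino's Serial.println sends, so each group lands on exactly one row.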
I'm working on a program that uses a BMP and a separate file for the transparency layer, and I need to combine them into a PNG, so I'm using PIL in Python to do so. However, I need the data from the transparency file in hex so it can be added to the image; I am using the binascii.hexlify function for that.
Now, the problem I'm having is for some reason the data, after going through the hexlify function (I've systematically narrowed it down by going through my code piece by piece), looks different than it does in my hex editor and is causing slight distortions in the images. I can't seem to figure out where I am going wrong.
Data before processing in Hex editor
Data after processing in Hex editor
Here is the problematic part of my code:
filename = askopenfilename(parent=root)
with open(filename, 'rb') as f:
    content = f.read()
hexContent = binascii.hexlify(content).decode("utf-8")
My input
My output (This is hexcontent written to a file. Since I know that it is not going wrong in the writing of the file, and it is also irrelevant to my actual program I did not add that part to the code snippet)
Before anyone asks I tried codecs.encode(content, 'hex') and binascii.b2a_hex(content).
As for how I know that it is this part that is messing up, I printed out binascii.hexlify(content) and found the same part as in the hex editor and it looked identical to what I had got in the end.
Another possibility is that it's going wrong in the open(filename, 'rb') step; I haven't yet thought of a way to test that. Any help or suggestions would be appreciated. If you need one of the files I'm using for testing purposes, I'll gladly add it here.
If I understand your question correctly then your desired output should match Data before processing in Hex editor. I can obtain this with the following code:
with open('Input.alp', 'rb') as f:
    for i, chunk in enumerate(iter(lambda: f.read(16), b'')):
        if 688 <= i * 16 <= 736:
            print i * 16, chunk.encode('hex')
Outputs:
688 ffffffffffffffffffffffffffffffff
704 ffffffffffffffffffffffe000000000
720 000000000000000001dfffffffffffff
736 ffffffffffffffffffffffffffffffff
See this answer for a more detailed explanation.
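Note the snippet above is Python 2. On Python 3, bytes objects no longer have .encode('hex'); the same loop can be written with bytes.hex(). A sketch with a stand-in stream (the real code would open the file with open('Input.alp', 'rb')):

```python
import io

# Stand-in for the opened binary file; any binary stream works.
f = io.BytesIO(b'\xff' * 16 + b'\x00' * 16)

# Read the stream in 16-byte chunks and print each chunk's offset
# and hex representation, mirroring a hex editor's layout.
for i, chunk in enumerate(iter(lambda: f.read(16), b'')):
    print(i * 16, chunk.hex())
```

This prints `0 ffffffffffffffffffffffffffffffff` followed by `16 00000000000000000000000000000000`, matching the offset/hex layout shown in the answer.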
I have code that updates CSVs from a server. It gets the data using:
a = urllib.urlopen(url)
data = a.read().strip()
Then I append the data to the CSV with:
f = open(filename+".csv", "ab")
f.write(ndata)
f.close()
The problem is that, randomly, a line in the CSV gets written like this (or a line break appears somewhere along the CSV):
2,,,,,
015-04-21 13:00:00,18,998,50,31,2293
instead of its usual form:
2015-04-21 13:00:00,6,1007,29,25,2394
2015-04-21 13:00:00,7,1004,47,26,2522
I tried printing my data in the shell after the program ran, and the broken CSV entry actually appears to be normal there.
Hope you guys can help me out. Thanks.
Running Python 2.7.9 on Windows 8.1.
What actions are performed on your "ndata" variable?
You should use the csv module to manage CSV files: https://docs.python.org/2/library/csv.html
Edit after comment:
If you do not want to use the "csv" module I linked to you, instead of
a = urllib.urlopen(url)
data = a.read().strip()
ndata = data.split('\n')
f.write('\n'.join(ndata[1:]))
you should do this :
a = urllib.urlopen(url)
f.writelines(a.readlines()[1:])
I don't see any reason explaining your randomly unwanted "\n" if you are sure that your incoming data is correct. Do you handle very long lines?
I recommend you use the csv module to read your input: you'll be sure to have valid CSV content if your input is correct.
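As a sketch of that, assuming the downloaded text is already in memory (the data string and column names here are placeholders standing in for urllib.urlopen(url).read()):

```python
import csv

# Placeholder for the text returned by urllib.urlopen(url).read()
data = "ts,station,pressure\n2015-04-21 13:00:00,6,1007\n2015-04-21 13:00:00,7,1004"

# csv.reader accepts any iterable of lines and handles quoting,
# embedded commas, and line endings for you.
rows = list(csv.reader(data.splitlines()))
for row in rows[1:]:  # skip the header line, as in the question
    print(row)
```

Parsing through csv.reader before re-writing makes a truncated or split line show up immediately as a row with the wrong number of fields, which is much easier to detect than a stray "\n" in raw text.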
I've worked through several similar posts on this issue, but to no avail: I output a list of lists to a CSV file, but the special characters are not showing properly because Excel isn't reading it as UTF-8.
I'm a beginner, so I'm struggling to implement some of the workarounds people have used, such as writing UTF-16 and using BOMs. My latest attempt was to add the BOM to my output CSV, but it's not working.
with open(outputname, "wb") as f:
    writer = csv.writer(f)
    writer.writerows(my_list)
    f.write(u'\ufeff'.encode('utf8')) # BOM
I've tried some more complicated ways like using UnicodeWriter, but with no luck. Any ideas would be appreciated!
I finally figured this out, so just in case it's useful for anyone else: I got around it by decoding my CSV content (stored in a nested list prior to exporting to CSV) and then re-encoding it as 'cp1252', which I believe is the encoding used by Excel on Windows. I haven't tested this on Mac yet, but it certainly works on Windows.
for i in range(len(nestedlist)):
    for j in range(7):
        x = nestedlist[i][j]
        y = unicode(x, 'utf-8')
        nestedlist[i][j] = y.encode('cp1252')
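If you're on Python 3 rather than Python 2, the whole re-encoding dance can be avoided by letting open() write the BOM for you with the 'utf-8-sig' codec. A minimal sketch (the filename and sample data are placeholders):

```python
import csv

# Sample content with non-ASCII characters, standing in for the real list.
my_list = [["na\u00efve", "caf\u00e9"], ["100", "200"]]

# 'utf-8-sig' writes the UTF-8 BOM first, which is what Excel keys on
# to detect the encoding; newline="" is what the csv module expects.
with open("output.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(my_list)
```

Unlike cp1252, this keeps the full Unicode range, so characters outside the Western European set survive the round trip.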