Encoding Spyne SOAP XML response with Latin-1 - python

I recently set up a Spyne application for sending an XML response. As it stands now, the application is correctly sending the response -- however, it is currently sending a UTF-8 encoded document. I would like to instead send the document as Latin-1 (iso-8859-1) encoded.
I've tried to use the "encoding=" argument, but it seems to have no effect on the response beyond changing the header.
Below is the code for my application:
import logging
from spyne import Application, rpc, ServiceBase, Integer, Unicode, AnyDict
from spyne import Iterable
from spyne.protocol.soap import Soap11
from spyne.protocol.xml import XmlDocument
from spyne.server.wsgi import WsgiApplication
class CoreService(ServiceBase):
    @rpc(Unicode, Unicode, Integer, Integer, _returns=AnyDict)  # rpc arguments correspond to the retrieve_score() arguments below
    def retrieve_score(ctx):
        return score  # This is a dictionary

application = Application([CoreService], 'spyne.iefp.soap',
    in_protocol=Soap11(validator='lxml'),
    out_protocol=XmlDocument(polymorphic=True, encoding='iso-8859-1'))

wsgi_application = WsgiApplication(application)

if __name__ == '__main__':
    import logging
    from wsgiref.simple_server import make_server
    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('spyne.protocol.xml').setLevel(logging.DEBUG)
    logging.info("listening on port 8000")
    logging.info("wsdl is at: http://10.10.28.84:8000/?wsdl")
    server = make_server('0.0.0.0', 8000, wsgi_application)
    server.serve_forever()

I fixed your code (and switched to HttpRpc for input, as I didn't have a SOAP client handy) and ran it. It works for me.
import logging
from spyne import Application, rpc, ServiceBase, Integer, Unicode, AnyDict
from spyne import Iterable
from spyne.protocol.soap import Soap11
from spyne.protocol.http import HttpRpc
from spyne.protocol.xml import XmlDocument
from spyne.server.wsgi import WsgiApplication
class CoreService(ServiceBase):
    @rpc(Unicode, Unicode, Integer, Integer, _returns=AnyDict)
    def retrieve_score(ctx, s1, s2, i1, i2):
        return {'rain': u'yağmur'}  # This is a dictionary

application = Application([CoreService], 'spyne.iefp.soap',
    in_protocol=HttpRpc(),
    out_protocol=XmlDocument(polymorphic=True, encoding='iso-8859-9'))

wsgi_application = WsgiApplication(application)

if __name__ == '__main__':
    import logging
    from wsgiref.simple_server import make_server
    logging.basicConfig(level=logging.DEBUG)
    logging.getLogger('spyne.protocol.xml').setLevel(logging.DEBUG)
    logging.info("listening on port 8000")
    logging.info("wsdl is at: http://127.0.0.1:8000/?wsdl")
    server = make_server('0.0.0.0', 8000, wsgi_application)
    server.serve_forever()
The excerpt from curl -s localhost:8000/retrieve_score | hexdump -C is:
00000070 65 5f 73 63 6f 72 65 52 65 73 75 6c 74 3e 3c 72 |e_scoreResult><r|
00000080 61 69 6e 3e 79 61 f0 6d 75 72 3c 2f 72 61 69 6e |ain>ya.mur</rain|
00000090 3e 3c 2f 6e 73 30 3a 72 65 74 72 69 65 76 65 5f |></ns0:retrieve_|
Where you have 0xF0 for 'ğ', which is correct, according to: https://en.wikipedia.org/wiki/ISO/IEC_8859-9
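As a side note on why the two encodings behave differently here: 'ğ' (U+011F) does not exist in Latin-1 at all, while ISO-8859-9 maps it to 0xF0, which is exactly the byte in the hexdump above. A quick Python 3 check:

```python
text = u'yağmur'

# ISO-8859-9 (Latin-5, Turkish) maps 'ğ' to 0xF0 -- the byte seen in the hexdump
assert text.encode('iso-8859-9') == b'ya\xf0mur'

# Latin-1 (ISO-8859-1) has no 'ğ' at all, so encoding fails outright
try:
    text.encode('iso-8859-1')
except UnicodeEncodeError:
    print("latin-1 cannot represent %r" % text)
```

So if the payload really must contain characters like 'ğ', Latin-1 is not an option, regardless of what Spyne puts in the XML declaration.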

Related

Python NMEA GNSS Cold Start command

On my hardware, I map the USB port to a COM port using the internal usb_transit_on command. After that, I connect to the port with a terminal program, and when I enter the command "b5 62 06 04 04 00 ff ff 00 00 0c 5d" the receiver performs a cold restart, and I note the time the satellites take. The task is to do the same without the program. The question is whether it is possible, and if so, how to send the command "b5 62 06 04 04 00 ff ff 00 00 0c 5d" or "$PMTK103*30".
I tried with Python, but nothing happened.
import time
import pynmea2
import serial
import csv
import io

def status():
    # ser = serial.Serial('COM12')
    ser = serial.Serial(
        port="COM12",
        baudrate=9600,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE
    )
    print("Waiting for data")
    ser.write(b"b5 62 06 04 04 00 ff ff 00 00 0c 5d")
    while True:
        message = ser.readline().decode()
        message = message.strip()
        if "$GNRMC" in message:
            gnrmc = pynmea2.parse(message)
            gnrmc_status = gnrmc.status
            return gnrmc_status
        else:
            continue

print(status())
I thought this was how you send a command to the GNSS module.
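One likely problem (an observation, not a tested fix for this particular receiver): `ser.write(b"b5 62 ...")` sends the 35 ASCII characters of that string, not the 12 raw bytes of the UBX cold-start message. The hex text can be converted into real bytes with `bytes.fromhex()`, and the UBX checksum of the command can even be verified with the standard 8-bit Fletcher algorithm:

```python
# Convert the hex text into the 12 raw bytes of the UBX CFG-RST command
cmd = bytes.fromhex("b5 62 06 04 04 00 ff ff 00 00 0c 5d")
assert len(cmd) == 12 and cmd[:2] == b'\xb5\x62'  # UBX sync chars

# UBX checksum: 8-bit Fletcher over class, id, length and payload bytes
ck_a = ck_b = 0
for byte in cmd[2:-2]:
    ck_a = (ck_a + byte) & 0xFF
    ck_b = (ck_b + ck_a) & 0xFF
assert (ck_a, ck_b) == (0x0C, 0x5D)  # matches the last two command bytes

# With a real port open you would then send the raw bytes, e.g.:
# ser.write(cmd)
```

Whether the module then actually cold-starts still depends on it accepting UBX input on that port, but at least the bytes on the wire would match what the vendor program sends.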

python socket; my connection is breaking my data stream

I am trying to connect to a socket server and view/download a polling stream of temperature readings, i.e.
72.81
72.83
72.79
72.85
But what I get are float values split in half.
72
.35
72
.36
72
.36
72
.37
72
.38
72
.38
72
.38
72
.39
How do I output unbroken float values from a socket connection?
client code:
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("192.168.1.249", 8080))
    s.sendall(b"GET / HTTP/1.1\r\nHost: webcode.me\r\nAccept: text/html\r\nConnection: close\r\n\r\n")
    while True:
        data = s.recv(1024)
        if not data:
            break
        print(data.decode())
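TCP is a byte stream, so `recv()` can return the data split at any point; the reader has to reassemble records itself. A minimal sketch of line-based reassembly, fed here from a hard-coded chunk list instead of a live socket (the helper name is mine):

```python
def iter_lines(chunks):
    """Reassemble newline-terminated records from arbitrarily split chunks."""
    buffer = b""
    for chunk in chunks:
        buffer += chunk
        # Emit every complete line; keep the unterminated tail in the buffer
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            yield line.decode()

# Simulated recv() results: values split mid-float, as in the question
chunks = [b"72", b".81\n72", b".83\n72.79\n"]
print(list(iter_lines(chunks)))  # ['72.81', '72.83', '72.79']
```

In the client above, the same idea means accumulating `data` into a buffer and printing only complete lines, instead of printing each `recv()` result directly. This assumes the server terminates each reading with a newline; if it uses another delimiter, split on that instead.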

Can't connect to modbus RTU over TCP. What is wrong?

Proof that the device is working: ./modpoll -m enc -p4660 -t4:float -r60 -a3 192.168.1.1:
Protocol configuration: Encapsulated RTU over TCP
Slave configuration...: address = 3, start reference = 60, count = 1
Communication.........: 192.168.1.1, port 4660, t/o 1.00 s, poll rate 1000 ms
Data type.............: 32-bit float, output (holding) register table
TRACELOG: Set poll delay 0
TRACELOG: Set port 4660
TRACELOG: Open connection to 192.168.1.1
TRACELOG: Configuration: 1000, 1000, 0
-- Polling slave... (Ctrl-C to stop)
TRACELOG: Read multiple floats 3 60
TRACELOG: Send(6): 03 03 00 3B 00 02
TRACELOG: Recv(9): 03 03 04 6E 08 42 F7 35 FF
[60]: 123.714905
And here is how I try to connect with the pymodbus library:
from pymodbus.client.sync import ModbusTcpClient
from pymodbus.transaction import ModbusRtuFramer

client = ModbusTcpClient(host='192.168.1.1', port=4660, framer=ModbusRtuFramer, timeout=5)
client.connect()  # returns True
client.read_holding_registers(60, count=3, unit=0x03)
And get this result:
pymodbus.exceptions.ConnectionException: Modbus Error: [Connection] 192.168.1.1:4660
Modbus Error: [Connection] 192.168.1.1:4660
What am I doing wrong?
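As a sanity check on the trace above: in encapsulated RTU the reply 03 03 04 6E 08 42 F7 is followed by the Modbus CRC-16 (polynomial 0xA001, appended low byte first), which works out to 35 FF, exactly the trailing bytes modpoll received. A sketch of that check:

```python
def modbus_crc16(frame):
    """Modbus RTU CRC-16 (poly 0xA001, init 0xFFFF)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Reply from the modpoll trace, without its trailing CRC bytes
reply = bytes.fromhex("03 03 04 6E 08 42 F7")
crc = modbus_crc16(reply)
# The CRC is appended low byte first: 35 FF, as in TRACELOG: Recv(9)
print(crc.to_bytes(2, 'little').hex(' '))
```

So the device side of the link is consistent; the problem is on the pymodbus side (note, for instance, that the pasted snippet never assigned the `ModbusTcpClient(...)` result to `client`).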

why is it timing out like this?

I'm trying to build a TCP proxy script that sends and receives data. I managed to get it to listen, but it doesn't seem to be connecting properly. My code looks right to me, and after checking the Python docs (I'm trying to run it in Python 2.7 and 3.6) I get this timeout message:
Output:
anon@kali:~/Desktop/python scripts$ sudo python TCP\ proxy.py 127.0.0.1 21 ftp.target.ca 21 True
[*] Listening on 127.0.0.1:21
[==>] Received incoming connection from 127.0.0.1:44806
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 801, in __bootstrap_inner
self.run()
File "/usr/lib/python2.7/threading.py", line 754, in run
self.__target(*self.__args, **self.__kwargs)
File "TCP proxy.py", line 60, in proxy_handler
remote_socket.connect((remote_host,remote_port))
File "/usr/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 110] Connection timed out
I looked into the file "/usr/lib/python2.7/socket.py" but couldn't really tell what I was looking for; it seemed right when I compared it to the Python docs and my script.
my code:
# import the modules
import sys
import socket
import threading

# define the server
def server_loop(local_host, local_port, remote_host, remote_port, receive_first):
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        server.bind((local_host, local_port))
        server.listen(5)
        print("[*] Listening on %s:%s" % (local_host, local_port))
    except:
        print("[!!] Failed to listen on %s:%s" % (local_host, local_port))
        print("[!!] Check for other listening sockets or correct permissions")
        sys.exit(0)
    while True:
        client_socket, addr = server.accept()
        # print out the local connection information
        print("[==>] Received incoming connection from %s:%s" % (addr[0], addr[1]))
        # start a thread to talk to the remote host
        proxy_thread = threading.Thread(target=proxy_handler,
            args=(client_socket, remote_host, remote_port, receive_first))
        proxy_thread.start()
    else:
        print("something went wrong")

def main():
    # no fancy command-line parsing here
    if len(sys.argv[1:]) != 5:
        print("Usage: ./TCP proxy.py [localhost] [localport] [remotehost] [remoteport] [receive_first]")
        print("Example: ./TCP proxy.py 127.0.0.1 9000 10.12.132.1 9000 True")
        sys.exit(0)
    # set up local listening parameters
    local_host = sys.argv[1]
    local_port = int(sys.argv[2])
    # set up remote target
    remote_host = sys.argv[3]
    remote_port = int(sys.argv[4])
    # this tells the proxy to connect and receive data before sending to the remote host
    receive_first = sys.argv[5]
    if "True" in receive_first:
        receive_first = True
    else:
        receive_first = False
    # now spin up our listening socket
    server_loop(local_host, local_port, remote_host, remote_port, receive_first)

def proxy_handler(client_socket, remote_host, remote_port, receive_first):
    # connect to the remote host
    remote_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    remote_socket.connect((remote_host, remote_port))
    # receive data from the remote end if necessary
    if receive_first:
        remote_buffer = receive_from(remote_socket)
        hexdump(remote_buffer)
        # send it to the response handler
        remote_buffer = response_handler(remote_buffer)
        # if data is able to be sent to the local client, send it
        if len(remote_buffer):
            print("[<==] Sending %d bytes to localhost." % len(remote_buffer))
            client_socket.send(remote_buffer)
    # now loop: read from local, send to remote, send to local, rinse/wash/repeat
    while True:
        # read from local host
        local_buffer = receive_from(client_socket)
        if len(local_buffer):
            print("[==>] Received %d bytes from localhost." % len(local_buffer))
            # send it to the request handler
            local_buffer = request_handler(local_buffer)
            # send data to the remote host
            remote_socket.send(local_buffer)
            print("[==>] Sent to remote.")
        # receive back the response
        remote_buffer = receive_from(remote_socket)
        if len(remote_buffer):
            print("[<==] Received %d bytes from remote." % len(remote_buffer))
            hexdump(remote_buffer)
            # send the response to the handler
            remote_buffer = response_handler(remote_buffer)
            # send the response to the local socket
            client_socket.send(remote_buffer)
            print("[<==] Sent to localhost.")
        # if no data is left on either side, close the connections
        if not len(local_buffer) or not len(remote_buffer):
            client_socket.close()
            remote_socket.close()
            print("[*] No more data, closing connections.")
            break

# this is a pretty hex dumping function taken from the comments of
# http://code.activestate.com/recipes/142812-hex-dumper/
def hexdump(src, length=16):
    result = []
    digits = 4 if isinstance(src, unicode) else 2
    for i in xrange(0, len(src), length):
        s = src[i:i + length]
        hexa = b' '.join(["%0*X" % (digits, ord(x)) for x in s])
        text = b' '.join([x if 0x20 <= ord(x) < 0x7F else b'.' for x in s])
        result.append(b"%04X %-*s %s" % (i, length * (digits + 1), hexa, text))
    print(b'\n'.join(result))

def receive_from(connection):
    buffer = ""
    # set a 2-second timeout; depending on your target this may need to be adjusted
    connection.settimeout(2)
    try:
        # keep reading into the buffer until there is no more data or it times out
        while True:
            data = connection.recv(4096)
            if not data:
                break
            buffer += data
    except:
        pass
    return buffer

# modify any requests destined for the remote host
def request_handler(buffer):
    # perform packet modifications
    return buffer

# modify any responses destined for the local host
def response_handler(buffer):
    # perform packet modifications
    return buffer

main()
I have tried different FTP servers/sites, etc., but get the same result. Where am I going wrong with my code? Any input or direction would be greatly appreciated.
Okay, so it turns out my script is fine; the FTP servers I was running against weren't, haha.
this is the final output:
anon@kali:~/Desktop/python scripts$ sudo python TCP\ proxy.py 127.0.0.1 21 ftp.uconn.edu 21 True
[*] Listening on 127.0.0.1:21
[==>] Received incoming connection from 127.0.0.1:51532
0000 32 32 30 20 50 72 6F 46 54 50 44 20 31 2E 32 2E  2 2 0   P r o F T P D   1 . 2 .
0010 31 30 20 53 65 72 76 65 72 20 28 66 74 70 2E 75  1 0   S e r v e r   ( f t p . u
0020 63 6F 6E 6E 2E 65 64 75 29 20 5B 31 33 37 2E 39  c o n n . e d u )   [ 1 3 7 . 9
0030 39 2E 32 36 2E 35 32 5D 0D 0A  9 . 2 6 . 5 2 ]   . .
[<==] Sending 58 bytes to localhost.
[==>] Received 353 bytes from localhost.
[==>] Sent to remote.
[<==] Received 337 bytes from remote.
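For reference, `receive_from()` ports to Python 3 with a bytes buffer instead of a str one (a sketch; in Python 2, where this script runs, str already is bytes). It can be exercised locally with `socket.socketpair()`:

```python
import socket

def receive_from(connection, timeout=2.0):
    """Read until the peer stops sending or the timeout fires."""
    buffer = b""
    connection.settimeout(timeout)
    try:
        while True:
            data = connection.recv(4096)
            if not data:
                break
            buffer += data
    except socket.timeout:
        pass
    return buffer

# Local demonstration with a connected socket pair
left, right = socket.socketpair()
left.sendall(b"220 ProFTPD Server ready\r\n")
left.close()  # closing makes recv() return b"" instead of waiting for the timeout
print(receive_from(right))  # b'220 ProFTPD Server ready\r\n'
```

Catching only `socket.timeout` (rather than a bare `except:`) also keeps real errors, like a reset connection, from being silently swallowed.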

ZMQ: No subscription message on XPUB socket for multiple subscribers (Last Value Caching pattern)

I implemented the Last Value Caching (LVC) example of ZMQ (http://zguide.zeromq.org/php:chapter5#Last-Value-Caching), but can't get a 2nd subscriber to register at the backend.
The first time a subscriber comes on board, the event[0] == b'\x01' condition is met and the cached value is sent, but the second subscriber (same topic) doesn't even register (if backend in events: is never true). Everything else works fine. Data gets passed from the publisher to the subscribers (all).
What could be the reason for this? Is the way the backend is connected correct? Is this pattern only supposed to work with the first subscriber?
Update
When I subscribe the 2nd subscriber to another topic, I get the right behaviour (i.e. \x01 when subscribing). This really seems to work for the first subscriber only. Is it a bug in ZeroMQ?
Update 2
Here's a minimal working example that shows that the LVC pattern is not working (at least not the way it's implemented here).
# subscriber.py
import zmq

def main():
    ctx = zmq.Context.instance()
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://127.0.0.1:5558")
    # Subscribe to a single topic from the publisher
    print 'subscribing (sub side)'
    sub.setsockopt(zmq.SUBSCRIBE, b"my-topic")
    poller = zmq.Poller()
    poller.register(sub, zmq.POLLIN)
    while True:
        try:
            events = dict(poller.poll(1000))
        except KeyboardInterrupt:
            print("interrupted")
            break
        # Print any data received on the subscribed topic
        if sub in events:
            msg = sub.recv_multipart()
            topic, current = msg
            print 'received %s on topic %s' % (current, topic)

if __name__ == '__main__':
    main()
And here's the broker (as in the example, but with a bit more verbosity and an integrated publisher).
# broker.py
# from http://zguide.zeromq.org/py:lvcache
import zmq
import threading
import time

class Publisher(threading.Thread):
    def __init__(self):
        super(Publisher, self).__init__()

    def run(self):
        time.sleep(10)
        ctx = zmq.Context.instance()
        pub = ctx.socket(zmq.PUB)
        pub.connect("tcp://127.0.0.1:5557")
        cnt = 0
        while True:
            msg = 'hello %d' % cnt
            print 'publisher is publishing %s' % msg
            pub.send_multipart(['my-topic', msg])
            cnt += 1
            time.sleep(5)

def main():
    ctx = zmq.Context.instance()
    frontend = ctx.socket(zmq.SUB)
    frontend.bind("tcp://*:5557")
    backend = ctx.socket(zmq.XPUB)
    backend.bind("tcp://*:5558")
    # Subscribe to every single topic from publisher
    frontend.setsockopt(zmq.SUBSCRIBE, b"")
    # Store last instance of each topic in a cache
    cache = {}
    # We route topic updates from frontend to backend, and
    # we handle subscriptions by sending whatever we cached,
    # if anything:
    poller = zmq.Poller()
    poller.register(frontend, zmq.POLLIN)
    poller.register(backend, zmq.POLLIN)
    # launch a publisher
    p = Publisher()
    p.daemon = True
    p.start()
    while True:
        try:
            events = dict(poller.poll(1000))
        except KeyboardInterrupt:
            print("interrupted")
            break
        # Any new topic data we cache and then forward
        if frontend in events:
            msg = frontend.recv_multipart()
            topic, current = msg
            cache[topic] = current
            backend.send_multipart(msg)
        ### This is where it fails for the 2nd subscriber:
        ### there is never even an event from the backend
        ### in events when the 2nd subscriber is subscribing.
        # When we get a new subscription we pull data from the cache:
        if backend in events:
            print 'message from subscriber'
            event = backend.recv()
            # Event is one byte 0=unsub or 1=sub, followed by topic
            if event[0] == b'\x01':
                topic = event[1:]
                print ' => subscribe to %s' % topic
                if topic in cache:
                    print ("Sending cached topic %s" % topic)
                    backend.send_multipart([topic, cache[topic]])
            elif event[0] == b'\x00':
                topic = event[1:]
                print ' => unsubscribe from %s' % topic

if __name__ == '__main__':
    main()
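For clarity, the subscription event read from the XPUB socket (`backend.recv()` above) is a single frame: one flag byte, \x01 for subscribe or \x00 for unsubscribe, followed by the raw topic. A minimal Python 3 sketch of that decoding (the helper name is mine; note that under Python 3 you must compare `event[:1]` rather than `event[0]`, since indexing bytes yields an int):

```python
def parse_xpub_event(frame):
    """Decode an XPUB subscription frame: 1 flag byte + topic bytes."""
    flag, topic = frame[:1], frame[1:]
    if flag == b'\x01':
        return ('subscribe', topic)
    if flag == b'\x00':
        return ('unsubscribe', topic)
    raise ValueError('not a subscription frame')

print(parse_xpub_event(b'\x01my-topic'))    # ('subscribe', b'my-topic')
print(parse_xpub_event(b'\x00my-topic'))    # ('unsubscribe', b'my-topic')
```

The Python 2 comparison `event[0] == b'\x01'` in the broker works only because indexing a str there returns a one-character str.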
Running this code (1 x broker.py, 2 x subscriber.py) shows that the first subscriber registers at the broker as expected (\x01 and cache lookup), but the 2nd subscriber does not get registered the same way. Interestingly, the 2nd subscriber is hooked up to the pub/sub channel, as after a while (10 sec) both subscribers receive data from the publisher.
This is very strange. Perhaps some of my libraries are outdated. Here's what I got:
Python 2.7.9 (v2.7.9:648dcafa7e5f, Dec 10 2014, 10:10:46)
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import zmq
>>> zmq.__version__
'14.1.1'
$ brew info zeromq
zeromq: stable 4.0.5 (bottled), HEAD
High-performance, asynchronous messaging library
http://www.zeromq.org/
/usr/local/Cellar/zeromq/4.0.5_2 (64 files, 2.8M) *
Poured from bottle
From: https://github.com/Homebrew/homebrew/blob/master/Library/Formula/zeromq.rb
==> Dependencies
Build: pkg-config ✔
Optional: libpgm ✘, libsodium ✘
Update 3
This behaviour can also be observed in zeromq 4.1.2 and pyzmq-14.7.0 (with or without libpgm and libsodium installed).
Update 4
Another observation suggests that the first subscriber is somehow handled differently: The first subscriber is the only one unsubscribing in the expected way from the XPUB socket (backend) by preceding its subscription topic with \x00. The other subscribers (I tried more than 2) stayed mute on the backend channel (although receiving messages).
Update 5
I hope I'm not going down a rabbit hole, but I've looked into the czmq bindings, reimplemented my Python example in C, and ran it. The results are the same, so I guess it's not a problem with the bindings, but with libzmq itself.
I also verified that the 2nd subscriber is sending a subscribe message and indeed I can see this on the wire:
First subscribe:
0000 02 00 00 00 45 00 00 3f 98 be 40 00 40 06 00 00 ....E..? ..#.#...
0010 7f 00 00 01 7f 00 00 01 fa e5 15 b6 34 f0 51 c3 ........ ....4.Q.
0020 05 e4 8b 77 80 18 31 d4 fe 33 00 00 01 01 08 0a ...w..1. .3......
0030 2a aa d1 d2 2a aa cd e9 00 09 01 6d 79 2d 74 6f *...*... ...my-to
0040 70 69 63 pic
2nd subscribe message with difference (to above) marked and explained. The same data is sent in the subscribe frame.
identification
v
0000 02 00 00 00 45 00 00 3f ed be 40 00 40 06 00 00 ....E..? ..#.#...
src port sequence number
v v v v v
0010 7f 00 00 01 7f 00 00 01 fa e6 15 b6 17 da 02 e7 ........ ........
Acknowledgement number window scaling factor
v v v v v
0020 71 4b 33 e6 80 18 31 d5 fe 33 00 00 01 01 08 0a qK3...1. .3......
timestamp value timestamp echo reply
v v v |<-------- data -------
0030 2a aa f8 2c 2a aa f4 45 00 09 01 6d 79 2d 74 6f *..,*..E ...my-to
------>|
0040 70 69 63 pic
I found the solution for this problem, and even though I read the docs front to back and back to front, I had not seen it. The key is XPUB_VERBOSE. Add this line after the backend initialisation and everything works fine:
backend.setsockopt(zmq.XPUB_VERBOSE, True)
Here's an extract from the official documentation:
ZMQ_XPUB_VERBOSE: provide all subscription messages on XPUB sockets
Sets the XPUB socket behavior on new subscriptions and
unsubscriptions. A value of 0 is the default and passes only new
subscription messages to upstream. A value of 1 passes all
subscription messages upstream.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_XPUB
Pieter Hintjens has some more information on this in his blog. This is the relevant section:
A few months ago we added a neat little option (ZMQ_XPUB_VERBOSE) to
XPUB sockets that disables its filtering of duplicate subscriptions.
This now works for any number of subscribers. We use this as follows:
void *publisher = zsocket_new (ctx, ZMQ_XPUB);
zsocket_set_xpub_verbose (publisher, 1);
zsocket_bind (publisher, "tcp://*:6001");
The LVC pattern description should be updated to reflect this setting, as this pattern won't work otherwise.
