How can I check that my application has shut down with QF-Test?

I test a Java application with QF-Test. I need to prove that the HMI is stopped at shutdown.
In QF-Test, I created a Jython procedure which tries to send a message over a socket to the HMI; if it can't, that means the HMI is stopped and the test is OK. Here is the Jython script:
import threading
import time

rc.setLocal("returnValue", False)
for i in range(50):
    time.sleep(0.5)
    try:
        # here we try to send a socket to the HMI
        rc.toSUT("client", vars)
    except:
        # an exception while sending means the HMI is shut down: test OK
        rc.setLocal("returnValue", True)
        break
It seems that the QF-Test Java agent used to connect my Java program to QF-Test prevents my application from being fully killed. Do you have any idea how I can prove that my HMI is killed in a QF-Test procedure?

You should avoid any communication with the SUT while it is shutting down. QF-Test tries to stop the application gracefully if you record a sequence of the steps a user would perform. There are also dedicated nodes for stopping the client. Additionally, you may try to kill the SUT client. For an example of such a construct, look at the procedure startStop.terminate from the demo suite delivered with QF-Test under <qftest_install_dir>\demo\carconfig\carconfig_en.qft.
If the problem persists you should write to the QF-Test support, since additional details may be required, and stackoverflow.com is not suitable for such communication.
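If you additionally want a check that is independent of the QF-Test connection, and if the HMI happens to listen on a TCP port, a plain Python sketch along these lines could poll that port until it stops accepting connections. The host and port here are pure assumptions for illustration:
import socket
import time

def hmi_is_down(host="localhost", port=12345, attempts=50, delay=0.5):
    # Return True once the (assumed) HMI port stops accepting connections.
    for _ in range(attempts):
        try:
            conn = socket.create_connection((host, port), timeout=1)
            conn.close()      # connection succeeded: the HMI is still alive
        except socket.error:
            return True       # connection refused or timed out: the HMI is gone
        time.sleep(delay)
    return False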
Disclaimer: I am a QF-Test Employee

Related

How to cancel a blocking thread caused by input() in Python?

I'm starting to learn more about TCP protocols in Python and I've been having some trouble with blocking threads inside clients.
Ideally, my application would work like this: I have different clients with thread functions, each one of them containing an input function in order to receive a specific command to send to the server (for example 'X'). When the 'X' is tapped in ONE client, the server receives it and sends a message to all the other clients informing that the program will continue and releasing them from their input functions - almost like cancelling them.
The problem lies on the fact that the input functions are blocking the clients from leaving the loop. I've tried setting the input thread functions as daemon but it blocks until you tap something anyway - which is unfortunately the only workaround that I've found so far.
I would like to use socket and the select module for the connection, without being tied to any particular OS (so no msvcrt, which only works on Windows, and no select on stdin, which is only available on UNIX-based OSes).
Any help would be greatly appreciated!

Python Sockets - How to shut down the server?

I tried to make a simple chat system with the socket module in Python. Everything works, except that I need to kill the process every time I want to shut down the server, and I don't want to do that every time.
So my question is:
How can I make a function so that when I type shutdown in the server terminal, it shuts down the whole server?
I already tried to do this:
def close(self):
    server.close(self)
    server.shutdown(self)
But it doesn't work. When I type close(), nothing happens. Nothing.
Here's the full code of server.py:
https://pastebin.com/gA4QYmQe
Any help is appreciated. Thanks.
Well... there are many problems with the code (your "MY IP" and "SERVERIP" are probably not what you want, but that is beside the point).
Your close() function has a "self" parameter, which is pointless as this is not a class. You also need to move the close function to the beginning of your code if you want to call it from your try/except structure. You need to call shutdown() first and then close(), and shutdown takes an argument. I modified your close() to do this and it works:
def close():
    server.shutdown(socket.SHUT_RDWR)
    server.close()
    print("closed")
When you open your socket, you should also set SO_REUSEADDR to make the address reusable (meaning you can start the server again if you shut it down, instead of waiting for a minute for TIME_WAIT status to finish with your server port):
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
But how exactly are you calling close() when you type "shutdown" somewhere? You are not doing this. Your program is in the socket loop and is not reading keyboard input.
I see no point whatsoever in adding keyboard input to this program. First, it adds complexity, as you would be operating with two possibly blocking inputs (socket input and keyboard input) and you would need to manage this. It is possible but definitely complicated. Second, it is faster to press Control + C than to type "shutdown" and hit enter.
You currently do not call close after a keyboard interrupt. I added this to the inner KeyboardInterrupt handler (the outer one you can remove - it does not do anything and is never reached) and it now shuts down your program neatly, closing all connections. Remember to move the close() function from the bottom of your code up before the try: statement:
except KeyboardInterrupt:
    print("[!] Keyboard Interrupted!")
    close()
    break
If you want a remote shutdown (server shuts down if you type "shutdown" to the socket), you can add this to your server loop:
if message == "shutdown":
    close()
    exit(0)
There are other problems as well. For example, if you start your server, connect to it and shut down the connection, your server exits as it does not return to listen().
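A minimal, self-contained sketch of a server loop that goes back to accepting the next client after one disconnects (the address, port and the echo handling are placeholders, not taken from the pastebin):
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("0.0.0.0", 5555))        # placeholder address and port
server.listen(5)

try:
    while True:                       # outer loop: return here after each client
        client, address = server.accept()
        try:
            while True:
                data = client.recv(1024)
                if not data:          # empty read: the client disconnected
                    break
                client.sendall(data)  # echo back, standing in for real handling
        finally:
            client.close()
except KeyboardInterrupt:
    server.close()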
This is also, in my opinion, somewhat bad programming, as you use "server" as a global variable. I would rather create a class and put all socket operations in it, but if style is not important, this should work.
Hannu
One other approach would be to have a default admin client that can control the server. The admin client would be created when the server starts, and from that client the admin could shut down the server and perform any other admin tasks.
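A rough sketch of that idea, with every name and the port invented for illustration: the first connection is treated as the admin client, and a shutdown command from it stops the server.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 6000))      # placeholder port
server.listen(5)

admin, _ = server.accept()            # the first connection acts as the admin client
while True:
    command = admin.recv(1024).decode().strip()
    if not command or command == "shutdown":   # admin disconnected or asked to stop
        admin.close()
        server.close()
        break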

How can I debug a Python SIGCHLD handler?

I'm writing a server program in Python that uses the following workflow:
1) Start a daemon
2) Start a server socket and listen for incoming connections
3) When an incoming socket is accepted successfully, fork a new process to handle the connection, closing the client socket in the child and the server socket in the daemon.
When I register a signal handler for SIGCHLD in the daemon process to reap child processes (regardless of the content of the handler) and run the server, the daemon crashes when it receives SIGCHLD. I can't for the life of me figure out why, because for whatever reason logging to syslog won't work for me and I have no way of debugging this. I'm using PyCharm, and it has no way to debug forked processes. How can I debug this problem? What could be causing the program to fail on invocation of the SIGCHLD handler?
I'm using Python 3.4 on Mac OS X.8
As it turns out, I was using an incorrect signature for my signal handling function: I was using def my_handler() instead of def my_handler(signum, frame), as per this thread.
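For reference, a minimal handler with the correct two-argument signature, which also reaps finished children so they do not linger as zombies:
import os
import signal

def my_handler(signum, frame):
    # Reap all finished children without blocking.
    while True:
        try:
            pid, _ = os.waitpid(-1, os.WNOHANG)
        except OSError:       # no children left to wait for
            break
        if pid == 0:          # children exist, but none have exited yet
            break

signal.signal(signal.SIGCHLD, my_handler)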

Constantly running python script, calling functions via terminal

Quick question that I'm not even sure is possible :3
I have a python script, a network script that connects to a server and remains connected until I either disconnect or it kicks me (which it normally shouldn't), which is constantly receiving data and doing other tasks.
I was curious if it's at all possible while the script is running, to trigger functions from within the script? Say while the script was running, if I had the urge to send some sort of data to the server, I could type it up and send it to the function that handles this?
Wasn't quite sure if it was possible or not, as I've never had to attempt or even seen it done. If it helps, I'm on Ubuntu linux running the script from the terminal.
The usual 'UNIX-way' to solve such problems is to poll or select on both the socket and the standard input file descriptors. You then handle network input on 'IN' event on the socket and terminal input on 'IN' event on the stdin file descriptor.
This is not portable to Windows (which sucks), but that is the most natural way to do it on UNIX-like systems. And you don't get all the problems which come with threads (which often need polling in Python too, as they get 'unkillable' otherwise).
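A minimal sketch of that pattern with the select module; the server address is a placeholder, and this only works where select() accepts stdin (so not on Windows):
import select
import socket
import sys

sock = socket.create_connection(("example.com", 1234))   # placeholder server

while True:
    # Block until either the socket or stdin has something to read.
    readable, _, _ = select.select([sock, sys.stdin], [], [])
    for source in readable:
        if source is sock:
            data = sock.recv(4096)
            if not data:                  # the server closed the connection
                sys.exit(0)
            print("from server:", data.decode())
        else:                             # the user typed a line in the terminal
            line = sys.stdin.readline()
            if line:
                sock.sendall(line.encode())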
Take a look at gevent:
gevent is a coroutine-based Python networking library that uses greenlet to provide a high-level synchronous API on top of the libevent event loop.
and gevent.socket.
Jacek Konieczny's solution is good and simple. Should you want more flexible message passing, consider ZeroMQ. This gives you lots of power to easily create various messaging solutions around your main program. Using a single thread, your main program would look something like this:
#!/usr/bin/env python
import zmq
from time import sleep

CTX = zmq.Context()

incoming = CTX.socket(zmq.PULL)
incoming.bind("tcp://127.0.0.1:3000")

outgoing = CTX.socket(zmq.PUB)
outgoing.bind("tcp://127.0.0.1:3001")

# Poller for the incoming messages
poller = zmq.Poller()
poller.register(incoming, zmq.POLLIN)

def main():
    while True:
        # Do things on the network
        print("[Did things on the network]")
        # Send messages if you want
        outgoing.send("Important message")
        # Poll for incoming messages
        socks = dict(poller.poll(zmq.NOBLOCK))
        if incoming in socks and socks[incoming] == zmq.POLLIN:
            message = incoming.recv()
            # Handle message
            print("[Handled message '%s']" % message)
        sleep(1)  # Only for this dummy program

if __name__ == "__main__":
    main()
You would then write a client (in any language that has ZeroMQ bindings) that pushes and subscribes to messages from the main program. Example pusher:
#!/usr/bin/env python
import zmq

CTX = zmq.Context()

pusher = CTX.socket(zmq.PUSH)
pusher.connect("tcp://127.0.0.1:3000")

def main():
    pusher.send("Message to main program")

if __name__ == "__main__":
    main()
Example subscriber:
#!/usr/bin/env python
import zmq

CTX = zmq.Context()

subscriber = CTX.socket(zmq.SUB)
subscriber.connect("tcp://127.0.0.1:3001")
subscriber.setsockopt(zmq.SUBSCRIBE, "")

def main():
    while True:
        msg = subscriber.recv()
        print("[Received message] %s" % msg)

if __name__ == "__main__":
    main()
It sounds like you will want to combine the pusher and subscriber programs into one. If you decide to use ZeroMQ have a look at the excellent user guide.
You can of course also use ZeroMQ with multiple threads or processess (just be careful not to share individual ZeroMQ sockets between threads).
Without more details, I can only provide you with general ideas. In order to do two things at once (download from the server and wait for data to send) you will need to use either multiple threads or processes. There is a tutorial with some examples of multiple threads here. If you use multiple processes, you would be using the multiprocessing package.
With either solution, you would need a similar setup. I'll use the term thread for the rest, but you could easily replace that with process if you used multiple processes instead. You would probably have (at least) a thread to send and receive data (this might be two threads) and a separate thread to wait for something to send. This is a simplified example of the producer/consumer problem. The thread that waits for the commands/data would be a simple input loop that produces data to send, while the thread that sends data would consume the data as it sends it to the server.
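A stripped-down sketch of that producer/consumer split using threads and a queue; the "send" here is just a print standing in for real socket code:
import queue
import threading

outgoing = queue.Queue()

def wait_for_commands():
    # Producer: read commands from the user and queue them for sending.
    while True:
        command = input("> ")
        outgoing.put(command)

def network_loop():
    # Consumer: send queued commands; real code would also recv() here.
    while True:
        command = outgoing.get()               # blocks until something is queued
        print("sending to server:", command)   # placeholder for sock.sendall()

threading.Thread(target=network_loop, daemon=True).start()
wait_for_commands()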
Stick your server stuff in another thread (investigate the threading module) and use the main thread for interaction with the user via raw_input/input.

python xinetd client disconnection handling

This may or may not be a coding issue. It may also be an xinetd daemon issue, I do not know.
I have a Python script which is triggered from a Linux server running xinetd. Xinetd has been set up to only allow one instance, as I only want one machine to be able to connect to the service, which is therefore also limited by IP.
Currently, when the client connects to xinetd the service works correctly and the script begins sending its output to the client machine. However, when the client disconnects (e.g. due to a reboot), the process is still alive on the server, and this blocks the client from connecting again once it has finished rebooting.
Q: How can I detect in Python that the client has disconnected? Perhaps I can test whether stdout is no longer being read by the client (and then exit the script), or is there a much easier way in xinetd to have the child process killed when the client disconnects?
(I'm using python 2.4.3 on RHEL5 linux - solutions for 2.4 are needed, but 3.1 solutions would be useful to know also.)
Add a signal handler for SIGHUP. (x)inetd sends this upon the socket disconnecting.
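For example, something along these lines near the top of the script would make it exit cleanly when xinetd sends SIGHUP:
import signal
import sys

def on_hangup(signum, frame):
    # The client went away; stop producing output and exit.
    sys.exit(0)

signal.signal(signal.SIGHUP, on_hangup)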
Monitor the signals sent to your process. Maybe your script isn't responding to the SIGHUP sent by xinetd; monitor the signal and let it die.
You don't seem to get a SIGHUP, but you do get a SIGPIPE, at least so long as you are attempting any IO on the connection. If the application spends long periods of time not doing any IO, then you could just start a thread reading stdin to ensure you get the SIGPIPE as soon as the disconnection occurs. This was good enough for my application but then I didn't use any pipes other than the ones xinetd gave me.
I've seen several places on the net where people talk about the SIGHUP getting sent on client disconnection, so I've written an inetd python script to test out a couple of servers (one inetd and another xinetd), so you could use that to check on the signals getting sent. It just logs what it finds to /var/log/test.log. Perhaps it will be useful.
#!/usr/bin/python
import os, signal, sys

skip = ["SIGKILL", "SIG_DFL", "SIGSTOP", "SIG_IGN", "SIGCLD", "SIGCHLD"]

name_map = {}
identifiers = [i for i in dir(signal) if i.startswith("SIG") and not i in skip]
for i in identifiers:
    name_map[getattr(signal, i)] = i

def handler(num, frame):
    signame = name_map[num]
    os.system("echo handled %s >> /var/log/test.log" % signame)

if __name__ == "__main__":
    for id, name in name_map.iteritems():
        signal.signal(id, handler)
    while True:
        print sys.stdin.readline()
        sys.stdout.flush()
