Ctrl-C does not kill my Python code due to threading problems

I am using Flask as a local server and initiating a new thread with:
rospy.init_node('path_planner')
Because I am initiating this on the main thread, when I press Ctrl-C nothing happens and I have to kill the process manually with kill -9.
I have also tried registering a signal handler, but I still could not stop my program.
Here is the code for my POST method, which is called frequently:
import json

from flask import Flask, request, jsonify
from flask_pymongo import PyMongo

app = Flask(__name__)
app.config["MONGO_URI"] = "mongodb://ed:123#ds029227.mlab.com:2325/test"
mongo = PyMongo(app)

@app.route('/goal', methods=['POST'])
def add_goal():
    goal = mongo.db.goal
    position = request.json['position']
    orientation = request.json['orientation']
    goal_id = goal.insert({'position': position, 'orientation': orientation})
    new_goal = goal.find_one({'_id': goal_id})
    output = {'position': new_goal['position'], 'orientation': new_goal['orientation']}
    position_x = json.loads(position['x'])
    position_y = json.loads(position['y'])
    return jsonify(output)
And here is my main:
if __name__ == '__main__':
    rospy.init_node("path_planner")
    app.run(debug=True)
When I run the code, the Flask server starts and everything works as expected; the POST method does its job. However, when I am done and need to exit the program, I press Ctrl-C but nothing appears to happen.

Try pressing the Break key; I think it will interrupt the process.
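If that does not help: in this Flask + rospy setup, a likely culprit is that rospy.init_node installs its own signal handlers, so SIGINT never reaches Flask, and debug mode additionally spawns a reloader child process. A minimal sketch of a commonly suggested workaround, assuming it is acceptable to disable rospy's signal handling and the Flask reloader:

import signal
import sys

import rospy
from flask import Flask

app = Flask(__name__)

def shutdown_handler(sig, frame):
    # Shut the ROS node down explicitly, then let the process exit.
    rospy.signal_shutdown("SIGINT received")
    sys.exit(0)

if __name__ == '__main__':
    # disable_signals=True stops rospy from overriding the default Ctrl-C handling.
    rospy.init_node('path_planner', disable_signals=True)
    signal.signal(signal.SIGINT, shutdown_handler)
    # use_reloader=False avoids the extra reloader process started by debug mode.
    app.run(debug=True, use_reloader=False)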

Related

python QApplication freezes and does not close while another thread is running

In the main file I run a QApplication in the main thread (which starts with no problem) and a method, called backupGiornaliero, on a separate thread.
When I try to close the application via the X button of its user interface, or via a button that is supposed to close it (connected to sys.exit(app.exec_())), the application freezes or simply doesn't end.
This is my main without all the imports:
import sched
import sys
import threading
import time
from datetime import datetime

# QApplication comes from the Qt bindings; LoginView and BackupModel are the
# project's own classes (imported elsewhere in the original file).

def backupGiornaliero():
    dataOggi = datetime.today().strftime('%Y-%m-%d')
    orarioBackup = " 22:00:00"
    dataOrarioBackup = str(dataOggi + orarioBackup)
    datatimeBackup = datetime.strptime(dataOrarioBackup, '%Y-%m-%d %H:%M:%S')
    if datetime.now() < datatimeBackup:
        unixDataBackup = time.mktime(datatimeBackup.timetuple())
    else:
        unixDataBackup = time.mktime(datatimeBackup.timetuple()) + float(86400)
    print("unix timestamp dell'orario di backup di oggi " + str(unixDataBackup))
    # Set up scheduler
    s = sched.scheduler(time.time, time.sleep)
    # Schedule when you want the action to occur
    s.enterabs(unixDataBackup, 0, BackupModel().eseguiBackup)
    # Block until the action has been run
    s.run()
    print("fatto backup")

# Press the green button in the gutter to run the script.
if __name__ == '__main__':
    x = threading.Thread(target=backupGiornaliero)
    x.start()
    app = QApplication(sys.argv)
    vistaLogin = LoginView(app)
    vistaLogin.show()
    sys.exit(app.exec())
Thanks for the help.
Actually, it seems it is sufficient to mark the separate thread as a daemon, so that when the main thread finishes its work all daemon threads stop as well.
In my case I solved it with x.setDaemon(True) (the modern spelling is x.daemon = True).
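For reference, a minimal sketch of that fix with the identifiers from the question (newer code sets the daemon flag via the constructor rather than setDaemon):

import threading

# A daemon thread cannot keep the process alive once the main thread
# (and with it the Qt event loop) has finished.
x = threading.Thread(target=backupGiornaliero, daemon=True)
x.start()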

Apscheduler keeps spawned processes alive for no reason

EDIT: I found the issue. It was a problem with PyCharm. I ran the .py outside of PyCharm and it worked as expected. In PyCharm I enabled "Emulate terminal in output console" and it now also works there...
Expectations:
Apscheduler spawns a thread that checks a website for something.
If that something is found (possibly more than one item), the thread spawns one or more processes to download it/them.
After five seconds the next check thread spawns, while the earlier downloads may continue in the background.
Problem:
The spawned processes never cease to exist, which breaks other parts of the code (not included here), because I need to check whether the processes are done, etc.
If I use a simple time.sleep(5) instead (see code), it works as expected.
No, I cannot set max_instances to 1, because that would stop the scheduled job from running while a download process is still active.
Code:
import datetime
import multiprocessing
import time

from apscheduler.schedulers.background import BackgroundScheduler

class DownloadThread(multiprocessing.Process):
    def __init__(self):
        super().__init__()
        print("Process started")

def main():
    print(multiprocessing.active_children())
    # prints: [<DownloadThread name='DownloadThread-1' pid=3188 parent=7088 started daemon>,
    #          <DownloadThread name='DownloadThread-3' pid=12228 parent=7088 started daemon>,
    #          <DownloadThread name='DownloadThread-2' pid=13544 parent=7088 started daemon>,
    #          ...
    #         ]
    new_process = DownloadThread()
    new_process.daemon = True
    new_process.start()
    new_process.join()

if __name__ == '__main__':
    sched = BackgroundScheduler()
    sched.add_job(main, 'interval', args=(), seconds=5, max_instances=999,
                  next_run_time=datetime.datetime.now())
    sched.start()
    while True:
        # main()          # works: processes despawn.
        # time.sleep(5)
        input()
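For the "check whether the processes are done" requirement, a small non-blocking pattern that is sometimes useful (a sketch, unrelated to the PyCharm cause found above; downloads_still_running is a hypothetical helper):

import multiprocessing

def downloads_still_running():
    # active_children() also joins any child processes that have already
    # finished, so they are reaped and drop out of the returned list.
    return multiprocessing.active_children()

# Example: only start new work if fewer than 4 downloads are active.
if len(downloads_still_running()) < 4:
    pass  # spawn another DownloadThread() here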

Correctly terminating a thread in Python

I'm not too familiar with threading, and probably not using it correctly, but I have a script that runs a speedtest a few times and prints the average. I'm trying to use threading to call a function which displays something while the tests are running.
Everything works fine unless I try to put input() at the end of the script to keep the console window open. It causes the thread to run continuously.
I'm looking for some direction in terminating a thread correctly. Also open to any better ways to do this.
import speedtest, time, sys, datetime
from threading import Thread

s = speedtest.Speedtest()
best = s.get_best_server()

def downloadTest(tries):
    x = 0
    downloadList = []
    for x in range(tries):
        downSpeed = (s.download() / 1000000)
        downloadList.append(downSpeed)
        x += 1
    results_dict = s.results.dict()
    global download_avg, isp
    download_avg = (sum(downloadList) / len(downloadList))
    download_avg = round(download_avg, 1)
    isp = (results_dict['client']['isp'])
    print("")
    print(isp)
    print(download_avg)

def progress():
    while True:
        print('~ ', end='', flush=True)
        time.sleep(1)

def start():
    now = (datetime.datetime.today().replace(microsecond=0))
    print(now)
    d = Thread(target=downloadTest, args=(3,))
    d.start()
    d1 = Thread(target=progress)
    d1.daemon = True
    d1.start()
    d.join()

start()
input("Complete...")  # this causes the progress thread to keep running
There is no reason for your thread to exit, which is why it does not terminate. A daemon thread normally ends when your program (all non-daemon threads) terminates, which does not happen here because the final input() keeps the main thread alive.
In general it is a good idea to make a thread stop by itself rather than forcefully killing it, so you would typically stop this kind of thread with a flag. Try changing the segment at the end to:
killflag = False
start()
killflag = True
input("Complete...")
and update the progress method to:
def progress():
    while not killflag:
        print('~ ', end='', flush=True)
        time.sleep(1)
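An alternative to the module-level flag is a threading.Event, which avoids the shared global; a minimal sketch of the same shutdown idea:

import time
from threading import Event, Thread

stop_event = Event()

def progress(stop):
    # Print a tick once per second until asked to stop.
    while not stop.is_set():
        print('~ ', end='', flush=True)
        time.sleep(1)

d1 = Thread(target=progress, args=(stop_event,), daemon=True)
d1.start()
# ... run the speed tests here ...
stop_event.set()   # signal the progress thread to finish
d1.join()          # returns within about a second
input("Complete...")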

PyQt: mainWindow closed, but process still running

I'm writing a Python application with PyQt 5.10. It seems I have some sort of bug or memory leak: when I call close() on my MainWindow the process keeps running. After a bit of research and debugging I was able to narrow down the supposedly faulty code. This is my main:
if __name__ == '__main__':
    app = QApplication(sys.argv)
    matteo = God()
    matteo.runApp()
    sys.exit(app.exec())
Here is the runApp function from the God class:
def runApp(self):
    self.painter = Painter()
    self.dbManager = DBManager()
    self.userInput = UserInput()
    self.excelFile = ExcelFile()
    self.painter.connectToClasses(self, ["god", "db", "ui"])
    self.excelFile.connectToClasses(self, ["god"])
    self.painter.drawMainWindow()
    self.loadConf()
    self.openDB(True)
    if self.dbManager.error == None:
        self.painter.drawSearchWidget()
    else:
        print("Closed.")
The process keeps running when the app cannot find the configuration file: it then creates a new one from scratch and asks the user to select the database to connect to. This prompts an error message if the selected file is corrupted or is not the right format, and I think my problem might lie there. This is the code:
def checkError(self, classType):
    if classType == "db":
        error = self.dbManager.error
    elif classType == "excel":
        error = self.excelFile.error
    if error != None:
        self.painter.drawError(classType)
        self.userInput.error = self.painter.error.clickedButton()
        self.userInput.error = self.painter.error.buttonRole(self.userInput.error)
        if (self.userInput.error == 1):
            self.painter.mainWindow.close()
            return 0
    return 1
def drawError(self, classType):
    if (classType == "db"):
        title = "Database"
        error = self.dbManager.error
        otherButton = "Browse"
    elif (classType == "excel"):
        title = "Excel file"
        error = self.excelFile.error
    try:
        self.setErrorText(False, error)
        if error[0]:
            if self.error.icon() != 3:
                self.error.setIcon(3)
            buttons = self.error.buttons()
            for button in buttons:
                if button.text() != "Quit":
                    button.hide()
                    button.deleteLater()
    except AttributeError:
        self.error = QMessageBox()
        self.setErrorText(True, error)
        if (error[0]):
            self.error.setIcon(3)
        else:
            self.error.setIcon(2)
        self.error.addButton(otherButton, self.error.AcceptRole)
        self.error.addButton("Quit", self.error.RejectRole)
        self.error.setWindowTitle(title)
    self.error.exec()
If the user clicks the Quit button in the error window, the function closes the main window and returns 0. The other functions return, and the application prints "Closed." on the console, but the process keeps running.
I kind of resolved this issue by calling sys.exit() instead of printing "Closed.".
It's a brute-force solution to say the least, but I needed to deliver the application fast and had no time to debug it. I guess this sad story ends here.
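For what it's worth, a likely explanation is that the main window is closed before app.exec() has even started, so the "quit on last window closed" behaviour never fires and the event loop then runs forever with no windows. A less brute-force variant than sys.exit() would be to let runApp report whether startup succeeded and only enter the event loop in that case; a minimal sketch, assuming runApp is changed to return a boolean (a hypothetical change, not the original API):

import sys
from PyQt5.QtWidgets import QApplication

# God is the question's own class, defined elsewhere in the project.
if __name__ == '__main__':
    app = QApplication(sys.argv)
    matteo = God()
    # Hypothetical change: runApp() returns True when startup succeeded and
    # False when the user chose Quit in the error dialog.
    if matteo.runApp():
        sys.exit(app.exec())
    # Otherwise fall through so the interpreter can exit normally.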

Web app starts many times - web.py

I have this code, which loads the necessary files and prints some information when the server starts. Inside if __name__ == "__main__": I also start a background process, and finally app.run() is executed.
My problem is that after everything has loaded, when the background process starts, it prints and loads everything again from the beginning. It also does the same when the server gets its first request (GET/POST). How can I make it load only once?
import web
from multiprocessing import Process
import scripts

print 'Engine Started'
# Code to load and print necessary stuff goes here...

urls = (
    '/test(.*)', 'Test'
)

class Test():
    def GET(self, r):
        tt = web.input().t
        print tt
        return tt

if __name__ == "__main__":
    try:
        print 'Cache initializing...'
        p = Process(target=scripts.initiate_cleaner)
        p.start()  # Starts the background process
    except Exception, err:
        print "Error initializing cache"
        print err
    app = web.application(urls, globals())
    app.run()
So this loads three times ('Engine Started' prints three times) after starting the process and requesting localhost:8080/test?t=er.
I went through this, but it solves the problem for Flask, and I use web.py.
I'm not sure why this surprises you, or why it would be a problem. A background process is by definition a separate process from the web process; each of those will import the code, and therefore print that message. If you don't want to see that message, put it inside the if __name__ block.
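A minimal sketch of that suggestion, keeping the module level free of side effects so that re-imports in child or worker processes stay quiet (Python 2 syntax, matching the question; assumes the loaded objects are only needed in the parent process):

import web
from multiprocessing import Process
import scripts

urls = (
    '/test(.*)', 'Test'
)

class Test():
    def GET(self, r):
        return web.input().t

if __name__ == "__main__":
    # Runs only in the launching process, not when the module is imported
    # again by a spawned child or another worker.
    print 'Engine Started'
    # Code to load and print necessary stuff goes here...
    p = Process(target=scripts.initiate_cleaner)
    p.start()
    app = web.application(urls, globals())
    app.run()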
