I am currently developing a macOS AppKit application that depends on running a shell script included inside the app bundle. Only when running on Catalina, the following error is produced when the script is run through a Task:
Traceback (most recent call last): File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 174, in _run_module_as_main
I was able to stop the issue from occurring by completely disabling App Sandbox in the Xcode project. The Task's currentDirectoryURL is set to a location the app is explicitly allowed to access according to the Sandbox exceptions.
How can I run the bundled script without disabling App Sandbox?
I'm fairly new at this myself, so I could be wrong, but I'm working on an app that uses a task for an external process, and this is what solved my permission issues.
There are other permissions you can grant in the entitlements file. https://developer.apple.com/documentation/security/app_sandbox_entitlements
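For example, here is a minimal sketch of what an entitlements file can look like with App Sandbox left on (which keys you actually need depends on what the script touches; the file-access key below is just one common example from that page):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- App Sandbox stays enabled -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Example: allow read/write access to files the user explicitly selects -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>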
I want to know if there's a way to have Windows Server 2019 automatically launch Django's development server. I also want the launch to be performed at startup and by SYSTEM.
I tried using batch scripts that launch manage.py from the venv's Python interpreter. When I launch the batch file manually (i.e. by double-clicking), it works fine and dandy. But it appears that SYSTEM fails to run the script correctly when the task is scheduled.
I made SYSTEM launch another script at startup (a simple Python script that creates a txt file from within its own venv) and it works.
If the Django launch script is launched by USER, then it works.
The problem is with launching Django as SYSTEM. I've also tried Streamlit and the result is the same.
Do you have any ideas?
Sample batch script:
cd path\of\managepyfile\
C:\path_to_venv\Scripts\python manage.py runserver
We run a similar application (not Python), but one that also uses a web server.
We have it set up as a task in Task Scheduler so that when the server starts up, it runs a PowerShell script that executes a command to start the web server.
Link to setup
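A rough sketch of the pieces, with placeholder paths (our real script is for a different stack, so treat this as a template only):

# start-server.ps1 - run by Task Scheduler at startup
Set-Location 'C:\path\of\managepyfile'
& 'C:\path_to_venv\Scripts\python.exe' manage.py runserver 0.0.0.0:8000 *>> 'C:\logs\django.log'

# One-time registration from an elevated prompt:
#   schtasks /Create /TN "DjangoServer" /SC ONSTART /RU SYSTEM /TR "powershell.exe -ExecutionPolicy Bypass -File C:\scripts\start-server.ps1"

Logging all output to a file is worth keeping, since a task running as SYSTEM has no visible console in which to show errors.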
However, you could use a web server like IIS, deploy the files to the wwwroot folder on the C: drive, and run the site under IIS.
Setting it up in IIS can be a little tricky if you've never used IIS before. Happy to help out, as we have deployed the test access tool for one of our apps this way.
It's the first time I'm trying to run scripts on my university's Abaqus server. The IT team provided me with the credentials to access the server, but they don't know how to run scripts from there. I'm using PuTTY to connect to the server and FileZilla to transfer files.
I tried to run Python scripts from the work directory on the server, but this error came up:
Traceback (most recent call last):
File "SMAPylModules/SMAPylDriverPy.m/src/driverEnv.py", line 324, in envRunFile
File "/home/gd00357/abaqus_v6.env", line 208, in <module>
raise 'Cannot find the graphics configuration environment file (graphicsConfig.env)'
I couldn't find any documentation on how to run Python scripts, but I hope it's possible in some way.
How could I solve this error?
EDIT:
OK, apparently remote servers don't support any scripts that make use of the GUI.
Now my question is: how do I import a model as an input file (.inp) using a Python script? Is there a way to avoid having the Python script use the GUI?
Abaqus has some built-in tools for scripting parametric studies.
Please read about parametric studies in the Abaqus manual here.
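Regarding the GUI question: Abaqus scripts can also be run headless with the noGUI flag, and an input file can be imported from a script instead of through CAE. A minimal sketch, with placeholder model and file names:

# import_inp.py - run headless with:  abaqus cae noGUI=import_inp.py
from abaqus import *  # standard Abaqus scripting import; provides mdb

# Import the .inp file as a new model in the model database
mdb.ModelFromInputFile(name='Model-from-inp', inputFileName='myModel.inp')

# Or create and submit a job directly from the input file
job = mdb.JobFromInputFile(name='myJob', inputFileName='myModel.inp')
job.submit()
job.waitForCompletion()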
I am trying to auto-deploy a Python Flask application using Jenkins and then run it with a shell command on a Raspberry Pi server.
Here is some background info.
Before using Jenkins, my deployment and execution process was manual, as described below:
FTP to the directory where my Python scripts and Python venv are located
Replace Flask application scripts using FTP
Activate the Python (3.5) virtual environment through the terminal on the Raspberry Pi ("source ./venv/bin/activate")
Run myFlaskApp.py by executing "python myFlaskApp.py" in terminal
Now I have integrated Jenkins, and the deployment/execution process is described below:
Code change pushed to GitHub
Jenkins automatically pulls from GitHub
Jenkins deploys files to the specified directories by executing shell commands
Jenkins then activates the virtual environment and runs myFlaskApp.py by executing a .sh script in the shell.
Now the problem I am having is with step 4: because a Flask app has to stay alive, my Jenkins build never "finishes successfully"; it stays in a loading state because the Flask app keeps running in the shell terminal Jenkins is using.
Now my question:
What is the correct approach I should take to start myFlaskApp.py with Jenkins after deploying the files, without the build process getting "locked down"?
I have read up on Docker, subshells, and the Linux utility screen. Will any of these tools be useful in my situation, and which approach should I take?
The simple and robust solution (in my opinion) is to use Supervisor, which is available in Debian as the supervisor package. It allows you to make a daemon out of a script like your app; it can spawn multiple processes, watch whether the app crashes, and start it again if it does.
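A minimal sketch of a Supervisor program section (paths and user are assumptions; drop it in /etc/supervisor/conf.d/ and adjust):

; /etc/supervisor/conf.d/myflaskapp.conf
[program:myflaskapp]
command=/home/pi/myapp/venv/bin/python myFlaskApp.py
directory=/home/pi/myapp
user=pi
autostart=true
autorestart=true
stdout_logfile=/var/log/myflaskapp.out.log
stderr_logfile=/var/log/myflaskapp.err.log

After adding the file, run supervisorctl reread and supervisorctl update. Jenkins then only needs to run supervisorctl restart myflaskapp at the end of the build, which returns immediately instead of blocking.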
Note about virtualenv - you don't need to activate the venv to use it. You just need to point at the appropriate Python executable (your_venv/bin/python) instead of the default one. For example:
$ ./venv/bin/python myFlaskApp.py
You need to create the following files for deployment with Jenkins.
Code can be found: https://github.com/ishwar6/django_ci_cd
This will work for both Flask and Django.
initial-setup.sh - This is the first file to look at when setting up this project. It installs the packages required to make this project work, such as Nginx, Jenkins, and Python. Refer to the YouTube video to see how and when it is used.
Jenkinsfile - This file contains the definition of the stages in the pipeline. The stages in this project's pipeline are Setup Python Virtual Environment, Setup gunicorn service, and Setup Nginx. Each stage does just two things: it makes a file executable and then runs that file, which carries out the commands described by the stage name (a minimal sketch follows the list below).
envsetup.sh - This file sets up the Python virtual environment, installs the Python packages, and then creates the log files that will be used by Nginx.
gunicorn.sh - This file runs some Django management commands, like the migration and static file collection commands. It also sets up the gunicorn service that will run the gunicorn server in the background.
nginx.sh - This file sets up Nginx with a configuration file that points Nginx to the gunicorn service running our application. This allows Nginx to serve our application. I followed a DigitalOcean article to set up this file. You can go through the video once to replicate the sites-available and sites-enabled scenario.
app.conf - This is an Nginx server configuration file, used to set up Nginx as a proxy server to gunicorn. For this configuration to work, change the value of server_name to the IP address or domain name of your server.
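To make the pipeline description above concrete, here is a minimal sketch of what such a Jenkinsfile can look like (the real one is in the linked repo; script names follow the list above):

pipeline {
    agent any
    stages {
        stage('Setup Python Virtual Environment') {
            steps {
                // Make the script executable, then run it
                sh 'chmod +x envsetup.sh && ./envsetup.sh'
            }
        }
        stage('Setup gunicorn service') {
            steps {
                sh 'chmod +x gunicorn.sh && ./gunicorn.sh'
            }
        }
        stage('Setup Nginx') {
            steps {
                sh 'chmod +x nginx.sh && ./nginx.sh'
            }
        }
    }
}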
After following the steps in the official wiki, I keep getting the following error when launching with breakpoints or when setting breakpoints:
/ptvsd/wrapper.py", line 423, in pydevd_request
os.write(self.pipe_w, s.encode('utf8'))
File "google-cloud-sdk/platform/google_appengine/google/appengine/tools/devappserver2/python/runtime/stubs.py", line 40, in os_error_not_implemented
raise OSError(errno.ENOSYS, 'Function not implemented')
OSError: [Errno 38] Function not implemented
The application runs anyway, but the breakpoints are never hit. It seems that ptvsd is trying to use some method that is blocked by the App Engine sandboxed environment. I'm running VS Code in a Python virtualenv; any clue?
My solution was to use PyCharm Community Edition's debugger; it's a similar, and perhaps more capable, IDE and debugger for Python-specific debugging.
I have tried to find a reliable way to get rid of this error, but it's proving quite difficult. Here is some advice, though:
Use the --threadsafe_override=default:false flag when running the App Engine dev server, as explained here (see the example after this list).
The App Engine dev server must be launched from VS Code (for example, via a task) instead of a separate terminal window.
If you still get the error, stop the debugger, kill the task and restart everything.
(After that, the debugger correctly hits the breakpoints, but curiously the call stack is set to the main thread instead of the thread containing the breakpoint; you need to manually click on the correct thread in the call stack window.)
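For reference on the first point, the flag goes on the dev server invocation. A minimal sketch, assuming the standard dev_appserver.py entry point and an app.yaml in the current directory:

$ dev_appserver.py --threadsafe_override=default:false app.yaml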
I have a Flask app that creates a SQLite db to load fixtures for tests. When I run pytest on OS X, there are no issues. However, when I set 'PRAGMA journal_mode=WAL' within an Ubuntu 14.04 Docker container, I get this:
disk I/O error
Traceback (most recent call last):
File "/tmp/my_app/util/sqlalchemy_helpers.py", line 23, in pragma_journalmode_wal
cursor.execute('PRAGMA journal_mode=WAL')
OperationalError: disk I/O error
The SQLite db file is written to a folder within /tmp that is dynamically created using Python's tempfile.mkdtemp function. Even though the tests run as root (because Docker), I still made sure the folder has full read/write/execute permissions. I verified that there is plenty of space left on /tmp. I have test code that creates, modifies, and deletes a file in the database folder, and it passes successfully.
I cannot seem to find a way to get an error code or a better explanation of what failed. Any ideas how I can better debug the issue? Could there be an issue with the Docker container?
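For reference, a minimal standalone repro sketch of the failing setup (standard library only, placeholder file names), which at least surfaces whether SQLite accepted the journal mode:

import os
import sqlite3
import tempfile

# Mirror the app's setup: database file inside a freshly created temp dir
db_dir = tempfile.mkdtemp()
db_path = os.path.join(db_dir, "fixtures.sqlite3")

conn = sqlite3.connect(db_path)
try:
    # PRAGMA journal_mode returns the mode actually in effect; anything
    # other than 'wal' means SQLite refused to switch to WAL
    (mode,) = conn.execute("PRAGMA journal_mode=WAL").fetchone()
    print("journal mode:", mode)
except sqlite3.OperationalError as exc:
    # e.g. 'disk I/O error' when the filesystem lacks the shared-memory
    # support WAL needs (reported on some Docker storage drivers)
    print("failed:", exc)
finally:
    conn.close()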
I had a similar problem just now, when recreating an sqlite3 database:
Removed database.sqlite3
Created database.sqlite3
Set up the right permissions.
The error occurred.
After some time I figured out that I also had database.sqlite3-shm and database.sqlite3-wal files.
Removed database.sqlite3-shm and database.sqlite3-wal
And everything went back to normal.
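In shell terms, with the filenames from the steps above:

$ rm database.sqlite3 database.sqlite3-shm database.sqlite3-wal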