I have a Django web app on Azure. I'm using Azure's built-in continuous deployment, so I don't have any .deployment files in my repo.
However, for my application to work correctly, I have to manually install a couple of packages that aren't available via pip. So after every deployment, I have to SSH into the instance and execute the following two commands:
source home/site/wwwroot/pythonenv3.6/bin/activate
sudo apt-get install XXX XXX XXX
I believe I could simply execute these via a post-deployment script, but I'm having a hard time finding any literature explaining exactly how to do so. Is a post-deployment script the best way of doing this? I'd rather not manage a custom Dockerfile if possible.
UPDATE
Here is a screenshot of the packages installed after I manually run the apt-get command:
And here is a screenshot of the packages installed AFTER a re-deploy:
So unless there's a way to run this command from a post-deployment script (and have it execute correctly), is my only option to create a custom container?
I'm not wild about doing so, considering this is such a small customization... but I may no longer have a choice?
UPDATE
I tried adding "apt-get install XXX XXX" to "PRE_BUILD_COMMAND" in Azure... and I got the following message as a result:
Does anyone have ideas on how to get around this message using a pre-build or post-build command?
UPDATE
The command is like below.
You must have the wrong path: just cd to /home first, then enter the command you want to execute, and remember not to add sudo.
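For example, a sketch of the corrected session, assuming the virtualenv lives at /home/site/wwwroot/pythonenv3.6 as in the question:
# start from /home; the SSH session into the App Service container already has
# the privileges it needs, so no sudo
cd /home
source site/wwwroot/pythonenv3.6/bin/activate
apt-get install XXX XXX XXX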
PREVIOUS
First of all, please check this post. The pyodbc build in that post cannot be used; I later used apt-get install xxx and pip install to solve this problem.
I know you want to use a .sh file so that your apt-get install xxx command executes when you publish, re-publish, or publish for the first time.
But after my test, I can tell you that you only need to execute the command once, after the portal creates the web app or after the first release, to configure the environment, without having to execute the .sh file again.
In conclusion:
It is recommended to execute commands such as apt-get install xxx and pip install xxx once after the first release via git, then modify the Readme.md file on GitHub to trigger the redeployment process (a sketch of the sequence follows this list).
You don't need to execute the .sh file on every future deployment. I replied to your other post: you can execute the .sh file, but there is no need to call it every time once you have executed it manually, because the environment configuration persists after it succeeds.
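A sketch of that one-time sequence; the XXX package names are the question's placeholders, and the git step just pushes a trivial change to trigger continuous deployment:
# over SSH, once, after the first release
apt-get update
apt-get install -y XXX XXX XXX
pip install XXX

# locally: edit Readme.md, then commit and push to trigger the redeploy
git commit -am "update Readme.md to trigger redeploy"
git push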
Step 1. Install the environment.
Step 2. Modify Readme.md and check the deploy status.
We can see my custom .sh command running, but we don't need to do that.
Step 3. Check the web app after the environment configuration finishes.
Related
I am creating a web app using Dash. I have created the requirements file to install the different Python modules I need.
One of the modules, PySpice, works as an interface to a program called ngspice. The question is how I install ngspice on the Azure App Service plan I have. I can see the plan is running Linux, but how do I add this Linux library so the Python app can use it?
I have this documentation from PySpice, see section 4.2:
https://pyspice.fabrice-salvaire.fr/releases/v1.4/installation.html
But I don't know how to proceed.
UPDATE:
I created a startup.sh file which includes:
# refresh the package index and install the native ngspice dependency
apt-get update
apt-get -y install ngspice
# then start the app
gunicorn --bind=0.0.0.0 --timeout 600 app:app
Looking in the application log in Azure shows that it installs ngspice and the app starts. But PySpice in Python cannot do the analysis, so something more still needs to be done.
UPDATE:
Using the above script, to connect PySpice to ngspice just use this command:
simulator = circuit.simulator(temperature=25, nominal_temperature=25, simulator='ngspice-subprocess', spice_command='ngspice')
So when defining the simulator as ngspice-subprocess and the spice command as ngspice, it works! :)
You can open an SSH session via the Azure portal, in your App Service blade.
Also, you can open an SSH session in the browser.
Paste the following URL into your browser and replace <app-name> with your app name:
https://<app-name>.scm.azurewebsites.net/webssh/host
More info: https://learn.microsoft.com/en-us/azure/app-service/configure-linux-open-ssh-session
UPDATED:
To configure the startup command, you must add the script as the Startup Command under Configuration > General settings in the portal.
More info: https://learn.microsoft.com/en-us/azure/app-service/configure-common?tabs=portal#configure-general-settings
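If you prefer the CLI over the portal, the same setting can be applied with the Azure CLI; a sketch, assuming a resource group named myResourceGroup and a startup.sh deployed with the app:
# point App Service at the custom startup script
az webapp config set --resource-group myResourceGroup --name <app-name> --startup-file "startup.sh"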
Hope this helps!
I deployed Odoo 10 CE on my local Ubuntu Server 16.04 LTS with nginx as a reverse proxy. Now I have installed Ubuntu 16.04 and Odoo 10 CE with nginx as a reverse proxy on another server, and restored the database from the old server to the new one.
If I access my new server like http://x.x.x.x:8069, it works fine.
But if I access it like http://x.x.x.x, the login page shows without CSS styling, and after login I can't see any menus, only the company logo.
If I try with private browsing, it works fine.
How can I resolve this?
EDIT
I ran the Odoo server in two ways. First I ran it directly from the terminal like:
sudo su - odoo -s /bin/bash
/odoo/odoo-server/./odoo-bin
Then I access <ip_address>:8069 and it works fine.
But when I try to run it as a daemon (sudo /etc/init.d/odoo-server start), I face the same problem.
My system user is: odooadmin
The Odoo user is: odoo
And if I access via debug mode with assets, it works fine:
<ip_address>:8069/web?debug=assets
Any solution?
Sometimes you can get an Internal Server Error 500; in other cases you get what you got. There are a few things you can try:
Remove the browser cache. You can press Shift + Ctrl + Del, or reload without the cache with Ctrl + F5.
Remove cookies. If you are using Chrome or Chromium, you can delete the cookies stored for the domain you are using.
Check the web.base.url parameter. Activate developer mode, go to Settings > Parameters > System Parameters, and check that the parameter web.base.url is correctly set. This parameter is updated each time you log in with the Administrator user: the value in the URL bar is assigned to the parameter. But since you can get it to work in private mode, I assume this is assigned correctly; the query sketched below can confirm it.
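A sketch of that check, assuming the database is named mydb and Postgres runs locally:
# read the stored base URL straight from the database
sudo -u postgres psql -d mydb -c "SELECT value FROM ir_config_parameter WHERE key = 'web.base.url';"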
If you moved to another machine, there might be an issue with assets cached at the level of the Odoo server, such as transpiled JS code for instance.
In order to solve this issue, you should (after restoring the database) connect to your database and run the following command:
DELETE FROM ir_attachment WHERE url LIKE '/web/content/%';
More information here: https://github.com/odoo/odoo/issues/13808
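For example, from a shell on the new server (a sketch, assuming the database is named mydb):
# clear the cached web assets, then restart so Odoo regenerates them
sudo -u postgres psql -d mydb -c "DELETE FROM ir_attachment WHERE url LIKE '/web/content/%';"
sudo /etc/init.d/odoo-server restart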
Did you compile your CSS?
If it looks funny, you may need lessc CSS compiling (from https://www.odoo.com/documentation/8.0/setup/install.html).
We symlink node because some versions of Debian/Ubuntu don't agree on the bin name.
Install Node
sudo apt install -y nodejs npm
sudo ln -s /usr/bin/nodejs /usr/bin/node
Compiling CSS
If you install less, Odoo will automatically compile the CSS:
sudo npm install -g less less-plugin-clean-css
Consider blowing away the file cache:
If the compilation failed, sometimes you need to blow away the Postgres cache (thanks to sebalix for this tip).
You can run these SQL queries in Postgres (e.g. via DBeaver) to let Odoo rebuild its CSS+JS assets and reload its icons:
DELETE FROM ir_attachment WHERE datas_fname SIMILAR TO '%.(js|css)';
DELETE FROM ir_attachment WHERE name='web_icon_data';
Then restart the server (assuming it was installed via ansible):
sudo service odoo restart
There are also files in ~/.local/share/Odoo/filestore you may want to temporarily move/rename, as sketched below.
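A sketch of that move (run as the user Odoo runs under, and keep the backup so you can restore it):
# rename the filestore so Odoo regenerates what it needs; restore it if anything breaks
mv ~/.local/share/Odoo/filestore ~/.local/share/Odoo/filestore.bak
sudo service odoo restart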
Check your Postgres version.
Some versions of Postgres are not supported; check psql --version and that you can download a backup of your database.
Check your Odoo version.
I've seen installs where odoo-bin --version returns 10 but the logs show odoo11 before every log line.
I think your browser has some files cached. Can you try after deleting the cached files in your browser?
I'm completely new to the malware research and analysis field. I am trying to install Cuckoo Sandbox 2.0.3. It looks like it has been installed, because when I run the command $ cuckoo it shows:
I have Windows 7 as a guest in VirtualBox and I have copied the agent.py file to the guest. According to the Cuckoo documentation, agent.py should show this when double-clicked:
Starting agent on 0.0.0.0:8000
But in my case, whenever I double-click the agent.py file, it shows a black screen like this:
So I am completely stuck here and cannot proceed further. I checked the .conf files; everything looks fine, and the IP addresses are also set properly. So I don't know what the actual reason for this situation is. As I said, I am completely new to this field, so any kind of help will be highly appreciated. Please let me know if you need any other information about my installation process.
Try to run CMD as administrator, then go to the path where agent.py is located and type:
python agent.py
Once the agent is running, try to submit.
Just try to submit a URL or file using the commands below on the Cuckoo server.
Step 1: Start the Cuckoo server:
cuckoo -d
Step 2: Submit a URL to the analyzer:
cuckoo -d submit -u www.google.com
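For reference, Cuckoo 2.x also documents the long-form submit syntax; a sketch (the sample path is hypothetical):
# submit a URL for analysis
cuckoo submit --url http://www.google.com
# or submit a local file
cuckoo submit /path/to/sample.exe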
Paste the logs from the Cuckoo server.
I need this command to run automatically on boot, or when told to. At the moment I need to run the command over SSH and leave the session open, otherwise it stops:
python CouchPotatoServer/CouchPotato.py
This is on a ReadyNAS (Debian 7).
One easy way to do this would be to create it as a service. Take a look in /etc/init.d and you will find scripts that run as services. Copy one and modify it so that it calls your Python script; a good example could be the init script used for starting the avahi daemon. You can then use service couchpotato start/stop/status, etc., and the service will also be started automatically at boot time if the server ever reboots. Find a simple file to use as your template, and google init scripts for further assistance; a minimal sketch follows. Good luck.
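A minimal sketch of such an init script, assuming CouchPotato lives in /home/admin/CouchPotatoServer (all paths are assumptions to adjust for your system):
#!/bin/sh
### BEGIN INIT INFO
# Provides:          couchpotato
# Required-Start:    $local_fs $network
# Required-Stop:     $local_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: CouchPotato server
### END INIT INFO

DAEMON=/usr/bin/python
DAEMON_ARGS=/home/admin/CouchPotatoServer/CouchPotato.py   # assumed install path
PIDFILE=/var/run/couchpotato.pid

case "$1" in
  start)
    # run in the background and record the PID so stop works
    start-stop-daemon --start --background --make-pidfile --pidfile $PIDFILE \
        --exec $DAEMON -- $DAEMON_ARGS
    ;;
  stop)
    start-stop-daemon --stop --pidfile $PIDFILE --retry 10
    ;;
  restart)
    $0 stop
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
Save it as /etc/init.d/couchpotato, make it executable, and register it with update-rc.d as described in the next answer.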
From this page:
To run on boot, copy the init script: sudo cp CouchPotatoServer/init/ubuntu /etc/init.d/couchpotato
Change the paths inside the init script: sudo nano /etc/init.d/couchpotato
Make it executable: sudo chmod +x /etc/init.d/couchpotato
Add it to the defaults: sudo update-rc.d couchpotato defaults
CouchPotatoServer/init/ubuntu can be found here
sudo update-rc.d <service> <runlevels> is the official Debian way of inserting a service at startup. Its manpage can be read there.
My 2 cents:
Use chkconfig to add the service and specify the run levels, as in the example below. Google will give you all you need for examples of how to do this. Good luck.
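For example (note that chkconfig mostly ships on RPM-based distros; on a Debian system like this ReadyNAS, update-rc.d from the previous answer is the native tool):
# register the init script and enable it at the default run levels
sudo chkconfig --add couchpotato
sudo chkconfig couchpotato on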
I'm hooking a Python script up to run with cron (on Ubuntu 12.04), but authentication is not working.
The cron script accesses a couple of services and has to provide credentials. Storing those credentials with keyring is as easy as can be, except that when the cron job actually runs, the credentials can't be retrieved and the script fails every time.
As near as I can tell, this has something to do with the environment cron runs in. I tracked down a set of posts which suggest that the key is having the script export DBUS_SESSION_BUS_ADDRESS. All well and good; I can get that address, export it, and source it from Python fairly easily. But it simply generates a new error: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11. Setting DISPLAY=:0 has no effect.
So, has anybody figured out how to unlock gnome-keyring from Python running on a Cron job on Ubuntu 12.04?
I'm sorry to say I don't have the answer, but I think I know a bit of what's going on, based on an issue I'm dealing with. I'm trying to get a web application and a cron script to use some code that stashes an OAuth token for Google's API in a keyring using python-keyring.
No matter what I do, something about the environment the web app and cron job run in requires manual intervention to unlock the keyring. That's quite impossible when your code is running in a non-interactive session. The problem persists even with the tricks suggested in my research, like giving the process owner a login password that matches the keyring password, or setting the keyring password to an empty string.
I will almost guarantee that your error stems from GNOME Keyring trying to fire up an interactive (graphical) prompt and bombing out, because you can't do that from cron.
Install keychain:
sudo apt-get install keychain
Put this in your $HOME/.bash_profile:
if [ -z "$SSH_AUTH_SOCK" ] ; then
eval `ssh-agent -s`
fi
eval `keychain --eval id_rsa`
It will ask for your password at the first login and will store your credentials until the next reboot.
Insert this at the beginning of your cron script:
source $HOME/.keychain/${HOSTNAME}-sh
If you use another language such as Python, call it from a wrapper script, as sketched below.
It works for me; I hope it helps you too.
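A sketch of such a wrapper, assuming the real job lives at /home/user/myscript.py:
#!/bin/bash
# load the ssh-agent variables that keychain saved at login
source "$HOME/.keychain/${HOSTNAME}-sh"
# then run the real job with the agent available
exec /usr/bin/python /home/user/myscript.py
Point the crontab entry at this wrapper instead of at the Python script itself.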
Adding:
PID=$(pgrep -u <replace with target userid> bash | head -n 1)
DBUS="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/"$PID"/environ | sed 's/DBUS_SESSION_BUS_ADDRESS=//' )"
export DBUS_SESSION_BUS_ADDRESS=$DBUS
at the beginning of the script listed in the crontab worked for me. I still needed to unlock the keyring interactively once after each boot, but reboots are not frequent, so it works OK.
(from https://forum.duplicacy.com/t/cron-job-failed-to-get-value-from-keyring/1238/3)
So the full script run by cron would be:
#!/usr/bin/bash
# find a logged-in bash session for the target user and borrow its D-Bus address
PID=$(pgrep -u <replace with target userid> bash | head -n 1)
DBUS="$(grep -z DBUS_SESSION_BUS_ADDRESS /proc/"$PID"/environ | sed 's/DBUS_SESSION_BUS_ADDRESS=//')"
export DBUS_SESSION_BUS_ADDRESS=$DBUS
# run the job inside a conda environment (the conda binary normally lives under miniconda/bin)
/home/user/miniconda/bin/conda run -n myenv python myscript.py
This uses an environment from conda; change the python invocation to however you set Python up to run.
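The matching crontab entry would then be something like this (a sketch; the schedule, script name, and log path are arbitrary, assuming the script above is saved as /home/user/cron_wrapper.sh):
# run the job hourly and keep its output for debugging
0 * * * * /home/user/cron_wrapper.sh >> /home/user/cron_wrapper.log 2>&1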