PostgreSQL failed to start - Python

I am trying to use PostgreSQL on Ubuntu. I installed it and everything was working fine. However, I needed to change the location of my database due to space constraints so I tried an online guide to do it.
I proceeded to stop postgresql, create a new empty directory and give it permissions by using
chown postgres:postgres /my/dir/path
That worked fine too. Then I used
initdb -D /my/dir/path
to initialise my database. I also changed data_directory in the postgresql.conf file to point to my new directory.
When I now try to start the database, it says: "The PostgreSQL server failed to start, please check the log file." However, there is no log file! Something got screwed up when I changed the default directory. How do I fix this?

First: You may find it easier to manage your Pg installs on Ubuntu using the custom tools Ubuntu provides as part of pg_wrapper: pg_createcluster, pg_dropcluster, pg_ctlcluster etc. These integrate with the Ubuntu startup scripts and move the configuration to /etc/postgresql/, where Ubuntu likes to keep it, instead of the PostgreSQL default of keeping it in the datadir. To move where the actual files are stored, use a symbolic link (see below).
When you have a problem, how are you starting PostgreSQL?
If you're starting it via pg_ctl it should work fine because you have to specify the data directory location. If you're using your distro package scripts, though, they don't know you've moved the data directory.
On Ubuntu, you will need to change the configuration in /etc/postgresql to tell the scripts where the data dir is, probably in pg_ctl.conf or start.conf for the appropriate version. I'm not sure of the specifics, as I've never needed to do it, and here's why:
There's a better way, though. Use a symbolic link from your old datadir location to the new one. PostgreSQL and the setup scripts will happily follow it and you won't have to change any configuration.
cd /var/lib/postgresql/9.1          # the parent of the datadir, not the datadir itself
mv main main.old                    # keep the old datadir around until the move is verified
ln -s /new/datadir/location main    # the scripts will follow the link transparently
I'm guessing "9.1" because you didn't give your Ubuntu version or your PostgreSQL version.
An alternative is to use mount -o bind to map your new datadir location into the old place, so nothing notices the difference. Then add the bind mount to /etc/fstab to make it persistent across reboots. You only need to do that if one of the tools doesn't like the symbolic link approach. I don't think that'll be an issue with pg_wrapper etc.
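A minimal sketch of the bind-mount approach, reusing the hypothetical paths above:

mount -o bind /new/datadir/location /var/lib/postgresql/9.1/main
# equivalent /etc/fstab entry, so the mapping survives reboots:
/new/datadir/location  /var/lib/postgresql/9.1/main  none  bind  0  0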
You should also note that since you've used initdb manually, your new datadir will have its configuration directly inside the datadir, not in /etc/postgresql/.
It's way easier if you just use the Ubuntu cluster management scripts instead.
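For example (a sketch, assuming PostgreSQL 9.1 and the hypothetical new path used above), the cluster tools can create a cluster with its data directory in the right place from the start:

sudo pg_createcluster -d /new/datadir/location 9.1 main   # config lands in /etc/postgresql/9.1/main
sudo pg_ctlcluster 9.1 main start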

Related

How to set project-dir in dbt with an environment variable?

I am trying to locate my dbt_project.yml file, which is not in the root directory of my project. Previously, I was using an env var called DBT_PROJECT_DIR to define where the dbt_project.yml file is located, and it was working fine. In a similar way, I am using DBT_PROFILE_DIR and it still works correctly. But I cannot make DBT_PROJECT_DIR work. Any help is appreciated.
I'm fairly certain this is not supported. Are you not able to cd into the directory containing dbt_project.yml before running dbt commands?
As a workaround, could you just add --project-dir $PROJECT_DIR to every command you plan to run?
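For instance, a sketch of that workaround in a shell, with PROJECT_DIR standing in for wherever your dbt_project.yml actually lives:

export PROJECT_DIR=/path/to/project    # hypothetical project location
dbt run --project-dir "$PROJECT_DIR"
dbt test --project-dir "$PROJECT_DIR"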
It is indeed not supported; this get_nearest_project_dir function is what dbt uses to find the project dir. It should be adjusted to allow using an environment variable, similar to how profiles work.
You could open an issue on GitHub to discuss adding this feature.

Can't create a virtual environment in the Google Drive folder

I'm using Google Drive to keep a copy of my code projects in case my computer dies (I'm also using GitHub, but not for some private projects).
However, when I try to create a virtual environment using virtualenv, I get the following error:
PS C:\users\fchatter\google drive> virtualenv env
New python executable in C:\users\fchatter\google drive\env\Scripts\python.exe
ERROR: The executable "C:\users\fchatter\google drive\env\Scripts\python.exe" could not be run: [Error 5] Access is denied
Things I've tried:
I thought it was because the path to the venv included blank spaces, but the command works in other paths with blank spaces.
Installing the win32api library, as recommended in the virtualenv docs, but it didn't work.
Running PowerShell as an administrator.
Any ideas on how to solve this? My workaround at the moment is to create the venv outside of the Google Drive, which works but is inconvenient.
After running into the same issue and playing with it for several hours, it doesn't seem possible. It has nothing to do with spaces in file/folder names; I've tested that. It seems that Google Drive Stream performs some action on the file/folder after a period of time that makes Python lose its path to the files. For instance, you can clone a Python module into a Google Drive Stream folder, do a "pip install -e ./", and it will work in the virtualenv for a few minutes, e.g. you can import it into a Python shell. But after a few minutes you can no longer import the module. I suspect that Google Drive Stream is simply not fully compatible with all filesystem calls, one of which is being used by Python.
Don't set up a virtual env in a cloud-synced folder, and don't run a Python script from such a folder either. It's a bad idea: those folders are not meant for version control. Write access (modifying files) is unreliable because Google Drive periodically syncs the folder, which almost always prevents exclusive write access.
TL;DR: you can't reliably modify files while they are being synced.
I suggest you stick to git for version control.
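If you go with the workaround from the question, the sketch is simply to create the venv outside the synced tree (paths hypothetical) and activate it from there:

virtualenv C:\venvs\myproject
C:\venvs\myproject\Scripts\Activate.ps1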

What's the preferred Python distribution if I want to package my env and code into one bundle

I have a Python env and code that runs in that env. I have code to set up this env using wget and such, but that's not really OS-independent.
I wish to bundle this env and code into one (bundle?) and distribute it, so the user doesn't have to set up the env before running the code.
Basically, give the end user something (executable, tar, zip, .py), and after running/extracting it the user should be able to run my main Python script.
I looked into wheels, but I'm not sure if that solves the purpose.
If the code is run on a server you should consider using Docker and docker-compose.
This technology allows you to define the entire setup in config files, and the only thing you need to do when you deploy your code on a new server is to run a single command (docker-compose up).
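A minimal sketch of such a setup, assuming a main.py entrypoint and a requirements.txt (both hypothetical names):

# Dockerfile
FROM python:3
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]

# docker-compose.yml
version: "3"
services:
  app:
    build: .

With those two files in place, docker-compose up builds the image and starts the app in one step.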
Decided to use PyInstaller. Seems straightforward and under active development.
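For reference, the basic PyInstaller workflow looks like this (main.py is a hypothetical entry script):

pip install pyinstaller
pyinstaller --onefile main.py    # bundles the interpreter and dependencies
# the resulting stand-alone executable lands in dist/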

How to distribute a stand-alone python application?

I want to distribute my Python application to co-workers for them to use. The application will only be run on Linux systems, but the users do not have admin privileges so cannot install my application's module dependencies. I would like the users to be able to untar my application and then run my main.py script. Running another one-time 'install'-type script is okay, but not much else.
PyInstaller is close to what I want, except I would like to distribute the source code of my application as well. So the application should be stand-alone and self-contained (with or without the Python interpreter is fine, preferably with), but users should be able to make small changes to the code and rerun the application. My ideal solution is to create some sort of compressed/compiled archive of all my application's module dependencies and distribute that with my application. It doesn't have to be all dependencies, but at least the non-standard packages. The application will then import modules from this archive instead of the user's PYTHONPATH.
I tried virtualenv, but having the users source the activate script was a little too much. I've been looking into numerous other solutions, but I can't find one that works for me.
Why don't you create a directory with the interpreter you want to use, and add in any modules etc.? Then drop in a bash script, say run.sh, which calls the program. It can launch your chosen interpreter with your Python files, arguments etc.
Any source files can remain this way and be edited in place. You could tar and distribute the whole directory, or put it in something like git.
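A sketch of such a run.sh, assuming the bundled interpreter lives under python/ and the entry point is main.py (both hypothetical):

#!/bin/bash
# resolve the directory this script lives in, so it can be run from anywhere
HERE="$(cd "$(dirname "$0")" && pwd)"
# put the bundled modules ahead of anything on the user's PYTHONPATH
export PYTHONPATH="$HERE/modules:$PYTHONPATH"
# launch the bundled interpreter with the application's entry point
exec "$HERE/python/bin/python" "$HERE/main.py" "$@"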
One approach is to use virtualenv. It is designed to create isolated Python environments and does a good job at it. It should be possible to package the virtualenv with your app with some effort. However, virtualenv is not designed for that, so it's not as easy as it could be.
The package-virtualenv GitHub project might also help.

PYTHONPATH and omniORB

The README file of omniORBpy-3.4 says that I have to set PYTHONPATH as
set PYTHONPATH=%PYTHONPATH%;%TOP%\lib\python;%TOP%\lib\x86_win32
where %TOP% is the top-level omniORBpy directory. (Windows machine)
I have done that and rebooted my machine, but when I try to run *.py files that have a line like
import omniORB
it gives me an error that there is no such module omniORB.
What should I do?
I think you will find that the README file of omniORBpy says that TOP must be set to the "the root of your omniORB tree" and not omniORBpy.
I'm not sure here, but I don't think that changes made to the environment via a batch script will persist across reboots. Try setting the variable via the workstation properties (sorry, I have no Windows machine at hand, and cannot give more than a few general directions):
Right click on the Workstation icon on your desktop.
Select "Manage..." (I think it was)
Somewhere in the advanced settings, you can modify environment variables (no need to reboot, but you may have to fire up a new CMD.EXE afterwards, as running apps might not get the change).
Alternatively, you can create a small batch script to start your application, and make it modify the environment before the application is started (I think this is what the README actually suggests).
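A sketch of such a wrapper (a hypothetical start.bat; adjust TOP to point at your omniORB root):

rem set up the environment for this session only
set TOP=C:\omniORB
set PYTHONPATH=%PYTHONPATH%;%TOP%\lib\python;%TOP%\lib\x86_win32
rem then start the application
python my_app.py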
