I'm trying to connect to MS SQL Server hosted on a Windows PC from a Linux box, following these instructions: http://www.freetds.org/userguide/serverthere.htm
I verified that I can ping and telnet to the PC running the server, and I reached the step where the instructions say to run this:
Example 8-3. Connecting to the server, bypassing freetds.conf
$ cd src/apps
$ TDSVER=7.0 ./tsql -H myhost -p 1433 -U user
I cannot for the life of me figure out where src/apps is.
And when I try to run
$ ./tsql -S myserver -U user
or
$ tsql -S server-ip -U myusername -P mypassword
I get "tsql: command not found". What is happening?
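In case it helps: src/apps only exists inside the FreeTDS source tree, so the user guide's example assumes you built FreeTDS from source and are still sitting in that directory. Likewise, "tsql: command not found" usually just means the FreeTDS client tools aren't installed or aren't on your PATH. A minimal sketch of what to try, assuming a Debian/Ubuntu box and the host/user names from the question:
$ sudo apt-get install freetds-bin   # the package that ships the tsql utility
$ tsql -C                            # verify the install and show compile-time settings
$ TDSVER=7.0 tsql -H myhost -p 1433 -U user
If you did download the FreeTDS sources instead, src/apps is relative to the top of the unpacked source tree, so cd there first.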
I want to export a table from a database in container 8beb34269697; here is my procedure:
$ docker exec -it 8beb34269697 bash
root@8beb34269697:/# mysqldump -u user -h localhost -p password database_name table_name > table.sql
Enter password:
root@8beb34269697:/# exit
exit
$ docker cp 8beb34269697:/table.sql .
I want to write a bash script run.sh so that every time I run ./run.sh on my local machine, ta-da, table.sql is produced. Any help is appreciated.
You can run the mysqldump command in a single step, without having to enter the container:
docker exec -ti 8beb34269697 /usr/bin/mysqldump -u user -h localhost -ppassword database_name table_name > table.sql
EDIT: changed from -p password to -ppassword because in the first case the string "password" will be taken as the database name instead of the password itself.
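To get the run.sh the question asks for, you could wrap that one-liner in a small script (the container ID, credentials, and database/table names are the ones from the question; the -t flag is dropped because no TTY is needed when you only capture stdout):
#!/bin/bash
# run.sh - dump one table from the MySQL server inside the container
# and write it to table.sql in the current directory on the host
set -e
docker exec 8beb34269697 /usr/bin/mysqldump -u user -h localhost -ppassword database_name table_name > table.sql
Then make it executable and run it:
$ chmod +x run.sh
$ ./run.sh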
I am trying to SSH to my local machine, 127.0.0.1, which works fine.
Next, I am trying to run two commands through the SSH client. The second command fails: I can see that my tap device is created, but it is never brought up. The plain ifconfig command works fine; it is the sudo commands that are causing the problem. Here is my code.
self.serverName is 127.0.0.1
import time
import paramiko

def configure_tap_iface(self):
    ssh = paramiko.SSHClient()
    print('SSH on to PC')
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(self.serverName, username='zebra', password='Zebra#2018')
    stdin, stdout, stderr = ssh.exec_command('ifconfig')  # this works fine
    #print(stdout.read())
    # open a separate channel with a pseudo-terminal for the sudo commands
    session = ssh.get_transport().open_session()
    session.get_pty()
    session.exec_command('sudo ip address add 192.168.0.1/24 dev cloud_tap && sudo ip link set cloud_tap up')
    session.close()
    time.sleep(3)
    ssh.close()
You can use sudo sh -c 'commands' to run multiple shell commands in a single sudo invocation.
session.exec_command("sudo sh -c 'ip address add 192.168.0.1/24 dev cloud_tap && ip link set cloud_tap up'")
I am trying to run this command using Python. It runs well with MongoDB versions below 3.4.x, but it doesn't work with 3.4.x.
This is the command:
su - mongod -c "/usr/bin/mongo admin -u admin -p password --authenticationDatabase admin --port 27017 --eval 'version()'"
And this is the error that I got:
MongoDB shell version v3.4.14
connecting to: mongodb://127.0.0.1:27017/admin
MongoDB server version: 3.4.14
2018-04-26T17:34:06.036+0000 E QUERY    [thread1] Error: Authentication failed. :
DB.prototype._authOrThrow@src/mongo/shell/db.js:1461:20
@(auth):6:1
@(auth):1:2
However, when I ran the exact same command on the command line, I got the expected result:
MongoDB shell version v3.4.14
connecting to: mongodb://127.0.0.1:27017/admin
MongoDB server version: 3.4.14
3.4.14
How to solve this?
I am trying to connect to the GitLab production PostgreSQL database (installed with the omnibus package) using psycopg2.
My configuration is as below:
conn = psycopg2.connect(database="gitlabhq_production", user="gitlab-psql", host="/var/opt/gitlab/postgresql", port="5432")
It gives the following error:
FATAL: Peer authentication failed for user "gitlab-psql"
I can connect to the PostgreSQL server on the command line with:
sudo -u gitlab-psql -i bash /opt/gitlab/embedded/bin/psql --port 5432 -h /var/opt/gitlab/postgresql -d gitlabhq_production
Does anyone know what the correct parameters to pass in would be?
Peer authentication works by checking which operating-system user the connecting process is running as. In your command-line example you switch to gitlab-psql using sudo.
There are two ways to fix this:
Assign a password to the gitlab-psql postgres user (not the system user!) and use that to connect via python. Setting the password is just another query you need to run as a superuser like so:
sudo -u postgres psql -c "ALTER USER gitlab-psql WITH PASSWORD 'ReplaceThisWithYourLongAndSecurePassword';"
Run your python script as gitlab-psql like so:
sudo -u gitlab-psql python /path/to/your/script.py
I wanted to write a command to ssh into vagrant, change the current working directory, and then run nosetests.
I found in the documentation for vagrant that this could be done with vagrant ssh -c COMMAND
http://docs.vagrantup.com/v2/cli/ssh.html
The problem is that I'm getting different results when I run nose through -c than when I run it manually after SSHing in.
Command:
vagrant ssh -c 'pwd && cd core && pwd && nosetests -x --failed' web
Output:
/web
/web/core
----------------------------------------------------------------------
Ran 0 tests in 4.784s
OK
Connection to 127.0.0.1 closed.
Commands:
vagrant ssh web
/web$ pwd && cd core && pwd && nosetests -x --failed
Output
/web
/web/core
.........................................................
.........................................................
.........................................................
.........................................................
<snip>
...............................
---------------------------------------------------------
Ran 1399 tests in 180.325s
I don't understand why it makes a difference.
The first ssh session is not a terminal session. If you try ssh -t instead of vagrant ssh -c, the output will likely be the same. A command like the following should give output comparable to what you get locally:
ssh -t <username>@<ip-of-vagrant-machine> -p <vagrant-vm-ssh-port> 'pwd && cd core && pwd && nosetests -x --failed'
The default username and password on Vagrant machines are both "vagrant"; the SSH port and IP to connect to are shown while the machine is provisioned with vagrant up. If you prefer public-key SSH login, Vagrant can also point you to the location of the key to use.
Depending on where you want to run nose on your VM, you will have to adjust the cd command above; it seems that the vagrant ssh wrapper automatically moves you to /web on the VM.
If you are just worried about whether the results of the tests will differ because of the visual difference: no, they shouldn't. The reason is simply that on a non-interactive terminal nose displays the results in a different manner.
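If you would rather not copy the port and key path from the vagrant up output by hand, vagrant ssh-config prints them for you. A sketch, where the port 2222 and the key path are only examples of what ssh-config typically reports and may differ on your machine:
$ vagrant ssh-config web     # prints HostName, Port, User and IdentityFile for the "web" machine
$ ssh -t -p 2222 -i .vagrant/machines/web/virtualbox/private_key vagrant@127.0.0.1 'pwd && cd core && pwd && nosetests -x --failed'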
I was able to resolve this issue by running:
vagrant ssh -c 'cd core && nosetests -x --failed --exe' web
I'm not sure why this made a difference on my box; as far as I can tell, --exe tells nose to also look for tests in Python modules that are marked executable, which it otherwise skips.