I have Docker running on a host. There are two docker containers on this host, container_1 and container_2. Now I want to execute some commands in container_1 from my remote dev machine.
The command is a pipeline, i.e.:
sudo docker exec -it container_1 sudo find <dir> -type f -iname '*_abc_*' -print0 | du --files0-from - -b | awk 'BEGIN{sum=0} {sum+=$1} END{print sum}'
With the above command, only the part up to the first pipe executes in the docker container; the rest of the pipeline executes on the host.
I am using the Python Fabric API to run this from the remote machine.
Is there any way to execute the full pipeline in the container from the remote machine?
That's because the pipeline is parsed by the host's shell, so everything after the first pipe runs on the host. Wrap the whole pipeline in bash -c so it is parsed inside the container; note the escaped \$1, which keeps the outer shell from expanding it before awk sees it:
sudo docker exec -it container_1 bash -c "sudo find <dir> -type f -iname '*_abc_*' -print0 | du --files0-from - -b | awk 'BEGIN{sum=0} {sum+=\$1} END{print sum}'"
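If you are driving this through Fabric, send the same bash -c string with run(). A minimal sketch, assuming Fabric 1.x (fabric.api.run) and the container and directory names from the question; sum_matching_file_sizes is a hypothetical task name:

from fabric.api import run  # Fabric 1.x

def sum_matching_file_sizes(directory):
    # \$1 survives the remote shell's double quotes, so awk (inside the
    # container), not the shell, expands it.
    pipeline = ("sudo find %s -type f -iname '*_abc_*' -print0 "
                "| du --files0-from - -b "
                "| awk '{sum+=\\$1} END{print sum}'") % directory
    # No -t flag: Fabric's run() does not attach a TTY, so docker exec -it
    # would fail with "the input device is not a TTY".
    return run('sudo docker exec container_1 bash -c "%s"' % pipeline)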
Attempting to run the ray docker image on an M1 Mac results in:
$ docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
> WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
I've tried to use DOCKER_DEFAULT_PLATFORM=linux/amd64, but then nothing happens:
$ DOCKER_DEFAULT_PLATFORM=linux/amd64 docker run -p 10001:10001 -p 8265:8265 -p 33963:33963 rayproject/ray:latest
>
$ docker ps
> CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
The latest tag has digest 744f499644cc
The image has /bin/bash defined as the command to run when it starts. When you run it, you don't attach a TTY, so the container exits immediately.
I'm not familiar with the image, so I don't know the correct way to run it, and your port mappings confuse me a bit. But one way to run it is:
docker run -it rayproject/ray:latest
That will put you at a prompt inside the container and you can explore the contents.
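If you still want to pin the architecture explicitly, docker run accepts a --platform flag. Driving it from Python, to match the other snippets in this thread, a minimal sketch assuming Docker is on PATH and the script is launched from a terminal so the TTY can be inherited:

import subprocess

# --platform requests the amd64 image (run under emulation on Apple Silicon);
# -it attaches a TTY so the image's default /bin/bash does not exit at once.
subprocess.run(["docker", "run", "--platform", "linux/amd64",
                "-it", "rayproject/ray:latest"])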
I am trying to do the following in a Makefile recipe: get the server container IP using a Python script, build the command to run within the docker container, then run the command in the docker container.
test:
	SIP=$(shell python ./scripts/script.py get-server-ip)
	CMD="iperf3 -c ${SIP} -p 33445"
	docker exec server ${CMD}
I get this
$ make test
SIP=172.17.0.6
CMD="iperf3 -c -p 33445"
docker exec server
"docker exec" requires at least 2 arguments.
See 'docker exec --help'.
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
make: *** [test] Error 1
I ended up with something like this. (Each recipe line runs in its own shell, so the assignments have to be chained onto one logical line with ; and \, and the shell variables written as $$VAR so that make passes them through to the shell instead of expanding them itself; in the failed attempt above, make expanded ${SIP} to an empty string.)
SERVER_IP=$(shell python ./scripts/script.py get-server-ip); \
SERVER_CMD="iperf3 -s -p ${PORT} -4 --logfile s.out"; \
CLIENT_CMD="iperf3 -c $${SERVER_IP} -p ${PORT} -t 1000 -4 --logfile c.out"; \
echo "Server Command: " $${SERVER_CMD}; \
echo "Client Command: " $${CLIENT_CMD}; \
docker exec -d server $${SERVER_CMD}; \
docker exec -d client $${CLIENT_CMD};
This seems to work ok. Would love to hear if there are other ways of doing this.
You could write something like this. Here I used a target-specific variable, assuming the IP address is required only in this rule. iperf_command is defined as a variable, since the format looks fixed except for the IP address, which is injected by the call function. Also, as the rule doesn't seem to produce the target as a file, I added a .PHONY declaration as well.
iperf_command = iperf3 -c $1 -p 33445

.PHONY: test
test: iperf_server_ip = $(shell python ./scripts/script.py get-server-ip)
test:
	docker exec server $(call iperf_command,$(iperf_server_ip))
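The helper script itself is not shown in the question; for illustration only, a hypothetical scripts/script.py could resolve the container IP with docker inspect along these lines:

import subprocess
import sys

def get_server_ip(container="server"):
    # Ask Docker for the container's IP address on its attached network(s).
    fmt = "{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}"
    out = subprocess.check_output(["docker", "inspect", "-f", fmt, container])
    return out.decode().strip()

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "get-server-ip":
        print(get_server_ip())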
I want to write a Python program which starts a docker container, executes some commands inside it, and returns the output. This is something I would do in a shell like:
docker run --rm -i -it name_docker_image
echo "Input for programm" | script.sh
Output of the script
I am trying to do it with code like:
process = subprocess.Popen("sudo docker run --rm -i -it syntaxnet", stdout=subprocess.PIPE, shell=True)
process.communicate("echo \"" + sentence + "\" | work/demo.sh")
out = process.stdout.read()
returned_value = out.splitlines()
But it gives me an error message: "the input device is not a TTY". How can I fix it?
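A sketch of a fix, keeping the structure of the snippet above and assuming (as the snippet does) that the image's default command is a shell: the error comes from -t, which requests a TTY while stdin is a pipe, so drop it. stdin must also be opened as a pipe for communicate() to send input, and communicate() already returns stdout, so there is no need to read process.stdout afterwards:

import subprocess

# -i keeps the container's stdin open; -t is dropped because a pipe is not a TTY.
process = subprocess.Popen(
    "sudo docker run --rm -i syntaxnet",
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    shell=True,
    universal_newlines=True,
)
# communicate() feeds the shell command to the container and reads stdout to EOF.
out, _ = process.communicate('echo "' + sentence + '" | work/demo.sh\n')
returned_value = out.splitlines()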
I wanted to write a command to SSH into a Vagrant machine, change the current working directory, and then run nosetests.
I found in the documentation for vagrant that this could be done with vagrant ssh -c COMMAND
http://docs.vagrantup.com/v2/cli/ssh.html
The problem is that I get different results when I run nose through -c than when I run it manually after SSHing in.
Command:
vagrant ssh -c 'pwd && cd core && pwd && nosetests -x --failed' web
Output:
/web
/web/core
----------------------------------------------------------------------
Ran 0 tests in 4.784s
OK
Connection to 127.0.0.1 closed.
Commands:
vagrant ssh web
/web$ pwd && cd core && pwd && nosetests -x --failed
Output:
/web
/web/core
.........................................................
.........................................................
.........................................................
.........................................................
<snip>
...............................
---------------------------------------------------------
Ran 1399 tests in 180.325s
I don't understand why it makes a difference.
The first SSH session is not a terminal session. If you use ssh -t instead of vagrant ssh -c, the outputs will likely be the same. A command like the following should give output comparable to what you get locally:
ssh -t <username>@<ip-of-vagrant-machine> -p <vagrant-vm-ssh-port> 'pwd && cd core && pwd && nosetests -x --failed'
The default username and password on vagrant machines are both "vagrant"; the SSH port and IP to connect to are shown while the machine is provisioned with vagrant up. If you prefer public-key SSH login, vagrant can also point you to the location of the SSH key.
Depending on where you want to run nose on your VM, you will have to adjust the cd command above; it seems the vagrant ssh wrapper automatically moves you to /web on the VM.
If you are just worried about whether the test results will differ because of the visual difference: no, they shouldn't; on a non-interactive terminal, nose simply displays the results in a different manner.
I was able to resolve this issue by running:
vagrant ssh -c 'cd core && nosetests -x --failed --exe' web
I'm not sure why this made a difference on my box. (A likely explanation: nose skips executable files by default, and --exe tells it to collect tests from them as well; files in Vagrant shared folders are often mounted with the executable bit set.)
I have a very strange issue that I can't seem to figure out.
When I execute a python script containing the following lines from inside an SSH terminal (PuTTY), it works fine. But the moment I run the script via crontab, or even with nohup python myscript >/dev/null 2>&1 &, it doesn't seem to execute these commands.
subprocess.call('rsync -avr /path/to/folder/. --include "delta.*" --exclude "*" -e "ssh -o StrictHostKeyChecking=no -i /path/to/key.pem" ec2-user@'+server+':/path/to/folder/', shell=True)
local('ssh -t -o StrictHostKeyChecking=no -i /path/to/key.pem ec2-user@'+server+' "sudo /usr/bin/indexer -c /path/to/sphinx.conf --merge main delta --rotate"')
Basically, the first line syncs a folder of new Sphinx search-engine updates to a remote server, and the second runs a remote SSH command to force the search engine to rotate the updates into production.
I do have Fabric installed (hence the local command), but to avoid having to fab a second file I was hoping a single line of code could execute sudo commands on a remote server.
Can someone help me out?
I found the answer: for SSH commands in a script run in the background, you need -t -t to force pseudo-terminal allocation even though stdin is not a terminal.
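Applied to the second line from the question, the call becomes (same paths and server variable as above):

local('ssh -t -t -o StrictHostKeyChecking=no -i /path/to/key.pem ec2-user@'+server+' "sudo /usr/bin/indexer -c /path/to/sphinx.conf --merge main delta --rotate"')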
Reference:
Pseudo-terminal will not be allocated because stdin is not a terminal