appium: how to push file - python

I tried to push a file to my device using Appium.
When I view my device from my Windows machine, this is the root path:
This PC\P00A\Internal shared storage
So I want to move this file from my machine:
C:\file.txt
I tried using this command:
self.driver.push_file('C:\\file.txt', android_path)
But what path should I put instead of android_path? I am trying to write to the sdcard.

The order of the parameters is incorrect. According to the Push File command documentation, it should be:
driver.push_file("/sdcard/file.txt", "c:/file.txt")  # remote path, local path
You might also be interested in the Run Command (adb) extension, which allows executing arbitrary ADB commands, like:
seetest.run("adb push c:/file.txt /sdcard/file.txt");
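With the standard Appium Python client, a minimal sketch might look like the following; it assumes driver is the active session from the question and sends the file contents base64-encoded, which is what push_file expects for its data argument:

import base64

# Read the local file and base64-encode its contents for transfer.
with open('C:\\file.txt', 'rb') as f:
    data = base64.b64encode(f.read()).decode('utf-8')

# Destination path on the device comes first, then the encoded payload.
driver.push_file('/sdcard/file.txt', data)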

Related

Snakemake: creating directories and files anywhere while using singularity

Whenever I use snakemake with singularity and would like to create a directory or file, I run into read-only errors if I try to write outside of the snakemake directory.
For example, if my rule contains the following:
container:
    "docker://some_image"
shell:
    "touch ~/hello.txt"
then everything runs fine; hello.txt is created inside the snakemake directory. However, if my touch command tries to create a file outside of the snakemake directory:
container:
    "docker://some_image"
shell:
    "touch /home/user/hello.txt"
Then I get the following error:
touch: cannot touch '/home/user/hello.txt': Read-only file system
Is there any way to give snakemake the ability to create files anywhere it wants when using singularity?
Singularity by default mounts certain directories, including the user's home directory. In your first command (touch ~/hello.txt), the file gets written to the home directory, where singularity has read/write permissions. However, in your second command (touch /home/user/hello.txt), singularity doesn't have read/write access to /home/user/; you will need to bind that path manually using singularity's --bind option and supply it to snakemake via --singularity-args.
So the snakemake command would look something like:
snakemake --use-singularity --singularity-args "--bind /home/user/"
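Put together, a minimal sketch of a full rule that writes outside the working directory might look like this (the rule name and output path are illustrative; the image is the placeholder from the question):

rule write_outside:
    output:
        "/home/user/hello.txt"
    container:
        "docker://some_image"
    shell:
        "touch {output}"

Run with the --bind argument above, the touch succeeds because /home/user/ is mounted read/write inside the container.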

When I use PyCharm and odoo, [Errno 2] No such file or directory Process finished with exit code 2

I installed odoo in a local Docker container.
I want to use PyCharm to edit and run my Python files inside the container, so I mounted my local custom_addons directory into the container and ran the file sf.py in PyCharm, but I encountered the error message below:
91626f8b18b:python -u /opt/project/custom_addons/sf.py
python: can't open file '/opt/project/custom_addons/sf.py': [Errno 2] No such file or directory
Process finished with exit code 2
Here is my screenshot:
And here is a screenshot of my Python interpreter settings in PyCharm:
The issue is with your path, sir. Right-click the file sf.py and copy its path via the "Copy Path" option in the context menu.
What OS are you running PyCharm on? As the JetBrains blog notes:
"This container mounts your project directory into the container at /opt/project in the container. Note: On Linux, you currently have to perform this volume mapping manually."
https://blog.jetbrains.com/pycharm/2015/12/using-docker-in-pycharm/
(this may have changed in the meantime)
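On Linux, a hypothetical manual mapping for this setup could look like the following one-liner; the image name is a placeholder, and /opt/project is the mount point named in the quote:
docker run -v /path/to/your/project:/opt/project some_odoo_image
After that, /opt/project/custom_addons/sf.py should exist inside the container, matching the path PyCharm tries to run.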

shutil.copy2 on a cifs share only works as root

- when writing the file to the cifs share from linux as root using python 2.7 shutil: no issue
- when writing the file manually using cp as a normal user: no issues
- when using python shutil.copy or copy2 as a normal user:
File "/usr/lib64/python2.7/shutil.py", line 120, in copy
copymode(src, dst)
File "/usr/lib64/python2.7/shutil.py", line 91, in copymode
os.chmod(dst, mode)
OSError: [Errno 1] Operation not permitted: '/mnt/RW_H-drive/file.csv
I was wrong about Debian vs CentOS in the comment above; the Debian system drive died and I'm running my jobs on an enterprise box where the only possibilities are CentOS or SUSE, so I can't compare /etc/fstab files. I was trying to mount the share using dir_mode=0777,file_mode=0777 without the uid=xxxx. I don't understand why, but when you specify dir_mode and file_mode you get the permissions you expect and can write files to the share manually, yet shutil fails with the error above. If you remove dir_mode and file_mode and use uid=xxxx, where xxxx is the user ID of any locally recognised user (local, LDAP, or NIS), then permissions work as expected both manually (cp, touch, etc.) and via python with shutil. I may have had the same issue with Debian when I first set up the fstab file; that's why I had this little bell telling me to try uid, nothing else. The bizarre thing is that the UID of the user in fstab and the cifs user are unrelated as far as I recall; they don't even have to have the same name. Also, both mount lines work when you change files on the share as the root user.
Example:
//nasx/sharex/ /mnt/RW_H-drive/ cifs credentials=/root/smbcredentials,_netdev,uid=4321 0 0
Rather than:
//nasx/sharex/ /mnt/RW_H-drive/ cifs credentials=/root/smbcredentials,_netdev,file_mode=0777,dir_mode=0777 0 0
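As the traceback shows, shutil.copy calls copymode, which runs os.chmod on the destination, and on a CIFS mount without a matching uid that chmod is what gets refused. If changing fstab isn't an option, one Python-side workaround (a sketch, with a hypothetical source path) is to copy only the data and skip the metadata step:

import shutil

# copyfile copies file contents only; unlike copy/copy2 it never calls
# os.chmod, so it sidesteps the Operation not permitted error.
shutil.copyfile('/tmp/file.csv', '/mnt/RW_H-drive/file.csv')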

Save file from Python script to Docker Container

I have a simple Python script, inside of a Docker container, that:
- makes a call to an external API;
- receives a license file back from the API.
I want to save the returned license file to my directory inside of the Docker container.
This is what I have done:
r = requests.post("http://api/endpoint", headers=headers, data=data, auth=("", ""))
f = open('test.lic', 'wb')  # binary mode, since r.content is bytes
f.write(r.content)
f.close()
My understanding is that this would create a test.lic file if one did not already exist, or open the existing test.lic file, and write the content of the response to it. However, this is not working: no file is being saved to my directory inside of the Docker container. If I run these lines from a Python shell it works, so I'm guessing it has something to do with being inside of a Docker container.
It could be that the file is getting saved, but it is in the working directory in the container.
The docker docs say:
The default working directory for running binaries within a container is the root directory (/), but the developer can set a different default with the Dockerfile WORKDIR command.
So the file may be ending up in the root directory / of your container.
You could add a print of os.getcwd() to your script (if you are able to see the output, which might require the docker logs command) to verify this. Either way, the file will likely be in the location returned by os.getcwd().
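A minimal sketch of that check, with a hypothetical absolute target path to remove the ambiguity entirely:

import os

print(os.getcwd())  # where a relative path like 'test.lic' actually lands

# Writing to an absolute path (here a hypothetical /data directory)
# avoids depending on the container's working directory at all.
with open('/data/test.lic', 'wb') as f:
    f.write(r.content)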
In my case the file was saved to a different container. Check for this if you have multiple containers in your docker-compose setup.

cannot compile cpp files under the python pyc script file

I am trying to compile and execute my homework cpp file under the Python script file that our lecturer gave us. The how-to manual he sent says to use:
c:\>python ./submit.pyc problemID -u username -p password -b  // submit.pyc is already given to us
and here is the manifest.txt we were given:
[main]
problem = gc
build =
g++ main.cpp -o solver
run =
./solver %f
My cpp file normally runs like this:
./solver input_file
However, I have to do this under Windows. I have Python 2.7.x installed and python.exe is in the command PATH. I can't run it over the school's linux ssh system because it has Python 2.4.x installed and I can't touch it.
Anyway, when I execute the line above, it returns:
Command execution failed:
g++ solver.cpp -o solver
I think I've told you everything I can. So, any idea what else I have to do, other than asking the lecturer? :)
For the above to work, it needs to be able to find g++, so you need to add the directory it resides in to the PATH environment variable. This can be done from within your Python script or on the command line with:
set PATH=Where\g++\lives;%PATH%
This will only apply within the current DOS session.
Alternatively, you can add it permanently through System Settings -> Advanced Settings -> Environment Variables.
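Done from Python instead, a minimal sketch might look like this, assuming you can run it as a small wrapper before submit.pyc spawns g++ (the MinGW path is hypothetical; substitute wherever your g++ lives):

import os
import subprocess

# Prepend the (hypothetical) g++ directory so child processes inherit it.
os.environ["PATH"] = r"C:\MinGW\bin" + os.pathsep + os.environ["PATH"]

# Then launch the submission script with the updated environment.
subprocess.call(["python", "./submit.pyc", "problemID",
                 "-u", "username", "-p", "password", "-b"])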
You could also look into using a Python virtual environment on the school's linux system.
