Is it possible to write a Python program that automatically organizes downloads from WhatsApp Web?
By default, when you download an image (or any file) from WhatsApp Web it is saved to the folder "C:\Users\Name_User\Downloads" for Windows users.
The purpose of the program is to change the default directory dynamically and store each download according to the number (or name) of the contact the file comes from.
Is this possible in Python?
Sure — you can list and manipulate files with the standard os and shutil modules (copy, delete and move files, create directories, etc.). A third-party module called watchdog can also monitor directories, or even individual files, for changes.
I want to share files with my clients.
Files are created on my server every day (e.g. 20210701_A, 20210702_A, ...).
Each client will download that day's file.
I have already written the Python code that chooses the right file.
But how can my clients download the files over the web?
(If possible, I want to use Python.)
I would like to open different applications, e.g. Chrome, Edge, VLC and Excel, via Python.
Based on the number of apps and the screen resolution, I would then like to resize them.
How could I access these settings with Python?
I only know Tkinter, and I think it only handles newly created windows rather than applications that are already running.
You can check https://github.com/pywinauto/pywinauto
This will allow you to manipulate applications that were started by your Python script or that are already running.
There is also a specific SO tag you can check: [pywinauto]
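A minimal sketch, assuming Windows and the pywinauto package; the `tile_rects` helper is my own (not part of pywinauto), and the two executables and the 1920x1080 resolution are just examples:

```python
def tile_rects(n, screen_w, screen_h):
    """Split the screen into n side-by-side columns (simplest layout)."""
    w = screen_w // n
    return [(i * w, 0, w, screen_h) for i in range(n)]

if __name__ == "__main__":
    from pywinauto import Application  # third-party, Windows-only

    commands = ["notepad.exe", "mspaint.exe"]  # example apps
    apps = [Application(backend="uia").start(cmd) for cmd in commands]
    for app, (x, y, w, h) in zip(apps, tile_rects(len(apps), 1920, 1080)):
        # move_window also resizes when width/height are given
        app.top_window().move_window(x=x, y=y, width=w, height=h)
```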
I'm progressively adding images to a dropbox folder remotely which I then need to download on my raspberry pi 3.
The thing is I only need the latest uploaded image in that folder so that I can classify it remotely using some code deployed on my raspberry pi 3.
I don't know the Dropbox API well, so I don't know if there's any functionality that directly implements this. Instead I'm trying to download the entire folder with all the images locally and then select the image I want.
The Dropbox API v2 added functionality to download an entire folder as a zip file, but whenever I implement the code from the API docs and save the file locally, the local zip file is reported as corrupt and can't be opened.
Does anyone know how this can be implemented in Python?
Edit: Or maybe shed light on whether there's a simpler way to download the latest uploaded image in a folder without hard-coding that specific image's name or link?
https://www.dropbox.com/developers/documentation/http/documentation#files-download_zip
Start by getting the download working from a Linux terminal using curl, then work your way up to making the HTTP request with the Python requests library. That way you can debug it systematically. Also make sure there aren't any issues with Dropbox file permissions or API tokens.
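For reference, a stdlib-only sketch of the files/download_zip call (the token and folder path are placeholders). The usual cause of a "corrupt" zip is writing the response as decoded text instead of raw bytes. For the edit about fetching only the latest image, the official dropbox SDK's files_list_folder result can instead be sorted by each entry's server_modified timestamp to find the newest file, avoiding the zip entirely.

```python
import json
import urllib.request

DOWNLOAD_ZIP_URL = "https://content.dropboxapi.com/2/files/download_zip"

def zip_headers(token, folder_path):
    """Headers Dropbox expects for the files/download_zip endpoint."""
    return {
        "Authorization": "Bearer " + token,
        "Dropbox-API-Arg": json.dumps({"path": folder_path}),
    }

def download_folder_zip(token, folder_path, out_file):
    req = urllib.request.Request(
        DOWNLOAD_ZIP_URL, data=b"", headers=zip_headers(token, folder_path))
    with urllib.request.urlopen(req) as resp, open(out_file, "wb") as f:
        # Write raw bytes in *binary* mode; writing resp text or opening
        # the file in text mode produces zips reported as corrupt.
        f.write(resp.read())
```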
I want to copy my own photos from a given web directory to my Raspberry Pi so I can display them in a slideshow.
I'm looking for a "simple" script to download these files using Python. I can then paste this code into my slideshow so that it refreshes the pictures every day.
I suppose the Python wget module would be the tool to use. However, I can only find examples of downloading a single file, not a whole directory.
Any ideas how to do this?
It depends on the server used to host the images and whether the script can see a list of images to download. If that list isn't available in some form, e.g. a web page listing or a JSON or XML feed, there is no way for a script to discover the files dynamically: the script doesn't "know" what's there.
Another option is for a Python script to SSH into the server, list the contents of the directory and then download the files. This presumes you have programmatic access to the server.
If you have no such access to the server and there is no dynamic list, the last option is to scrape the web page where you know the photos are, extract their paths and download them. However, this may also pick up unwanted data such as other images, icons, etc.
https://medium.freecodecamp.org/how-to-scrape-websites-with-python-and-beautifulsoup-5946935d93fe
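Along the lines of the scraping option, a stdlib-only sketch (no BeautifulSoup needed for a simple Apache-style directory listing; the base URL is a placeholder) that collects image links from a page and downloads them:

```python
import re
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags on a directory-index page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def image_links(html, base_url):
    """Return absolute URLs of the image links found in the page."""
    parser = LinkCollector()
    parser.feed(html)
    return [urljoin(base_url, h) for h in parser.links
            if re.search(r"\.(jpe?g|png|gif)$", h, re.I)]

if __name__ == "__main__":
    base = "http://example.com/photos/"  # placeholder directory URL
    page = urllib.request.urlopen(base).read().decode("utf-8", "replace")
    for url in image_links(page, base):
        urllib.request.urlretrieve(url, url.rsplit("/", 1)[-1])
```

As the answer notes, this only works if the page actually lists the photos; filtering by extension is what keeps icons and other page furniture out.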
I am writing a Python program that will automatically download PDF files from a website once a day.
When testing I noticed that the downloaded files had the correct extension but were very small (< 1 kB), compared to the normal size of about 100 kB when downloaded manually.
Can a website block a program from automatically downloading files?
Is there anything that can be done about this?
Yes. Cloudflare (and similar services) can block bots from downloading files. Blocking is usually done by detecting the user agent, or by including JavaScript in the page that a plain HTTP client won't execute. I would open the downloaded "pdf" in Notepad to see what it actually contains (it is probably an HTML error or challenge page), and try adding a user-agent header in your Python code.
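A sketch of both suggestions — checking what the file actually contains and sending a browser-like user agent. The UA string and URL are placeholders, and none of this is guaranteed to get past an active bot check:

```python
import urllib.request

def looks_like_pdf(data):
    """Real PDFs start with the magic bytes %PDF-; a tiny file that
    doesn't is almost certainly an HTML error or challenge page."""
    return data[:5] == b"%PDF-"

def fetch_pdf(url, out_path):
    # The default "Python-urllib/3.x" user agent is easy to flag;
    # a browser-like one sometimes gets a normal response instead.
    req = urllib.request.Request(url, headers={
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"})
    with urllib.request.urlopen(req) as resp:
        data = resp.read()
    with open(out_path, "wb") as f:
        f.write(data)
    return looks_like_pdf(data)
```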