Generate filenames with uuid in Python

How do I create filenames in Python with a UTC datetime, hostname, and GUID? For example, I need a file named in the format below:

sample2018-10-25-18-25-36669_devicename_uuid

I am already opening a file to perform some string operations and store the result in the same file:

with open(r"path/sample.txt") as file:
    # some operations
    print('exiting')

When I open the file, is it possible to open it with a filename like the one above? It would be great if I could create the filename when starting the open operation; alternatively, can I create a file, do all the operations, and then rename the file to the format above? How do I proceed with this? I am very new to Python.

Sure, you can generate a filename in Python dynamically.
Here is a simple code example that generates a file name in the format you describe.
import os
import socket
from datetime import datetime
from uuid import uuid4

# Note: %S is seconds; the original %s (lowercase) is a platform-dependent
# directive for seconds since the epoch and is not portable.
dt = datetime.utcnow().strftime("%Y-%m-%d-%H-%M-%S")
path = 'path'
hostname = socket.gethostname()
filename = f"sample{dt}-{hostname}-{uuid4()}"
with open(os.path.join(path, filename), 'w') as f:
    f.write('some_content')
If you want to get a unique hardware ID with Python, please check this link.
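If you would rather do all the work first and rename the file afterwards, as the question also mentions, here is a minimal sketch using os.rename; the temporary name working.txt is just a placeholder:

import os
import socket
from datetime import datetime
from uuid import uuid4

# Do all the work under a plain placeholder name first.
with open('working.txt', 'w') as f:
    f.write('some_content')

# Then rename the file to the timestamp-hostname-uuid format afterwards.
dt = datetime.utcnow().strftime("%Y-%m-%d-%H-%M-%S")
final_name = f"sample{dt}-{socket.gethostname()}-{uuid4()}"
os.rename('working.txt', final_name)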

Related

Python: Look for Files that will constantly change name

So, I'll explain briefly my idea, then what I've tried, and the errors I've got so far.
I want to make a Python script that will:
Search for files in a directory, for example /home/mystuff/logs.
If it finds one, execute a command like print('Errors found'), and then stop.
If not, keep executing on and on.
But other logs will be there too, so my intention is to make Python read only the logs in /home/mystuff/logs that match the current date/time, since the script will be executed every 2 minutes.
Here is my code:
import time
import os
from time import sleep

infile = r"/home/mystuff/logs`date +%Y-%m-%d`*"
keep_phrases = ["Error",
                "Lost Connection"]

while True:
    with open(infile) as f:
        f = f.readlines()
    if phrase in f:
        cmd = ['#print something']
        erro = 1
    else:
        sleep(1)
I've searched for a few regex recipes for the current date, but found nothing about files whose names keep changing with the date/time. Do you have any ideas?
You can't use shell features like command substitutions in file names. To the OS, and to Python, a file name is just a string. But you can easily create a string which contains the current date and time.
from datetime import datetime
infile = r"/home/mystuff/logs%s" % datetime.now().strftime('%Y-%m-%d')
(The raw string doesn't do anything useful, because the string doesn't contain any backslashes. But it's harmless, so I left it in.)
You also can't open a wildcard; but you can expand it to a list of actual file names with glob.glob(), and loop over the result.
from glob import glob

for file in glob(infile + '*'):
    with open(file, 'r') as f:
        # ...
If you are using a while True: loop you need to calculate today's date inside the loop; otherwise you will be perpetually checking for files from the time when the script was started.
In summary, your changed script could look something like this. I have changed the infile variable name here because it isn't actually a file or a file name, and fixed a few other errors in your code.
# Unused imports
# import time
# import os
from datetime import datetime
from glob import glob
from time import sleep

keep_phrases = ["Error",
                "Lost Connection"]

while True:
    pattern = "/home/mystuff/logs%s*" % datetime.now().strftime('%Y-%m-%d')
    for file in glob(pattern):
        with open(file) as f:
            for line in f:
                if any(phrase in line for phrase in keep_phrases):
                    cmd = ['#print something']
                    erro = 1
                    break
    sleep(120)

Creating view in browser functionality with python

I have been struggling with this problem for a while but can't seem to find a solution. I need to open a file in the browser, and after the user closes it, the file should be removed from their machine. All I have is the binary data for that file. If it matters, the binary data comes from Google Storage using the download_as_string method.
After doing some research I found that the tempfile module would suit my needs, but I can't get the tempfile to open in browser because the file only exists in memory and not on the disk. Any suggestions on how to solve this?
This is my code so far:
import os
import tempfile
import webbrowser

# grabbing binary data earlier on
temp = tempfile.NamedTemporaryFile()
temp.name = "example.pdf"
temp.write(binary_data_obj)
temp.close()
webbrowser.open('file://' + os.path.realpath(temp.name))
When this is run, my computer gives me an error that says that the file cannot be opened since it is empty. I am on a Mac and am using Chrome if that is relevant.
You could try using a temporary directory instead:
import os
import tempfile
import webbrowser

# I used an existing pdf I had laying around as sample data
with open('c.pdf', 'rb') as fh:
    data = fh.read()

# Gives a temporary directory you have write permissions to.
# The directory and files within will be deleted when the with context exits.
with tempfile.TemporaryDirectory() as temp_dir:
    temp_file_path = os.path.join(temp_dir, 'example.pdf')
    # write a normal file within the temp directory
    with open(temp_file_path, 'wb+') as fh:
        fh.write(data)
    webbrowser.open('file://' + temp_file_path)
This worked for me on Mac OS.
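As an aside on why the original snippet failed: NamedTemporaryFile deletes the underlying file when it is closed, and reassigning temp.name only changes the Python attribute, not the file on disk. An alternative sketch, assuming you are willing to clean the file up yourself, keeps the temp file past close() with delete=False:

import os
import tempfile
import webbrowser

binary_data_obj = b'%PDF-...'  # placeholder for the real binary data

# delete=False keeps the file on disk after close(), so the browser can read it.
temp = tempfile.NamedTemporaryFile(suffix='.pdf', delete=False)
try:
    temp.write(binary_data_obj)
finally:
    temp.close()

webbrowser.open('file://' + temp.name)
# The file must be removed manually later, e.g. os.remove(temp.name),
# once the user is done with it.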

How to put the file names from the same folder, in order, in the same list file?

How do I create a list of the files included in the same folder?
In this question, I asked about how to put all my file names from the same folder in one file.
import os

path_For_Numpy_Files = 'C:\\Users\\user\\My_Test_Traces\\1000_Traces_npy'
with open('C:\\Users\\user\\My_Test_Traces\\Traces.list_npy', 'w') as fp:
    fp.write('\n'.join(os.listdir(path_For_Numpy_Files)))
I have 10000 numpy files in my folder, so the result is:
Tracenumber=01_Pltx1
Tracenumber=02_Pltx2
Tracenumber=03_Pltx3
Tracenumber=04_Pltx4
Tracenumber=05_Pltx5
Tracenumber=06_Pltx6
Tracenumber=07_Pltx7
Tracenumber=08_Pltx8
Tracenumber=09_Pltx9
Tracenumber=10_Pltx10
Tracenumber=1000_Pltx1000
Tracenumber=100_Pltx100
Tracenumber=101_Pltx101
The order is very important for analysing my results. How do I keep that order when creating the list? I mean that I need my results like this:
Tracenumber=01_Pltx1
Tracenumber=02_Pltx2
Tracenumber=03_Pltx3
Tracenumber=04_Pltx4
Tracenumber=05_Pltx5
Tracenumber=06_Pltx6
Tracenumber=07_Pltx7
Tracenumber=08_Pltx8
Tracenumber=09_Pltx9
Tracenumber=10_Pltx10
Tracenumber=11_Pltx11
Tracenumber=12_Pltx12
Tracenumber=13_Pltx13
I tried to iterate it by using:

import os
import re

path_For_Numpy_Files = 'C:\\Users\\user\\My_Test_Traces\\1000_Traces_npy'
with open('C:\\Users\\user\\My_Test_Traces\\Traces.list_npy', 'w') as fp:
    list_files = os.listdir(path_For_Numpy_Files)
    list_files_In_Order = sorted(list_files, key=lambda x: (int(re.sub('D:\tt', '', x)), x))
    fp.write('\n'.join(sorted(os.listdir(list_files_In_Order))))
It gives me this error:
invalid literal for int() with base 10: ' Tracenumber=01_Pltx1'
How can I solve this problem, please?
I edited the solution; it may work now. It sorts your files based on modification time.
import os

path_For_Numpy_Files = 'C:\\Users\\user\\My_Test_Traces\\1000_Traces_npy'
path_List_File = 'C:\\Users\\user\\My_Test_Traces\\Traces.list_npy'
with open(path_List_File, 'w') as fp:
    os.chdir(path_For_Numpy_Files)
    list_files = os.listdir(os.getcwd())
    fp.write('\n'.join(sorted(list_files, key=os.path.getmtime)))
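If modification times do not match the numbering, a natural sort on the number embedded in each name is an alternative. Here is a minimal sketch, assuming every name contains its trace number as the first run of digits, like 'Tracenumber=100_Pltx100':

import os
import re

path_For_Numpy_Files = 'C:\\Users\\user\\My_Test_Traces\\1000_Traces_npy'

def trace_number(name):
    # Pull the first run of digits out of e.g. 'Tracenumber=100_Pltx100' -> 100.
    match = re.search(r'\d+', name)
    return int(match.group()) if match else 0

list_files = sorted(os.listdir(path_For_Numpy_Files), key=trace_number)
with open('C:\\Users\\user\\My_Test_Traces\\Traces.list_npy', 'w') as fp:
    fp.write('\n'.join(list_files))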

How to download a CSV file from the World Bank's dataset

I would like to automate the download of CSV files from the World Bank's dataset.
My problem is that the URL corresponding to a specific dataset does not lead directly to the desired CSV file but is instead a query to the World Bank's API. As an example, this is the URL to get the GDP per capita data: http://api.worldbank.org/v2/en/indicator/ny.gdp.pcap.cd?downloadformat=csv.
If you paste this URL in your browser, it will automatically start the download of the corresponding file. As a consequence, the code I usually use to collect and save CSV files in Python is not working in the present situation:
baseUrl = "http://api.worldbank.org/v2/en/indicator/ny.gdp.pcap.cd?downloadformat=csv"
remoteCSV = urllib2.urlopen("%s" %(baseUrl))
myData = csv.reader(remoteCSV)
How should I modify my code in order to download the file coming from the query to the API?
This will download the zip, open it, and give you a csv reader over whichever file in it you want.
import urllib2
import StringIO
from zipfile import ZipFile
import csv

baseUrl = "http://api.worldbank.org/v2/en/indicator/ny.gdp.pcap.cd?downloadformat=csv"
remoteCSV = urllib2.urlopen(baseUrl)

# We create a StringIO object so that we can work on the results of the
# request (a string) as though it is a file.
sio = StringIO.StringIO()
sio.write(remoteCSV.read())

# We now create a ZipFile object pointed to by 'z' and we can do a few things here:
z = ZipFile(sio, 'r')
print z.namelist()
# A list with the names of all the files in the zip you just downloaded.
# We can use z.namelist()[1] to refer to 'ny.gdp.pcap.cd_Indicator_en_csv_v2.csv'
with z.open(z.namelist()[1]) as f:
    # Opens the 2nd file in the zip
    csvr = csv.reader(f)
    for row in csvr:
        print row
For more information see ZipFile Docs and StringIO Docs
import os
import urllib
import zipfile
from StringIO import StringIO

package = StringIO(urllib.urlopen("http://api.worldbank.org/v2/en/indicator/ny.gdp.pcap.cd?downloadformat=csv").read())
zip = zipfile.ZipFile(package, 'r')
pwd = os.path.abspath(os.curdir)
for filename in zip.namelist():
    csv = os.path.join(pwd, filename)
    with open(csv, 'w') as fp:
        fp.write(zip.read(filename))
    print filename, 'downloaded successfully'
From here you can use your approach to handle CSV files.
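Note that both snippets above are Python 2 (urllib2, StringIO, print statements). A minimal Python 3 sketch of the same idea, assuming the same World Bank URL, would use urllib.request and io.BytesIO:

import csv
import io
import urllib.request
from zipfile import ZipFile

baseUrl = "http://api.worldbank.org/v2/en/indicator/ny.gdp.pcap.cd?downloadformat=csv"
with urllib.request.urlopen(baseUrl) as response:
    payload = io.BytesIO(response.read())

with ZipFile(payload, 'r') as z:
    print(z.namelist())  # names of the files inside the downloaded zip
    # Pick whichever entry holds the data you want, e.g. the second one:
    with z.open(z.namelist()[1]) as f:
        # ZipFile.open yields bytes; csv.reader wants text, so decode.
        csvr = csv.reader(io.TextIOWrapper(f, encoding='utf-8'))
        for row in csvr:
            print(row)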
We have a script to automate access and data extraction for World Bank World Development Indicators like: https://data.worldbank.org/indicator/GC.DOD.TOTL.GD.ZS
The script does the following:
Downloading the metadata and data
Extracting metadata and data
Converting to a Data Package
The script is Python based and uses Python 3. It has no dependencies outside of the standard library. Try it:
python scripts/get.py
python scripts/get.py https://data.worldbank.org/indicator/GC.DOD.TOTL.GD.ZS
You can also read our analysis of World Bank data:
https://datahub.io/awesome/world-bank
This is more a suggestion than a solution: you can use pd.read_csv to read a csv file directly from a URL. (Note that the World Bank URL above returns a zip archive, as the other answers show, so this applies when the URL points directly at a CSV file.)
import pandas as pd
data = pd.read_csv('http://url_to_the_csv_file')

Emails via Python - Paste contents of excel/csv as a formatted table onto the mail body

I am trying to send mails via Python using smtplib. My main concern is to get the contents of a csv/excel file and paste the data as-is (tabular format) into the body of the email being sent out. I have the following snippet ready to search for the file and print the contents on the shell. How would I get the same output into a mail body?
from os import listdir
import csv
import os

# Search for a csv in the specified folder
directory = "folder_path"

def find_csv_filenames(path_to_dir, suffix="Data.csv"):
    filenames = listdir(path_to_dir)
    return [filename for filename in filenames if filename.endswith(suffix)]

filenames = find_csv_filenames(directory)
for name in filenames:
    datafile = name
    print(name)
    path = directory + '//' + datafile
    # Read the selected csv
    with open(path, 'r') as csvfile:
        spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
        for row in spamreader:
            print(', '.join(row))
TIA for your help.
Create a StringIO instance, say csvText, and instead of print use
csvText.write(", ".join(row) + "\n")
The final newline is necessary because, unlike print, write does not add one automatically. Finally (i.e. after the loop), calling csvText.getvalue() will return what you want to mail.
I would also suggest not gluing the file specification together yourself, but calling os.path.join() instead.
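A minimal sketch of that approach, assuming a local SMTP server and placeholder addresses (sender@example.com, receiver@example.com, and the file path are hypothetical):

import csv
import smtplib
from email.mime.text import MIMEText
from io import StringIO

# Accumulate the CSV rows into a string buffer instead of printing them.
csvText = StringIO()
with open('folder_path//Data.csv', 'r') as csvfile:  # placeholder path
    spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
    for row in spamreader:
        csvText.write(', '.join(row) + '\n')

# Use the accumulated text as the mail body.
msg = MIMEText(csvText.getvalue())
msg['Subject'] = 'CSV contents'
msg['From'] = 'sender@example.com'
msg['To'] = 'receiver@example.com'

with smtplib.SMTP('localhost') as server:
    server.send_message(msg)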
