Python alias/pointer file creation

I have multiple directories that require the same large JSON file. Rather than storing the JSON file multiple times, I would like to store it in one directory and create an alias in each of the other directories that points to the actual file.
How do I create a shortcut/alias for a file in Python?

You can create a symlink to the file. In Python this looks like:
import os
# creates the_link as a symbolic link pointing at my_json_file.json
os.symlink("my_json_file.json", "the_link")
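For the multiple-directories case in the question, a small sketch (the directory names here are hypothetical) could look like:
import os

# the real file lives in data/; every other directory gets a symlink back to it
source = os.path.abspath("data/my_json_file.json")
for directory in ["dir_a", "dir_b", "dir_c"]:
    link = os.path.join(directory, "my_json_file.json")
    if not os.path.lexists(link):
        os.symlink(source, link)
Note that os.symlink creates the link at the second argument's path, and that on Windows creating symlinks may require elevated privileges.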

Related

How to create empty files in a folder with names and extensions from a list in a file (Linux / Python)

I have a list of file names in a single file, record.txt, formatted like this:
/home/user/working/united.txt
/home/user/working/temporary.txt
/home/user/working/fruits.txt
In the other folder, the only file I have is the temporary one, already in JSON format as temporary.json, which I need to keep as JSON.
I want to create all the remaining files that are listed in record.txt but missing from the folder, so that the missing ones exist as blank files and at the end I have all three files. temporary.json already has data, so I cannot simply regenerate every file; only the files that are missing from the folder but present in the list should be created.
At the end, I want to get like below by python or shell script.
united.json
temporary.json
fruits.json
and temporary.json still has data
This command will "create" the files without erasing existing ones:
while read i; do touch "${i%.*}.json"; done < record.txt
The modification time of temporary.json will change to the current time, but its contents are preserved.
In Python, you can use os.path.isfile to check whether a file exists and then create it with open(filename, 'w'):
from os.path import isfile, splitext

_createTxtFlag = True
_createJSONFlag = True

with open('record.txt') as fil:
    for l in fil.readlines():
        l = l.rstrip()  # remove the end-of-line character
        # create the .txt file if it does not exist
        if _createTxtFlag:
            if not isfile(l):
                with open(l, 'w') as newFil:
                    pass
        # create the matching .json file if it does not exist
        if _createJSONFlag:
            json_name = f"{splitext(l)[0]}.json"
            if not isfile(json_name):
                with open(json_name, 'w'):
                    pass
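A shorter variant is a sketch using pathlib, reading the same record.txt; Path.touch creates missing files and leaves existing content intact:
from pathlib import Path

for line in Path('record.txt').read_text().splitlines():
    json_path = Path(line.strip()).with_suffix('.json')
    # touch() creates the file if missing; existing data is preserved,
    # though the modification time is updated, as with the shell answer
    json_path.touch(exist_ok=True)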

Python plugin structure - execute code from another file

Let's say I have a folder containing the main file main.py and a configuration file config.json.
In the JSON file there are multiple entries, each containing a filename and the name of a method within that file.
Depending on user input in main.py, I get the corresponding filename and method from config.json, and now I want to execute that method.
I tried the following
file = reports[id]["filename"]
method = reports[id]["method"]
# which would give me: file = 'test.py' and method = 'execute'
import file
file.method()
This didn't work, obviously. The problem is that I can't know ahead of time which files will exist.
Other developers would add scripts to a specific folder and add their entries to the configuration file (kind of like a plugin system). Does anybody have a solution here?
Thanks in advance
Thanks for your input... importlib was the way to go. I used this answer here:
How to import a module given the full path?
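A minimal sketch of that approach (the module name "plugin" is arbitrary; file and method are the hypothetical values from the question):
import importlib.util

file = 'test.py'    # filename looked up in config.json
method = 'execute'  # method name looked up in config.json

# load the module from its file path
spec = importlib.util.spec_from_file_location("plugin", file)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)

# look the method up by name and call it
getattr(module, method)()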

How do I create an empty csv file in a specific folder?

I'm not sure how to create an empty CSV file that I can open later to save some data with Python. How do I do it?
Thanks
An empty CSV file can be created like any other file. From bash, you can do touch /path/to/my/file.csv.
Note that you do not have to create an empty file for Python to write to. Python will do so automatically if you write to a non-existing file. In fact, creating an empty file from within Python means opening it for writing but not writing anything to it.
with open("foo.csv", "w") as my_empty_csv:
# now you have an empty file already
pass # or write something to it already
You can also use pandas to do the same, as below:
import pandas as pd
df = pd.DataFrame(list())
df.to_csv('empty_csv.csv')
After creating the file above, you can edit the exported file as required.
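To come back to "open it later to save some data": a small sketch using the standard csv module (foo.csv and the row values are just examples) would be:
import csv

# append rows to the previously created (empty) file
with open("foo.csv", "a", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["col1", "col2"])  # a header row
    writer.writerow([1, 2])            # a data row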

File renaming using Python

I am trying to rename a file that is auto-generated by some local modules, and I was wondering whether os.listdir is the only way to filter/narrow down to this file.
This file is always generated before it is removed, and the code then generates the next one (still in the same directory) based on the next item in a list.
Basically, whenever this file is generated, it comes in the following file path:
/user_data/.tmp/tempLibFiles/2015_03_16_192212_182096.con
I only want to rename the 2015_03_16_192212_182096 part to connectionFile while keeping the rest the same.
You can also use the glob module to narrow down the list of files to the one that matches a particular pattern. For example:
import glob
files = glob.glob('/user_data/.tmp/tempLibFiles/*.con')
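Building on that, a sketch of the rename itself (assuming, as the question describes, that only one .con file is present at a time):
import os
import glob

for path in glob.glob('/user_data/.tmp/tempLibFiles/*.con'):
    directory = os.path.dirname(path)
    ext = os.path.splitext(path)[1]  # keep the .con extension
    os.rename(path, os.path.join(directory, 'connectionFile' + ext))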

How to run logic over several large apache log files in Python?

I have a bunch of Apache log files, which I need to parse and extract information from. My script works fine for a single file, but I'm wondering about the best approach to handle multiple files.
Should I:
- loop through all files and create a temporary file holding all contents
- run my logic on the concatenated file
Or
- loop through every file
- run my logic file by file
- try to merge the results of every file
File-wise, I'm looking at logs of about a year, with roughly 2 million entries per day, reported for a large number of machines. My single-file script generates an object with "entries" for every machine, so I'm wondering:
Question:
Should I generate a joint temporary file or run file-by-file, generate file-based-objects and merge x files with entries for the same y machines?
You could use glob and the fileinput module to effectively loop over all of them and treat them as one "large file":
import fileinput
from glob import glob
log_files = glob('/some/dir/with/logs/*.log')
for line in fileinput.input(log_files):
    pass  # do something with each line
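With this streaming approach there is nothing to merge afterwards, since you can aggregate per machine as you go. A sketch, assuming a hypothetical parse_line() taken from your existing single-file script:
import fileinput
from collections import defaultdict
from glob import glob

entries_per_machine = defaultdict(list)

log_files = glob('/some/dir/with/logs/*.log')
for line in fileinput.input(log_files):
    # parse_line is the hypothetical per-line parser from the single-file
    # script, returning a (machine, entry) pair
    machine, entry = parse_line(line)
    entries_per_machine[machine].append(entry)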
