Merge CSV files in ADLS2 that are prepared through Databricks - python

While running Databricks code that prepares CSV files and loads them into ADLS2, the output is split into many CSV files as it is loaded into ADLS2.
Is there a way to merge these CSV files in ADLS2 through pyspark?
Thanks

Is there a way to merge these CSV files in ADLS2 through pyspark?
As far as I know, a Spark DataFrame does write its output as separate part files. Theoretically, you could use the spark.read.csv method, which accepts a list of path strings as a parameter:
>>> df = spark.read.csv('path')
Then use the df.toPandas().to_csv() method to convert to a pandas dataframe and write it out as a single csv. You could pick up some clues from this case: Azure Data-bricks : How to read part files and save it as one file to blob?.
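A minimal sketch of that approach, assuming the part files were written to a folder such as /mnt/test/output/ (the paths here are illustrative, and spark is the session Databricks already provides):
# read all part files in the output folder back into one Spark dataframe
df = spark.read.csv('/mnt/test/output/', header=True)

# collect to the driver and write a single csv via pandas
df.toPandas().to_csv('/dbfs/mnt/test/merged.csv', index=False)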
However, I'm afraid this process may not cope with the high memory consumption, since toPandas() collects everything to the driver. So I'd suggest just using the os package to do the merge job directly. I tested the 2 snippets of code below for your reference.
1st:
import os

path = '/dbfs/mnt/test/'
file_suffix = '.csv'

# list the part files in the mounted folder
files = os.listdir(path)
filtered_files = [path + file for file in files if file.endswith(file_suffix)]
print(filtered_files)

with open(path + 'final.csv', 'w') as final_file:
    for i, file in enumerate(filtered_files):
        with open(file) as f:
            lines = f.readlines()
            # keep the header from the first file only
            final_file.writelines(lines if i == 0 else lines[1:])
2nd:
import os

path = '/dbfs/mnt/test/'
file_suffix = '.csv'

# walk the folder tree so csv files in sub-folders are picked up too
filtered_files = [os.path.join(root, name)
                  for root, dirs, files in os.walk(top=path, topdown=False)
                  for name in files if name.endswith(file_suffix)]
print(filtered_files)

with open(path + 'final2.csv', 'w') as final_file:
    for i, file in enumerate(filtered_files):
        with open(file) as f:
            lines = f.readlines()
            # keep the header from the first file only
            final_file.writelines(lines if i == 0 else lines[1:])
The second one also handles nested folder hierarchies.
In addition, I can offer a way that uses an ADF copy activity to merge multiple csv files into one file in ADLS Gen2.
Please refer to this doc and configure the folder path in the ADLS Gen2 source dataset. Then set the copyBehavior property to MergeFiles. (Besides, you could use a wildcardFileName like *.csv to exclude files you don't want to touch in the specific folder.) The doc describes MergeFiles as follows:
Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name.
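For illustration, a sketch of how that could look in the copy activity's pipeline JSON; the sink type and property names below are my assumptions based on the ADLS Gen2 connector, so verify them against the doc above:
"sink": {
    "type": "AzureBlobFSSink",
    "copyBehavior": "MergeFiles"
}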

Related

Can't zip files from Jupyter Notebook

I am trying to zip files, using the example from https://thispointer.com/python-how-to-create-a-zip-archive-from-multiple-files-or-directory/:
from zipfile import ZipFile

with ZipFile('sample2.zip', 'w') as zipObj2:
    # Add multiple files to the zip
    zipObj2.write('sample_file.csv')
sample2.zip is created, but it is empty. Of course, the csv file exists and is not empty.
I run this code from a Jupyter Notebook.
Edit: I'm using relative paths -
import os
import zipfile

input_dir = "../data/example/"
with zipfile.ZipFile(os.path.join(input_dir, 'f.zip'), 'a') as zipObj2:
    zipObj2.write(os.path.join(input_dir, 'f.tif'))
Did you try closing the zip file so it gets saved?
from zipfile import ZipFile

with ZipFile('sample2.zip', 'w') as zipObj2:
    zipObj2.write('sample_file.csv')
# note: the with statement already closes (and saves) the archive on exit,
# so the explicit close() below is redundant but harmless
zipObj2.close()
I'm a little confused by your question, but if I'm correct it sounds like you're trying to place multiple CSV files within a single zipped file? If so, this is what you're looking for:
import os
import pandas as pd

# collect the csv file names from the directory you wish to combine
files = [f for f in os.listdir("./your_directory") if f.endswith('.csv')]

# initialize an empty DataFrame
all_data = pd.DataFrame()

# iterate through the files and concatenate them onto all_data
for file in files:
    df = pd.read_csv(os.path.join("./your_directory", file))
    all_data = pd.concat([all_data, df])
Then inspect your new DataFrame (all_data) to verify that the contents were transferred.
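Since the question was about zipping, a minimal follow-up sketch that writes the combined frame to one csv and zips it (continuing from all_data above; the archive and file names are illustrative):
from zipfile import ZipFile

# write the combined frame to one csv, then add it to a zip archive
all_data.to_csv('all_data.csv', index=False)
with ZipFile('all_data.zip', 'w') as zf:
    zf.write('all_data.csv')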

I have multiple txt files in 1 folder and I need to insert the txt data into a MySQL table using python

I have multiple txt files in a folder. I need to insert the data from the txt files into a MySQL table.
I also need to sort the files by modified date before inserting the data into the SQL table named TAR.
Below is the content of one of the txt files. I also need to remove the first character in every line:
SSerial1234
CCustomer
IDivision
Nat22
nAembly
rA0
PFVT
fchassis1-card-linec
RUnk
TP
Oeka
[06/22/2020 10:11:50
]06/22/2020 10:27:22
My code only reads all the files in the folder and prints the contents of each file. I'm not sure how to sort the files before reading them one by one.
Is there also a way to read only specific files (JPE*.log)?
import os

# raw string so the backslashes in the Windows path aren't treated as escapes
for path, dirs, files in os.walk(r"C:\TAR\TARS_Source"):
    for f in files:
        fileName = os.path.join(path, f)
        with open(fileName, "r") as myFile:
            print(myFile.read())
Use the glob.glob method to get all matching files using a glob pattern (not a regex) like the following...
import glob
files = glob.glob('./JPE*.log')
And you can use the following to sort the files:
sorted_files = sorted(files)
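Note that sorted(files) sorts by name; since the question asks for modified-date order and for dropping the first character of each line, here is a minimal sketch of both (the folder path is illustrative, and the MySQL insert itself is left as a placeholder):
import glob
import os

# pick up only the JPE*.log files and order them by last-modified time
files = sorted(glob.glob('./JPE*.log'), key=os.path.getmtime)

for name in files:
    with open(name) as f:
        for line in f:
            cleaned = line.rstrip('\n')[1:]  # drop the first character
            # insert 'cleaned' into the TAR table here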

Read csv files from multiple folders using a for loop

My code reads from a csv file, performs multiple operations/calculations, and then creates another csv file. I have 8 folders to read/write from, and I want my code to iterate through them one by one.
Let's say I have folders named Folder1 to Folder8. First of all, how do I tell my code to read from a different directory instead of the default one where the python script lives?
This is part of my code:
import csv

# Number_Of_Buses is defined earlier in the full script
MAXCOLS = Number_Of_Buses + 1
Bus_Vol = [[] for _ in range(MAXCOLS)]

# read the columns from CSV
with open('p_voltage_table_output.csv', 'r', newline='') as infile:
    for row in csv.reader(infile, delimiter=','):
        for i in range(MAXCOLS):
            Bus_Vol[i].append(row[i] if i < len(row) else '')

for i in range(1, MAXCOLS):
    dummy = 0
    #print('Bus_Vol[{}]: {}'.format(i, Bus_Vol[i]))
I want to be able to point the code at Folder1 and then iterate through Folder1 to Folder8, which all contain a csv file with the same name.
To read from a directory other than the one where your script is located, you need to give Python the absolute path to that directory.
Windows style: c:\path\to\directory
*nix style: /path/to/directory
In either case it'll be a string.
You didn't specify if your target folders were in the same directory or not. If they are, it's a bit easier.
import os

path_to_parent = "/path/to/parent"
for folder in os.listdir(path_to_parent):
    for csv_file in os.listdir(os.path.join(path_to_parent, folder)):
        full_path = os.path.join(path_to_parent, folder, csv_file)
        # Do whatever to your csv file here
If your folders are spread out on your system, then you have to provide an absolute path to each one:
import os

paths_to_folders = ['/path/to/folder/one', '/path/to/folder/two']
for folder in paths_to_folders:
    for csv_file in os.listdir(folder):
        full_path = os.path.join(folder, csv_file)
        # Do whatever to your csv file
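Applied to the layout in the question (Folder1 to Folder8, each holding a csv file with the same name), a minimal sketch, assuming the folders sit under one parent directory and the file name matches the earlier snippet:
import csv
import os

parent = "/path/to/parent"  # assumed location of Folder1..Folder8
for n in range(1, 9):
    csv_path = os.path.join(parent, 'Folder{}'.format(n), 'p_voltage_table_output.csv')
    with open(csv_path, 'r', newline='') as infile:
        for row in csv.reader(infile, delimiter=','):
            pass  # perform the per-folder calculations here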

Python script not combining csv files

I am trying to combine over 100,000 CSV files (all in the same format) in a folder using the script below. Each CSV file is on average 3-6KB in size. When I run this script, it only opens exactly 47 .csv files and combines them. When I re-run it, it combines only the same .csv files, not all of them. I don't understand why it is doing that.
import os
import glob

# raw string so \U in the path isn't treated as a unicode escape
os.chdir(r"D:\Users\Bop\csv")

want_header = True
out_filename = "combined.files.csv"
if os.path.exists(out_filename):
    os.remove(out_filename)

read_files = glob.glob("*.csv")
with open(out_filename, "w") as outfile:
    for filename in read_files:
        with open(filename) as infile:
            if want_header:
                # write the header once, with an extra Filename column
                outfile.write('{},Filename\n'.format(next(infile).strip()))
                want_header = False
            else:
                next(infile)  # skip this file's header
            for line in infile:
                outfile.write('{},{}\n'.format(line.strip(), filename))
First, check the length of read_files:
read_files = glob.glob("*.csv")
print(len(read_files))
Note that glob isn't necessarily recursive as described in this SO question.
Otherwise your code looks fine. You may also want to consider using the csv library, but note that you may need to adjust the field size limit for really large fields.
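If some of the files actually sit in sub-folders, a recursive pattern (available since Python 3.5) would pick them up; a small sketch:
import glob

# ** matches the current folder and all sub-folders when recursive=True
read_files = glob.glob('**/*.csv', recursive=True)
print(len(read_files))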
Are you sure all your filenames end with .csv? If all the files in this directory contain what you need, then open all of them without filtering:
glob.glob('*')

Python reading unopened files from a folder

Is there a way to read all the unopened files in a folder by passing only the name of one specific file that is present in that folder? I know how to read all the files in a directory by passing the directory name to os.walk, but in this specific problem I can only pass one file name. I need your help with this problem. Thank you.
If I understand correctly, you have the path of a single file, but you want to read all the files in the folder it's located in.
You can achieve this easily:
import os

# filepath is the single file path you were given
dir_name, file_name = os.path.split(filepath)
for root, dirs, files in os.walk(dir_name):
    for file in files:
        # join with root so files in sub-folders open correctly
        with open(os.path.join(root, file)) as f:
            file_content = f.read()
