I'm working on a project to check for duplicate files between two drives and I got stuck on the comparison step.
The output I have now is [Filename, Hash, Location] in two lists called drive1 and drive2.
The output I'd like to end up with is two text files, each listing the files that aren't present on the other drive.
import os
import os.path
import hashlib
from os import path
drive1 = []
drive2 = []
file1 = input("Directory 1 location : ")
file2 = input("Directory 2 location : ")
AFile = open('skrar.txt','w')
AFile.close()
def hash_file(filename):
    # skip anything that isn't a regular file
    if path.isfile(filename) is False:
        return None
    # make a hash object
    md5_h = hashlib.md5()
    # open file for reading in binary mode
    with open(filename, 'rb') as file:
        # read file in chunks and update hash
        chunk = 0
        while chunk != b'':
            chunk = file.read(1024)
            md5_h.update(chunk)
    # return the hex digest
    return md5_h.hexdigest()
with open('Drive1.txt', 'w') as AFile:
    AFile.write(hashlib.sha224(b"FILENAME").hexdigest() + '\n')
    for folderName, subfolders, filenames in os.walk(file1):
        os.chdir(folderName)
        for filename in filenames:
            AFile.write(filename + ";" + hash_file(filename) + ";" + os.getcwd() + ";" + os.path.join(os.getcwd(), filename) + '\n')
with open('Drive2.txt', 'w') as AFile:
    AFile.write(hashlib.sha224(b"FILENAME").hexdigest() + '\n')
    for folderName, subfolders, filenames in os.walk(file2):
        os.chdir(folderName)
        for filename in filenames:
            AFile.write(filename + ";" + hash_file(filename) + ";" + os.getcwd() + ";" + os.path.join(os.getcwd(), filename) + '\n')
with open('Drive1.txt', 'r') as file:
    for line in file:
        drive1.append(line.split(";"))
with open('Drive2.txt', 'r') as file:
    for line in file:
        drive2.append(line.split(";"))
I'm not sure how to go about this; maybe I should use dictionaries?
As I understand it, both drive1 and drive2 are lists of lists, where each inner list has length 3. The simplest approach would be the following:
# filter() keeps only the files that are not in the opposite drive
# (in Python 3 it returns an iterator, so wrap it in list() if you need a list)
files_only_in_drive1 = list(filter(lambda x: x not in drive2, drive1))
files_only_in_drive2 = list(filter(lambda x: x not in drive1, drive2))
This isn't the fastest solution (since search in an unordered list takes linear time). A more performant solution would take advantage of hashing and the set difference operator:
# Use tuple() for hashability.
drive1_file_set = set([tuple(file) for file in drive1])
drive2_file_set = set([tuple(file) for file in drive2])
# Now, remove files that are in the other drive using the set difference operator.
# In case it is necessary, the extra syntax below turns each 3-tuple back into a
# list and casts the resulting set back into a list.
files_only_in_drive_1 = [list(file) for file in drive1_file_set.difference(drive2_file_set)]
files_only_in_drive_2 = [list(file) for file in drive2_file_set.difference(drive1_file_set)]
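To get from there to the two text files you asked for, a minimal sketch could look like the following (the output file names only_in_drive1.txt and only_in_drive2.txt are just placeholders):
# Write each entry back out as one semicolon-separated line per file,
# mirroring the Drive1.txt / Drive2.txt format.
with open('only_in_drive1.txt', 'w') as out1:
    for entry in files_only_in_drive_1:
        out1.write(";".join(field.strip() for field in entry) + "\n")

with open('only_in_drive2.txt', 'w') as out2:
    for entry in files_only_in_drive_2:
        out2.write(";".join(field.strip() for field in entry) + "\n")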
Using Python, I'm seeking to iteratively combine two sets of txt files to create a third set of txt files.
I have a directory of txt files in two categories:
text_[number].txt (eg: text_0.txt, text_1.txt, text_2.txt....text_20.txt)
comments_[number].txt (eg: comments_0.txt, comments_1.txt, comments_2.txt...comments_20.txt).
I'd like to iteratively combine the text_[number] files with the matching comments_[number] files into a new file category feedback_[number].txt. The script would combine text_0.txt and comments_0.txt into feedback_0.txt, and continue through each pair in the directory. The number of text and comments files will always match, but the total number of text and comment files is variable depending on preceding scripts.
I can combine one pair using the code below with a list of the two file names:
filenames = ['text_0.txt', 'comments_0.txt']
with open("feedback_0.txt", "w") as outfile:
for filename in filenames:
with open(filename) as infile:
contents = infile.read()
outfile.write(contents)
However, I'm uncertain how to structure iteration for the rest of the files. I'm also curious how to generate lists from the contents of the file directory. Any advice or assistance on moving forward is greatly appreciated.
It would be far simpler (and possibly faster) to just fork a cat process:
import subprocess

n = ...  # number of files
for i in range(n):
    with open(f'feedback_{i}.txt', 'w') as f:
        subprocess.run(['cat', f'text_{i}.txt', f'comments_{i}.txt'], stdout=f)
Or, if you already have lists of the file names:
for text, comment, feedback in zip(text_files, comment_files, feedback_files):
    with open(feedback, 'w') as f:
        subprocess.run(['cat', text, comment], stdout=f)
Unless these are all extremely small files, the cost of reading and writing the bytes will outweigh the cost of forking a new process for each pair.
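If you are on a platform without cat (e.g. Windows), a roughly equivalent pure-Python sketch, assuming the same numbering scheme as above, streams the bytes with shutil.copyfileobj instead of reading whole files into memory:
import shutil

n = ...  # number of files
for i in range(n):
    with open(f'feedback_{i}.txt', 'wb') as outfile:
        for name in (f'text_{i}.txt', f'comments_{i}.txt'):
            with open(name, 'rb') as infile:
                shutil.copyfileobj(infile, outfile)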
Maybe not the most elegant but...
length = 10
txt = [f"text_{n}.txt" for n in range(length)]
com = [f"comments_{n}.txt" for n in range(length)]
feed = [f"feedback_{n}.txt" for n in range(length)]

for f, t, c in zip(feed, txt, com):
    with open(f, "w") as outfile:
        with open(t) as infile1:
            contents = infile1.read()
            outfile.write(contents)
        with open(c) as infile2:
            contents = infile2.read()
            outfile.write(contents)
There are many ways to achieve this, but I don't see a posted solution that's both beginner-friendly and takes the structure of your files into account.
You can iterate through the files, and for every text_[num].txt, fetch the corresponding comments_[num].txt and write to feedback_[num].txt as shown below. There's no need to add any counters or make any other assumptions about the files that might not always be true:
import os

srcpath = 'path/to/files'
for f in os.listdir(srcpath):
    if f.startswith('text'):
        index = f[5:-4]  # extract the [num] part between "text_" and ".txt"
        # Build the paths to the text, comments, and feedback files
        txt_path = os.path.join(srcpath, f)
        cmnt_path = os.path.join(srcpath, f'comments_{index}.txt')
        fb_path = os.path.join(srcpath, f'feedback_{index}.txt')
        # Write the output, reading in binary mode following chepner's advice
        with open(fb_path, 'wb') as outfile:
            outfile.write(open(txt_path, 'rb').read())
            outfile.write(open(cmnt_path, 'rb').read())
The simplest way would probably be to just iterate from 0 onwards, stopping at the first missing file. This works assuming that your files are numbered in increasing order and with no gaps (e.g. you have 0, 1, 2 and not 0, 2).
import os
from itertools import count

for i in count(0):
    t = f'text_{i}.txt'
    c = f'comments_{i}.txt'
    if not os.path.isfile(t) or not os.path.isfile(c):
        break
    with open(f'feedback_{i}.txt', 'wb') as outfile:
        outfile.write(open(t, 'rb').read())
        outfile.write(open(c, 'rb').read())
You can try this:
filenames = ['text_0.txt', 'comments_0.txt', 'text_1.txt', 'comments_1.txt', 'text_2.txt', 'comments_2.txt', 'text_3.txt', 'comments_3.txt']

for i, j in enumerate(zip(filenames[::2], filenames[1::2])):
    with open(f'feedback_{i}.txt', 'a+') as file:
        for k in j:
            with open(k, 'r') as f:
                files = f.read()
                file.write(files)
I have taken a hard-coded list here. Instead, you can build it from the directory:
import os
filenames = os.listdir('path/to/folder')
Keep in mind that os.listdir() returns names in arbitrary order, so you would need to sort and pair the names so the text/comments files still line up, as sketched below.
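A minimal sketch of that sorting and pairing step, assuming the text_/comments_ naming from the question:
import os

path = 'path/to/folder'  # placeholder path

def num(name):
    # pull the numeric part out of e.g. 'text_12.txt' or 'comments_12.txt'
    return int(name.split('_')[1].split('.')[0])

text_files = sorted((f for f in os.listdir(path) if f.startswith('text_')), key=num)
comment_files = sorted((f for f in os.listdir(path) if f.startswith('comments_')), key=num)

for t, c in zip(text_files, comment_files):
    i = num(t)
    with open(os.path.join(path, f'feedback_{i}.txt'), 'w') as outfile:
        for name in (t, c):
            with open(os.path.join(path, name)) as infile:
                outfile.write(infile.read())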
I have data (mixed text and numbers in txt files) and I'd like to write a for loop that creates a list of lists, such that I can process the data from all the files using fewer lines.
So far I have written this:
import csv
path = (some path...)
files = [path + 'file1.txt', path + 'file2.txt', path + 'file3.txt', ...]
for i in files:
    with open(i, 'r') as j:
        Reader = csv.reader(j)
        List = [List for List in Reader]
I think I overwrite List instead of building a nested list, since Reader ends up with size 1 and List only has the dimensions of one of the files.
My questions:
Given that the files may contain different numbers of lines, is this the right approach to save some lines of code? (What could be done better?)
I think the problem is in [List for List in Reader]; is there a way to change it so I don't overwrite List? Something like adding to List?
You can use the list append() method to add to an existing list. Since csv.reader instances are iterable, you can convert each one to a list and append that, as shown below:
import csv
from pathlib import Path

path = Path('./')
filenames = ['in_file1.txt', 'in_file2.txt']  # etc ...

List = []
for filename in filenames:
    with open(path / filename, 'r', newline='') as file:
        List.append(list(csv.reader(file)))

print(List)
Update
An even more succinct way to do it would be to use something called a "list comprehension":
import csv
from pathlib import Path

path = Path('./')
filenames = ['in_file1.txt', 'in_file2.txt']  # etc ...

# Note: this version leaves the file objects to be closed by garbage collection.
List = [list(csv.reader(open(path / filename, 'r', newline='')))
        for filename in filenames]

print(List)
Yes, use .append():
import csv

path = (some path...)
files = [path + x for x in ['FILESLIST']]

List = []
for i in files:
    with open(i, 'r') as j:
        Reader = csv.reader(j)
        List.append([L for L in Reader])
I have files consisting of key: value lines. They are placed in
~/ansible-environments/aws/random_name_1/inventory/group_vars/all
~/ansible-environments/aws/random_name_2/inventory/group_vars/all
~/ansible-environments/aws/random_name_3/inventory/group_vars/all
I wrote:
import os
import sys

rootdir = '/home/USER/ansible-environments/aws'
#print "aa"
for root, subdirs, files in os.walk(rootdir):
    for subdir in subdirs:
        all_path = os.path.join(rootdir, subdir, "inventory", "group_vars", "all")
        if not os.path.isfile(all_path):
            continue
        try:
            with open(all_path, "r") as f:
                all_content = f.readlines()
        except (OSError, IOError):
            continue  # ignore errors
        csv_line = [""] * 3
        for line in all_content:
            if line[:9] == "isv_alias:":
                csv_line[0] = line[7:].strip()
            elif line[:21] == "LMID:":
                csv_line[1] = line[6:].strip()
            elif line[:17] == "products:":
                csv_line[2] = line[10:].strip()
        if all(value != "" for value in csv_line):
            with open(os.path.join("/home/nsingh/nishlist.csv"), "a") as csv:
                csv.write(",".join(csv_line))
                csv.write("\n")
I just need the LMIT, isv_alias, and products in the following format:
alias,LMIT,product
bloodyhell,80,rms_scl
something_else,434,some_other_prod
There are three problems here:
Finding all key-value files
Extracting keys and values from each file
Turning the keys and values from each file into rows in a CSV
First use os.listdir() to find the contents of
~/ansible-environments/aws, then build the expected path of the
inventory/group_vars directory inside each using
os.path.join(), and see which ones actually exist. Then list
the contents of those directories that do exist, and assume all
files inside (such as all) are key-value files. The example
code at the end of this answer assumes that all files can be
found this way; if they cannot, you may have to adapt the example
code to find the files using os.walk() or another method.
Each key-value file is a sequence of lines, where each line is a key
and value separated by a colon (":"). Your approach of searching
for a substring (the in operator) will fail if, say, the secret key
contains the string "LMIT". Instead, split the line at the colon.
The expression line.split(":", 1) splits the line at the first
colon, but not subsequent colons in case the value itself has a
colon. Then strip off excess whitespace from the key and value,
and build a dictionary of keys and values.
Now choose which keys you want to keep. Once you've parsed each
file, look up the associated values in the dictionary from that
file, and build a list out of them. Then add the list of values
from this file to a list of lists of values from all files, and
use csv.writer to write out the list of lists as a CSV file.
It might look something like this:
#!/usr/bin/env python2
from __future__ import with_statement, print_function, division
import os
import csv
def read_kv_file(filename):
    items = {}
    with open(filename, "rU") as infp:
        for line in infp:
            # Split at a colon and strip leading and trailing space
            line = [x.strip() for x in line.split(":", 1)]
            # Add the key and value to the dictionary
            if len(line) > 1:
                items[line[0]] = line[1]
    return items
# First find all random names
outer_dir = os.path.expanduser("~/ansible-environments/aws")
random_names = os.listdir(outer_dir)
inner_dirs = [
    os.path.join(outer_dir, name, "inventory/group_vars")
    for name in random_names
]
# Now filter it to those directories that actually exist
inner_dirs = [name for name in inner_dirs if os.path.isdir(name)]
wanted_keys = ["alias", "LMIT", "products"]
out_columns = ["alias", "LMIT", "product"]
# Collect key-value pairs from all files in these folders
rows = []
for dirname in inner_dirs:
    for filename in os.listdir(dirname):
        path = os.path.join(dirname, filename)
        # Skip non-files in this directory
        if not os.path.isfile(path):
            continue
        # If the file has a non-blank value for any of the keys of
        # interest, add a row
        items = read_kv_file(path)
        this_file_values = [items.get(key) for key in wanted_keys]
        if any(this_file_values):
            rows.append(this_file_values)
# And write them out
with open("out.csv", "wb") as outfp:
writer = csv.writer(outfp, "excel")
writer.writerow(out_columns)
writer.writerows(rows)
You didn't specify how you are obtaining the files (the f in the first line), but under the assumption that you've sorted out the file traversal and that the files are exactly as you present them (so no extra spaces or anything like that), you can modify your code to:
csv_line = [""] * 3
for line in f:
if line[:6] == "alias:":
csv_line[0] = line[7:].strip()
elif line[:5] == "LMIT:":
csv_line[1] = line[6:].strip()
elif line[:9] == "products:":
csv_line[2] = line[10:].strip()
with open(rootdir + '/' + 'list.csv', "a") as csv:
csv.write(",".join(csv_line))
csv.write("\n")
This will add a new line with the proper values to your CSV for each file that was loaded as f; however, keep in mind that it doesn't validate the data, so it will happily write empty lines if the opened file didn't contain the expected keys.
You can prevent that by checking all(value != "" for value in csv_line) before opening the csv file for writing. You can use any instead of all if you want to write entries that have at least one variable populated.
UPDATE: The code you just pasted has serious indentation and structural issues, but it at least makes it clearer what you want to do. Assuming everything else is OK, this should do it:
for root, subdirs, files in os.walk(rootdir):
    for subdir in subdirs:
        all_path = os.path.join(rootdir, subdir, "inventory", "group_vars", "all")
        if not os.path.isfile(all_path):
            continue
        try:
            with open(all_path, "r") as f:
                all_content = f.readlines()
        except (OSError, IOError):
            continue  # ignore errors
        csv_line = [""] * 3
        for line in all_content:
            if line[:6] == "alias:":
                csv_line[0] = line[7:].strip()
            elif line[:5] == "LMIT:":
                csv_line[1] = line[6:].strip()
            elif line[:9] == "products:":
                csv_line[2] = line[10:].strip()
        if all(value != "" for value in csv_line):
            with open(os.path.join(rootdir, "list.csv"), "a") as csv:
                csv.write(",".join(csv_line))
                csv.write("\n")
I have csv files among other files, either uncompressed or compressed with gz, bz2, or another format. All compressed files keep their original extension in their name, so the compression-specific extension is appended to the original filename.
The possible compression formats are given through a list, for example:
z_types = ['.gz', '.bz2']  # could be many more than two types
I would like to make a list of the csv files regardless of whether they are compressed or not. For uncompressed csv files I usually do the following:
import os
[file_ for file_ in os.listdir(path_to_files) if file_.endswith('.csv')]
For the case where I also want compressed files, I would do:
import os

acsv_files_ = []
for file_ in os.listdir(path_to_files):
    for ztype_ in z_types + ['']:
        if file_.endswith('.csv' + ztype_):
            acsv_files_.append(file_)
Though this would work, is there a more concise and efficient way of doing it? For example, using some kind of 'or' operator within .endswith()?
Yes, that is possible. See str.endswith:
Return True if the string ends with the specified suffix, otherwise return False. suffix can also be a tuple of suffixes to look for. With optional start, test beginning at that position. With optional end, stop comparing at that position.
In [10]: "foo".endswith(("o", "r"))
Out[10]: True
In [11]: "bar".endswith(("o", "r"))
Out[11]: True
In [12]: "baz".endswith(("o", "r"))
Out[12]: False
So you could use
[file_ for file_ in os.listdir(path_to_files) if file_.endswith(tuple('.csv' + z for z in z_types + ['']))]
If your file names all end in '.csv' or '.csv.some_compressed_ext' you could use the following:
import os
csvfiles = [f for f in os.listdir(path) if '.csv' in f]
You can do this in one line as:
import os

exts = ['', '.gz', '.bz2', '.tar']  # includes '' as the null extension

# this creates the list
files_to_process = [_file for _file in os.listdir(path_to_files) if not _file.endswith('.not_to_process') and _file.endswith(tuple('.csv' + ext for ext in exts))]
Broken down:
files_to_process = [
    _file
    for _file in os.listdir(path_to_files)
    if not _file.endswith('.not_to_process')  # checks against files you have marked as bad
    and _file.endswith(                       # checks whether any entry in the tuple ends the _file name
        tuple(                                # generates a tuple from the given generator argument
            '.csv' + ext for ext in exts      # creates all the variations: .csv, .csv.gz, .csv.bz2, etc.
        )
    )
]
EDIT
For an even more general solution:
import os

def validate_file(f):
    # do any tests on the file name that you need to determine whether it is
    # valid for processing
    exts = ['', '.gz', '.bz2']
    if f.endswith('.some_extension_name_you_made_to_mark_bad_files'):
        return False
    else:
        return f.endswith(tuple('.csv' + ext for ext in exts))

files_to_process = [f for f in os.listdir(path_to_files) if validate_file(f)]
You could of course replace the code in validate_file with whatever testing you wish to do on the file. You could even use this approach to validate file contents too, for example:
def validate_file(f):
    # here f is treated as a path: open it and inspect the contents
    with open(f) as infile:
        content = infile.read()
    if 'apple' in content:
        return True
    else:
        return False
I compare two text files and print the results to a third file. I am trying to make it so the script I'm running iterates over all of the folders that contain two text files, within the CWD of the script.
What I have so far:
import os
import glob

path = './'
for infile in glob.glob(os.path.join(path, '*.*')):
    print('current file is: ' + infile)
    with open(f1 + '.txt', 'r') as fin1, open(f2 + '.txt', 'r') as fin2:
Would this be a good way to start the iteration process?
It's not the clearest code, but it gets the job done. However, I'm pretty sure I need to take the logic out of the read/write methods, and I'm not sure where to start.
What I'm basically trying to do is have a script iterate over all of the folders in its CWD, open each folder, compare the two text files inside, write a third text file to the same folder, then move on to the next.
Another method I have tried is as follows:
import os

rootDir = 'C:\\Python27\\test'
for dirName, subdirList, fileList in os.walk(rootDir):
    print('Found directory: %s' % dirName)
    for fname in fileList:
        print('\t%s' % fname)
And this outputs the following (to give you a better idea of the file structure):
Found directory: C:\Python27\test
    test.py
Found directory: C:\Python27\test\asdd
    asd1.txt
    asd2.txt
Found directory: C:\Python27\test\chro
    ch1.txt
    ch2.txt
Found directory: C:\Python27\test\hway
    hw1.txt
    hw2.txt
Would it be wise to put the compare logic under the for fname in fileList loop? How do I make sure it compares the two text files inside the specific folder and not against other fnames in the fileList?
This is the full code that I am trying to add this functionality into. I apologize for the Frankenstein nature of it; I am still working on a refined version, but it does not work yet.
from collections import defaultdict
from operator import itemgetter
from itertools import groupby
from collections import deque
import os

class avs_auto:

    def load_and_compare(self, input_file1, input_file2, output_file1, output_file2, result_file):
        self.load(input_file1, input_file2, output_file1, output_file2)
        self.compare(output_file1, output_file2)
        self.final(result_file)

    def load(self, fileIn1, fileIn2, fileOut1, fileOut2):
        with open(fileIn1+'.txt') as fin1, open(fileIn2+'.txt') as fin2:
            frame_rects = defaultdict(list)
            for row in (map(str, line.split()) for line in fin1):
                id, frame, rect = row[0], row[2], [row[3], row[4], row[5], row[6]]
                frame_rects[frame].append(id)
                frame_rects[frame].append(rect)
            frame_rects2 = defaultdict(list)
            for row in (map(str, line.split()) for line in fin2):
                id, frame, rect = row[0], row[2], [row[3], row[4], row[5], row[6]]
                frame_rects2[frame].append(id)
                frame_rects2[frame].append(rect)
        with open(fileOut1+'.txt', 'w') as fout1, open(fileOut2+'.txt', 'w') as fout2:
            for frame, rects in sorted(frame_rects.iteritems()):
                fout1.write('{{{}:{}}}\n'.format(frame, rects))
            for frame, rects in sorted(frame_rects2.iteritems()):
                fout2.write('{{{}:{}}}\n'.format(frame, rects))

    def compare(self, fileOut1, fileOut2):
        with open(fileOut1+'.txt', 'r') as fin1:
            with open(fileOut2+'.txt', 'r') as fin2:
                lines1 = fin1.readlines()
                lines2 = fin2.readlines()
                diff_lines = [l.strip() for l in lines1 if l not in lines2]
                diffs = defaultdict(list)
                with open(fileOut1+'x'+fileOut2+'.txt', 'w') as result_file:
                    for line in diff_lines:
                        d = eval(line)
                        for k in d:
                            list_ids = d[k]
                            for i in range(0, len(d[k]), 2):
                                diffs[d[k][i]].append(k)
                    for id_ in diffs:
                        diffs[id_].sort()
                        for k, g in groupby(enumerate(diffs[id_]), lambda (i, x): i - x):
                            group = map(itemgetter(1), g)
                            result_file.write('{0} {1} {2}\n'.format(id_, group[0], group[-1]))

    def final(self, result_file):
        with open(result_file+'.txt', 'r') as fin:
            lines = (line.split() for line in fin)
            for k, g in groupby(lines, itemgetter(0)):
                fst = next(g)
                lst = next(iter(deque(g, 1)), fst)
                with open('final/{}.avs'.format(k), 'w') as fout:
                    fout.write('video0=ImageSource("old\%06d.jpeg", {}-3, {}+3, 15)\n'.format(fst[1], lst[2]))
                    fout.write('video1=ImageSource("new\%06d.jpeg", {}-3, {}+3, 15)\n'.format(fst[1], lst[2]))
                    fout.write('video0=BilinearResize(video0,640,480)\n')
                    fout.write('video1=BilinearResize(video1,640,480)\n')
                    fout.write('StackHorizontal(video0,video1)\n')
                    fout.write('Subtitle("ID: {}", font="arial", size=30, align=8)'.format(k))
Using the load_and_compare() function, I define two input text files, two output text files, a file for the comparison results, and a final phase that writes many files for all of the differences.
What I am trying to do is have this whole class run on the current working directory and go through every subfolder, compare the two text files, and write everything into the same folder, specifically the final() results.
You can indeed use os.walk(), since that already separates the directories from the files. You only need the directories it returns, because that's where you're looking for your 2 specific files.
You could also use os.listdir(), but that returns directories as well as files in the same list, so you would have to check for directories yourself.
Either way, once you have the directories, you iterate over them (for subdir in dirnames) and join the various path components you have: The dirpath, the subdir name that you got from iterating over the list and your filename.
Assuming there are also some directories that don't have the specific 2 files, it's a good idea to wrap the open() calls in a try..except block and thus ignore the directories where one of the files (or both of them) doesn't exist.
Finally, if you used os.walk(), you can easily choose if you only want to go into directories one level deep or walk the whole depth of the tree. In the former case, you just clear the dirnames list by dirnames[:] = []. Note that dirnames = [] wouldn't work, since that would just create a new empty list and put that reference into the variable instead of clearing the old list.
Replace the print("do something ...") with your program logic.
#!/usr/bin/env python

import errno
import os

f1 = "test1"
f2 = "test2"
path = "."

for dirpath, dirnames, _ in os.walk(path):
    for subdir in dirnames:
        filepath1, filepath2 = [os.path.join(dirpath, subdir, f + ".txt") for f in (f1, f2)]
        try:
            with open(filepath1, 'r') as fin1, open(filepath2, 'r') as fin2:
                print("do something with " + str(fin1) + " and " + str(fin2))
        except IOError as e:
            # ignore directories that don't contain the 2 files
            if e.errno != errno.ENOENT:
                # reraise the exception if it's different from "file or directory doesn't exist"
                raise
    # comment the next line out if you want to traverse all subsubdirectories
    dirnames[:] = []
Edit:
Based on your comments, I hope I understand your question better now.
Try the following code snippet instead. The overall structure stays the same, only now I'm using the file names returned by os.walk(). Unfortunately, that also makes it harder to do something like "go only into the subdirectories 1 level deep", so I hope walking the tree recursively is fine with you. If not, I'll have to add a little more code later.
#!/usr/bin/env python

import fnmatch
import os

filter_pattern = "*.txt"
path = "."

for dirpath, dirnames, filenames in os.walk(path):
    # comment this out if you don't want to filter
    filenames = [fn for fn in filenames if fnmatch.fnmatch(fn, filter_pattern)]
    if len(filenames) == 2:
        # comment this out if you don't want the 2 filenames to be sorted
        filenames.sort(key=str.lower)
        filepath1, filepath2 = [os.path.join(dirpath, fn) for fn in filenames]
        with open(filepath1, 'r') as fin1, open(filepath2, 'r') as fin2:
            print("do something with " + str(fin1) + " and " + str(fin2))
I'm still not really sure what your program logic does, so you will have to interface the two yourself.
However, I noticed that you're adding the ".txt" extension to the file name explicitly all over your code, so depending on how you are going to use the snippet, you might or might not need to remove the ".txt" extension first before handing the filenames over. That would be achieved by inserting the following line after or before the sort:
filenames = [os.path.splitext(fn)[0] for fn in filenames]
Also, I still don't understand why you're using eval(). Do the text files contain Python code? In any case, eval() should be avoided and replaced by code that's more specific to the task at hand (a small sketch follows the list below).
If it's a list of comma separated strings, use line.split(",") instead.
If there might be whitespace before or after the comma, use [word.strip() for word in line.split(",")] instead.
If it's a list of comma separated integers, use [int(num) for num in line.split(",")] instead - for floats it works analogously.
etc.
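As a small illustration of the comma-separated cases (the sample line here is hypothetical, not taken from your files):
line = "12, 7, 42\n"  # hypothetical input line

# comma-separated strings with surrounding whitespace stripped
words = [word.strip() for word in line.split(",")]

# comma-separated integers
numbers = [int(num) for num in line.split(",")]

print(words)    # ['12', '7', '42']
print(numbers)  # [12, 7, 42]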