Can't append 2 strings the way I want [duplicate] - python

I'm a complete rookie to Python, but it seems like a given string can be (effectively) arbitrary length. That is, you can take a string str and keep adding to it: str += "some stuff...". Is there a way to make an array of such strings?
When I try this, each element only stores a single character:
strArr = numpy.empty(10, dtype='string')
for i in range(0, 10):
    strArr[i] = "test"
On the other hand, I know I can initialize an array of fixed-length strings, i.e.
strArr = numpy.empty(10, dtype='S256')
which can store 10 strings of up to 256 characters.

You can do so by creating an array of dtype=object. If you try to assign a long string to a normal numpy array, it truncates the string:
>>> a = numpy.array(['apples', 'foobar', 'cowboy'])
>>> a[2] = 'bananas'
>>> a
array(['apples', 'foobar', 'banana'],
dtype='|S6')
But when you use dtype=object, you get an array of python object references. So you can have all the behaviors of python strings:
>>> a = numpy.array(['apples', 'foobar', 'cowboy'], dtype=object)
>>> a
array([apples, foobar, cowboy], dtype=object)
>>> a[2] = 'bananas'
>>> a
array([apples, foobar, bananas], dtype=object)
Indeed, because it's an array of objects, you can assign any kind of python object to the array:
>>> a[2] = {1:2, 3:4}
>>> a
array([apples, foobar, {1: 2, 3: 4}], dtype=object)
However, this undoes a lot of the benefits of using numpy, which is so fast because it works on large contiguous blocks of raw memory. Working with python objects adds a lot of overhead. A simple example:
>>> a = numpy.array(['abba' for _ in range(10000)])
>>> b = numpy.array(['abba' for _ in range(10000)], dtype=object)
>>> %timeit a.copy()
100000 loops, best of 3: 2.51 us per loop
>>> %timeit b.copy()
10000 loops, best of 3: 48.4 us per loop

You could use the object data type:
>>> import numpy
>>> s = numpy.array(['a', 'b', 'dude'], dtype='object')
>>> s[0] += 'bcdef'
>>> s
array([abcdef, b, dude], dtype=object)
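For the loop in the original question, a minimal sketch with dtype=object might look like the following (the array name strArr comes from the question; the appended text is just an illustration). Each slot holds an ordinary Python string, so it can keep growing:
import numpy
strArr = numpy.empty(10, dtype=object)   # each element is a reference to a Python object
for i in range(10):
    strArr[i] = "test"
    strArr[i] += " plus some more stuff"  # grows like any ordinary Python string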

Related

why Numpy mutable variable such as array didn't update value on the same memory?

I am confused by the mutable variable in python. See the following example code:
In [144]: a=[1,2,3,4]
In [145]: b=a
In [146]: a.append(5)
In [147]: a
Out[147]: [1, 2, 3, 4, 5]
In [148]: b
Out[148]: [1, 2, 3, 4, 5]
Since list is mutable, when using append function, it works on the same memory. This is understandable to me. But the following code confuses me.
In [149]: import numpy as np
In [150]: a=np.random.randn(3,2)
In [151]: b=a
In [152]: a=a-1
In [153]: a
Out[153]:
array([[-2.05342905, -1.21441195],
[-1.29901352, -3.29416381],
[-2.28775209, -1.65702149]])
In [154]: b
Out[154]:
array([[-1.05342905, -0.21441195],
[-0.29901352, -2.29416381],
[-1.28775209, -0.65702149]])
Since the NumPy array is also mutable, why, when a = a - 1, isn't the change made in the memory that a refers to? Instead, a ends up referring to new memory with new values.
Why doesn't variable a behave as in the first example, where appending the value 5 to the list left a referring to the same memory?
Because when you call a = a - 1 you are assigning a new value to the variable a: a - 1 doesn't change a in place, it creates another object, and the name a is then rebound to that new object.
a = [1,2,3]
b = a
a = a + [4]
a
>>> [1,2,3,4]
b
>>> [1,2,3]
See? It has nothing to do with numpy specifically...
Short answer: a -= 1 works in place, but a = a - 1 creates a new array holding the result and rebinds the name a to it; the original array (still referenced by b) is left unchanged, so only what a points to changes.
You can check this using is. The is keyword tells you whether two variables refer to the same object in memory; it is different from ==.
>>> import numpy as np
>>> a = np.random.randn(3,2)
>>> b = a
>>> a is b
True
>>> a, b
(array([[-0.14563848, 2.11951025],
[ 0.50913228, -0.61049821],
[ 2.29055958, -0.83795141]]), array([[-0.14563848, 2.11951025],
[ 0.50913228, -0.61049821],
[ 2.29055958, -0.83795141]]))
>>> a -= 1
>>> a
array([[-1.14563848, 1.11951025],
[-0.49086772, -1.61049821],
[ 1.29055958, -1.83795141]])
>>> b
array([[-1.14563848, 1.11951025],
[-0.49086772, -1.61049821],
[ 1.29055958, -1.83795141]])
>>> a is b
True
>>> a = a - 1
>>> a
array([[-2.14563848, 0.11951025],
[-1.49086772, -2.61049821],
[ 0.29055958, -2.83795141]])
>>> b
array([[-1.14563848, 1.11951025],
[-0.49086772, -1.61049821],
[ 1.29055958, -1.83795141]])
>>> a is b
False
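As a complementary check (a minimal sketch, not part of the original answer), numpy.shares_memory reports directly whether two arrays are backed by the same buffer:
>>> import numpy as np
>>> a = np.random.randn(3, 2)
>>> b = a
>>> np.shares_memory(a, b)
True
>>> a -= 1                  # in-place: still the same buffer
>>> np.shares_memory(a, b)
True
>>> a = a - 1               # builds a new array and rebinds the name a
>>> np.shares_memory(a, b)
False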

Multi-dimensional slicing a list of strings with numpy

Say I have the following:
my_list = np.array(["abc", "def", "ghi"])
and I'd like to get:
np.array(["ef", "hi"])
I tried:
my_list[1:,1:]
But then I get:
IndexError: too many indices for array
Does Numpy support slicing strings?
No, you cannot do that. To numpy, np.array(["abc", "def", "ghi"]) is a 1D array of strings, so you cannot use 2D slicing.
You could either define your array as a 2D array of characters, or simply use a list comprehension for the slicing:
In [4]: np.asarray([el[1:] for el in my_list[1:]])
Out[4]:
array(['ef', 'hi'], dtype='|S2')
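A minimal sketch of the first option, building the data as a 2D array of single characters from the start (the 'S1' dtype and the closing view('S2') step are assumptions for illustration):
In [5]: chars = np.array([list("abc"), list("def"), list("ghi")], dtype='S1')
In [6]: chars[1:, 1:]                             # ordinary 2D slicing now works
Out[6]:
array([['e', 'f'],
       ['h', 'i']],
      dtype='|S1')
In [7]: chars[1:, 1:].copy().view('S2').ravel()   # copy() makes it contiguous so the view can re-join each row
Out[7]:
array(['ef', 'hi'],
      dtype='|S2')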
Your array of strings stores the data as a contiguous block of characters, using the 'S3' dtype to divide it into strings of length 3.
In [116]: my_list
Out[116]:
array(['abc', 'def', 'ghi'],
dtype='|S3')
An 'S1,S2' dtype views each element as 2 strings, of 1 and 2 chars each:
In [115]: my_list.view('S1,S2')
Out[115]:
array([('a', 'bc'), ('d', 'ef'), ('g', 'hi')],
dtype=[('f0', 'S1'), ('f1', 'S2')])
select the 2nd field to get an array with the desired characters:
In [114]: my_list.view('S1,S2')[1:]['f1']
Out[114]:
array(['ef', 'hi'],
dtype='|S2')
My first attempt with view was to split the array into single byte strings, and play with the resulting 2d array:
In [48]: my_2dstrings = my_list.view(dtype='|S1').reshape(3,-1)
In [49]: my_2dstrings
Out[49]:
array([['a', 'b', 'c'],
['d', 'e', 'f'],
['g', 'h', 'i']],
dtype='|S1')
This array can then be sliced in both dimensions. I used flatten to remove a dimension, and to force a copy (to get a new contiguous buffer).
In [50]: my_2dstrings[1:,1:].flatten().view(dtype='|S2')
Out[50]:
array(['ef', 'hi'],
dtype='|S2')
If the strings are already in an array (as opposed to a list) then this approach is much faster than the list comprehension approaches.
Some timings with the 1000 x 64 list that wflynny tests:
In [98]: timeit [s[1:] for s in my_list_64[1:]]
10000 loops, best of 3: 173 us per loop    # my computer is slower
In [99]: timeit np.array(my_list_64).view('S1').reshape(64,-1)[1:,1:].flatten().view('S63')
1000 loops, best of 3: 213 us per loop
In [100]: %%timeit arr = np.array(my_list_64)
     ...: arr.view('S1').reshape(64,-1)[1:,1:].flatten().view('S63')
     ...:
10000 loops, best of 3: 23.2 us per loop
Creating the array from the list is slow, but once created the view approach is much faster.
See my edit history for my earlier notes on np.char.
As per Joe Kington here, python is very good at string manipulations and generator/list comprehensions are fast and flexible for string operations. Unless you need to use numpy later in your pipeline, I would urge against it.
[s[1:] for s in my_list[1:]]
is fast:
In [1]: from string import ascii_lowercase
In [2]: from random import randint, choice
In [3]: my_list_rand = [''.join([choice(ascii_lowercase)
                                 for _ in range(randint(2, 64))])
                        for i in range(1000)]
In [4]: my_list_64 = [''.join([choice(ascii_lowercase) for _ in range(64)])
                      for i in range(1000)]
In [5]: %timeit [s[1:] for s in my_list_rand[1:]]
10000 loops, best of 3: 47.6 µs per loop
In [6]: %timeit [s[1:] for s in my_list_64[1:]]
10000 loops, best of 3: 45.3 µs per loop
Using numpy just adds overhead.
Starting with numpy 1.23.0, I added a mechanism to change the dtype of views of non-contiguous arrays. That means you can view your array as individual characters, slice it however you like, and then build it back together. Before this, a copy was required, as hpaulj's answer clearly shows.
>>> my_list = np.array(["abc", "def", "ghi"])
>>> my_list[:, None].view('U1')[1:, 1:].view('U2').squeeze()
array(['ef', 'hi'])
I'm working on another layer of abstraction, specifically for string arrays, called np.char.slice_ (currently work-in-progress in PR #20694, but the code is functional). If that gets accepted, you will be able to do
>>> np.char.slice_(my_list[1:], 1)
array(['ef', 'hi'])
Your slicing syntax is incorrect. You only need to do my_list[1:] to get what you need. If you want to copy the elements twice into a list, you can do something = list(my_list[1:]) + list(my_list[1:]).

Python - Splitting an array into two using an optimized for loop

This is a followup question to a question I posted here, but it's a very different question, so I thought I would post it separately.
I have a Python script which reads a very large array, and I needed to optimize an operation on each element (see the referenced SO question). I now need to split the output array into two separate arrays.
I have the code:
output = [True if (len(element_in_array) % 2) else False for element_in_array in master_list]
which outputs an array of length len(master_list) consisting of True or False, depending on if the length of element_in_array is odd or even. My problem is that I need to split master_list into two arrays: one array containing the element_in_array's that correspond to the True elements in output and another containing the element_in_array's corresponding to the False elements in output.
This can clearly be done with traditional list operations such as append, but I need this to be as optimized and as fast as possible. I have many millions of elements in my master_list, so is there a way to accomplish this without directly looping through master_list and using append to build the two new arrays?
Any advice would be greatly appreciated.
Thanks!
You can use itertools.compress:
>>> from itertools import compress, imap
>>> import operator
>>> lis = range(10)
>>> output = [random.choice([True, False]) for _ in xrange(10)]
>>> output
[True, True, False, False, False, False, False, False, False, False]
>>> truthy = list(compress(lis, output))
>>> truthy
[0, 1]
>>> falsy = list(compress(lis, imap(operator.not_,output)))
>>> falsy
[2, 3, 4, 5, 6, 7, 8, 9]
Go for NumPy if you want an even faster solution; it also lets us filter arrays based on boolean arrays:
>>> import numpy as np
>>> a = np.random.random(10)*10
>>> a
array([ 2.94518349, 0.09536957, 8.74605883, 4.05063779, 2.11192606,
2.24215582, 7.02203768, 2.1267423 , 7.6526713 , 3.81429322])
>>> output = np.array([True, True, False, False, False, False, False, False, False, False])
>>> a[output]
array([ 2.94518349, 0.09536957])
>>> a[~output]
array([ 8.74605883, 4.05063779, 2.11192606, 2.24215582, 7.02203768,
2.1267423 , 7.6526713 , 3.81429322])
Timing comparison:
>>> lis = range(1000)
>>> output = [random.choice([True, False]) for _ in xrange(1000)]
>>> a = np.random.random(1000)*100
>>> output_n = np.array(output)
>>> %timeit list(compress(lis, output))
10000 loops, best of 3: 44.9 us per loop
>>> %timeit a[output_n]
10000 loops, best of 3: 20.9 us per loop
>>> %timeit list(compress(lis, imap(operator.not_,output)))
1000 loops, best of 3: 150 us per loop
>>> %timeit a[~output_n]
10000 loops, best of 3: 28.7 us per loop
If you can use NumPy, this will be a lot simpler. And, as a bonus, it'll also be a lot faster, and it'll use a lot less memory to store your giant array. For example:
>>> import numpy as np
>>> import random
>>> # create an array of 1000 arrays of length 1-1000
>>> a = np.array([np.random.random(random.randint(1, 1000))
...               for _ in range(1000)])
>>> lengths = np.vectorize(len)(a)
>>> even_flags = lengths % 2 == 0
>>> evens, odds = a[even_flags], a[~even_flags]
>>> len(evens), len(odds)
(502, 498)
You could try using the groupby function in itertools. The key function would be the function that determines if the length of an element is even or not. The iterator returned by groupby consists of key-value tuples, where key is a value returned by the key function (here, True or False) and the value is a sequence of items which all share
the same key. Create a dictionary which maps a value returned by the key function to a list, and you can extend the appropriate list with a set of values from the initial iterator.
import itertools

trues = []
falses = []
d = {True: trues, False: falses}

def has_even_length(element_in_array):
    return len(element_in_array) % 2 == 0

for k, v in itertools.groupby(master_list, has_even_length):
    d[k].extend(v)
The documentation for groupby says you typically want to make sure the iterable is sorted on the same key returned by the key function. In this case it's OK to leave it unsorted; you'll just get more groups back from groupby, since there can be a number of alternating true/false runs in the sequence.
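A quick usage sketch with made-up data (the master_list contents here are assumptions, not from the question):
import itertools

master_list = [[1, 2], [1, 2, 3], [4], [5, 6, 7, 8], [9, 10, 11]]

trues, falses = [], []
d = {True: trues, False: falses}

for k, v in itertools.groupby(master_list, lambda el: len(el) % 2 == 0):
    d[k].extend(v)

print(trues)   # [[1, 2], [5, 6, 7, 8]]
print(falses)  # [[1, 2, 3], [4], [9, 10, 11]]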

How to change values in a tuple?

I have a tuple called values which contains the following:
('275', '54000', '0.0', '5000.0', '0.0')
I want to change the first value (i.e., 275) in this tuple but I understand that tuples are immutable so values[0] = 200 will not work. How can I achieve this?
It's possible via:
t = ('275', '54000', '0.0', '5000.0', '0.0')
lst = list(t)
lst[0] = '300'
t = tuple(lst)
But if you're going to need to change things, you are probably better off keeping it as a list.
Depending on your problem slicing can be a really neat solution:
>>> b = (1, 2, 3, 4, 5)
>>> b[:2] + (8,9) + b[3:]
(1, 2, 8, 9, 4, 5)
>>> b[:2] + (8,) + b[3:]
(1, 2, 8, 4, 5)
This allows you to add multiple elements or to replace a few elements (especially if they are "neighbours"). In the above case, converting to a list is probably more appropriate and readable (even though the slicing notation is much shorter).
Well, as Trufa has already shown, there are basically two ways of replacing a tuple's element at a given index. Either convert the tuple to a list, replace the element and convert back, or construct a new tuple by concatenation.
In [1]: def replace_at_index1(tup, ix, val):
   ...:     lst = list(tup)
   ...:     lst[ix] = val
   ...:     return tuple(lst)
   ...:
In [2]: def replace_at_index2(tup, ix, val):
   ...:     return tup[:ix] + (val,) + tup[ix+1:]
   ...:
So, which method is better, that is, faster?
It turns out that for short tuples (on Python 3.3), concatenation is actually faster!
In [3]: d = tuple(range(10))
In [4]: %timeit replace_at_index1(d, 5, 99)
1000000 loops, best of 3: 872 ns per loop
In [5]: %timeit replace_at_index2(d, 5, 99)
1000000 loops, best of 3: 642 ns per loop
Yet if we look at longer tuples, list conversion is the way to go:
In [6]: k = tuple(range(1000))
In [7]: %timeit replace_at_index1(k, 500, 99)
100000 loops, best of 3: 9.08 µs per loop
In [8]: %timeit replace_at_index2(k, 500, 99)
100000 loops, best of 3: 10.1 µs per loop
For very long tuples, list conversion is substantially better!
In [9]: m = tuple(range(1000000))
In [10]: %timeit replace_at_index1(m, 500000, 99)
10 loops, best of 3: 26.6 ms per loop
In [11]: %timeit replace_at_index2(m, 500000, 99)
10 loops, best of 3: 35.9 ms per loop
Also, performance of the concatenation method depends on the index at which we replace the element. For the list method, the index is irrelevant.
In [12]: %timeit replace_at_index1(m, 900000, 99)
10 loops, best of 3: 26.6 ms per loop
In [13]: %timeit replace_at_index2(m, 900000, 99)
10 loops, best of 3: 49.2 ms per loop
So: If your tuple is short, slice and concatenate.
If it's long, do the list conversion!
It is possible with a one liner:
values = ('275', '54000', '0.0', '5000.0', '0.0')
values = ('300', *values[1:])
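The same unpacking trick generalizes to an arbitrary position; a small sketch (replace_at is a hypothetical helper name, not from the answer):
def replace_at(tup, idx, val):
    # rebuild the tuple around the replaced position using * unpacking
    return (*tup[:idx], val, *tup[idx+1:])

values = ('275', '54000', '0.0', '5000.0', '0.0')
print(replace_at(values, 0, '300'))  # ('300', '54000', '0.0', '5000.0', '0.0')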
I believe this technically answers the question, but don't do this at home. At the moment, all answers involve creating a new tuple, but you can use ctypes to modify a tuple in-memory. Relying on various implementation details of CPython on a 64-bit system, one way to do this is as follows:
import ctypes

def modify_tuple(t, idx, new_value):
    # `id` happens to give the memory address in CPython; you may
    # want to use `ctypes.addressof` instead.
    element_ptr = ctypes.c_longlong.from_address(id(t) + (3 + idx)*8)
    element_ptr.value = id(new_value)
    # Manually increment the reference count to `new_value` to pretend that
    # this is not a terrible idea.
    ref_count = ctypes.c_longlong.from_address(id(new_value))
    ref_count.value += 1

t = (10, 20, 30)
modify_tuple(t, 1, 50)   # t is now (10, 50, 30)
modify_tuple(t, -1, 50)  # Will probably crash your Python runtime
As Hunter McMillen mentioned, tuples are immutable; you need to create a new tuple in order to achieve this. For instance:
>>> tpl = ('275', '54000', '0.0', '5000.0', '0.0')
>>> change_value = 200
>>> tpl = (change_value,) + tpl[1:]
>>> tpl
(200, '54000', '0.0', '5000.0', '0.0')
Not that this is superior, but if anyone is curious, it can be done on one line with:
values = tuple([200 if i == 0 else v for i, v in enumerate(values)])
You can't modify items in a tuple, but you can modify properties of mutable objects inside tuples (for example, if those objects are lists or instances of actual classes).
For example
my_list = [1,2]
tuple_of_lists = (my_list,'hello')
print(tuple_of_lists) # ([1, 2], 'hello')
my_list[0] = 0
print(tuple_of_lists) # ([0, 2], 'hello')
EDIT: This doesn't work on tuples with duplicate entries yet!!
Based on Pooya's idea:
If you are planning on doing this often (which you shouldn't, since tuples are immutable for a reason) you should do something like this:
def modTupByIndex(tup, index, ins):
    return tuple(tup[0:index]) + (ins,) + tuple(tup[index+1:])

print(modTupByIndex((1, 2, 3), 2, "a"))
Or based on Jon's idea:
def modTupByIndex(tup, index, ins):
    lst = list(tup)
    lst[index] = ins
    return tuple(lst)

print(modTupByIndex((1, 2, 3), 1, "a"))
Based on Jon's idea and dear Trufa's:
def modifyTuple(tup, oldval, newval):
    lst = list(tup)
    for i in range(tup.count(oldval)):
        index = lst.index(oldval)
        lst[index] = newval
    return tuple(lst)

print(modifyTuple((1, 1, 3), 1, "a"))
It changes all occurrences of your old value.
You can't. If you want to change it, you need to use a list instead of a tuple.
Note that you could instead make a new tuple that has the new value as its first element.
First, ask yourself why you want to mutate your tuple. There is a reason why strings and tuples are immutable in Python; if you want to mutate your tuple, it should probably be a list instead.
Second, if you still wish to mutate your tuple, you can convert it to a list, mutate the list, convert it back, and reassign the new tuple to the same variable. This is fine if you are only going to mutate your tuple once. Otherwise, I personally find it counterintuitive: it essentially creates a new tuple, and every time you wish to mutate the tuple you have to perform the conversion again. Reading the code, you might also wonder why a list wasn't used in the first place. But it is nice because it doesn't require any library.
I suggest using mutabletuple(typename, field_names, default=MtNoDefault) from mutabletuple 0.2. I personally think this way is more intuitive and readable. The person reading the code would know that the writer intends to mutate this tuple in the future. The downside compared to the list conversion method above is that it requires you to import an additional py file.
from mutabletuple import mutabletuple
myTuple = mutabletuple('myTuple', 'v w x y z')
p = myTuple('275', '54000', '0.0', '5000.0', '0.0')
print(p.v) #print 275
p.v = '200' #mutate myTuple
print(p.v) #print 200
TL;DR: Don't try to mutate a tuple. If you do, and it is a one-time operation, convert the tuple to a list, mutate it, turn the list into a new tuple, and reassign it back to the variable holding the old tuple. If you want a tuple, somehow want to avoid a list, and want to mutate more than once, then create a mutabletuple.
I've found the best way to edit tuples is to recreate the tuple using the previous version as the base.
Here's an example I used for making a lighter version of a colour (I had it open already at the time):
colour = tuple([c+50 for c in colour])
What it does is go through the tuple 'colour', read each item, do something to it, and finally add it to the new tuple.
So what you'd want would be something like:
values = ('275', '54000', '0.0', '5000.0', '0.0')
values = tuple(200 if i == 0 else v for i, v in enumerate(values))
The exact expression depends on what you need to change, but the concept is:
tuple = (0, 1, 2)
tuple = iterate through tuple, alter each item as needed
that's the concept.
I'm late to the game, but I think the simplest, most resource-friendly and fastest way (depending on the situation) is to overwrite the tuple itself, since this removes the need for the list and variable creation and is achieved in one line.
new = 24
t = (1, 2, 3)
t = (t[0],t[1],new)
>>> (1, 2, 24)
But: this is only handy for rather small tuples and also limits you to a fixed tuple size; nevertheless, that is the case for most tuples anyway.
So in this particular case it would look like this:
new = '200'
t = ('275', '54000', '0.0', '5000.0', '0.0')
t = (new, t[1], t[2], t[3], t[4])
>>> ('200', '54000', '0.0', '5000.0', '0.0')
If you want to do this, you probably don't want to toss a bunch of weird functions all over the place and call attention to the fact that you're changing values in something specifically designed not to allow that. Also, we can go ahead and assume you're not worried about efficiency.
t = tuple([new_value if p == old_value else p for p in t])
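For example, with made-up values (old_value and new_value are just placeholders for the snippet above):
t = ('275', '54000', '0.0', '5000.0', '0.0')
old_value, new_value = '275', '200'
t = tuple([new_value if p == old_value else p for p in t])
print(t)  # ('200', '54000', '0.0', '5000.0', '0.0')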
I did this:
list = [1,2,3,4,5]
tuple = (list)
and to change, just do
list[0] = 6
and you can change a tuple :D
Here it is, copied exactly from IDLE:
>>> list=[1,2,3,4,5,6,7,8,9]
>>> tuple=(list)
>>> print(tuple)
[1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> list[0]=6
>>> print(tuple)
[6, 2, 3, 4, 5, 6, 7, 8, 9]
You can change the value of a tuple using copy by reference:
>>> tuple1=[20,30,40]
>>> tuple2=tuple1
>>> tuple2
[20, 30, 40]
>>> tuple2[1]=10
>>> print(tuple2)
[20, 10, 40]
>>> print(tuple1)
[20, 10, 40]
