Or is there some other way I can minimise memory usage? Splitting large amounts of text into chunks isn't exactly efficient, because:
A) When combined with the other texts, the overall image still occupies the same amount of space as it would in a single chunk. The RAM (I may be incorrect in stating this) is still suddenly overwhelmed when a large amount of text and Unicode appears at once.
B) Superfluous clones are created (with Unicode, each part must be accompanied by a clone to fill the gaps in between), so in total more text is used than with one large chunk.
Perhaps my logic is flawed in interpreting how the quantity of Unicode text affects performance, but I'd like to confirm this, as I may need to reshape the plan for this project (or simply restart it), since it depends heavily on Unicode text.
I'm willing to be flexible, and will delete a lot of text if that is the only option. Performance matters more than quantity.