The Value, Role and Erosion of Human Identity in Language
Responsibility and language in the age of AI
I started 2025 in a contemplative mood, so the tone here leans toward contemplation and questioning, and is probably also somewhat philosophical.
I’m not posing the ideas herein as facts, but rather as questions to be dealt with by each individual reader.
Are we yielding our identity to machines and AI under the guise of efficiency?
Individuality wanes, but is it really a problem?
Do the readers or consumers care?
Can we tell the difference between Salinger and Bukowski? Can we tell the difference between AI-generated text and human outputs? Do we even want or need to? If we can’t or don’t want to, what does it matter who writes what?
In a few years’ time, we probably won’t know what’s human and what’s AI-generated anymore… Although the humanity within us should steer us in the right direction – a sort of intuition. The “humanity” within us will also steer us toward the most convenient option (think, the cheapest). And that is, almost invariably, AI-generated texts, translations or in general – language.
We are coming to an age where we “create” content for the sake of content, hollow in style and meaning. Words then seemingly lose worth and value, which may impact not only future generations of writers, but also thinkers – philosophers (Chomsky, Greene, Shapiro, Holiday, Peterson come to mind…).
Who cares for words if we can have an infinite amount of them in an instant? Who cares for ideas? And creativity? Isn’t the worth, in one way or another and to a certain extent, imbued in the fact that original and creative ideas are finite or scarce?
That’s how it works with “physical” or financial resources – like gold or bitcoin. The value lies in the fact that both are finite in amount – scarce, to a certain extent.
What if we suddenly make the phenomenon of origination or creation an infinite well – an infinite pool, if you will – of words, ideas, “art”, where you can get everything instantly? Will the value prevail? Or will it erode (as it logically should)?
What AI and LLMs create could almost be compared to hyperinflation in the areas of language and ideas, where the overabundance of words and hollow structures renders the whole language (we could compare it to currency) worthless. Or worth less.
Language is becoming “tokenized” – words or morphemes exchanged for monthly subscriptions to ChatGPT or Claude. The task of writing, and more importantly creating, is then taken out of the hands of the main agent for such activities – a human, a person with real-world experience, sensory experience and a track record. This task is then “efficiently” outsourced to the machine.
The most depressing thing might be that companies are suddenly saying your experience (basically, your life or career) is not worth as much as we thought. That professionals are not worth as much as they thought they were. So, what is their value?
The value, or more precisely the price, of ChatGPT and other word-generating AI models is set exactly by the monthly subscription you pay for them. Is there a monthly subscription for human creativity? We could say it’s a salary – if you work as a full-time employee. Creatives are often freelancers, so they don’t work on the basis of a salary or monthly subscription. And even if they did, the monthly subscription would, I imagine, oscillate somewhere in the thousands of euros or dollars. And even then, we could argue whether it’s justified, i.e. high enough.
Compared to the monthly subscription to ChatGPT, that’s a steep rise.
But what are we essentially paying for when it comes to any product or service? We are, after all, paying for the work, and maybe in other terms, for the value of the product or service. And the value is created through work, experience, expertise, tailoring the solution etc.
Value is found in something based on the work or effort its creator puts in. Relationships. Any product or service. These have worth or value because people put in the time and effort. Or, let’s call it research (still, time and effort).
But what if there is no time and effort needed anymore? What is the worth of the output? Undoubtedly, whatever the market agrees on – as is the case on the stock market, for example. It’s a social contract, so to speak, that settles the value of words or ideas. Or maybe the value will be solely reflected in the output, and not in the process and work that led to it.
What we may ask ourselves:
Is human identity (or as we may call it, human touch or human agent) essential in this process and in the creation of this value?
Can the value be created without a human present in the process? Without, as some call it, “human in the loop”? (Funny that we’ve come from “AI in the loop” to “Human in the loop”, innit?)
Does the value reside then chiefly in the words (tokens) themselves and their composition, and not in the idea behind them, the experience behind them or the authorship?
Does the word “essential” hold any value on its own when generated by AI? As a token? Or does it become valuable only when used in the right context with an idea behind its use by a human agent? With a certain degree of emotion? And history?
Can we determine that?
And if we can’t, who is qualified to do so? It’s often said that beauty is in the eye of the beholder. Similarly, the value is probably determined by the one who is paying the price, or the one who is going to peruse the final product of such creation or process. The value of the work is then determined not by us – the creators – but rather by the “market”: the buyers, who can choose what they are willing to pay and what they are willing to pay for. Or whether they are willing to spend their time enjoying such work.
There was the case of AI-generated poetry...
According to some sources (the Washington Post), some readers judged an AI-generated poem more beautiful than what Shakespeare wrote. Does that call the value of human-made art into question? Or the meaning (purpose) behind creating it in the first place? Can the enjoyment and fulfillment from what machines make or create be the same as from what people make or create?
There also seems to be a disconnect, or a fundamental flaw, in understanding LLMs and AI as tools – because who in their right mind would willingly and spontaneously come up with the idea to “outsource” the writing of poetry to an AI model? Naturally, if you enjoy writing poetry, it comes from within you, and you will write and enjoy it naturally – even if you are not good at it by certain standards.
What is even the point behind “outsourcing” such an activity which should chiefly result in the higher psychological or cognitive pleasure of its source, the author, to the LLM? Maybe it’s just a perversion of this day and age; the overabundance of technology has led us to reach for and find new extremes, to the point of being detrimental to our very selves, to our culture, language and identity.
Surely, we can’t take away the enjoyment of the artist – the enjoyment they feel while engaging in the process of creation. Which is, after all, often far more important than the final product itself – the process is where the value for the author resides. If we substitute “more efficient” AI tools for this creative process, we logically lose the enjoyment of engaging in said activity and the fulfillment of the “source”, the author.
My question is, then, what’s the benefit of such process? Who will, in the end, benefit? (As Mr. Donutil asks below). And what is the intrinsic value?
And the fact that we supplant the human creative process with a machine might pose some other questions, also related to the enjoyment of a given piece (art, literature, an article etc.)…
Can the machine emulate the developing fabric of human history, of its time and “surroundings” (which are, in the case of AI… what exactly?)? And can this be done by simply predicting words based on scraping the Internet and being able to process an enormous amount of data?
Does your enjoyment then diminish or wane when you find out that a certain piece of art, for example a painting or a print, is not the product of a human conception, but rather of an algorithm?
That is the question we should maybe ask ourselves. That is the question that comes into play when we think about language or art in the context of AI.
Can AI write a book? And I don’t mean a book “like” some other book – as many people seem to argue: AI can’t write like Hemingway. Well, that’s the point. Hemingway wrote like Hemingway.
Can AI write like AI, and can we consider it authorship? Does the style of AI hold any value for us – and is it consistent and distinguishable?
Will we then have a museum or a gallery of AI creations in five or ten years’ time? I guess if we decide to assign AI outputs such value, then it might come to pass. Another question would then be whose name will appear underneath these works.
AI obviously can’t write or create on its own, without a human prompt (yet?). Who then is the author of the output? Is it the AI model, or is it me (us) who wrote the prompt?
Without the prompt, there would be no words, no initial idea, no picture or no “output”. But without AI, there, again, would be no words, no “content”. This is a conundrum that, I guess, AI creates…
My question is then, wouldn’t it be philosophically (and maybe even practically) easier and sound to just leave the concept of “authorship” and creativity in the hands of humans?
Let AI do the menial tasks of organising and suggesting improvements for your Outlook calendar, while we take care of the creative work?
The trend now, however, seems to go in the opposite direction. We are being left with all the menial tasks (rating AI outputs for quality, for example), and we are “outsourcing” the “higher” cognitive activities we enjoy and like to do (like creating stuff and writing and translating) to machines and AI.
No one is complaining that they HAVE TO write a novel or an article – usually, people do that because they enjoy such activities. The initiative comes from within them. Translators become translators because they enjoy doing it – why else would you become a translator in this day and age anyway? (only half-joking)
What the people hopping on the AI revolution bandwagon are saying is:
“You no longer have to do what you enjoy! We will just outsource it to AI because it’s free! And we will pay you to do something you don’t enjoy doing. Isn’t that a brilliant idea? And look at all the time and money we saved!”
In this situation comes the problem of erosion of human language.
We created language and words to convey thoughts and ideas. To communicate. To actively exchange information.
Words are today being used for passive engagement, for filling in “content gaps” that no one cares about and for feeding “SEO” or “the algorithm”. What then becomes of language?
Will it become this convoluted, machine-like, tokenized, dehumanised gibberish, riddled with buzzwords?
Won’t people get sick of it?
They probably won’t, as long as it’s free, or close to free.
What is then the value of a word? Of any word? There is a saying, or at least I’ve heard it: A man’s word is a man’s worth.
What worth is an AI’s word? Is it the amount of energy exerted or consumed for its generation?
And then again, this brings us back to the ubiquitous, yet not even remotely enough discussed question:
Who is responsible?
Whenever I write something, you have the right to confront me and challenge my words or ideas.
The question of worth then, invariably, leads us to ponder the issue of responsibility.
If AI botches a translation; or gives you an inaccurate response; or includes misinformation in an article – who is responsible?
You may confront me about the use of semicolons in the previous sentence – though I do not recommend that; Tolkien would approve. But what if this whole piece was generated by an AI model – who you gonna call?
Ghostbusters?
Therein lies the issue with responsibility that the use of AI in language creates.
Can the issue of responsibility be legally fixed by a disclaimer? What is the purpose or functionality of such an AI-generated or machine-translated document? Wouldn’t we, in case of discrepancies or disputes, have to revert to a human agent anyway?
Responsibility, in many ways, gives our lives (deeper) meaning. We take on responsibility as we move through life and grow – whether it’s at work, in our personal lives, relationships: responsibility for an output, for your work; responsibility for a child or a parent, responsibility toward your significant other.
AI and its use then open a question not only of erosion of human language, but within that, the erosion of trust we have in each other, responsibility we have toward each other and the value of the produced outputs. Erosion of responsibility in language; and the erosion of value of language in life.
We then need to ask ourselves, where is the responsibility created when it comes to language? At least for me, the responsibility is becoming apparent (responsibility to myself and to my readers for the ideas and words I put out there) exactly in the process of writing and creating. Ideation. But what AI does is that it eliminates this process, this cognitive process of creation, thereby eroding the sense of responsibility in the source (the human or the person) who is supposedly the “author” of such work. This, it might seem, then could result in the overall erosion of responsibility for words and language (and, by extension, deeds, actions or ideas) within our society as a whole.
Language, devoid of responsibility for its meaning, is… What exactly?
Words?
And where, then, shall we find meaning – if the responsibility for our words or our ideas erodes? The meaning of life resides, partly but not chiefly, in adopting and taking on responsibility. If AI takes this aspect away from us, does it make us freer – or does it make our lives, and our language, less meaningful?
When you look at an article or a book, you can often “feel” its value based on the perceived effort and expertise that went into its crafting. For example, the immensely deep lore of Tolkien will hold its value over generations, especially because of its profundity – the hundreds of generations and thousands of years of history he created – and its linguistic logic and value, which is (in my opinion) unparalleled.
The thing is, if AI could reproduce a similar feat within seconds, and do it consistently and repetitively, what then remains of the value we had initially ascribed to (for example) Tolkien’s work? Is it, now faced with the capabilities of AI, any more or less valuable?
And is the “work” of AI valuable at all? Given the fact that generative AI inherently works based on human inputs or sources.
Even human creativity, it is said, is based on a unique experience and perception of various inputs (sensory and other), which then coalesce into something “new” within our conscious minds… And then each “agent” chooses the form they will use for the expression of this new-found element or sentiment.
The thing then is that the value people find in such pieces lies in their ability to “feel” the same thing or emotion; or feel their own emotion, based on their experience, which then makes the art, or the piece (the product of creativity) transcend the borders of space and time. Thus, it seems, only human emotion can achieve that.
Then the question is, can AI produce an output which would incite such an emotion or a reaction in multiple generations of consumers of such a piece?
Can it do that without having the experience itself? Or with having only the description of the experience?
And that is, most likely, to be judged by the beholder of said piece.
But then again, this is art, and not everything humans produce can be classified as art.
Language comes with power and power means responsibility. In the case of AI, we need to ask ourselves who wields this power and therefore who is responsible for the power vested in the generated words.
Is it the person who prompts the language model?
Is it the person who publishes the content?
Is it the person who reads it and acts upon it?
I believe, or hope, there must be a legal framework for this by now… But my question is more philosophical, maybe more in the moral area, where I’m thinking about the consequences for the development of our society, culture and language.
Language was originally devised, I believe, to communicate things of importance, to communicate ideas – in order to make ancient tribes more efficient, able to withstand danger and prevail against it. To become organised and thus more efficient. To rise above other species and leverage the forces of evolution.
What happens when we yield this tool of human progress and development to algorithms, Nvidia chips and processors?
Imagine that you use raw AI for the translation of a legal document – a regular, everyday one. I’ve translated plenty of such documents to know that they often include a clause stating that, in case of discrepancies between the language versions, the document in the original language shall prevail. In this case, my question is, should the company inform the parties, or anyone involved, that the language version they are getting has been translated by AI? Would this fact sow doubt in their minds? Would they be more cautious knowing it wasn’t translated by a human? Or would it matter to them at all? Would they simply not care?
Is the risk or exposure higher, or the same? I know this might be an extreme example, and most sound companies wouldn’t use raw AI for the translation of important legal paperwork (or would they…?). But, seeing the era of cost-cutting and, as mentioned before, the erosion of humanity in language, I wouldn’t be that surprised if such a case occurred.
Or let’s imagine you want to write a love letter to someone. Does it matter whether you write it yourself or whether you prompt an AI engine to write it for you? The sentiment is the same; is the value the same? We could argue that beauty is in the eye of the beholder, thus leaving the judgment of its value in the hands of the recipient of said affection. The question that remains is one of conscience – does it hold the same value for you as the supposed “author”?
As I’ve said before, I might be wrong. I might be wrong in saying that language is an inherently human tool – for humans, by humans. I might be wrong because there might come a point where all sides realise that machines and algorithms and AI can use it more efficiently and maybe even more beautifully. This, though, does not preclude me from believing in what is now becoming more of a myth – that humans have the exclusive right to the use and perfection of language. I will continue to believe in this idea, however far-fetched it might seem given the situation, as I will continue to believe in the necessity of myth and mythology in our culture and history (which is inherently tied to language).
So, to answer, what is the value and role of human identity in language?
Language is a direct extension of human identity. Human identity is part of language and language is a part of human identity. There is no scenario where we could efficiently sever the two and come to a satisfying solution for our culture. There are only unsatisfying solutions, or inefficient ones, when it comes to alienation of language from its very source and medium – humans. This, though, doesn’t seem to matter in the context of costs and business, which is what inevitably dictates the way our society and culture develops and progresses.
(This disconnect is also often present in many companies and businesses, where they don’t realize that the value of said company almost chiefly resides in the people who work there, and not in the numbers or statistics – which are only the result of the collective effort. Company is not a logo, brand assets or your blog page. It’s the people.)
This perceived progress, as it was (to a high degree falsely) perceived in the case of machine translation and as it is perceived in today’s state of AI, will one day come crashing into reality, and we will find ourselves not in a cycle of progress and development, but rather in a bit of a downward spiral – which is, according to many theories and even science, inevitable. In a way, progress (mostly technological) often leads us to decadence in cultural elements, though I know this may be contested on many fronts – I am thinking, for example, of social media and how it impacts the younger generations. There are obvious benefits and obvious pitfalls.
After all, even the economic cycle has its ups and downs; and we may find in our history a long-term cycle which is slowly coming to its end, as Ray Dalio points out in his quite illuminating book “The Changing World Order”. He points to one of the megacycles coming to its end, and I believe, AI and our use of this tool, as well as social media, is right now a part of the beginning of a long, but probable, decline.
The thing is, these megacycles often span centuries, so we might have to wait a bit… The question then remains, will this prolonged state lead to an inevitable erosion of human language? Or the human identity in language?
Will we really get to a point where our language will become “dehumanised”, but it will still convey what we need it to convey, thus achieving the holy grail of efficiency? Everything is “free”. Everything is tokenized – devoid of human agents. Words are exchanged for credits, and ideas don’t matter anymore. Will the words matter? If there are no ideas behind them… That’s the question. If there are algorithms which will reward such words – I’m thinking Google’s SEO and so on – then yes, the value will be there. But if we look at words as vehicles for exchanging ideas between you and your audience, then the value is lost – even the ideas aren’t really yours and are not conveyed in the unique way your experience would normally prompt you to convey them.
So, the individuality or human identity in language is, or will be, lost – to costs and efficiency.
What then remains but to keep writing – poetry, sonnets, love letters, articles on Substack and, I guess, translate for pleasure? Will human writing become a symbol of a sort of “rebellion” against the swathes of AI content? And if it will, what will happen to language as a tool for thinking and conveying ideas?
Will there be any more ideas – with the ubiquity of AI and constant, ceaseless generation of “content” and words for the sake of words; will there be space for new, original, human ideas?
What if we replace human writers with AI and human translators with machine translation – what is then the value of language? What is then the value of a word? Where is the meaning? Where is it created?
And if we do manage to reduce the value of words to zero (or some virtual tokens), what is the motivation for the reader to spend their time reading them? Time is money… But the words are worth nothing, so what are you spending your time – the most valuable currency you have – on?
Another option is that language may become chopped, dehumanised and “tokenized”, thus we will be forced to assign a value to each word or each token (morpheme? a part of a word?) and operate based on the number of tokens and the energy this number of tokens requires to be either generated or “transformed” from one language to another.
Thus, we are no longer operating based on semantics, based on meaning, but rather based on statistics and whatever is closer to the functioning of the algorithms or the machines.
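The per-token accounting described above can be sketched in a few lines of Python. To be clear, both the tokenizer and the price below are invented placeholders for illustration – real LLMs use byte-pair-encoding tokenizers and their own per-token rates, not these:

```python
# Toy illustration of language "tokenized" and priced per unit.
# The chunking rule and the rate are hypothetical, chosen only to
# show the shift from pricing meaning to pricing token counts.

def toy_tokenize(text: str) -> list[str]:
    """Naive stand-in for a subword tokenizer: split on whitespace,
    then break longer words into 4-character chunks."""
    tokens = []
    for word in text.split():
        while len(word) > 4:
            tokens.append(word[:4])
            word = word[4:]
        tokens.append(word)
    return tokens

PRICE_PER_TOKEN = 0.000002  # hypothetical rate in dollars

def cost_of(text: str) -> tuple[int, float]:
    """Return (token count, price) – value measured by statistics,
    not semantics: the sentence's meaning never enters the formula."""
    tokens = toy_tokenize(text)
    return len(tokens), len(tokens) * PRICE_PER_TOKEN

n, cost = cost_of("Language is becoming tokenized, exchanged for subscriptions.")
print(n, cost)  # 16 tokens under this toy scheme
```

Notice that a profound sentence and a hollow one of the same length cost exactly the same here – which is the point: the pricing function sees tokens and energy, never ideas.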
And then again, I’d ask: how is it possible that we so easily yield to machines something that was ours to begin with, while we can’t seem to part with material things in our possession that are far less central and essential to our identity than language?
A brave new world? Maybe.
That's a solid dose of insight and care! Many valid points along the way, and a big mouthful to reply to, but for now I will just look at one thing: the value of words, and its consequences.
I can see a possible future where we don't have anyone designated as the author of books, and we are not producing them at all to be distributed and bought by an audience. Rather, books are generated directly on the spot, for the listener (because they will more often than not be automatically converted to audiobooks), in an endless stream that could even be adjusted along the way, depending on the receiver's reactions.
The AI in the smartphone, or wherever the receiver gets the stories from, can sense reactions through the phone's sensors, but can also ask directly, "how do you like the story?" and "what would you like to happen next?"
The death of the book can, this way, become much more massive than you suggest in your article. Books will exist in the moment only, perhaps recorded for the one to whom they were made, but nobody else.
The same, of course, with movies and music (and news, talk shows, etc.)
There is a potential revolution coming up that will turn everything related to communication on its head.
And the value? A few people will own the mechanisms for doing this, and they will probably also be able to claim full copyright on everything produced. They will become wealthier than anyone ever has been. They do not need to care about ethics; they'll have money instead.
We are talking about a big value for these words. In a sense. The combined value of all entertainment and communication. That is probably not peanuts.