
02. A Concern on AI Outcomes

AI is a double-edged sword. In the right hands, it can help humans make great progress; otherwise, it can turn them into mindless shells.


I recently read the news snippet below about Geoffrey Hinton, the godfather of AI:

The 75-year-old British scientist told the New York Times that he partly regretted his life’s work, as he warned about misinformation flooding the public sphere and AI usurping more human jobs than predicted. “I console myself with the normal excuse: if I hadn’t done it, somebody else would have,” said Hinton. He added that it was “hard to see how you can prevent the bad actors from using it for bad things”.

This comes after a 40-year career in the AI space – and from a well-respected professor at that!


Another inference from the recent burst of news on this topic: the human brain can only be developed organically, whereas synthetic robotic brains can simply clone the training and learning of pre-trained models.


I am not totally against the development of AI, but I am worried that it is becoming an uncontrollable monster.


We, as normal, average humans, are being exposed to –


- Artificial news content

- Artificial subject essays

- Artificial images

- Artificial music

- Other artificial content



This means that, on a regular basis, we are being fed output from AI systems, and we are not able to differentiate between AI outputs and human outputs. To me, this looks sinister!


What used to require hard work, rigorous learning, and R&D is now available at the click of a button. Are we, en masse, delegating the thinking and developing part of the brain to an artificial agent? This is the part of the brain that sets us on the path of the concept of ‘in pursuit of human excellence’. We need to be fundamentally solid thinkers before we can become transcendentalists, and that solid thinking has to be part of our daily routines – on the job, as we commute, as we relax, as we learn, as we do R&D, and so on. We have to return to the good old way of picking the best of human minds, celebrating that work, and extending it further with our own hard work and R&D.


Spiritual involution (the art of un-entangling), where we progressively become more intelligent through a sincere quest to know, is blessed by nature and makes us better, more empathetic humans. The foundations of such an involution have to be built through our organic learning processes. We simply cannot allow AI systems to corrupt this in a senseless manner.


At the very least, this space should be regulated vigorously.

We should have the right to

- Know whether a piece of content is AI-generated

- Know the ‘author/source’ and the purpose of the AI content

- Know whether the effort replaced human work in an ethical sense (to be developed further to accommodate the nuances)

- Know whether any human author endorsed it before public consumption

- Have easy-to-use options to turn off AI content entirely, in any form and in any media, for humans who abhor it (like myself!)

Some of the controls that I think should exist in this space include –

- A regulatory body that serves as a watchdog over AI technology, on both the generation side and the consumption side

- The ability for the source to take back AI-generated content

- An easy way to highlight that a piece of content is AI-generated, together with non-repudiation controls for the source system (see the sketch after this list)

- A significant periodic study, with published results, on how AI content is impacting the general public and particularly the younger generation, and on whether mindless(!) AI content in the media is affecting normal brain development. This should include metrics on how one’s choices are formed artificially in the mind (even unwillingly) when one is surrounded and fed all sorts of information by AI systems (e.g. recommendation engines – are we even allowed to think on our own before being conditioned to think the ‘right’ things we should think?)
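
To make the labelling and non-repudiation control above concrete, here is a minimal, purely illustrative sketch in Python. It assumes the third-party cryptography package and an Ed25519 key pair; the manifest fields (ai_generated, source, purpose, created_at) and the function names are hypothetical choices of mine, not any existing standard. The idea is that the generating system signs the labelled record, so a reader can verify that the content is disclosed as AI-generated and the source cannot later disown that disclosure.

```python
# Minimal illustrative sketch (not an existing standard): a generating system
# attaches an "AI-generated" label to its output and signs the labelled record,
# so the source cannot later deny having issued it (non-repudiation).
# Assumes the third-party 'cryptography' package; all field names are hypothetical.
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The source system keeps the private key; the matching public key is published.
source_key = Ed25519PrivateKey.generate()
public_key = source_key.public_key()


def label_and_sign(content: str, source: str, purpose: str) -> dict:
    """Wrap content in an AI-disclosure manifest and sign it."""
    manifest = {
        "ai_generated": True,   # the disclosure every reader should see
        "source": source,       # who produced it
        "purpose": purpose,     # why it was produced
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content": content,
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["signature"] = source_key.sign(payload).hex()
    return manifest


def verify(manifest: dict) -> bool:
    """Anyone with the public key can check the label is intact and attributable."""
    claimed = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claimed, sort_keys=True).encode("utf-8")
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except Exception:
        return False


record = label_and_sign("An AI-written news summary ...", "example-model-v1", "news digest")
print(verify(record))  # True while the record is untouched; False if it is altered
```

In practice, keys would be issued and verified through a public registry or the regulatory body rather than a single script, but the principle of a signed, verifiable AI-content label stays the same.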


As I said before, I am not totally against AI. I am just proposing that it should play a supporting role, and that too by willing choice, rather than turning us into mere consumers who no longer apply our capacity to think. If an AI technology displaces humans en masse, or prevents deep thinking through the regular learning channels, I think we are on the wrong path.


Nature expects us to blossom organically in all our human activities, eventually leading us to realize our full potential. By short-circuiting this organic growth with AI, we are doing a great disservice to the idea of ‘in pursuit of human excellence’.


Data science (as in machine learning using algebraic models), as part of AI, has probably reduced the human toll in a positive way by aiding research and development where the work was humanly impossible, strenuous, or boringly repetitive. But in my opinion, generative AI (one definition on the web: a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio, and synthetic data), if not controlled, can harm us in the long run.


Let us look deeply at all the lost jobs that involved exerting and developing the human brain, and try to take them back and merge them into our society in a progressive manner. At the same time, we should watch out for AI technology that harms our independent way of thinking and is able to remove jobs meant for humans, at scale or in individual capacity.


Please refer to Part 2 of this blog at https://www.mayoan.com/technology/11.-a-concern-on-ai-outcomes---part-2

