Malek's Moorish Tales

Meanderings about life and technology

Is the quest for Artificial Intelligence a Risk for Humanity?

   Over the last couple of years, I have started to hear in podcasts and read in some literature a renewed fear of Artificial Intelligence. This fear has of course been part of Sci-Fi for some time, but lately many thinkers seem to look at it as imminent.

   I understand that the general public, with all the recent talk about Artificial Intelligence being achieved (as in the algorithms used by social media, predictive algorithms in many business applications, or the various assistant software like Siri or Alexa), might think the term intelligence is used in the sense we commonly use it: an agent capable of addressing various situations by learning from others and from past mistakes, in order to achieve its aims, which in turn evolve according to its understanding of its own interests. In short, the public might think that what we call "General Intelligence" is achieved or imminent.

  Nothing could be further from the state of the field. The general idea of "Specialized Intelligence" is an algorithm with no control over its objectives and no subjective interests. It is software that is given a measurable objective, a starting point from which it can do something, and some variables it can modify. It applies the starting procedure, measures the results and how they compare to the given objective, then iterates: it modifies the variables, measures whether that brings it closer to or farther from the objective, and adjusts how it modifies the variables depending on the result. Basically, it tries to solve an optimisation problem, getting continuously better at it.
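  To make that loop concrete, here is a minimal, purely illustrative sketch in Python (not the algorithm of any particular product): a measurable objective, a set of variables the program is allowed to modify, and an iteration that keeps only the modifications that measure better against the objective.

    import random

    def optimise(objective, variables, steps=1000, step_size=0.1):
        # Simple hill-climbing loop: tweak the variables at random and
        # keep a change only if it measures better against the objective.
        best = list(variables)
        best_score = objective(best)
        for _ in range(steps):
            # Propose a small random modification of the variables.
            candidate = [v + random.uniform(-step_size, step_size) for v in best]
            score = objective(candidate)
            if score < best_score:  # closer to the objective: keep it
                best, best_score = candidate, score
        return best, best_score

    # Toy measurable objective: squared distance from the point (3, -2).
    solution, error = optimise(lambda v: (v[0] - 3) ** 2 + (v[1] + 2) ** 2, [0.0, 0.0])
    print(solution, error)

  The point is simply that such software never chooses its own objective; it only gets better and better at minimising the number it was handed.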

  There is today no clear path from where we are to General Intelligence, nor necessarily any serious endeavor to get there. The fact that some software today (especially the assistants and robots like Sophia or Robot Einstein) can mimic natural language, and might thereby mislead audiences into thinking it is a General Intelligence, changes nothing. These systems use the same Specialized Intelligence approach applied to natural language, then predefined routines to respond to our questions or conversations.

  The fear of Artificial Intelligence focuses on the risk that some autonomous intelligent machines might have objectives that do not align with human interests, and that they might find it in their interest to take power over the world and ignore us, or even destroy the human species. That assumes that General Intelligence is achievable. I don't know whether it is achievable or not. What seems clear to me, though, is that it is not imminent, and that we are not on a path leading there, which makes me convinced that if it is achievable, it probably won't be in the lifetime of my generation (and probably not in that of our children). Also, to address a risk with any efficiency, one has to understand it in terms of vulnerabilities and how they can be exploited. We are far from understanding what General Intelligence might look like, whether it might be a risk or not, and even less how it might operate, to be able to come up with any resolution or mitigation of the risks.

   I have to conclude that the fear of robots taking over the world and annihilating the human race is still as much fantasy and fiction today as it was a few decades ago.

 

   However, Specialized Intelligence presents serious challenges to human affairs as they stand today. It is not so much a disruption as an acceleration of the automation process already well underway.

   Over time, companies have been employing fewer people to achieve the same production, due to tasks being delegated first to machinery and then to hardware and software. Beyond the mere number of employees, whole categories of jobs have been made obsolete. A hundred years ago, a large company would have employed dozens of "computers", not the electronic devices that go by that name today, but people performing the calculations necessary for bookkeeping and accounting. In the same way, thirty or forty years ago, a car assembly line would have employed a large number of workers performing tasks that are today performed by robots and automated machinery.

  Today, a large portion of human work is dedicated to performing basic routine procedures, assisting in repetitive tasks, gathering information, or making decisions based on well-defined criteria. Most if not all of those tasks can be performed by Specialized Intelligence, and it will keep being perfected until it becomes simply unthinkable to have humans do them. That transformation is likely to happen very quickly, with a huge human impact. Humans would have to move to more creative tasks, yet we are not seeing the transformation of education and training needed to prepare the next generations for that inevitable future. That, in my mind, is the imminent and prominent threat to human societies, and the one we should be spending our thoughts and energy on.

  
