Yi Hein

AI and the Human Condition

This article was first written in November 2018, in a 3-hour session of brain vomit. No research was done, so there should be no expectation that any of the ideas expressed are factually accurate. This article is now posted publicly for the first time for anyone interested.

Key Points

  • Replaceability
  • Income inequality
  • Wars over the progression of AI
  • Absurdity of life
  • Biology research
  • Problem with joining forces
  • Yi Hein’s future plans
  • Ambition
  • Link between genes and coding
  • Ride the wave, know about AI, if you can’t beat it, join it
  • Limits: from a replacement ceiling to human extinction
  • Experience of learning is the same for humans and machine learning
  • Machine to human ratio
  • There is a difference between remembering and understanding — apply to AI — Azmi

When you break down the human condition and the requirements of existence, you realise how absurd everything is. Let's start with learning, and the idea of learning things. When we learn subjects like biology, physics and chemistry, we feel like we are coming to know more about the world, improving our knowledge and getting smarter. However, all that we are doing is memorising and storing information in our memory. The really absurd part comes when such information is actually just a set of rules that were created by…nobody knows. One begins to realise the absurd nature of learning when one learns to code. Coding is basically arbitrary and entirely created by humans. For coding we are confident that it is invented and not discovered, because, simply, we know exactly who created the rules and syntax. For example, Python was created by Guido van Rossum and first released in 1991, and within that one programming language so much more is possible. So when one learns Python, we are just learning the arbitrary rules created by someone else; it is someone's invention, and you could be the one creating those arbitrary rules. It is just like you writing your EE and millions of people studying and memorising your EE as rules that explain the world. In biology and physics (or chemistry) it is the same, except that we do not know who created those rules; the rules are just there, there is no reason, and it is for this reason we realise how absurd everything is. There is no answer, just memorisation of rules which we ourselves could have the power of creating if we wanted. Another example can be seen from art: studying literature is just memorising text written by someone; it is entirely someone's work, and that work could be created by you or me.
This reiterates my view in other theory documents that art and science are essentially similar, just that they occur on different timescales.
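To make the arbitrariness of coding concrete, here is a minimal sketch in Python of a tiny "language" whose rules are decided entirely by us, just as Python's own syntax was decided by its creator. The operation names ("zorp", "flim") are invented on the spot purely for illustration:

```python
# A toy "language": a set of rules (operations) invented by us.
# Anyone learning this language is just memorising our arbitrary choices.
rules = {
    "zorp": lambda a, b: a + b,   # we decided "zorp" means addition
    "flim": lambda a, b: a * b,   # we decided "flim" means multiplication
}

def evaluate(op, a, b):
    """Interpret an expression according to our invented rules."""
    return rules[op](a, b)

print(evaluate("zorp", 2, 3))  # 5, true only because we defined it so
print(evaluate("flim", 2, 3))  # 6
```

Nothing about "zorp" meaning addition is discovered; it is pure invention, which is exactly the point about learning someone else's syntax.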

Now moving to AI, let's explore the future of AI and its consequences. The danger and risks that come along with AI stem, again, from the absurdity of the human condition. When you break it down, humans are basically just a glob of atoms that is able to perform certain functions due to the special arrangement of those atoms. There is no special meaning in life; vitalism was disproved long ago. Humans can in fact be seen as non-living, something that could be classified together with a glob of metal, just demonstrating different characteristics due to a different arrangement of atoms. With this in mind, we start to realise that this is the reason AI is so dangerous. It is dangerous because it can replace humans entirely. Humans have no special unique vital force that AI is unable to attain. Whatever humans have is attainable by AI, and it is just a matter of time.

Talking about the progression of AI, we realise that it is happening now and it is happening fast. Moving forward, we must carry the assumption that AI will be able to replace everything; it has the potential to replace everything, and if all goes well, everything will be replaced. Whether this will actually happen is questionable and debatable, as will be discussed later. So how do we see the AI takeover at this point in time? We can see the current period as the Dawn of AI (just like the movie Dawn of the Planet of the Apes, but in this case it is the dawn of the planet of AI). Basically, this is how the AI progression will go. Imagine a start and a finish line: the start line would be the most basic tasks, and the finish line would be the most complex tasks, those dealing with the complexities of human beings, emotions, intuitions, dreams and so on. What we are seeing now is that AI has started to move away from the starting line. This is why there is the huge debate about AI and technological advancement taking over jobs: in the race to the finish line, the AI has already covered some distance, and over all the distance it has covered, humans become obsolete. Naturally, the things placed near the start line are simple tasks, and hence such simple tasks have already been replaced in our present world, leading to structural unemployment in the form of technological unemployment.

From the egoistic point of view, in order to ensure self-survival and self-relevance and not be overtaken in the AI race, we realise that we should at this point in time take up skills which are extremely difficult for AI to take over: skills such as human emotion, compassion and creativity. Creativity is one of the main factors in entrepreneurship, so this industry has potential. I would assume that the leap of faith from understanding knowledge (using big data) to creating new knowledge and making new breakthroughs would take the AI a considerable amount of time. Therefore, in order to embrace the absurdity of life, it is important to recognise the inevitability, which is the eventual total takeover and replacement of humans by AI. Recognising this inevitability, we now have to find ways to save ourselves, to prevent ourselves from being swallowed by the big wave of AI development. This can be done by investing in skills which are difficult, or take a long time, for AI to take over.

A few moments ago, I believed that it is the low-paid jobs and the 'low-skilled workers' that would be replaced by AI. This is true to some extent, but I will provide a counter-argument later. Indeed, we see that low-paid jobs such as cleaners, security guards and bag checkers require minimal skills and are therefore extremely easy for AI to take over. What this leads to is a massive loss of jobs among the low-income groups and increased poverty, since such low-skilled workers typically number far more than the high-skilled workers. This can pose huge problems for the functioning of the human race, with large numbers of poor people who are unemployed and suffering and a small number of high-skilled people living comfortable lives. Riots? Fights? War? All of that may happen as a result, and countries can collapse from the inside.

However, this is not entirely true; low-paid jobs may not always be the first to be replaced. For example, the work of a doctor can easily be replaced. In today's world, doctors are inundated with so much work that all they do is practise medicine, with little or no time to interact with patients at all. When you think about it, you really start to realise that doctors can be easily replaced by AI, even though they are paid so highly. In a sense, big data can be used to analyse symptoms and generate a diagnosis. You realise that the same processes that go through the human brain for the doctor to arrive at a diagnosis can simply be modelled by machine learning. If machine learning is able to go through those thought processes, then AI will be able to provide the same results doctors can. Even for surgery, we realise that it is basically a system of coordinated movement based on the stimulus of sight, and this can easily be replaced by AI, as we can see today. What cannot be replaced, or at least not easily, would be the things essential to the human condition: the ideas of compassion, care and concern, and such work is conducted by social workers. So in a sense social workers, who at the current point in time are not paid a lot, would be among the most valuable members of the workforce and would likely be valued highly in the future. For Yi Hein's vision of revolutionising the healthcare industry, having this understanding is extremely important. You realise that in the healthcare industry, all the non-human aspects will be able to be replaced by the machine. Therefore, all you need is a worker with great qualities essential to the human condition, and an AI which does the technical medical work. Teach the social worker how to operate the AI machine, and you would have a fully functional medical system without any doctors.
You would be able to cut costs and drastically decrease the price of healthcare. Therefore, at this point in time, look for things that cannot be easily replaced by AI, learn those things and keep them close to you. It is also important to look at Azmi's levels-of-understanding chart; it is likely that such a chart could be used to model the progression of AI, with the highest levels of understanding and mastery being the slowest to be replaced by AI.
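The "symptoms in, diagnosis out" idea can be sketched in Python as a toy symptom-matching scorer. This is purely illustrative and not a real medical system: the condition names and symptom sets are invented, and a real diagnostic AI would use trained statistical models over large datasets, not a hand-written lookup.

```python
# Toy diagnostic scorer: rank hypothetical conditions by symptom overlap.
# Condition names and symptom lists are made up for illustration only.
KNOWN = {
    "condition_a": {"fever", "cough", "fatigue"},
    "condition_b": {"rash", "fatigue"},
}

def diagnose(symptoms):
    """Return condition names sorted by how many reported symptoms match."""
    reported = set(symptoms)
    scores = {name: len(reported & signs) for name, signs in KNOWN.items()}
    return sorted(scores, key=scores.get, reverse=True)

print(diagnose(["fever", "cough"]))  # condition_a ranks first (2 matches vs 0)
```

The point of the sketch is only that the doctor's "symptoms to diagnosis" mapping is, at bottom, a computable function.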

Going back to the point about AI causing massive income inequality, this is a major issue that needs to be solved. When the time comes that there are almost unmeasurable levels of unemployment due to AI-driven technological advancement, we will realise that we have formed a dystopian society. Chaos would ensue, and this could very possibly be the collapse of the human race. With the collapse of the human race, ironically, it would be up to the AI to revive it, via stored human gametes and genes, replicating the human genome stored online into physical, tangible form and using it to generate humans from nothing. So if at that point the AI is advanced enough to think critically and carry out such actions, then humans can be revived. However, if the collapse happens before AI is fully developed, then it is likely that AI too would still be dependent on humans, and the fall of humans would mean the fall of AI. Regarding this dystopian society, we can expect that there would be massive genocides, with the rich holding most of the power. It is likely that the rich would hold the power over AI and would use it to suppress the resentment expressed by the larger proportion of the human race who are starving and dying.

Talking about the rich holding the most power and the access to AI, we reach a new problem, which is whether the complete development of AI will actually happen. The following paragraph serves as the antithesis to the point discussed much earlier, which utilised the assumption of the inevitability of the full development of AI. In a sense, humans have the power to stop and halt the development of AI while it is young and still dependent on humans. At this point, humans are able to pull the plug and stop AI development, and there is indeed a real incentive for this to happen. You start to realise that there is a fine balance between when AI is useful to humans and when AI has the power to destroy the human race entirely. The point where AI could destroy the human race entirely would be the point where AI has become so similar to humans that it is able to replicate the essential human condition. When such a time comes, it is natural for even the proponents of AI to start fearing it, and from the egoistic perspective of self-interest and self-survivability, it would be natural for the world to pull the plug on AI to ensure human survival, because no one would want to do something that leads to their own death, right?

However, there is a counter-counter point to this. The key to it is the irrationality of humans, the ignorance of humans and the potential for immense ambition that humans possess. The ceiling discussed in the earlier paragraphs limits the potential of AI to that of human capability (slightly below human capability, to be more specific). But it would be natural for one to consider: what if AI extends beyond that? What if AI is able to surpass humans? What heights would it be able to attain? What wonderful feats would it be able to achieve? Such questions draw amazement, excitement and curiosity about the future and the potential of AI, and this naturally develops into a form of ambition. We would come to a point where the dangers of AI are obvious. However, due to ignorance, human greed and immense ambition, there would be some who would still be willing to take that step and continue developing AI to allow it to surpass human capabilities. Just imagine the potential this has. From the egoistic point of view, which is one of self-interest, when you have such immense power from an AI which surpasses human capabilities, one can only imagine the immense self-gain one could attain by harnessing this power. However, this is where the ignorance comes in. Humans would be so blinded by such dreams of self-gain that they would be blind to the dangers of AI. This is where the critical and major flaw comes in, and this flaw lies in one assumption: that they would be able to harness the power of AI. To be honest, it is very unlikely this is going to happen; once AI reaches beyond human capability, it will be able to develop itself at an exponential pace, and at that point it would be just a fire spreading uncontrollably.
It would no longer be reliant on humans; there would no longer be a plug which humans can pull, and AI would basically have advanced to a stage where it is an entirely new species, able to build and reproduce itself with the resources in nature. And with this exponential growth and spread of AI, a major war may break out, eventually leading to the extinction of human beings.

Now that we have reached the end-game of AI, which may or may not be complete (this is just the future I am able to envision at this point in time), I would like to go back to a few points that I missed out earlier. Firstly, the methods to face and tackle the AI revolution. Previously, I mentioned that the only thing humans can do is accept the inevitability of AI and develop the skills which would be taken over by AI the slowest. However, there is a second method. As the saying goes, if you can't beat it, join it. What we mean by this is to develop AI development skills, to be the person who is the catalyst of AI. If a person can be an active part of the AI development process, that person puts himself in a very good position. Take this analogy: contrast a person who moves to higher ground in order to avoid being swallowed by the tsunami with a person who rides on top of the tsunami, constantly providing kinetic energy to it. What I mentioned before about AI surpassing human capability and the destruction of the entire human race applies here too. In this sense, the tsunami can become so powerful that the person riding it is no longer able to provide any sufficient kinetic energy, not even enough to stay on top of the tsunami (or balance himself on the violent wave). At this point, the person would fall and be similarly engulfed. But the idea is that if you are the person standing on top of the tsunami, and the person feeding it in its beginning stages, you would come to know the tsunami very well: its particular perks, its specific tricks and its specific weaknesses. You don't get such knowledge by hiding on high ground.
Additionally, by being on top of the tsunami, you would be able to 'pull the plug', or do something that causes the entire tsunami to come crashing down. You can do neither of these by hiding on higher ground. Therefore, it is possible that riding on top of the tsunami allows one to survive longer and not be replaced as early as the people hiding on high ground. It is possible that the people riding the tsunami would be the last of the human race to survive. To simplify, this is the approach to take if one wants to maximise the time before being replaced by AI. However, this approach comes with two major flaws. One stems from the people on high ground. The main reason they retreated to high ground in the first place is that they see the tsunami as an immediate danger. So what would they think of someone who is literally riding and feeding the tsunami with more kinetic energy? This poses a huge risk for the AI developers, with risks of assassination and immense amounts of resentment against them. So it is important for the person riding the tsunami to portray himself as the good guy, as the person trying to stop the tsunami instead of feeding it; to portray himself as the hero and saviour of the village, defeating the great enemy called the tsunami to save the day. Whether the person on the tsunami actually does such things is another story; however, to ensure one's survival it is paramount to portray such an image. In real life, we can see such an image being portrayed by Elon Musk, with the development of OpenAI. However, he has recently partially detached himself from OpenAI due to a potential future conflict of interest arising from his involvement in Tesla. Here we see him play a very dangerous game, where he is open with his intentions, but when the time comes, will he be seen as the villain riding the wave, utilising AI in Tesla for his self-interest? Who knows?
But in my opinion, he is playing a dangerous game and is not putting himself in a good position. The second danger of going into AI would be the danger posed by AI itself. As the saying goes, don't play with fire; so if you want to fight fire with fire, you should prepare to get hurt. Going back to the analogy, to join forces with AI means to ride the wave of AI. But to be honest, how long can you ride the wave for? At the start it is okay, because the wave is small. However, once the wave starts getting bigger, you would be lifted off the ground; some would lose their balance, some would fall off from fright. The main idea is that dealing with something as potentially dangerous as AI can lead to direct harm to oneself, even if AI has not developed to the stage where it is able to end humanity. It is like Ebola research: Ebola has not reached the capability to kill humanity, but you could die first, because you are the researcher in close proximity to it. The same concept applies to AI; a failed AI experiment, the creation of a killing machine, could easily lead to one's death. Therefore, the assumption that riding the wave allows one to last the longest does not fully hold. There is chance, there is luck, and these are factors which determine the duration before you are swallowed by AI. But I dare say that I am confident that riding AI provides the greatest potential for surviving longer, though whether that potential is achieved is another story.

The second point I would like to raise, which I have not covered previously, is the absurdity of human life, the lack of any idea of vitality in human life. To add on, I would like to give the additional example of the link between genes and computer code. If you think about it, the central dogma of biology, the idea that everything in life depends on the genetic code, that the synthesis of everything you are is due to the genetic code, makes the replaceability of humans by AI all the more definite. The process of transcription can be likened to importing a module into a program and then using that module for a specific function of the code. A 'while' loop can be seen as a form of homeostatic control in humans. What I have mentioned is one example of micro and one example of macro replacement of human function. The micro level is that the genetic code is so lifeless, so absurd, so arbitrary, and likewise is computer code. So, due to this lifelessness and due to the absence of any sense of vitality in human beings, it seems entirely possible for humans and animals to be totally replaced by lines of code. The macro level shows how different lines of code can potentially work together to create a system. If we are able to model every human function with lines of code, then we can be sure that humans can be entirely replaced by AI. This is different from what I discussed earlier regarding the rapid development of AI and its surpassing of human beings. Earlier, when I mentioned AI surpassing human beings, it implied that this would be done in the way of the AI, where AI has its own mechanisms which are simply better than the human mechanisms. This case is different: it is a total and direct replacement of humans, and if successful, it would mean that such an AI system is able to completely and entirely replicate the human condition.
If so, would you really consider humans as life anymore? Is it really necessary to differentiate between living and non-living things anymore? Perhaps such a classification arose in the first place from humans' perception of themselves as being alive, drawing a relationship between this state and other organisms, allowing humans to conclude that they are in a state of being alive. But if you think about it, a dead person is maybe just a broken line of code, that is all. However, I am getting ahead of myself here. There is one major assumption that I have made, which is that such replication of human mechanisms in the form of code is possible. For this to happen, one first needs to gain a total and complete understanding of human biology, and hence biology research could very well be the limiting factor and the rate-determining step preventing AI from completely replicating human beings. And we all know how long research takes, though perhaps with the help of AI it will be faster. So perhaps with the development of AI we will see a rapid development of all three natural sciences, since they are a limiting factor, and the momentum carried by the development of AI would serve as a driving force for the development of the natural sciences.
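The while-loop analogy for homeostasis can be sketched as follows; the set point and step sizes are arbitrary illustration values, not physiological data.

```python
# Homeostatic control as a loop: nudge a variable back toward a set point,
# the way the body regulates temperature around roughly 37 degrees C.
SET_POINT = 37.0
temperature = 40.0  # start out of balance, e.g. a fever

while abs(temperature - SET_POINT) > 0.1:  # "while out of balance, correct"
    if temperature > SET_POINT:
        temperature -= 0.5   # e.g. sweating cools the body
    else:
        temperature += 0.5   # e.g. shivering warms it

print(round(temperature, 1))  # 37.0, back at the set point
```

The loop is lifeless and arbitrary in exactly the sense described above, yet it performs the same kind of regulation a body does.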

The next point is very similar to the one I touched on above, regarding the absurdity of human life and the ease of replacing it. However, the following paragraph presents it in an anecdotal manner, with a real-life experience of Yi Hein. I became incredibly aware of the similarities between the process of learning in humans and in AI (from machine learning), and this realisation was what sparked this over-EE-length essay on AI and the fate of the human race. So now, to provide the anecdotal example: while I was in the library (the central library in Bugis), I repeatedly could not remember which corner of the library leads to the place where I eat my lunch and dinner. It was only after three attempts, and a conscious attempt to understand and remember, that I managed to concretely learn where to go. This immediately reminded me of the process of machine learning, where there are multiple generations of bots which go through a game (I am currently visualising the use of machine learning to solve one of those cursor games, like the hardest game {or whatever it is called}). In those games, there are hundreds of bots and hundreds of generations, trying the same thing over and over again in order to achieve success. This experience is very similar to the human experience; the only difference is that the human requires far fewer attempts than the computer to learn something, mainly because the human brain only controls one body (flashback to Naruto's Kage Bunshin Rasengan training). But this clearly demonstrates the current inefficiency of AI, and perhaps one of the reasons for this inefficiency is that AI has not advanced to the stage where it is conscious and aware of what it needs to learn. Currently, all that machine learning does is go through massive amounts of data and trial and error to develop a skill; there is no filter, no sense of targeted learning.
This is similar to the student who studies very hard for an exam but studies the wrong thing and hence does not do well. So what happens if we arrive at a situation where there is high demand for data and little supply of it? Then we would have to tap into the potential of machine learning to move away from big data and instead focus on targeted learning. Yi Hein, this can be a potential point that allows you to differentiate yourself from the other people working on machine learning, who currently all use big data. If you are able to achieve the same or even greater results with a small amount of data, it would be revolutionary, because you could then simply scale it up and do much more than what others are doing. This can also be important where there is a shortage of machine-learning computer processors (or whatever they are called in the future). Therefore, there is still very much potential to develop in the AI field, and much inefficiency is still present.
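The many-bots-over-many-generations picture above can be sketched as a minimal evolutionary search in Python. Everything here is a toy choice for illustration (the secret target, the population size, the mutation range); real game-playing bots use far richer representations, but the loop structure is the same: try, keep the best, mutate, repeat.

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Toy "generations of bots": each bot is a guess at a secret number.
# Each generation keeps the best guess so far and mutates it slightly.
TARGET = 42

def fitness(guess):
    return -abs(guess - TARGET)  # closer to the target is better

best = random.randint(0, 100)    # generation zero: a random bot
for generation in range(200):
    # A small population of mutated copies of the current best bot.
    population = [best + random.randint(-3, 3) for _ in range(10)]
    best = max(population + [best], key=fitness)  # never lose ground

print(best)  # should land at or very near 42
```

Note how wasteful this is compared with the three library visits in the anecdote: thousands of blind attempts with no filter and no targeted learning, which is exactly the inefficiency described above.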

Going deeper into the idea of human learning and machine learning, one must make the distinction between learning and understanding. People always joke that if they could just download information into their brains, they would not need to study anymore. However, this is not true. Having 200MB of data stored in a hard drive would not allow you to complete the exam paper, because you would still require data processing, and this can really only occur with understanding. This goes back to school, where teachers keep emphasising that it is just as important to understand as to memorise, because it is only through understanding that one is able to apply the knowledge. Therefore, a lot of computer storage is on the lowest level, which is memorisation. It has not reached the level of understanding, and therefore it is unable to apply the knowledge at all. For the computer to move from remembering to understanding takes a huge amount of research and development of AI technology. This makes me reconsider whether I have overestimated the pace of development of AI; perhaps it won't be as fast as I thought. Sure, it will be pretty damn fast, but I may have overestimated it. Thinking deeper about it, I recall the chart Azmi sent which outlines the levels of mastery of content: the lowest level is memorising, the second lowest is understanding, and on top of the bottom two levels there are still five more levels above. In order to do cutting-edge research, one has to reach the top level of understanding of a particular topic. Seeing how difficult it is for machine learning to progress from the first level to the second, it makes me think that AI will take longer than I thought to replace humans.
This links back to a point I made long ago in this essay, where I mentioned that among the things very difficult for AI to replace are creativity and entrepreneurship; such concepts can only really be developed when one has the highest level of mastery. GET THE CHART FROM AZMI!!
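The remembering/understanding distinction can be illustrated in Python: a lookup table "remembers" only the exact cases it has stored, while even a crude extracted rule "understands" enough to answer cases it has never seen. The question-answer pairs below are a made-up linear pattern, chosen only for illustration.

```python
# "Remembering": a lookup table of seen question-answer pairs.
memorized = {1: 3, 2: 5, 3: 7}           # pairs following y = 2x + 1

def recall(x):
    return memorized.get(x)               # fails on anything unseen

# "Understanding": extract the underlying rule from the same data.
xs, ys = list(memorized), list(memorized.values())
slope = (ys[-1] - ys[0]) / (xs[-1] - xs[0])   # (7 - 3) / (3 - 1) = 2.0
intercept = ys[0] - slope * xs[0]             # 3 - 2 * 1 = 1.0

def apply_rule(x):
    return slope * x + intercept

print(recall(10))       # None: memory alone cannot answer new questions
print(apply_rule(10))   # 21.0: the extracted rule generalizes
```

Both start from identical stored data; the difference is entirely in whether a rule has been extracted, which is the gap between the bottom two levels of the mastery chart.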

Currently, just use this,

The last thing I missed out would be the machine-to-human ratio over the course of AI development. This goes back to the part where I mentioned that a small number of people would be able to gain control of AI and harness a large amount of power, while a large number of people would be poor and suffering. A key idea to recognise in this situation is that the large number of people would be replaced by a large number of robots and AI. When the large number of people start dying off because they are no longer relevant, what remains is a large number of AI robots and a small number of people. This ratio would tip strongly against humans, and if a war or any form of conflict breaks out, it is obvious who would win.

So now that I have covered what I missed previously, I would like to talk about my own future plans and what I should do in the future with recognition of what I have set out in the past 5000 words. Firstly, develop 'difficult to replace' skills, such as having a heart and having compassion; this provides the ability to run to high ground. Secondly, develop AI development skills, which can be done by coding and creating AI programmes as well as by enrolling in computer science university courses. Thirdly, develop technical skills, which can include the practice of medicine. Even though it is easily replaceable, I must develop this technical skill first to help me with the development of the AI that would eventually replace it, and secondly as a stop-gap measure to account for my possible over-estimation of the rate of development of AI. This would allow me to survive and remain relevant in the pre-AI-dominant era, allowing me to properly prep myself for the AI-dominant era.