How Humans Can Best Use AI and AGI

Christopher Ebbe, Ph.D.   2-26

There is much confusion at present about how uses of artificial intelligence (AI) and artificial general intelligence (AGI) may affect our lives.  Artificial intelligence may do fact-retrieval and selective regurgitation of what humans have written, as well as identify the most efficient ways to approach complex tasks to accomplish a given outcome.  Some of these applications will be very helpful, particularly in medical research, astronomy, manufacturing, computer programming, and war planning, although humans may also react to them with unwarranted and uncritical belief and acceptance, which could result in more wars, for example!  The keys to their positive use are making sure (1) that you are asking the right question (and limiting in your mind the answers you get to exactly and only the question you asked) and (2) that you check the machine’s answer against your own emotions and understanding of the situation (and do not assume that the machine will always be “righter” than a human).

Artificial general intelligence aspires to create an independent, self-checking, self-learning entity that will “think” somewhat like humans do.  We can distinguish two strains of this effort: creating something that will be as close to human functioning as possible, as contrasted with creating an entity that “thinks” somewhat like humans but is a thinking machine only, without any attempt to include factors like emotion, intent, ethics, and emotional intelligence.

AGI could be useful for very complex situations, but since we will not understand its methods of thinking fully, we must always evaluate its products with respect to human needs and sensibilities, since only humans can apply the emotional, intentional, ethical/moral, and social factors to a decision that the AGI entity will not consider.  (Note that AI and AGI entities can never be allowed to run an important human system (the power grid, water systems, offensive drones, defense/war) without human direction and oversight.)

The first of these (an AGI that is as human-like as possible) may seem somewhat like a human but will never be an exact reproduction, since an adult human is the product of millions (billions?) of experiences that shape our expectations/predictions and our emotions.  Without experiencing the same things as a human does while growing up (the difference between light and dark, knee pain, sibling conflict, feeling ignored by a parent, being scared of peers, having an insight, completing a planned project, risking being assertive with peers, being excluded for being assertive with peers, etc.), a machine can never develop as a human does.  We could attempt to code in hundreds (even thousands) of these experiences, but any such attempt at specification could only produce one particular human-like entity (like one unique person), and this could not be taken as a substitute for all humans, since every human’s development contains different combinations and particularities of these experiences.

In addition to these definitional issues, humans have several characteristics that may make the “progress” that is possible from AI and AGI (more goods, smarter decisions) irrelevant.  One of these is our human preoccupation with dealing with discomforts.  We are constantly aware of discomforts, such as an itch, thirst, fear, anxiety, the pain of failure, depression, the need to get a task done for the boss by 5 PM, etc.  We react with efforts to remove the discomfort.  These efforts make up such a large part of our days that we don’t really know what humans would be like without this cycle of discomfort and efforts to eliminate it.

We imagine that we would like an existence without any of these annoying discomforts, but it may well be that in our present state of evolution, we would be utterly bored or “go out of our minds” without those discomforts and that cycle of discomfort followed by relief.  Imagining heaven as existence without the need for focus and effort seems foolish, since we could never be comfortable with such an existence unless we were altered to make us into something different from our current embodiment as humans!  We need challenge and discomforts to motivate us!  We may think that it would be good to have no pain or difficulties, but to have discomfort and respond to it is fundamental to being human.  Be careful what you wish for!

A record of our daily activities would show many errors (and many instances of needing to react to unanticipated situations), which would strongly suggest that we ourselves will never be perfect, and our very nature precludes being happy all the time, since we physically habituate quickly to new states.  We get used to things, and they no longer induce the same feelings as they did originally.  Even happiness would pale fairly soon, so as we are, we can never feel happy all the time.

A second human factor that we generally are not aware of is that there is a physical limit to how much pleasure we feel (how good we can feel).  We imagine that a bigger house would make us feel great, but after a short while, it doesn’t.  We imagine that one more drink or toke or line will make us feel better, but after a certain point of ingestion, it doesn’t.  We imagine that the greater access to “things” and experiences that AI and AGI may provide for us will make us feel good all the time or feel on average significantly better than we now feel, but this is largely an illusion.  More and better “stuff,” world travel, virtual experiences, or a glamour wife or husband will not necessarily “make” us feel better overall.  (We might feel a bit less anxiety overall if AI/AGI provided us with a greater sense of material security or safety, but total security/safety is not possible, given our unreliable and competitive world and given our endless process of seeking re-equilibrium, and we would also have the added worry of when/whether our AI/AGI systems might fail us!)  Artificial relationships might seem to have the promise of pain-free relating, but the pleasures of an artificial relationship are never as satisfying as those of a human-to-human relationship, and I predict that the pleasures of an artificial relationship will pale faster than those of a human-to-human relationship.

Another aspect of this same argument is that our experience of pleasure or relief is maximized only when contrasted with the lessened pleasure or anxiety that we felt before we had the pleasure or relief.  Without this contrast, pleasure/relief quickly loses much of its value to us.

We tend to believe that the ideal life would be a materially satisfying existence without having to work, but this, too, is against our human nature.  Certainly, we like breaks in our work from time to time, but without meaningful work (effort to create or accomplish something) and without a sense of purpose (why we are doing all this), humans seem to fall into boredom and ennui and to do silly or crazy things.   Look at the number of children of rich parents who don’t have to work but end up with unsatisfying and messed up lives.

We see in the above some of our human refusal to accept being human (being ourselves).  A certain amount of pain is inevitable given our nature and our environment, but rather than accept this, we crusade for better pain medications.  (1) Because our adaptation here on this planet is not perfect (although it is quite good), our needs and desires will never be fully met, and (2) because we each have somewhat different sets of needs and desires, our needs and desires will conflict at times with those of others, calling for us to communicate and negotiate to resolve the conflicts, which will sometimes end with us not getting our way.  Thus, frustration is inevitable, and it could be to our advantage to accept this (while still trying to improve what we can) and learn not to be as bothered by the frustration or by the work needed to negotiate and arrive at the best possible compromises.

Human societies need morals and laws to guide and control our behavior.  These laws and morals are constructed by us with a mind to how things feel to us (how it feels to be harmed, how it feels to have someone kill someone we love, etc.).  An AI/AGI machine without feelings will not have much reason to understand this or ensure that its recommendations adhere to our sense of morals and laws.

We humans do best in life if we have a sense of meaning and purpose to our lives.  This must be found (or at least accepted) by each person.  AI/AGI can tell us what others have found meaningful and purposeful, but it cannot decide for us what to adopt as meaning and purpose for us individually.  For that we need time (and willingness) to reflect, to see the truth, and to explore what it is for us to be human—something that does not fit well into our consumerist societal orientation.  (See “A Meaningful Life in the Age of AI” in the Templeton Foundation writings and podcasts.)

The more we use AI/AGI, the less we will be able to do ourselves without them.  Just as many kids now cannot do simple math without their calculators, humans who ask AI/AGI to think for them will become less able to think than if they continued to think for themselves.  Use it or lose it!  Those using AI/AGI quite a bit will become quite dependent on them and vulnerable to any price increases the providers of AI/AGI impose.  Also, children who grow up using screens a lot may as adults be deficient in their ability to imagine, a condition that will make their ability to make decisions about the future even worse than it is now.

The more AI/AGI are used in businesses, the fewer employees they will need, so society should plan now for dealing with a significant increase in unemployment.  Our current notion of unemployment insurance will need extension and expansion, since it is not clear how many jobs will be created to service the uses of robots and AI/AGI.  In the extreme, we may become a society which permanently has an insufficient number of jobs to go around, with a need, therefore, to “pay” indefinitely some people who are not working.  For the jobs that are created to service AI/AGI and our new robots, it would be a step forward to design those jobs to feel meaningful and useful to those who work in them.  I do not believe that we will create robots to do all of the physical and clerical work that we need done, so our increases in efficiency from greatly specializing jobs and from using large groups working together to do a job (the assembly-line concept) have led and will probably continue to lead to many jobs that by themselves, in isolation, do not seem meaningful and are therefore “bullshit jobs.”

We humans are very willing to distort reality in order to make ourselves feel better.  We make up negative stories about foreigners when we know nothing about them.  We make up stories about heaven when we have no evidence for what we imagine.  AI/AGI could serve to point out some of our distortions, but I wonder whether we would pay attention to what we are doing or simply deny and repress it as we usually do.

In response to the many frustrations that others seem to cause for us, many people come to believe that if they could control or dominate others, their lives would be fine.  In this pursuit, they learn to bully, attack, demean, lie to, criticize, manipulate, extort, steal from, and even murder others.  I see no reason why those people will not use AI/AGI to help them harm others and will not tell their AI/AGI that such behavior is normal or desirable.  All human users of AI/AGI will still be human!

CONCLUSIONS

Most humans view AI/AGI as promising more goods and opportunities that will lead to greater comfort and leisure, but our very human nature itself militates against this assumption, since we are not adapted to exist without discomfort and effort to reduce discomfort.  We are not adapted to living without pain.  We are not adapted to experiencing more pleasure than many of us do already.  Our imperfections will not be changed by AI/AGI, which suggests that we will always be dealing with discomforts of our own creation.  We need purpose and meaning, and a life without discomfort or effort can never give this to us.  So, using AI/AGI will not give us the perfect life that so many seem to hope for.

If we want a life of calmness, peace, satisfaction, and contentment, we can never achieve it through working harder or trying harder!  In our society, we are not nurtured to be satisfied but rather to be always striving.  This striving has produced some good things (greater physical comfort, greater financial security for many), but it leaves us largely unsatisfied and always wanting more (and believing that having more will make us satisfied).  We could adopt an attitude of valuing being satisfied, but this would require giving up our belief that gaining more will make us happier and more satisfied.

AI/AGI will not make society better simply by providing more “stuff,” unless it is somehow distributed, first, largely to those who at present really do not have enough.  The majority of us do have “enough,” even if we don’t have everything we could possibly want, and if we cannot be more satisfied, then the access to more “stuff” will be an endless pursuit (and not give us peace or rest) since the highs of acquisition and achievement are very short-lived. 

The non-human nature of AI and AGI means that neither can be totally trusted to do things in a manner that will be in the best interest of humans, so they cannot take over decision-making processes completely.

To deal most successfully with AI/AGI in our society, we humans need to know ourselves better, since in general we do not know ourselves well at all at present.  We must see ourselves realistically and take responsibility for which AI/AGI impacts on our lives we will accept.  We can know ourselves far better, even in our current state, than our machines will ever know us.
