If you are currently working passionately on an IT project, we probably share this love for the technical side: a preference for a programming language, thinking about optimized alternatives to solve a problem, implementing logic, scripting jobs, dealing with performance issues, hacking a library to use it in unexpected ways, etc.
Most of us are so busy with, and in love with, these evolving technologies that we don't pause to think about where to draw the line between building a world of machines and building a world of humans working with machines.
Probably most of us think the latter will happen automatically somehow; that was me two weeks ago.
That’s the point that I’ll briefly talk about in this post.
A couple of weeks ago, prompted by some rumors on the web, I was curious about how AI advances in China compare to those in the US, so with a little bit of ducking I found a forum that pointed me to the book "AI Superpowers" by Kai-Fu Lee.
Besides finding the information I was looking for (in more detail than I expected), I encountered new perspectives on how humans will coexist with AI machines. To my surprise, it turns out to be a highly non-trivial problem, and I don't think it's a common topic of conversation in the IT community.
However, I realized that this globally important topic becomes much clearer when approached from the perspective of Kai-Fu Lee's personal circumstances.
Kai-Fu has a long trajectory in the AI industry, accumulating experience at Apple, Microsoft, and Google, and he is currently an AI researcher and venture-capital investor in China (investing in Chinese AI startups).
A turning point in his personal life (related to cancer) led him to examine several social aspects of current AI research and development (R&D) in industry and academia.
I'd say there are four main topics in Kai-Fu Lee's book.
1) He describes the differences (not necessarily advantages) in the advances and strategies of AI R&D between China and the US.
2) He talks about the kinds of jobs that will be wiped from the labor market as AI applications are deployed to replace human labor.
He also discusses how AI can support other kinds of jobs in an assistive role. Intertwined with this is the generation of new, more "human" jobs, and the emergence of a more human society coexisting with machines.
3) Is AI just another industrial revolution like electricity or the steam engine? No, it isn't. It's a deeper one, and we should face it as such, because it demands more adaptation from us.
4) Are we close to reaching the point of not distinguishing an AI machine from a human? What are the timeline predictions here? Kai-Fu argues that this is very tied to the speed at which AI breakthroughs are found in basic research.
He highlights that the "AI rush" we are seeing right now is mainly driven by emerging applications of known AI fundamentals.
Whether we reach Artificial General Intelligence depends on solving several open problems in basic AI research.
In what follows I focus only on point 2).
I'll probably develop one of the other points in another post.
More human jobs
As we enter an "AI age" (where applications are emerging at an accelerating pace, performing many [specific] tasks better than humans), algorithms and machines will progressively replace human jobs, and we need to rethink the role of people in society.
Clearly, we need a global agenda on this, since international collaboration is key for human adaptation through training and education.
I think this is a pressing matter right now (Jan/2021), since AI developments progress at a rate that most humans can't keep up with.
By this, I mean that we need to plan in advance what new job roles people will adopt in all areas: art, information technology, social fields, etc.
All of them must move toward a new culture that values human love, service, and compassion more than ever before, while AI applications handle the routine tasks.
I think there's a dystopian scenario ahead if we don't plan for a more human society, since AI developments will progressively (and maybe autonomously?) make their way into society, and people will only pay attention when it's already too late to adapt.
The massive production gains that will come from task automations/optimizations will generate a positive-feedback loop due to increasing amounts of data collection.
This can create a tendency for AI-driven industries toward monopoly, which will in turn increase economic inequality.
For instance, poor countries will lose the opportunity to kick-start economic growth through low-cost exports (the route that lifted South Korea, China, and Singapore); once automation closes that route, these countries may have no way to begin a development process.
Kai-Fu uses a diagram (see the book) to illustrate how different kinds of jobs can be categorized in the context of AI-human coexistence.
He shows a "danger zone" of repetitive jobs with high potential to be replaced by AI, such as: radiologist, basic translator, telemarketer, etc.
On the other hand, there is a "safe zone," where AI is least likely to replace jobs, with roles like: psychiatrist, social worker, CEO, etc.; all of these involve more planning and analysis.
There is also another zone where AI tools are used by humans as an instrument to boost an analysis, a study, or even to generate art (paintings, music, etc.).
I think it's important to focus on the alternative paths toward new kinds of work.
Some of them, mentioned in the book, are: activities in the realm of human expression and artistry, caring for elders and intergenerational households, etc.
The latter is addressed because currently many people have no income while taking care of their loved ones.
Also, it would be a nice step to adopt something like Bhutan's index of "Gross National Happiness" (as a key development indicator) to assess our global human society.
This can help us move away from a mindset that treats humans as variables in a grand productivity optimization algorithm.
Let's not be machines. Let's be humans by embracing what makes us different.