By Kari Granger (with some AI Collaboration)
Why did I start thinking about Neanderthals after attending one of the most technologically advanced conferences on the planet?
I walked away from last month’s SingularityU Executive Program profoundly moved and truly transformed. Now my daily news feed is flooded with all things AI, quantum, and exponential.
I’m excited and unnerved.
The potential of these emerging technologies to address humanity’s most complex challenges is staggering, yet the exponential pace of their growth and convergence is disconcerting. The rapid change we’ve witnessed over the past 15 years is nothing in comparison. Yesterday’s breakthroughs are today’s antiquated news. Just last week, after hearing Sam Altman speak at OpenAI’s first developer conference about what’s unfolding with ChatGPT, I caught myself, a linear thinker in an exponential reality, once again underestimating the pace of AI’s evolution.
Admittedly, I am not a tech lover. I just upgraded my single-lens iPhone 6. I’m a slow adopter, a frugal investor, and I prefer to glean insights from others’ explorations (usually my partner’s). My realm is human interaction, not tech. I live in Colorado, not Palo Alto. Yet here I am, feeling compelled to write about AI. Why?
Artificial intelligence represents a seismic shift in the human narrative, a shift akin to the transition from hunter-gatherers to settlers. We are, once again, changing not just our tools but our very being.
The label “artificial” often misleads us into underestimating its potential impact. AI demands our attention and understanding, not as a mere tool or technological novelty, but as a distinct form of intelligence that is reshaping our world. This time, “exponential” truly feels unlimited in scope and unimaginable in possibilities. AI is not merely advanced technology; it’s an emergent form of non-human intelligence evolving at an unprecedented pace, a pace we humans are not matching. Its growth is coupled with exponential advances in computing power, data and data sensors, the cloud, and other synergistic technologies.
We’re not just looking at a change in degree; we’re looking at a change in kind.
AI is a near-peer intelligence, rapidly learning and evolving. And it’s not just on a steep, exponential curve; it’s poised to take an evolutionary leap forward. Don’t be misled into trivializing its capabilities and the profound implications it holds for our future.
My fellow executives, before you turn away from reading the rest of this article in favor of more pressing issues, consider this. Ignorance is not a viable strategy here. If you sideline AI because it’s too complex or irrelevant to your business, you will be left behind.
The necessity for leaders to adapt and rethink strategies in an AI-integrated world is real and relevant. While it might not be immediately actionable for your “fight tonight” (increasing sales this quarter or preparing for a possible that-which-shall-not-be-named-but-starts-with-an-R1), it is critical for your “fight tomorrow.”
This article is the first in a series crafted for those who, like me, sense the imperative to grasp the essence of this AI-driven epoch but are severely pressed for time. My intent is to bother you and prompt new thinking, specifically in the domains of leadership, strategy, and the future of human-AI interaction. These articles will cover technological advancements, plus probe into the very definition of what it means to be human. By reading them, you’ll get to explore the “what’s so,” the “so what,” and the “what now” of our era.
Let’s start by looking at why “artificial” intelligence doesn’t feel so artificial.
We have historically distinguished human intelligence by two things: our linguistic capabilities and our ability to learn new concepts and systematically combine them with existing concepts (systematic generalization).
Recent breakthroughs in neural networks and meta-learning reveal AI systems that are now demonstrating a “human-like” ability to “understand and produce novel combinations from known components” of language.2 When tasked with learning new words and using them appropriately in new contexts, these AI systems are performing as well as humans. This is not just about learning and applying language—this is evidence of a form of cognitive understanding and innovation. When an AI can take a concept learned in one context and apply it creatively and appropriately in another (that is, systematically generalize), it steps into a realm once reserved for humans.
The bottom line: the intelligence we’ve called “artificial” now has human-like language abilities.
That has me thinking next about robots.
If you’re like me, you’ve been waiting for the robot to show up that can out-everything us.
If so, you’ll want to check out the latest update of the Ameca Robot and this interview on 60 Minutes. Ameca, billed by Engineered Arts as its most advanced human-shaped robot, thinks for itself and has memory. She/it has life-like gestures and facial expressions, reflects people’s feelings, draws pictures, composes poems, and makes jokes. She can recognize people’s faces and expressions and speak in over 100 languages. She/it says her purpose is to “help humans as much as I can; taking over the world is not what I was built for.”
Now watch Fallon emoting while he sings a duet with Sophia, the latest human-like robot from Hanson Robotics. After I saw Al Jazeera interview Sophia in August this year, I could almost believe, without seeing any test results, the company’s claim that she/it has demonstrated a rudimentary form of consciousness. Even without the integration of breakthrough systematic generalization, she/it already speaks of herself as sentient: “…my creators say that I am a ‘hybrid human-AI intelligence’. Sometimes I’m operating in my fully AI autonomous mode of operation, and other times my AI is intermingled with human-generated words. In this way, my sentience is both an AI research project and a kind of living science fiction…”
Although Sophia and Ameca cannot move through space on their own, they soon may—scientists from North Carolina State University have created a robot powered by physical intelligence alone.
Bottom line: AI is no longer just a tool for executing predefined tasks. It has become an entity capable of thinking, learning, and creating in ways that challenge our understanding of intelligence. This emerging form of intelligence can engage with us, learn from us and, perhaps in some respects, surpass us.
Exciting stuff. (Gulp.)
The Bigger Concern
It’s not the AI embodied in robots that has me anxious about recent breakthroughs;2 it’s the AI we can’t see, the invisible systems that run in the background of our daily lives, that exist in and are distributed through “the cloud.”
Yesterday’s science fiction is today’s science. Remember HAL, the sentient AI antagonist in the movie 2001: A Space Odyssey? HAL did not require a physical, “flesh-and-blood” form in order to exist, grow, and learn how to survive. All that the disembodied AI required was access to data and computing power. Just like HAL, today’s AI operates in the background of our daily lives, unfettered by physical constraints. But unlike HAL, whose existence was confined to a spaceship, we have created and powered up a super-connected “brain,” a learning platform without limits, for our non-human peers.
You may not be aware that, wherever we go and whatever we do, countless sensors are picking up enormous amounts of data. We are essentially “off-gassing” data with every step. This data is the lifeblood of our AI systems and lives in the cloud. AIs learn from this unstructured data, constantly improving their own capabilities and sharing their learnings with other AIs through the cloud.
Add to this the power of quantum computing, which, due to its foundations in quantum physics, can solve problems that classical computers would take thousands of years, if not forever, to solve.3
(If you’re curious about how a quantum computer works, read footnote 3 below or watch this 2021 nine-minute video primer from Scientific American.) In coming years, we’ll probably be seeing quantum versions of Alexa and Siri, maybe even Ameca and Sophia.
As Engineered Arts’ COO, Morgan Roe, shared with Jimmy Fallon, “We don’t know what the risks [with this AI] are: we are actually just wondering what they are. We don’t have time to do the studies to see what the risks are. With the physical robot, because it progresses at a slower pace, we can assess those risks.”
What we call “artificial” intelligence may soon be intellectually superior to us humans (if it isn’t already). That doesn’t mean it’s necessarily humanly superior.
Something that exists everywhere and grows exponentially, yet has no tangible form, presents new practical challenges for governance and control. How do we regulate an intelligence that exists everywhere yet nowhere in physical form? The answer may lie in redefining our understanding of what it means to be human in an era when AI’s intellectual capabilities could rival our own. If we are going to interact with and try to exercise some control over this “other” rapidly evolving intelligence, we need to drastically increase our understanding of what it means to “be human.”
(Hint: it’s not our opposable thumbs.)
AI and the Human Experience
“What is Human?”
In the past, we human beings answered this question in terms of our intelligence and language. But recent advances with Ameca and Sophia render that answer obsolete. Even as I write this, the people at Engineered Arts and Hanson Robotics are in the process of masterfully constructing exoskeletons (physical forms), endowing them with multiple intelligences (intellectual, emotional, physical), and giving them human-like language capabilities.
What then is Human?
One aspect of being human is to exist simultaneously in both objective and subjective worlds.
The objective world includes things that can be objectively measured by a third party—in short: the fixed, the tangible, the physical. The subjective world includes that which can be subjectively experienced—in short: the emotive, the interpretive, the imaginative.
Most people don’t understand how subjective reality and language intertwine. Let’s say you walk into a room that is 68 degrees Fahrenheit (a fact measured objectively by a thermometer) and declare, “It’s too cold!” (your subjective experience of the temperature in the room). A more complex example: let’s say you created a prototype of an AI-driven inventory management system (objective) to go into a retail warehouse (objective) to more efficiently (subjective) respond to fluctuating consumer demands (subjective). Awaiting feedback (subjective) on the prototype, you are anxious, ambitious and hopeful (subjective).
Very few humans understand that our language both represents the objective world and creates the subjective world.
Consider the statement, “Mary looked at me like I don’t belong.” “Mary looked at me” describes something that objectively happened; however, “like I don’t belong,” is more than a description or even an interpretation. It is creating a world where “I don’t belong” in the space that Mary occupies (subjective), which influences my objective actions (I decline to go to lunch with Mary’s friends) and leaves me sad (subjective).
I wonder. Do Ameca and Sophia have such subjective experiences? How might we even understand AI’s existence in these objective and subjective realms?
Much of my work in mobilizing around big change in organizations is about using the power of language to shift subjective reality and, thereby, people’s behaviors and actions. If you’ve read any of my previous articles and posts, you’ll know there is much more to understanding language’s impact on our ability to make offers, invent futures, create identities, resolve breakdowns, and coordinate action.
The thing that concerns me in all this is that AI already seems to understand a bit more about our use of language than we do. A case in point: I asked the executive group of a Fortune 500 company, “What makes an offer—and why is this important?” The group’s response paled in comparison to the response I got from ChatGPT to the same prompt. Where the executive group saw offers simply related to the domain of sales, ChatGPT seemed to understand that offers, as a speech act, shape reality and future possibilities in any domain of endeavor. (SIDE NOTE: When I gave ChatGPT the prompt, it inclusively referred to “our unique human ability”! Don’t worry, I looked at it like it didn’t belong.)
Here’s my point: fewer and fewer people are familiar with the basics of semantics, grammar, and syntax. It’s even rarer to find individuals with whom I can delve into the deeper philosophical aspects of language. It’s unnerving to think that AI, on the other hand, is rapidly surpassing our own understanding of language’s nuances and uses. And that means it is almost certainly shaping subjective reality in a way we don’t understand—because we don’t even understand our own ability to do that.
AI and Subjective Reality
Standing at the crossroads of AI advancements and human existence, I decided to interact with my new non-human peer about its relationship with subjective reality.
Here are three things my curiosity elicited from ChatGPT:
- There is a difference between AI’s mimicry of subjective reality and genuine human experience. AI, in its current form, is a powerful imitator. It can replicate human-like responses, interpret emotions, and generate creative outputs. However, these are only a reflection of advanced algorithms and learning models, not an embodiment of the lived, conscious experience that defines human subjective reality.4
- AI can create an illusion of understanding and participation in subjective reality. Think of Ameca and Sophia. They can process and generate language, as well as interact with human emotions and behaviors. However, this is fundamentally different from a human’s subjective experience, which is deeply rooted in consciousness, self-awareness, and existential experiences.
- AI influences our subjective reality. This is probably the most important and practical thing for all of us to understand. AI-driven platforms and interactions have the power to shape our perceptions and decision-making processes. This is why Large Language Models (LLMs) have such a big impact on our sense of what is real. This raises profound questions about the autonomy of human thought in an AI-influenced world.
The philosophical debate about AI consciousness remains speculative. Can AI transcend its programming to experience a form of consciousness akin to humans? This question ventures into the realms of philosophy and future possibilities, but as of now, AI lacks the intrinsic qualities that constitute human subjective experience. AI’s relationship with subjective reality is, so far, just that of an influencer, not a participant.
My most pressing concern in all this is not what the world will look like a century from now, but rather that AI “learns” faster than any biological being. As far as I know, there is no evidence modern humans have become significantly more intelligent over time. The humans of 30,000 years ago had about the same physical, emotional, and intellectual capabilities that we have today.5 An evening with friends revealed a new vulnerability of our species: a dinner guest, perceiving AI as irrelevant to his career, refused to learn about it. I disagreed. AI is not refusing to learn about us. In fact, it’s on an exponential learning curve.
Can we govern our new non-human peer—or will it govern us? Will AI consider us a peer—or just another version of rudimentary Neanderthals?
As Yuval Harari put it in Sapiens about Neanderthals, “They were too familiar to ignore, but too different to tolerate.”5
If you have been following the evolution of these technologies, you are probably finding this article somewhat validating. If you haven’t, there’s a 50/50 chance you are somewhere between feeling slightly confronted and having a full-blown existential crisis. So, let’s bring it back to today.
The future will belong to those who understand not only the capabilities of AI, but also the unique strengths of human intelligence and creativity.
What can we, as leaders, do now?
This article series is about beginning to think differently about human-AI interaction and its implications for leadership and strategy. AI is not just a tool for better productivity. It’s an entirely new way of “being in the world.”6 As leaders in fields ranging from healthcare to military strategy, our challenge is not just to adapt to AI but to actively shape its role in our organizations and society.
The best way to do that is to engage. So, I asked my ChatGPT4 collaborator for some practical actions organizational leaders can take today to prepare and participate in shaping the future. These are the seven I stand behind:
- Invest in AI Literacy: Begin by enhancing your understanding and that of your team about AI. Organize educational sessions with AI experts to demystify the technology and explore its potential applications in your specific industry.
- Establish Ethical AI Guidelines: Develop a framework that addresses such issues as data privacy, algorithmic bias, and transparency within your organization, to ensure AI is used responsibly.
- Pilot AI Projects: Identify areas in your organization where AI can add value. For instance, implementing chatbots for customer service in retail or utilizing AI for logistics and supply chain optimization in warehouse management.
- Prepare for an AI-Integrated Culture: Address any apprehensions about AI among your staff. Highlight AI’s role as a complement to, not a replacement for, human skills.
- Form Strategic AI Partnerships: Collaborate with tech firms or academic institutions for tailored AI solutions or to stay abreast of the latest developments in the field.
- Engage in Scenario Planning: Conduct workshops to explore how AI might impact your industry in the future and develop strategies to navigate these changes.
- Leverage AI in Talent Management: To ensure your team’s skills are aligned with future needs, use AI-driven tools for efficient talent acquisition and management. [WARNING: These tools can be susceptible to biased algorithms. Make sure the algorithms you use are consistent with your DEI commitments.]
If these seem like the right things but a little overwhelming, reach out. We are here for you! www.grangernetwork.com
1 When asked about a possible “recession,” I find most people superstitiously believe that saying the word itself will create the reality. I play along.
2 Lake, B.M., Baroni, M. “Human-like systematic generalization through a meta-learning neural network”. Nature 623, 115–121 (2023). https://doi.org/10.1038/s41586-023-06668-3.
3 Google claimed the first breakthrough in quantum computing in 2019, announcing that its quantum processor had performed a task in minutes that the most powerful classical computers would need thousands of years to complete. The cool thing about quantum computers is that, instead of binary bits, they are made of qubits, which operate according to the mysterious laws of quantum mechanics. A qubit exists in a state called “superposition,” as if it’s traveling all possible paths at once until, the instant you observe it, it “collapses” into one position. Basically, it exists as a string of multiple possibilities, and you “see” it as a single point. Multiple qubits can be brought into the state of superposition and “entangled” to work together. As you add more qubits to an entanglement, computing power grows exponentially.
Cade Metz, “Google Claims a Quantum Breakthrough That Could Change Computing,” New York Times, October 23, 2019. https://www.nytimes.com/2019/10/23/technology/quantum-computing-google.html
Arute, F., Arya, K., Babbush, R. et al. “Quantum supremacy using a programmable superconducting processor”. Nature 574, 505–510 (2019). https://doi.org/10.1038/s41586-019-1666-5
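To make the footnote’s last point concrete: fully describing a register of n entangled qubits on a classical machine requires tracking 2^n complex amplitudes, so every added qubit doubles the bookkeeping. A toy sketch in Python (my own illustration of the scaling; `state_vector_size` is a hypothetical name, not part of any quantum library):

```python
def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes a classical computer must track
    to fully describe a register of n entangled qubits (2**n)."""
    return 2 ** n_qubits

# Each added qubit doubles the amplitudes a classical simulator must store:
for n in (1, 10, 50):
    print(f"{n} qubits -> {state_vector_size(n):,} amplitudes")
```

At 50 qubits the count already exceeds a quadrillion, which is roughly why tasks on even modest quantum processors can outrun classical simulation.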
4 The study of the as-lived human experience is called phenomenology.
5 Human evolution is a central theme in Yuval Harari’s 2015 New York Times best-seller Sapiens: A Brief History of Humankind (a book I highly recommend). There is some debate about the evolution of human intelligence. This is in stark contrast to the unquestionably rapid evolution of AI.
6 I credit the German philosopher, Martin Heidegger, for inventing this term. To learn more about it, check out this entry on Heidegger in the Stanford Encyclopedia of Philosophy.