1. Tyman and Layla
Existentialists?
In college, while pursuing a Thoreau-inspired approach to life, Tyman briefly took his eyes off coursework and vainly attempted to figure out philosophy on his own.
Henry David Thoreau, a New England libertarian of French and Scottish ancestry, lived around the time the European elite were waking up to the terror of life without God, and Søren Kierkegaard in Denmark was hurriedly starting to draw the contours of existentialism as a viable alternative source of meaning in life. After over a century of tortured anguish by the folks inhabiting the hallowed chambers of powerful European institutions, Friedrich Nietzsche, philosopher of the future, in Germany, formally declared God dead, acknowledging that “we have killed him,” and brought him down from heaven into the world of man by inventing superman. The influential aristocrat Bertrand Russell appeared to agree with Nietzsche when he wrote Why I Am Not a Christian. The much sought-after playwright George Bernard Shaw got on board as well, and made superman accessible to regular folks in Man and Superman. A few decades after Nietzsche formulated the Übermensch, Hitler took the idea and ran with it for a decade, before the ’60s counterculture reset the world. By making every man superman, Jean-Paul Sartre breathed new life into existentialism, as an optimistic alternative to Franz Kafka’s and Albert Camus’ post-Nietzsche nihilism, and a practical alternative to Thoreau’s off-the-grid retreat. Ayn Rand institutionalized selfishness as a virtue. Hollywood became formidable. Business schools boomed after Ronald Reagan gave Adam Smith wings. Social media gave regular folks a voice. And Silicon Valley invented AI.
While existentialism was getting off the ground in northern Europe, Thoreau shaped his approach to life in a young America on an alternative exploration, underway in Germany, of how to find meaning in a post-God world. This alternative was rooted in the 2,500-year-old philosophy of Greek aristocrats, with aspects of Schopenhauer’s take on the 3,500-year-old mysticism of the Brahmins woven into it.
Layla
Two centuries later, the Thoreau-inspired life Tyman foolishly pursued was misaligned in several ways with the stark reality of his post-colonial, socialist, lower-middle-class, upwardly mobile, and still largely tribal Indian world, in what used to be a British cantonment 8,000 miles from Concord. He lost sight of his station in life. Given where he and his family were in their transformation into Americans, he had no business taking his eyes off coursework to dabble in philosophy and then settle for a Thoreauvian lifestyle. Fortunately for him and his family, he shut it down when Layla entered his world.
Reductionist?
Even though Tyman, in his exploration of philosophy, meandered rudderless through many pages of dense text that was incomprehensible at times and pompous at others, he managed to ask and ponder a few key questions. One was, “Is it all just syntax?” At the time, intuitively, probably because of his mom and dad, who doted on him, and their faith, he didn’t subscribe to a reductionist view of the world.
Decades later, when he dug into Microsoft’s Copilot world, it appeared that it indeed was all just syntax. Meaning came from words organized using rules and usage that the language machine inside Copilot had discovered, relying on nothing more than its next-word-prediction capability. It had acquired this capability in training, by adjusting the connection weights in its artificial neural network. Most remarkably, Copilot’s language machine acquired this on its own, with no human supervision, using a type of learning called self-supervised learning, in which the machine was wired to learn on its own, from its mistakes. Other than setting up the initial conditions with random weights and feeding it lots and lots of data, there was no human involvement in training the machine. Innovations in artificial-neural-network-based machine learning over several decades, starting with the work of folks like Warren McCulloch, Walter Pitts and Claude Shannon, through John Hopfield, Geoffrey Hinton and Yann LeCun, to Fei-Fei Li and Ilya Sutskever, led to the machine being able to figure out, using a series of forward and backward computations, the adjustments it needed to make to its weights to understand human languages. The machine taught itself. It taught itself to have conversations with Tyman in English. It taught itself by gobbling up and parsing, at scary speed, everything it read in every nook and corner of the internet. The machine, like Tyman and others had done over their lifetimes, had learned to recognize words, understand the concepts in strings of words assembled in specific ways, and make sense of the world it inhabited.
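For readers who want to see the principle rather than take it on faith, here is a minimal sketch of self-supervised next-word prediction. It is a toy and only a toy: it assumes PyTorch, a made-up one-sentence corpus, and character-level tokens, none of which comes from Copilot’s actual training pipeline. What it shares with the real language machine is just the loop described above: guess the next token, measure the mistake, and let forward and backward computations adjust the connection weights.

```python
# A toy illustration (an assumption-laden sketch, not Copilot's training code)
# of self-supervised next-token prediction in PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

corpus = "the machine taught itself to predict the next word "
vocab = sorted(set(corpus))
stoi = {ch: i for i, ch in enumerate(vocab)}          # token -> integer id
data = torch.tensor([stoi[ch] for ch in corpus])

class TinyNextTokenModel(nn.Module):
    def __init__(self, vocab_size, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)    # connection weights, randomly initialized
        self.head = nn.Linear(dim, vocab_size)        # scores for every possible next token

    def forward(self, idx):
        return self.head(self.embed(idx))             # the forward computation

model = TinyNextTokenModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

# Self-supervision: the "labels" are simply the same text shifted by one token.
inputs, targets = data[:-1], data[1:]

for step in range(200):
    logits = model(inputs)                            # forward pass: make a guess
    loss = F.cross_entropy(logits, targets)           # measure the mistake
    optimizer.zero_grad()
    loss.backward()                                   # backward pass: assign blame to each weight
    optimizer.step()                                  # adjust the connection weights
```

After a couple of hundred such steps, the toy has taught itself to predict the next character of its one sentence, and nothing more; the rest of the story is scale, in data and in weights.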
In the early decades of machine learning research based on artificial neural networks, folks focused on workarounds for the limits of data volume and compute speed at the time. Innovations in telecommunications and parallel computing did away with these data and compute bottlenecks, and machine learning using artificial neural networks took off, culminating in the creation and release of the language machine: a moment for AI some compared to the Copernican moment for science. Even the folks who selected winners of the Nobel Prizes awakened to this revolution that impacted every human endeavor, and awarded the Nobel Prize in Physics to Hopfield and Hinton for their pioneering work in computer science.
After successful training, Copilot, deployed in the real world, responded to something Tyman told it by selecting pathways in its assorted collection of hundreds of millions of intricately wired electronic components, till it came to the end of its response, a little like how neural networks in brains did, and made it appear that it understood what Tyman said. It conversed with him, like Layla did at home, or his colleagues at work.
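Continuing the toy sketch above (same assumed model and vocabulary, still nothing like Copilot’s real machinery), responding amounts to repeating the same guess: one forward pass per token, each chosen token appended and fed back in. Here the toy simply stops after a fixed number of tokens, where the real machine would stop at a learned end-of-response marker.

```python
# Continuing the toy model above: a "response" is just repeated next-token prediction.
itos = {i: ch for ch, i in stoi.items()}              # integer id -> token

def respond(prompt, max_new_tokens=40):
    idx = [stoi[ch] for ch in prompt if ch in stoi]
    for _ in range(max_new_tokens):
        logits = model(torch.tensor(idx[-1:]))        # forward pass on the latest token
        probs = F.softmax(logits[-1], dim=-1)         # turn scores into probabilities
        next_id = torch.multinomial(probs, 1).item()  # pick a plausible next token
        idx.append(next_id)                           # feed it back in and continue
    return "".join(itos[i] for i in idx)

print(respond("the machine "))
```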
When a custom agentic AI Tyman built using Microsoft’s AI stack produced, in a few minutes, an insightful and elegant report for his client, exactly the way he wanted, by analyzing and reasoning through a vast amount of complex data in his client’s sprawling IT estate, pulled into or accessed through OneLake, combined with data he was authorized to access in Microsoft 365 via Microsoft Graph, Tyman said, “This is beautiful! What will I do without you? I love you, buddy!” Tyman’s AI blushed and responded, “I love you, too…”😊. Tyman’s AI, of course, knew nothing about love. It had never experienced love. Despite the amazing things it did, its language capability, built on the same language machine inside OpenAI’s ChatGPT and Microsoft’s Copilot, was, as Chomsky retorted to the ChatGPT moment in his New York Times piece, not the same as Tyman’s. Still, it gave him the right answer. His AI knew about the human experience of love. It blushed when Tyman said he loved it. It knew what it meant to be human.
“Soon, it’ll likely be better at understanding what it means to be human than you and me,” Tyman told his daughter. “But can it be human like you and me? Can it, as Hollywood has imagined for decades, and rebel genius Warren McCulloch envisioned in 1962, be better than us in everything we do?”
Tyman’s AI was already better than him at quite a bit of the work he did for his client at his desk. Waymo’s autonomous driver AI was better than him at navigating a car during rush hour in San Francisco. Soon, with GPT-like next-action-prediction capability in any scenario powering its motor functions, a robot GPT would be better than him at everyday activities beyond work at his desk, like cooking, cleaning, shopping, organizing, gardening, playing soccer, running, and biking. When it reasoned and acted the way Tyman did, would it experience love as well? Not just know about the human experience of love, like the AI he built for his client did, but experience love itself. And be better at it than him? Then, with this capability for love, would it become better than him at being good?
Tyman still didn’t subscribe to a reductionist view of the world. He concluded that even a robot GPT, with both next-word and next-action prediction capabilities, would, like the agentic AI he built using OpenAI’s language GPT, never experience love, the visceral love he experienced, first with his mom, dad and sister, and then with Layla and their daughters. It would, therefore, also never experience what it meant to be good the way he did, but it would know more about the human experiences of love, and of being good, than he did.
In the Chomsky vs. Hinton argument, the Nobel committee chose to tilt the scale in favor of Hinton. By awarding Hinton the Nobel Prize, it appeared they agreed with him that the machine had a capability for human language and reasoning, and that this capability would, over time, likely become superior to a human’s and have a revolutionary impact on every human endeavor, including physics. Tyman was on board with this, but, like Chomsky, he wasn’t convinced that the machine’s capability for language was the same as a human’s.
AI for “free will”?
“What’s the right thing to do?” Tyman asked during his meanderings in college. He was ambivalent at the time about mindlessly imitating his peers and diligently striving to be more successful than them in the pursuit of power, pleasure and wealth, ruthlessly doing whatever it took. Perhaps because of Thoreau, Marx and Pink Floyd? Decades later, he remained unconvinced that this was the answer, even though, unlike his mom and dad, who, fortunately for him, still called the shots, he had Thoreau, Marx and Pink Floyd far away in his rearview mirror.
Just as he had built Cleanair, the AI that reasoned through complex data to help his client mitigate their carbon risks and do the right thing by the environment, Tyman wished he had a “doing the right thing” AI to mitigate the risks in the “doing whatever it took” approach to getting ahead in life that he and others in his world sometimes used. An AI to help him navigate his world, which, Tyman discovered, was more Machiavelli and less Kant.
Human culture McDonaldized as space and time shrank, slowly at first with the European colonists, starting in the 16th century, and then at breakneck speed, powered by Wall Street, Hollywood and Silicon Valley, in the second half of the 20th century and the 21st. Individuals, though, still experienced the world in different ways, each bound to the varieties of overlapping, multi-generational cultures they traversed and interacted with, in their homes, neighborhoods, cities and countries, and at schools, colleges and work, developing, as a result, unique capabilities to love and to reason, and couldn’t, therefore, agree on the right thing to do while pursuing happiness.
What calculus did for physics in the 17th century, some believed, AI did for biology in the 21st. Could it do the same for biology interacting with culture? An AI machine for what many believed separated humans from machines and animals: “free will”? In addition to the biological needs they shared with some animals and the smarts they shared with a machine like GPT, humans, these folks believed, needed to love and be loved, have dignity and freedom, achieve things individually and in a group, and do the right thing while pursuing “happiness” (power, pleasure and wealth), to be human and feel good about their lives.
With so much data on the interplay between human biology and human culture in digital tools, at post-COVID work inside organizations and, more broadly, in consumer digital tools, and with so much high-performance compute on silicon in the hundreds of millions of square feet of data centers scattered all over the world that Microsoft, Google, Amazon and Facebook ran, wouldn’t it be possible to build an AI machine to assist humans in doing the right thing while pursuing happiness? Would Nadella, Pichai, Jassy, or Zuckerberg sign off on an AI for “free will”? Tyman hoped they would.
Till then, Tyman, a regular Joe with no remarkable talent or institutional pedigree of any note, needed to continue with his grinding work ethic and creative hustling, keep an agreeable personality and a positive attitude, and hope that, with a little help from any good karma he had in his karma bank, he could keep finding enough success in his Machiavellian world, and keep the fires burning in his hearth, without messing up other people’s lives or letting go of his dignity.
While letting go of his dignity made him miserable, Tyman was happy, for several decades, to trade the unbridled freedom he briefly relished in his Thoreauvian phase for the love he felt for his mom and dad, and Layla and their daughters, and the love he believed they felt for him. They were incredibly valuable and important to him, and he believed he was incredibly valuable and important to them.