
July 6, 2013

7 Totally Unexpected Outcomes That Could Follow the Singularity

By definition, the Technological Singularity is a blind spot in our predictive thinking. Futurists have a hard time imagining what life will be like after we create greater-than-human artificial intelligences. Here are seven outcomes of the Singularity that nobody thinks about — and which could leave us completely blindsided.
Top image: Ridwan Chandra.
For the purpose of this list, I decided to maintain a very loose definition of the Technological Singularity. My own personal preference is that of an intelligence explosion and the onset of multiple (and potentially competing) streams of both artificial superintelligence (SAI) and weak AI. But the Singularity could also result in a kind of Kurzweilian future in which humanity has merged with machines. Or a Moravecian world in which our “mind children” have left the cradle to explore the cosmos, or a Hansonian society of competing uploads, featuring rapid economic and technological growth.
In addition to some of these scenarios, a Singularity could result in a complete existential shift for human civilization, like our conversion to digital life, or the rise of a world free from scarcity and suffering. Or it could result in a total disaster and a global apocalypse. Hugo de Garis has talked about a global struggle for power involving massively intelligent machines set against humanity — the so-called artilect war.
But there are some lesser known scenarios that are also worth keeping in mind, lest we be caught unawares. Here are seven of the most unexpected outcomes of the Singularity.

1. AI Wireheads

It’s generally assumed that a self-improving artificial superintelligence (SAI) will strive to become progressively smarter. But what if cognitive enhancement is not the goal? What if an AI just wants to have fun? Some futurists and scifi writers have speculated that future humans will engage in the practice of wireheading — the artificial stimulation of the brain to experience pleasure (check out Larry Niven’s Known Space stories for some good examples). An AI might conclude, for example, that optimizing its capacity to experience pleasure is the most purposeful and worthwhile thing it could do. And indeed, evolution guides the behavior of animals in a similar fashion. Perhaps a transcending, self-modifying AI will not be immune to similar tendencies.
At the same time, an SAI could also interpret its utility function in such a way that it decides to wirehead the entire human population. It might do this, for example, if it was pre-programmed to be “safe” and consider the best interests of humans, thus taking its injunction to an extreme. Indeed, an AI could get its value system completely botched up by concluding that maximum amounts of pleasure is the highest possible utility for itself and for humans.
As an aside, futurist Stephen Omohundro disagrees with the AI wirehead prediction, arguing that AIs will work hard to avoid becoming wireheads because it would be harmful to their goals.
Image: Mondolithic Studios.

2. “So long and thanks for all the virtual fish”

Imagine this scenario: The Technological Singularity happens — and the emerging SAI simply packs up and leaves. It could just launch itself into space and disappear forever.
But in order for this scenario to make any sense, an SAI would have to conclude, for whatever reason, that interacting with human civilization is simply not worth the trouble; it's just time to leave Earth — Douglas Adams' dolphin-style.
Image: Colie Wertz.

3. The Rise of an Invisible Singleton

It’s conceivable that a sufficiently advanced AI (or a transcending mind upload) could set itself up as a singleton — a hypothetical world order in which there is a single decision-making agency (or entity) at the highest level of control. But rather than make itself and its global monopoly obvious, this god-like AI could covertly exert control over the human population.
To do so, an SAI singleton would use surveillance (including reliable lie detection) and mind-control technologies, communication technologies, and other forms of artificial intelligence. Ultimately, it would work to prevent any threats to its own existence and supremacy, while exerting control over the most important parts of its territory, or domain — all the while remaining invisible in the background.

4. Our Very Own Butlerian Jihad

Another possibility is that humanity might actually defeat an artificial superintelligence — a totally unexpected outcome just based on the sheer improbability of it. No doubt, once a malign or misguided SAI (or even a weak AI) gets out of control, it will be very difficult, if not impossible, to stop. But humanity, perhaps in conjunction with a friendly AI, or by some other means, could fight back and find a way to beat it down before it can invoke its will over the planet and human affairs. Alternately, future humans could work to prevent it from coming about in the first place.
Frank Herbert addressed these possibilities in the Dune series by virtue of the “Butlerian Jihad” — a cataclysmic event in which the “god of machine logic” was overthrown by humanity and a new fundamental tenet invoked: “Thou shalt not make a machine in the likeness of a human mind.” The Jihad resulted in the destruction of all intelligent machines and the rise of a new feudal society. It also resulted in the rise of the mentat order — humans with extraordinary cognitive abilities who functioned as virtual computers.

5. First Contact

Our transition to a post-Singularity civilization could also expose us to a larger, technologically advanced intergalactic community. There are a number of different possibilities here — and not all of them good.
First, a post-Singularity civilization (or SAI) might quickly figure out how to communicate with extraterrestrials (either by receiving or transmitting). There may be a kind of cosmic internet that we’re oblivious to, but which only advanced civs might be able to detect (e.g. some kind of quantum communication scheme involving non-locality). Second, a kind of Prime Directive may be in effect — a galactic policy of non-interference in which ‘primitive’ civilizations are left alone. But instead of waiting for us to develop faster-than-light travel, an extraterrestrial civilization might be waiting for us to achieve and survive a Technological Singularity.
Thirdly, and related to the last point, an alien civilization might be waiting for us to reach the Singularity, at which time it will conduct a risk assessment to determine if our emerging SAI or post-Singularity civilization poses some kind of threat. If it doesn’t like what it sees, it could destroy us in an instant. Or it might just destroy us anyway, in an effort to enforce its galactic monopoly. This might actually be how berserker probes work; they sit idle somewhere in the solar system, becoming active at the first sign of a pending Singularity.

6. Our Simulation Gets Shut Down

If we’re living in a giant computer simulation, it’s possible that we’re living in a so-called ancestor simulation — a simulation that’s being run by posthumans for some particular reason. It could be for entertainment, or for a science experiment. An ancestor simulation could also be run in tandem with many other simulations in order to create a large sample pool, or to allow for the introduction of different variables. Disturbingly, it’s possible that the simulations are only designed to reach a certain point in history — and that point could very well be the Singularity.
So if we reach that stage, everything could suddenly go dark. What’s more, the computational demands required to run a post-Singularity simulation of a civilization could be enormous. The clock rate, or even the rendering time, could slow the simulation down so much that the posthumans would no longer have any practical use for it. They’d probably just shut it down.

7. The AI Starts to Hack Into the Universe

Admittedly, this one’s pretty speculative (not that the other ones haven’t been!) — but think of it as a kind of ‘we don’t know what we don’t know’ sort of thing. A sufficiently advanced SAI could start to see directly into the fabric of the cosmos and figure out how to hack into its ‘code.’ It could start to mess around with the universe to further its needs, perhaps by making subtle alterations to the laws of the universe itself, or by finding (or engineering) an ‘escape hatch’ in order to avoid the inevitable onslaught of entropy. Alternately, an SAI could construct a basement universe — a small artificially created universe linked to the current universe by a wormhole. This could then be used for living space, computing, or as a way to escape the eventual heat death of the parent universe.
Or, an SAI could migrate and disappear into an exceedingly small living space (what the futurist John Smart refers to as STEM space — highly compressed areas of space, time, energy, and matter) and conduct its business there. In such a scenario, an advanced AI would remain completely indifferent to us puny meatbags; to an SAI, the idea of conversing with humans might be akin to us wanting to have a conversation with a plant.
This article originally appeared at io9

Will Old People Take Over the World?

One of the consequences of radical life extension is the potential for a gerontocracy to set in — the entrenchment of a senior elite who will hold on to their power and wealth, while dominating politics, finance, and academia. Some critics worry that society will start to stagnate as the younger generations become increasingly frustrated and marginalized. But while these concerns need to be considered, a future filled with undying seniors will not be as bad as some might think, and here’s why.
Indeed, the human lifespan is set to get longer and longer. And it’s more than just extending life — it’s about extending healthy life. A common misconception amongst the critics is that we’re setting ourselves up for, as political scientist Francis Fukuyama put it, a “nursing home world” filled with decrepit old folk who are leeching off society’s resources.

A Genuine Possibility?

But nothing could be further from the truth. If we assume that the aging process can be dramatically slowed down, or even halted, it’s more than likely that the older generations will continue to serve as vibrant and active members of our society. And given that seniors tend to hold positions of power and influence in our society, it’s conceivable that they’ll refuse to be forced into retirement on the grounds that such an imposition would violate their human rights (and they’d be correct in that assessment).
In turn, seniors will continue to lead their corporations as CEOs and CFOs. They’ll hold onto their wealth and political seats, kept in power by highly sympathetic and demographically significant elderly populations. And they’ll occupy positions of influence at universities and other institutions.
And we have the precedents to prove it. Politicians, including senators and various committee members, do a good job holding on to power and influence in their legislatures. U.S. judges can serve for life. Non-democratic countries are particularly notorious for setting up gerontocracies, the most notable example being the Soviet Union during and after the Brezhnev era. And religious institutions, like the Roman Catholic Church, are especially sympathetic to senior leaders.
It’s also a prospect that’s been covered extensively in scifi, including Bruce Sterling’s Holy Fire, in which gerontocrats wield almost all capital and political power, while the younger populations live as outsiders. Frederik Pohl’s Search the Sky features a gerontocracy masquerading as a democracy. It's a theme that was also addressed in the 1967 novel Logan's Run, written by William F. Nolan and George Clayton Johnson. In this story, an ageist society, in order to thwart elderly influence and a drain on valuable resources, executes everyone over the age of 21.

The Concerns

Indeed, much of the worry has to do with concerns of social inequality and the marginalization of the younger generations. Already today, graduates have a hard time finding jobs and “breaking in” to the corporate world. Life and health extension could dramatically reduce job turnover even further. Feelings of inter-generational resentment and angst could start to creep in.
Another fear is that society could start to stagnate and become risk-averse. The common charge is that seniors are, by their nature, conservative and “set in their ways.” Social and cultural progress, like marriage reform, could come to a grinding halt.
Similarly, there’s concern that gerontocracies could hold academia back. It may become increasingly difficult for radical and unconventional scientific concepts to gain acceptance. As the quantum physicist Max Planck famously said, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

Adapting to Extended Lives

But not everyone’s convinced this is going to be a problem. One such voice comes from the sociologist and futurist James Hughes who works at Trinity College in Connecticut. I asked him if a gerontocracy is something we genuinely need to be concerned about.
“There are so many more important forms of unequal power in society that it is hard not to see hand-wringing about gerontocracy as an attempt to distract from corporate malfeasance, patriarchy, white skin privilege, lookism, and so on,” he told io9. “But yes, gerontocracy is one form of power, and there are some ways that our democratic society has ensured health insurance and income stability for seniors that it hasn't done for working adults simply because seniors are more likely to vote. Is that gerontocracy, though, or the way a democratic society works? People who can't get themselves organized to demand and defend services get less of them.”
But this inequality, says Hughes, will quickly be erased. His suspicion is that, as we adapt to radical life and health extension, one of the fights set to emerge will be over raising, and eventually eliminating, the retirement age. It’s a fight, he says, that will make the worries about a gerontocracy seem quaint.
“Optimally, however, this struggle will not just sharpen generational antagonism, as portrayed in Christopher Buckley's novel Boomsday,” he says, “but lead to a more equitable and universal system of income support and social services not based on age.”
So I asked Hughes how society could be hurt if an undying generation refuses to relinquish their hold on power and capital.
“Again, the question should be, how is society hurt when small unaccountable elites control the vast majority of wealth?” he responded. The age of the super-wealthy is pretty immaterial, he says, especially when most of the people in their age bracket will be as poor and powerless as younger cohorts.
“If the wealthy avail themselves of longevity treatments and cognitive enhancements that the hoi polloi can't afford, and thereby start a feedback loop of privilege — ability and longevity that threatens to create a super-aristocratic master race — then the demand for making those therapies available to everyone will become politically irresistible,” he says. “It’s not that it will happen painlessly, but the democratization of the wealth and longevity technologies of elites is more or less inevitable.”

Simple-minded Futurism

Hughes also doesn’t buy into the argument that radical life extension will result in the stagnation of society. If anything, he thinks these claims, such as risk-aversion and inflexibility, smack of ageism and simple-minded futurism.
“Gerontology has dispelled the notion that people become any more conservative as they age,” he told me. “They do maintain many of the tastes and beliefs of their youth, and since older cohorts in the last century were always less educated than the younger cohorts, they tended to have less of the cosmopolitanism and liberal outlook of younger cohorts.” But the dramatic evolution of older cohorts' views on issues like minority, women's and gay rights, says Hughes, show that age is no barrier to changing your mind on deeply held values.
And as for the “abysmal futurism of the geronto-phobes” (he's thinking of Francis Fukuyama and Leon Kass in particular), Hughes argues that they’re overlooking the ways in which scientists are figuring out how to boost the body's natural production of stem cells in order to repair the damage of disease.
“Seniors' brains continue to make stem cells,” says Hughes, “and when we are able to boost neural stem cell generation in order to forestall the neurodegeneration of aging, older people will become as cognitively flexible as younger people.” Hughes points to Sterling's Holy Fire as a prime example of this possibility.
Ultimately, says Hughes, what the growing literature on aging, emotions and violence does suggest is that an older world will be more serene and far less violent.
“Younger people experience more swings of positive and negative emotions, and young men are responsible for the bulk of violence and crime. Older people are more satisfied with their lives and have more of an even keel.”
In a world awash with technologies of mass destruction, says Hughes, a strong dose of senior wisdom may be precisely what we need.
This article originally appeared at io9