
Extinction of the Human Race


An international team of scientists, mathematicians and philosophers at Oxford University’s Future of Humanity Institute is investigating the human race’s biggest dangers.

And they argue in a research paper, Existential Risk as a Global Priority, that international policymakers must pay serious attention to the reality of species-obliterating risks.

Last year, more academic papers were published on snowboarding than on human extinction.

The director of the institute, Swedish-born Nick Bostrom, says the stakes couldn’t be higher. If we get it wrong, this could be humanity’s final century.

First, the good news

Pandemics and natural disasters may cause a colossal and catastrophic loss of life, but Bostrom believes humanity would be likely to survive them.

This is because our species has survived thousands of years of disease, famine, floods, predators, persecution, earthquakes and environmental changes. So the odds are still in our favor.

And in the time frame of a century, he says the risk of extinction from asteroid impacts and super-volcanic eruptions remains “extremely small”.

Even the unprecedented self-inflicted losses of two world wars in the 20th Century, and the Spanish flu pandemic, failed to halt the growth of the global human population.

Nuclear war might cause appalling destruction, but enough individuals could survive to allow the species to continue.

With that feel-good reassurance out of the way, what should we really be worrying about?

Unprecedented threats

Bostrom believes we’ve entered a new kind of technological era with the capacity to threaten our future as never before. These are “threats we have no track record of surviving”.

The director of the institute compares the situation to a dangerous weapon in the hands of a child. He says the advance of technology has overtaken our capacity to control the possible consequences.

Experiments in areas such as synthetic biology, nanotechnology and machine intelligence are hurtling forward into the territory of the unintended and unpredictable.

Synthetic biology, where biology meets engineering, promises great medical benefits. But Dr Bostrom is concerned about unforeseen consequences in manipulating the boundaries of human biology.

Nanotechnology, working at a molecular or atomic level, could also become highly destructive if used for warfare, he argues. He has written that future governments will face a major challenge in controlling and restricting its misuse.

There are also fears about how artificial or machine intelligence might interact with the external world. Such computer-driven “intelligence” might be a powerful tool in industry, medicine, agriculture or managing the economy. But it can also be completely indifferent to any incidental damage it causes.

Seán O’Heigeartaigh, a geneticist at the institute, draws an analogy with algorithms used in automated stock market trading.

Just as these mathematical strings can have direct and destructive consequences for real economies and real people, such computer systems could “manipulate the real world”.
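O’Heigeartaigh’s analogy is easy to make concrete. The toy Python simulation below is purely illustrative (my sketch, not anything from the institute’s research): a crowd of identical momentum-chasing algorithms reacts to a small price dip, and their collective selling becomes the very trend they react to, amplifying a 1% shock into a crash, the same feedback dynamic behind real “flash crashes”.

```python
# Toy model: momentum-chasing trading algorithms amplifying a small shock.
# Every number here is invented for illustration.

def simulate_crash(steps=6, n_bots=100, initial_shock=-0.01):
    price = 100.0
    trend = initial_shock               # the bots observe a 1% dip
    for _ in range(steps):
        net_orders = n_bots * trend     # every bot sells into a fall
        move = 0.02 * net_orders        # aggregate price impact of the orders
        price *= 1 + move
        trend = move                    # the bots' own selling becomes the
                                        # next trend they all react to
    return price

final = simulate_crash()
print(f"A 1% dip becomes a {100 - final:.0f}% crash after 6 rounds")
```

No single algorithm in the sketch is malicious or even faulty; the damage comes from the feedback loop between the algorithms and the market they share.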

In terms of risks from biology, he worries about misguided good intentions, as experimenters carry out genetic modifications, dismantling and rebuilding genetic structures.

This eclectic group of researchers often talks about the possibility of creating ever more powerful generations of computers.

But fellow researcher Daniel Dewey talks about an “intelligence explosion” where the accelerating power of computers becomes less predictable and controllable.

“Artificial intelligence is one of the technologies that puts more and more power into smaller and smaller packages,” says Mr. Dewey, a US expert in machine super-intelligence who previously worked at Google.
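A rough way to see what “explosion” means here (a toy sketch of the standard recursive self-improvement argument, with an arbitrarily chosen growth rule, not a model from the article): if each generation of a system helps design its successor, and better systems improve faster, the gains per generation do not just grow, they accelerate.

```python
# Toy "intelligence explosion": the improvement rate itself grows with
# capability. The 0.5 and the exponent 1.5 are arbitrary assumptions.

capability = 1.0
for generation in range(1, 8):
    gain = 0.5 * capability ** 1.5    # smarter systems improve faster
    capability += gain
    print(f"generation {generation}: capability {capability:7.1f} (+{gain:.1f})")
```

With an exponent below 1 the growth levels off; above 1 it runs away, and uncertainty about which regime we are in is exactly the unpredictability Dewey describes.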

Chain reaction

 

Along with biotechnology and nanotechnology, he says: “You can do things with these technologies, typically chain reaction-type effects, so that starting with very few resources you could undertake projects that could affect everyone in the world”.
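The arithmetic behind the “very few resources” claim is worth spelling out. The snippet below is my illustration, not Dewey’s: anything that reliably copies itself, whether an engineered organism or a piece of self-propagating code, needs only a few dozen doublings to go from a single copy to world scale.

```python
# Chain-reaction arithmetic: how many doublings from one copy to world scale?
# Purely illustrative; real replication rates vary enormously.

copies = 1
doublings = 0
WORLD_POPULATION = 8_000_000_000
while copies < WORLD_POPULATION:
    copies *= 2
    doublings += 1
print(f"{doublings} doublings take a single copy past {WORLD_POPULATION:,}")
# 33 doublings: at one doubling per hour, less than a day and a half.
```

The danger he points to is not the scale of the starting resources but the exponent.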

The Future of Humanity project at Oxford is part of a trend towards focusing research on such big questions. The institute was launched by the Oxford Martin School, which brings together academics from across different fields with the aim of tackling the most “pressing global challenges”.

Martin Rees, the Astronomer Royal and former president of the Royal Society, is backing plans for a Centre for the Study of Existential Risk in Cambridge, and says: “This is the first century in the world’s history when the biggest threat is from humanity.”

Nick Bostrom says the significance of an existential risk is “not on people’s radars”. But he argues that change is coming whether or not we’re ready for it.

“There is a bottleneck in human history. The human condition is going to change. It could be that we end in a catastrophe or that we are transformed by taking much greater control over our biology. It’s not science fiction, religious doctrine or a late-night conversation in the pub.”