
    How scared should we be about machines taking over?

    Life 3.0 by Max Tegmark argues that questions about artificial intelligence need to be confronted sooner rather than later

    By Steven Poole | China Daily USA | Updated: 2017-12-04 14:07

    “Prediction is very difficult,” the great physicist Niels Bohr is supposed to have said, “especially when it’s about the future.” That hasn’t stopped a wave of popular-science books from giving it a go, and attempting, in particular, to sketch the coming takeover of the world by superintelligent machines.

    This artificial-intelligence explosion — whereby machines design ever-more-intelligent successors of themselves — might not happen soon, but Max Tegmark, an American physicist and founder of the Future of Life Institute, thinks that questions about AI need to be addressed urgently, before it’s too late. If we can build a “general artificial intelligence” — one that’s good not just at playing chess but at everything — what safeguards do we need to have in place to ensure that we survive?

    We are not talking here about movie scenarios featuring killer robots with red eyes. Tegmark finds it annoying when discussions of AI in the media are illustrated like this: the Terminator films, for example, are not very interesting for him because the machines are only a little bit cleverer than the humans. He outlines some subtler doomsday scenarios. Even an AI that is programmed to want nothing but to manufacture as many paper clips as possible could eradicate humanity if not carefully designed. After all, paper clips are made of atoms, and human beings are a handy source of atoms that could more fruitfully be rearranged as paper clips.

    What if we programmed our godlike AI to maximise the happiness of all humanity? That sounds like a better idea than making paper clips, but the devil’s in the detail. The AI might decide that the best way to maximise everyone’s happiness is to cut out our brains and connect them to a heavenly virtual reality in perpetuity. Or it could keep the majority entertained and awed by the regular bloody sacrifice of a small minority. This is what Tegmark calls the problem of “value alignment”, a slightly depressing application of business jargon: we need to ensure that the machine’s values are our own.

    What, exactly, are our own values? It turns out to be very difficult to define what we would want from a superintelligence in ways that are completely rigorous and admit of no misunderstanding. And besides, millennia of war and moral philosophy show that humans do not share a single set of values in the first place. So, though it is pleasing that Tegmark calls for vigorously renewed work in philosophy and ethics, one may doubt that it will lead to successful consensus.

    Even if progress is made on such problems, a deeper difficulty boils down to that of confidently predicting what will be done by a being that, intellectually, will be to us as we are to ants. Even if we can communicate with it, its actions might very well seem to us incomprehensible. As Wittgenstein said: “If a lion could talk, we could not understand it.” The same might well go for a superintelligence. Imagine a mouse creating a human-level AI, Tegmark suggests, “and figuring it will want to build entire cities out of cheese”.

    A sceptic might wonder whether any of this talk, though fascinating in itself, is really important right now, what with global warming and numerous other seemingly more urgent problems. Tegmark makes a good fist of arguing that it is, even though he is agnostic about just how soon superintelligence might appear: estimates among modern AI researchers vary from a decade or two to centuries to never, but if there is even a very small chance of something happening soon that could be an extinction-level catastrophe for humanity, it’s definitely worth thinking about.

    In this way, superintelligence arguably falls into the same category as a massive asteroid strike such as the one that wiped out the dinosaurs. The “precautionary principle” says that it’s worth expending resources on trying to avert such unlikely but potentially apocalyptic events.

    In the meantime, Tegmark’s book, along with Nick Bostrom’s Superintelligence (2014), stands out among the current books about our possible AI futures. It is more scientifically and philosophically reliable than Yuval Noah Harari’s peculiar Homo Deus, and less monotonously eccentric than Robin Hanson’s The Age of Em.

    Tegmark explains brilliantly many concepts in fields from computing to cosmology, writes with intellectual modesty and subtlety, does the reader the important service of defining his terms clearly, and rightly pays homage to the creative minds of science-fiction writers who were, of course, addressing these kinds of questions more than half a century ago. It’s often very funny, too: I particularly liked the line about how, if conscious life had not emerged on our planet, then the entire universe would just be “a gigantic waste of space”.

    Tegmark emphasises, too, that the future is not all doom and gloom. “It’s a mistake to passively ask ‘what will happen’, as if it were somehow predestined,” he points out. We have a choice about what will happen with technologies, and it is worth doing the groundwork now that will inform our choices when they need to be made.

    Do we want to live in a world where we are essentially the tolerated zoo animals of a powerful computer version of Ayn Rand; or will we inadvertently allow the entire universe to be colonised by “unconscious zombie AI”; or would we rather usher in a utopia in which happy machines do all the work and we have infinite leisure?

    The last sounds nicest, although even then we’d probably still spend all day looking at our phones.

    Steven Poole’s Rethink: the Surprising History of New Ideas is published by Random House


    Life 3.0: Being Human in the Age of Artificial Intelligence, 374pp, Allen Lane, £20, ebook £9.99
