Wednesday, November 1, 2017

Guardian series on AI

The emerging field of artificial intelligence (AI) risks provoking a public backlash as it increasingly falls into private hands, threatens people’s jobs, and operates without effective oversight or regulatory control, leading experts in the technology warn.
At the start of a new Guardian series on AI, experts in the field highlight the huge potential for the technology. ...
“We are clearly in brand new territory. AI allows us to leverage our intellectual power, so we try to do more ambitious things,” he said. “And that means we can have more wide-ranging disasters.”
One commentator in the above-linked story suggests that it is imperative that the benefits of AI should somehow be made "fair."

Does anyone else feel that "f-bomb" should no longer refer with tut-tut dismay to the word "fuck," but instead be reserved for the duck-and-cover moniker "fair"? The use of the word "fair" reminds me of the word "free" and its associated caveat: "If someone tells you something is 'for free,' grab your wallet."


  1. "A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law."

    Thus read Isaac Asimov's robotic safety requirements. But at this stage a robot wouldn't know a human from a lamp pole, so when should we start applying such rules?

    Oh well, as with "free," "fair," and "justice," I doubt we can people-proof imaginary bottom lines.

  2. First off, how far are we from a time when AI units can distinguish humans from lamp posts? Extremely reliably? Probably years away, though I’d say we may be just months away. Essentially reliably? They can do so already.

    Apple is releasing the iPhone X, which, evidently, uses facial recognition in place of a password or fingerprint as the basis for its security.

    Meaning? Facial recognition research has come a long way.

    We have had motion detection available for decades already.

    So I’m fairly sure a decent robotics graduate student could construct a device that fairly reliably recognizes lamp posts and people. (The real test would be how easily the AI program can be tricked by statues and costumes.)

    Genkaku’s wider issue is far more interesting: Economic Equality in Technology. Genkaku’s response is correct, but there’s no need to be surprised or cynical. The issue of economic equality has, in fact, already been answered, first in science fiction and now in reality.

    Leading-edge technology almost always goes to where wealth is — governments, corporations and the rich. But slightly trailing-edge technology gets down the economic ladder pretty quickly. (Also, while I’ll not discuss it here, there are usually some geniuses around who know how to do more with less. Some of these genius hackers make some of this available freely.)

    Education is a good example. From time to time there are government grants for tech to schools. Further, many companies enjoy tax write-offs and goodwill either by dispensing grants to schools for new computers and peripherals, or by donating their old, working tech when they do big upgrades. I once worked for a company whose mission was to facilitate the placement of old tech with students and their families. No strings attached. One team verified function, one installed appropriate software (word processor, email, browser, etc.), one team trained the students and parents, and the school admin provided the space and scheduled the dates.

    I’m betting things like this go on across the country and in other countries.

    Note: In this month’s US edition of Wired magazine, the cover story is titled “Love in the Time of Robots.” The caption: “When androids behave just like humans, how will humans behave with each other?”