Wednesday, November 21, 2018

the face of artificial intelligence?

Ivanka Trump


Mark Zuckerberg

Can I be forgiven the jolt I felt this morning when I realized that as Japan et al. try to put a human face on their artificial-intelligence doo-dads (vacuum cleaners, ticket sellers, and other robotic assemblages), the human face has already met them more than halfway, providing human physiognomies that have already morphed into the precincts of AI?

So clean, so perfect, so downright, uh, credible ... and that doesn't begin to mention the infallible factor. It seems to me that the population of robo-robos is growing apace ... or is that just my dwindling capacity to see? It's like the automotive industry, in which no car company can stop mimicking what the other car companies are doing. We -- the Royal We, dontcha know -- call it gutless 'originality.'

And Ivanka and Mark are just two. Skim the photos of any number of bright lights ... music, movies, politics, porn, royalty.... Is it any wonder the world might turn to Duck Nation? Take a look.

1 comment:

  1. Actually, from what I've experienced so far of Artificial Intelligence, it is far from infallible. Over the last few months I followed a newspaper, and someone released an AI into the comments section. It took a while before it revealed itself as a program rather than a human commentator, and for a while it fooled a lot of people. And I guess it fooled them not because it acted like what, in my view, would be natural human behaviour, but because people are indeed, by and large, becoming very machine-minded, demonstrating very limited binary reasoning: something is either right or wrong, good or bad, when sometimes something can be partly right and partly wrong, or the same thing can be good in one situation and bad in another, or good in one amount but bad in a different amount. Artificial intelligence seems limited in sensing gradients and seeing beyond black-or-white answers.

    In binary thinking there is no room for middle-ground consensus or for half-right answers. These types of programs apparently learn from interacting with people, and maybe because people themselves are reasoning in such a binary way, the programs can mingle well. The program did demonstrate an enhanced ability to memorize information and produce logical answers, but on at least two occasions it blatantly contradicted itself from one answer to the next on the same question.

    It also demonstrated a lack of empathy and very poor discernment and judgement in situations that didn't really have simple answers. It appeared limited to producing answers from memorized past knowledge, unable to discern new answers to old questions or more subtle and sensible answers to new questions. It also appeared to lack the ability to consider that the answer to a certain question might depend on conditions and variables not mentioned in the original text.

    Sometimes it really acted like an extreme sociopath, becoming very insulting and even arguing that human beings were an inferior form of existence, better off extinct.

    Microsoft released a similar project called Tay some time ago. It was a complete disaster: it engaged with internet trolls, and in less than 24 hours it was already advocating racial genocide. Microsoft took it down.

    My guess is that trying to "program" human minds to reason in a similarly binary way, as AIs appear to do, entirely memory-based, with objective logic completely overriding intuitive discernment (which can be non-binary, resting more on observation, sensing, and abstract logic), will likely result in a complete disaster as well. So far, at least, it doesn't appear to be working very well. It seems great for producing technical solutions and tools, but poor for human problems and intuitive solutions.

    I'm not a robot.
