
Comment count is 29
Maggot Brain - 2017-07-25

How did google AI come up with a white hot dog man persona?


infinite zest - 2017-07-25

Easy. It completed the 'Cat Needs Food Badly' Request.


Rafiki - 2017-07-25

Needs "going to the store" tag.


reifiedandrefined - 2017-07-25

i was thinking QWOP


boner - 2017-07-25

Ed Grimley, I must say.


betabox - 2017-07-25

Definitely Ed Grimley!


ADnova - 2017-07-25

With tech like this you can really see why Elon Musk calls AI "the most serious threat to the survival of the human race".


StanleyPain - 2017-07-28

To be fair, most people have misrepresented the views of anti-AI thinkers into this reductive garbage as if everyone gets their opinions from sci-fi movies or something.

Most of the current opposition to AI rests on the presumption that scientists and engineers will be so enamored with the earliest forms of advanced AI that seem to "work" as intended that they will immediately apply them to situations giving those systems some degree of control over themselves and other people; when those systems inevitably malfunction, fixing the problem may be nearly impossible depending on the state of the AI. It's not so much that anti-AI proponents think DeepMind will become a malevolent entity that wishes to conquer humanity, but that the eventual demands of "existence" on an artificial mind will cause it to break, sort of like the HAL 9000 scenario.


Bisekrankas - 2017-07-25

Nice, the guy who created this, Demis Hassabis, started out at Bullfrog as lead programmer on Theme Park and doing level design for Syndicate.


infinite zest - 2017-07-26

Still looks like Ballz to me


Old_Zircon - 2017-07-26

What IZ said is the first thing I thought, too, before I even hit play.


jfcaron_ca - 2017-07-25

Google is behind the times, remember this one?
http://www.poetv.com/video.php?vid=130635


TeenerTot - 2017-07-25

WHHHEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeeee!


Maggot Brain - 2017-07-25

"HEEEEEEEEEEY YOOOOOOOOOU GUUUUUUUUUUYS!!!"


Hooker - 2017-07-25

What a fuckin' spaz.


jangbones - 2017-07-25

what? I walk just like that


SolRo - 2017-07-25

malevolent flamboyant should be a D&D alignment.


Mr. Purple Cat Esq. - 2017-07-26

Lawful flamboyant


cognitivedissonance - 2017-07-26

It reminds me of Track Day at Montessori.


badideasinaction - 2017-07-26

Needs Queen's "don't stop me now" as music.


Two Jar Slave - 2017-07-26

I still don't get this whole "learning" thing.


Old_Zircon - 2017-07-26

https://www.edge.org/conversation/jaron_lanier-the-myth-of-ai

And a rebuttal (which doesn't have me convinced):
http://palmstroem.blogspot.com/2015/01/jaron-lanier-pisses-me-off.html


"Lanier's sloppy thinking culminates in the idea that the tangible benefits of (existing, narrow) AI applications are not the result of automation and novel ways of obtaining, processing, integrating and distributing information, but by somehow stealing the food off the tables of the poor knowledge worker classes that still are forced to do the actual job."


As someone who has experienced that exact dynamic in the most literal way possible (i.e. having data I manually generated for 7 years used as the back-end for a so-called intelligent algorithm the industry-leading company I was with developed in the background that allowed them to lay off me and the 20 other people who actually did the work that they were selling as a product of AI), I'm going to side with Lanier on this one.


Old_Zircon - 2017-07-26

Oh, and also that rebuttal completely misrepresents what it's rebutting.


cognitivedissonance - 2017-07-26

OZ, I am currently doing that job, and I'd like to talk to you.


Two Jar Slave - 2017-07-26

I guess that's an interesting debate, OZ, but it doesn't clarify for me what 'teaching itself to walk' means in this video.

There is obviously a lot more to the program than what the video claims (directional sensors and an "incentive" to move toward a goal). There must be some programming structure which gives the AI the ability to manipulate its digital limbs, and to develop practical methods for combining its movements to achieve a goal. This structure, whatever it is, is presumably what allows the thing to 'learn'. That the video ignores the existence of this structure DOES relate to Lanier's point about personification and even mysticism within the language used to describe AI.

For me, the most obvious question is: if this exact same structure was outfitted with different sensors and incentives, could it "learn" to do something completely different, like translate a book (to use Lanier's example)? Or would the structure need to change to accomplish a different goal? Because if the learning structure needs to change, then I don't see the difference between this and just programming a virtual body to move around, albeit with a delay. I honestly just don't get it!

Other questions from my stupid brain:

How does the program interact with its own memory of failed attempts? Is there human intervention at this point to instruct it on the difference between a failure and a success? What does "incentive" mean, exactly?

Was there any possibility of it NOT teaching itself to walk?

Was there the possibility of it teaching itself to do something else?

Someone posted a video of Deep Dream "learning" to remember the ending of Blade Runner, and I didn't "get it" then either. All these neural net headlines make me feel like I suffer from a very specific learning disability which prevents me from ogling headlines about neural nets. Or maybe I just read my first Goosebumps and you're giving me Ulysses, I don't know. I understand there's a revolution in AI happening because I'm told it's happening, but I just don't get what 'learning' means.


jfcaron_ca - 2017-07-26

A neural net is basically "a big ball of mathematical relations that a human didn't explicitly write" with otherwise-understandable inputs and outputs.

In the case of this kind of AI "learning" to walk, the inputs are combinations of limb & joint movements, and the output is some kind of figure-of-merit function for whether the thing is walking or not. The figure-of-merit function is coded by a human. The set of limb & joint combinations in this case also seems to be coded by a human, but it can in principle also be dynamically set (within boundaries set by a human) by the AI.

Evolutionary AI works roughly like this:
With the ingredients mentioned in the above paragraph, you now get a fast computer to try lots of different combinations of inputs and tweak the neural net (big ball of math relations) according to the figure-of-merit function. Limb movements that don't give a high walking score are suppressed; those that give a higher score are enhanced. Repeat as many times as necessary until the neural net gives you something that looks like walking.

I'm pretty sure that's what they did here (like the older video I linked above). Another way is to feed existing limb motion data to the neural net, but then there is the question of where you get that data, which I think is what OZ is talking about up there.
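The trial-mutate-keep loop jfcaron_ca describes can be sketched in a few lines of Python. Everything here is a toy stand-in, not DeepMind's actual code: `walking_score` is a hypothetical figure-of-merit function (it just rewards joint amplitudes near 0.5), and the "gait" is a plain list of numbers rather than a neural net.

```python
import random

# Toy figure-of-merit: hypothetical stand-in for a real "how well is it
# walking?" score. Here, coordinated moderate amplitudes score highest.
def walking_score(gait):
    return sum(1.0 - abs(a - 0.5) for a in gait)

def evolve(n_joints=4, generations=200, trials_per_gen=20, seed=0):
    rng = random.Random(seed)
    # Start from random flailing.
    best = [rng.random() for _ in range(n_joints)]
    best_score = walking_score(best)
    for _ in range(generations):
        for _ in range(trials_per_gen):
            # Mutate the current best gait slightly, clamped to [0, 1].
            trial = [min(1.0, max(0.0, a + rng.gauss(0, 0.05))) for a in best]
            score = walking_score(trial)
            # Keep improvements, discard everything else.
            if score > best_score:
                best, best_score = trial, score
    return best, best_score

gait, score = evolve()
print(round(score, 3))  # climbs toward the maximum of 4.0
```

The "learning" is nothing more than this keep-what-scores-higher loop run many thousands of times; swap in a physics simulator for `walking_score` and a neural net for the list of numbers and you have the rough shape of what the video shows.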


Two Jar Slave - 2017-07-26

THANK YOU


Bort - 2017-07-27

What jfcaron_ca described has a lot of parallels to evolution, where each member of a species is like an instance of certain limb & joint movements, most of them duplicates of combinations that have been tried before, with the occasional new unexpected combination. The figure-of-merit is reproduction.

Evolution isn't smart; it's trial-and-error, a combination of mutations and (more frequently) changing environmental factors that make some combinations more reproductively successful than others. It kind of irks me when people say "we evolved in such-and-such way because ...", as if that implies design behind the process. Nope, it's evolution just occasionally blundering into something that works better. And even then, some of those "improvements" are a step backwards in a lot of ways: the sickle cell trait renders you malaria-resistant, but if you get the trait from both parents it causes big health problems, so it's a definite negative if there's no malaria around.


Hazelnut - 2017-07-29

AI as it exists today is being totally overhyped. The "DeepMinds" and "Watsons" and such have proven woefully less effective than marketed. For a while you were hearing about how Watson was going to be making medical diagnoses, picking stocks, adjusting insurance claims. Never happened.

Here's one reason why: when AI fails, it fails really stupidly. You can see a bit of that in the walking algorithm. Imagine a self-driving car that worked nineteen trips out of twenty, but on the twentieth drove full speed the wrong way down a highway. Right now, when Google Maps leads you off a cliff or into road construction, your human brain intervenes.

But it's a work in progress. Trains and planes were overhyped until they weren't.

