Analysis: Google no longer understands how its "deep learning" decision-making computer systems have made themselves so good at recognizing things in photos.
This means the internet giant may need fewer experts in future as it can instead rely on its semi-autonomous, semi-smart machines to solve problems all on their own.
The claims were made at the Machine Learning Conference in San Francisco on Friday by Google software engineer Quoc V. Le in a talk in which he outlined some of the ways the content-slurper is putting "deep learning" systems to work. (You can find out more about machine learning, a computer science research topic, here [PDF].)
(not my title)
Naaah. They're not understanding machine intelligence. Take a look at this abstract real quick. What it says, basically, is that machine intelligence has determined that certain colors predominate in child pornography. This means that the machine can use color skew in identifying child porn.

That's like the "shredder" discussion - show a computer a million pictures of paper shredders and it will come up with some statistical truisms about pictures of shredders. They tend to be gray and have buttons on the right, for example. Or "often located next to waste paper bins." Machine intelligence is awesome for statistical discoveries like the above - "if the background is red, it's 50% more likely to be child porn than if the background is blue." It sucks for things like "generate me an image of child porn" or "show me child porn images with yellow backgrounds."

Apply this to the shredder. If you want machine intelligence to not find your shredder, put a Mr. Yuck sticker on it. Attach a feather. Put it under your desk backwards. Machine intelligence excels at projecting trends and sucks at classifying outliers as anything other than "outliers." All the machine can do is go "I think this isn't a shredder because it has a feather on it - human, please check." And, since most people don't put feathers on their shredders, that's plenty good enough. There's no cognition in machine intelligence: "someone is trying to hide their shredder."

The Skynet reference in the article is actually already in use by the CIA: IF "walking around in the dark" AND "skulking" AND "in Pakistan" AND "opening car trunks" THEN "send a Reaper by to take a look-see." This beastie surveils an area roughly the size of Ohio every time they put it up - and if you think they aren't running its every visual through machine intelligence, you're delusional.

The downside is that in order to foil stuff like this, you have to not act like an insurgent. High-level targets are never taken out by routine sweeps; they're always cooked off by interdisciplinary ops that start with HUMINT. Machine intelligence is really good for going "I think you should look at this" and terrible at "I've learned something new." Guaranteed - as soon as the "child porn is blue" paper came out, everyone you want busted started shooting in red rooms.
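To put the "statistical truisms" point in concrete terms, here's a toy sketch. Every feature name, weight, and threshold below is invented for illustration - a real deep-learning system learns its own features, which is exactly why its authors can't always say what it keyed on:

```python
# Toy "shredder detector" that has only learned statistical truisms
# (grey, buttons on the right, near a waste-paper bin). All features,
# weights, and thresholds are invented for illustration.
def shredder_score(grayness: float, buttons_on_right: bool, near_waste_bin: bool) -> float:
    """Crude weighted sum of hand-named features, roughly in [0, 1]."""
    return 0.5 * grayness + 0.3 * buttons_on_right + 0.2 * near_waste_bin

def classify(grayness, buttons_on_right, near_waste_bin, threshold=0.6):
    score = shredder_score(grayness, buttons_on_right, near_waste_bin)
    if score >= threshold:
        return "shredder"
    # Mid-range outliers (the feather, the Mr. Yuck sticker) get punted to a person:
    return "not sure - human, please check" if score >= 0.3 else "not a shredder"

# A typical grey office shredder next to a bin:
print(classify(0.9, True, True))    # -> shredder
# The same shredder, painted pink and shoved under a desk backwards:
print(classify(0.1, False, False))  # -> not a shredder (the outlier fools it)
```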
I think "Outwit" is certainly the wrong word to use in this article. Perhaps, "Humans haven't yet figured out the statistical truisms their computers are using to determine...." Outwitting does imply a level of cognition that isn't at play here. I followed the link to the abstract provided and I better not be on some govt list with Pete Townsend now :)
What part of the article contradicts any of this? Or makes you say they do not understand machine intelligence?
1) There's nothing terrifying about it.

2) The computers aren't "outwitting" anything - they're finding statistical correlations that humans have not found.

3) "Google no longer understands how its 'deep learning' decision-making computer systems have made themselves so good at recognizing things in photos" is not the same as "Google researchers can no longer explain exactly how the system has learned to spot certain objects."

Considering those are the underpinnings of the article...
> There's nothing terrifying about it.

That's pretty subjective, and that was obviously used to add some flavor to the article. Just like the SKYNET bits.

> they're finding statistical correlations that humans have not found.

That sounds like outwitting to me.

> "Google no longer understands how its 'deep learning' decision-making computer systems have made themselves so good at recognizing things in photos" is not the same as "Google researchers can no longer explain exactly how the system has learned to spot certain objects."

How are those different, other than one being a bit more precise?

> Considering those are the underpinnings of the article...

Unquestionably the most pretentious person I've ever seen on the internet. I don't get it either. Your post would have been cool otherwise, but you had to play up this "they're wrong; I'm right; here's why" bullshit.
It's a perspective thing when the algorithms you write outgrow you. Humans (and especially programmers) want to control and predict the things they build. When you don't understand it, weird shit can happen.
Yes - the specific mechanism of the crash was "a bunch of computers operating on non-vetted rules competing for fractions of a penny millions of times per second WHAT COULD GO WRONG?" It wasn't exactly unprecedented, either. Before there was HFT, there was "program trading" which probably caused Black Monday.
> Google doesn't expect its deep-learning systems to ever evolve into a full-blown emergent artificial intelligence, though. "[AI] just happens on its own? I'm too practical – we have to make it happen," the company's research chief Alfred Spector told us earlier this year.

It's a matter of sensory inputs, an ability to interact with the environment, and then the ability to apply this type of processing to all those domains. What you would have is a freakishly creature-like AI.
I think you're mistaken. "Sensory input" is easy. "An ability to interact with the environment" is huge. "The ability to apply this type of processing to all these domains" doesn't get you there, though. It's been fifteen years since Kismet and we haven't moved appreciably beyond it. If I haven't recommended this book to you yet, I've been remiss. The main point is not that we're getting better at AI; it's that we're grading on more and more of a sliding scale and requiring less and less in order to qualify.
Perhaps, but Kismet was way too ambitious. I'm thinking more like sewer-cleaning robots that try to get refuse out, but also need to watch their energy levels, and must get to a charging station at times. Network a bunch, and they could have their own multiple competing directives, and those of the hive, like helping move larger objects, rescuing stuck workers, etc. When I say AI, I'm not talking about C3P0. I'm talking about robots that work more or less independently from humans, and with behavior (within a domain) that we cannot completely predict.
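For concreteness, here's a minimal sketch of the "multiple competing directives" idea - one networked sewer robot arbitrating between its own battery, nearby refuse, and a hive request. Every directive name, threshold, and weight is made up for illustration; it's a sketch of the arbitration pattern, not a real controller:

```python
# Toy arbitration loop: each competing directive scores its own urgency
# and the highest bidder wins. All names, weights, and thresholds invented.
from dataclasses import dataclass

@dataclass
class WorldState:
    battery: float              # 0.0 (empty) .. 1.0 (full)
    refuse_nearby: float        # 0.0 (none) .. 1.0 (lots)
    hive_assist_request: bool   # another robot asked for help

def urgency(state: WorldState) -> dict[str, float]:
    """Each directive reports how badly it wants control right now."""
    return {
        "recharge": (1.0 - state.battery) ** 2 * 2.0,  # ramps up sharply when low
        "collect_refuse": state.refuse_nearby,
        "assist_hive": 0.8 if state.hive_assist_request else 0.0,
    }

def choose_action(state: WorldState) -> str:
    scores = urgency(state)
    return max(scores, key=scores.get)

if __name__ == "__main__":
    print(choose_action(WorldState(battery=0.9, refuse_nearby=0.6, hive_assist_request=False)))  # collect_refuse
    print(choose_action(WorldState(battery=0.2, refuse_nearby=0.6, hive_assist_request=True)))   # recharge
```

Nothing in that loop "decides" anything in a cognitive sense, but with enough directives, enough robots, and shared hive goals, the overall behavior stops being something you can completely predict - which is the sense of AI being argued for here.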
You're talking, essentially, about BEAM robotics, which, I agree, isn't C3PO-grade AI. However, I'm not sure cockroach-grade intelligence really counts as AI.
BEAM robotics, but with an AI based on this emergent type of decision-making, which I do believe is the path to C3PO.

Many years ago, I was impressed by a program outlined by Douglas Hofstadter called COPYCAT in his book Fluid Concepts and Creative Analogies. The goal of the program was to solve puzzles such as:

I change 'aabc' to 'aabd', can you do the same for 'ijkk'?
or I change 'egf' to 'egw', can you do the same for 'ghi'?

The program was non-deterministic; successive runs would result in different answers. However, the program also measured the computation required to arrive at an answer, and the program could be written so that this 'effort' could bias the output. There is no right answer, but some are more satisfactory than others. This was a very early version of what Watson does. 1000 runs of the second program lead to something like:

ijll: 612, ijkl: 198, jjkl: 121, hjkk: 47, jkkk: 9, etc...

It's fairly obvious that our own brains have competing processes, and that our decisions are the result of this competition. Check out what happens when you cut someone's corpus callosum. I imagine that some day, along this path, C3PO will appear to be of one mind, but will in fact be the emergent behavior of numerous processes running in parallel. He won't be programmed around a body of knowledge, but will absorb and create bodies of knowledge, and different ways to operate on them. Google's image processing software would be just one of hundreds, if not thousands, of similarly complex processes that could be drawn upon. Of course, some processes would probably work to coordinate them all. I give C3PO 15 years.
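For flavour, here's a toy non-deterministic solver in the Copycat spirit - emphatically not Hofstadter and Mitchell's actual program. The candidate rules and their weights below are invented; the weights just stand in for the 'effort' bias, and 1,000 runs produce a distribution of answers rather than a single one:

```python
# Toy sketch only - NOT Copycat. It shows the flavour described above:
# a non-deterministic solver whose repeated runs yield a distribution of
# answers. Candidate rules and weights are invented for illustration.
import random
from collections import Counter

def successor(c: str) -> str:
    """Next letter in the alphabet ('z' wraps around to 'a')."""
    return chr((ord(c) - ord('a') + 1) % 26 + ord('a'))

# (name, transform, weight) - weight plays the role of the 'effort' bias.
RULES = [
    ("successor of the last letter",
     lambda s: s[:-1] + successor(s[-1]), 6.0),
    ("successor of the whole final letter-group",
     lambda s: s.rstrip(s[-1]) + successor(s[-1]) * (len(s) - len(s.rstrip(s[-1]))), 3.0),
    ("successor of every letter",
     lambda s: "".join(successor(c) for c in s), 1.0),
]

def solve_once(target: str) -> str:
    """One non-deterministic run: pick a rule with probability proportional to its weight."""
    transforms = [t for _, t, _ in RULES]
    weights = [w for _, _, w in RULES]
    return random.choices(transforms, weights=weights, k=1)[0](target)

if __name__ == "__main__":
    # "I change 'aabc' to 'aabd', can you do the same for 'ijkk'?"
    tally = Counter(solve_once("ijkk") for _ in range(1000))
    for answer, count in tally.most_common():
        print(f"{answer}: {count}")   # e.g. ijkl: ~600, ijll: ~300, jkll: ~100
```

The real program builds its answers out of swarms of competing micro-processes ("codelets") under a temperature that loosens or tightens how adventurous it is; the weighted coin-flip above only fakes the end result.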
> BEAM robotics, but with an AI based on this emergent type of decision-making, which I do believe is the path to C3PO.

I think it'll get us closer to Solaris or V'GER than C3PO. This is admittedly not my expertise, but human cognition is a byproduct of millions of years of biological evolution. When you start a system in an environment utterly devoid of biology, the structures and means that will appear have no reason to resemble our own.

That was actually Alan Turing's point when he came up with the test: not "are you intelligent" but "can you imitate intelligence":

"Are there imaginable digital computers which would do well in the imitation game?"

He saw "intelligence" as an impossible thing to judge; he argued that "imitating intelligence" was easy. I think the processes you're talking about are going to lead to "intelligence" - I'd argue that in many ways, they already have. I don't think they'll ever lead to human intelligence, though. Again, read the Sherry Turkle book.
> When you start a system in an environment utterly devoid of biology, the structures and means that will appear have no reason to resemble our own.

I agree with that. However, many animals express an intelligence we can at least relate to. I expect that the same might go for non-biological AI, at least to the degree in which we operate in the same environment, but maybe not.

> He saw "intelligence" as an impossible thing to judge; he argued that "imitating intelligence" was easy. I think the processes you're talking about are going to lead to "intelligence" - I'd argue that in many ways, they already have.

For sure. I don't think there's a difference between intelligence and the perfect imitation of it. It's in the eye of the beholder. It's telling that we can't even agree upon the fundamentals of our own intelligence. We just know it when we see it.

I'll add the book. I am starting an actual doc now. I'm not sure if you've read Godel, Escher, Bach, but it's fantastic. The rest of what I've read from Hofstadter are variations on themes outlined in GEB.
> I agree with that. However, many animals express an intelligence we can at least relate to.

Right: We grew up in the same environment. We breathe the same air, we drink the same water, we bask in the same sun, we experience the same weather, our predators and prey are drawn from the same grab bag. That was my point: we have a long legacy of parallel development with animals. Machine intelligence? We're going to have absolutely nothing in common with it.

> It's telling that we can't even agree upon the fundamentals of our own intelligence. We just know it when we see it.

And I'm not sure we will. When the inception parameters are so wildly different, what we see is not very likely to strike us as "intelligence."

My dad has been trying to get me to read Godel, Escher, Bach for about 30 years now. Maybe one of these days.
There's one dialogue in it in particular that revolves around the ideas the two of you have been discussing: the Ant Fugue. The fundamental idea is that, from a physical point of view, there's nothing unique about neurons firing in a certain pattern. A population of macroscopic organisms / actors in a computer could interact and produce the same patterns, given enough moving parts and the ability to reorganize itself in response to input.
"Technology that is '20 years away' will be 20 years away indefinitely"I give C3PO 15 years.