Hacker News | grumpy-buffalo's comments

I wish the term "true AI" were replaced with "strong AI" or "artificial general intelligence" or some such term. We already have true AI - it's a vast, thriving industry. AlphaGo is obviously a true, legitimate, actual, real, nonfictional example of artificial intelligence, as are Google Search, the Facebook Newsfeed, Siri, the Amazon Echo, etc.


We already have true AI - it's a vast, thriving industry.

Or how about calling that vast, thriving industry "weak AI," or "clever algorithms," which is what those systems really are. The original definition of AI was what we now call strong AI, but after some lesser problems were solved without actually creating strong AI, we had to come up with some name for those.


I want to see an AI that can improve itself by developing new algorithms for arbitrary tasks. I wonder how far off we are from that now?


You know, if you're at the point where you can give a human-readable spec of the problem and the AI can make a passable attempt at it, that's basically the Turing Test -- hence why I think it deserves its status as holy grail. Something that passes would really give the impression of "there's a ghost inside here".


Rather than a ghost, I wonder if we'll ever have the average person looking at brains and thinking "there's a program inside here."

And then to reverse it, imagine that the world really is some kind of massive simulation... and that there are backups of the save()-ed :)



The problem is that fundamentally all our AI techniques are heavily data-driven. It's not clear what sort of data to feed in to represent good/bad algorithm design


Interestingly, just ~5 years ago the term "AI" was frowned upon and people insisted on using the term "machine learning". The reasoning was that "AI" is just too convoluted a term: people will insist on comparing with humans, and then the whole question of consciousness invariably arises, which derails scientific inquiry into undesirable debates.


I think that the refusal to seriously engage with consciousness is the main obstacle to progress toward general AI.


I guess it boils down to a difference in opinion as to what it means to be intelligent.

IMO, AlphaGo isn't "intelligent" because all it can do is play Go. For example, it can't be taught to play chess without completely reprogramming it.

Surely it's using a lot of clever algorithms, but where's the intelligence?

TBH, there's not much point to this post, because the intelligence in AI has been twisted more towards your usage almost since the beginning.


I agree with you and would like to double down: Intelligence is defined as "...the ability to acquire and apply knowledge and skills."

Even if Alpha-Go could play chess, manage air traffic control, and play Go at the same time, it would never know it was doing that. As you mentioned above, Alpha-Go's "intelligence" is specifically pre-programmed algorithms and accurate numeric inputs. If it can't create its own algorithms or translate abstract data into numbers it can crunch on its own, then there is no intelligence there. It just "smells" like intelligence.


I tried to argue once that the term "synthetic AI" would be better than AGI, as it removes any notion that artificial implies fake. [Note to surprised self: apparently there is a Wikipedia page describing this exact argument https://en.wikipedia.org/wiki/Synthetic_intelligence]


"Scotsman AI"


This seems like semantic quibbling. All those terms seem synonymous to me.


I feel that there are legitimate reasons to split them into those terms, as there are some significant distinctions that could be made in terms of the "level of intelligence" of the AI.

For a much more detailed exploration of this topic, I think Wait but Why's article did a pretty thorough job: http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


I prefer computational intelligence to AI, so maybe CI?


Strong AI means a different thing.


"Useful AI"?


How about just for playing a good game of Go?


So I suppose the fact that the opponent is a computer program should not be factored into the reaction to Sedol's win at all.

I hope your reductionist explanation is not accurate because it would imply that we are already so highly conditioned to machines and to AI that this match is thought to be no different from a match between two humans.


The machine beat him 3 times and was more or less expected to win again. It didn't, which would imply that Sedol played an exceptionally good game. Seems pretty obvious to me.


You're acting like following arbitrary sets of rules to maximize a value function isn't something that computers excel at. The only thing that was holding up computers for Go was simply time; there were too many possibilities to consider. Humans are really, really good at heuristically trimming possibilities, but sometimes to the detriment of finding maxima.
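To make "maximize a value function over a game tree" concrete: AlphaGo's actual method (Monte Carlo tree search plus neural networks) is far more sophisticated, but a toy minimax sketch shows the bare idea of a computer exhaustively considering possibilities that a human would heuristically trim. The game here (add 1 or 2, highest final number wins) is invented purely for illustration.

```python
# Toy sketch of maximizing a value function by game-tree search.
# Not AlphaGo's algorithm -- just plain depth-limited minimax.

def minimax(state, depth, maximizing, moves, value):
    """Search the game tree to `depth` plies; return the best achievable value."""
    children = moves(state)
    if depth == 0 or not children:
        return value(state)
    if maximizing:
        return max(minimax(c, depth - 1, False, moves, value) for c in children)
    return min(minimax(c, depth - 1, True, moves, value) for c in children)

# Hypothetical mini-game: state is a number; each move adds 1 or 2;
# the maximizing player wants the final number high, the minimizer low.
moves = lambda s: [s + 1, s + 2] if s < 10 else []
value = lambda s: s

best = minimax(0, 4, True, moves, value)  # max picks +2, min picks +1, ...
```

The point of the commenter stands: nothing here is "intelligent" in the human sense; the program simply enumerates what a human would prune away.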


You know that computers beating us at chess allowed us to get better at chess, right?


Fan Hui, the professional player they beat late last year, already improved his game by quite a bit from the occasional match with AlphaGo (deepmind hired him as a consultant to test). He's won every single game in the last European championship, and moved from around top 600th player in the world to around top 300.


Wow. I know nothing about go, but 600 → 300 sounds like quite an improvement.


If you're interested in computational complexity theory, I recommend "Computational Complexity" by Christos Papadimitriou. It's a classic, though it's a bit dated.


There are plenty of senses in which infinity IS a number -- or rather, many numbers. See e.g. the Wikipedia articles on cardinal numbers, ordinal numbers, hyperreal numbers, and surreal numbers.


> There are plenty of senses in which [...]

Yes, perhaps. But still, it IS not a number.

Calling infinity a number is a "hack" done by mathematicians.


Primes ARE fundamentally predictable, in the following very strong sense: There is a deterministic polynomial time algorithm for checking whether a number is prime (i.e. the runtime is polynomial in the number of digits of the number being tested.) http://en.wikipedia.org/wiki/AKS_primality_test
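A full AKS implementation is intricate; as a sketch of the same idea (deterministic, no randomness, no unproven hypotheses), here is Miller-Rabin with a fixed base set that is known to give correct answers for all n below roughly 3.3 × 10^24 — far beyond 64-bit integers. AKS removes even that bound, at the cost of a worse (but still polynomial) exponent.

```python
# Deterministic primality test: Miller-Rabin with fixed witness bases.
# Correct (not merely probabilistic) for all n < 3,317,044,064,679,887,385,961,981.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    small = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n in small:
        return True
    if any(n % p == 0 for p in small):
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)          # a^d mod n
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):    # square up to s-1 times
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True
```

Each round costs a modular exponentiation, so the whole test runs in time polynomial in the number of digits — exactly the "predictability" being claimed.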


Careful -- because R is uncountable, there is no system for representing arbitrary irrational numbers with finite strings. All irrational numbers with periodic continued fractions are quadratic irrationals, i.e. they can be written as A + B sqrt(C) where A, B, C are all rational. And of course, that formula immediately provides a much more straightforward way to represent such irrational numbers by finite strings!
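The periodicity claim is easy to see computationally. This sketch uses the standard integer recurrence for the continued fraction of sqrt(N) (exact arithmetic, no floating point), and for any non-square N the coefficients visibly fall into a repeating cycle.

```python
# Continued-fraction coefficients of sqrt(N), via the classical recurrence
# m' = d*a - m,  d' = (N - m'^2) / d,  a' = floor((a0 + m') / d').
from math import isqrt

def sqrt_cf(N, terms):
    """First `terms` continued-fraction coefficients of sqrt(N)."""
    a0 = isqrt(N)
    if a0 * a0 == N:
        return [a0]          # perfect square: rational, expansion terminates
    m, d, a = 0, 1, a0
    out = [a0]
    while len(out) < terms:
        m = d * a - m
        d = (N - m * m) // d
        a = (a0 + m) // d
        out.append(a)
    return out

# sqrt(2) = [1; 2, 2, 2, ...]   (period 1)
# sqrt(7) = [2; 1, 1, 1, 4, 1, 1, 1, 4, ...]   (period 4)
```

So "periodic integer sequence" is one finite-string representation, and the A + B sqrt(C) form mentioned above is another.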


Yes, I avoided the word 'arbitrary'. The references make it clear that what they're dealing with is the computable reals - only a countably infinite subset of R. Other representations get used too (like streams of dyadic rationals), and the computable reals contain more than just quadratic roots - e.g. pi - but the computable reals are all you get, and this means there are some rough edges - http://en.wikipedia.org/wiki/Specker_sequence.

Still fascinating though.


Could you please explain why you think this guy's questions are stupid? They seemed like good questions to me. His son's school's definitions of "fact" and "opinion" seem to not be mutually exclusive, which seems to contradict the implicit assumptions in the exercises assigned to the students.


Oh, I took the school's definition as mutually exclusive --- and I assumed the son did as well.

I didn't see how it was not --- and so the line of questioning seemed to be willfully ignorant of that mutual exclusivity.

We have "not ripe" and "ripe" apples. Is this apple "ripe"? But see, this part of it here isn't ripe! So it's both ripe and not ripe --- ergo, you were taught wrong to distinguish between ripe and not ripe!

blank stare == (are you really that stupid dad?)


I rather liked the article.


I found myself nodding in agreement with the school and its separation of values/morality/opinion from facts/truth.

It's all just words, lines we use to divide, but they seem to be drawing nice solid lines --- whilst the parental unit that wrote the article seems remarkably confused about the factualness of his own beliefs.

