What AI Should and Should Not Be

“It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”

via The Man Who Would Teach Machines to Think.

A long and interesting read about one of AI’s most brilliant minds – Douglas Hofstadter – his FARG research group, and the current state of mainstream AI research.

I must say I was rather conflicted reading the article. While I think GEB is probably the most important book I’ve ever read,1 and while Hofstadter is definitely a genius, I’m not entirely sure I agree with the article’s dismissal of the “small steps” approach. Coming from a philosophy background into computer science, I find that a lot of philosophical research in AI-related fields (like epistemology or logic) is somewhat wishy-washy or superfluous, as is “philosophically-inspired computer science” research.2 Then again, I realize that the single most harmful threat to my own community of logics-for-AI and multi-agent systems is creating various formalisms (algorithms, logics, diagrams…) solely because “we can”, and because it’s always better to have more theorems and proofs, even if no one knows what they’re for.

The article linked above provides a somewhat fresh and broad perspective on what AI is today, while trying to answer the question of what AI should be. These are the issues keynote speakers at big AI conferences should be addressing, trying to inspire people and make them contemplate the big-picture issues; we don’t need another keynote on SAT solving and ILP.3


  1. Confession: I have never read the whole book. There is a somewhat interesting story behind it: I started reading it when I was a freshman at MISH, but unfortunately there was only one copy available for loan at the university library, and I could only borrow it for a month. So I would borrow it, read a couple of chapters, return it within a month, wait until it was available again (someone would always borrow it), and then go back to reading. After some time I got a bit tired of the routine and stopped reading GEB. I bought my own copy when I was finishing my master’s, but my knowledge of AI and logic was broader by then, and I found some of the later chapters of GEB a tad boring, and thus never finished it. Still, GEB was the main reason I wanted to work in AI. ↩︎

  2. By “philosophically-inspired” CS I mean researchers in computer science who claim to be doing philosophically relevant work by attempting to capture certain notions in formal ways. Unfortunately, with very few exceptions, these attempts result in work that is philosophically shallow and not applicable from an engineering/hard-AI point of view. (I feel I just made a lot of enemies by writing this footnote.) ↩︎

  3. There you go, now I won’t get a post-doc at HKUST. Shit. ↩︎