Symbol Grounding and its Implications for Artificial Intelligence

In response to Searle's well-known Chinese room argument against Strong AI (and, more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of pres...
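The first contribution, selecting a category set for fundamentally continuous sensory data, can be illustrated with a small clustering sketch. The snippet below is a hypothetical illustration, not the paper's actual method: it partitions synthetic two-dimensional sensor readings at several candidate granularities with k-means and keeps the partition that scores best on an internal criterion (silhouette width). All names, data, and the choice of criterion here are assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's method): pick a category set for
# continuous sensory data by clustering at several granularities and
# keeping the partition with the best silhouette score.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic "sensor readings": three latent sources in a 2-D feature space.
sensors = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(100, 2)),
    rng.normal(loc=(3, 0), scale=0.3, size=(100, 2)),
    rng.normal(loc=(0, 3), scale=0.3, size=(100, 2)),
])

best_k, best_score, best_labels = None, -1.0, None
for k in range(2, 8):  # candidate numbers of categories
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(sensors)
    score = silhouette_score(sensors, labels)  # internal quality criterion
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

print(f"selected {best_k} categories (silhouette = {best_score:.2f})")
```

On this toy data the criterion recovers the three latent sources; the point is only that some objective, data-internal score must adjudicate between the many possible categorisations of continuous input.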
Type: Conference
Year: 2003
Where: ACSC
Authors: Michael J. Mayo