15 September 2007

Artificial Intelligence and Democratic Education

Please note that I have posted some more thoughts on lecture #2 in the comments on the previous post. Also, Shawn has had several nice posts recently on "Making It Explicit." Not that I suspect anyone reads this blog who doesn't already read "Words and Other Things" as well.

Well, just got done listening to Brandom's third Locke Lecture. I don't know all that much about AI, and I know practically nothing about early childhood education, so I feel like I learned a lot from this. I had no idea that there was a serious problem with teaching fractions as compared to addition; this makes many old "Peanuts" comics even better than they already were. :3

I quite liked pretty much everything Brandom had to say about AI, the Turing test, education, and what they have to do with his concern for pragmatics. I'm not sure how much of what he said was meant as a summary of what everyone "already knew" about the state of the field, and how much he was offering as a novelty, but he acted as if he were drawing attention to some neglected topics in computer science. I wouldn't have thought that pragmatics had been so broadly ignored, given the role it plays in the Turing test (an automaton passes the test just by conversing indistinguishably from a person), but perhaps things are more badly in need of a shake-up than I'd thought.

That one practice was PP-sufficient for another by algorithmic elaboration meant that any creature which could do the first thing could, by following a prescribed algorithm calling only for sub-tasks of the practice already mastered, do the other. Brandom noted that this is a bit of an idealization if you're trying to apply the model to humans, since there are psychological barriers to responding to certain inputs with certain outputs (even if an algorithm bids one to). He notes a further problem with trying to find a practice which is PP-sufficient by algorithmic elaboration for discursive activity without already presupposing discursive activity, one akin to the "frame problem": when one revises one's beliefs upon a shift somewhere in one's inferential matrix, every element is potentially in need of revision. There needs to be a way of determining what is and what isn't relevant for analysis, but the only way anyone knows of doing this requires using language.

But, Brandom points out, there is another sort of PP-sufficiency relation we can work with: that of training. Apparently there are empirically well-supported algorithms such that anyone who can count can be taught to add -- but not to subtract; they're still working on that one. I'd be rather interested in reading more about this. This is an addition I didn't see coming; I was not aware that the study of pedagogy had come so far.
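The counting-to-adding case can be put in programmers' terms, which may make the PP-sufficiency claim vivid. A minimal sketch, entirely my own illustration and not anything from the lecture: treat "counting one step" as the sole mastered primitive, and let the algorithm elaborate an adder out of it. (The names `count_up` and `add` are my inventions for the example.)

```python
def count_up(n):
    """The already-mastered primitive practice: counting one step further."""
    return n + 1

def add(m, n):
    """Addition elaborated algorithmically out of counting alone:
    starting from m, perform the mastered sub-task (count up) n times.
    No operation other than the counting primitive is used."""
    result = m
    for _ in range(n):  # the algorithm only ever calls the mastered sub-task
        result = count_up(result)
    return result

print(add(7, 5))  # -> 12
```

The point of the toy: any agent that can reliably do `count_up` and follow the recipe thereby counts as able to add, which is what PP-sufficiency by algorithmic elaboration amounts to; the frame-problem worry is that no such recipe seems available for discursive activity as a whole.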

I don't have much to say about this lecture; it is simply very solid, enjoyable work. So, to make up for a lack of content: LOLRussell.

Addendum: I have now listened to P. Stekeler-Weithofer's reply, and Brandom's reply-reply, and the following Q&A, and I have to say: I don't have the foggiest idea what Stekeler-Weithofer is on about. He seemed to just be going off on tangents for pretty much the entire half-hour (what Hegel's notion of Allgemeinheit (universality) as opposed to Einigkeit (unity), or Heidegger's critique of "humanism," has to do with AI, or with anything Brandom has mentioned, I could not even begin to guess, but Stekeler-Weithofer felt compelled to mention both). His criticisms of the Turing test seem entirely to miss the point of the test; it's a way to point out that manufacturing a passable simulacrum is the goal of AI. The claim that you can't know for sure whether you're talking to a Turing machine unless you go and find out from the people who made it, and that the Turing test is therefore structurally flawed, strikes me as the stupidest objection to the test that I have ever heard. I will note that Brandom's reply does mention a criticism of the Turing test, but it is not any of the ones Stekeler-Weithofer mentioned; I will also note that Brandom's reply was less than ten minutes long, so I am pretty sure I am not alone in thinking that Stekeler-Weithofer was a lousy commentator.

On the other hand, I am curious what N.N. will have to say about P. Stekeler-Weithofer's monologue. I know N.N. has some heterodox views on cognitive science, so he might be a more charitable listener than I've been.

3 comments:

N. N. said...

Daniel,

I haven't had a chance to listen to the third lecture. I should get to it this week. My vacation put me behind, and I have not yet caught up.

Daniel Lindquist said...

Take your time!

(If this is how philosophers should greet one another, presumably it will work well enough for welcoming them back, too.)