Tuesday, July 12, 2005
A Little Story
I'll give a personal example to illustrate. Late Monday evening I was playing some jazz improvisations on the Westin hotel's lovely concert grand piano. After I finished, I had a chance to meet one of the AAAI invited speakers, who shared his interest in jazz performance, and we briefly discussed some of our music-related research and projects. All went well, and I felt energized after a stimulating day of conversations with fascinating people. I meandered up to the next floor on the escalator and then realized that I needed to take the elevator to return to my hotel room. I pushed the elevator call button and, when the door opened, discovered that the same invited speaker was already inside. Now, this is where my brain's attempt to apply a stairwell (or perhaps a hallway) conversation rule failed rather spectacularly (at least in terms of the conversation's success).
Fortunately, it was a temporal rule, not a spatial one, that I applied incorrectly. Basically, I failed to take into account the sharply defined time constraint imposed by the elevator itself, and when the invited speaker politely mentioned one of my projects, I launched into a series of statements about it, probably out of excitement about the subject. Unfortunately for me, the elevator abruptly "dinged", the door opened, and the speaker exited with a terse "good night", leaving me in a rather awkward state.

Some questions: If the main actor were a robot, would it detect this conversation failure? Could it learn from the mistake? (I hope I myself will!) Could a robot create a blog, or a narrative describing an incident that it experienced? What would an intelligent robot do if it entered an elevator with two invited speakers, one a robot and the other a human (where presumably the conversation rules / protocol would differ for each)? For example, if it decided to converse with the robot speaker, would it use natural language so as not to alienate the human speaker? Or maybe it would hold a wireless, data-based conversation with the robot and a simultaneous natural-language conversation with the human. (But the time constraint might not apply to the wireless mode, and perhaps the two robots would not determine their conversation patterns by locality: robots might be connected to an intra-robot communications network that determines conversation patterns in other ways. I mentioned this to Caroline, and it made her think of something from Jungian psychology.)
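Just to make the "temporal rule" idea concrete, here is a minimal, purely hypothetical sketch in Python of how a robot might represent such rules and notice this kind of failure. The ConversationRule class, the RULES table, the numeric time budgets, and the detect_failure check are all invented for illustration, not taken from any actual system.

```python
# A toy sketch (entirely hypothetical): conversation "rules" keyed by setting,
# each with a rough time budget. The elevator rule's budget is the one I broke.

from dataclasses import dataclass


@dataclass
class ConversationRule:
    setting: str          # where the conversation happens
    time_budget_s: float  # rough time available before the setting itself ends it


# Invented numbers, just to make the temporal constraint explicit.
RULES = {
    "hallway": ConversationRule("hallway", 120.0),
    "stairwell": ConversationRule("stairwell", 60.0),
    "elevator": ConversationRule("elevator", 20.0),
}


def detect_failure(rule: ConversationRule, reply_duration_s: float) -> bool:
    """A conversation 'fails' (in this crude sense) when the reply outlasts
    the time the setting allows -- e.g. the elevator dings first."""
    return reply_duration_s > rule.time_budget_s


if __name__ == "__main__":
    rule = RULES["elevator"]
    my_reply_s = 90.0  # an excited monologue about the project
    if detect_failure(rule, my_reply_s):
        print(f"Failure in the {rule.setting}: a {my_reply_s:.0f}s reply "
              f"blew the ~{rule.time_budget_s:.0f}s budget. Next time, learn the rule!")
```

Of course, a real robot would presumably need to learn those budgets from experience (and from awkward exits) rather than have them hard-coded.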
Anyways, enough rambling for now!
Monday, July 11, 2005
So, What's AI Research, Anyways?
On the opening day of ASAMAS, Gal Kaminka of Bar Ilan University gave an Introduction to Agents and Multiagent Systems. At the beginning of his talk, he mentioned an incident in which one of his papers was rejected from an Agents conference because one of the reviewers thought the research was not related to agents. Gal was quite annoyed about that for a couple of weeks, which is understandable; anyone would be annoyed if a peer reviewer decided that you are not doing what you think you are doing. But later Gal also talked about meeting a researcher at Bar Ilan who makes the best batteries, and how, in his view, the battery maker was also a robotics researcher.
That got me thinking: Where is the line between AI research and non-AI research? Does machine vision or robotic arm design count as AI research? Well, I think many would say so. But then what about the batteries and motors used in robots? Is that AI research? If it is, then what about the chemicals used in those batteries? Is THAT AI research? How far do we go? Where do we draw the line?