
Monday, July 18, 2005

Still more AAAI-05 photos

Barney Pell has 70 pictures from AAAI-05 on his Flickr site.

The future is bright

Just a quick post to say thanks to everyone who helped make this conference an especially fascinating and enjoyable experience for me. I feel I have a broader outlook on the academic community in general, as well as a great deal of excitement about the future of both human and artificial intelligence. I think this blog was a great idea for the conference and I feel honoured to have been a participant. It was also great to see how there is an interest from a diverse range of people and organizations in the broad field of Artificial Intelligence. I believe Dr. Minsky (or someone else?) said that AI is solving problems that have not been solved yet, or making computers do things which only humans could do in the past. Either of these goals is worthwhile, but I think we have to just go about our work with a great deal of respect for the past, for institutions, for experts, and for leaders. The future is a bright one, and we just need to remember "respect". (Sounds like a football mantra or something, but it could also apply to AI researchers and theorists too.)

Sunday, July 17, 2005

More pictures from AAAI

There are several nice sets of photos from AAAI-05 on Flickr: If you have photos you want to share, please submit them to our blog using the link on our page or send us a pointer to them.

Saturday, July 16, 2005

Churches and Cathedrals

What a week! After the Demo on Tuesday my whole team was pretty tapped. We went to the Church Brew Works to recover... What better way than enjoying a beer at church : ) After the couple of talks on Wednesday we met up with Professor Yang Cai at his lab at CMU. One of his students showed us around the campus (very nice!). The buildings were extremely old and well kept, and the receptionist in the CS building was a roboceptionist, a good mix of old and new. We got to play with the lab's eye tracking system (see the picture on Flickr). If you look at a spot on the screen for 5 seconds or longer, that spot gets selected. We headed over to the Cathedral of Learning to find 42 stories of classrooms and lounges for students to study in, etc. After spending some time taking pictures of the main entrance, we headed inside to find a gorgeous foyer that required more photo-taking time (thank God for digital). We finally headed up the network of elevators that you have to take to get to the top; each one only went up 10 floors or so. Once at the top we stumbled into a group of U Pitt students in what must have been a student lounge. I overheard them talking about their experiences interning at the hospital and noticed that one of the guys had his shirt off... "Very odd" I thought, "Where's the gym?" I didn't ask of course, but after checking out the view from the top of Pittsburgh, we headed down and met a nice woman in the elevator who explained the shirtless guy... he had climbed the 42 stories of stairs for exercise. Looking back at the week, I couldn't have asked for a better first AAAI. I met a whole community of people with similar interests and got a glimpse of a beautiful city and two universities. Thanks to everyone who helped make AAAI a success!
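The eye tracker's dwell-based selection is a neat trick, and the core logic is simple enough to sketch. I know nothing about the lab's actual implementation, so everything below (the function name, the 5-second dwell, the fixation radius) is my own guess at how such a thing might work:

```python
DWELL_SECONDS = 5.0      # fixation time required to select a target
FIXATION_RADIUS = 40     # pixels; gaze samples within this radius count as one fixation

def detect_dwell_selection(samples, dwell=DWELL_SECONDS, radius=FIXATION_RADIUS):
    """Given (timestamp, x, y) gaze samples in time order, return the (x, y)
    of the first point fixated for at least `dwell` seconds, or None."""
    start_idx = 0
    for i, (t, x, y) in enumerate(samples):
        t0, x0, y0 = samples[start_idx]
        # If the gaze drifted outside the fixation radius, restart the dwell timer.
        if (x - x0) ** 2 + (y - y0) ** 2 > radius ** 2:
            start_idx = i
            continue
        if t - t0 >= dwell:
            return (x0, y0)
    return None
```

Real eye trackers have to cope with noisy samples and blinks, so they smooth the gaze signal first, but the restart-the-timer-on-drift idea is the heart of dwell selection.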

Comparing with last year's AAAI

I attended last year's AAAI conference (San Jose) as well, so let me try to compare the two. I saw some great research at both, although I think what I saw last year was slightly better overall (although that may just have been due to the sample that I got). Nevertheless, I felt that this conference was much more lively than the previous one. While the Westin at some points gave us perhaps a little too little space (it was hard to sit down anywhere when not attending a talk, and often the rooms were overcrowded), the convention center at San Jose was more reminiscent of an airport terminal and felt very impersonal. The many robots and other demonstrations this year also greatly improved the atmosphere. I'll leave comparing the locations to someone else because I'm based out of Pittsburgh so I didn't have the typical staying-at-a-conference experience...

Thursday, July 14, 2005

AAAI robot movies

For those of you who were not present at the conference and thus did not get to see some of the cool robots in action, here is your chance: robot1 & robot2

[Note: You may have to tilt your head slightly (where slightly = by 90 deg) to watch the second video]

Highlight: Doctoral Consortium + Poster session

I agree with Mykel: the doctoral consortium was also one of the highlights for me this year at AAAI! It provided a great forum for top researchers and current Ph.D. students to present, discuss, and exchange research ideas! It was a unique opportunity, where the students were able to get feedback in a safe environment. I learned a lot about how to give my talk next time and got valuable input. One nice side effect was that it greatly increased the number of people I knew at the conference, and I made many new contacts through the new friends from the AAAI DC. I would like to thank Kiri and all the other people on the panel for their hard work and time. It was a great experience. I would have contributed more to the blog, but there was no connection at Duquesne University and the wireless on my laptop didn't work, so I couldn't use it at the Westin. I also enjoyed the poster session a lot, since I had many great talks and discussions with other researchers which resulted in new ideas! Thanks again!

Doctoral Consortium

The highlight of AAAI this year was the Doctoral Consortium. I had a great time meeting other Ph.D. students and learning about their work. I gave my presentation on the first day, and I got some valuable feedback. Suggestions from Michael Littman, Marie desJardins, and Kiri Wagstaff were particularly helpful. During the Doctoral Consortium dinner, I was talking with Michael Littman about how getting feedback in this way is indeed a "once in a lifetime experience." I had a great time, and I would highly recommend that Ph.D. students in the early stages of their research apply next year!

Final thoughts

The photos will have to wait for tomorrow when my camera's charged, but I just wanted to thank the organizers of the conference for putting this all together. Despite the criticism in that post the other day, it's still been a tremendously exciting and positive experience overall. The invited talks were great, although I didn't get to attend all the ones I wanted to, and some of the technical paper sessions were good as well, although not as accessible given my relative inexperience. For me, I'd have to say the Intelligent Systems Demos were the highlight of the conference. Not that I have any idea what the rest of the demos were like, as we were so busy at our table I didn't have time to eat, let alone wander around to see other demos. I'll try to post something a little more complete when I get back home tomorrow. I should sleep now so I don't sleep through check-out time tomorrow.

Wednesday, July 13, 2005

Running a Demo Two: The Segway Reloaded

After technical difficulties eliminated the possibility of a true demonstration on Sunday night, the Segway team and I ventured forth on Tuesday night, fiercely determined to show the world the kicking abilities of our 'bot. We were spared the transportation difficulties by having the foresight to keep our Segways at the hotel. Brett, Brenna and Yang worked quickly and efficiently, supplying the autonomous Segway with a working camera and preparing for the impending demo. After I briefly adjourned with Yang to retrieve some posters for the night's poster session, I returned to see that the robot was working perfectly, hunting down the ball and kicking it as needed.

As the time for the demo drew closer, Brett and Yang retreated to the poster session, where they would be presenting some of their research. Brenna and I were left to run the demo. Our area was staked out by a large white painter's sheet, upon which the autonomous Segway rested. The demo involved either Brenna or me riding the human transporter (HT) Segway in order to catch the ball. Then, we would place the ball somewhere on the white sheet, at which point the robot would search for the ball, grab it, and kick it to us. This demonstrated the capabilities of the robot to actively play soccer, although our limited space denied us the ability to demonstrate any form of teamwork or AI. We set the robot to use the body-kick mode, rather than its powerful kicker, to ensure that balls didn't go flying off at high speeds. One of us also kept a close watch on the robot at all times, ready to stop it at any time using a handy GUI.
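I have no idea what the Segway's actual control code looks like, but the behavior we showed (search for the ball, drive to it, kick it, with a selectable kick mode) can be caricatured as a tiny state machine; every state and action name here is invented for illustration:

```python
# States for the demo behavior: search for the ball, approach it, kick it.
SEARCH, APPROACH, KICK, DONE = "search", "approach", "kick", "done"

def step(state, ball_visible, ball_in_range, kick_mode="body"):
    """One tick of a hypothetical controller. Returns (next_state, action)."""
    if state == SEARCH:
        # Spin in place until the camera picks up the ball.
        return (APPROACH, "drive_to_ball") if ball_visible else (SEARCH, "spin")
    if state == APPROACH:
        if not ball_visible:
            return SEARCH, "spin"      # lost the ball; go back to searching
        # Kick with the gentle body-kick rather than the powerful kicker.
        return (KICK, f"{kick_mode}_kick") if ball_in_range else (APPROACH, "drive_to_ball")
    if state == KICK:
        return DONE, "stop"
    return DONE, "stop"                # DONE: a human can reset via the GUI
```

The watchful human with the stop GUI would sit outside this loop, able to force the robot into a safe state at any time.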

Hence, this demonstration went much, much better than our first attempt, and at times we even drew a nice crowd. I again had the opportunity to interact with many conference attendees and even offer the occasional Segway ride (although the popularity of the bar made me slightly reticent to openly offer rides as I had on Sunday).

Once the crowd of onlookers had died down to a small handful (i.e. the food was no longer being served), Brett told me that I could take a break to go check out the poster session. Here was firm evidence that I was well out of my league, as I could hardly understand the titles of most of the posters. I also had a chance to explore some of the other robotic demonstrations, taking a nice gander at the android project (an attractive old chap, though he had a bit of a potbelly) and talking with a member of CMU's Aibo team. All in all, it was definitely another good experience, as I was exposed to a variety of new projects in a really cool field.

Jim Hendler: knowledge is power

Jim Hendler’s presentation on the semantic web was as fully attended as the “Web 2.0” talk given by Tanenbaum. He used lots of demos, e.g. RDF in PDF, Swoop, and Swoogle, to justify the practical side of the semantic web -- “You are here”. The simple semantic web is less expressive than existing KR languages; however, it does have a significant amount of knowledge (millions of documents and thousands of ontologies).
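Hendler's demos made the semantic web feel concrete, so here is my own toy illustration of the underlying idea: a knowledge base is, at its simplest, a set of subject-predicate-object triples plus pattern matching. The facts and names below are made up for illustration; real systems use RDF syntax and SPARQL-style queries, not Python tuples:

```python
# A miniature triple store: each fact is a (subject, predicate, object) triple.
triples = {
    ("JimHendler", "affiliatedWith", "UMD"),
    ("JimHendler", "gaveTalkAt", "AAAI-05"),
    ("AAAI-05", "locatedIn", "Pittsburgh"),
}

def query(pattern):
    """Match a triple pattern; None acts as a wildcard (like a SPARQL variable)."""
    return {t for t in triples
            if all(p is None or p == v for p, v in zip(pattern, t))}
```

For example, `query((None, "gaveTalkAt", "AAAI-05"))` plays the role of the SPARQL query "?who gaveTalkAt AAAI-05". The point Hendler kept making is that even this weak level of expressiveness becomes powerful when millions of documents publish such triples.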

Tucker Balch's invited talk

I thought Tucker Balch's invited talk yesterday was one of the high points of the conference. It was a great example of how to convince a large group of people that your research is important and interesting while remaining accessible to a larger audience. Most of Tucker's earlier work, or at least the stuff I'm familiar with, is about artificial multi-agent systems, e.g. teams of robots. But he seems to have switched focus somewhat dramatically, and is now studying natural multi-agent systems, e.g. ant colonies and beehives. He's not a biologist though; instead he has found many applications of machine learning in his endeavor to better track and understand the emergent behavior of these complex, evolved systems. For example, he used vision processing and filtering techniques to automatically track the positions of a group of ants. Hence, some poor graduate student's hours of menial labor can now be replaced with a camera. His talk really reminded me of the value of staying grounded in real problems. His contributions did not seem that substantial from a methodological perspective, but he found ways to apply existing technologies to real problems in a practical way. The efforts of biologists may be substantially improved as a result.
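To give a flavor of the "replace the grad student with a camera" idea, the sketch below finds the centroid of moving pixels by differencing two grayscale frames. Balch's actual system is far more sophisticated -- real ant tracking has to handle many targets, occlusion, and identity maintenance, typically with particle or Kalman filtering -- so take this only as the simplest possible caricature:

```python
def track_blobs(prev_frame, frame, threshold=30):
    """Very rough single-step motion detector: threshold the difference between
    two grayscale frames (lists of lists of 0-255 values) and return the
    (row, col) centroid of the moving pixels, or None if nothing moved."""
    moving = [(r, c)
              for r, row in enumerate(frame)
              for c, v in enumerate(row)
              if abs(v - prev_frame[r][c]) > threshold]
    if not moving:
        return None
    n = len(moving)
    return (sum(r for r, _ in moving) / n, sum(c for _, c in moving) / n)
```

Run on each consecutive frame pair, this gives a noisy position track of a single moving ant; the filtering techniques Balch mentioned are what turn such raw detections into clean, per-ant trajectories.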

Participating in an experiment

The "hall of robots" is a fun aspect of the conference. Whether or not the robots do interesting things or represent scientific advancement is beside the point: the hall is filled with beeping, twitching pieces of machinery covered with blinking lights, raising the chaos and geek-joy levels all around. Back in one corner, though, there was something special going on: a joint NRL/Missouri team who were actually gathering experimental data right here at the conference. I signed up to play with their robots, and got to give them some data as well: during the experiment, you drive the robots through a sketch interface from inside a booth where you can't see them, while outside, passersby can watch them surging about doing the tasks you're assigning them. I thought this was a really neat idea: on the one hand, they get real data gathered --- on the other, they have an excellent, ever-changing attraction to draw people's attention.

Jeff Hawkins' Talk on a Biologically-Inspired Intelligent Architecture

Yesterday, I was particularly impressed by Jeff Hawkins’ dynamic talk on building an intelligent architecture based on the current theory of the neocortex. For those who missed the talk, he proposed a hierarchical temporal memory system with a Bayesian construction that is motivated by the current theory of mammalian neocortex construction. One interesting point of his architecture is that he claims it links both sensory and motor abilities into the units. Most of his talk is covered in his book On Intelligence, but the mathematical construction is not included, as it is a more recent development. I am particularly sorry that we didn’t get a chance to see a demo of his software that did pattern recognition of primitive symbols.

One of his main points was that the individual units of the network are homogeneous initially (regardless of the sensory modality), but it is the connections between the units that lead to specialization of particular regions. This notion of initial homogeneity is very attractive -- it reminds me of blank real estate. Certain areas are used for houses; others are cultivated to grow crops, etc. The regions become specialized, although they are flexible to a degree. I also like the notions of units being tightly integrated with both sensory and motor systems, and having many excess units. These ideas link solidly with those discussed by Minsky in his keynote talk. I’m reserving full judgment on the architecture until I see the details, but I did appreciate his ideas. Many of these ideas have been shared by AI researchers for quite a while, but it is nice to see that the biologically-based approach yields similar conclusions.

Cold dinner

Tonight was incredible. I had a great time presenting at the Intelligent Systems Demonstrations (my first time), and we got some great suggestions from everyone. We started nearly an hour early, were busy the entire time, and went long past the scheduled end. When it was my turn to duck away from the table to go get dinner around 7, I jokingly told someone in line next to me that the food would be cold long before I got a chance to touch it. I didn't actually get to eat it until after 9:00. And now my throat is hoarse, and I probably won't be able to speak tomorrow, but what an experience. I'm truly grateful for getting a chance to participate in this.

Tuesday, July 12, 2005

Google's conference break game

Google played an interesting game to gather people during a conference break: they handed out numbered tags to people passing by without telling them what would happen next. Very soon, several tens of people had stopped and clustered in front of Google's table. About 50 people had gotten tags, including me. In the end, around 100 people were squeezed into a small area waiting for the final result, which turned out to be a lottery for some fun stuff. Even after it ended, the game kept people around talking about the popular word “google”. FYI: The American Dialect Society chose the verb to google as the "most useful word of 2002." (source: Wikipedia)

Not the talk I expected

If I had to pick just one talk to attend today, I would go to "Networked Distributed POMDPs" by R. Patrascu, C. Boutilier, R. Das, J. Kephart, G. Tesauro, and W. Walsh. After all, I work on distributed POMDPs, and Craig Boutilier is an awfully smart guy.

I was really looking forward to this talk. So, I skipped out of the Game Theory session early and hurried over to Markov Decision Processes 2 in time to catch most of Yaxin Liu's (excellent) talk on risk-sensitive planning. Then, Craig plugged in his computer and up popped a slide that read:

Distributed Networked POMDPs
  • You can't do it.
  • Don't waste your time.
There was a typo in the program. Craig actually talked about autonomic computing.

Well, ok then. If I had to pick just one talk to attend today, I would go to "A Synthesis of Distributed Constraint Optimization and POMDPs" by R. Nair, P. Varakantham, M. Tambe, and M. Yokoo. If it turns out that Ranjit is actually talking about... oh, I don't know... neural nets, I'm going to be a little miffed.

General conference thoughts

Some conference attendees have expressed an interest in an open thread for commenting on conference infrastructure and making suggestions for future improvements. Have at it!

Minsky on philosophy of consciousness

Although I enjoyed many parts of Minsky's talk, and wasn't all that bothered by the technical problems, I have to agree with Shimon that Minsky was perhaps a little too dismissive of philosophers of consciousness/mind. He is certainly right that the word "consciousness" has come to have too many meanings. But I believe that there *is* something mysterious about the problem of "qualia," or what "it is like" to have a certain experience. Whether this has any implications for the building of an intelligent agent, I'm not sure (although I think it might). But I certainly think it is a valid philosophical problem. Now, as one astute member of the audience already pointed out (and Minsky agreed), there are philosophers that in fact seem to agree with Minsky, such as Daniel Dennett. Moreover, I imagine that these philosophers are just as dismissive of each other in person as Minsky was of them (but I have never met them so I cannot be sure). Nevertheless, for an AI audience in which many people are perhaps less aware of these issues than they should be, such an attitude is somewhat inappropriate because the philosophers aren't there (and perhaps never will be) to defend themselves -- while Minsky knows how to tickle our AI bones by suggesting something like (I'm severely paraphrasing here!!!) "let these other people theorize; we're actually BUILDING these systems and by doing so will find out more directly what the real issues are." This argument has some definite validity in many subareas of AI, although usually the AI researchers (at least eventually) also realize that they can learn a lot from the existing research in another field. (For example, in my own area -- AI and economics -- AI researchers are becoming more and more informed by existing economics research, which has definitely, definitely improved the research.) 
I would suggest that, if we really want to discuss philosophy of mind, which I think would be good, that we invite some of the leading philosophers on the subject to a conference. Anyone care to agree with me?

A Little Story

Yesterday I blogged that the Sherbrooke robot Spartacus needed to use the elevators in the hotel to move between floors, as part of the Robot Challenge. While I haven't heard how that part of the Challenge went, I had a rather didactic experience myself with the elevator today that led me to some thinking regarding robot (and human) elevator conversation etiquette. (Please note that I haven't actually studied this area nor reviewed existing work.) In my short life I have lived in various single family homes and university dorms, which did not have elevators - only stairs - and thus I am quite experienced with stairwell conversation etiquette. However, stairwell etiquette does not transfer very well to elevator etiquette for several reasons. The most obvious differences (to me) are the types of activity involved and spatial considerations (stairs: walking in front of, or behind, the conversation partner, and speaking while stepping; elevator: standing in a small enclosed space facing the conversation partner, maintaining an appropriate distance from the other elevator occupants). But the other, more interesting differences arise from the temporal constraints (or lack thereof) and their impact on the conversation duration and content.

I'll give a personal example to illustrate. Late Monday evening I was playing some jazz improvisations on the Westin hotel's lovely concert grand piano. After I finished, I had a chance to meet with one of the AAAI invited speakers who shared his interest in jazz performance and we briefly discussed some of our music related research and projects. All went well and I felt energized after such a stimulating day of conversations with such fascinating people. I meandered up to the next floor using the escalator and then realized that I needed to take the elevator to return to my hotel room. I pushed the elevator call button and when the door opened, discovered that the same invited speaker was already in the elevator. Now, this is where my brain's attempt to use a stairwell (or perhaps a hallway) conversation rule failed rather spectacularly (at least in terms of the conversation's success).

Fortunately it was a temporal, not a spatial rule which was used incorrectly. Basically, what happened was that I didn't take into account the sharply defined time constraint imposed by the elevator itself, and when the invited speaker politely mentioned one of my projects, I launched into a series of statements about the project, probably due to my excitement about the subject. Unfortunately for me, the elevator abruptly "dinged", door opened, and speaker exited, saying a terse "good night", leaving me in a rather awkward state. Some questions: If the main actor was a robot, would it detect this conversation failure? Could it learn from the mistake? (I hope I myself will!) Could a robot create a blog, or a narrative describing an incident that it experienced? What would an intelligent robot do if it entered an elevator with two invited speakers, one speaker a robot and the other human (and presumably the conversation rules / protocol would be different for each?) For example, if it decided to converse with the robot speaker, would it use natural language so as not to alienate the human speaker? Or maybe it would have a wireless, data based conversation with the robot and a simultaneous natural language conversation with the human speaker. (But the time constraint might not apply to the wireless mode and perhaps the two robots would not determine their conversation patterns by locality: robots might be connected to an intra robot communications network which determines conversation patterns in other ways - I mentioned this to Caroline and it made her think of something from Jungian psychology.)

Anyways, enough rambling for now!

Ether Dancing

Today I heard the best response yet to Dr Veloso's charge that softbots aren't real agents: "Well, look at all these robots, don't they belong in an autoshow?"

I saw several good talks today, but one that I enjoyed slightly more was Deepak Ramachandran's presentation of "Compact Propositional Encodings of First-Order Theories." If pushed, I'd admit that I have more than a fondness for logics, and what I suppose could be classified as traditional knowledge representation. I might also confess a particular fondness at the moment for applications of first order logic. That's as opposed to, say, description logics, which are useful, powerful, and neat, but just don't tug at my heart in the same way. Given those biases, it was clear that I would be interested in the talk, as I think many people were. The room was full when I got there, grew fuller as it went on, then emptied out just slightly after it ended.

Despite that though, what I really liked is the shifting of problems, of morphing between forms and representations until you find one that suits you a little more. In this case, propositional logic and the plethora of highly tuned solvers that may then be brought to bear. This is one of the things that really draws me to computer science, the ability to move back and forth between languages and problems like dancing on water. Reductions, translations, they make me feel as an antenna must when you switch channels---lights and colors flying by as the whole world changes but stays oddly familiar, the undercurrent of signal and abstraction still there.
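To make the shifting-of-representations idea concrete: a universally quantified first-order clause over a finite domain can be ground out into ordinary propositional clauses, one per variable binding, after which a SAT solver can be brought to bear. The naive grounding below blows up exponentially in the number of variables; Ramachandran's paper is precisely about making such encodings compact, so take this only as a sketch of the reduction itself, with all names my own:

```python
from itertools import product

def ground(clause, variables, domain):
    """Ground a first-order clause over a finite domain.

    A clause is a list of literals like ('P', 'x') or ('-Q', 'x', 'y'),
    where a leading '-' marks negation. Returns one propositional clause
    per assignment of domain constants to the variables."""
    grounded = []
    for binding in product(domain, repeat=len(variables)):
        env = dict(zip(variables, binding))
        grounded.append([
            # Substitute constants for variables; constants pass through.
            (lit[0],) + tuple(env.get(arg, arg) for arg in lit[1:])
            for lit in clause
        ])
    return grounded
```

For instance, grounding "forall x, y: Edge(x, y) -> Reach(y)" over the two-element domain {a, b} yields four propositional clauses, each over atoms like Edge(a, b) that a SAT solver treats as plain propositions.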

Surely one of the things that begins to distinguish a computer scientist is when they realize that the programming language is utterly expendable & replaceable, the concepts alone mattering. As Marvin Minsky noted in his keynote speech (in different words), computer science is not a great pushing around of bits, but rather the first (and only) formal, abstract account of process. That in turn gives us the consequent freedom to appear however we wish, realizations of the ether, and that's beautiful.

Switching gears completely, I encourage everyone to attend the search session in the morning. Although there are several good talks and papers going on in other tracks during that slot, it should be a good session. Of course I have to support Vince Cicirello, presenting this year's best paper, as he's a post-doc in my lab and a completely nice guy in addition to being very bright. If you get a chance, ask him about bowling. He's quite good, and quite passionate about the sport. Just don't do it unless you're free for a couple hours. In addition, the session also has a paper by Richard Korf and another by Rong Zhou, who has picked up several best paper awards over the past couple years if I'm not mistaken.

Monday, July 11, 2005

Minsky disappoints

I made a point of getting up early enough today to catch Marvin Minsky's keynote address. I was more than a little disappointed. He spent the first ten minutes playing with the formatting on his computer. For some reason, he prepared his talk using Microsoft Word and seemed unfamiliar with the procedures for opening documents, resizing windows, or adjusting the zoom. Then, in the middle of his talk, he paused for several minutes while trying to figure out how to prevent the screensaver from coming on. Presentation issues are common at these conferences but are usually just a minor annoyance. But when 1000 people are waiting to hear your every word, I think it shows poor preparation and a lack of respect for the audience.

The content of the talk did not redeem him. He made a lot of vague blanket criticisms about the current direction of AI, such as "we need systems with better common sense reasoning" or "we need multiple, context-dependent representations." I don't find these kinds of remarks constructive or insightful. Everybody knows that these features of human behavior are critical to intelligence. The hard questions are how to recreate those features in computational systems; Minsky did not appear to have any concrete answers. He also took on the thorny issue of consciousness. I had trouble following his premise, but he seemed to be saying that people like Chalmers and Searle who argue that the existence of subjective experience is an important unsolved mystery are wrong. He got a big round of applause from the audience (not surprising since Searle is so widely reviled among AI researchers) but did not actually make a cogent argument in support of his assertion, or at least not one that I could follow.

A keynote address is a grand opportunity: the lucky speaker has an opportunity to galvanize an entire community, to excite their passions, and to unite them behind a common purpose. It's too bad such an eminent figure could not make better use of it.

So, What's AI Research, Anyways?

Hi everyone, I am Priyang Rathod. I am a PhD student at UMBC, working with Marie desJardins. I have been meaning to blog since the day of my arrival here, but could not, because I am staying at the student housing at Duquesne University :(.

On the opening day of ASAMAS, Gal Kaminka of Bar Ilan University gave an Introduction to Agents and Multiagent Systems. At the beginning of his talk, he mentioned an incident when his paper was rejected at an Agents conference because one of the reviewers thought that the research presented was not related to agents. Gal was quite annoyed about that for a couple of weeks. That's to be expected; anyone would be annoyed if a peer reviewer decided that you are not doing what you think you are doing. But later Gal also talked about meeting a researcher at Bar Ilan who makes the best batteries, and how, according to him, the battery maker was also a robotics researcher.

That got me thinking: Where is the line between AI Research and Non-AI Research? Does Machine Vision or Robotic Arm Design count as AI research?? Well, I think many would say so. But then what about the batteries and motors used in robots?? Is that AI research? If it is, then what about the chemicals used in batteries, which can be used in a robot? Is THAT AI research? How far do we go? Where do we draw the line?

Ruffled feathers

I'm a bit surprised no one's posted anything about Minsky's keynote this morning, or his new book that he talked about. I expect more than a few people's feathers were a little ruffled by it, but I'm curious to hear what others thought.

Internet at Duquesne University

I finally got a hold of the computer center at the university and got the story on internet access for anyone interested. The gentleman told me that he was receiving a lot of calls on this subject. The conference organizers never requested internet access for us, so they won't allow us to connect from the dorm. However, you can go to the library and use the computers there.

[summary] sister conference highlights (monday morning)

The sister conference session is a convenient way to catch up on relevant and interesting research from conferences you have missed in the past. I'm a little bit surprised that this session was not well attended.

KDD 04: The tutorials reflected current interests in data mining: data streams, time series, data quality and data cleaning, junk mail filtering, and graph analysis. This conference has an algorithmic and practical flavor, e.g. clustering is a more concrete form of "levels of abstraction". There was also a lot of industrial participation, and the KDD Cup went well.

ICAPS 05: This is a fairly young conference which merges several previous conferences on planning and scheduling. Scheduling papers increased (25%), and search continues to play a prominent role. The best paper presents a "complete and optimal" memory-bounded beam search. Another interesting paper learns action models from plan traces without the need to manually annotate intermediate states in the traces. The competition on knowledge engineering is another way to attract participants from various relevant domains. Interesting points of agreement among participants: "applications get no respect" and "too many people spend too much time working on meaningless theories".
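For readers who haven't seen it: beam search is breadth-first search that keeps only the k best frontier nodes per layer, which bounds memory but normally sacrifices completeness and optimality -- the ICAPS-05 best paper is about adding the machinery to get both back. A plain (incomplete-in-general) version looks something like this, with all names my own:

```python
import heapq

def beam_search(start, successors, is_goal, heuristic, beam_width=3):
    """Plain beam search: expand the frontier layer by layer, keeping only the
    `beam_width` nodes with the lowest heuristic value. Memory is bounded by
    the beam width, but pruning can discard the only path to the goal."""
    frontier = [(heuristic(start), start, [start])]
    visited = {start}
    while frontier:
        next_layer = []
        for _, node, path in frontier:
            if is_goal(node):
                return path
            for s in successors(node):
                if s not in visited:
                    visited.add(s)
                    next_layer.append((heuristic(s), s, path + [s]))
        # The beam: keep only the best few candidates for the next layer.
        frontier = heapq.nsmallest(beam_width, next_layer)
    return None
```

The paper's contribution, as I understood the summary, is recovering the pruned candidates when the beam fails, rather than giving up as this sketch does.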

UAI 04: Probability and graphical representations have dominated in recent years. Tutorials were clustered around graphical models (esp. Bayesian belief networks). The best paper presents case-factored diagrams, a new representation for structured statistical models that offers compactness, as demonstrated on the problem of finding the most likely parse of a given sentence.

Intelligent UIs

During the second tutorial session yesterday, I attended the one on Intelligent User Interfaces by Mark Maybury. Let me just say, that was a tutorial done right. Of the four tutorials I attended, this was the best. He gave an excellent overview of the field, provided demonstrations of key systems (lots of videos here, folks), and was engaging the entire time. The audience seemed quite weary after two straight days of tutorials, so he had a bit of a challenging time keeping everyone’s attention, but he did well given the situation.

I was a bit surprised that the technology of IUIs hasn’t progressed further. I was expecting many more systems like the SmartKom agent, although I am slightly skeptical of how it works in practice. Many of the IUIs are tailored to the specific application, and are “engineered” tools rather than implementations with a solid theoretical background. This is one area which could use an infusion of AI researchers, as the creation of a successful IUI will draw from everything from knowledge representation, to natural language processing, to machine learning. As a side note, there’s an upcoming conference on intelligent interactive entertainment (called “Intertain”, although I can't find the URL) that looks quite interesting and is related to IUIs.

Let me just say that 8 hours straight of tutorials a day is too much. From talking with many others, I know that I’m not alone in this opinion. Every tutorial I attended, save the last, was stretching the material to fill the time-frame. One strong suggestion is to have several shorter tutorial sessions (2 hrs?), and then have some tutorials span multiple sessions (those sessions should be as independent as possible). With shorter tutorial sessions, I think people would be able to be exposed to a wider variety of areas and would get more out of each tutorial. Anyone second my opinion or disagree?

Murphy's Law of Demoes

My name is Reid Van Lehn (I'm the guy showing Minsky how to ride a Segway in the picture below) and I am currently a summer intern working on the CM-Balance (i.e. Segway soccer) project at the CMU Robotics Institute. This fall, I will begin my freshman year at MIT, though I am currently undecided regarding my major.

As you can imagine, this was one of my first conference experiences. As the only member of the Segway team in possession of a minivan, my day started by loading a 250-pound autonomous soccer playing Segway into the back of the van (along with Brenna Argall and Yang Gu, two graduate students working on the project) as well as a human transporting model and some spare parts. I then carefully navigated to the conference, trying desperately not to break any of the sensitive electronics by rolling over a bump. Also, I didn't want to break Yang, who was sitting on the floor in the back with the Segways.

Upon arriving at the conference and smooth-talking my way into some free parking while we unloaded, I was treated to the sight of a multitude of other robotics teams hurriedly making last minute preparations for their own demoes. Since Brenna and Yang were busy calibrating the robots, I was told that I could leave for a few hours and return when the demo started, since I didn't have much to do. Since I was more than a little worried about my parking spot, I opted to head out for the afternoon.

Around 4:00, my phone rang - Brenna was on the line, and I began to get my first taste of what demoes are really about. Apparently, the Segway's vision wasn't working properly; since this was imperative in tracking the soccer ball for the demo, we had a pretty big problem. Hence, I hurried on down to CMU, grabbed some white painter's dropcloths, and returned to the conference (legally parked this time). The idea was that the dropcloths would provide a uniform background for the Segway's vision, thereby making the ball easier to identify.

Time wore on. It was now getting to be around 5:00 or so, with the demo set to begin at 6:00. It was around then that Yang made the realization that the camera itself was damaged, and we had no replacement with us. Uh oh. We had but one hope: lead developer Brett Browning, who was coming to supervise the demo. Brett arrived like Superman, coming to save the day in his attractive mid-sized sedan. He brought a couple of cameras scavenged from other robots, and Yang and Brenna set to work re-calibrating the vision. Time continued to tick by...

6:00 now. People were starting to flood the floor, as the demoes were all set to begin; unfortunately, the autonomous Segway was still dead in the water. I was getting a little nervous, since my own knowledge of the specifics of the project is a bit shallow. Since Brett, Yang and Brenna were pretty busy at the moment, however, I began to describe the basics of Segway soccer to a few interested onlookers. My confidence grew as I spoke to more people, explaining our implementation for robotic vision, the rules of the game, the way our skill system works, and so forth. I discovered that most people were content with just vague descriptions; those that desired more information were directed to Brenna.

After a few minutes, Brett shouted a few fateful words in my direction - "Reid - ride the other Segway around for a bit so people notice us." This is when things got more interesting for me. I had ridden a Segway only once in the past (strangely, over the last few weeks since I began work on the project, I had never thought to try it out in my spare time), but I quickly got the hang of it. Soon I was zooming about and attracting amused spectators. I quickly began to demonstrate just how easy it was to ride the contraption, and this also gave me the opportunity to explain more about our project. Hours passed, and I began to go into greater and greater detail about what we were doing. The Segway rides certainly did a wonderful job of attracting spectators, many of whom then became interested in our project.

Meanwhile, the autonomous Segway still sat dead in the water, as the new cameras Brett brought turned out to be inoperable. Murphy's Law certainly held true on this day - and I was assured that such is the norm for demoes. Hopefully, we'll be able to have everything up and working for our return on Tuesday, when we can finally show the autonomous Segway's capabilities as well. I look forward to the experience as yet another opportunity to offer Segway rides and meet many new researchers at the same time. Hence, I would deem my first demonstration experience an interesting one, as I learned the inherent difficulties that accompany any attempted demo.


As already noted in the entries and comments below, this morning Paul Cohen gave a rock solid statistics overview/review for data and experiment analysis. One of the things it started me thinking about is the claim you see every now and then along the lines of "Experimental science is dead!" Granted, this refers mostly to the "real" sciences---physics, biology, and chemistry---so much of which now involves simulations and analyzing existing data (at least, as I understand it, and assuming that no real science could possibly have "science" in the name). It's interesting then talking about increasing and improving the experimental component of computer science, just as this is arguably declining in other fields.

Right at this moment I'm not possessed of the mental faculties to continue this line of thought into interesting areas, so allow me to just ramble on for a moment.

The agent school officially concluded today, with the tutorial sessions in the morning, demos in several of CMU's robot labs in the afternoon, and a rockin' barbecue in the evening. Jay Modi and Paul Scerri get a lot of credit for putting together a pretty solid sequence. I thought it was largely all interesting, useful, and well organized. They lose a point for having only pizza at lunch the other day (vegans, unite and we shall rule the world!), but partially made up for it by having incredibly deep dish pizza (I confess, I was weak and starving and looking at many more hours to go before I could get more food and ultimately had some). They redeem themselves completely however by taking good care of vegetarians/vegans the rest of the week, and Paul grilling up a storm at the barbecue, including a pile of veggie burgers. However, the most valuable member of the agent school organizing committee was clearly Bruce, who did more than his share of shepherding people to and from the various demos around the Robotics Institute:

(Paul's dog Bruce, who apparently owns the RI hallways on weekends. He seemed ready, willing, and able to show those robots who's boss... after his nap.)

I also want to thank all of the speakers for putting time into preparing lectures that were generally introductory and accessible yet still interesting and useful to those familiar with the topics.

Sunday, July 10, 2005

My experience at the Doctoral Consortium

Although I will come up with a more detailed summary of my experience at DC05, here is a short version as a blog post:

1. The way the DC went was much nicer and more comfortable than I expected before I came here! To be frank, I had some worries at the top of my head, like "what if a panelist tries to crush me?", "what if I don't convey my ideas clearly?", "what if my work is not competitive with others'?"... However, once the DC got started with the very first talk, I knew that all of these were simply unnecessary overreactions. The students were very open, and we started chatting about research and life at different universities even before Kiri's opening speech. As for the feedback time after each presentation, I saw the most active and healthy interaction between the presenter and the audience -- both the panel members and the participating students were really "eager" to give out their comments and suggestions! This at least made me feel very welcome. I actually wrote down pages of hints about research and presentation skills from the discussions. :)

2. By getting to know the thesis work done by other students who are at a similar stage of the PhD program, I feel greatly encouraged! I have found similar research concerns in other students' presentations (thanks to the "remaining concerns/open questions" slide). The feeling of not being alone is very helpful for facing the difficulties in my own work. :)

3. Although people are conducting research in different areas of AI, since the presentations are general overviews of the thesis work, most of the content was actually well received! I was indeed totally new to some of the topics, but I didn't feel excluded from any talk. Isn't this great? So with a total of 16 presentations over 2 days, I was able to get a flavor of more than 10 sub-fields of AI. It was equivalent to going through related tutorials at a quick pace. :)

I truly want to thank every participant in DC-05.
All of you made it a very precious learning experience for me during my PhD study. Keep in touch!

Agents Don't Need to be "Super Intelligent" to be Helpful

I had another great day attending tutorial sessions. The morning was Paul Cohen's excellent "Empirical Methods for Artificial Intelligence", which I believe should be required material for nearly anyone in AI - as he describes, there are big benefits (in terms of the advancement of the field) when scientific/empirical approaches are combined with theoretical ones. This statement was echoed by others I talked with today, including the University of Sherbrooke's Laborius robotics team, who intend to use their AZIMUT-2 modular, omnidirectional robot as a platform for validating machine learning algorithms. The AZIMUT-2 robot has some kind of funky spring mechanism in his (or her?) wheel motors, which allows the robot to sense changes in the terrain, much as we humans receive feedback from the ground when walking.

Sherbrooke's Spartacus robot is actually the only robot at this conference who is attempting the daunting task of competing in the Robot Challenge. Tomorrow morning, Spartacus will be dropped off at the entrance to the hotel, and will have to somehow take the elevator up to the correct registration floor, find the right registration desk, and after registration perform volunteer duties (in lieu of paying the conference fee) until his scheduled presentation time, at which point he will present his latest work and answer questions from the audience. Not only that, but Spartacus will also interact and socialize with the other conference participants throughout. I wish him (it?) and the Sherbrooke team the best of luck!

Regarding robots, I am by no means an expert, or even remotely involved in that area myself, but I can easily envision a day when we will walk along a city street, no longer taking special notice of the additional pedestrian traffic: autonomous robots who will scurry about their daily business just as we humans do today. It shouldn't be too difficult to gain mass acceptance of these types of robots once they have been interviewed on TV, and come across as friendly, helpful, and even funny (maybe I'm going out on a limb here, but just wait 20 years and you'll see...)

In the afternoon I attended Mark T. Maybury's tutorial session on Intelligent User Interfaces. I can imagine that some of the attendees of this tutorial may have been put off by the somewhat dated video examples (for example there were a few from the ever so ancient time of 1990 to 1995), but I believe (and Dr. Maybury stated) that the differences between the concepts and ideas illustrated by those videos and the state of the art today are largely cosmetic. For example, it seemed that a huge part of Intelligent User Interfaces involves multimodal input, where a user would simultaneously gesture, look at an object, and speak, and these inputs would be synthesized and used as a basis for decision making, learning, or executing a task. This is obviously a problem that has not been solved completely today, even though more than 10 years have passed since those first successes were celebrated.

Dr. Maybury presented so many great ideas, some in more detail than others, but one idea which I was especially interested in (and which he generously expanded upon) is the concept of a software agent which identifies human experts within an organization by capturing and searching for keywords in the publicly available writings of the employees (e.g. if employees publish documents to a company repository, they can be considered to be fair game for keyword searching).

You might say that this system is not really that intelligent, but Maybury argued that this doesn't really matter - it can still be really helpful. (My example follows.) Let's say that company A has 2000 employees in 25 locations across the globe. Without this new software agent system, if an employee needs to gain knowledge on a certain topic, he/she might consult the immediate social network to find an expert, such as by asking coworkers on the same floor, or perhaps someone in the same office who is a hub in the company's social network. (That's why I think that even in this day and age when telecommuting is possible, most large software firms still have (large) brick-and-mortar offices.) However, with a software agent that can identify experts throughout an organization regardless of location, these social networks are no longer required to find experts. (Much like how web search is reducing the need for personal referrals to small service-based companies.)
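To make the idea concrete, here is a minimal sketch of keyword-based expert finding. This is not MITRE's actual system - the repository, author names, and the raw-count scoring are all invented for illustration; a real system would use something like TF-IDF weighting and proper document parsing:

```python
# Hypothetical expert finder: rank people by how often their published
# writings mention the query keywords. All data below is made up.
from collections import Counter

repository = {
    "alice": "bayesian networks inference approximate inference sampling",
    "bob":   "compiler optimization register allocation scheduling",
    "carol": "bayesian statistics hierarchical models sampling mcmc gibbs sampling",
}

def find_experts(query, docs):
    """Rank authors by summed occurrences of query keywords in their writings."""
    terms = query.lower().split()
    scores = {
        author: sum(Counter(text.split())[t] for t in terms)
        for author, text in docs.items()
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])

ranking = find_experts("bayesian sampling", repository)
print(ranking[0][0])  # prints "carol"
```

Even this crude keyword matching illustrates Maybury's point: it needs no deep understanding of the documents to usefully shortcut the office social network.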

In the future I really don't see how large companies could afford not to employ such an expert finding system (...and maybe some already have, but are just not telling.) Dr. Maybury mentioned that his organization (MITRE) did publish a paper on this idea before any patents were filed, so there is potentially still an opportunity for some newcomers to jump in with a fancy new product to serve this purpose. He gave one commercial example of Tacit.com which attempts to build the expert database using employee email monitoring. On a side note, just imagine what kind of "expert database" Google could have of the world, if they mined their Gmail archives 10 years from now! (Not that they would ever do that of course without our consent, but what if users requested that feature?!)

I have networked, and it was good.

The opening reception is finally trailing off, after we were all shooed out of the ballroom (the hotel staff were rather pointed about the fact that it was time to go), and I've just met a whole bunch of neat people. One of the reasons I wanted to come to AAAI is to get a feel for the community of researchers who identify as AI people, and try to figure out how we relate to one another. A big problem I've had as a graduate student is getting a feel for what the rest of the field is doing --- my own research cuts across a lot of different areas, and I've found it virtually impossible to know what's going on in all of them. Well, if there's anywhere where all of AI is represented, it ought to be here, and I certainly met enough folks doing all sorts of different things tonight. Lots of good feeling in that room and people easy to talk to and happy to open up. I think I may have just spent the evening officially networking, and it was good.

Moving Minsky

wow... Michael, Geoff and I were just playing around with the Segway at the opening reception, when Marvin Minsky hopped on! Check out the pic I got : )

Using wifi at AAAI (with GNU/Linux)

Crossposted from my Blog:
I thought I might share this information with you. If you are at AAAI05, you know there is a free wifi connection. If you use GNU/Linux, the instructions ("just open the browser and everything will be fine") don't work. Instead you have to do something like this:
1. iwconfig wlan0 essid "STSN" // set the ESSID of the network
2. dhclient wlan0 // you should have an IP address after this.
3. connect with your browser // the login screen will be there; log in with your 24-hour access code.

Stats review

While I'm waiting for the coffee break to end, and my tea to cool to a drinkable temperature, I may as well post a brief update. So far, the empirical methods tutorial has been review for me, but since it's been a few years, and I'll need to do this myself later this summer, I suppose a refresher doesn't hurt, and it certainly helps that Paul Cohen's a much better speaker than the stats prof I had years ago.

edit: It's amazing how much of a difference it makes, having a use for the material. I remember struggling to stay awake every day in that stats class years ago, but today's session is proving rather interesting. Surprisingly, it's almost 3.5 hours in, and I'm still wide awake, especially now that he's covering some material that's new to me. I definitely need to read more on analysis of variance now; it could come in handy when I go back home. I'm a little disappointed he skipped the section on experimental design in favour of bootstrapping, but hopefully he'll have time to come back to that before we finish.

edit: On second thought, I'm glad he went with the bootstrapping first; it wouldn't have compressed well to fit these last 15 minutes.
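For anyone who skipped the session, the core bootstrapping idea is simple enough to sketch in a few lines: resample the data with replacement many times, recompute the statistic each time, and read a confidence interval off the percentiles. The data and the choice of the mean as the statistic below are my own invented example, not from Cohen's tutorial:

```python
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for a statistic."""
    rng = random.Random(seed)
    stats = sorted(
        stat([rng.choice(data) for _ in data])   # resample with replacement
        for _ in range(n_resamples)
    )
    lo = stats[int((alpha / 2) * n_resamples)]
    hi = stats[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi

# e.g., per-run runtimes (seconds) of some algorithm
data = [12.1, 9.8, 14.3, 11.0, 10.5, 13.2, 9.9, 12.7]
low, high = bootstrap_ci(data)
print(f"95% CI for the mean: [{low:.2f}, {high:.2f}]")
```

The appeal for empirical AI work is that this makes no normality assumption about the data, which is handy for the skewed runtime distributions experiments often produce.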

Plastic mattress

I'm staying at the student housing at Duquesne University and sleeping on a plastic mattress. It's been a long time since I lived in a dorm and I guess I had forgotten how much they resemble white-collar prisons. At least the sheets are clean. My main complaint is that the internet access we were promised has not materialized. First I was told I had to come back on Saturday from 10-6 to get a username and password. Then I was told I had to go to the library to get it. But at the library they told me I had to go to the computer help desk which, of course, is closed till Monday. Even then, I will need to find a place to buy an ethernet cable because the stone-age facilities do not include wireless access. Fortunately, the Westin has gotten their act together and fixed their wireless network, which enables me to log on and vent my complaints into the blogosphere.

On the up side, the doctoral consortium has been a great experience. Despite having to get up five hours before my normal waking time, I've found the whole event engaging and rewarding. I was the second speaker yesterday and I got a lot of great feedback on my work. It's obvious what's in it for the student speakers: this sort of thing is a rare opportunity to get frank, constructive suggestions about one's work. But what's in it for the panelists? It seems that it's just a selfless act of community service: two days out of their lives exclusively for our benefit. After the talks, the group went out to dinner together and I had a chance to chat with Michael Littman, my external committee member. Even after eight hours at the consortium and two hours at dinner, he wasn't finished but instead headed upstairs to help another student with her slides. My hero.


Glutton for free stuff that I am, this morning I picked up one of the issues of IEEE Intelligent Systems lying around by registration. The only piece I've found time to read so far is the letter from the editor, James Hendler. It talked a lot about what a "system" is, why we should build them, and, importantly, how and what to share about that experience with others so that they might also build working systems.

In some ways that set me up to attend Tuomas Sandholm's tutorial on market clearing in the afternoon. Of all the topics in the agent school, this is one where I am more than ready to admit a basic lack of knowledge and experience, so I was particularly looking forward to this and Sven Koenig's talks on auction-related topics. I wasn't let down by either, as I already mentioned about Dr Koenig's. Dr Sandholm's was similarly excellent---well presented, and (I thought) striking the appropriate balance between coverage, depth, and research for a tutorial.

One of the things that really struck me as Dr Sandholm presented some of his research was the system he was presenting. I had this vision of a large Godzilla battling larger and larger auctions. The core is a strong algorithm with beautiful structure, the spine of the beast, chopping away at large swaths of search space. At the leaves are myriad smaller but no less substantial algorithms, handling special cases in dramatically efficient ways. Particularly pleasant to watch was the unfolding of this: the talk progressed through the special cases and their algorithms, then on to a more general case, then in an excellent swoop folded back to encompass these special cases, closing the circle. On top of all this, like armor plating, were presented a number of pre-processing steps to massage the data and remove trivialities. Although not discussed in much detail, I had the feeling that in implementation any number of tricks & well-crafted code were employed to crank out a few more bids, handle a few more items. A final, glistening, razor edge on the system's claws & teeth.

Maybe I'm just melodramatic and anthropomorphize too much, but it was impressive. The overall system, that ill-defined and over-used symbol, was a sight to imagine---a light, elegant skeleton of theory and structure clad and armored in machinery and engines of special cases and system construction, a warrior built for slaying complexity. The only thing better was how much Dr Sandholm obviously enjoyed the beauty of the theories, the algorithms, and the fit of each component into the larger system.

A good start

After reading Eric's complaints about the student housing, I'm especially glad I chose to share a room at the Westin instead; the facilities here are great, and aside from some difficulties with the wireless access earlier today, I've no complaints. Even better, since we're splitting a room, the cost actually comes to less than the student housing. I suppose we may be missing out on a good part of the social experience of the conference by doing this, but man these beds are really comfy.

As for the tutorials, I'm fairly satisfied. I rather enjoyed the morning session on word sense disambiguation, and was surprised at how accessible it was. This being my first conference, and having no real experience with the subject they were presenting, I expected to be a little lost & overwhelmed, but that wasn't the case at all. Their healthy list of resources for starting out was especially appreciated. Unfortunately, I can't be quite as positive about the second tutorial I attended, but I think that's due to my own fatigue more than anything else.

Saturday, July 09, 2005

AAAI Doctoral Consortium summary (day one)

AAAI DC opening: Kiri's welcome speech
Kiri, the chair, gave a welcome speech, and 8 PhD students presented their thesis work, focusing on learning and planning. (Thanks to Mykel for helping me revise this post.)

Jennifer Neville (UMass) -- Structure Learning for Statistical Relational Models

Motivation: latent groups can be detected from graph structure as well as node attributes. Subjects: graph analysis, clustering. Issues: (i) partitioning a network into groups using both link structure analysis and node property clustering (using EM); (ii) utilizing group assignment for better attribute-value prediction and unbiased feature selection. Comments: Groups, as mentioned in this talk, are disjoint; however, further work might involve instances where a node may belong to multiple groups. This problem can be viewed as a clustering problem which uses features from node attributes as well as graph structure information.

Shimon Whiteson (UTexas Austin) -- Improving Reinforcement Learning Function Approximators via Neuroevolution Motivation: an adaptive scheduling policy can be learned using neuroevolution. Subjects: function approximation, reinforcement learning, evolutionary neural networks. Issues: (i) the transition matrix (state-action table) can be huge, and a neural network (NN) based function approximator (FA) is a compact alternative; (ii) NEAT+Q, which evolves both the weights and the topology of the NN using NEAT and Q-learning, is helpful. Experiments show that Darwinian evolution achieves the best performance compared to Lamarckian evolution, randomized strategies, etc. Comments: Many are skeptical about whether evolving NNs could be used for online scheduling, since it takes a lot of computational time. In order to evaluate the significance of the improvement, the optimal and worst cases are needed.

Bhaskara Marthi (UC Berkeley) -- Discourse Factors in Multi-Document Summarization Motivation: decomposition is needed for planning asynchronous tasks and many joint choices in a giant state space. Subjects: planning, reinforcement learning, optimization. Issues: (i) concurrent ALisp and coordination graphs are used for concurrent hierarchical task decomposition; (ii) decomposing the Q function w.r.t. subroutines requires reward decomposition, which is hard. Comments: Decomposing the Q function for the reward of "exiting a subroutine" is hard since there could be thousands of ways to exit.

Trey Smith (CMU) -- Rover Science Autonomy: Probabilistic Planning for Science-Aware Exploration Motivation: discover scientifically interesting objects in extreme environments (e.g., Mars). Subjects: planning. Issues: (i) planning navigation -- with maximum coverage over a spatial extent; (ii) selective sampling -- variety is preferred to a large number of copies of samples.

Marie desJardins (mentor from UMBC) on preparing talks for different contexts:

  1. Job talk. Try to amuse people with your work and how complex it is.
  2. Doctoral consortium talk. Present technical details; expose your strengths and weaknesses in front of the mentors -- they are external experts who will provide valuable comments from all aspects. Treat it as a trial thesis defense...
  3. Conference talk. Present ideas and keep the audience awake...

Snehal Thakkar (USC) -- Planning for Geospatial Data Integration Motivation: integrate spatially related data on the Web, and support query answering using planning. Subjects: information integration, geospatial information systems. Issues: (i) a hierarchical ontology/taxonomy for modeling the spatial application domain; (ii) planning spatial information queries using filtering (to avoid querying many irrelevant sources).

Ozgur Simsek (UMass) -- Towards Competence in Autonomous Agents Motivation: define "useful skills", and let agents learn them. Subjects: learning, knowledge discovery. Issues: "useful skills" are defined in three categories: (1) access skills: identify "access states" which are critical for making the search space fully searchable, especially for hard-to-access regions; (2) approach-discovery skills: how to achieve an "access state"; and (3) causal discovery skills: identify causal relations. Comments: How to memorize past states is still hard when the problem space scales. Experiences can be reused, as can the skills.

Mykel J. Kochenderfer (U Edinburgh) -- Adaptive Modeling and Planning for Reactive Agents Motivation: efficient planning in real time for complex problems with large state and action spaces requires partitioning these spaces into a manageable number of regions. Subjects: reinforcement learning, clustering. Issues: learn to partition the state and action spaces using online split and merge operations. Comments: This could be viewed as an incremental clustering problem, such that nodes in trajectories are sample points for generating clusters and thus induce a partition of the state and action spaces.

Vincent Conitzer (CMU) -- Computational Aspects of Mechanism Design Motivation: reach optimal outcomes by aggregating personal preferences. Subjects: information aggregation, game theory, multi-item optimization. Issues: To achieve an optimal outcome over multiple preferences, we can use automated aggregation mechanisms (e.g., voting and auctions) and bound agents' behavior. The VCG auction encourages users to reveal their true preferences (it discourages lying).

Flat Beer

Wow, what a long day! We arrived last night from Vancouver after a day of travel. Today I attended two 4-hour tutorials, Word Sense Disambiguation and Pyro; both were very enjoyable, but long. I am especially interested in the first subject as I am currently working on a project to assign semantic orientation to adjectives. Rada Mihalcea from the University of North Texas was joined by Ted Pedersen from the U of Minnesota to give a general overview of the methods, problems, and suggested solutions to the problem of word sense disambiguation. I found this presentation to be a very good review of my CMPT 413 Natural Language Processing course at SFU, adding a bit more detail on knowledge-based methods. I was quite excited to see my prof's name cited in one of the slides on co-training in minimally supervised methods: "Statistical Parsing (Sarkar, 2001)".

For those of you not familiar with NLP, there are three basic approaches to analyzing data. The knowledge-based approach uses resources like dictionaries to help find meaning in raw data. Supervised approaches use human-annotated data in conjunction with dictionaries, while unsupervised approaches use neither a dictionary nor annotated data, but rather look at the raw text to find similarity between contexts (this is indeed real intelligence).

After the tutorials we mingled in the lobby trying to set up wireless access (which works much better now). I found this guy with the most appalling t-shirt. It turned out that he was part of a big group of people from UMBC. They seemed to be more in touch with the Pittsburgh scene than my partners and I, so Geoff, Michael and I tagged on to their plan for dinner at the Seventh Avenue Grille. The beer was flat and the pasta portions a bit too small, but my Chicken with Cherry Sauce was quite yummy.

Until tomorrow,


The Web as a Collective Mind

After experiencing the veritable wonder of the Westin's "heavenly" showers, I headed up to registration on the 3rd floor, where the friendly staff quickly decked me out with assorted conference memorabilia. This is fun, I thought! Then I trooped over to Westmoreland East for Rada Mihalcea and Ted Pedersen's tutorial session on "Advances in Word Sense Disambiguation". I found their presentation very accessible, and an excellent overview of and introduction to the topic. One idea that piqued my interest is "bootstrapping", where you start with a small collection of labeled data (for use in a classifier), use it to classify unlabeled data, and, when confident in a prediction, add that example to the labeled training set for future classification.

Another really neat idea is to take advantage of the web as a "collective mind", where visitors to a web site help to train a classifier to disambiguate word sense. Rada Mihalcea (who created the online system called Teach Computers.org) did admit that one of the main challenges with this approach is motivating web users to participate in such a project, and she suggested that it be formulated as a game or competition. I've found with some of my own projects (such as Gender Guesser) that users are willing to contribute part of their "mind" to a web site if the site gives them something back in return (in the Gender Guesser case, they contribute all sorts of unusual first names and get back the gender of those names). Another form of payback would be prestige within an online community, perhaps the way Slashdot gives points to users based on the frequency and ratings of their posts. This "collective web mind" harvesting approach is also something that our group is working on for training our Song Search by Tapping system.
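For the curious, here is a generic sketch of the bootstrapping (self-training) loop from the tutorial; the toy 1-D nearest-centroid classifier and its crude confidence score are my own inventions, included only to make the loop runnable.

```python
def classify(x, labeled):
    """Toy 1-D nearest-centroid classifier with a crude confidence score
    based on the margin between the two closest class centroids."""
    centroids = {}
    for label in set(labeled.values()):
        pts = [p for p, lab in labeled.items() if lab == label]
        centroids[label] = sum(pts) / len(pts)
    dists = {label: abs(x - c) for label, c in centroids.items()}
    best = min(dists, key=dists.get)
    others = [d for lab, d in dists.items() if lab != best]
    margin = (min(others) - dists[best]) if others else 1.0
    conf = margin / (margin + dists[best] + 1e-9)
    return best, conf

def self_train(labeled, unlabeled, confidence=0.6, rounds=5):
    """Bootstrapping: train on the labeled seed set, label the unlabeled
    pool, and promote only high-confidence predictions into the
    training data, repeating until nothing more can be promoted."""
    labeled = dict(labeled)   # example -> label
    pool = set(unlabeled)
    for _ in range(rounds):
        promoted = []
        for x in pool:
            label, conf = classify(x, labeled)
            if conf >= confidence:
                promoted.append((x, label))
        if not promoted:
            break
        for x, label in promoted:
            labeled[x] = label
            pool.discard(x)
    return labeled

# Two labeled seeds; the pool is absorbed over two rounds as the
# centroids drift toward the newly promoted examples.
result = self_train({0.0: "low", 10.0: "high"}, [1.0, 3.0, 7.0, 9.0])
```

In real WSD bootstrapping the classifier is of course a full supervised learner over context features, not a 1-D centroid, but the promote-when-confident loop is the same.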

What I really liked about Mihalcea and Pedersen's talk is that they took the time to put together lists of resources for aspiring researchers in this field, including several freely available algorithm implementations such as SenseTools, SenseRelate, SenseLearner, and Unsupervised SenseClusters.

In the afternoon I attended Tuomas Sandholm's tutorial session on Market Clearing Algorithms, and I found the topic frankly quite fascinating! One area he discussed was mechanism design for multi-item auctions, which deal with "multiple distinguishable items when bidders have preferences over combinations of items: complementarity and substitutability". One example he gave of this type of auction comes from transportation, where a trucker would be willing to accept a lower rate if he/she wins the contract to transport goods both to and from a destination (as opposed to just one way). On the way to our hotel I observed that our taxi was equipped with a fairly sophisticated wireless computer system, and I thought about how this type of auction could also be relevant to taxi fare determination.
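The trucking example can be made concrete with a brute-force winner-determination sketch: choose the set of non-overlapping bundle bids with the highest total value. This enumeration is exponential in the number of bids, which is exactly why the clever market-clearing algorithms from the tutorial matter; the bids below are invented for illustration.

```python
from itertools import combinations

def winner_determination(bids):
    """Brute-force winner determination for a combinatorial auction:
    maximize revenue over all sets of bids whose bundles don't overlap.
    A toy sketch only; practical solvers use search and pruning."""
    best_value, best_sel = 0, []
    n = len(bids)
    for r in range(1, n + 1):
        for combo in combinations(range(n), r):
            used, ok, value = set(), True, 0
            for i in combo:
                bundle, price = bids[i]
                if used & set(bundle):
                    ok = False
                    break
                used |= set(bundle)
                value += price
            if ok and value > best_value:
                best_value, best_sel = value, list(combo)
    return best_value, best_sel

# Complementarity: the round-trip bundle is worth more than the sum
# of the two one-way legs, so the bundle bid wins.
bids = [
    (("to",), 5),
    (("from",), 5),
    (("to", "from"), 14),
]
value, chosen = winner_determination(bids)
```

With the bundle bid removed, the auctioneer would instead clear the two one-way bids for a total of 10.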

Other interesting points Tuomas discussed involved the game theory of auctions, and problems such as a single agent using pseudonyms to pose as multiple agents, or collusion between agents. Now that many auctions occur virtually, preventing these problems becomes more difficult. Another set of ideas deals with the concept of an "elicitor", which facilitates the auction by "deciding what to ask from which bidder". Interestingly enough, with an elicitor there is an incentive to answer truthfully as long as all the other agents are also answering truthfully.

Student Housing, Feature Selection, and the Robotics Toolkit Pyro

Hello, everyone! First a short introduction, and then on to discussing day 1 of the conference. I'm a second-year Ph.D. student at UMBC, currently working with Marie desJardins and Tim Oates. My research interests center on life-long learning, but I'm also working on a number of clustering projects. This is my first time at AAAI, and I am looking forward to it.
Unfortunately, I must start off on a sour note. To say that I am disappointed with the AAAI student housing at Duquesne University is an understatement. Good points first, however. The check-in procedure was very simple and the materials were well prepared. Also, I was quite pleased that the restrooms are some of the cleanest I have ever seen at a university. Keep in mind that I’m fishing for good points. Now the downsides: the rooms are sub-par as far as dorm rooms go, lack telephones, and are rather unclean. It would have been nice to be warned that the dorm had only common bathrooms and showers, so we could pack shower shoes and the like. Given the nature of the conference, I expected the university to be ready to provide internet access to its guests, but I will have to wait until Monday morning for that. For now, I'm using the conference center's connection, which AAAI has kindly obtained for us.
AAAI began bright and early Saturday morning with an easy check-in process. The tutorial entitled "Downsizing Data for High Performance in Learning - Introduction to Feature Selection Methods" by Huan Liu and Robert Stine picked up pace after a rather slow start, helped by Stine's amusing anecdotes. I especially liked his comment that much of the effort in predicting credit problems goes into finding indicators correlated with those that Congress has already prohibited companies from using. After an introduction to feature selection, Liu and Stine presented a different approach: first determine which features are relevant to the task, then eliminate redundant features. This two-stage method seems to generally out-perform other feature selection methods, but choosing a method is still specific to the problem domain. I was a bit disappointed that the tutorial did not cover how to match feature selection methods to specific problem domains, as this was my main hope for the tutorial.
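Here is a minimal sketch of that two-stage idea (my own toy version, with illustrative correlation thresholds): filter features by relevance to the target first, then greedily drop features that are nearly redundant with an already-selected one.

```python
def two_stage_select(features, target, rel_thresh=0.5, red_thresh=0.9):
    """Two-stage feature selection sketch: (1) relevance filter by
    |correlation| with the target; (2) greedy redundancy elimination
    among the survivors. Thresholds are illustrative, not tuned."""
    def corr(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    # Stage 1: keep relevant features, strongest correlation first.
    relevant = sorted(
        (name for name, col in features.items()
         if abs(corr(col, target)) >= rel_thresh),
        key=lambda name: -abs(corr(features[name], target)))

    # Stage 2: drop features too correlated with an already-kept one.
    selected = []
    for name in relevant:
        if all(abs(corr(features[name], features[s])) < red_thresh
               for s in selected):
            selected.append(name)
    return selected

# "f2" is a scaled copy of "f1" (redundant); "noise" is irrelevant.
data = {"f1": [1, 2, 3, 4], "f2": [2, 4, 6, 8], "noise": [1, -1, 1, -1]}
selected = two_stage_select(data, target=[1, 2, 3, 4])
```

As the presenters noted, which relevance and redundancy measures work best still depends on the problem domain.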
The afternoon tutorial "Pyro: A Tool for Teaching Robotics and AI" by Douglas Blank and Holly Yanco held a lot of promise, with a demo of Pyro looming from the start. The development team did an excellent job on Pyro, and it is especially nice that they developed a bootable CD containing the Pyro software. Pyro is a rich platform for building robot controllers: it supports development in Python, can run with a number of robot simulators, and supports the control of most common physical robots. The hands-on nature of the tutorial captured my attention, but it droned on at times through a mix of well- and half-planned demonstrations. Anyone with an Intel-based laptop (or emulator) was able to boot the CD and run Pyro along with the presenters. An image of the CD, the slides, and plenty of documentation are available on their website, http://www.pyrorobotics.org, for anyone interested.
Overall, I was pleased with both tutorials, although the four hour sessions were quite tiring. I'm looking forward to tomorrow's sessions.

Howdy Bloggers

I'm Mark Carman. I come from Adelaide, Australia. I'm doing a Ph.D. in Trento, Italy. And I'm currently living and working in Los Angeles, California. Confusing, eh? I'm in the doctoral consortium at AAAI and haven't introduced myself till now because I just got back from a honeymoon in the south of Italy.... My Ph.D. work is on learning source definitions for use in composing web services. If you want to know what that means, come up to me at the conference and I'll be happy to tell you all about it. Today I gave a talk at the Workshop on Planning for Grid and Web Services. I think people actually understood what I was talking about, so I was quite happy with how it went. For the rest of the day I've been listening to the presentations at the doctoral consortium. The talks were quite interesting. It seems like reinforcement learning is in fashion this year: four of the talks were somehow related to it! Below are a couple of photos I took. Note the size of the screen; I think it is the biggest I've ever seen! We're off for dinner at the Sonoma Grill now, so I'll have to post comments on the talks later.....

Yet another introduction

Hi everyone, I'm Bhaskara Marthi, a Ph.D. student at UC Berkeley working on reinforcement learning. I will be attending the doctoral consortium this weekend, and have just been doing some last-minute preparation for my talk there, but I thought I'd post a quick hello before getting to sleep. Some of the other posts on the agent school seem interesting, and I look forward to reading them in more detail. Good night, all!

Friday, July 08, 2005


I think everyone would agree that today at the agent school was much less controversial than yesterday, although no less energized. Several speakers were visibly sweating from jumping around and getting worked up with excitement about their topics, most notably Michael Littman (learning & planning in Markov environments) and Sven Koenig (auction-based agent coordination).

It's clear that the conference proper is about to begin. I'm staying in the Duquesne University dorms with several other students from my school (Drexel University in Philadelphia). For the past several days the hallways have been very empty, with just a few other agent school attendees here and all of us spread out across the building. Tonight there are noticeably more people around. I gather that by Sunday night the dorm is going to be filled with some 140 students here for AAAI. It's imparting to the whole affair a sense of energy building up, waiting to break out.

One of the more notable events today was the agent school poster session. Despite being very low-key, with few people in town yet, I thought it was reasonably well attended. Certainly I spent so much time talking to people that only a few rolls remained by the time I got to wander over to the buffet. Of course that's entirely a good thing (although I'm still starving); discussion and feedback like that are always useful. It's interesting watching what you wind up talking about with different people at a poster session, especially if your work has several distinct chunks that can be discussed in their own right. Their questions and your scattered brain make each mini-session unique, often touching mostly on just one part of what you're trying to present.

But, the hour grows late and there's yet more sessions to go at something roughly approximating the crack of dawn...

Model Airplanes

I've been thinking about this all day. If the morning session of this first day of the Agent School had a byword, it'd be "situated." A close contender would be "embodied." It started with Jay Modi presenting one view of agents as being "situated AI," intelligence in a context. The thing that happens when a theorem prover meets a gripper or a video stream is wired into some neural network. Gal Kaminka continued the theme, presenting why robots aren't just agents, but agents in the hardest class of settings---dynamic, non-accessible, non-deterministic, and continuous environments. Afterward, Manuela Veloso upped the ante in her talk by vigorously proclaiming that "Only robots are agents!"

Later I was thinking about the fact that as a kid I was very into flying model airplanes. Large portions of my allowance were spent on balsa wood and glue, building ever fancier constructions. Eventually though I switched, focusing almost completely on programming. There were lots of reasons, but I have to admit that at least some part was that I got tired of all the tinkering and real world inconveniences. Mix the doping agent a little too heavy and your wings warp. Too thin and the covering fails. Be prepared to sand and sand until you get a piece shaped right. Then, once built, be prepared to spend hours trimming and adjusting your beauty until it flies just right, then watch in horror as a gust of wind takes it into a tree. Software just seemed so much cleaner, doing what you told it to do and only what you told it to do.

It was of course then karmic destiny that years later my first "real" job would be as a TA for an intro robotics course. I found myself spending untold hours regluing sensors torn from their LEGO mounts, calibrating finicky infrared distance sensors, and analyzing the slippage of different tire materials in our test arena. The gods were surely smiling as software betrayed me, presenting as much uncertainty and requiring as much trial and error and tinkering as my airplanes, or maybe more.

I, at least, find it hard (but not impossible) to argue against Dr. Veloso's stated point that robots are real agents, and "software agents" something less challenging. Interestingly, most counter-arguments that I can come up with or have heard seem to center on an agent that interacts with other agents or has some meaningful connection to the physical world, e.g. a sensor feed or even network access.

What I find less arguable, though, is the closely related point that actually developing systems entails a great deal beyond theory and architectures. There's all that hacking, tinkering, caveats, and hard-earned special constants and thresholds that actually make things work well. Unfortunately, that is often the bulk of the work, yet it receives little attention or accolade in its own right.

However, I think such efforts have their own rewards. Besides the accomplishment of seeing things actually working, I find that implementation in real settings---difficult, annoying, and time-consuming though it may be---often highlights new problems of interest. My lab does a lot of work with PDAs, tablets, and other small computing devices. In pretty much every way, developing with them is a terrible and frustrating experience. You haven't felt pain until you've had to use a stylus to peck away at reconfiguring an iPaq for a demo, or worried about repackaging code libraries to fit the space on a memory card. But, as my advisor claimed it would, actually trying things on them has on several occasions pointed out issues and opened up fascinating new areas to play around in.

Whether it requires robots or not to do so is perhaps arguable, but I think then that the notion put forth in the session of "agents" as real things situated in an environment and subject to its challenges and rigor, is an important notion with a large role in testing and developing the underlying science of AI...

(I confess though to still avoiding actual implementation like the plague; it seems a lot like real work)

Thursday, July 07, 2005

The men on the poster, robot soccer and a grand challenge...

Manuela Veloso opened her ASAMAS talk today by asking how many of us knew who the men playing chess on the conference poster were. The audience seemed mostly blank on this point, though I imagine the sizable contingent of CMU students did know the answer but were keeping quiet. The men, of course, are Herb Simon and Allen Newell, two of the founders of AI. I am always interested to hear about the history of scientific fields. We were taught little about the history of computer science at my undergrad school, so since I came to CMU I have been trying to catch up. Fortunately, no one seems to talk more passionately about AI history than Manuela. She advised us all to get hold of Herb Simon’s autobiography, “Models of My Life”, but admitted that she had just bought every copy available on Amazon. So interested attendees may have to resort to libraries or the second-hand market.

As Maayan already mentioned, Manuela challenged the audience’s concept of agents by claiming that software agents are not quite real agents because they don’t really have a perception component. While some software agents probably do have a perception component (e.g. smart rooms, as suggested by an anonymous commenter), it is easy to forget about the issue of perception when working on software agents and theory. Manuela gave a very entertaining account of the problem of perception and uncertainty in robot soccer, describing her “soccer mum”-like response to a referee in 1998 picking up one of CMU’s robots during a game and putting it somewhere else on the field. At the time, the concept of a robot being picked up had not occurred to the team, because most robots were heavy like Xavier, or designed for Mars or volcanoes or similar places where there were simply no people around to pick them up! So the nice mathematics behind the soccer robots’ localization simply had not been designed to cope with being transported. But the story had a happy ending: despite the robots spending most of the game completely lost (every time they found themselves, they were picked up again), the team somehow managed to win. :)

Manuela’s example served to illustrate that it is very important to put ideas and theories into practice. The hacking required may be tiresome sometimes, but it is not possible to foresee all eventualities when designing agents (robotic or otherwise) that need to interact with the environment, people and other agents. Implementing agents that actually operate in the real environments they are designed for can expose important new challenges!

I’ll end this entry by asking for suggestions about a question Manuela asked – what is a grand challenge for software agents? In robotics there is the DARPA grand challenge and robot soccer, but is there something similar we can aim for in software agents? Does the Trading Agent competition or the General Game Playing competition qualify?

Another Hello (this time from Paolo)

I'm Paolo Massa, a Ph.D. student at the University of Trento, Italy. I'm coming to AAAI to present a paper, "Controversial Users demand Local Trust Metrics: an Experimental Study on Epinions.com Community" (pdf), and to share ideas. I'm really looking forward to this conference; if you happen to be interested in trust, reputation, recommender systems, or social software, don't hesitate to contact me! Anyway, I'll be happy to discuss any topic ;-)
Being a blogger myself, I guess I'll just guest-blog here sometimes while keeping my own blog. I'll also try to post photos on Flickr (see paolo on Flickr) and tag them with the tag aaai05.
Lastly let me mention that, while in Pittsburgh, I'll be hosted by a guy I contacted via HospitalityClub. What is HospitalityClub?
Our aim is to bring people together - hosts and guests, travelers and locals. Thousands of Hospitality Club members around the world help each other when they are traveling - be it with a roof for the night or a guided tour through town. Joining is free, takes just a minute and everyone is welcome. Members can look at each other's profiles, send messages and post comments about their experience on the website. The club is supported by volunteers who believe in one idea: by bringing travelers in touch with people in the place they visit, and by giving "locals" a chance to meet people from other cultures we can increase intercultural understanding and strengthen the peace on our planet.
I have already used it, and it is a perfect opportunity to meet "local" people when you travel and to give anyone the chance to travel and see the world. I also often use CouchSurfing, a site with a similar goal. My suggestion is to sign up for both HospitalityClub and CouchSurfing. I'm looking forward to interesting discussions. See you in a few days!
paolo from Italy.

AAAI-05 Word Search Puzzle

Jim Mayfield has created a special edition word search puzzle to commemorate AAAI-05. The AAAI-05 Word Search Puzzle was generated by a fairly sophisticated program that uses heuristic search and a language model to make the puzzle both compact and challenging. A limited premium edition printed on parchment paper stock with a gold foil border is available for an additional charge.

So, what’s an agent, anyways?

I spent the morning thinking about this question. The second speaker at ASAMAS, Gal Kaminka of Bar Ilan University, started his talk with the claim that robots are agents. I immediately objected. “Well,” I thought, “it’s certainly true that some robots are agents. But just like you wouldn’t say that every agent is a robot, you can’t say that every robot is an agent.” So then I tried to come up with an example of a robot that I would say is not an agent. The best example I could think of was a robotic arm. Clearly, no one would say that an arm is an agent.

But then, I thought, why not? Why can’t a robotic arm be an agent?

Fortunately, during his opening talk, Jay Modi asked us to do a team exercise in which each team listed the three central features or properties of agents. My group proposed that agents must be autonomous, goal-oriented, and operate using local information. Well, a robot arm can be autonomous, and once it has a task to do, it's certainly goal-oriented. I paused for a moment at the idea of local information, since a robot arm is usually not equipped with a sensor like a camera and is generally not tasked with observing its environment. But then I realized: what could be more local than the information a robot gets from the pressure responses on its gripper? Ok, according to our criteria, a robot arm is an agent.

What about the features specified by the other teams? A robot arm interacts with its environment. An arm operates under uncertainty. It is proactive (whatever that means). One concept that several teams brought up is learning. According to them, a key characteristic of an agent is that it learns. Ok, a robot arm doesn’t learn. Well, I happen to disagree that learning is a necessary condition for classification as an agent, so that’s not enough, in my opinion, to disqualify the arm. What about interaction? A robot arm doesn’t interact with anyone. Maybe an agent is only an agent if it interacts with other agents. Which brings me to my central question:

If a robot climbs a tree in a forest and no one is around to see it fall off, is it still an agent?

(I should mention that I spoke to Gal after his talk, and he brought up several examples of robots that he does not consider to be agents. For example, teleoperated robots are not agents. He also brought up the RoboCup small-size soccer league. In that league, sensing and computation are centralized, so while the whole team is an agent, each individual robot is merely an appendage of that agent.)

Interestingly, in her talk at 11:00, my advisor Manuela Veloso (I'm also co-advised by Reid Simmons) made the controversial claim that the only real agents are robots. She said, “Saying that software agents are real agents is a little bit of cheating,” because software agents do not perform perception.