
Monday, July 18, 2005

The future is bright

Just a quick post to say thanks to everyone who helped make this conference an especially fascinating and enjoyable experience for me. I feel I have a broader outlook on the academic community in general, as well as a great deal of excitement about the future of both human and artificial intelligence. I think this blog was a great idea for the conference, and I feel honoured to have been a participant. It was also great to see the interest in the broad field of Artificial Intelligence from such a diverse range of people and organizations. I believe it was Dr. Minsky (or someone else?) who said that AI is about solving problems that have not been solved yet, or making computers do things which only humans could do in the past. Either of these goals is worthwhile, but I think we have to go about our work with a great deal of respect for the past, for institutions, for experts, and for leaders. The future is a bright one, and we just need to remember "respect". (Sounds like a football mantra or something, but it could apply to AI researchers and theorists too.)

Saturday, July 16, 2005

Churches and Cathedrals

What a week! After the Demo on Tuesday my whole team was pretty tapped. We went to the Church Brew Works to recover... What better way than enjoying a beer at church : ) After the couple of talks on Wednesday we met up with professor Yang Cai at his lab at CMU. One of his students showed us around the campus (very nice!). The buildings were extremely old and well kept, and the receptionist in the CS building was a roboceptionist, a good mix of old and new. We got to play with the lab's eye tracking system (see the picture on Flickr). If you look at a spot on the screen for 5 seconds or longer, that spot gets selected. We headed over to the Cathedral of Learning to find 42 stories of classrooms and lounges for students to study in. After spending some time taking pictures of the main entrance, we headed inside to find a gorgeous foyer that required more photo-taking time (thank God for digital). We finally headed up the network of elevators that you have to take to get to the top; each one only went up 10 floors or so. Once at the top we stumbled into a group of U Pitt students in what must have been a student lounge. I overheard them talking about their experiences interning at the hospital and noticed that one of the guys had his shirt off... "Very odd" I thought, "Where's the gym?" I didn't ask of course, but after checking out the view from the top of Pittsburgh, we headed down and met a nice woman in the elevator who explained the shirtless guy... he had climbed the 42 stories of stairs for exercise. Looking back at the week, I couldn't have asked for a better first AAAI. I met a whole community of people with similar interests and got a glimpse of a beautiful city and two universities. Thanks to everyone who helped make AAAI a success!

Thursday, July 14, 2005

AAAI robot movies

For those of you who were not present at the conference and thus did not get to see some of the cool robots in action, here is your chance: robot1 & robot2

[Note: You may have to tilt your head slightly (where slightly = by 90 deg) to watch the second video]

Wednesday, July 13, 2005

Jim Hendler: knowledge is power

Jim Hendler's presentation on the semantic web was as fully attended as the "Web 2.0" talk given by Tanenbaum. He used lots of demos, e.g. RDF in PDF, Swoop, and Swoogle, to make the case for the practical side of the semantic web -- "You are here". The simple semantic web is less expressive than existing KR languages; however, it does hold a significant amount of knowledge (millions of documents and thousands of ontologies).
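For anyone reading who hasn't actually touched RDF: here's a tiny sketch of what loading and querying semantic web data can look like, using the Python rdflib library. This is purely my own illustration (not one of Hendler's demos), and the file URL is just a placeholder for any RDF/FOAF document you might point it at.

  from rdflib import Graph

  g = Graph()
  # Placeholder URL -- substitute any real RDF/FOAF document
  g.parse("http://example.org/foaf.rdf")

  # SPARQL query: list the names of people described in the document
  results = g.query("""
      SELECT ?name WHERE {
          ?person <http://xmlns.com/foaf/0.1/name> ?name .
      }
  """)
  for row in results:
      print(row.name)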

Participating in an experiment

The "hall of robots" is a fun aspect of the conference. Whether or not they do interesting things or represent scientific advancement is beside the point, The point is that the hall is filled with beeping, twitching pieces of machinery covered with blinking lights, raising the chaos and geek-joy levels all around. Back in one corner, however, there was something special going on, however: a joint NRL/Missouri team who were actually gathering experimental data right here at the conference. I signed up to play with their robots, and got to give them some data as well: during the experiment, you drive the robots through a sketch interface from inside a booth where you can't see them, and outside the passerby can watch them surging about doing the tasks you're assigning them. I thought this was a really neat idea: on the one hand, they get real data gathered --- on the other hand, they have an excellent, changing attraction to attract people's attention.

Tuesday, July 12, 2005

Google's conference break game

Google played an interesting game to gather people during the conference break: they handed out numbered tags to people walking by without telling them what would happen next. Very soon, several dozen people had stopped and clustered in front of Google's table. About 50 people got tags, including me. In the end, around 100 people were squeezed into a small area waiting for the final result, which turned out to be a lottery for some fun stuff. Even after it ended, the game kept people around talking about the popular word "google". FYI: The American Dialect Society chose the verb "to google" as the "most useful word of 2002." (source: Wikipedia)

A Little Story

Yesterday I blogged that the Sherbrooke robot Spartacus needed to use the elevators in the hotel to move between floors, as part of the Robot Challenge. While I haven't heard how that part of the Challenge went, I had a rather didactic experience myself with the elevator today that led me into some thinking regarding robot (and human) elevator conversation etiquette. (Please note that I haven't actually studied this area, nor have I reviewed existing work.) In my short life I have lived in various single family homes and university dorms, which did not have elevators - only stairs - and thus I am quite experienced with stairwell conversation etiquette. However, stairwell etiquette does not transfer very well to elevator etiquette, for several reasons. The most obvious differences (to me) are the types of activity involved and the spatial considerations (stairs: walking in front of, or behind, the conversation partner, and speaking while stepping; elevator: standing in a small enclosed space facing the conversation partner, maintaining an appropriate distance from the other elevator occupants). But the other, more interesting differences arise from the temporal constraints (or lack thereof) and their impact on the conversation's duration and content.

I'll give a personal example to illustrate. Late Monday evening I was playing some jazz improvisations on the Westin hotel's lovely concert grand piano. After I finished, I had a chance to meet with one of the AAAI invited speakers who shared his interest in jazz performance and we briefly discussed some of our music related research and projects. All went well and I felt energized after such a stimulating day of conversations with such fascinating people. I meandered up to the next floor using the escalator and then realized that I needed to take the elevator to return to my hotel room. I pushed the elevator call button and when the door opened, discovered that the same invited speaker was already in the elevator. Now, this is where my brain's attempt to use a stairwell (or perhaps a hallway) conversation rule failed rather spectacularly (at least in terms of the conversation's success).

Fortunately it was a temporal, not a spatial, rule which was used incorrectly. Basically, what happened was that I didn't take into account the sharply defined time constraint imposed by the elevator itself, and when the invited speaker politely mentioned one of my projects, I launched into a series of statements about the project, probably due to my excitement about the subject. Unfortunately for me, the elevator abruptly "dinged", the door opened, and the speaker exited, saying a terse "good night" and leaving me in a rather awkward state. Some questions: If the main actor were a robot, would it detect this conversation failure? Could it learn from the mistake? (I hope I myself will!) Could a robot create a blog, or a narrative describing an incident that it experienced? What would an intelligent robot do if it entered an elevator with two invited speakers, one a robot and the other human (where presumably the conversation rules / protocol would be different for each)? For example, if it decided to converse with the robot speaker, would it use natural language so as not to alienate the human speaker? Or maybe it would have a wireless, data-based conversation with the robot and a simultaneous natural language conversation with the human speaker. (But the time constraint might not apply to the wireless mode, and perhaps the two robots would not determine their conversation patterns by locality: robots might be connected to an intra-robot communications network which determines conversation patterns in other ways - I mentioned this to Caroline and it made her think of something from Jungian psychology.)

Anyways, enough rambling for now!

Monday, July 11, 2005

So, What's AI Research, Anyways?

Hi everyone, I am Priyang Rathod. I am a PhD student at UMBC, working with Marie desJardins. I have been meaning to blog since the day of my arrival here, but could not, because I am staying at the student housing at Duquesne University :(.

On the opening day of ASAMAS, Gal Kaminka of Bar Ilan University gave an Introduction to Agents and Multiagent Systems. At the beginning of his talk, he mentioned an incident when his paper was rejected at an Agents conference because one of the reviewers thought that the research presented was not related to Agents. Gal was quite annoyed about that for a couple of weeks. That's to be expected; anyone would be annoyed if a peer reviewer decided that you are not doing what you think you are doing. But later Gal also talked about meeting a researcher at Bar Ilan who makes the best batteries, and how, in his view, the battery maker was also a robotics researcher.

That got me thinking: Where is the line between AI Research and Non-AI Research? Does Machine Vision or Robotic Arm Design count as AI research?? Well, I think many would say so. But then what about the batteries and motors used in robots?? Is that AI research? If it is, then what about the chemicals used in batteries, which can be used in a robot? Is THAT AI research? How far do we go? Where do we draw the line?

[summary] sister conference highlights (monday morning)

The sister conference session is a convenient way to catch up on relevant and interesting research from conferences you have missed in the past. I'm a little bit surprised that this session is not well attended.

KDD 04: The tutorials reflect current interests in data mining: data streams, time series, data quality and data cleaning, junk mail filtering, and graph analysis. This conference has an algorithmic and practical flavor, e.g. clustering as a more concrete form of "levels of abstraction". There was also a lot of industrial participation, and the KDD Cup went well.

ICAPS 05: This is a fairly young conference which merged several previous conferences on planning and scheduling. Scheduling papers are increasing (25%), and search continues to play a prominent role. The best paper presents a "complete and optimal", "memory-bounded" beam search. Another interesting paper learns action models from plan traces without the need to manually annotate intermediate nodes in the traces. The competition on knowledge engineering is another way to attract participants from various relevant domains. Interesting points of agreement among participants: "applications get no respect", "too many people spend too much time working on meaningless theories".
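For readers who haven't met beam search before, here is a bare-bones sketch of the plain, width-limited version in Python -- just my own generic illustration of the idea, not the complete, optimal, memory-bounded algorithm from the best paper:

  import heapq

  def beam_search(start, successors, heuristic, is_goal, beam_width=3):
      """Plain beam search: expand every state in the current beam, then
      keep only the beam_width most promising successors (lower heuristic
      is better). Unlike the ICAPS best paper's variant, this version is
      neither complete nor optimal."""
      beam = [start]
      while beam:
          candidates = []
          for state in beam:
              for nxt in successors(state):
                  if is_goal(nxt):
                      return nxt
                  candidates.append((heuristic(nxt), nxt))
          # Prune: this is what bounds the memory used per search level
          beam = [s for _, s in heapq.nsmallest(beam_width, candidates,
                                                key=lambda c: c[0])]
      return None  # beam emptied without reaching a goal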

UAI 04: Probability and graphical representations dominate these years. The tutorials clustered around graphical models (esp. Bayesian belief networks). The best paper presents case-factor diagrams, a new representation for structured statistical models that offers compactness, as demonstrated on the problem of "finding the most likely parse of a given sentence".

Sunday, July 10, 2005

Agents Don't Need to be "Super Intelligent" to be Helpful

I had another great day attending tutorial sessions. The morning was Paul Cohen's excellent "Empirical Methods for Artificial Intelligence", which I believe should be required material for nearly anyone in AI - as he describes, there are big benefits (in terms of the advancement of the field) when scientific/empirical approaches are combined with theoretical ones. This statement was echoed by others I talked with today, including the University of Sherbrooke's LABORIUS robotics team, who intend to use their AZIMUT-2 modular and omnidirectional robot as a platform for validating machine learning algorithms. The AZIMUT-2 robot has some kind of funky spring mechanism in his (or her?) wheel motors which allows the robot to sense changes in the terrain, much as we humans receive feedback about the ground when walking.

Sherbrooke's Spartacus robot is actually the only robot at this conference attempting the daunting task of competing in the Robot Challenge. Tomorrow morning, Spartacus will be dropped off at the entrance to the hotel, and will have to somehow take the elevator up to the correct registration floor, find the right registration desk, and after registering perform volunteer duties (in lieu of paying the conference fee) until his scheduled presentation time, at which point he will present his latest work and answer questions from the audience. Not only that, but Spartacus will also interact and socialize with the other conference participants throughout. I wish him (it?) and the Sherbrooke team the best of luck!

Regarding robots, I am by no means an expert, or even remotely involved in that area myself, but I can easily envision a day when we will walk along a city street, no longer taking special notice of the additional pedestrian traffic: autonomous robots who will scurry about their daily business just as we humans do today. It shouldn't be too difficult to gain mass acceptance of these types of robots once they have been interviewed on TV, and come across as friendly, helpful, and even funny (maybe I'm going out on a limb here, but just wait 20 years and you'll see...)

In the afternoon I attended Mark T. Maybury's tutorial session on Intelligent User Interfaces. I can imagine that some of the attendees of this tutorial may have been put off by the somewhat dated video examples (for example, there were a few from the ever so ancient time of 1990 to 1995), but I believe (and Dr. Maybury stated) that the differences between the concepts and ideas illustrated by those videos and the state of the art today are largely cosmetic. For example, it seemed that a huge part of Intelligent User Interfaces involves multimodal input, where a user would simultaneously gesture, look at an object, and speak, and these inputs would be synthesized and used as a basis for decision making, learning, or executing a task. This is obviously a problem that has not been completely solved today, even though over 10 years have passed since the first successes were celebrated.

Dr. Maybury presented so many great ideas, some in more detail than others, but one idea which I was especially interested in (and which he generously expanded upon) is the concept of a software agent which identifies human experts within an organization by capturing and searching for keywords in the publicly available writings of the employees (e.g. if employees publish documents to a company repository, they can be considered to be fair game for keyword searching).

You might say that this system is not really that intelligent, but Maybury argued that this doesn't really matter - it can still be really helpful. (My example follows.) Let's say that company A has 2000 employees in 25 locations throughout the globe. Without this new software agent system, what usually happens is that if an employee needs to gain knowledge on a certain topic, he/she might consult their immediate social network to find an expert, such as by asking coworkers on the same floor, or perhaps someone in the same office who is a hub in the company's social network. (That's why I think that even in this day and age when telecommuting is possible, most large software firms still have (large) brick and mortar offices.) However, with a software agent that can identify experts throughout an organization regardless of location, these social networks are no longer required to find experts. (Much like how web search is reducing the need for personal referrals to small service-based companies.)
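Just to make the idea concrete, here is a toy sketch in Python of what such a keyword-based expert finder could look like. This is my own illustration only -- the systems Maybury described are obviously far more sophisticated -- and the employee names and documents below are made up.

  from collections import defaultdict

  def build_expert_index(documents):
      """documents: mapping of employee -> list of their published texts.
      Returns an inverted index from keyword to the set of employees
      whose writings mention it."""
      index = defaultdict(set)
      for employee, texts in documents.items():
          for text in texts:
              for word in text.lower().split():
                  index[word].add(employee)
      return index

  def find_experts(index, keyword):
      return sorted(index.get(keyword.lower(), set()))

  # Hypothetical company repository
  docs = {
      "alice": ["bayesian networks for sensor fusion"],
      "bob":   ["ontology alignment for semantic web services"],
      "carol": ["crawling the semantic web with swoogle"],
  }
  index = build_expert_index(docs)
  print(find_experts(index, "semantic"))   # ['bob', 'carol']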

In the future I really don't see how large companies could afford not to employ such an expert finding system (...and maybe some already have, but are just not telling.) Dr. Maybury mentioned that his organization (MITRE) did publish a paper on this idea before any patents were filed, so there is potentially still an opportunity for some newcomers to jump in with a fancy new product to serve this purpose. He gave one commercial example of Tacit.com which attempts to build the expert database using employee email monitoring. On a side note, just imagine what kind of "expert database" Google could have of the world, if they mined their Gmail archives 10 years from now! (Not that they would ever do that of course without our consent, but what if users requested that feature?!)

I have networked, and it was good.

The opening reception is finally trailing off, after we were all shooed out of the ballroom (the hotel staff were rather pointed about the fact that it was time to go), and I've just met a whole bunch of neat people. One of the reasons I wanted to come to AAAI is to get a feel for the community of researchers who identify as AI people, and try to figure out how we relate to one another. A big problem I've had as a graduate student is getting a feel for what the rest of the field is doing --- my own research cuts across a lot of different areas, and I've found it virtually impossible to know what's going on in all of them. Well, if there's anywhere where all of AI is represented, it ought to be here, and I certainly met enough folks doing all sorts of different things tonight. Lots of good feeling in that room and people easy to talk to and happy to open up. I think I may have just spent the evening officially networking, and it was good.

Moving Minsky

wow... Michael, Geoff and I were just playing around with the Segway at the opening reception, when Marvin Minsky hopped on! Check out the pic I got : )

Saturday, July 09, 2005

AAAI Doctoral Consortium summary (day one)

AAAI DC opening: Kiri's welcome speech
Kiri, the chair, gave a welcome speech, and 8 PhD students presented their thesis work, focusing on learning and planning.  (Thanks to Mykel for helping me revise this post.)

Jennifer Neville (UMass) -- Structure Learning for Statistical Relational Models

Motivation: latent groups can be detected from graph structure as well as node attributes. Subjects: graph analysis, clustering. Issues: (i) partitioning a network into groups using both link structure analysis and node-property clustering (using EM); (ii) using the group assignments for better attribute-value prediction and unbiased feature selection. Comments: Groups, as described in this talk, are disjoint; however, further work might cover cases where a node may belong to multiple groups. This can be viewed as a clustering problem which uses features from node attributes as well as graph structure information.

Shimon Whiteson (UTexas Austin) -- Improving Reinforcement Learning Function Approximators via Neuroevolution. Motivation: an adaptive scheduling policy can be learned using neuroevolution. Subjects: function approximation, reinforcement learning, evolutionary neural networks. Issues: (i) the transition matrix (state-action table) can be huge, and a neural network (NN) based function approximator (FA) is a compact alternative; (ii) NEAT+Q, which evolves both the weights and the topology of the NN using NEAT and Q-learning, helps. Experiments show that Darwinian evolution achieves the best performance compared to Lamarckian evolution, randomized strategies, etc. Comments: Many are skeptical about whether evolving NNs could be used for online scheduling, since it takes a lot of computation time. In order to evaluate the significance of the improvement, the optimal and worst cases are needed.

Bhaskara Marthi (UC Berkeley) -- Discourse Factors in Multi-Document Summarization. Motivation: decomposition is needed for planning asynchronous tasks and many joint choices in a giant state space. Subjects: planning, reinforcement learning, optimization. Issues: (i) concurrent ALisp and coordination graphs are used for concurrent hierarchical task decomposition; (ii) decomposing the Q function w.r.t. subroutines requires reward decomposition, which is hard. Comments: Decomposing the Q function for the reward of "exiting a subroutine" is hard since there could be thousands of ways to exit.

Trey Smith (CMU) -- Rover Science Autonomy: Probabilistic Planning for Science-Aware Exploration. Motivation: discover scientifically interesting objects in extreme environments (e.g., Mars). Subjects: planning. Issues: (i) planning navigation -- with maximum coverage over a spatial extent; (ii) selective sampling -- variety is preferred to large numbers of copies of the same sample.


Marie desJardins (mentor from UMBC) on preparing talks for different contexts:

  1. Job talk. Try to amuse people with your work and how complex it is.
  2. Doctoral Consortium talk. Present technical details and expose your strengths and weaknesses in front of the mentors; they are external experts who will provide valuable comments from all angles. Treat it as a trial thesis defense...
  3. Conference talk. Present ideas and keep the audience awake...

Snehal Thakkar (USC) -- Planning for Geospatial Data Integration. Motivation: integrate spatially related data on the Web, and support query answering using planning. Subjects: information integration, geospatial information systems. Issues: (i) a hierarchical ontology/taxonomy for modeling the spatial application domain; (ii) planning spatial information queries using filtering (to reduce querying of irrelevant sources).

Ozgur Simsek (UMass) -- Towards Competence in Autonomous Agents. Motivation: define "useful skills", and let agents learn them. Subjects: learning, knowledge discovery. Issues: "useful skills" are defined in three categories: (1) access skills, which identify "access states" critical for making the search space fully searchable, especially for hard-to-access regions; (2) approach-discovery skills, i.e. how to achieve an "access state"; and (3) causal-discovery skills, which identify causal relations. Comments: Remembering past states is still hard when the problem space scales. Experience can be reused, as can the skills themselves.

Mykel J. Kochenderfer (U Edinburgh) -- Adaptive Modeling and Planning for Reactive Agents. Motivation: efficient planning in real time for complex problems with large state and action spaces requires partitioning these spaces into a manageable number of regions. Subjects: reinforcement learning, clustering. Issues: learn to partition the state and action spaces using online split and merge operations. Comments: This could be viewed as an incremental clustering problem, in which nodes along trajectories are sample points for generating clusters and thus induce a partition of the state and action spaces.

Vincent Conitzer (CMU) -- Computational Aspects of Mechanism Design. Motivation: reach an optimal outcome by aggregating individual preferences. Subjects: information aggregation, game theory, multi-item optimization. Issues: to achieve an optimal outcome over multiple preferences, we can use automated aggregation mechanisms (e.g. voting and auctions) and bound agents' behavior. The VCG auction encourages agents to reveal their true preferences (it discourages lying).
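(As a tiny aside, the simplest special case of VCG is the single-item second-price, or Vickrey, auction, and it shows nicely why truth-telling becomes the dominant strategy. The sketch below is my own toy illustration, not something presented in the talk; the bidder names and values are invented.)

  def second_price_auction(bids):
      """bids: mapping of bidder -> bid amount.
      The highest bidder wins but pays only the second-highest bid,
      which is what makes truthful bidding a dominant strategy."""
      ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
      winner = ranked[0][0]
      price = ranked[1][1] if len(ranked) > 1 else 0
      return winner, price

  # Toy example
  print(second_price_auction({"ann": 10, "bo": 7, "cy": 4}))   # ('ann', 7)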

Flat Beer

Wow, what a long day! We arrived last night from Vancouver after a day of travel. Today I attended two 4-hour tutorials, Word Sense Disambiguation and Pyro; both were very enjoyable, but long. I am especially interested in the first subject as I am currently working on a project to assign semantic orientation to adjectives. Rada Mihalcea from the University of North Texas was joined by Ted Pedersen from the U of Minnesota to give a general overview of the methods, problems, and suggested solutions to the problem of word sense disambiguation. I found this presentation to be a very good review of my CMPT 413 Natural Language Processing course at SFU, adding a bit more detail on knowledge-based methods. I was quite excited to see my prof's name cited in one of the slides on co-training in minimally supervised methods: "Statistical Parsing (Sarkar, 2001)".

For those of you not familiar with NLP, there are three basic approaches to analyzing data. The knowledge-based approach uses knowledge sources like dictionaries to help find meaning in raw data. Supervised approaches use human-annotated data in conjunction with dictionaries, while unsupervised approaches use neither a dictionary nor annotated data, but rather look at the raw text to find similarity between contexts (this is indeed real intelligence).
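To show what the knowledge-based approach looks like in practice, here is a toy Python sketch in the spirit of the classic Lesk algorithm, which picks the sense whose dictionary gloss shares the most words with the surrounding context. The sense inventory below is invented for illustration, not taken from a real dictionary or from the tutorial.

  def lesk(context_sentence, sense_inventory):
      """Pick the sense whose gloss overlaps most with the context words.
      sense_inventory: mapping of sense name -> gloss text."""
      context = set(context_sentence.lower().split())
      best_sense, best_overlap = None, -1
      for sense, gloss in sense_inventory.items():
          overlap = len(context & set(gloss.lower().split()))
          if overlap > best_overlap:
              best_sense, best_overlap = sense, overlap
      return best_sense

  # Toy sense inventory for "bank" (glosses invented for illustration)
  senses = {
      "bank/finance": "an institution that accepts deposits and lends money",
      "bank/river":   "the sloping land beside a body of water such as a river",
  }
  print(lesk("we sat on the grassy bank of the river watching the water", senses))
  # -> bank/river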

After the tutorials we mingled in the lobby trying to set up wireless access (which works much better now). I found this guy with the most appalling t-shirt. It turned out that he was part of a big group of people from UMBC. They seemed to be more in touch with the Pittsburgh scene than my partners and I, so Geoff, Michael and I tagged along with their plan for dinner at the Seventh Avenue Grille. The beer was flat and the pasta portions a bit too small, but my Chicken with Cherry Sauce was quite yummy.

Until tomorrow,

Caroline

The Web as a Collective Mind

After experiencing the veritable wonder of the Westin's "heavenly" showers, I headed up to the 3rd floor registration, where the friendly staff quickly decked me out with assorted conference memorabilia. This is fun, I thought! Then I trooped over to Westmoreland East for Rada Mihalcea and Ted Pedersen's tutorial session on "Advances in Word Sense Disambiguation". I found their presentation to be very accessible, and an excellent overview and introduction to the topic. One idea which piqued my interest is "bootstrapping", where you start with a small collection of labeled data (for use in a classifier), use it to classify unlabeled data, and, when confident, add the newly labeled examples to the training set for future classification. Another really neat idea is to take advantage of the web as a "collective mind", where visitors to a web site help to train a classifier to disambiguate word sense. Rada Mihalcea (who created the online system called Teach Computers.org) did admit that one of the main challenges with this approach is motivating web users to participate in such a project, and she suggested that it be formulated in terms of a game or competition. I've found with some of my own projects (such as Gender Guesser) that users are willing to contribute part of their "mind" to a web site if the web site gives them something back in return (for example, in my Gender Guesser case they contribute all sorts of unusual first names, and get back the gender of these names in return). Another form of payback from such a website would be to gain prestige within an online community, perhaps like how Slashdot gives points to users based on the frequency and ratings of their posts. This "collective web mind" harvesting approach is also something that our group is working on for training our Song Search by Tapping system.
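Since the bootstrapping idea was the one that stuck with me, I tried to write it down as code afterwards. The sketch below is a simplified self-training loop using scikit-learn; the 0.9 confidence threshold, the bag-of-words features, and the overall structure are my own illustrative choices, not details from the tutorial.

  from sklearn.feature_extraction.text import CountVectorizer
  from sklearn.naive_bayes import MultinomialNB

  def self_train(labeled_texts, labels, unlabeled_texts,
                 confidence=0.9, rounds=5):
      """Repeatedly train on the labeled pool, then move any unlabeled
      example classified with high confidence into that pool."""
      vec = CountVectorizer()
      vec.fit(list(labeled_texts) + list(unlabeled_texts))
      labeled_texts, labels = list(labeled_texts), list(labels)
      pool = list(unlabeled_texts)
      for _ in range(rounds):
          clf = MultinomialNB().fit(vec.transform(labeled_texts), labels)
          if not pool:
              break
          probs = clf.predict_proba(vec.transform(pool))
          keep = []
          for text, p in zip(pool, probs):
              if p.max() >= confidence:
                  labeled_texts.append(text)
                  labels.append(clf.classes_[p.argmax()])
              else:
                  keep.append(text)
          pool = keep
      return clf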

What I really liked about Mihalcea and Pedersen's talk is that they took the time to put together lists of resources for aspiring researchers in this field, including several freely available algorithm implementations such as SenseTools, SenseRelate, SenseLearner, and Unsupervised SenseClusters.

In the afternoon I attended Tuomas Sandholm's tutorial session on Market Clearing Algorithms, and I found the topic frankly quite fascinating!! One area he discussed was mechanism design for multi-item auctions, which are for "multiple distinguishable items when bidders have preferences over combinations of items: complementarity and substitutability". Some examples he gave of these types of auctions are in transportation, where a trucker would be willing to accept a lower rate if he/she wins the contract to transport goods both to and from a destination (as opposed to just one way). On the way to our hotel I noticed that our taxi was equipped with a fairly sophisticated wireless computer system, and I thought about how these types of auctions could also be relevant to taxi fare determination.

Other interesting points Tuomas discussed involved the game theory of auctions, and problems such as a single agent using pseudonyms to pose as multiple agents, and collusion between agents. Now that many auctions are happening virtually, preventing these problems becomes more difficult. Another set of ideas deals with the concept of an "elicitor", which facilitates the auction by "deciding what to ask from which bidder". Interestingly enough, with an elicitor there is an incentive to answer truthfully as long as all the other agents are also answering truthfully.
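Trying to pin down for myself what "market clearing" actually computes, I sketched the winner determination problem for a combinatorial auction below. This brute force over all subsets of bids is purely my own toy illustration (the algorithms Sandholm presented use much cleverer search; this one only works for a handful of bids), and the trucking example is made up.

  from itertools import combinations

  def winner_determination(bids):
      """bids: list of (bundle_of_items, price) pairs.
      Returns the revenue-maximizing set of bids that never sells
      the same item twice."""
      best_value, best_set = 0, []
      for r in range(1, len(bids) + 1):
          for subset in combinations(bids, r):
              items = [item for bundle, _ in subset for item in bundle]
              if len(items) == len(set(items)):          # no item overlap
                  value = sum(price for _, price in subset)
                  if value > best_value:
                      best_value, best_set = value, list(subset)
      return best_value, best_set

  # Complementarity: the round trip is worth more than the two legs apart
  bids = [(("to",), 5), (("from",), 5), (("to", "from"), 12)]
  print(winner_determination(bids))   # (12, [(('to', 'from'), 12)])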

Howdy Bloggers

I'm Mark Carman. I come from Adelaide, Australia. I'm doing a Ph.D. in Trento, Italy. And I'm currently living and working in Los Angeles, California. Confusing, hey? I'm in the doctoral consortium at AAAI and haven't introduced myself till now, because I just got back from a honeymoon in the south of Italy.... My PhD work is on learning source definitions for use in composing web services. If you want to know what that means, come up to me at the conference and I'll be happy to tell you all about it. Today I gave a talk at the Workshop on Planning for Grid and Web Services - I think people actually understood what I was talking about, so I was quite happy with how it went. For the rest of the day I've been listening to the presentations at the doctoral consortium. The talks were quite interesting. Seems like Reinforcement Learning is in fashion this year - four of the talks were somehow related to it! Below are a couple of photos I took. Note the size of the screen - I think it's the biggest I've ever seen! We're off for dinner at the Sonoma Grill now, so I'll have to post comments on the talks later.....

Thursday, July 07, 2005

Hello [AAAI] World! (from Li Ding)

I am Li DING, a 4th-year PhD student advised by Tim Finin in the eBiquity group at UMBC. I am originally from Beijing, China. My research focus is on representing and sharing knowledge using semantic web technologies. I maintain the Swoogle search engine for semantic web data. I have a FOAF file with a list of my friends, and AAAI 05 is surely a great place for augmenting this list. See you all there!

Tuesday, July 05, 2005

Hello from Caroline

Hi. I'm a Cognitive Science student at Simon Fraser University in Vancouver, British Columbia, Canada. I'm currently working with Maite Taboada on an NLP project to analyze text sentiment. I also work with Geoff Peters and Michael Schwartz on Song Search and Retrieval by Tapping which we will present at the Intelligent Systems Demo on Tuesday night. This is my first time at AAAI and I'm very excited to be blogging with such a diverse group of people.

Hello from Jake

My name is Jake Beal, and I'm a 5th year grad student at MIT. Despite being mostly an AI researcher, this is my first time at AAAI, and like many of the other bloggers, I'm coming to the Doctoral Consortium. My thesis supervisor is Gerry Sussman, and the AI half of my work is looking at how learning and reasoning can take place as a byproduct of translating between different perspectives. The other half of my work is amorphous computing, where I develop algorithms that let zillions of weak, unreliable devices work together and achieve programmed goals --- much as the cells in your body collaborate to build and repair things like hands and eyes. I've also dabbled in a number of other areas that my research has collided with, from game theory to networking to ecology. Now that I'm coming up on the end of my program, however, I'm forcing myself to buckle down and focus on my thesis.

Sunday, July 03, 2005

Hello from Geoff

Hi everyone! I'm an undergrad student from Simon Fraser University in computing science and business. I'm really looking forward to participating in this conference, both as a student volunteer blogger, and a presenter at the Intelligent Systems Demonstrations that are happening on Tuesday evening (July 12th). As a musician, I'm especially interested in the application of Artificial Intelligence to music, and our group's demo, Song Search by Tapping, reflects that. Our supervisor at SFU is Dr. Diana Cukierman.

I'm also really looking forward to attending the tutorial sessions on Saturday and Sunday of the conference (July 9th & 10th). Word sense disambiguation is an area I am interested in, especially applied to natural language of web pages. I am also going to attend sessions on Market Clearing Algorithms, Empirical Methods for Artificial Intelligence, and Intelligent User Interfaces.

I'm honoured to be part of this blogging group, among those with such diverse and fascinating interests. I look forward to meeting you all!

On a different note, does anyone know of some good places to check out for live jazz in Pittsburgh?