When I was at engineering school (~2000) some of my teachers were in a lab that focused on agent-oriented development. What was interesting is that the agent-oriented approach has a very technical side (lightweight concurrency and actors) as well as a very conceptual side (modeling of behaviours, intentions and knowledge, emerging behaviours). I was personally more interested in the actor side of things, as I didn't think there were good abstractions to deal with the true AI part of agents (knowledge representation, behavioural primitives). But now with all the fuss about ML, I'd really like real AI to come back to the forefront -- and now that the concurrency side of things has been pretty much solved, we can focus on the harder challenge of primitives for emerging AI.
I lean toward the Wasserman side here & consider the most interesting part of agents to be planning. Message-passing is almost mainstream these days, but backtracking & declarative programming (despite their power) are pretty rare still. From a UI perspective (as Negroponte says in this document) a network of agents capable of planning can expose the power of automation to non-technical end users.
This seems to be one of the components missing from most popular agent frameworks too, though. Huginn doesn't have it, & it doesn't even seem to be on the radar.
I'd love to see somebody make a minikanren encapsulation as a huginn agent, to integrate with other meta-control agents like the scheduler & the command agent.
Planners aren't actually very hard to write (particularly if you write them naively, like a first generation/pre-WAM prolog implementation). It's a shame we don't use them much, since the main argument against them (efficiency) is being flouted in much less productive ways in commonly-used software.
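To make the "not very hard to write" claim concrete, here's a rough sketch of the kind of naive planner I mean -- a depth-first Horn-clause solver with substitution-based unification and crude variable renaming, in the spirit of a pre-WAM prolog. All names and representations here are my own illustration, not any particular system's:

```python
# Naive backtracking solver: terms are tuples, atoms are lowercase
# strings, variables are capitalized strings. No occurs check, no
# indexing, no cut -- deliberately first-generation.

def is_var(t):
    return isinstance(t, str) and t[0].isupper()

def walk(t, s):
    # Follow variable bindings in substitution s until a non-bound term.
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(term, depth):
    # Suffix rule variables with the recursion depth so each rule
    # application gets fresh variables.
    if is_var(term):
        return f"{term}_{depth}"
    if isinstance(term, tuple):
        return tuple(rename(t, depth) for t in term)
    return term

def solve(goals, rules, s=None, depth=0):
    """Yield substitutions satisfying all goals, depth-first."""
    s = s or {}
    if not goals:
        yield s
        return
    first, rest = goals[0], goals[1:]
    for head, body in rules:
        head, body = rename((head, body), depth)
        s2 = unify(first, head, s)
        if s2 is not None:
            yield from solve(list(body) + rest, rules, s2, depth + 1)

rules = [
    (("parent", "tom", "bob"), ()),
    (("parent", "bob", "ann"), ()),
    (("grandparent", "X", "Z"), (("parent", "X", "Y"), ("parent", "Y", "Z"))),
]
solutions = list(solve([("grandparent", "tom", "Who")], rules))
```

Exponential in the worst case, of course -- but as I said, we tolerate far worse inefficiency elsewhere.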
Yes, but doesn't planning imply at least coordination, if not the notion of intention? What I like about the idea of cognitive agents is that they're not just mechanical constraint solvers, but have the notion of behaviours and intentions "encoded in their DNA", so that the solution is reached by coordination and evolution, as opposed to constraint evaluation and local minima/maxima exploration. I'm not familiar with Wasserman; do you have a favourite paper that I should read?
One last question: do you see agents more as software constructs (more like actors, i.e. encapsulated state with its own control flow, communicating using messages) or as conceptual entities (an entity with its own goals that interacts with others and the environment through exchange of messages)?
In this context, by 'planning' and 'planner' I just mean automated constraint solving / goal-directed programming. Ignoring the folks who consider java applets to be 'user agents', there are sort of two types: one looks like IFTTT, and one looks like IFTTT with prolog hooked up to it, & I like the latter a lot better (even though both are basically trivial).
Wasserman is the author of the RITA paper (https://www.laarc.io/item?id=904) & one of the RITA devs. He seems to have gone on to write a great deal about expert systems too.
Negroponte's model (as described in OP) seems to involve the agents learning preferences & behaviors implicitly. However, to be useful as an alternative to direct manipulation for non-technical users, such an agent still needs a planner (as slow as naive prolog can be, genetic algorithms & statistical-ML-based code generation are way worse). Learning mechanisms don't need to be statistical (they can be integrated parts of the planner in terms of preferences & other facts), & even statistical mechanisms can be integrated into the planner (ex., the RITA documentation cites MYCIN's ability to tag facts with floating point confidence values -- something a lot of expert systems do, & that I built into MYCROFT -- and such a mechanism can be used to perform fuzzy logic on observed behaviors, providing statistical learning in a predicate logic context; Goertzel et al.'s probabilistic logic networks provide the same kind of capability in a more nuanced way, & were my inspiration for the use of composite truth values in MYCROFT).
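For the curious: the MYCIN-style mechanism works roughly like this (a sketch from memory of the classic certainty-factor calculus -- this is an illustration of the general technique, not MYCROFT's actual composite truth values):

```python
# Each fact carries a certainty factor in [-1, 1]; evidence from
# independent rules is combined so repeated support asymptotically
# approaches 1 without ever reaching it.

def combine(cf1, cf2):
    """Combine two certainty factors for the same hypothesis."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    # Conflicting evidence: dampened by the weaker magnitude.
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def conjunction(cfs):
    """A rule fires with the confidence of its weakest premise."""
    return min(cfs)
```

So observing the same behavior twice with confidence 0.6 and 0.5 yields 0.8, and a chain of fuzzy premises is only as strong as its weakest link -- which is all you need to do statistical-flavored learning inside a predicate logic engine.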
In my view, an agent is an actor with a planner in it. (Obviously, this isn't how most folks define agents, but it captures what I think is most interesting about the field.)
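A toy rendering of "an actor with a planner in it", just to pin the definition down: the actor part is a mailbox processed sequentially in its own thread, and the planner part is whatever answers goals. The planner here is a stub lookup table standing in for a real backtracking solver; every name is invented for illustration:

```python
import queue
import threading

class Agent:
    def __init__(self, knowledge):
        self.knowledge = knowledge          # facts the planner consults
        self.mailbox = queue.Queue()        # the actor's message queue
        self.results = []

    def send(self, goal):
        self.mailbox.put(goal)

    def plan(self, goal):
        # Stub planner: direct fact lookup standing in for goal search.
        return self.knowledge.get(goal, "unknown")

    def run(self):
        # Actor loop: one message at a time, own control flow.
        while True:
            goal = self.mailbox.get()
            if goal is None:                # poison pill shuts the actor down
                break
            self.results.append((goal, self.plan(goal)))

agent = Agent({"door": "open"})
t = threading.Thread(target=agent.run)
t.start()
agent.send("door")
agent.send(None)
t.join()
```

Swap `plan` for the solver and you have the shape I mean: message-passing on the outside, goal-directed search on the inside.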
Ah, thanks for the link and details. This definitely makes sense, as it shifts agents from actors toward "intelligent" agents while still having fairly clearly defined attributes and mechanics (i.e. actor + planner).
What I'm particularly interested in now is how cooperation works in this context. For a start, the agent would need at least a goal, which I suppose would be expressed in terms of minimizing or maximizing the subset of the agent's knowledge that matches the constraints. Or do you have another approach in mind?
Then, once agents have goals, how do they acquire and exchange information? For cooperation to be useful, agents must have variation in their knowledge and be able to influence each other. I imagine some kind of genetic exchange of facts and their weighting might be an option, but I'm curious to know how you'd see that.
Last but not least, if you have many agents working toward the same goal, how do you determine that the goal has been reached and how do you pick the best result? This is more a question for when you're using a community of agents for a given task, as opposed to having 1 agent for 1 specific task.
MYCROFT looks cool, I hadn't heard about it, and thanks for making it open-source :)
> What I'm particularly interested in now is how cooperation works in this context. For a start, the agent would need at least a goal, which I suppose would be expressed in terms of minimizing or maximizing the subset of the agent's knowledge that matches the constraints. Or do you have another approach in mind?
MYCROFT is really supposed to be a distributed expert system, so it's focused on question-answering. I figure most agent architectures will be focused on executing tasks indirectly, which requires the same kind of question-answering infrastructure but also better support for (late-binding) side-effects.
For instance, an agent should be able to determine which of a set of connected agents can execute a task for it, schedule that task with respect to incoming events, and make reports about task success / results up the chain back to the user. This is a harder situation to handle since you need meta-information about scheduling: you need to communicate, between agents, how long tasks take, and handle potentially-conflicting scheduling constraints.
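The delegation step might look something like this -- a hedged sketch where connected agents advertise capabilities and duration estimates, and the delegating agent picks one whose estimate fits before the deadline (the fields `capabilities` and `est_seconds` are invented here to stand in for real scheduling meta-information):

```python
def delegate(task, deadline, now, agents):
    """Pick a connected agent that can run the task before the deadline."""
    candidates = [
        a for a in agents
        if task in a["capabilities"] and now + a["est_seconds"] <= deadline
    ]
    # Prefer the agent that reports finishing soonest; None means
    # no agent can satisfy the scheduling constraint, which should
    # be reported back up the chain to the user.
    return min(candidates, key=lambda a: a["est_seconds"], default=None)

agents = [
    {"name": "fetcher", "capabilities": {"fetch"}, "est_seconds": 30},
    {"name": "slow",    "capabilities": {"fetch"}, "est_seconds": 300},
]
```

The hard part this glosses over is exactly what I mentioned: the estimates have to be communicated and kept honest, and conflicting constraints across several pending tasks need real negotiation rather than a single `min`.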
> Then, once agents have goals, how do they acquire and exchange information? For cooperation to be useful, agents must have variation in their knowledge and be able to influence each other. I imagine some kind of genetic exchange of facts and their weighting might be an option, but I'm curious to know how you'd see that.
> Last but not least, if you have many agents working toward the same goal, how do you determine that the goal has been reached and how do you pick the best result? This is more a question for when you're using a community of agents for a given task, as opposed to having 1 agent for 1 specific task.
In MYCROFT, how I wanted to handle this was that nodes would distribute their queries to other nodes, & if the predicates were determinate, the computed responses would be sent back as facts.
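In outline (names invented here, not MYCROFT's actual API): a node first checks its local facts, otherwise forwards the query to peers, and if the predicate is determinate -- one answer, no side effects -- it stores the response locally so the network is never asked again:

```python
class Node:
    def __init__(self):
        self.facts = {}                      # locally known (pred, args) -> answer

    def query(self, pred, args, peers, determinate):
        key = (pred, args)
        if key in self.facts:                # answer locally if we can
            return self.facts[key]
        for peer in peers:
            answer = peer.answer(pred, args)
            if answer is not None:
                if determinate:
                    self.facts[key] = answer # computed response becomes a fact
                return answer
        return None

class Peer:
    def __init__(self, table):
        self.table = table

    def answer(self, pred, args):
        return self.table.get((pred, args))

peer = Peer({("capital", ("france",)): "paris"})
node = Node()
first = node.query("capital", ("france",), [peer], determinate=True)
```

Non-determinate predicates can't be cached this way, which is why the distinction matters for anything with side effects.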
MYCROFT wasn't really intended for a collection of task-specific agents but for a potentially-homogeneous open world of predicates, so I used chord-style routing. However, were I to make it task-specific, I'd probably add namespaces for each task & route based on the namespace first.
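Roughly, namespace-first routing would mean hashing twice: the task namespace picks which ring of nodes owns the task, then the predicate picks a node within that ring. A stand-in sketch, not the actual MYCROFT routing scheme (real chord routing uses finger tables over a single identifier space rather than nested lookups like this):

```python
import hashlib

def h(s, modulus):
    # Stable hash so routing decisions agree across nodes.
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % modulus

def route(namespace, predicate, rings):
    ring = rings[h(namespace, len(rings))]   # namespace picks the ring first
    return ring[h(predicate, len(ring))]     # then the predicate picks the node

rings = [["a", "b"], ["c", "d"]]
node = route("tasks", "fetch_weather", rings)
```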