
Ah, thanks for the link and details. This definitely makes sense, as it shifts agents from actors toward "intelligent" agents while still having fairly clearly defined attributes and mechanics (i.e. actor + planner).

What I'm particularly interested in now is how cooperation works in this context. For a start, the agent would need at least a goal, which I suppose would be expressed in terms of minimizing or maximizing the subset of the agent's knowledge that matches the constraints. Or do you have another approach in mind?
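
Purely to illustrate what I have in mind (all names here are made up for the example, nothing from MYCROFT), a goal over a constrained subset of knowledge could look roughly like this:

    def goal_score(knowledge, constraints):
        """Count how many known facts satisfy every constraint."""
        matching = [fact for fact in knowledge
                    if all(check(fact) for check in constraints)]
        return len(matching)

    # Facts as dicts, constraints as predicates over facts.
    knowledge = [
        {"subject": "door", "state": "open"},
        {"subject": "light", "state": "off"},
    ]
    constraints = [lambda fact: fact["state"] == "open"]
    print(goal_score(knowledge, constraints))  # -> 1

The agent's goal would then be to drive that score up (or down) by acting or by acquiring facts.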

Then, once agents have goals, how do they acquire and exchange information? For cooperation to be useful, agents must have variation in their knowledge and be able to influence each other. I imagine some kind of genetic exchange of facts and their weighting might be an option, but I'm curious how you'd approach that.
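
Again, just to sketch the kind of exchange I'm imagining (purely hypothetical, not a proposal for MYCROFT itself):

    import random

    def exchange(facts_a, facts_b, swap_rate=0.5):
        """Merge two {fact: weight} maps, probabilistically sharing entries."""
        child = dict(facts_a)
        for fact, weight in facts_b.items():
            if fact in child:
                child[fact] = max(child[fact], weight)  # keep the stronger weighting
            elif random.random() < swap_rate:
                child[fact] = weight                    # sometimes inherit a new fact
        return child

    a = {"sky_is_blue": 0.9, "doors_open_inward": 0.4}
    b = {"sky_is_blue": 0.7, "water_is_wet": 0.8}
    print(exchange(a, b))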

Last but not least, if you have many agents working toward the same goal, how do you determine that the goal has been reached, and how do you pick the best result? This is more a question for when you're using a community of agents for a given task, as opposed to having one agent for one specific task.

MYCROFT looks cool, I hadn't heard about it, and thanks for making it open-source :)



3 points by enkiv2 485 days ago

> What I'm particularly interested in now is how cooperation works in this context. For a start, the agent would need at least a goal, which I suppose would be expressed in terms of minimizing or maximizing the subset of the agent's knowledge that matches the constraints. Or do you have another approach in mind?

MYCROFT is really supposed to be a distributed expert system, so it's focused on question-answering. I figure most agent architectures will be focused on executing tasks indirectly, which requires the same kind of question-answering infrastructure but also better support for (late-binding) side-effects.

For instance, an agent should be able to determine which of a set of connected agents can execute a task for it, schedule that task with respect to incoming events, and report task success and results back up the chain to the user. This is a harder situation to handle, since you need meta-information about scheduling: agents need to communicate how long tasks take and handle potentially conflicting scheduling constraints.
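
As a rough illustration of that meta-information exchange (hypothetical names and structures, not MYCROFT's actual protocol), an agent might collect duration estimates from its peers before delegating:

    from dataclasses import dataclass

    @dataclass
    class Bid:
        agent_id: str
        can_execute: bool
        estimated_seconds: float

    class Peer:
        """Stand-in for a connected agent that can be asked for an estimate."""
        def __init__(self, agent_id, skills, speed):
            self.agent_id, self.skills, self.speed = agent_id, skills, speed
        def bid_for(self, task):
            return Bid(self.agent_id, task in self.skills, self.speed)

    def pick_executor(task, peers):
        """Ask each connected agent for a bid and delegate to the fastest."""
        bids = [peer.bid_for(task) for peer in peers]
        candidates = [bid for bid in bids if bid.can_execute]
        if not candidates:
            return None  # nothing can run it; report failure up the chain
        return min(candidates, key=lambda bid: bid.estimated_seconds)

    peers = [Peer("a1", {"backup"}, 3.0), Peer("a2", {"backup", "index"}, 1.5)]
    print(pick_executor("backup", peers))  # delegates to a2

Resolving conflicting constraints (deadlines, ordering between tasks) would sit on top of this, which is where it gets genuinely hard.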

> Then, once agents have goals, how do they acquire and exchange information? For cooperation to be useful, agents must have variation in their knowledge and be able to influence each other. I imagine some kind of genetic exchange of facts and their weighting might be an option, but I'm curious how you'd approach that.

> Last but not least, if you have many agents working toward the same goal, how do you determine that the goal has been reached, and how do you pick the best result? This is more a question for when you're using a community of agents for a given task, as opposed to having one agent for one specific task.

In MYCROFT, the way I wanted to handle this was to have nodes distribute their queries to other nodes &, if the predicates were determinate, send the computed responses back as facts.
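
Roughly, that flow looks something like this (the names here are made up for illustration, not MYCROFT's actual API):

    class PeerNode:
        """Minimal stand-in for a remote node that can evaluate some predicates."""
        def __init__(self, answers, determinate):
            self.answers = answers          # query -> computed answer
            self.determinate = determinate  # queries whose predicates are determinate

        def evaluate(self, query):
            return self.answers.get(query)

        def is_determinate(self, query):
            return query in self.determinate

    def ask(query, local_facts, peers):
        """Answer locally if possible, otherwise forward the query to peers."""
        if query in local_facts:
            return local_facts[query]
        for peer in peers:
            answer = peer.evaluate(query)
            if answer is not None:
                if peer.is_determinate(query):
                    local_facts[query] = answer  # computed response stored as a fact
                return answer
        return None

    facts = {}
    peers = [PeerNode({"sum(2,3)": 5}, determinate={"sum(2,3)"})]
    print(ask("sum(2,3)", facts, peers))  # -> 5, and facts now caches it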

MYCROFT wasn't really intended for a collection of task-specific agents but for a potentially-homogeneous open world of predicates, so I used chord-style routing. However, were I to make it task-specific, I'd probably add namespaces for each task & route based on the namespace first.
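
As a sketch of what namespace-first routing might look like (hypothetical; MYCROFT itself just does chord-style routing over the open world of predicates):

    import hashlib

    def route(predicate, namespace_rings):
        """Pick a node: first by the predicate's namespace, then by hash within it."""
        namespace, _, name = predicate.partition(":")
        ring = namespace_rings.get(namespace)
        if ring is None:
            return None  # unknown task namespace
        digest = int(hashlib.sha1(name.encode()).hexdigest(), 16)
        return ring[digest % len(ring)]

    rings = {"backup": ["node-a", "node-b"], "indexing": ["node-c"]}
    print(route("backup:copy_files", rings))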

-----



