
I used Tomboy for years until I moved to Zim. At some point someone started GNote, which is a C++ port/rewrite of Tomboy (for the very reason you mentioned). I think GNote killed Tomboy, as seen by the lack of activity on Tomboy-ng. This post from 2009 shows Tomboy's author's opinion on GNote: http://automorphic.blogspot.com/2009/04/tomboy-0141-future-and-word-about-gnote.html

The post-it-notes-meets-wiki idea was genius, and the file synchronization was really good as well.

reply

2 points by enow on March 16, 2019

Ah, thanks for the tips. Zim looks really nice in that it has a calendar and lets you change the name of links, something that used to bug me. If only I had the time to write a plugin to put the notes onto a scrum board, it'd be so good.

I get the feeling that Microsoft's OneNote is somewhat inspired by Tomboy; it's not that bad, actually. Even though it doesn't do links in that sense, I feel it's the closest any other project has come -- apart from the open-source solutions, of course.

reply


Aside from the obvious text editor (nvim-gtk) and terminal (tilix), Zim http://zim-wiki.org/ is probably my favourite -- it's a desktop wiki that I use for everything from note taking to planning and documentation.

reply


Zim looks pretty cool. It reminds me of lightweight HTML editors I used long ago. It also makes me think one of them might be a nice substitute, since they similarly have a project's pages on the left: categories and notes become folders/projects and individual pages. Maybe more powerful features on the content side, and fewer on the organization side, since Zim is designed for that. I don't see much difference in usability if the editor was itself highly usable. Obviously, we aren't talking Dreamweaver or something. ;)

Are there specific features in this that you think an HTML editor wouldn't have, or would just be lots of trouble for?

reply


Well, the HTML-editor side of things might be Zim's weak point. The formatting is a bit finicky, and if you change the styling too many times it sometimes gets into a state where I have to cut the text out into a plain text editor and paste it back in, to start over with no styling.

Aside from that, it has lots of plugins and can easily be exported to a full website. In fact, Zim's website is made with Zim http://zim-wiki.org/.

But the killer feature is that it's a wiki: you can link pages together and easily re-organize them. It's fully searchable and you can embed rich content. All the pages are stored as reST documents (I would have preferred Markdown), which makes them editable with a text editor, and Zim can use Git or Hg for version control.

reply

3 points by emily on Feb 21, 2019

Zim strikes me as trying to get at some of the same features/conveniences of things like Notion (https://www.notion.so) or Slite (https://slite.com), though both of those focus heavily on the team-collab audience. Like, it’s an editor because, well, it has to be, like Evernote also has to be an editor, but the main point is the organization, hierarchies, access to nice simple default layouts you don’t have to code yourself, and nifty widgets/utils that know how to work with all the other parts. (Correct me if I’m way off base though, because I’ve totally not tried it yet.)

Zim actually seems like it might be exactly the sort of thing I’ve been looking for lately, though; have been trying out various things like the ones above (Notion, etc) and... not sure why, but apparently I don’t want to use anything that slick more than once.

reply


Notion is actually really awesome; they got everything right except the pricing (in my opinion). Zim is far from perfect, but its extensibility, plain-text storage format and, most importantly, wiki features make it quite useful.

I would love, however, to have Zim with a better editor, one that would use typed blocks like Notion.

reply


That sounds almost too good to be true! It's interesting to see that there is still innovation in a field as low-level as memory allocation -- actually, looking at Jay's columnar and row data type options, I think we're just scratching the surface of the interesting optimizations to be had from new strategies for memory layout and management.
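To illustrate what I mean by row vs. columnar layout, here's a toy sketch in Python (none of these names are the project's actual API):

    # The same records in two layouts with different access patterns.
    from array import array

    # Row layout (array-of-structs): the fields of one record sit together.
    rows = [(1, 9.5), (2, 7.1), (3, 8.8)]      # (id, score) per record

    # Columnar layout (struct-of-arrays): one contiguous array per field.
    ids    = array("q", [1, 2, 3])
    scores = array("d", [9.5, 7.1, 8.8])

    # Scanning a single field streams one dense array in the columnar
    # form, instead of hopping across whole records:
    total = sum(scores)

An allocator that knows which layout you're after can place and pack data accordingly, which is where I suspect those optimizations are hiding.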

reply


This is very clean -- the core API seems compact and well thought out, and the documentation covers the implementation details. Looking at the code examples, I find it very approachable, and it seems to strike a good balance between imperative and functional style. Now I'm curious about the benchmarks. How does it compare against, say, Lua, Python, SBCL and Chez?

reply


I'd be curious to see some benchmarks as well. Another language worth looking at in this space is Fennel, which compiles to Lua https://fennel-lang.org/

reply


... and I just realized that this is by the same author as Fennel. Looking at the C source code it seems clear that Lua was an inspiration for the implementation, and it reminded me of how Io is implemented http://iolanguage.com/.

reply


Ah didn't realize that either, neat! :)

reply


When I was in engineering school (~2000), some of my teachers were in a lab that focused on agent-oriented development. What was interesting is that the agent-oriented approach has a very technical side (lightweight concurrency and actors) as well as a very conceptual side (modeling of behaviours, intentions and knowledge, and emergent behaviours). I was personally more interested in the actor side of things, as I didn't think there were good abstractions to deal with the true AI part of agents (knowledge representation, behavioural primitives). But now, with all the fuss about ML, I'd really like real AI to come back to the forefront -- and now that the concurrency side of things has been pretty much solved, we can focus on the harder challenge of primitives for emergent AI.

reply

2 points by enkiv2 on Jan 31, 2019

I lean toward the Wasserman side here & consider the most interesting part of agents to be planning. Message-passing is almost mainstream these days, but backtracking & declarative programming (despite their power) are still pretty rare. From a UI perspective (as Negroponte says in this document), a network of agents capable of planning can expose the power of automation to non-technical end users.

This seems to be one of the components missing from most popular agent frameworks too, though. Huginn doesn't have it, & it doesn't even seem to be on the radar.

I'd love to see somebody make a minikanren encapsulation as a huginn agent, to integrate with other meta-control agents like the scheduler & the command agent.

Planners aren't actually very hard to write (particularly if you write them naively, like a first generation/pre-WAM prolog implementation). It's a shame we don't use them much, since the main argument against them (efficiency) is being flouted in much less productive ways in commonly-used software.
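To make "naive" concrete, here's a toy backward-chaining solver in Python (all the names are invented, and a real pre-WAM prolog would add cut, negation, and an occurs check):

    # Backward chaining over Horn clauses, depth-first with backtracking.
    # Facts and goals are tuples; variables are strings starting with "?".
    import itertools

    def is_var(t): return isinstance(t, str) and t.startswith("?")

    def walk(t, s):
        while is_var(t) and t in s: t = s[t]
        return t

    def unify(a, b, s):
        a, b = walk(a, s), walk(b, s)
        if a == b: return s
        if is_var(a): return {**s, a: b}
        if is_var(b): return {**s, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                s = unify(x, y, s)
                if s is None: return None
            return s
        return None

    fresh = itertools.count()

    def rename(t, m, n):  # standardize apart: fresh variables per rule use
        if is_var(t): return m.setdefault(t, "%s_%d" % (t, n))
        if isinstance(t, tuple): return tuple(rename(x, m, n) for x in t)
        return t

    def solve(goals, rules, s=None):
        s = {} if s is None else s
        if not goals:
            yield s
            return
        for head, body in rules:
            n, m = next(fresh), {}
            s2 = unify(goals[0], rename(head, m, n), s)
            if s2 is not None:
                yield from solve([rename(g, m, n) for g in body] + goals[1:], rules, s2)

    # A tiny route "plan": where can I get to from home?
    rules = [
        (("edge", "home", "station"), []),
        (("edge", "station", "office"), []),
        (("path", "?a", "?b"), [("edge", "?a", "?b")]),
        (("path", "?a", "?c"), [("edge", "?a", "?b"), ("path", "?b", "?c")]),
    ]
    for s in solve([("path", "home", "?x")], rules):
        print(walk("?x", s))   # station, then office

It's depth-first with chronological backtracking, so it happily loops on left-recursive rules and re-derives the same subgoals over and over -- the "efficiency" objection in miniature, and exactly what WAM-era engineering addressed.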

reply


Yes, but doesn't planning imply at least coordination, if not the notion of intention? What I like about the idea of cognitive agents is that they're not just mechanical constraint solvers, but have behaviours and intentions "encoded in their DNA", so that the solution is reached by coordination and evolution, as opposed to constraint evaluation and local minima/maxima exploration. I'm not familiar with Wasserman; do you have a favourite paper that I should read?

One last question: do you see agents more as software constructs (more like actors, i.e. encapsulated state with its own control flow, communicating using messages) or as conceptual entities (an entity with its own goals that interacts with others and the environment through the exchange of messages)?

reply

3 points by enkiv2 on Feb 1, 2019

In this context, by 'planning' and 'planner' I just mean automated constraint solving / goal-directed programming. Ignoring the folks who consider java applets to be 'user agents', there are sort of two types: one looks like IFTTT, and one looks like IFTTT with prolog hooked up to it, & I like the latter a lot better (even though both are basically trivial).

Wasserman is the author of the RITA paper (https://www.laarc.io/item?id=904) & one of the RITA devs. He seems to have gone on to write a great deal about expert systems too.

Negroponte's model (as described in OP) seems to involve the agents learning preferences & behaviors implicitly. However, to be useful as an alternative to direct manipulation for non-technical users, such an agent still needs a planner (as slow as naive prolog can be, genetic algorithms & statistical-ML-based code generation are way worse). Learning mechanisms don't need to be statistical (they can be integrated parts of the planner in terms of preferences & other facts), & even statistical mechanisms can be integrated into the planner (e.g., the RITA documentation cites MYCIN's ability to tag facts with floating point confidence values -- something a lot of expert systems do, & that I built into MYCROFT -- and such a mechanism can be used to perform fuzzy logic on observed behaviors, providing statistical learning in a predicate logic context; Goertzel et al.'s probabilistic logic networks provide the same kind of capability in a more nuanced way, & were my inspiration for the use of composite truth values in MYCROFT).
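The confidence-value mechanism is simple enough to sketch. Here's the classic MYCIN-style combination in Python, from memory (the example facts are invented, & MYCROFT's composite truth values are richer than this):

    # Certainty factors live in [-1, 1]; independent evidence for the
    # same fact is folded together with MYCIN's combination rule.
    def combine_cf(a, b):
        if a >= 0 and b >= 0: return a + b * (1 - a)
        if a < 0 and b < 0: return a + b * (1 + a)
        return (a + b) / (1 - min(abs(a), abs(b)))   # mixed evidence

    # Two observations that the user likes jazz, then one against:
    cf = combine_cf(0.6, 0.5)    # 0.8
    cf = combine_cf(cf, -0.3)    # ~0.71

    # A rule's conclusion gets: rule CF * weakest premise CF,
    # provided the support clears a threshold (MYCIN used 0.2).
    def fire(rule_cf, premise_cfs, threshold=0.2):
        support = min(premise_cfs)
        return rule_cf * support if support > threshold else 0.0

    print(fire(0.8, [0.9, cf]))  # conclusion carries CF ~0.57

Repeatedly folding observed behaviors in through combine_cf is the "statistical learning in a predicate logic context" bit.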

In my view, an agent is an actor with a planner in it. (Obviously, this isn't how most folks define agents, but it captures what I think is most interesting about the field.)

reply


Ah, thanks for the link and details. This definitely makes sense, as it shifts agents from actors toward "intelligent" agents while still having fairly clearly defined attributes and mechanics (ie. actor + planner).

What I'm particularly interested in, now, is how does cooperation work in this context? For a start, the agent would need at least a goal, which I suppose would be expressed in terms of minimizing or maximizing the subset of the agent's knowledge that matches the constraints. Or do you have another approach in mind?

Then, once agents have goals, how do they acquire and exchange information? For cooperation to be useful, agents must have variation in their knowledge and be able to influence each other. I imagine some kind of genetic exchange of facts and their weighting might be an option, but I'm curious to know how you'd see that.

Last but not least, if you have many agents working toward the same goal, how do you determine that the goal has been reached and how do you pick the best result? This is more a question for when you're using a community of agents for a given task, as opposed to having 1 agent for 1 specific task.

MYCROFT looks cool, I hadn't heard about it, and thanks for making it open-source :)

reply

3 points by enkiv2 on Feb 1, 2019

> What I'm particularly interested in, now, is how does cooperation work in this context? For a start, the agent would need at least a goal, which I suppose would be expressed in terms of minimizing or maximizing the subset of the agent's knowledge that matches the constraints. Or do you have another approach in mind?

MYCROFT is really supposed to be a distributed expert system, so it's focused on question-answering. I figure most agent architectures will be focused on executing tasks indirectly, which requires the same kind of question-answering infrastructure but also better support for (late-binding) side-effects.

For instance, an agent should be able to determine which of a set of connected agents can execute a task for it, schedule that task with respect to incoming events, and make reports about task success / results up the chain back to the user. This is a harder situation to handle since you need meta-information about scheduling: you need to communicate, between agents, how long tasks take, and handle potentially-conflicting scheduling constraints.
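In code-shaped terms, the delegation step might look something like this (purely hypothetical names; no real framework here):

    # An agent polls peers for ETAs, delegates to the earliest one that
    # can meet the deadline, & returns a report for the chain above.
    from dataclasses import dataclass, field

    @dataclass
    class Peer:
        name: str
        backlog: float                  # seconds of queued work
        skills: set = field(default_factory=set)

        def estimate(self, task):       # the scheduling meta-information
            if task["kind"] not in self.skills:
                return None
            return self.backlog + task["cost"]

    def delegate(task, peers, deadline):
        bids = [(p.estimate(task), p) for p in peers]
        bids = [(eta, p) for eta, p in bids if eta is not None and eta <= deadline]
        if not bids:
            return {"task": task["kind"], "ok": False, "why": "no peer can meet the deadline"}
        eta, winner = min(bids, key=lambda b: b[0])
        winner.backlog += task["cost"]  # commit -- where conflicting constraints bite
        return {"task": task["kind"], "ok": True, "by": winner.name, "eta": eta}

    peers = [Peer("scraper", 30, {"fetch"}), Peer("indexer", 5, {"fetch", "index"})]
    print(delegate({"kind": "fetch", "cost": 10}, peers, deadline=60))

The hard parts are everything this waves away: estimates drift, commitments conflict, & new tasks arrive while old ones are still running.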

> Then, once agents have goals, how do they acquire and exchange information? For cooperation to be useful, agents must have variation in their knowledge and be able to influence each other. I imagine some kind of genetic exchange of facts and their weighting might be an option, but I'm curious to know how you'd see that.

> Last but not least, if you have many agents working toward the same goal, how do you determine that the goal has been reached and how do you pick the best result? This is more a question for when you're using a community of agents for a given task, as opposed to having 1 agent for 1 specific task.

In MYCROFT, how I wanted to handle this was that nodes would distribute their queries to other nodes, & if the predicates were determinate, the computed responses would be sent back as facts.

MYCROFT wasn't really intended for a collection of task-specific agents but for a potentially-homogeneous open world of predicates, so I used chord-style routing. However, were I to make it task-specific, I'd probably add namespaces for each task & route based on the namespace first.
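Concretely, something like this toy in Python -- the successor-lookup core of chord without the finger tables, & definitely not MYCROFT's actual code:

    # Namespace-first routing: pick a ring per task namespace, then do
    # consistent-hash successor lookup on the predicate within it.
    import hashlib
    from bisect import bisect_right

    def h(s):
        return int(hashlib.sha1(s.encode()).hexdigest(), 16)

    class Ring:
        def __init__(self, nodes):
            self.points = sorted((h(n), n) for n in nodes)

        def lookup(self, key):          # successor of the key's hash
            i = bisect_right(self.points, (h(key), ""))
            return self.points[i % len(self.points)][1]

    rings = {
        "calendar": Ring(["node-a", "node-b"]),
        "mail":     Ring(["node-c", "node-d", "node-e"]),
    }

    def route(namespace, predicate):
        return rings[namespace].lookup(predicate)

    print(route("calendar", "free-slot?"))   # some node in the calendar ring

Determinate results coming back as facts would then just get cached on the querying node, keyed the same way.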

reply


I still haven't finished The Scheme Programming Language (TSPLv4) https://www.scheme.com/tspl4/, by Chez's author, but it's incredibly good. He's not shy about saying that Chez is the best Scheme implementation out there, and he may well be right ;)

reply


Even the Racket people thought so. That's saying something. My research as a non-Schemer led me to think highest of Racket (productivity), Chez (dynamic performance), and Chicken (static performance). Chicken might also have good dynamic performance; I don't know, since I didn't do detailed comparisons. I just like how they convert things to C to use highly-optimizing compilers. If I did anything about performance, I'd try a Chicken port to see what happens.

Since it's multi-stage, I also considered that Chicken might be a good Scheme for a verifying compiler that connected to CompCert C. The result would be a compiler that is both verified and faster than alternatives, with the latter being easier to sell people on. ;) I don't know much about it, though, since I started looking at non-Lisp/Scheme metaprogramming given most programmers' and businesses' aversion to Lisp-like languages.

Clojure inspires hope that there's some opportunity if one hitches a ride on that bandwagon, though. Especially a native, verified safe/secure, optimized Clojure that still uses JVM libraries. I'm still keeping that in the back of my mind, tumbling around.

reply


From the benchmarks I've seen https://ecraven.github.io/r7rs-benchmarks/ Chicken did not seem to fare that well (and it seems they tested both the interpreted and compiled versions). It has an amazing community and set of modules, though. Guile 2.9.1 just had a JIT included https://www.gnu.org/software/guile/news/gnu-guile-291-beta-released.html, but it's still behind the rest.

In terms of pure performance, you've probably heard of Stalin, whose approach is outlined in the "Flow-Directed Lightweight Closure Conversion" paper by its author https://engineering.purdue.edu/~qobi/papers/fdlcc.pdf.

Gerbil http://cons.io, based on Gambit, caught my attention. It has a really interesting and rich API and includes what looks like fairly advanced meta-programming facilities. I haven't used it yet, but I found it intriguing.

It's funny how diverse Scheme implementations are, and how they each have different strengths and weaknesses. Even Lisps seem more consistent in comparison!

reply


Thanks for the benchmarks. Daaaamn, Chez is dominating them! That's gotta be some great engineering behind that compiler. Ok, nevermind -- I'll just look into Chez, Stalin, and Gerbil. Thanks for the paper: I missed it somehow.

"It's funny how diverse Scheme implementations are, and how they each have different strength and weaknesses. "

Such adaptability was a strength of the Schemes and LISPs. Also their weakness. People could arbitrarily change the languages so much that the software was hard to maintain at a team level. It's why I advocate simpler, consistent forms as the default way to program, with the macros and stuff being used really selectively -- and with it clear that they're macros. I remember reading from a few people that they did something similar in LISP shops. I don't have much data on that, though.

reply


That paper is not easy to find; it's not usually mentioned alongside Stalin Scheme, but it probably should be. I've updated the Wikipedia page, since it didn't mention it: https://en.wikipedia.org/wiki/Stalin_(Scheme_implementation)

Back in 2005 I went to the Montréal Scheme User Group and met people who were using Lisp professionally for NLP applied to telecom (voice-controlled automata). A few years later I bumped into one of them again and asked what happened: they were hiring Java engineers. The guy told me that the problem with Lisp was that the code was getting more difficult to understand as layers of abstraction were piled onto one another (people were creating new vocabularies to adapt the system to evolving requirements). He said that although the Java version was architecturally less elegant and more verbose, it was also more straightforward and allowed for less qualified developers to maintain and expand the application.

From my personal experience, I find that Lisp, and more particularly Scheme, is an amazing way to prototype high-level concepts. Instead of starting to code right away, I now spend time expressing concepts and structures as S-exprs and pseudo-Scheme as part of the documentation, with the intention of creating a simple Scheme runtime to validate them. However, I probably would not create a production system with Scheme -- I use Python for prototyping implementations, and would now probably use Rust for a production version (as opposed to C).

BTW, you might enjoy this podcast http://www.se-radio.net/2008/01/episode-84-dick-gabriel-on-lisp/ -- there's plenty of interesting anecdotes and a really funny song at the end.

reply


"He said that although the Java version was architecturally less elegant and more verbose, it was also more straightforward and allowed for less qualified developers to maintain and expand the application."

That's why I was saying maybe we should treat commercial Lisp like any other language, with the macros just handling weird situations: maybe giving us new constructs, optimizing something with an obvious meaning, handling portability, and so on. You could say kind of like how D, Rust, and Nim use them.

Boring, predictable code is better in the long term in businesses. So, we make Lisp boring until the boring solution is too painful. Then, we get clever, with clear docs about what's going on. Should prevent their problem, yeah?

"BTW, you might enjoy this podcast"

I'll check it out later.

reply


This is very clean. Did you get your inspiration from any specific project, Scheme or otherwise?

reply

