
Don't tell me what to do! Upvoted.

Inspired by HN's old feature requests thread (https://news.ycombinator.com/item?id=363), let us know if you'd like to see any changes on laarc, or if you have ideas for what might be missing compared to other community sites.

Experimental ideas are quite welcome! There's a lot of freedom and flexibility with how the site could turn out.

Hop into our Discord if you'd like to chat with everyone and toss ideas around: https://discord.gg/qaqkc9z


New page: /votes

https://www.laarc.io/votes


Simon Tatham's Puzzle Collection

Insight Timer


Laarc was written on an old 2015 MBP whose internal keyboard broke long ago. (I carry an external keyboard everywhere, which looks about as cool as it sounds.) It overheats, and it probably delivers around 50% of its capabilities.

Doubt I'll be upgrading anytime soon, though. Too fond of my peg-legged laptop, and it serves me well. Computers are ridiculously fast now, and it's hard to internalize just how fast.


Thanks so much for posting a link to my humble corner of the web, Eric - your kind beacon brought me to this oasis hiding in plain sight!

And, I hesitate to even mention it for fear of his fleeting form once again passing from view: you guys have nickb on board this fantastical flotilla?!


A "save to favorites" or "bookmark" feature, separate from the list that's generated by your upvote history, and longer term, the ability to organize them. Sometimes it's hard to go back and find things, and it would be cool to have the ability to organize research, etc. :)

I think it's true of most proprietary applications, & most applications not intended for use by technical folks.

Even in cases where deep configurability & scriptability exists, it's generally hidden (as though it's embarrassing) & 'normal' users never become aware unless using those features has been grandfathered in by a technical community. (For instance, some games are heavily moddable, but most players are not modders & modding is considered something special; as another example, Microsoft Word and Excel are highly scriptable, and many but not most non-programmer users are aware of this in a vague way because they belong to a group that uses existing scripts to make up for missing core features or adapt the program to a particular domain.)

Applications for programmers tend to be extremely scriptable, and tend to expose that extensibility.

(Even so: I'd really like to be able to make deep modifications to running applications without a separate recompile step, which is something applications don't do outside Smalltalk-land. Likewise, I'd love to be able to throw together widgets as casually as I write pipelines.)


HN's 'hide' link, please.

Aside from the obvious text editor (nvim-gtk) and terminal (tilix), Zim http://zim-wiki.org/ is probably my favourite -- it's a desktop wiki that I use for everything from note-taking to planning and documentation.

Emacs with Evil mode -- I spent the last year learning vim and swore I'd never use another text editor. That was until I found Evil mode in Emacs, and I've stuck with it since. My productivity has soared, and I still enjoy configuring the hell out of my setup.

That's interesting. I can see that; it affected me heavily in the past. Actually, one of the things I occasionally think about is how to speed up the process of realizing that. It might inherently take lots of experience and time. Alternatively, there could be ways of teaching it quickly. The faster people learn it, the better.

I'm learning more about some of the burgeoning projects in the 'fediverse': the Matrix protocol, Mastodon, and WriteFreely. I'm considering how I can contribute :)

Trying to learn GANs to see if they can be applied to enhancing the resolution of medical data... it has probably been done before but, oh well, not by me :)

The usual stuff of reading a book and thinking up things that could be modelled/formalized in Alloy.

I wrote about what I was trying to do with this game here: https://medium.com/@enkiv2/mfom-pre-postmortem-92683b15ff2c

I suggest anyone wondering about the evidence behind this check out David Gerard's submissions on Lobsters:

https://lobste.rs/newest/David_Gerard

The ones with lots of comments are interesting. Gerard mostly uses evidence showing what's happened before, what's happening now, what's claimed, what's delivered, and so on. I'm in the anti-blockchain camp, holding that improved versions of centralized tech, or centralized tech with decentralized checking, will do what we need. The interesting part about the high-comment threads is the amount of zeal and troll tactics (especially dismissals) versus the minimal amount of evidence provided on the pro-blockchain side.


Very good stuff. I learned about the duality between borders and periods, and I understand how the KMP algorithm requires O(m) space (where m is the pattern length) to construct its table, whereas the critical factorization takes up only two integers, O(1), making it a much more space-efficient matching algorithm.
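
To make the O(m) side of that contrast concrete, here's a minimal sketch of the standard KMP border-table construction (my own, not from the paper; Python assumed). The table stores one integer per pattern position, whereas the two-way algorithm gets by with a critical position and a period, i.e. two integers:

    # Border table: for each prefix of the pattern, the length of its longest
    # proper border (a prefix that is also a suffix). One entry per position,
    # hence the O(m) auxiliary space.
    def kmp_border_table(pattern):
        m = len(pattern)
        table = [0] * m
        k = 0                     # length of the current border
        for i in range(1, m):
            while k > 0 and pattern[i] != pattern[k]:
                k = table[k - 1]  # fall back to the next-shorter border
            if pattern[i] == pattern[k]:
                k += 1
            table[i] = k
        return table

    print(kmp_border_table("abacaba"))  # [0, 0, 1, 0, 1, 2, 3]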

A variant of this algorithm is what glibc uses for its implementation of memmem[1]. I say "variant" because it uses a shift table in some cases.[2]

I found this original paper describing the algorithm to be notable, in particular for how rigorous it is. I've read it a couple of times so far, but I still don't grok everything. One thing I'm particularly interested in is trying to simplify the algorithm so that you don't need to handle the short and long period cases differently. In particular, the paper hints that this is possible:

> The string-matching algorithm of Section 2 uses the period of the pattern. A previous computation of this period is possible. This precomputation can be made easily by using Knuth, Morris, and Pratt’s algorithm on the pattern. This precomputation, combined with both the computation of a critical factorization of the pattern and our string-matching algorithm, leads to an algorithm that is globally linear in time and space. But, since the string-matching algorithm operates in constant space, it is desirable to improve on the precomputation of the period of the pattern, with the aim of obtaining a global algorithm that is in linear time and constant space. There are two ways to achieve that goal. One is a direct computation of the period by an algorithm operating in linear time and constant space. Such an algorithm is described in [12]. It can also be obtained as a consequence of the results in [17]. Our approach here is different. We shall describe a modification of our string-matching algorithm that avoids the use of the period of the pattern, giving an algorithm that is linear time and constant space on the whole. The point is that the exact value of the period of the pattern is actually needed only when it is less than half the length of the pattern. When the period of the pattern is larger, an even simpler string-matching algorithm is provided.

But they don't seem to explain why they went the route of approximating the period even when they cite algorithms that satisfy their desired complexity constraints (linear time, constant space). So my next task is to read the cited papers and hopefully discern for myself.

[1] - https://sourceware.org/git/?p=glibc.git;a=blob;f=string/memmem.c;h=4bf733f1f03cb27c289bd6dc61590909bb0eefdf;hb=HEAD

[2] - https://sourceware.org/git/?p=glibc.git;a=blob;f=string/str-two-way.h;h=b5011baafa77a2d211598be246657b9a33fd8a2e;hb=HEAD#l401


We actually just added that: /l/dev|programming

https://www.laarc.io/item?id=1181


sklogic, who was on Hacker News, used to describe a tool he used that could crank out DSLs like they were nothing, for all sorts of things. If they fell short, he could drop down to the common, powerful base language to get the job done easily. He said he had Standard ML, Prolog, many parsers, and so on mocked up so he'd have the right tool for each job. He eventually linked to this, which looks like the tool.

He mainly built program-analysis tools with it. The one thing I didn't like was that it was on .NET. I forgot to ask him why he decided to do that, given the tool was powerful enough to abstract around it or create a portability framework. Maybe Meta Alternative was selling it to .NET shops for use on and integration with .NET apps. The features read like that.

When I last talked to him, he was trying to implement a Lisp CPU on a tiny FPGA. He knows hardware, too. Probably the market he should've tried to develop for and sell to. They were throwing all kinds of money at HLS for a while. Could've used it to build his open tools up if nothing else panned out. ;)


Yeah, I think that's my fundamental disagreement here. You're just never going to be able to combat a real, live, lying human with some dead sequence of bits. You have to start with intrinsic motivation, not try to make do without it.

David, I'm glad you're self-aware about your uncertainties up top. I think your inclination to elevate formal methods over other approaches is throwing the baby out with the bathwater.

"Agile sacrifices long term vision for short term gains and TDD optimizes for writing more code to achieve correctness.. Bugs are correlated with lines of code and TDD forces writing more code so how can it reduce bug counts?"

Tests help gain control over programs (I avoid using words like "correctness") by creating redundancy. Yes, it's more code, but it's in a distinct program. Conceptually every test case is a unique program. It takes some skill to ensure that you aren't just repeating your production logic in your test, but not very much. By computing the same thing in two different ways, you increase the odds that the program does what you want (in some scenario). Because for it to fail, you'd now have to have bugs in both code and test -- and the bugs would have to have the same effect.

Every new test you write slashes this probability further. In this way a series of tests cumulatively help pin down your program's precise 'shape' so that it is exactly where it needs to be in 'program space'.
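
To put rough numbers on that (purely illustrative, with an explicit independence assumption, not anything from the original post): a bug only survives a check when both the code and the test go wrong in matching ways, and each extra independent check multiplies the odds down further.

    # Hypothetical numbers, assuming the code's and tests' mistakes are independent.
    p_code_wrong = 0.10    # chance the production code mishandles a given scenario
    p_test_wrong = 0.10    # chance an independently written test encodes the same mistake

    p_escape_one_test = p_code_wrong * p_test_wrong
    print(p_escape_one_test)                 # 0.01  -- far less likely than either alone

    p_escape_two_tests = p_escape_one_test * p_test_wrong
    print(p_escape_two_tests)                # 0.001 -- a second independent test compounds it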

Interestingly, Agile works the same way. Pairing is a way for two independent people to work towards agreement that a program does what they want. Sprints and demos help the programmers and end users agree on what they want. In each case, redundancy helps manage degrees of freedom and pin down details precisely.

Formal methods work the same way. An independent line of reasoning helps gain confidence that a program does what you want. In Cleanroom the independent line of reasoning is in the comments that accompany each line. In design by contract the redundancy comes from pre- and post-conditions, and the assertions sprinkled at intermediate points. In Dijkstra's discipline of programming the redundancy comes from clearly articulating invariants for loops and deriving weakest preconditions backwards from the desired goal to the input. And so on and so forth.
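
As a toy illustration of the design-by-contract flavour of that redundancy (my own example, not from the article or from Dijkstra): the pre- and post-conditions restate the intent independently of the loop body, so a mistake has to appear in both places, in the same way, before it slips through.

    def integer_sqrt(n):
        assert n >= 0, "precondition: n must be non-negative"
        r = 0
        while (r + 1) * (r + 1) <= n:
            r += 1
        # The postcondition restates the goal independently of how the loop got there.
        assert r * r <= n < (r + 1) * (r + 1), "postcondition: r is floor(sqrt(n))"
        return r

    print(integer_sqrt(10))  # 3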

None of these techniques is foolproof. All of them rely on redundancy and the increased confidence that independent techniques are unlikely to have holes in exactly the same places.

Math works the same way. Proofs aren't magically verified. Mathematicians come up with proofs and share them with other mathematicians to gain confidence in them. Redundancy. Yes, they're often much more rigorous than the things we programmers do, but they are also more narrowly applicable. Each new proof technique is precious and helps prove a few more theorems. Then they have to go back to the drawing board and think up something new.

Incidentally, the reason I avoid the word "correctness" is precisely this fact, that everything we humans can conceive of can mislead us, down to our very senses. It seems more rigorous to talk about your narrow desire in the context of a single scenario. Is this behavior in this situation what you want? If it isn't, it may be what somebody else wants. I prefer to use the adjectives "correct" and "incorrect" in narrow situations rather than the generalization of "correctness".

I'm not opposed to formal methods. See the thread started at https://mastodon.social/@akkartik/101476051170905653 for some of my attempts to grapple with what they are good for. I'm sure you understand more about them than I do. But the alternatives also have value.


Make it a regular part of the engineering process so that it happens consistently instead of being a big-bang effort near releases. For example, make every Friday a QA engineering day; otherwise it falls by the wayside and turns into a last-minute thing that no one wants to do.

When I was in engineering school (~2000), some of my teachers were in a lab that focused on agent-oriented development. What was interesting is that the agent-oriented approach has a very technical side (lightweight concurrency and actors) as well as a very conceptual side (modeling of behaviours, intentions, and knowledge, and emergent behaviours). I was personally more interested in the actor side of things, as I didn't think there were good abstractions to deal with the true AI part of agents (knowledge representation, behavioural primitives). But now, with all the fuss about ML, I'd really like real AI to come back to the forefront -- and now that the concurrency side of things has been pretty much solved, we can focus on the harder challenge of primitives for emergent AI.

While there's some good stuff in the introductory material about taking Dennett's attitude toward the definition of agency (specifically: an agent isn't a thing, but instead agency is a lens that can be useful to varying degrees in describing behavior), the real meat begins at page 9 with the overview.

Specifically, a system is described in terms of a knowledge base representation plus commands of the form "request" and "inform". This seems to map to queries & rules in a Prolog context. This structure is similar to what I did with Mycroft (which had an agent-inspired mechanism for inter-node cooperation).
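
A very rough sketch of that request/inform shape (my own construction, not from the paper): "inform" asserts a fact into an agent's knowledge base, and "request" runs a query against it, roughly the way you'd assert facts and pose queries in Prolog.

    # Hypothetical, minimal sketch; real agent frameworks are far richer than this.
    class Agent:
        def __init__(self):
            self.kb = []                        # knowledge base: a list of fact tuples

        def inform(self, fact):
            if fact not in self.kb:             # another agent tells us something is true
                self.kb.append(fact)

        def request(self, predicate):
            # Answer a query: every known fact whose predicate matches.
            return [f for f in self.kb if f[0] == predicate]

    a = Agent()
    a.inform(("parent", "alice", "bob"))
    a.inform(("parent", "bob", "carol"))
    print(a.request("parent"))  # [('parent', 'alice', 'bob'), ('parent', 'bob', 'carol')]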


From the benchmarks I've seen https://ecraven.github.io/r7rs-benchmarks/ Chicken did not seem to fare that well (and it seems they tested both the interpreted and compiled versions). It has an amazing community and set of modules, though. Guile 2.9.1 just had a JIT included https://www.gnu.org/software/guile/news/gnu-guile-291-beta-released.html, but it's still behind the rest.

In terms of pure performance, you've probably heard about Stalin, with the approach outlined in the "Flow-Directed Lightweight Closure Conversion" paper by its author https://engineering.purdue.edu/~qobi/papers/fdlcc.pdf.

Gerbil (http://cons.io), based on Gambit, caught my attention. It has a really interesting and rich API and includes what looks like fairly advanced meta-programming facilities. I haven't used it yet, but I found it intriguing.

It's funny how diverse Scheme implementations are, and how they each have different strengths and weaknesses. Even Lisps seem more consistent in comparison!


It might be interesting to compare this categorization with Van Roy's (https://blog.acolyer.org/2019/01/25/programming-paradigms-for-dummies-what-every-programmer-should-know/). They're both concerned with information-hiding & its effect on expressivity, but they seem to come at it from different perspectives (with the bottom of this categorization corresponding more or less to the top of Van Roy's).

Bringing up Piaget makes me think that maybe it's worth looking at how other models from developmental psychology might map to programming. Leary's 8-circuit model even contains metaprogramming (as circuit 5) & reasonably sound metaphors for reflection (circuit 6) and revision control (circuit 7), regardless of whatever validity these higher circuits might have in the human context. (As for the lower circuits, his work with regard to developmental-stage personality imprinting appears to have influenced the OCEAN personality scale used by Cambridge Analytica & others.)


I dropped the /l/ prefix and appended the tag's total count. Is this better or worse?

My main complaint in this essay is that such a claim is a misrepresentation of history: we don't use tech that is deeply influenced by these seminal projects, but instead use tech based on extremely different goals & constraints.

This is a common problem with popular history-of-science (and especially popular history-of-tech) & comes in both a strong and weak form.

The strong form is "we live in the future that <figure> imagined/wanted" -- something that's almost never true. Engelbart was not as outspoken in his criticism of the direction of development as Nelson & Kay, but he wasn't quiet about it either. Strictly speaking, these seminal projects had specific goals that they achieved far better than the later work inspired by them did.

The weak form is along the lines of "<project> influenced later work", which is weak enough to be almost meaningless. The most important elements of these projects are typically lost in the churn, meaning that while the influence is obvious it is also shallow.

Ultimately, this is a complaint about the historiography, not the development techniques. (I have problems with both, but the latter is covered in other essays.)

When we tell ourselves a neat story about the lineage of some piece of modern tech -- say, that the modern desktop comes from nLS or the Alto -- we erase the elements that differ, either forgetting them entirely or suggesting that they were mistakes. By telling this story, we lend to modern interfaces some of the idealism that animated the earlier projects.

However, the elements that are missing from modern systems are often precisely those most required for squaring the system with the ideals -- Smalltalk's live-editing and composition, for instance, or Xanadu's permanent addressing. So, to tell the history honestly, we should not allow ourselves to misattribute the ideals of nLS, Smalltalk80, & Xanadu to the Macintosh or the Web.

Allowing ourselves to become confused by this kind of narrative supports some dubious marketing, which makes it more dangerous.

For instance, the Macintosh cribbed a lot of philosophical ideas from ARC (augmentation / man-machine symbiosis vs 'bicycle for the mind') & popularized them, to the point that these ideas are more readily associated with Macintosh marketing than with ARC, and yet the Macintosh was a substantially worse fit for this than its competitors in the market at launch! It weakens the original idea (by implying that the Mac was on the right track, or that the Mac was the best people could do in the early 80s, when really the Mac wasn't seriously trying to do any of these things).

Re: iterative innovation --

Iterative innovation doesn't really come into it. These projects were the result of rapid iteration in the first place. Our modern tech is not based on iteration on the seminal projects, but on taking a handful of ideas from those projects and implementing them in a different context. (Squeak is derived from actual Alto & Smalltalk80 tech, but the Macintosh is not. There's an actual Xanadu lineage, and the World Wide Web isn't part of it because TBL wasn't privy to the state of the art in Xanadu tech as of the late 80s.) Often, the ideas aren't fully understood, or they aren't adapted to the new context, or it's not considered whether or not necessary adaptations make the idea itself pointless -- we're talking about outsiders implementing new projects largely based on marketing materials[1]. There's nothing wrong with doing that -- in fact, it's a great way to come up with potentially-interesting ideas of your own -- but it's not an effective way to add meaningfully to a lineage, particularly when you're trying to do it with a fraction of the original resources.

[1] The Lisa & Macintosh projects had a lot of ex-PARC folks involved, and so presumably many of the developers were fully aware of the differences. But the driving design force was Jobs, whose understanding of PARC's design philosophy was very shallow, and they were working with technology substantially less beefy than the Alto, particularly in the Macintosh (which was specifically designed to be cheaper than the Lisa, leading to a lot of cut corners). In other words, in this case it's not completely accurate to say that it was a group of outsiders, but the decision-maker had an incomplete understanding of the original project, and the constraints on time & hardware had a greater influence on design decisions than any inherited idealism. Cost-cutting measures inherited from that project continued to be copied in other projects it influenced long after they ceased to be necessary.

