laarc | new | comments | discord | tags | ask | show | place | submit | login

> The quest for certainty blocks the search for meaning. Uncertainty is the very condition to impel man to unfold his powers.

-- Erich Fromm

That’s tricky. I don’t have great advice, but I can hypothesize what I might do in your shoes. If I enjoyed working on the product and wanted to continue working on it with (probably) a larger sphere of influence than before, I’d likely start by negotiating a higher salary after the initial storm blew over but before new engineers are hired, and make it clear that I would like to take more ownership of development. If successful, it would be a better situation should the need to find a new job suddenly arise. If unsuccessful, I’d start interviewing elsewhere like crazy.

Feel free to debate my point of view, but I'd argue that staying would be essentially "re-founding" the company, because the senior engineering experience is gone and in its place is a team composed only of people with less than six months of experience with the codebase.

If I were to found a company, I'd like to be able to choose the product concept, choose who I work with, and start with a fresh cap table / debt load, etc.

Is this a product you believe in enough to (re)-"start" a company for? Are these the people you'd pick for starting a company? What's the financial situation?

I'd go with (A), because even fairly legit startups are extremely dicey & this one sounds really shady (like maybe there's embezzlement or worse involved).

Got the content by scraping web-Twitter and parsing the content and links, then created a DOT file and let GraphViz do the heavy lifting.
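The DOT step is simpler than it sounds. This isn't the author's code, just a minimal sketch of what "created a DOT file and let GraphViz do the heavy lifting" might look like, assuming the scraper has already reduced each tweet to an (id, parent_id, label) triple:

```python
def to_dot(tweets):
    """Render a reply graph as a GraphViz DOT string.

    `tweets` is a list of (tweet_id, parent_id, label) triples;
    parent_id is None for the root of the conversation.
    """
    lines = ["digraph conversation {"]
    for tweet_id, parent_id, label in tweets:
        lines.append(f'  "{tweet_id}" [label="{label}"];')
        if parent_id is not None:
            # Edge from the tweet being replied to, down to the reply.
            lines.append(f'  "{parent_id}" -> "{tweet_id}";')
    lines.append("}")
    return "\n".join(lines)

# A root tweet with two replies; pipe the output to `dot -Tsvg`.
print(to_dot([("t1", None, "root"),
              ("t2", "t1", "reply A"),
              ("t3", "t1", "reply B")]))
```

The field names and the flat triple format are assumptions; the real scraper presumably carries more metadata per tweet.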

That's neat. How did you actually make it?

I have trouble following any Twitter (or other!) conversation that becomes at all popular. So I wrote a tool to produce charts of same to help me visualise the content.

First cut, much wrong with it, but I thought I'd share the product, and this particular discussion.

Please note ... I'm not a front-end person, the HTML/SVG code is crap, the code to produce it is crap, and the system is spit-n-baling-wire, user-hostile, and has just so many things that could be included.

But it works for me, and if it inspires someone else, have at it!

Bookmarked, will check it out when I can.

What do folks think of my current strategy: never ever use the horrible euphemistic IP term, and instead use the terms "Intellectual Monopoly Laws" or "Intellectual Slavery Laws"?

I think Free Software needs to stand up for the truth. I think "Intellectual Monopoly" is a fine and accurate term, but any time someone uses the IP term, you need to counter with the "Intellectual Slavery" term, because IP laws have no logical connection to property; instead, they are a restriction on someone's freedoms, which is by definition slavery.

I think it's time to go on offense against the IS industry.

Strange, but I somehow missed hearing about this game until now. Before I clicked on the article, I thought that the game was Infocom's "The Witness" (which I loved back when I played it).

This is very inspiring and well detailed! Nice article (:


Mentions that Chrome and Chromium implement DRM. That's true, but so does Mozilla Firefox.

In the previous iteration of my CMS, I tried something like that. Several sections of the site had a hash for their content, and when loading a new link, only the bits that were different would have to be loaded. It turned out that all these elements were different for most pages, and the gains were meaningless, if not negative. I built my own little pre-loading and caching layer in JS, and it even played nice with HTTP cache stuff I think, but even at the most optimal I could get it, it was pointless. So I got that out of my system like 10 years ago and never bothered with it again.
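The scheme described above boils down to publishing a content hash per section and only re-fetching sections whose hash changed. A minimal sketch (my illustration, not the commenter's actual CMS code; the section names are made up):

```python
import hashlib

def section_hashes(sections):
    """Map section name -> content hash, as the server would publish them."""
    return {name: hashlib.sha256(body.encode()).hexdigest()
            for name, body in sections.items()}

def sections_to_reload(old_hashes, new_hashes):
    """Only sections whose hash changed need to be fetched again."""
    return [name for name, h in new_hashes.items()
            if old_hashes.get(name) != h]

old = section_hashes({"header": "logo", "nav": "a b c", "body": "page one"})
new = section_hashes({"header": "logo", "nav": "a b c", "body": "page two"})
print(sections_to_reload(old, new))
```

The failure mode the commenter hit is visible here: if nearly every section differs between pages, the list comes back nearly full, and you've paid the hashing and bookkeeping cost for nothing.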

If there's several lists on a page that need to be independently browsed and filtered, fine, but if it's just one list, no. If it's just a blog, no, please!

The tool is here:

Trail of Bits uses this tool. They have a write-up on using it with their DeepState:

I thought this was interesting just because of how pervasive the author claims it is over there with deep effects on the participants. I've known a few people that didn't seem rich because they just dress normal. They knew who they were, though, far as I could tell.



The neat part is that we might be able to quickly filter most of the bullshit studies immediately by just searching for statistical significance. :)

Someone recently reminded me of this essay I did. I kept seeing articles on Schneier's blog and other places talking as if conspiracy were a made-up concept that required nutballs to believe. Yet there are provable conspiracies everywhere. By the numbers, conspiring against each other is one of the most common and pervasive things people do. Being that pervasive, it should be a default possibility to investigate as a cause of anything.

The problem kicks in when people aren't checking sources, aren't looking for counterpoints, are being selective about presentation, and so on. It's more a mis- and dis-information problem than conspiracy theory itself being bullshit.

Then, the Bitcoin supporters make a counter-argument like this:

After reading it a while, I noticed that Bitcoin and the current financial systems can't be treated as competing in isolation. Bitcoin uses the current financial systems. So, Bitcoin's energy profile is its energy/resource use plus the financial system's. I argued that on Lobsters with Greg Slepak and David Gerard. I also described specifically what would be necessary for Bitcoin to be an isolated system:

David later wrote an article on it, although he cited other sources. At least he's getting the info out there. His version has tons of extra details about the energy usage, along with examples of misleading claims cryptocurrency advocates are using to make excuses for the drawbacks of their protocols:

My scheme was to simply fix the problems in existing systems with proven methods: change incentives via public-benefit corps and non-profits with charters requiring common-good things and banning common bad things, with penalties decided by a 3rd-party non-profit with a good record. The decentralization benefits can be achieved, a la SWIFT, with centralized operations that interact over standardized protocols. They can both run and check logs using the fastest, cheapest tech available for centralized operations. I gave a simple example here:

I shoot RAW exclusively. The renaming to YYYY-MM-DD-HH-MM-SS is handled by Bibble, which at the press of a button saves the selected images as 16-bit (just because, why not) TIFF into the output folder, then runs a script on them: it first resizes, sharpens, and converts to PNG via ImageMagick, turns the .png into a well-compressed JPEG via guetzli, and finally uses jpegtran to turn that into a progressive JPEG.

I lose the EXIF stuff in the final image that way, which so far I've shrugged off... but now that I think of it, I'll see if I can simply make a second output job, which saves images as JPEG with all metadata, run that first, and then change the script to transfer the EXIF data to the images generated by guetzli.
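The script's three stages can be sketched as the commands they'd invoke. This is my reconstruction, not the commenter's script: the resize geometry, sharpen amount, and quality setting are all assumptions, and the function only builds the command lines rather than running them.

```python
def pipeline_cmds(tiff_path, out_base):
    """Build the commands for each stage of the TIFF -> progressive JPEG
    pipeline: resize/sharpen to PNG (ImageMagick), recompress (guetzli),
    then losslessly convert to progressive JPEG (jpegtran).

    All numeric parameters here are illustrative guesses.
    """
    png = out_base + ".png"
    jpg = out_base + ".jpg"
    final = out_base + "-progressive.jpg"
    return [
        ["convert", tiff_path, "-resize", "1600x1600", "-unsharp", "0x1", png],
        ["guetzli", "--quality", "90", png, jpg],
        ["jpegtran", "-progressive", "-outfile", final, jpg],
    ]

# Each command list could be passed to subprocess.run() in turn.
for cmd in pipeline_cmds("2019-01-05-12-30-00.tif", "2019-01-05-12-30-00"):
    print(" ".join(cmd))
```

Keeping the stages as separate commands makes it easy to splice in the EXIF-transfer step (e.g. between guetzli and jpegtran) once the second metadata-preserving output job exists.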

The record-breaking lift itself:

This is a nice place to apply formal methods, because it's very clear what purpose they have (to keep the interpreter from miscompiling code and corrupting the kernel) and the benefit is something we all want (if an interpreter runs in kernel, we all want it to be as safe and secure as possible).

ah thank you!


Which virtual machine software did Terry use?

Part 2:
