laarc | new | comments | discord | tags | ask | show | place | submit | login

HTML entities should be decoded from the request title button: Fantastic Metropolis » Division by Zero

Remember Appelbaum being called "sociopath, rapist, plagiarist"?

https://github.com/Enegnei/JacobAppelbaumLeavesTor/blob/master/JacobAppelbaumLeavesTor.md

I remember asking on HN about anything that would warrant "plagiarist"; the responses were a joke, one and all. I also remember asking who would take up the mantle of delivering the fiery talks calling people out to stop being complicit in fucking murder, and the crickets likewise chirped.

The general pattern was people calling him a sociopath based on hearsay, and using that to excuse their own sociopathic actions, carried out in the present, in full light of day.

What a shitshow that was.

Can you imagine someone like Jordan Peterson calling himself a prophet and ever hearing the end of it? Or posting something like

https://camo.githubusercontent.com/975ab8a2d84875793982a038eb12e82cab4d4b1b/68747470733a2f2f61726368...

.. if he posted about how "snitches get stitches", that would be fine, surely?

If that hadn't come from the approved "side", it wouldn't fucking fly. It's the exact same pattern as with right-wing extremists.

https://camo.githubusercontent.com/4cd87603ec087765d992eb5a500500b8f941ccb6/68747470733a2f2f7062732e...

https://archive.is/VdYGT

oh, "we can conclude that". [0] People stoke up a mob with constant, carefully worded appeals and a wall of silence towards any uncomfortable questions, and then wash their hands of it. I don't care how much of it is stupidity and how much of it is conscious; if it behaves like nazi scum in practice, all the lip service and hair colors don't change a thing. The whole Appelbaum witch hunt was sociopathy on full display, weasel words, groupthink. That's all it fucking consisted of.

And all that is EVER used is asymmetric violence. Never direct rebuttal without twisting words, or sophistry. Prove me wrong. I'll gladly link the HN discussions I participated in and was treated like that exclusively. You can then say it's because I'm so "abrasive" and "caustic" (I know I am), and I can act surprised that that excuse gets pulled out.

And we both can ignore the people who made better points than me with nothing but politeness and respect, and got shat on, too. Here's someone that struck me as really solid. https://news.ycombinator.com/threads?id=shava23 But the few adults simply didn't have a chance against the piranha mob. I know how I approach it isn't effective, but that's not my fucking job. At least I don't go along. At least I say no, and clearly so. I don't do it well, so I do it a lot.

Anyway, back then, giant swaths of the "tech community" put on an armband and started marching in goosestep, I saw it, and the CCC turned out to lack backbone, too. I mean, I don't give a shit about the circus in North America (you know, Americans having a pecking order based on sexism and racism is a total fucking joke in light of a war of aggression remaining unpunished, it's really cute in a nauseating kind of way -- there, I said it). But that it actually managed to compromise even the CCC, that stinks.

Here's something I saved because it seemed vaguely important:

> Moreover, I'd like to have all the extremist argumentation slapped the fuck down by intellectuals in public. Given the way things are going however, it's as though transparency and public discussion were anathema to those with power to censor.

> I mean, it can't possibly be that some jack-ass white supremacist, trash-ass ISIS goon, or wank-ass Hillary Trumponite, were hard to repudiate -- unless your own wack bullshit depends on similar constructions. Then it's really hard without stabbing yourself in the back.

-- from https://yro.slashdot.org/comments.pl?sid=10391247&cid=54083067

But that's the problem: many centers of power need them, the tactics of sophistry, alienated abstractions, "voting on what is true", and all the rest.

> The aim of totalitarian education has never been to instill convictions but to destroy the capacity to form any.

-- Hannah Arendt

To me, it's basically just the other half of the pliers. It's the other side of the same coin. Decent, normal human beings who can think honestly, or at least are still honestly trying, who fear nobody, and contort for nobody, are getting crushed in between them. "Both" (it's more than two, it's more Java style, with factory factory factories of totalitarian subcultures) "sides" don't have any value, and are arrayed against everything and everyone of value that's left. They use each other as excuses, like, say, Israeli and Palestinian extremists (I'm not saying they're all extremists, but pick some extremists from each side and you will find what I'm talking about). Against normal, pluralistic individuals, who cooperate and argue, against anyone who isn't a totalitarian drone of whatever fucking stripe (like it matters!), a consumerist zombie, a slave. I mean, if I'm going to be an arrogant asshole, I might as well go all out and speak my mind fully. Actual thinking, the kind individuals do in their own heads without any extrinsic rewards or punishments (imagine that), and actual debate, which peers do without playing games, without tricks, without deception, without mobs playing tag team, without hooded executioners -- all that is on the way out, and fuck everyone clapping.

The question isn't why I'm "upset" -- I have mountains of evidence to back it up (that's not supposed to be an argument, it's a request to challenge me) -- the question is why you aren't. You see, I expect these kinds of tactics from racists and sexists and Nazis and whatnot. Such tactics, and the results of them, are the reason why they are to be rejected. It's not like any other group can use them against them and not become them. So I get kinda antsy when self-labeled warriors for progress, rather than regression, adopt them.

I don't judge the correctness of that by how many people agree with it. I judge people who are either lukewarm or wrong by how much I consider it to be correct. I'm open for argument, I don't count people, and if the argument starts with "many people think that", then that's a forfeit right there. You know, like an actually serious adult? Not just a polite lap dog confusing that with maturity as is the American style?

People who don't have that in them, who can't actually defend their position because it's not actually their own, don't like that. They would like to see it removed.

So I don't just disagree, I'm sick and tired of this shit and how it's still festering. It's anti-intellectual, and it's spineless. You make a desert and call it peace, and I'm calling bullshit.

--------

And while I'm offending people anyway, here's another thing I mean fully: many people today are against sexism or racism in the same way they would have been against Jews under the Nazis. It doesn't spring from empathy and courage; it springs from obedience, from wanting to belong, and from wanting to be better than others without having to do anything, just by default, just for not being them.

I hate that with every fiber of my being, and I've been taking it seriously since before some of those neo-fascist pseudo-leftist pseudo-intellectuals were even born. Because to me it IS painfully serious and real. I can never forget that in the country I live in, people were dragged from their homes, and smashed and burned and spat on. I will never forget that the actual idea was to not leave a trace of the death camps, that if the war had gone differently, they would have rewritten history. Most Germans don't want to be "bothered" by that; I would fear for my soul to forget it for even one second. The ground should open up and eat anyone in Europe who does (including tourists and expats), triply so Germans.

When I was 11, I switched TV channels and saw corpses pushed into a grave by caterpillars. That and other things I chewed on, I will chew on them until I die, nobody can just "comprehend" it. And you cannot even fathom how disappointed I was when I later learned of something like the Vietnam War, and how I felt when I saw the buildup to the Iraq war before my own eyes. I can't live on this planet without struggling with that at every opportunity I have. We live in an utterly, thoroughly, completely fascist age. Arno Gruen was right, we do live in the age of Adolf Hitler. It metastasized, it got cleaned up, it got perfected. It's like fucking microplastics. It learned to smile, too [1][2], though the eyes remain dead as ever, for those with eyes to see.

Just so you know what crawled up my crotch, and why it's so persistent. Stop blaming messengers -- pull your weight, and other people won't be quite as sweaty and exhausted and wild-eyed.

[0] Of course, we cannot conclude that maybe not all the Ghostbusters hate was real, that some of it was synthetic, to be able to peddle a shoddy movie? Not even with the mockery of "male nerds" in the movie, which was put in before anyone could react to it, while celebrating "female nerds", without noticing how one side of the mouth isn't quite in line with the other. And no, I don't actually want to belittle rape threats, and I don't disbelieve that they happened. Any and all actual people who send something like that should go to actual jail. I mean that, too. But again, I'm not getting shamed into fuck all, ever. The enemy of my enemy is not necessarily my friend, and the double standards are on full display, as are the mob tactics and sophistry used to defend them.

[1] https://www.youtube.com/watch?v=Uvd68ovD68s&t=5m44s

[2] https://www.youtube.com/watch?v=2gavNdxiyg0


I remember when Gregor Gysi called Max Blumenthal an antisemite, because he simply trusted the Wiesenthal foundation. Gysi doesn't even fucking speak English, he knows nothing about Blumenthal's work. Blumenthal quoted Yeshayahu Leibowitz, he didn't even condone his usage of the word "Judeo-Nazis", he just pointed out that that's how this venerable man put it, and that was "poorly received" by the Wiesenthal foundation, and that's that. Gysi just trusted the database, if you will, and I'm sure he's still convinced he's right.

That's the kind of brownshirt bullshit I won't be shamed into. As a German, I also inherit the responsibility of knowing what it's like to be a Nazi, how Nazis operate -- not just what it's like to be a victim of Nazis, or the popular hack-fraud versions of history that focus on the differences between that and what we do while skirting around the utterly gross similarities. I don't like what I'm seeing, and I don't say "brownshirts" to be polemic. I mean it. Not "literally", but seriously.

One measure for all, how about that? Either rape threats are just a result of being "poorly received", or votes and bans and whatnot are also not automatically correct. Either might is right, or it's not.

Another anecdote: when BLM was on everybody's lips, I was a bit dismayed by some aspects, but generally supportive, so I followed one of the figureheads, Marissa Jenae. The next thing I know, she posts this on her Facebook:

> "But a prophet is not a prophet til they ask this question:

> When shit hits the fan, is you still a fan?

> When shit hits the fan, is you still a fan?"

> Many have called me a prophet. Three things are true about all prophets:

> --They are under obligation, by God, to speak truth...all while knowing it will not be received or understood

> --Their work, by nature is divisive and correcting...no matter where they land, all people must respond to a prophet's words.

> --A prophet is never excepted in their hometown (place of worship).

> sigh

When I saw that, I shared the post and mocked it. I couldn't believe it, and frankly, it gave me goosebumps, the bad kind. What do you think happened to my social signal, was it received? Of course not, she blocked me, probably proudly so.

The shit we're seeing now, and the shoddy justifications for it, have been quite a while in the making.

More Camille Paglia, less victim cults with armbands.


> On Reddit, if something hateful gets posted (and it isn’t in a toxic subreddit) others will indicate disapproval by voting it down, like the opposite of the Twitter favorite button.

And with the exact same mechanism, falsehoods are spread, and questioning them is punished.

> So if a comment has a hugely negative score, it’s clear to everyone that the community strongly disapproves of it. Even though the points have no functional effect beyond de-emphasizing and hiding the post, it’s an indicator that attracts scorn and causes shame.

For brownshirts that would be the sole signal, yes. Anyone else would take the context into account.

> One cannot receive beatings and be right, one cannot be dirty, eat garbage and be right.

-- Robert Antelme, "The human race"

It can also mean "this many people disagree but can offer no argument, and are seeking to silence". Reddit removing individual counts for upvotes and downvotes is supposed to hide those who disagree with the majority -- "winner takes all". It's disgusting.

Also, that place is just crawling with corporate shills.

> So if a user posts enough poorly-received content, this can be spotted easily, and they’re likely to be judged harshly or shunned by the community at large.

That's not actually true, I never see that happen. But it's interesting that the author considers this desirable. So basically, you get punished, with absolutely NO burden of any kind, any toddler can do it, and then others see that "social credit score" and treat you like a thought criminal, without even knowing who punished you for what.

Incredible. Also, no. No pasaran. Not one, not a billion. Not. even. one. inch.

And that's not even getting into Ghostbusters, and how sexists were used as a fig (pun omitted on purpose) leaf to smear anyone who didn't like the movie as sexist.

Bleh. Here's a sentiment on the matter that doesn't suck from start to finish:

> My reason for reducing my social media presence is the Like count next to every thought expressed. By adding a publicly visible number next to every expressed human thought, you influence behavior and thinking.

-- https://news.ycombinator.com/item?id=19325515

Not to mention how it opens the door to manipulation. No matter how you slice it, it destroys human thought, and it takes a while of that process to even arrive at an article such as this.


Because you posted a gopher link, I decided to read this in gopher, which is great because it encouraged me to improve my client by adding line wrapping.
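For anyone curious, line wrapping for gopher text is a small job. Here's a minimal Python sketch (not my actual client code, just the shape of it) that wraps to a width while passing blank lines and the lone-dot terminator through untouched:

```python
import textwrap

def wrap_gopher_text(lines, width=70):
    """Wrap plain-text gopher lines to a given width.
    Blank lines and the lone-dot terminator pass through untouched."""
    out = []
    for line in lines:
        if line in ("", "."):
            out.append(line)
        else:
            out.extend(textwrap.wrap(line, width=width))
    return out
```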

The author wants inline links; I'm not sure why links inside a paragraph are something they want so much. They also say no inline images -- I can understand that. The thing I want to add to gopher is (basic) markdown support, including inline images.

They also talk about adding TLS, which I personally really don't like and don't think we should add... but I understand if I'm outvoted on this one. I totally agree with you on tor, and advocate using .onion for gophers where you need confidentiality and authentication -- even though it's technically a much bigger codebase to include compared to openssl, it requires zero changes to existing gopher clients or the protocol.

I don't think there is really much need for encryption for gophers. A lot of people want it, though. Why do they want it? I'd prefer looking into alternative solutions like .onion, or even some shared/trusted network you connect through.

Another argument against TLS in gopher: can you implement TLS? If you can't implement it, maybe it shouldn't be part of the spec. There are lightweight cryptosystems that you can implement.
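Part of the implementability argument is how little there is to gopher itself. As a rough Python sketch (mine, not from the post): a request is just the selector plus CRLF, and a menu line is one type character plus four tab-separated fields:

```python
def gopher_request(selector=""):
    """A gopher request on the wire: the selector, then CRLF."""
    return selector.encode("utf-8") + b"\r\n"

def parse_menu_line(line):
    """A menu line: one type character, then display string,
    selector, host, and port, separated by tabs."""
    itemtype, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")
    return {"type": itemtype, "display": display,
            "selector": selector, "host": host, "port": int(port)}
```

A protocol you can parse in a dozen lines is one you can actually audit; bolting a full TLS stack onto that changes the economics completely.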

Thanks for this lovely post, I'm happy to see all this activity around gopher just now. I'm adding a link to it from my phlog.


This article is mirrored at gopher://fuckup.solutions/1/enkiv2/avoiding-the-gravity-hole-of-webbiness-in-gopher

I've been looking for ML learning material that's good for beginners; this looks great.

Nice, clean implementation. Definitely going to read the documentation on the compiler passes later. Thanks for sharing :)

Didn't get all the way through this, but it seemed pretty specific for a discussion of writer's block, which IMO is generally not about a content shortage but about not having the motivation to finish a huge thing alone.

We are humans; doing giant projects alone sucks by default.


https://twitter.com/josephfcox/status/1116688926423515136

https://www.reddit.com/r/privacy/comments/2vwyx9/setting_up_a_tor_hidden_service_with_nginx_basic/co...


wow this was a really interesting perspective

> does it really matter if those parts aren't readable if we know they're just tedious bullshit that readable code generated in a structure you understand reasons for?

Corollary to "all abstractions are leaky": No boilerplate is ever entirely bullshit.

Don't get me wrong, DSLs can be useful (provided somebody's watching the big picture to ensure there aren't multiple notations for the same niche). But I'm still operating at too low a level for them. I'm digging myself back out to the sun as fast as I can :)

> You seem mostly to be doing ASM here. I get why. Maybe consider allowing one, small layer of indirection with still clear mapping.

Like I said, I will definitely have a notation for structs. But the implementation of the notation for structs is not going to itself use the notation for structs. That kind of circular dependency is hell on global comprehension of the big picture. The local convenience of having offset names is far too small a benefit to outweigh the global issues.

I won't be doing ASM forever. But as I add layers of notation, each implementation will only be allowed to use earlier notations. Notations will have a strict dependency ordering.


I'll just toss a related submission that shows all the ways hardware fails slowly:

https://ucare.cs.uchicago.edu/pdf/fast18-failSlowHw.pdf


Abstract: "We present TaxDC, the largest and most comprehensive taxonomy of non-deterministic concurrency bugs in distributed systems. We study 104 distributed concurrency (DC) bugs from four widely-deployed cloud-scale datacenter distributed systems, Cassandra, Hadoop MapReduce, HBase and ZooKeeper. We study DC-bug characteristics along several axes of analysis such as the triggering timing condition and input preconditions, error and failure symptoms, and fix strategies, collectively stored as 2,083 classification labels in TaxDC database. We discuss how our study can open up many new research directions in combating DC bugs."

re error handling.

Brings me to another point. We focus a lot on how much code something takes and how readable it is. The alternative is generating boilerplate-like code which isn't readable, but where the declaration and the generator are. Think grammars fed through parser generators. I was thinking extra lines of code might be worth it for automating away parsing and error handling with DSL's. I mean, does it really matter if those parts aren't readable if we know they're just tedious bullshit that readable code generated in a structure you understand the reasons for?
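As a toy version of that trade-off (Python, hypothetical names throughout): the declaration and the generator stay readable even though the emitted accessors are pure tedium:

```python
def gen_accessors(fields):
    """Emit tedious getter/setter boilerplate from a short,
    readable declaration: a list of field names."""
    lines = []
    for name in fields:
        lines.append(f"def get_{name}(rec): return rec['{name}']")
        lines.append(f"def set_{name}(rec, v): rec['{name}'] = v")
    return "\n".join(lines)
```

Nobody needs to read the emitted text; reviewing the few-line generator is enough.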

Maybe just personal preference here. I'm doing a generative approach like Kay/STEPS did for anything tedious, anywhere possible, if I can. That lets me dodge the overhead of error handling and...

Re structs.

The main advantage of them is they reduce the cognitive overhead of juggling nearly meaningless details. You even mention understanding as a goal of SubX. Seems like it's still worth more exploration. Maybe it's a declaration that has the offsets etc, your interpreter saves it, and you have create/set/get/delete syntax that gets translated into simpler stuff by the interpreter. Basic rewrite rules. Back when I did it, I just added fields to the struct name with dot syntax, as strings, in translation for output code or traces. So, your syntax might let you do a one-liner. The executable code and traces will show the full thing it's doing, with a comment on top containing the mapping. Maybe it shows you that on first run as you write the program, so you can check it while everything is still fresh in mind. Like REPL's.
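A sketch of that rewrite-rule idea in Python (the layout and offsets here are illustrative, not SubX's actual ones): every 'var.field' becomes 'var+offset' in the output, with the original kept as a trailing comment so the mapping stays visible in the code and traces:

```python
import re

# Illustrative layout: field name -> byte offset.
STREAM = {"write": 0, "read": 4, "length": 8}

def rewrite_field_refs(line, layout):
    """Expand every 'name.field' into 'name+offset', appending
    the original line as a comment so the mapping stays visible."""
    def expand(m):
        return f"{m.group(1)}+{layout[m.group(2)]}"
    code = re.sub(r"(\w+)\.(\w+)", expand, line)
    return code if code == line else f"{code}  # was: {line}"
```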

You seem mostly to be doing ASM here. I get why. Maybe consider allowing one, small layer of indirection with still clear mapping. Hyde's HLA is inspirational here. If you want simple asm, you can get it. If you want understandable code (!= asm), you can selectively use the "high-level" stuff which still isn't high-level like C++ or Java. Obviously, yours would be simpler than Hyde's.

https://en.wikipedia.org/wiki/High_Level_Assembly

re zero emphasis on portability

That's different. It will stop most undefined behavior. Also, there's just one implementation, which helps.

re type definitions

Definitely look at Cyclone and Typed Assembly for ideas on attaching different, semantic meanings to low-level things like pointers. Ada's combo of type and bit representation is worth trying. Maybe look at whatever the Zig guy is doing, too, since he's aiming for safety and simplicity. If you want to get wild, you can throw a whole Prolog interpreter in there like the Shen guy. I don't suggest that.

Although clueless about them, I do notice similarities to basic pattern matching and rewrite rules when I look at type systems in papers. That's what also powers macros and HLL-to-LLL conversions. There could be a set of primitives you could adapt to handle them all by just attaching the context somehow. Maybe also with templating where the primitive expresses core idea but is adapted to setters/getters, bitsize, and ranges. Pulled right out of type/bit definitions.

re side by side

Glad you love it. Look forward to seeing what use you come up with.


Thanks for the comments! Really appreciated.

> When considering minimal implementation, one will often try to choose between the easiest thing to parse ('b8') or easiest thing for human comprehension ('copy-to-EAX'). Why is 'b8/' in there? Are you using 'b8' + stop at whitespace for easy parsing while leaving ignored characters in between only for readability? And were there any drawbacks when you considered just dropping the b8/'s and such.

You kinda deduced the reason. I'm trying to walk a fine line between keeping the implementation small and keeping the interface at least in some realm of ergonomic. This isn't a notation to bootstrap as quickly as possible and then treat like a red-headed stepchild. I want it to be habitable[1] all by itself. A uniform but strict syntax with lots of room for free-form comments is the best approach I've found so far. The nice thing about restricting readability concerns to comments: error handling stays simple. And error handling is often where a compiler spends a lot of LoC (and still ends up with a highly sub-optimal result).
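Concretely, the parse rule being discussed can be this small (a Python sketch of my understanding, not the actual SubX parser, which also checks some of that metadata): split the line on whitespace, and in each word keep only what comes before the first '/':

```python
def parse_word(token):
    """Keep the datum before the first '/'; the rest is
    human-facing metadata this sketch ignores."""
    datum, _, _metadata = token.partition("/")
    return datum

def parse_line(line):
    """Strip '#' comments, split on whitespace, parse each word."""
    code = line.split("#", 1)[0]
    return [parse_word(t) for t in code.split()]
```

So "b8/copy-to-EAX" and a bare "b8" parse identically; the readable part rides along for free.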

> The one thing I'm unsure on, which I didn't see in yours (maybe not applicable) was structs. They were the one change in C that let UNIX be ported from assembly.

Indeed. I do use structs, but so far they're entirely described in comments. For example, you may have noticed the list of offsets I describe for streams (https://github.com/akkartik/mu/blob/73d253fd/subx/Readme.md#data-structures). Here's how I describe it in code comments: https://github.com/akkartik/mu/blob/73d253fd/subx/055stream.subx#L3. Here's how I describe it when defining new streams: https://github.com/akkartik/mu/blob/73d253fd/subx/057write.subx#L148

I'm already starting to think about type definitions once I finish bootstrapping the SubX implementation. Definitely the #1 high-level feature I want. The plan is to grow a small notation that occupies the same niche as C, but with heavy emphasis on keeping the implementation small and transparent, and zero emphasis on portability. That should eliminate C's problems of undefined behavior.

It may look a lot like C, but that's still open. I think having it look different may help reinforce the right expectations: it's not going to be C, there's never going to be any sort of bug compatibility. I'm also planning to avoid complex nested syntax for arithmetic or pointers. I'd like to preserve a 1-1 correspondence with machine code as long as possible. Makes error messages much easier to design.

Your idea of showing before/after is awesome. I'm never going to do it in the context of C :) but I am going to totally steal it at some point.

[1] http://akkartik.name/post/habitability


Really do love the name. A few observations.

"b8/copy-to-EAX"

When considering minimal implementation, one will often try to choose between the easiest thing to parse (i.e. something like b8) or easiest thing for human comprehension (i.e. copy-to-EAX). Why is b8/ in there? Are you using b8 + stop at whitespace for easy parsing while leaving ignored characters in between only for readability? And were there any drawbacks when you considered just dropping the b8/'s and such.

"(SubX doesn't support floating-point registers yet. Intel processors support an 8-bit mode, 16-bit mode and 64-bit mode. SubX will never support them. There are other flags. SubX will never support them. There are also many more instructions that SubX will never support.)"

You might also want to say this near the beginning, since it's an introductory statement. You also don't really say why. I know you're bootstrapping, with a minimal implementation as a goal, but that isn't mentioned in the intro. This might give newcomers the wrong idea. The intro still mostly gives them the right idea of it, though.

"The Intel processor manual is the final source of truth on the x86 instruction set, but it can be forbidding to make sense of, so here's a quick orientation. " "The good news is that internalizing the next 500 words will give you a significantly deeper understanding of your computer."

The ultimate value of SubX might be eliminating the need to learn x86 the hard way. I had one article describing a few instructions that I was thinking of cheating with. Also thinking of going straight to ARM on a Pi or MIPS on a router, since their subsets are simple and more flexible (eg stack vs register vs whatever). SubX brings x86 complexity down to their level.

The only drawback is all my stuff is x86_64. A SubX for it with the same instruction coverage might be useful.

"Here's a sample of what a trace looks like"

That looks neat and very helpful. Mainly for debugging, but potentially for high-assurance tools, too. They have to prove the object code does what the source says. Traces with equivalence proofs and tests are one method.

Your choice of primitives is well thought out, too. For bootstrapping, I identified scalar variables, arrays, trees (or just linked lists), a while statement, and the ability to read/write a file. An interpreter for just those, hand-done in x86, should be able to bootstrap all the low-level tools that are one step up. Modifying the interpreter to output rather than execute the assembly creates a primitive compiler. The one thing I'm unsure on, which I didn't see in yours (maybe not applicable), was structs. They were the one change in C that let UNIX be ported from assembly.
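That interpreter-to-compiler move can be made concrete in a few lines (a Python toy, nothing like a real bootstrap): the same loop either executes each primitive or records it as pseudo-assembly, so flipping one flag turns the interpreter into a primitive compiler:

```python
def run(program, emit=False):
    """Execute a list of (op, arg) primitives on an accumulator,
    or, with emit=True, record them as pseudo-assembly instead."""
    acc, out = 0, []
    for op, arg in program:
        if emit:
            out.append(f"{op} {arg}")
        elif op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        else:
            raise ValueError(f"unknown op: {op}")
    return "\n".join(out) if emit else acc
```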

Another thing from a year ago I don't know if I brought up. People are starting with TCC partly because it's so small. That makes whatever amount of C we need to handle small or, if we rewrite into primitive language (eg the interpreter), there's less labor. That assumes we do everything by hand. The amount of code in TCC or the C-based GCC, esp if legacy code not made for us, might argue a tool-assisted approach isn't really cheating so much as basic economics.

The idea I had was to use an untrusted tool for easy rewrites of C code. I've seen lots of tools used for compilers or static analyzers but easy is an unknown for now. Quicker and more interesting than tedious recoding, eh? The developer creates a mapping from whatever C statements and structuring is in GCC to the primitive language. The tool transforms GCC into that with either line-by-line diffs or before and after shots with explanations of each change. Verifiers review however much they want to make generated code trusted. They run the primitive interpreter on the trusted code to bootstrap GCC.

Might save a lot of time on top of eliminating need for TCC. Additionally, the tech might be reused for other projects. Maybe use a term-rewriting language combined with existing tech for producing the parse trees for C.

What do you think of generating simpler versions of large compilers for hand-coded interpreters? Also, people not verifying the code might just verify the parser and translation rules. An ancient GCC backdoor probably wouldn't anticipate those components' code.


The main title is clickbait, but it has good content. Reposting my HN comment about a huge omission:

Sports, people! They keep saying men are better at visualizing stuff from various angles and hitting stuff with projectiles. I expected several paragraphs worth of all the outdoor, sporting activities most men are raised to do. Throwing, catching, or hitting balls by itself could cause some of this. Wrestling or fighting, too. Wandering is probably a factor, esp if involving jumps or climbing. Or just wandering through woods where everything is initially weird with much imagery and terrain misleading. Gradually build up a mental picture of what everything actually is. Bust your behind less, too. ;)


good one!

"Towards the end of his life, Kurt Gödel developed severe mental problems and he died of self-starvation in 1978. His insights into the foundations of logic were the most profound ones since the development of proof by the ancient Greeks."

Damn. I didn't know that. The security version of this is the Karger-Thompson attack or really subversion risk applied pervasively. I warn people to just... stop thinking about it. They'll go crazy.


The source code is far from breathtaking, but available upon request.

Despite the problems, folks adapted the prank into a language usable enough that universities started cranking out competent programmers. This threatened to commoditize us, with a huge hit to salaries. Another evil mastermind, Bjarne Stroustrup, launched his own prank to prevent that:

http://harmful.cat-v.org/software/c++/I_did_it_for_you_all


Just sent you a prototype of this idea.

I just noticed that you'd made this suggestion earlier: https://niu.moe/@rain/101562575487971130

I'd seen it then but not fully internalized it.


Thinking about the problems you sometimes face by using a camera as a scanner can be a useful way into this question/problem.

I don't have time to write out all the reasoning I could try to apply to this right now. I might try to come back to it this evening.

I will say that, as a person who has written software to build systems, understanding some of the underlying details of how something works has often been helpful in understanding why things do what they do. All abstractions leak, as it were, and there's a cost when they do.

I suspect that getting to the right answer by just reasoning, without actually breaking down how the scanning mechanism works, is potentially building on faulty assumptions, but I don't have the time to work the details on that now.

Something that may be unrelated, but comes to mind: Many people, myself included, scan documents by using a cell phone camera, which would change the result here, by changing how the scanner works. But perhaps that would show up in the way the question is phrased.

Definitely the sort of brain-tickler that is probably more interesting to think about than it has any right to be.


... (thinks) ... no, I don't think so.

I think it is a little clearer.

Is this partially about whether a mirror actually has an appearance of its own?


I can see that you're trying to find the question, and I appreciate your response. Even so, you're still not getting to the question I'm trying to ask, but you have given me a few ideas.

So let me try asking it a different way.

========

Suppose that I have here a device which, when a leaflet, pamphlet, printout, or other "image" is laid on it, will produce a piece of paper with a copy of the appearance of the object given.

From that alone, without knowing anything about how it works, can you deduce what will be produced when you put a mirror on it?

========

Is that clearer?


If this is intended as a philosophical question, then it might help to frame it as such. Otherwise you're asking a question about two very physical objects that have very concrete properties, which would interact in predictable, if unexpected, ways in this scenario. (I think you'd get a view of the inside of the photocopier, if there is enough reflected light for that, depending on the angle of the cameras inside the machine, or just a smear of the sensor bar moving across.)

What does a photocopier do that's fundamentally different than "take a certain type of photo, then print it"? Are you wanting to introduce a type of paradox? Is there a greater point here that might be served by a different example?

This feels like something that you found profound at some point?

