• 0 Posts
  • 44 Comments
Joined 1 year ago
Cake day: July 1st, 2023




  • Yeah, that’s a tough one.

    I’d say “well, don’t,” but that’s pretty obviously useless. :p

    I think I can say this much, though: Silent Hill, being so metaphorical, is kind of built like a big puzzle? It’s okay to “not get it” while you’re playing. Most of what I know, I’m pretty sure I picked up from fan wikis later on.

    So, easier said than done, but: try not to think so much, haha.

    The remake is also a little better about explaining its own plot, I think just by being a little more obvious. So, by the end, some things might click into place a little more.

    And yeah, the sound design: (minor early-game spoiler) have you noticed that >!the radio is a positional sound?!<

    >!Normally, you hear it kind of left and behind you. My instinct that the sound is supposed to be telling me where to look was so strong that I kept turning away from enemies I knew were right in front of me. That’s so genius I almost feel it must have been an accident.!<



  • Ah, but here we have to get a little pedantic: producing an AGI through currently known methods is intractable.

    I didn’t quite understand this at first. I think I was going to say something about the paper leaving the method ambiguous, thus implicating all methods yet unknown, etc., whatever. But yeah, this divide between tractable and “intractable” shifts if we ever crack NP-hard problems and have to define some new NP-super-hard category. This does feel like the piece I was missing. Or a piece, anyway.
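
    (Concreteness check, with toy numbers and a helper function that are entirely mine, not from the paper: the reason “intractable” bites is that the space of candidate situation-behavior mappings grows exponentially, so any method that has to search it blindly falls off a cliff almost immediately.)

    ```python
    # Toy illustration (mine, not the paper's): the size of the policy
    # space an exhaustive or blind-sampling method would have to cover.

    def candidate_policies(num_situations: int, num_behaviors: int) -> int:
        # Each situation can be paired with any behavior independently,
        # so there are num_behaviors ** num_situations total mappings.
        return num_behaviors ** num_situations

    for n in (10, 20, 40, 80):
        print(f"{n:>2} situations, 2 behaviors each: "
              f"{candidate_policies(n, 2):.2e} candidate policies")

    # 80 binary situations is already ~1.2e24 candidates; any
    # "human-level" situation count is unimaginably larger than 80.
    ```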

    e.g. humans don’t fit the definition either.

    I did think about this, and the only reason I reject it is that “human-like or -level” matches our complexity by definition, and we already have a behavior set for a fairly large n. This doesn’t have to mean that we aren’t still below some curve, of course, but I do struggle to imagine how our own complexity wouldn’t still be too large to solve, AGI or not.


    Anyway, the main reason I’m replying again at all is just to make sure I thanked you for getting back to me, haha. This was definitely helpful.


  • Hey! Just asking you because I’m not sure where else to direct this energy at the moment.

    I spent a while trying to understand the argument this paper was making, and for the most part I think I’ve got it. But there’s a kind of obvious, knee-jerk rebuttal to throw at it, seen elsewhere under this post, even:

    If producing an AGI is intractable, why does the human meat-brain exist?

    Evolution “may be thought of” as a process that samples a distribution of situation-behaviors, though that distribution is entirely abstract. And the decision process for whether the “AI” it produces matches this distribution of successful behaviors is yada yada Darwinism. The answer we care about, since this is presumably the inspiration AI engineers took from evolution in the first place, is whether evolution can (not inevitably, just can) produce an AGI (us) in reasonable time (it did).
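
    To pin down what I mean by “samples a distribution,” here’s a toy sketch. To be clear, this is my own illustration with made-up numbers, not anything from the paper: a (1+1)-style loop where a made-up target policy stands in for the successful distribution and selection is just keep-if-fitter.

    ```python
    import random

    # Toy sketch (mine, not the paper's): "evolution as sampling."
    # A made-up target policy stands in for "the distribution of
    # successful situation-behaviors"; selection is keep-if-fitter.

    NUM_SITUATIONS = 64   # tiny compared to any real organism

    TARGET = tuple(random.randint(0, 1) for _ in range(NUM_SITUATIONS))

    def fitness(policy):
        """Count how many situations this policy 'survives'."""
        return sum(p == t for p, t in zip(policy, TARGET))

    def mutate(policy):
        """Flip the behavior in one randomly chosen situation."""
        i = random.randrange(len(policy))
        return policy[:i] + (1 - policy[i],) + policy[i + 1:]

    # (1+1) evolution: one parent, one mutant child per generation,
    # keep whichever is fitter.
    parent = tuple(random.randint(0, 1) for _ in range(NUM_SITUATIONS))
    generations = 0
    while fitness(parent) < NUM_SITUATIONS:
        generations += 1
        child = mutate(parent)
        if fitness(child) >= fitness(parent):
            parent = child

    print(f"matched the target policy after {generations} generations")
    # Cumulative selection lands in a few hundred generations; blind
    # sampling would need ~2**64 draws on average. Which of those two
    # regimes the proof is really about is basically my question.
    ```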

    The question is, where does this line of thinking fail?

    Going by the proof, it should either be:

    • That evolution is an intractable method. 60 million years is a long time, but it still feels quite short for this answer.
    • Something about it doesn’t fit within this computational paradigm. That is, I’m stretching the definition.
    • The language “no better than chance” for option 2 is actually more significant than I’m giving it credit for. Evolution is all chance. But is our existence really just extreme luck? I know that it is, but that answer is really unsatisfying. (Rough numbers on “chance” below.)
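
    Putting rough numbers on “chance,” since that third bullet bugs me (this arithmetic is mine, not the paper’s): if succeeding means blindly drawing a successful policy out of the |B|^|S| candidates, the expected number of draws is the reciprocal of the hit probability:

    ```latex
    % Back-of-envelope (mine): blind sampling as "no better than chance."
    % p is the chance that one uniform draw lands on a successful policy.
    p = \frac{\#\{\text{successful policies}\}}{|B|^{|S|}},
    \qquad
    \mathbb{E}[\text{draws until success}] = \frac{1}{p}
    ```

    So unless the successful set is itself exponentially big, pure chance needs exponentially many draws, while selection with inheritance (the sketch above) escapes that. Which of those regimes the proof means is exactly what I can’t tell.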

    I’m not sure how to formalize any of this, though.

    The thought that we could “encode all of biological evolution into a program of at most size K” did make me laugh.
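
    (For anyone else reading: I’m taking the K there to be the usual description-length bound, i.e. roughly Kolmogorov complexity. This formula is my gloss, not a quote from the paper:)

    ```latex
    % My gloss on "a program of at most size K" (the standard
    % Kolmogorov-style definition, not quoted from the paper): the
    % shortest program p that makes a universal machine U produce x.
    K_U(x) = \min \{\, |p| : U(p) = x \,\}
    ```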

  • And how ads on TV are sometimes so much louder than the show they’re cut between. And the glitches! Sometimes you have to completely power cycle your phone to fix something simple. And how Facebook’s curated, algorithmic feed sends people down extremist pipelines, fueling things like public shootings and the January 2021 Capitol riot. And how the continued atomization of society into smaller and smaller pieces (e.g. suburbia) has made people lonelier than they have ever been. And how the displacement of work onto capable machines never seems to yield benefits to the people whose work is being displaced, only to their bosses.

    I guess if all you remember are Letterman’s fumbling grandpa jokes about what the Internet is, gosh dang, even useful for, I could see why you’d think nobody’s criticisms are real.