6 comments

  • causal 1 hour ago
    Title is editorialized and needs to be fixed; the paper does not say what this title implies, nor is that the title of the paper.
    • wongarsu 37 minutes ago
      HN automatically removes the word "How" from the beginning of titles. I suspect this title is one instance of that
    • btilly 47 minutes ago
      The exact phrase appears in the title. There is a title length limit. In this case, I don't think that it is wrong to pick the most interesting piece of that title that fits in the limit.
  • matja 1 hour ago
    The eigenvalue distribution looks somewhat similar to Benford's Law - isn't that expected for a human-curated corpus?
    • btilly 43 minutes ago
      I would expect that for any sampling of data that has a roughly similar distribution over many scales.

      Which will be true of many human-curated corpuses. But it will also be true for natural data, such as the lengths of random rivers or the brightness of random stars.

      The law was first discovered because logarithm books tended to wear out at the front first. That turned out to be because most numbers had a small leading digit, and therefore the pages at the front were being looked up more often.
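
      This is easy to check numerically. The sketch below (my illustration, not from the paper) samples values log-uniformly across several orders of magnitude, as a stand-in for multi-scale natural data, and compares the observed leading-digit frequencies to Benford's prediction log10(1 + 1/d):

```python
import math
import random

random.seed(0)

# Log-uniform samples spanning six orders of magnitude, a stand-in for
# multi-scale data like river lengths or star brightnesses.
samples = [10 ** random.uniform(0, 6) for _ in range(100_000)]

# Tally the leading decimal digit of each sample.
counts = [0] * 10
for x in samples:
    counts[int(str(x)[0])] += 1

for d in range(1, 10):
    observed = counts[d] / len(samples)
    predicted = math.log10(1 + 1 / d)  # Benford's law
    print(f"digit {d}: observed {observed:.3f}, Benford {predicted:.3f}")
```

      About 30% of the samples lead with a 1, matching Benford's prediction, even though nothing about the sampling mentions digits at all.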

  • jdonaldson 1 hour ago
    (Pardon the self promotion) Libraries like turnstyle are taking advantage of shared representation across models. Neurosymbolic programming: https://github.com/jdonaldson/turnstyle
  • ACCount37 2 hours ago
    The "platonic representation hypothesis" crowd can't stop winning.

    Potentially useful for things like innate mathematical operation primitives. A major part of what makes it hard to imbue LLMs with better circuits is that we don't know how to connect them to the model internally, in a way that the model can learn to leverage.

    Having an "in" on broadly compatible representations might make things like this easier to pull off.

    • causal 1 hour ago
      You seem to be going off the title, which is plainly incorrect and not what the paper says. The paper demonstrates HOW different models can learn similar representations due to "data, architecture, optimizer, and tokenizer".

      "How Different Language Models Learn Similar Number Representations" (actual title) is distinctly different from "Different Language Models Learn Similar Number Representations" - the latter implying some immutable law of the universe.

      • dnautics 23 minutes ago
        > latter implying some immutable law of the universe

        I think the implication is slightly weaker -- it implies some immutable law of training datasets?

    • FrustratedMonky 1 hour ago
      Same with images maybe?

      Saw a similar study comparing brain scans of a person looking at an image to a neural network processing an image. And they were very 'similar'. Similar enough to make you go 'hmmmm, those look a lot alike -- could a Neural Net have a subjective experience?'

    • LeCompteSftware 1 hour ago
      "using periodic features with dominant periods at T=2, 5, 10" seems inconsistent with "platonic representation" and more consistent with "specific patterns noticed in commonly-used human symbolic representations of numbers."

      Edit: to be clear I think these patterns are real and meaningful, but only loosely connected to a platonic representation of the number concept.
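
      To make the base-10 connection concrete (an illustrative sketch of my own, not the paper's method): the periods T=2, 5, 10 are exactly where the harmonics of a "last decimal digit" signal sit. A naive DFT of n mod 10 shows strong components at those periods and essentially nothing at, say, T=3 or T=7:

```python
import cmath
import math

# The "last decimal digit" signal -- purely an artifact of base-10 notation.
N = 1000
signal = [n % 10 for n in range(N)]
mean = sum(signal) / N
centered = [s - mean for s in signal]

def amplitude(T):
    """DFT magnitude at the frequency bin nearest to period T."""
    k = N // T
    z = sum(c * cmath.exp(-2j * math.pi * k * n / N)
            for n, c in enumerate(centered))
    return abs(z) / N

# Peaks appear at T = 10 and its harmonics T = 5 and T = 2; periods
# unrelated to decimal notation come out near zero.
for T in (2, 3, 5, 7, 10):
    print(f"T={T:2d}: amplitude {amplitude(T):.3f}")
```

      So the dominant periods the paper reports fall straight out of decimal notation, which is consistent with "patterns in human symbolic representations" rather than a notation-free platonic number concept.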

      • ACCount37 21 minutes ago
        Is it an actual counterargument?

        The "platonic representation" argument is "different models converge on similar representations because they are exposed to the same reality", and "how humans represent things" is a significant part of reality they're exposed to.

      • brentd 38 minutes ago
        Regardless of whether the convergence is superficial or not, I am interested especially in what this could mean for future compression of weights. Quantization of models is currently very dumb (per my limited understanding). Could exploitable patterns make it smarter?
        • ACCount37 20 minutes ago
          That's more of a "quantization-aware training" thing, really.
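
          For context, the core trick in quantization-aware training is "fake quantization": the forward pass snaps weights to the low-precision grid (so the loss sees quantization error), while the backward pass treats the rounding as identity (straight-through estimator). A minimal sketch of the forward-pass rounding, my illustration only:

```python
# Fake-quantize a list of weights to a uniform b-bit grid spanning
# [min(w), max(w)]. In QAT this runs in the forward pass while
# gradients flow through as if it were the identity function.
def fake_quantize(w, bits=4):
    levels = 2 ** bits - 1
    lo, hi = min(w), max(w)
    scale = (hi - lo) / levels or 1.0  # avoid div-by-zero for constant w
    return [round((x - lo) / scale) * scale + lo for x in w]

weights = [0.03, -0.41, 0.75, 0.12, -0.66]
print(fake_quantize(weights))
```

          The hope in the parent comment would be that shared representational structure lets the grid be chosen per-pattern rather than uniformly, but that goes beyond what the paper shows.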
  • dboreham 2 hours ago
    It's going to turn out that emergent states that are the same or similar in different learning systems fed roughly the same training data will be very common. I also predict it will explain much of what people today call "instinct" in animals (and the related behaviors in humans).
  • gn_central 2 hours ago
    Curious if this similarity comes more from the training data or the model architecture itself. Did they look into that?
    • OtherShrezzing 2 hours ago
      They describe in the opening paragraph that both are important, and both were researched in the paper.