On Perceptrons #2

Open
opened 2024-12-03 17:35:55 +00:00 by scott · 0 comments

Just leaving a few crumbs of research here:

  • Minsky and Papert's Perceptrons is pretty rough going for me. It's a bunch of proofs which I can reason through, but don't especially enjoy.

    • Their most famous proof is that a single perceptron can be trained to compute AND and OR, but not XOR, because XOR isn't linearly separable. (There's a quick sketch of this right after this list.)
    • This isn't especially interesting in itself. However, it does highlight the ways in which neural nets/machine learning are a departure from "classical" computing grounded in Boolean algebra.
  • Dan Shiffman has a chapter on perceptrons in The Nature of Code. Naturally, the second version (which is the PDF I have on my computer) gives up pretty early. It introduces a perceptron, and then farms all the interesting stuff out to ml5.js.

    • But! There's a Coding Train neural nets path that seems to dig into techniques: https://thecodingtrain.com/tracks/neural-networks.
    • Also: the old version of The Nature of Code seems to have more material about neural nets, published as it was before the "AI revolution" of the past few years. Code is here: https://github.com/nature-of-code/noc-examples-processing/tree/master/chp10_nn. The text is here: https://github.com/nature-of-code/The-Nature-of-Code-archive/blob/master/raw/chapters/10_nn.asc.

  • I realize that The Little Learner also comes with its own software library, malt. I was so surprised at having to download all the packages just to start with the Racket version of the thing.

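To make the XOR point concrete, here's a minimal sketch of a single perceptron with the classic learning rule, written functionally (weights in, new weights out). It's in Python rather than Ludus or Racket, and it isn't taken from any of the books above. AND and OR converge; no weights can ever get XOR to 4/4, since no single line separates its outputs.

```python
# A single perceptron: step activation plus the classic learning rule.
# Functional style: train() takes data and returns new weights, mutating nothing.

def predict(weights, bias, inputs):
    # Weighted sum pushed through a hard threshold.
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

def train(samples, epochs=100, rate=0.1):
    # Perceptron rule: nudge weights toward every misclassified example.
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias = bias + rate * error
    return weights, bias

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, samples in [("AND", AND), ("OR", OR), ("XOR", XOR)]:
    weights, bias = train(samples)
    hits = sum(predict(weights, bias, i) == t for i, t in samples)
    print(f"{name}: {hits}/4")  # AND and OR reach 4/4; XOR never can
```

The functional style here is the contrast for the note about objects below: train has to thread the weights and bias through explicitly.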
This leads to our usual question in Ludus: what does the language have to be able to do to make encountering this kind of stuff easy?

First of all, I have to actually work through this stuff to find out.

Second of all, I think perceptrons have to come after objects. It's possible to do them in a functional style (and perhaps The Little Learner will teach me this), but it strikes me that a perceptron really does encapsulate both state and functionality--which makes it a real goram object.

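As a hedged illustration of that last point (again in Python, as a stand-in for whatever Ludus ends up with): the same perceptron as an object, where the weights and bias are state living inside it and the learning rule is a method that mutates that state in place.

```python
import random

class Perceptron:
    """The same perceptron, object-style: state plus the functions that use it."""

    def __init__(self, n_inputs, rate=0.1):
        # State: weights and bias are stored on the object.
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = random.uniform(-1, 1)
        self.rate = rate

    def predict(self, inputs):
        total = self.bias + sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total > 0 else 0

    def learn(self, inputs, target):
        # Functionality: the learning rule updates the state in place.
        error = target - self.predict(inputs)
        self.weights = [w + self.rate * error * x
                        for w, x in zip(self.weights, inputs)]
        self.bias += self.rate * error

# Usage: train one object on AND; nothing gets passed around except the data.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
p = Perceptron(2)
for _ in range(500):
    for inputs, target in AND:
        p.learn(inputs, target)
print([p.predict(i) for i, _ in AND])  # expect [0, 0, 0, 1] once training settles
```

Whether Ludus wants this shape or the functional one above is exactly the open question.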