Talk:ANN

I assume you mean real arbitrary neural nets, and not just "neural nets" (3-layered backpropagated toys). Would you even include backpropagation? I wouldn't think it would be necessary for version 1.

Sounds cool!

[[User:Homunq|Homunq]] 17:05, 29 March 2008 (EDT)
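
To make the question above concrete, here is a minimal sketch of what an "arbitrary" net of simple units could look like with no backpropagation at all. Python is used only because Sugar activities are written in Python; the <code>Unit</code> class, its methods, and the wiring below are hypothetical illustrations, not code from any proposed activity. Three hand-wired threshold units compute XOR, something a single unit cannot do, which is the "simple units generate complex patterns" point in miniature.

<pre>
class Unit:
    """A McCulloch-Pitts style threshold unit: it fires (outputs 1) when the
    weighted sum of its inputs reaches its threshold, and outputs 0 otherwise."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.inputs = []   # list of (source, weight); source is another Unit or an input name

    def connect(self, source, weight):
        self.inputs.append((source, weight))

    def output(self, values):
        total = 0
        for source, weight in self.inputs:
            signal = source.output(values) if isinstance(source, Unit) else values[source]
            total += weight * signal
        return 1 if total >= self.threshold else 0

# Hand-wire XOR out of three simple units: no hidden layers, no training.
or_unit = Unit(threshold=1)
or_unit.connect("a", 1)
or_unit.connect("b", 1)

and_unit = Unit(threshold=2)
and_unit.connect("a", 1)
and_unit.connect("b", 1)

xor_unit = Unit(threshold=1)
xor_unit.connect(or_unit, 1)
xor_unit.connect(and_unit, -2)   # an inhibitory connection vetoes the a-AND-b case

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_unit.output({"a": a, "b": b}))
</pre>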

Absolutely! Although there are endless applications for 'hidden layer + backpropagation' type neural networks, I think it would be more helpful to teach children about the basics of logic and how many simple units working together can generate complex patterns. I agree that backprop is not necessary for the first version. At some point, adding some different learning rules would be helpful, so that some of the advanced labs could include networks that adapt to changing conditions, or something along those lines.

It's good to know there is some interest!

[[User:Braingram|Braingram]] 16:11, 30 March 2008 (EDT)
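
A small, purely illustrative sketch of the "learning rules" idea mentioned above, using the classic perceptron rule as one possible example (the function names and toy data are made up, not taken from any activity): a single unit is first trained on AND, then the targets change to OR and the very same unit re-adapts, which is the "adapt to changing conditions" behaviour in its simplest form.

<pre>
def predict(weights, bias, inputs):
    """Threshold unit: fire (1) if the weighted input sum reaches zero."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= 0 else 0

def train(weights, bias, samples, rate=0.1, epochs=100):
    """Perceptron learning rule: nudge the weights toward each mistake."""
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# The unit first learns AND ...
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train([0.0, 0.0], 0.0, and_samples)
print("AND:", [predict(w, b, x) for x, _ in and_samples])   # [0, 0, 0, 1]

# ... then the "conditions change" and the same unit adapts to OR.
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(w, b, or_samples)
print("OR: ", [predict(w, b, x) for x, _ in or_samples])    # [0, 1, 1, 1]
</pre>

Other rules (Hebbian learning, for instance) could be swapped in the same way, which is roughly what a "different learning rules" lab would let children explore.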