From: Joshua N Pritikin
Subject: [Heartlogic-dev] brainstorm review
Date: Sat, 13 Mar 2004 19:19:06 +0530
User-agent: Mutt/1.5.4i

Here is a conversation that took place privately off-list.  We later
decided it was interesting enough to re-post here.

---

From: "William L. Jarrold" <address@hidden>
Date: Tue, 2 Mar 2004 00:00:23 -0600 (CST)

On Mon, 1 Mar 2004, Josh White wrote:
> > josh white suggested making the ai model interactive.
> > EXCELLENT idea josh!
>
> I have little to contribute other than that, but I'm excited about the
> technology and would be happy to offer opinions.  Also, I love a good
> chat about practical applications, so if you're looking for
> brainstorming in that regard, just say the word.

yes, brainstorm away.

here are mine...

I have got to get to bed... But games are one.  The Sims might benefit
from being more intelligent.  I just bought it the other day to learn
more...

Advertisers might want to be able to reason about what makes a
particular potential customer happy... E.g., if we can infer that a
google user is a Yankees fan, we don't want to show him a photo of the
Yankees losing... But the opposite should be true if the user is a
Mets fan.

Parents of children with autism, Asperger's, other pervasive
developmental disabilities, non-verbal learning disability, or any
related disorder in which social skills training is recommended might
want to use tutoring software based on this to help with empathy and
perspective taking.

Parents of bullies might want their kids to learn empathy.  Tutoring
software here too.

People wanting cognitive therapy might want it...If our model is highly
generative, then, amongst its many different ways of appraising a given
situation, it should be able to generate some nice positive reframes
of the current situation.

Bill

---
Date: Fri, 5 Mar 2004 19:40:03 -0600 (CST)
From: "William L. Jarrold" <address@hidden>   

there are many cool things we can do there.  that is when it starts
to get interesting.  e.g. we can apply machine learning and/or
evolutionary computation to automatically create models that map
from Scenario Cues to Appraisals.

---
Date: Wed, 10 Mar 2004 20:37:34 +0530
From: Joshua N Pritikin <address@hidden>

Personally, I never liked neural-nets.  Why?  Because if a neural-net
can learn something then why not just extract the learned mathematical
formula into a more standard form?  Of course this doesn't work if you
keep asking the neural-net to learn different things, but why do you
need to be in a constant state of re-modelling to simulate emotion?

---
From: Josh White <address@hidden>
Date: Wed, 10 Mar 2004 07:20:13 -0800

Joshua N Pritikin <address@hidden> wrote:
> Personally, I never liked neural-nets.  Why?  Because if a
> neural-net can learn something then why not just extract the
> learned mathematical formula into a more standard form?  Of
> course this doesn't work if you keep asking the neural-net to
> learn different things, but why do you need to be in a
> constant state of re-modelling to simulate emotion? Maybe
> that's what Mr. Wilson means by "general purpose behavior"?
>

I see what you mean about neural nets. I've always seen them as valuable
for two reasons:

1) they seem to be a closer model to the way our brains work than
math/logic methods, thus intuitively seem like a better system for
emotion/human simulation

2) they offer a horsepower-centric (ie, just use bigger computers, not
smarter people)  way to discover previously unknown math/logic methods.
I'm always looking for ways to make bigger computers more useful to
people, and neural nets seem to offer good opportunities there.

---
Date: Thu, 11 Mar 2004 06:40:44 +0530
From: Joshua N Pritikin <address@hidden>

> I see what you mean about neural nets. I've always seen them as valuable
> for two reasons:
>
> 1) they seem to be a closer model to the way our brains work than
> math/logic methods, thus intuitively seem like a better system for
> emotion/human simulation

Yah, but it's so hopelessly low-level.  I mean, can you expect more
accurate simulations by modelling things at the protein level instead
of at the neuron level?

> 2) they offer a horsepower-centric (ie, just use bigger computers, not
> smarter people)  way to discover previously unknown math/logic methods.
> I'm always looking for ways to make bigger computers more useful to
> people, and neural nets seem to offer good opportunities there.

My opinion is that _people_ need to find a simple, elegant, and
intuitive way to model emotions.  Otherwise you're giving that job to
the neural-net.  I just can't believe that a neural-net is going to be
as insightful as the 10-20 Ph.D. research people working full-time on
the problem.

Think of it this way: have you heard of any neural-net which has
gotten a Ph.D.?

---
From: Josh White <address@hidden>
Date: Wed, 10 Mar 2004 17:55:41 -0800

> > 1) they seem to be a closer model to the way our brains work than
> > math/logic methods, thus intuitively seem like a better system for
> > emotion/human simulation
>
> Yah, but it's so hopelessly low-level.  I mean, can you
> expect more accurate simulations by modelling things at the
> protein level instead of at the neuron level?

I know they're hopelessly simple compared to neurons, but it seems to me
that if people spend energy evolving them, vs logic/math models, they
have the potential to be more similar to brains. Mind you, I think
neural nets have nowhere near the long-term potential for things like
database handling, as compared to logic/math models, but when we're
talking about modeling the human emotion system, it seems better (to a
semi-layman) to choose a model that works in a similar way to neurons.

I say this not to convince you to change research focus (though I am
interested in why it's wrong, if it is), but to show you my low-level,
even subliminal, reactions to the whole problem. If my semi-layman
reactions are more similar to your average audience member than yours
would be, then maybe that mindset is interesting.

> My opinion is that _people_ need to find a simple, elegant,
> and intuitive way to model emotions.  Otherwise you're giving
> that job to the neural-net.  I just can't believe that a
> neural-net is going to be as insightful as the 10-20 Ph.D.
> research people working full-time on the problem.

I agree that no machine in our lifetimes will ever compete with a human
PhD.

I'm thinking that a neural net solution could be used for a different
goal than a simple, elegant or intuitive model (or indeed any consistent
model at all). In other words maybe a neural net will simply work, in
the same way our brains work, even though we can't model the results
well.  Yes, this implies that maybe (and I don't necessarily believe
this at all) realistic human emotion simulation is outside the scope of
science.  I doubt it, but it's possible.

---
Date: Thu, 11 Mar 2004 08:46:49 +0530
From: Joshua N Pritikin <address@hidden>

> I say this not to convince you to change research focus (though I am
> interested in why it's wrong, if it is), but to show you my low-level,
> even subliminal, reactions to the whole problem. If my semi-layman
> reactions are more similar to your average audience member than yours
> would be, then maybe that mindset is interesting.

When you say that you are a "semi-layman," does that mean that you
have not built any neural-nets yourself?  What I would encourage is
that you learn exactly how neural-net software works.  Something
called "neural-nets" might seem to promise mythical powers of
computation to people who haven't slogged through an implementation in C.

Am I guessing wrong?

> I'm thinking that a neural net solution could be used for a different
> goal than a simple, elegant or intuitive model (or indeed any consistent
> model at all). In other words maybe a neural net will simply work, in
> the same way our brains work, even though we can't model the results
> well.  Yes, this implies that maybe (and I don't necessarily believe
> this at all) realistic human emotion simulation is outside the scope of
> science.  I doubt it, but it's possible.

Just to preempt further speculation, I hold approximately the same
view of "genetic algorithms" that I hold of neural-nets.  My
preference is to think about the problem myself instead of delegating
that task to a "smart" computer.

If you still believe that any of these "smart algorithms" are really
smart then I strongly encourage you to implement them in C.

---
Date: Wed, 10 Mar 2004 21:39:34 -0600 (CST)
From: "William L. Jarrold" <address@hidden>   

On Thu, 11 Mar 2004, Joshua N Pritikin wrote:
> On Wed, Mar 10, 2004 at 05:55:41PM -0800, Josh White wrote:
> > the same way our brains work, even though we can't model the results
> > well.  Yes, this implies that maybe (and I don't necessarily believe
> > this at all) realistic human emotion simulation is outside the scope of
> > science.  I doubt it, but it's possible.
>
> Just to preempt further speculation, I hold approximately the same
> view of "genetic algorithms" that I hold of neural-nets.  My
> preference is to think about the problem myself instead of delegating
> that task to a "smart" computer.

I disagree.  But we can work on that issue in time.

I believe that we are collecting data at the website to maybe help us
give feedback to a learning algorithm.  That learning algorithm might
be between your ears and mine, or it may be some cool learning
algorithm being thought up by Jordan Pollack or god knows who.

It is worth hedging our bets.  Either a set of humans will manually
encode the first AI, or a learning algorithm will do it, or some combo
of the two.  But an important aid to either one is having a set of
data that can evaluate a given model (whether that model is created by
a human mind or in silico).

The statistics-based experimental methodology used in my dissertation
can be considered a kind of learning algorithm.


