SCIENTIFIC REASONING IN AI AND PHILOSOPHY OF SCIENCE LO23823

From: dpdash@ximb.ac.in
Date: 01/25/00


Replying to LO23807 --

Dear co-researchers,

Arun-Kumar has given us five questions (see below) on comparing Artificial
Intelligence thinking with scientific thinking. Some of these questions
contain sub-questions. All of them appear quite 'deep' and difficult to
answer.

Although I do not claim to have answers to these questions, I still feel
drawn to them and to their possible answers. To me, AI is among the
boldest ventures of the human mind in the last millennium: the mind trying
to describe itself (or, if you like, one mind trying to describe
another!).

Arun-Kumar's questions were:

On 21 Jan 00, at 11:20, Arun-Kumar Tripathi wrote:

> 1) What are typical AI problems to which scientific reasoning can be
> applied? How can these problems be characterised? Can these
> characteristics be formalised?
>
> 2) What are typical problems in scientific methodology to which AI
> techniques for ampliative reasoning (abduction, induction, confirmation,
> etc) can be applied? How can these problems be characterised? Can these
> characteristics be formalised?
>
> 3) What logical frameworks are appropriate for reasoning about the
> differences and similarities among types of scientific reasoning? Is the
> question of distinguishing or identifying them merely dependent on the
> level of abstraction?
>
> 4) Is there a substantial difference between scientific reasoning as
> conceived in the philosophy of science and in artificial intelligence?
>
> 5) What are the computational challenges for implementing processes such
> as scientific discovery, theory development and truth approximation?

Please allow me to open the discussion by focusing on the first part of
the first question: 1) What are typical AI problems to which scientific
reasoning can be applied? Let me try to describe one such problem:

The Problem of Interaction Between Two (or More) Intelligent Entities.

I suppose there is no unique way to deal with this problem within AI
(i.e., there are many possible ways). This, however, seems to be a typical
problem that can benefit from an understanding of the scientific
enterprise in general. Science, in some sense, provides a model for
interaction among intelligent entities. For example, scientific culture
and its institutions provide a space and a set of methods for scientific
interactions. These methods do not require that each scientist have the
same intelligence, or even the same TYPE of intelligence. These methods do
not require that the participants (or interactants, or actors) have
complete knowledge of their environment, have a long-term commitment to
the scientific enterprise (or to any other enterprise, for that matter),
or have a shared mental model of 'something', etc. There are
some elements these methods classically emphasise:

i. In reporting an observation, it should be clear (to all) what the
report is. (The attribute or property being reported should be observable
to all.)

ii. Further, it should also be clear what it is a report of. (The 'object'
whose property is being reported should also be observable to all.)

In circumstances where these two principles cannot be fully adhered to,
the scientific enterprise in general appears to make an EFFORT to modify
(or alter) the circumstances such that the two classical principles BECOME
applicable again. Such EFFORT is quite prevalent in social science,
management science, systems science, etc., but also in the physical and
life sciences.
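
To make these two principles a little more concrete, here is a minimal
sketch in Python. All the names are hypothetical, and this is only one of
the many possible ways of modelling the situation, not a claim about how
AI systems actually do it: a shared record accepts an observation report
only when both the reported attribute (principle i) and the reported
object (principle ii) are observable to all participants.

  from dataclasses import dataclass

  @dataclass(frozen=True)
  class ObservationReport:
      # The 'object' the report is a report of (principle ii), the
      # attribute being reported (principle i), and the observation itself.
      object_id: str
      attribute: str
      value: str

  class SharedRecord:
      # Accepts a report only when both classical principles hold for all
      # participants in the interaction.
      def __init__(self, observable_objects, observable_attributes):
          self.observable_objects = set(observable_objects)
          self.observable_attributes = set(observable_attributes)
          self.accepted = []

      def submit(self, report):
          # Principle i: the reported attribute must be observable to all.
          if report.attribute not in self.observable_attributes:
              return False
          # Principle ii: the reported object must be observable to all.
          if report.object_id not in self.observable_objects:
              return False
          self.accepted.append(report)
          return True

  record = SharedRecord({"pendulum-1"}, {"period"})
  print(record.submit(ObservationReport("pendulum-1", "period", "2.0 s")))   # True
  print(record.submit(ObservationReport("my-dream", "vividness", "vivid")))  # False

The point of the sketch is only that the two principles can be checked
mechanically: the check requires nothing about the reporting entities'
intelligence, knowledge of the environment, commitments, or shared mental
models, which is exactly what makes this model of interaction attractive.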

Ending abruptly to let the reader speak.

DP
----------------
Prof. D. P. Dash
Xavier Institute of Management
Bhubaneswar 751013
India
New E-Mail: dpdash@ximb.ac.in
