The plurality of numerical methods in computer simulations and their philosophical analysis
Numerical methods are a sine qua non of computer simulations: without a numerical method, there can be no computer simulation. Numerical methods are used to solve the mathematical equations contained in simulation models, in particular when those equations cannot be solved analytically or would take too long to solve by other means. In other words, numerical methods are the necessary link between the theoretical model and the simulation.
Is this link transparent, or does it carry a representational function of its own that interferes with that of the theoretical model?
If numerical methods are transparent, how can we explain their plurality? Does each method correspond to a specific definition of the concept of computer simulation? Is it possible to provide a single definition of the concept of computer simulation?
Moreover, numerical methods must satisfy constraints relating to the computational architecture (parallel, sequential, digital, or analog) and to the particular characteristics of the machine (in terms of computational power, memory, or system resources). To what extent do these constraints threaten the accuracy of the representations of the systems under study that simulation models provide?
A further set of questions concerns the plurality of numerical methods, which philosophers frequently underestimate. Given a problem, why is one numerical method chosen rather than another? Does the choice depend on the nature of the problem itself?
Program
Invited speakers
Robert Batterman (University of Pittsburgh)
François Dubois (CNAM, Université Paris Sud)
Paul Humphreys (University of Virginia)
Mark Wilson (University of Pittsburgh)
Thursday, November 3, 2011
09:00-09:35 Johannes Lenhard, Universität Bielefeld
A Predictive Turn in Pre-Computer and Computer Numerics
09:35-10:10 Claus Beisbart, TU Dortmund
No risk no fun virtualized. How Monte Carlo simulations represent
10:10-10:45 Maarten Bullynck, Université Paris 8, Liesbeth De Mol, Universiteit Gent, Martin Carlé, University of Athens
ENIAC, matrix of numerical simulation(s!)
10:45-11:15 Coffee break
11:15-12:15 Paul Humphreys, University of Virginia
Applying Mathematics and Applied Mathematics
12:15-13:45 Lunch
13:45-14:45 Mark Wilson, University of Pittsburgh
Concepts and Computation: A Philosophical Survey
14:45-15:15 Coffee break
15:15-16:50 Vincent Ardourel, Université Paris 1
Is Discretization a Change in Mathematical Idealization?
16:50-17:25 Greg Lusk, University of Toronto
Faithfulness Restored: Data Analysis and Data Assimilation
Friday, November 4, 2011
09:30-10:30 François Dubois, Université Paris Sud
Lattice Boltzmann Equations and Finite-Difference Schemes
10:30-11:00 Coffee break
11:00-11:35 Sorin Bangu, University of Illinois
Analytic vs. Numerical: Scientific Modeling and Computational Methods
11:35-12:10 Thomas Boyer, Université Paris 1
What numerical methods are not. The case of multilayered simulations, with several computational models
12:10-13:45 Lunch
13:45-14:45 Robert Batterman, University of Pittsburgh
The Tyranny of Scales
14:45-15:15 Coffee break
15:15-15:50 Nicolas Fillion, Robert Corless, University of Western Ontario
Computation and Explanation
15:50-16:25 Robert Moir, Robert Corless, University of Western Ontario
Computation for Confirmation
Abstracts
Johannes Lenhard, Universität Bielefeld
A Predictive Turn in Pre-Computer and Computer Numerics
Recent numerical methodology has experienced a predictive turn. To give an account of this (post-mainframe) turn, it will be compared to another predictive turn, a pre-computer turn, that took place roughly one century ago. At that time, two competing or complementary conceptions of numerical methodology were established on the boundary of mathematics, science, and technology. These competitors have been called the constructive and the relational conception. Both have been combined on the basis of recent computing instrumentation, namely highly available and networked smaller computing machines.
Claus Beisbart, TU Dortmund, Institute for Philosophy and Political Science
No risk no fun virtualized. How Monte Carlo simulations represent
The aim of this paper is to show, against some authors, that some so-called Monte Carlo simulations are in fact computer simulations of real-world systems. The focus is on methods that generate sample paths of a probabilistic model of a target. Such methods differ in two crucial ways from deterministic simulations. They can nevertheless be used to represent a system, either because they estimate probabilities that constrain the statistical features of the target or because the collection of sample paths is in some respects similar to a collection of physical systems.
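To fix ideas, the following is a minimal sketch of the kind of method at issue: a Monte Carlo routine that generates sample paths of a simple probabilistic model (here a symmetric random walk) and uses their relative frequencies to estimate a probability that constrains the statistical features of the target. The model, parameters, and threshold are invented for illustration and are not taken from the talk.

```python
# Minimal illustrative sketch: Monte Carlo estimation via sample paths of a
# probabilistic model. The random-walk model and all parameters are hypothetical.
import random

def sample_path(n_steps, step_prob=0.5):
    """Generate one sample path of a symmetric random walk."""
    position, path = 0, [0]
    for _ in range(n_steps):
        position += 1 if random.random() < step_prob else -1
        path.append(position)
    return path

def estimate_probability(n_paths=10_000, n_steps=100, threshold=10):
    """Estimate the probability that the walk ends at least `threshold` away from 0."""
    hits = sum(1 for _ in range(n_paths)
               if abs(sample_path(n_steps)[-1]) >= threshold)
    return hits / n_paths

if __name__ == "__main__":
    # The relative frequency over many sample paths approximates the probability.
    print(estimate_probability())
```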
Maarten Bullynck, Université Paris 8
Liesbeth De Mol, Universiteit Gent
Martin Carlé, University of Athens
ENIAC, matrix of numerical simulation(s!)
Although it is generally known that the first numerical simulation ever to use the Monte Carlo method ran on the ENIAC around 1946, the diversity of the other numerical simulations run on the ENIAC from 1946 to 1950 is mostly underestimated and rarely taken into consideration. We want to contrast von Neumann's approach to numerical simulation with H.B. Curry's development of another simulation program, both for the ENIAC. Their differences appear at three levels: how each handles 1) the mathematics (the numerical methods); 2) the logical organisation of the program (its translation into a computer program); and 3) the physicality of the computer.
Paul Humphreys, University of Virginia
Applying Mathematics and Applied Mathematics
Some scientific imaging devices, such as those that use computed tomography and positron emission tomography, recreate structure from data using inverse inference techniques such as filtered back-projection reconstruction and iterated statistical image reconstruction. Related techniques occur in simulations that are used to design the instruments. Some of the numerical methods used, including optimization techniques, have no rigorous mathematical justification and their use is justified, in part, inductively. My talk will discuss in what sense, if any, these methods count as approximations and whether their use supports the view that parts of mathematics are empirical.
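The reconstruction methods mentioned here are too involved for a short example, but a toy version of the Kaczmarz iteration, the basis of the algebraic reconstruction technique (ART) historically used in computed tomography, gives a flavor of how an image can be recovered from projection-type data by iterative inverse inference. The small system and the numbers below are invented for illustration and are not drawn from the talk.

```python
# Toy sketch of the Kaczmarz / ART iteration used (in far more elaborate forms)
# for iterative image reconstruction. The 3-pixel "image" and the measurement
# matrix below are hypothetical.
import numpy as np

def kaczmarz(A, b, n_sweeps=50):
    """Repeatedly project the estimate onto each measurement hyperplane a_i . x = b_i."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

A = np.array([[1.0, 1.0, 0.0],      # each row: one simulated "projection"
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
true_image = np.array([2.0, 1.0, 3.0])
b = A @ true_image                   # simulated measurement data
print(kaczmarz(A, b))                # converges toward [2, 1, 3]
```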
Mark Wilson, University of Pittsburgh
Concepts and Computation: A Philosophical Survey
The talk will examine, from a historical vantage point, a number of ways in which computational routines have played an important, if often unacknowledged, role within major philosophical controversies with respect to the nature of “concepts”.
Vincent Ardourel, Université Paris 1
Is Discretization a Change in Mathematical Idealization?
In this paper, I shall claim that discretizing differential equations in order to solve them on computers does not amount to a change in the idealization of physical phenomena. I shall start by endorsing Pincock’s claim according to which the differential heat equation is a “mathematical idealization”: it is a deliberately false mathematical representation of a physical phenomenon, since the matter in which heat propagates is idealized as continuous. However, contrary to what he suggests, I shall show that the differential heat equation does not require more false assumptions about the physical phenomenon than a discretized heat equation does. Therefore, the latter equation cannot be viewed as less idealized than the differential equation.
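For readers unfamiliar with the example at stake, here is a standard rendering (mine, not the author's) of the one-dimensional heat equation alongside an explicit finite-difference discretization of it, where u_j^n approximates u(j Δx, n Δt):

```latex
% The continuous heat equation and a standard explicit finite-difference
% discretization (notation added for illustration, not taken from the abstract).
\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}
\qquad\longrightarrow\qquad
\frac{u_j^{n+1} - u_j^{n}}{\Delta t}
  = \alpha\,\frac{u_{j+1}^{n} - 2u_j^{n} + u_{j-1}^{n}}{\Delta x^{2}},
\qquad u_j^{n} \approx u(j\Delta x,\, n\Delta t).
```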
Greg Lusk, University of Toronto
Faithfulness Restored: Data Analysis and Data Assimilation
I will argue that there are numerical techniques that can help ensure that our simulation outputs appropriately capture the target system of interest. These techniques are unique because they both constrain, and are constrained by, the simulation of which they are a part. The example I will put forward to support this argument is the use of data assimilation and data analysis in weather and climate forecasting. Data assimilation is a method of analyzing and processing observational data so that they can be integrated into a simulation cycle, iteratively correcting the model output to produce a better forecast and establishing acceptable initial conditions for the next forecast run. Data assimilation is philosophically interesting because it provides an example of a numerical technique that does not detract from the theoretical model, yields better overall forecasts, and, unlike many simulations, allows nature to “have a say” in the model output.
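As a rough illustration of the forecast-analysis cycle described above, here is a minimal sketch of a scalar assimilation step that blends a model forecast with an observation according to their error variances. The toy model step, the variances, and the observations are invented for illustration; operational assimilation schemes are far more elaborate.

```python
# Minimal sketch of a forecast/analysis (data assimilation) cycle for a scalar
# state. The model step, error variances, and observations are hypothetical.
def analysis(forecast, observation, var_f, var_o):
    """Blend forecast and observation, weighting each by its error variance."""
    gain = var_f / (var_f + var_o)       # how strongly the data correct the forecast
    return forecast + gain * (observation - forecast)

def run_cycle(x0, observations, var_f=1.0, var_o=0.5):
    """Alternate model forecasts with analysis steps that pull the state toward data."""
    state, trajectory = x0, []
    for obs in observations:
        forecast = 0.9 * state + 1.0     # hypothetical, deliberately simple model step
        state = analysis(forecast, obs, var_f, var_o)
        trajectory.append(state)         # the analysis also initializes the next forecast
    return trajectory

print(run_cycle(x0=0.0, observations=[2.0, 2.5, 3.0, 2.8]))
```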
François Dubois, Conservatoire National des Arts et Métiers, Paris
An introduction to lattice Boltzmann schemes
We propose an elementary introduction to the lattice Boltzmann scheme. We recall the physical (Boltzmann equation) and algorithmic (cellular automata) origins of this numerical method. We present in detail the two characteristic steps of the algorithm: the nonlinear collision step, which is local in space, and the exact linear propagation phase, which is explicit in time. We then consider the so-called “Taylor expansion method” in order to derive an equivalent partial differential equation. In this way we formally obtain a purely numerical, so-called Chapman-Enskog expansion in which the small parameter is the discretization step of the scheme. At order zero, the lattice Boltzmann scheme satisfies a local thermodynamic equilibrium; at first order, it satisfies the Euler equations of gas dynamics, and at second order the Navier-Stokes equations. If time permits, we will detail the classical case of the nine-velocity model on a square lattice.
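As a concrete companion to the two steps just described, here is a deliberately minimal sketch of one time step of the nine-velocity (D2Q9) BGK lattice Boltzmann scheme with periodic boundaries. The grid size, relaxation time, and initial state are invented for illustration, and the code is not taken from the references below.

```python
# Minimal sketch of one D2Q9 lattice Boltzmann (BGK) time step: a local,
# nonlinear collision step followed by exact linear streaming. All parameters
# are hypothetical.
import numpy as np

c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])      # D2Q9 velocities
w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # D2Q9 weights

def equilibrium(rho, u):
    """Local equilibrium distributions f_i^eq(rho, u)."""
    cu = np.einsum('id,xyd->xyi', c, u)                 # c_i . u at every node
    usq = np.einsum('xyd,xyd->xy', u, u)
    return w * rho[..., None] * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def lbm_step(f, tau=0.8):
    """One collision + propagation step of the BGK scheme on a periodic lattice."""
    rho = f.sum(axis=-1)                                          # density
    u = np.einsum('xyi,id->xyd', f, c) / rho[..., None]           # velocity
    f = f - (f - equilibrium(rho, u)) / tau                       # local nonlinear collision
    for i, (cx, cy) in enumerate(c):                              # exact linear propagation
        f[..., i] = np.roll(np.roll(f[..., i], cx, axis=0), cy, axis=1)
    return f

f = equilibrium(np.ones((32, 32)), np.zeros((32, 32, 2)))  # uniform initial state
f = lbm_step(f)
```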
References
Dubois, F., “Une introduction au schéma de Boltzmann sur réseau”, ESAIM Proceedings, volume 18, pages 181-215, 2007.
Dubois, F., “Equivalent partial differential equations of a lattice Boltzmann scheme”, Computers and Mathematics with Applications, volume 55, pages 1441-1449, 2008.
Sorin Bangu, University of Illinois
Analytic vs. Numerical: Scientific Modeling and Computational Methods
The aim of this paper is to challenge the belief that analytic solutions to the equations modelling a physical process are epistemically superior to the numerical solutions typically needed in performing simulations. The fact that the former are precise while the latter are only approximate is the main reason for claiming this epistemic superiority. While attractive, this idea is, I show, problematic, the difficulty originating in the distinctions between the mathematical, computational, and numerical equivalence of algorithms.
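A minimal example of the contrast (mine, not the author's): the initial value problem dy/dt = -y, y(0) = 1 has the analytic solution y(t) = e^(-t), whereas a numerical method such as forward Euler only approximates it, with an error that depends on the step size.

```python
# Illustrative contrast between an analytic and a numerical solution of
# dy/dt = -y, y(0) = 1. The example is invented for illustration.
import math

def euler(f, y0, t_end, n_steps):
    """Forward Euler approximation of y' = f(t, y) on [0, t_end]."""
    h, t, y = t_end / n_steps, 0.0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

analytic = math.exp(-1.0)                       # exact value y(1) = e^{-1}
for n in (10, 100, 1000):
    numerical = euler(lambda t, y: -y, 1.0, 1.0, n)
    print(n, abs(numerical - analytic))         # the error shrinks with the step size
```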
Thomas Boyer, Université Paris 1
What numerical methods are not. The case of multilayered simulations, with several computational models
This paper is a contribution to the definition of numerical methods, essentially by contrasting them with what they are not. I discuss them within Humphreys' (2004) definition of a computer simulation and his concept of a computational model. For him, numerical methods are (at least implicitly) what helps compute the solutions to the equations of the computational model, without bringing a new representation of the phenomena. I challenge this view by providing counter-examples drawn from simulations in quantum mechanics. I argue that what Humphreys would label numerical methods are in fact other computational models. Finally, I give criteria for identifying numerical methods, contrasting them with computational models.
Robert Batterman, University of Pittsburgh
The Tyranny of Scales
This paper examines a fundamental problem in applied mathematics: how can one model the behavior of materials that display radically different dominant behaviors at different length scales? Although we have good models for material behaviors at small and large scales, it is often hard to relate these scale-based models to one another. Macroscale models represent the integrated effects of very subtle factors that are practically invisible at the smallest, atomic, scales. For this reason it has been notoriously difficult to model realistic materials with a simple bottom-up-from-the-atoms strategy; both analytic and computational methods invariably fail. The widespread failure of that strategy forced physicists interested in the overall macro-behavior of materials toward completely top-down modeling strategies familiar from traditional continuum mechanics. The question is whether we can exploit our rather rich knowledge of intermediate micro- (or meso-) scale behaviors in a manner that would allow us to bridge between these two dominant methodologies. Macroscopic-scale behaviors often fall into large common classes, such as the class of isotropic elastic solids, characterized by two phenomenological parameters, the so-called elastic coefficients. Can we employ knowledge of lower-scale behaviors to understand this universality, that is, to determine the coefficients and to group systems into classes exhibiting similar behavior?
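For concreteness (an addition, not part of the abstract): the constitutive law of an isotropic linear elastic solid is indeed fixed by just two phenomenological coefficients, for instance the Lamé parameters λ and μ.

```latex
% Isotropic linear elasticity: the stress-strain relation is characterized by
% two phenomenological coefficients (here the Lame parameters); added for illustration.
\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}
```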
Nicolas Fillion, University of Western Ontario
Robert Corless, University of Western Ontario
Computation and Explanation
We argue that residual-based a posteriori backward error analysis (based on the powerful error-theoretic concepts of backward error, conditioning, and residual) provides a general framework for establishing the correctness of mathematical inferences in numerical contexts. Moreover, thanks to its similarity with standard perturbation methods used in dynamical systems, it increases our epistemological understanding of scientific theorizing by making clear that a key aspect of mathematical modeling is to exactly solve nearby problems.
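To make the key notions concrete, here is a textbook rendering (mine, not the authors') of residual, backward error, and conditioning for the model case of a linear system Ax = b with computed solution x̂; it shows the precise sense in which a good numerical method exactly solves a nearby problem.

```latex
% Residual, backward error, and conditioning for Ax = b (textbook illustration,
% not taken from the abstract).
r = b - A\hat{x}, \qquad A\hat{x} = b - r
\;\;\Longrightarrow\;\; \hat{x}\ \text{exactly solves the nearby problem}\ Ax = b - r,
\qquad
\frac{\lVert x - \hat{x}\rVert}{\lVert x \rVert} \;\le\; \kappa(A)\,\frac{\lVert r \rVert}{\lVert b \rVert}.
```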
Robert Moir, Rotman Institute of Philosophy, University of Western Ontario
Robert Corless, University of Western Ontario
Computation for Confirmation
Confirmation of a (mathematical) theory generically requires computation to obtain solutions of mathematical models, thereby connecting the theory to either data or phenomena. Based on an important distinction between theory and model motivated by consideration of scientific practice, we will argue that what we confirm in practice is not a theory, but rather an entire “space” of models generated by theory. We show that the picture of confirmation, and the role played by numerical methods, is clarified and simplified with the use of an approach to numerical error analysis called backward error analysis, enabling numerics to be seen as essentially continuous with the mathematical modeling process.