Sunday, April 8, 2012

Mathematical Foundations of V&V Pre-publication NAS Report


From the Various Consequences Blog, I found that the National Academies Press is about to release a report on the Mathematical Foundations of Validation, Verification and Uncertainty Quantification. The pre-publication version is available on the National Academies Press site.

While reading the pre-publication version, I noted that the study did not seem to make reference to the (admittedly too recent!) connection between Compressive Sensing and Uncertainty Quantification pointed out by Alireza Doostan. If you recall, his most recent presentations cover this subject: Alireza applies compressive sensing to speed up finding the largest coefficients of a polynomial chaos expansion.




Also from the Various Consequences Blog, there was this V&V Workshop held at Notre Dame late last year. The abstracts are here.

Monday, February 6, 2012

Statistically Discernible?

Andrew Gelman started a good discussion on his blog in The inevitable problems with statistical significance and 95% intervals. The comments are, as usual, right on the money.

Wednesday, December 28, 2011

Why Economics Needs Data Mining

From Mathbabe's Economist versus quant (the video is featured on INET's website)

Sunday, November 27, 2011

How biased are maximum entropy models?

From Yaroslav's blog, this is of interest to the Experimental Probabilistic Hypersurface approach, which computes the probability distribution that maximizes entropy for models that are difficult to compute (read: too long to run on a computer). Here is the paper: How biased are maximum entropy models? by Jakob H. Macke, Iain Murray, and Peter E. Latham. The abstract reads:
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology, and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e. the true entropy of the data can be severely underestimated. Here we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We show that if the data is generated by a distribution that lies in the model class, the bias is equal to the number of parameters divided by twice the number of observations. However, in practice, the true distribution is usually outside the model class, and we show here that this misspecification can lead to much larger bias. We provide a perturbative approximation of the maximally expected bias when the true model is out of model class, and we illustrate our results using numerical simulations of an Ising model; i.e. the second-order maximum entropy distribution on binary data.
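The d/(2N) bias formula in the abstract is easy to check numerically in the well-specified case. Here is a small sketch of my own (not from the paper), using the simplest maximum entropy model on binary data, independent Bernoulli bits fit by matching the empirical means: the plug-in entropy of the fitted model underestimates the true entropy by roughly (number of parameters)/(2 x number of observations).

```python
import numpy as np

rng = np.random.default_rng(0)

d, N, trials = 5, 50, 20000          # d bits, N observations per data set
p_true = rng.uniform(0.2, 0.8, size=d)

def bernoulli_entropy(p):
    """Entropy (in nats) of independent Bernoulli bits with success probs p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

H_true = bernoulli_entropy(p_true)

# With first-order (mean) constraints only, the maximum entropy model is the
# independent Bernoulli product, so fitting amounts to matching empirical means.
H_hat = np.empty(trials)
for t in range(trials):
    data = rng.random((N, d)) < p_true       # N x d binary observations
    H_hat[t] = bernoulli_entropy(data.mean(axis=0))

print("observed entropy underestimate:", H_true - H_hat.mean())
print("d / (2N)                      :", d / (2 * N))
```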

Friday, November 25, 2011

Uncertainty Quantification at the Statistical and Applied Mathematical Sciences Institute

(there is an entry on Nuit Blanche pointing to the connection between compressive sensing and uncertainty quantification)



Dr. Adrian Sandu - tutorial lecture on Data Assimilation for Uncertainty Quantification
Habib Najm's tutorial lecture on Foundations for Uncertainty Quantification
Peter Kitanidis's tutorial lecture on Inverse Problems and Calibration Uncertainty Quantification
Susie Bayarri's tutorial lecture on Representation and Propagation of Uncertainty
Dan Cooley: Statistics of Extremes (Tutorial talk)
Uncertainty Quantification Summer School presentation by Dr. Adrian Sandu: Variational Data Assimilation
Uncertainty Quantification Summer School presentation by Dr. Dan Cooley: Statistical Analysis of Rare Events
Uncertainty Quantification Summer School presentation by Dr. Dongbin Xiu: Sensitivity Analysis and Polynomial Chaos for Differential Equations
Uncertainty Quantification Summer School presentation by Dr. Doug Nychka: Data Assimilation and Applications in Climate Modeling
Dr. Douglas Nychka, Director of the Institute of Mathematics Applied to Geosciences for the National Center for Atmospheric Research (NCAR), spoke to an audience on February 15 about climate change.




Friday, October 14, 2011

IFIP Working Conference on Uncertainty Quantification in Scientific Computing

I just came across the following presentations from the IFIP Working Conference on Uncertainty Quantification in Scientific Computing, held at the Millennium Harvest House in Boulder on August 1-4, 2011. Here are the talks and some abstracts:


Part I: Uncertainty Quantification Need: Risk, Policy, and Decision Making
Keynote Address
Pasky Pascual, Environmental Protection Agency, US

In 2007, the U.S. National Academy of Sciences issued a report, "Toxicity Testing in the 21st Century: a Vision and a Strategy," which proposed a vision and a roadmap for toxicology by advocating the use of systems-oriented, data-driven predictive models to explain how toxic chemicals impact human health and the environment. The report noted the limitations of whole animal tests that have become the standard basis for risk assessments at the U.S. Environmental Protection Agency. That same year, in response to the recall of the pain-killing drug Vioxx, Congress passed the Food and Drug Administration Act (FDAA). Vioxx had been approved for release by the U.S. government, and only belatedly was it discovered that the drug increased the risk of heart disease. This presentation suggests that these two events anticipate the need to build on developments in genomics, cellular biology, bioinformatics and other fields to craft predictive models that provide the rationale for regulating risks to public health and the environment. It suggests that both are a step in the right direction, but that long-standing issues of uncertainty in scientific inference must be more widely appreciated and understood, particularly within the regulatory system, if society hopes to capitalize on these scientific advances.
Mark Cunningham, Nuclear Regulatory Commission, US

In early 2011, a task force was established within the Nuclear Regulatory Commission (NRC) to develop proposals for a long-term vision on using risk information in its regulatory processes. This task force, established by NRC's Chairman Jaczko, is being led by Commissioner Apostolakis, and has a charter to "develop a strategic vision and options for adopting a more comprehensive and holistic risk-informed, performance-based regulatory approach for reactors, materials, waste, fuel cycle, and transportation that would continue to ensure the safe and secure use of nuclear material." This presentation will discuss some of the issues being considered by the task force in the context of how to manage the uncertainties associated with unlikely but potentially high consequence accident scenarios.
Alberto Pasanisi, Electricité de France, France

Simulation is nowadays a major tool in industrial R&D and engineering studies. In industrial practice, in both design and operating stages, the behavior of a complex system is described and forecasted by a computer model, which is, most of the time, deterministic. Yet engineers making quantitative predictions with deterministic models actually deal with several sources of uncertainty affecting the inputs (and possibly the model itself), which are transferred to the outputs, i.e. the outcomes of the study. Uncertainty quantification in simulation has gained more and more importance in recent years and has now become common practice in several industrial contexts. In this talk we will give an industrial viewpoint and feedback on this question. After a reminder of the main stakes related to uncertainty quantification and probabilistic computing, we will focus in particular on the specific methodology and software tools that have been developed for dealing with this problem. Several examples, involving different physical frameworks, different initial questions and different mathematical tools, will complete this talk.
Living with Uncertainty
Patrick Gaffney, Bergen Software Services International, Norway

This talk describes 12 years of experience in developing simulation software for automotive companies. By building software from scratch, using boundary integral methods and other techniques, it has been possible to tailor the software to address specific issues that arise in painting processes applied to vehicles and to provide engineers with results for real-time optimization and manufacturing analysis. The talk will focus on one particular simulator for predicting electrocoat deposition on a vehicle and will address the topics of verification, validation, and uncertainty quantification as they relate to the development and use of the simulator in operational situations. The general theme throughout the talk is the author's belief in an almost total disconnection between engineers and the requirements of computational scientists. This belief is quite scary, and was certainly unexpected when starting the work 12 years ago. However, through several examples, the talk demonstrates the problems in attempting to extract from engineers the high-quality input required to produce accurate simulation results. The title provides the focus and the talk describes how living under the shadow of uncertainty has made us more innovative and more resourceful in solving problems that we never really expected to encounter when we started on this journey in 1999.
Jon Helton, Sandia National Laboratories, US

The importance of an appropriate treatment of uncertainty in an analysis of a complex system is now almost universally recognized. As a consequence, requirements for complex systems (e.g., nuclear power plants, radioactive waste disposal facilities, nuclear weapons) now typically call for some form of uncertainty analysis. However, these requirements are usually expressed at a high level and lack the detail needed to unambiguously define the intent, structure and outcomes of an analysis that provides a meaningful representation of the effects and implications of uncertainty. Consequently, it is necessary for the individuals performing an analysis to show compliance with a set of requirements to define a conceptual structure for the analysis that (i) is consistent with the intent of the requirements and (ii) also provides the basis for a meaningful uncertainty and sensitivity analysis. In many, if not most, analysis situations, a core consideration is maintaining an appropriate distinction between aleatory uncertainty (i.e., inherent randomness in possible future behaviors of the system under study) and epistemic uncertainty (i.e., lack of knowledge with respect to the appropriate values to use for quantities that have fixed but poorly known values in the context of the particular study being performed). Conceptually, this leads to an analysis involving three basic entities: a probability space (A, 𝒜, pA) characterizing aleatory uncertainty, a probability space (E, ℰ, pE) characterizing epistemic uncertainty, and a model that predicts system behavior (i.e., a function f(t|a,e) that defines system behavior at time t conditional on elements a and e of the sample spaces A and E for aleatory and epistemic uncertainty). In turn, this conceptual structure leads to an analysis in which (i) uncertainty analysis results are defined by integrals involving the function f(t|a,e) and the two indicated probability spaces and (ii) sensitivity analysis results are defined by the relationships between epistemically uncertain analysis inputs (i.e., elements ej of e) and analysis results defined by the function f(t|a,e) and also by various integrals of this function. Computationally, this leads to an analysis in which (i) high-dimensional integrals must be evaluated to obtain uncertainty analysis results and (ii) mappings between high-dimensional spaces must be generated and explored to obtain sensitivity analysis results. The preceding ideas and concepts are illustrated with an analysis carried out in support of a license application for the proposed repository for high-level radioactive waste at Yucca Mountain, Nevada.
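To make the aleatory/epistemic separation concrete, here is a small two-loop Monte Carlo sketch of my own (the toy model, distributions and sample sizes are made up, not from the talk): epistemic parameters are sampled in an outer loop, aleatory variables in an inner loop, and each outer sample yields its own conditional failure probability.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(a, e):
    """Toy response: margin between an epistemically uncertain capacity e and an aleatory load a."""
    return e - a

n_epistemic, n_aleatory = 20, 2000

# Outer loop: epistemic uncertainty (a fixed but poorly known capacity parameter).
capacities = rng.uniform(2.0, 4.0, size=n_epistemic)

# Inner loop: aleatory uncertainty (random load), giving one conditional result per outer sample.
failure_probs = []
for e in capacities:
    loads = rng.lognormal(mean=0.5, sigma=0.5, size=n_aleatory)
    margins = f(loads, e)
    failure_probs.append(np.mean(margins < 0.0))   # P(failure | e)

# The spread over the outer loop expresses epistemic uncertainty about the aleatory failure probability.
print("failure probability ranges from", min(failure_probs), "to", max(failure_probs))
```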
Interpreting Regional Climate Predictions
Doug Nychka, National Center for Atmospheric Research, US

As attention shifts from broad global summaries of climate change to more specific regional impacts there is a need for statistics to quantify the uncertainty in regional projections. This talk will provide an overview on interpreting regional climate experiments (physically based simulations based on coupled global and regional climate models) using statistical methods to manage the discrepancies among models, their internal variability, regridding errors, model biases and other factors. The extensive simulations being produced in the North American Regional Climate Change and Assessment Program (NARCCAP) provide a context for our statistical approaches. An emerging principle is adapting analysis of variance decompositions to test for equality of mean fields, to quantify the variability due to different components in a numerical experiment and to identify the departures from observed climate fields.
William Oberkampf, Sandia National Laboratories, US (retired)

Within the technical community, it is instinctive to conduct uncertainty quantification analyses and risk assessments for use in risk-informed decision making. The value and reliability of some formal assessments, however, have been sharply criticized not only by well-known scientists, but also by the public after high-visibility failures such as the loss of two Space Shuttles, major damage at the Three-Mile Island nuclear power plant, and the disaster at the Fukushima plant. The realities of these failures, and many others, belie the predicted probabilities of failure for these systems and the credibility of risk assessments in general. The uncertainty quantification and risk assessment communities can attempt to defend and make excuses for notoriously poor (or misused) analyses, or we can learn how to improve technical aspects of analyses and to develop procedures to help guard against fraudulent analyses. This talk will take the latter route by first examining divergent goals of risk analyses; neglected sources of uncertainty in modeling the hazards or initiating events, external influences on the system of interest, and the system itself; and the importance of mathematical representations of uncertainties and their dependencies. We will also argue that risk analyses are not simply mathematical activities, but they are also human endeavors that are susceptible to a wide range of human weaknesses. As a result, we discuss how analyses can be distorted and/or biased by analysts and sponsors of the analysis, and how results of analyses can be miscommunicated and misinterpreted, either unintentionally or deliberately.
Panel Discussion: UQ and Decision Making
Mac Hyman, Tulane University, US (Moderator)
Sandy Landsberg, Department of Energy, US [Opening Remarks]
Larry Winter, University of Arizona, US [Opening Remarks]
Charles Romine, NIST, US [Opening Remarks]

________________________________________________________________________________________________
Part II: Uncertainty Quantification Theory
Keynote Address
Michael Goldstein, Durham University, UK

Most large and complex physical systems are studied by mathematical models, implemented as high dimensional computer simulators. While all such cases differ in physical description, each analysis of a physical system based on a computer simulator involves the same underlying sources of uncertainty. These are: condition uncertainty (unknown initial conditions, boundary conditions and forcing functions), parametric uncertainty (as the appropriate choices for the model parameters are not known), functional uncertainty (as models are typically expensive to evaluate for any choice of parameters), structural uncertainty (as the model is different from the physical system), measurement uncertainty (in the data used to calibrate the model), stochastic uncertainty (arising from intrinsic randomness in the system equations), solution uncertainty (as solutions to the system equations can only be assessed approximately) and multi-model uncertainty (as there often is a family of models, at different levels of resolution, possibly with different representations of the underlying physics). There is a growing field of study which aims to quantify and synthesize all of the uncertainties involved in relating models to physical systems, within the framework of Bayesian statistics, and to use the resultant uncertainty specification to address problems of forecasting and decision making based on the application of these methods. Examples of areas in which such methodology is being applied include asset management for oil reservoirs, galaxy modeling, and rapid climate change. In this talk, we shall give an overview of the current status and future challenges in this emerging methodology, illustrating with examples of current areas of application.
Les Hatton, Kingston University, UK

For the last couple of centuries, the scientific method whereby we have followed Karl Popper's model of endlessly seeking to refute new and existing discoveries, forcing them to submit to repeatability and detailed peer review of both the theory and the experimental methods employed to flush out insecure conclusions, has served us extremely well. Much progress has been made. For the last 40 years or so however, there has been an increasing reliance on computation in the pursuit of scientific discovery. Computation is an entirely different animal. Its repeatability has proved unreliable, we have been unable to eliminate defect or even to quantify its effects, and there has been a rash of unconstrained creativity making it very difficult to make systematic progress to align it with the philosophy of the scientific method. At the same time, computation has become the dominant partner in many scientific areas. This paper will address a number of issues. Through a series of very large experiments involving millions of lines of code in several languages along with an underpinning theory, it will put forward the viewpoint that defect is both inevitable and essentially a statistical phenomenon. In other words looking for purely technical computational solutions is unlikely to help much - there very likely is no silver bullet. Instead we must urgently promote the viewpoint that for any results which depend on computation, the computational method employed must be subject to the same scrutiny as has served us well in the years preceding computation. Baldly, that means that if the program source, the method of making and running the system, the test results and the data are not openly available, then it is not science. Even then we face an enormous challenge when digitally lubricated media can distort evidence to undermine the strongest of scientific cases.
Michael Eldred, Sandia National Laboratories, US

Uncertainty quantification (UQ) is a key enabling technology for assessing the predictive accuracy of computational models and for enabling risk-informed decision making. This presentation will provide an overview of algorithms for UQ, including sampling methods such as Latin Hypercube sampling, local and global reliability methods such as AMV2+ and EGRA, stochastic expansion methods including polynomial chaos and stochastic collocation, and epistemic methods such as interval-valued probability, second-order probability, and evidence theory. Strengths and weaknesses of these different algorithms will be summarized and example applications will be described. Time permitting, I will also provide a short overview of DAKOTA, an open source software toolkit that provides a delivery vehicle for much of the UQ research at the DOE defense laboratories.
A Compressive Sampling Approach to Uncertainty Propagation
Alireza Doostan, University of Colorado

Uncertainty quantification (UQ) is an inevitable part of any predictive modeling practice. Intrinsic variabilities and lack of knowledge about system parameters or governing physical models often considerably affect quantities of interest and decision-making processes. Efficient representation and propagation of such uncertainties through complex PDE systems are subjects of growing interest, especially for situations where a large number of uncertain sources are present. One major difficulty in UQ of such systems is the development of non-intrusive approaches in which deterministic codes are used in a black box fashion, and at the same time, solution structures are exploited to reduce the number of deterministic runs. Here we extend ideas from compressive sampling techniques to approximate solutions of PDEs with stochastic inputs using direct, i.e., non-adapted, sampling of solutions. This sampling can be done by using any legacy code for the deterministic problem as a black box. The method converges in probability (with probabilistic error bounds) as a consequence of sparsity of solutions and a concentration of measure phenomenon on the empirical correlation between samples. We show that the method is well suited for PDEs with high-dimensional stochastic inputs. This is joint work with Prof. Houman Owhadi from California Institute of Technology.
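Here is a rough one-dimensional sketch of the idea, under simplifying assumptions of my own: a quantity of interest with a sparse Legendre (polynomial chaos) expansion is recovered from fewer random samples than unknown coefficients, with scikit-learn's Lasso standing in for the l1-minimization / basis pursuit step used in the actual work.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)

P, n_samples = 60, 30          # 60 Legendre modes, only 30 random "solver runs"

# Ground truth: a sparse set of polynomial chaos coefficients.
c_true = np.zeros(P)
c_true[[0, 2, 7, 15]] = [1.0, -0.5, 0.8, 0.3]

def quantity_of_interest(xi):
    """Stand-in for the expensive deterministic solver evaluated at input xi."""
    return legendre.legval(xi, c_true)

# Direct, non-adapted sampling of the stochastic input xi ~ U(-1, 1).
xi = rng.uniform(-1.0, 1.0, size=n_samples)
u = quantity_of_interest(xi)

# Measurement matrix: each column is one Legendre polynomial at the sample points.
Phi = np.column_stack([legendre.legval(xi, np.eye(P)[k]) for k in range(P)])

# l1-regularized least squares: sparsity lets us recover the dominant coefficients.
lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000)
c_hat = lasso.fit(Phi, u).coef_

print("true support      :", np.flatnonzero(c_true))
print("largest recovered :", np.sort(np.argsort(np.abs(c_hat))[-4:]))
```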
Keynote Address
Scott Ferson, Applied Biomathematics, US

Interval analysis is often offered as the method for verified computation, but the pessimism in the old saw that "interval analysis is the mathematics of the future, and always will be" is perhaps justified by the impracticality of interval bounding as an approach to projecting uncertainty in real-world problems. Intervals cannot account for dependence among variables, so propagations commonly explode to triviality. Likewise, the dream of a workable 'probabilistic arithmetic', which has been imagined by many people, seems similarly unachievable. Even in sophisticated applications such as nuclear power plant risk analyses, whenever probability theory has been used to make calculations, analysts have routinely assumed (i) probabilities and probability distributions can be precisely specified, (ii) most or all variables are independent of one another, and (iii) model structure is known without error. For the most part, these assumptions have been made for the sake of mathematical convenience, rather than with any empirical justification. And, until recently, these or essentially similar assumptions were pretty much necessary in order to get any answer at all. New methods now allow us to compute bounds on estimates of probabilities and probability distributions that are guaranteed to be correct even when one or more of the assumptions is relaxed or removed. In many cases, the results obtained are the best possible bounds, which means that tightening them would require additional empirical information. This talk will present an overview of probability bounds analysis, as a computationally practical implementation of imprecise probabilities that combines ideas from both interval analysis and probability theory to sidestep the limitations of each.
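To see why naive interval propagation "explodes to triviality", here is a tiny toy interval class of my own (not a real probability bounds library): the arithmetic cannot know that the two occurrences of x below are the same quantity, so x - x comes out far wider than the exact answer of zero, and every reuse of x widens the result further.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst case assumes the operands vary independently -- the source
        # of the "dependency problem" in naive interval propagation.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

x = Interval(1.0, 2.0)
print(x - x)        # Interval(lo=-1.0, hi=1.0), not the exact 0
print(x * x - x)    # Interval(lo=-1.0, hi=3.0), exact range is [0, 2]
```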
Hermann Matthies, Technische Universität Braunschweig, Germany

Parametric versions of state equations of some complex system - the uncertainty quantification problem with the parameter as a random quantity is a special case of this general class - lead, via association with a linear operator, to analogues of covariance, its spectral decomposition, and the associated Karhunen-Loève expansion. This results in a generalized tensor representation. The parameter in question may be a number, a tuple of numbers - a finite dimensional vector or function, a stochastic process, or a random tensor field. Examples of stochastic problems, dynamic problems, and similar will be given to explain the concept. If possible, the tensor factorization may be cascaded, leading to tensors of higher degree. In numerical approximations this cascading tensor decomposition may be repeated on the discrete level, leading to very sparse representations of the high dimensional quantities involved in such parametric problems. This is achieved by choosing low-rank approximations, in effect an information compression. These representations also allow for very efficient computation. Updating of uncertainty for new information is an important part of uncertainty quantification. Formulated in terms of random variables instead of measures, the Bayesian update is a projection and allows the use of the tensor factorizations also in this case. This will be demonstrated on some examples.
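A minimal sketch of the Karhunen-Loève construction mentioned above, under assumptions of my own choosing: a zero-mean Gaussian random field on [0,1] with an exponential covariance is discretized, the covariance matrix is eigendecomposed, and realizations are synthesized from a truncated (low-rank) expansion.

```python
import numpy as np

rng = np.random.default_rng(3)

n, corr_len, n_terms = 200, 0.2, 10
x = np.linspace(0.0, 1.0, n)

# Exponential covariance kernel C(s, t) = exp(-|s - t| / corr_len).
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# Spectral decomposition; keep the n_terms largest eigenpairs (low-rank compression).
eigvals, eigvecs = np.linalg.eigh(C)
idx = np.argsort(eigvals)[::-1][:n_terms]
lam, phi = eigvals[idx], eigvecs[:, idx]

print("fraction of variance captured:", lam.sum() / eigvals.sum())

# Truncated Karhunen-Loeve expansion: field = sum_k sqrt(lam_k) * xi_k * phi_k(x).
xi = rng.standard_normal(n_terms)
realization = phi @ (np.sqrt(lam) * xi)
```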
________________________________________________________________________________________________
Part III: Uncertainty Quantification Tools
Keynote Address
William Kahan, University of California at Berkeley, US

If suspicions about the accuracy of a computed result arise, how long does it take to either allay or justify them? Often diagnosis has taken longer than the computing platform's service life. Software tools to speed up diagnosis by at least an order of magnitude could be provided but almost no scientists and engineers know to ask for them, though almost all these tools have existed, albeit not all together in the same place at the same time. These tools would cope with vulnerabilities peculiar to Floating-Point, namely roundoff and arithmetic exceptions. But who would pay to develop the suite of these tools? Nobody, unless he suspects that the incidence of misleadingly anomalous floating-point results rather exceeds what is generally believed. And there is ample evidence to suspect that.
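As one concrete example of the kind of floating-point anomaly such diagnostic tools would target (my own illustration, not from the talk), naive summation of many small terms accumulates roundoff that compensated (Kahan) summation removes:

```python
def naive_sum(values):
    s = 0.0
    for v in values:
        s += v
    return s

def kahan_sum(values):
    """Compensated summation: carry the low-order bits lost at each step."""
    s, c = 0.0, 0.0
    for v in values:
        y = v - c
        t = s + y
        c = (t - s) - y        # roundoff lost when adding y to s
        s = t
    return s

values = [0.1] * 1_000_000
print("naive :", naive_sum(values))   # drifts noticeably away from 100000
print("kahan :", kahan_sum(values))   # matches the exact sum of the stored doubles
```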
Accurate Prediction of Complex Computer Codes via Adaptive Designs
William Welch, University of British Columbia, Canada

There are many useful classes of design for an initial computer experiment: Latin hypercubes, orthogonal array Latin hypercubes, maximin-distance designs, etc. Often, the initial experiment has about n = 10d runs, where d is the dimensionality of the input space (Loeppky, Sacks, and Welch, "Choosing the Sample Size of a Computer Experiment," Technometrics 2009). Once the computer model has been run according to the design, a first step is usually to build a computationally inexpensive statistical surrogate for the computer model, often via a Gaussian Process / Random Function statistical model. But what if the analysis of the data from this initial design provides poor prediction accuracy? Poor accuracy implies the underlying input-output function is complex in some sense. If the complexity is restricted to a few of the inputs or to local subregions of the parameter space, there may be opportunity to use the initial analysis to guide further experimentation. Subsequent runs of the code should take account of what has been learned. Similarly, analysis should be adaptive. This talk will demonstrate strategies for experimenting sequentially. Difficult functions, including real computer codes, will be used to illustrate. The advantages will be assessed in terms of empirical prediction accuracy and theoretical measures.
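Here is a hedged sketch of that initial-design-plus-surrogate workflow (my own minimal version, using scipy's Latin hypercube sampler and scikit-learn's Gaussian process regressor, not any code from the talk): n = 10d runs of a toy "computer model", a GP fit, and a hold-out check whose poor accuracy would trigger adaptive follow-up runs.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def computer_model(X):
    """Stand-in for an expensive simulator with d = 2 inputs."""
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

d = 2
n = 10 * d                                    # the n = 10d rule of thumb

# Initial Latin hypercube design on [0, 1]^d.
X_train = qmc.LatinHypercube(d=d, seed=0).random(n)
y_train = computer_model(X_train)

# Gaussian process surrogate (emulator) of the simulator.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), normalize_y=True)
gp.fit(X_train, y_train)

# Check prediction accuracy on held-out points.
X_test = qmc.LatinHypercube(d=d, seed=1).random(200)
rmse = np.sqrt(np.mean((gp.predict(X_test) - computer_model(X_test)) ** 2))
print("hold-out RMSE:", rmse)
```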
Peter Challenor, National Oceanography Centre, UK

The Managing Uncertainty in Complex Models project has been developing methods for estimating uncertainty in complex models using emulators. Emulators are statistical descriptions of our beliefs about the models (or simulators). They can also be thought of as interpolators of simulator outputs between previous runs. Because they are quick to run, emulators can be used to carry out calculations that would otherwise require large numbers of simulator runs, for example Monte Carlo uncertainty calculations. Both Gaussian and Bayes Linear emulators will be explained and examples given. One of the outputs of the MUCM project is the MUCM toolkit, an on-line "recipe book" for emulator based methods. Using the toolkit as our basis we will illustrate the breadth of applications that can be addressed by emulator methodology and detail some of the methodology. We will cover sensitivity and uncertainty analysis and describe in less detail other aspects such as how emulators can also be used to calibrate complex computer simulators and how they can be modified for use with stochastic simulators.
Brian Smith, Numerica 21 Inc., US

The test harness TH is a tool developed by Numerica 21 to facilitate the testing and evaluation of scientific software during the development and maintenance phases of such software. This paper describes how the tool can be used to measure uncertainty in scientific computations. It confirms that the actual behavior of the code when subjected to changes, typically small, in the code input data reflects formal analysis of the problem's sensitivity to its input. Although motivated by studying small changes in the input data, the test harness can measure the impact of any changes, including those that go beyond the formal analysis.
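The abstract's core idea, perturbing the inputs slightly and checking that the observed output variation matches what formal sensitivity analysis predicts, can be sketched as follows; the target function and tolerances here are made-up placeholders of mine, not part of the TH tool.

```python
import numpy as np

rng = np.random.default_rng(4)

def code_under_test(x):
    """Placeholder for the scientific code being exercised."""
    return np.exp(x[0]) * np.sin(x[1])

def sensitivities(x):
    """Formal (analytic) sensitivities of the same quantity."""
    return np.array([np.exp(x[0]) * np.sin(x[1]), np.exp(x[0]) * np.cos(x[1])])

x0 = np.array([0.3, 1.1])
y0 = code_under_test(x0)

# Subject the code to many small random input perturbations and compare the
# observed output changes with the first-order prediction from the sensitivities.
delta, worst = 1e-6, 0.0
for _ in range(1000):
    dx = delta * rng.standard_normal(2)
    observed = code_under_test(x0 + dx) - y0
    predicted = sensitivities(x0) @ dx
    worst = max(worst, abs(observed - predicted))

print("largest discrepancy vs. first-order sensitivity:", worst)
```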
________________________________________________________________________________________________
Part IV: Uncertainty Quantification Practice
Keynote Address
Maurice Cox, National Physical Laboratory, UK

We examine aspects of quantifying the numerical accuracy in results from a measurement uncertainty computation in terms of the inputs to that computation. The primary output from such a computation is often an approximation to the PDF (probability density function) for the measurand (the quantity intended to be measured), which may be a scalar or vector quantity. From this PDF all results of interest can be derived. The following aspects are considered:
1. The numerical quality of the PDF obtained by using Monte Carlo or Monte Carlo Markov Chain methods in terms of (a) the random number generators used, (b) the (stochastic) convergence rate, and its possible acceleration, and (c) adaptive schemes to achieve a (nominal) prescribed accuracy.
2. The production of a smooth and possibly compact representation of the approximate PDF so obtained, for purposes such as when the PDF is used as input to a further uncertainty evaluation, or when visualization is required.
3. The sensitivities of the numerical results with respect to the inputs to the computation.
We speculate as to future requirements in the area and how they might be addressed.
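The adaptive Monte Carlo scheme of item 1(c) can be sketched in a few lines. Everything below is my own toy example (the measurement model, input PDFs and tolerance are made up): batches of trials are added until the endpoints of the 95 % coverage interval for the measurand stop moving by more than the requested numerical tolerance.

```python
import numpy as np

rng = np.random.default_rng(5)

def measurement_model(a, b, c):
    """Toy measurand Y = a * b / c (placeholder for the real measurement model)."""
    return a * b / c

def draw(n):
    """Draw n Monte Carlo trials from the (made-up) input PDFs."""
    a = rng.normal(10.0, 0.05, n)          # Gaussian input
    b = rng.uniform(4.95, 5.05, n)         # rectangular input
    c = rng.normal(2.0, 0.01, n)           # Gaussian input
    return measurement_model(a, b, c)

# Adaptive scheme: keep adding batches until the 95 % interval endpoints are stable.
tol, batch = 0.001, 50_000
samples = draw(batch)
while True:
    lo, hi = np.percentile(samples, [2.5, 97.5])
    samples = np.concatenate([samples, draw(batch)])
    lo2, hi2 = np.percentile(samples, [2.5, 97.5])
    if abs(lo2 - lo) < tol and abs(hi2 - hi) < tol:
        break

print("estimate             :", samples.mean())
print("standard uncertainty :", samples.std(ddof=1))
print("95 percent interval  :", (lo2, hi2))
```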
Antonio Possolo, National Institute of Standards and Technology, US

Model-based interpolation, approximation, and prediction are contingent on the choice of model: since multiple alternative models typically can reasonably be entertained for each of these tasks, and the results are correspondingly varied, this often is a major source of uncertainty. Several statistical methods are illustrated that can be used to assess this uncertainty component: when interpolating concentrations of greenhouse gases over Indianapolis, predicting the viral load in a patient infected with influenza A, and approximating the solution of the kinetic equations that model the progression of the infection.
James Glimm, State University of New York at Stony Brook, US

Uncertainty Quantification (UQ) for fluid mixing depends on the length scales of observation: macro, meso and micro, each with its own UQ requirements. New results are presented for each. For the micro observables, recent theories argue that convergence of numerical simulations in the Large Eddy Simulation (LES) regime should be governed by probability distribution functions (pdfs, or in the present context, Young measures) which satisfy the Euler equation. From a single deterministic simulation in the LES, or inertial regime, we extract a pdf by binning results from a space-time neighborhood of the convergence point. The binned state values constitute a discrete set of solution values which define an approximate pdf. Such a step coarsens the resolution, but not more than standard LES simulation methods, which typically employ an extended spatial filter in the definition of the filtered equations and associated subgrid scale (SGS) terms. The convergence of the resulting pdfs is assessed by standard function space metrics applied to the associated probability distribution function, i.e. the indefinite integral of the pdf. Such a metric is needed to reduce noise inherent in the pdf itself. V&V/UQ results for mixing and reacting flows are presented to support this point of view.
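A small sketch of that measure-valued convergence check, with everything about the flow made up by me: solution values collected from a neighborhood of a point are binned into an approximate pdf, and two different "resolutions" are compared through the sup-norm distance between the corresponding cumulative distribution functions, which damps the noise present in the pdfs themselves.

```python
import numpy as np

rng = np.random.default_rng(6)

def binned_cdf(values, edges):
    """Empirical pdf obtained by binning, returned as its cumulative distribution."""
    pdf, _ = np.histogram(values, bins=edges, density=True)
    widths = np.diff(edges)
    return np.concatenate([[0.0], np.cumsum(pdf * widths)])

# Stand-ins for solution values collected from space-time neighborhoods of the same
# point at two grid resolutions (here just noisy samples of one distribution).
coarse = rng.normal(1.0, 0.3, size=500)
fine = rng.normal(1.0, 0.3, size=4000)

edges = np.linspace(0.0, 2.0, 41)
distance = np.max(np.abs(binned_cdf(coarse, edges) - binned_cdf(fine, edges)))
print("sup-norm distance between binned CDFs:", distance)
```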
Visualization of Error and Uncertainty
Chris Johnson, University of Utah, US

As the former English statesman and Nobel Laureate (Literature) Winston Churchill said, "True genius resides in the capacity for evaluation of uncertain, hazardous, and conflicting information." Churchill is echoed by the Nobel Prize-winning physicist Richard Feynman, "What is not surrounded by uncertainty cannot be the truth." Yet, with few exceptions, visualization research has ignored the visual representation of errors and uncertainty for three-dimensional (and higher) visualizations. In this presentation, I will give an overview of the current state-of-the-art in uncertainty visualization and discuss future challenges.
Adrian Sandu, Virginia Tech, US

Data assimilation reduces the uncertainty with which the state of a physical system is known by combining imperfect model results with sparse and noisy observations of reality. Chemical data assimilation refers to the use of measurements of trace gases and particulates to improve our understanding of the atmospheric composition. Two families of methods are widely used in data assimilation: the four dimensional variational (4D-Var) approach, and the ensemble Kalman filter (EnKF) approach. In the four dimensional variational (4D-Var) framework, data assimilation is formulated as an optimization problem, which is solved using gradient-based methods to obtain maximum likelihood estimates of the uncertain state and parameters. A central issue in 4D-Var data assimilation is the construction of the adjoint model. Kalman filters are rooted in statistical estimation theory, and seek to obtain moments of the posterior distribution that quantifies the reduced uncertainty after measurements have been considered. A central issue in Kalman filter data assimilation is to manage the size of covariance matrices by employing various computationally feasible approximations. In this talk we review computational aspects and tools that are important for chemical data assimilation. They include the construction, analysis, and efficient implementation of discrete adjoint models in 4D-Var assimilation, optimization aspects, and the construction of background covariance matrices. State-of-the-art solvers for large scale PDEs adaptively refine the time step and the mesh in order to control the numerical errors. We discuss newly developed algorithms for variational data assimilation with adaptive models. Particular aspects of the use of ensemble Kalman filters in chemical data assimilation are highlighted. New hybrid data assimilation ideas that combine the relative strengths of the variational and ensemble approaches are reviewed. Examples of chemical data assimilation studies with real data and widely used chemical transport models are given.
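For readers unfamiliar with the EnKF side, here is a minimal stochastic EnKF analysis step of my own devising (the state size, observation operator and numbers are invented; a real chemical transport model would supply the forecast ensemble): the ensemble covariance and a perturbed-observation update shift the forecast members toward the data.

```python
import numpy as np

rng = np.random.default_rng(7)

n_state, n_obs, n_ens = 3, 2, 50
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])      # observation operator
R = 0.1 * np.eye(n_obs)              # observation error covariance

# Forecast ensemble (would come from running the model from perturbed states).
X_f = rng.normal(0.0, 1.0, size=(n_state, n_ens)) + np.array([[1.0], [2.0], [0.5]])
y_obs = np.array([1.2, 1.8])

# Ensemble estimate of the forecast covariance.
A = X_f - X_f.mean(axis=1, keepdims=True)
P_f = A @ A.T / (n_ens - 1)

# Kalman gain and perturbed-observation update of every ensemble member.
K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
Y = y_obs[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
X_a = X_f + K @ (Y - H @ X_f)

print("forecast mean :", X_f.mean(axis=1))
print("analysis mean :", X_a.mean(axis=1))
```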
Michael Heroux, Sandia National Laboratory, US

Computer architecture is changing dramatically. Most noticeable is the introduction of multicore and GPU (collectively, manycore) processors. These manycore architectures promise the availability of a terascale laptop, petascale deskside and exascale compute center in the next few years. At the same time, manycore nodes will force a universal refactoring of code in order to realize this performance potential. Furthermore, the sheer number of components in very high-end systems increases the chance that user applications will experience frequent system faults in the form of soft errors. In this presentation we give an overview of architecture trends and their potential impact on scientific computing in general and uncertainty quantification (UQ) computations specifically. We will also discuss growing opportunities for UQ that are enabled by increasing computing capabilities, and new opportunities to help address the anticipated increase in soft errors that must be addressed at the application level.
Rafi Muhanna, Georgia Tech Savannah, US

The latest scientific and engineering advances have started to recognize the need for defining multiple types of uncertainty. The behavior of a mathematical model of any system is determined by the values of the model's parameters. These parameters, in turn, are determined by available information which may range from scarce to comprehensive. When data is scarce, analysts fall back to deterministic analysis. On the other hand, when more data is available but insufficient to distinguish between candidate probability functions, analysts supplement the available statistical data by judgmental information. In such a case, we find ourselves in the extreme either/or situation: a deterministic setting which does not reflect parameter variability, or a full probabilistic analysis conditional on the validity of the probability models describing the uncertainties. The above discussion illustrates the challenge that engineering analysis and design is facing in how to circumvent situations that do not reflect the actual state of knowledge of considered systems and are based on unjustified assumptions. Probability Bounds (PB) methods offer a resolution to this problem as they are sufficiently flexible to quantify uncertainty absent assumptions in the form of the probability density functions (PDF) of system parameters, yet they can incorporate this structure into the analysis when available. Such an approach will ensure that the actual state of knowledge on the system parameters is correctly reflected in the analysis and design; hence, design reliability and robustness are achieved. Probability bounds is built on interval analysis as its foundation. This talk will address the problem of overestimation of enclosures for target and derived quantities, a critical challenge in the formulation of Interval Finite Element Methods (IFEM). A new formulation for Interval Finite Element Methods will be introduced where both primary and derived quantities of interest are included in the original uncertain system as primary variables.
Wayne Enright, University of Toronto, Canada

In the numerical solution of ODEs, it is now possible to develop efficient techniques that compute approximate solutions that are more convenient to interpret and understand when used by practitioners who are interested in accurate and reliable simulations of their mathematical models. We have developed a class of ODE methods and associated software tools that will deliver a piecewise polynomial as the approximate solution and facilitate the investigation of various aspects of the problem that are often of as much interest as the approximate solution itself. These methods are designed so that the resulting piecewise polynomial will satisfy a perturbed ODE with an associated defect (or residual) that is reliably controlled. We will introduce measures that can be used to quantify the reliability of an approximate solution and how one can implement methods that, at some extra cost, can produce very reliable approximate solutions. We show how the ODE methods we have developed can be the basis for implementing effective tools for visualizing an approximate solution, and for performing key tasks such as sensitivity analysis, global error estimation and investigation of problems which are parameter-dependent. Software implementing this approach will be described for systems of IVPs, BVPs, DDEs, and VIEs. Some numerical results will be presented for mathematical models arising in application areas such as computational medicine or the modeling of predator-prey systems in ecology.
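A small sketch of the defect idea, using scipy's dense output as a stand-in for the piecewise-polynomial continuous extension (the solver, problem and tolerances are my choices, not the authors' software): the defect is the residual delta(t) = u'(t) - f(t, u(t)) of the interpolant, sampled between mesh points.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, y):
    """Simple test ODE: y' = -y + sin(t)."""
    return -y + np.sin(t)

sol = solve_ivp(f, (0.0, 10.0), [1.0], rtol=1e-6, atol=1e-9, dense_output=True)

# Sample the defect of the continuous extension between the solver's mesh points.
t = np.linspace(1e-3, 10.0 - 1e-3, 2001)
u = sol.sol(t)[0]
h = 1e-6
du = (sol.sol(t + h)[0] - sol.sol(t - h)[0]) / (2 * h)   # derivative of the interpolant
defect = du - f(t, u)

print("max |defect| of the interpolated solution:", np.max(np.abs(defect)))
```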


Wednesday, December 15, 2010

On the Difficulty of Price Modeling

I was recently looking for a clean example of a service or an item that could clearly show the difficulty of pricing said service or item. I just found one on Dan Ariely's blog: Locksmiths. Here is the video:






Do you have other examples?