Friday, October 29, 2010

Robust Optimization and the Donoho-Tanner Phase Transition

Of interest to this blog on robust modeling, here is an excerpt from Nuit Blanche:

Similarly, Sergey points me to this arxiv preprint which made a passing reference to CS: Theory and Applications of Robust Optimization by Dimitris Bertsimas, David B. Brown, Constantine Caramanis. The abstract reads:
In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
Reading the paper leads to this other paper I had mentioned back in April, which makes a statement about Robust Linear Regression which, in our world, translates into multiplicative noise. More Rosetta Stone moments... In the meantime, you might also be interested in the NIPS 2010 Workshop entitled Robust Statistical Learning (robustml).
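
For the record, the kind of robust linear regression statement alluded to above is, if I recall that literature correctly, of the following form (stated from memory rather than quoted from the paper; the symbols A, b, x, the column perturbations δ_i, and the bounds c_i are mine): allowing each column of the design matrix A to be perturbed by a vector δ_i with ||δ_i||_2 ≤ c_i, the worst-case least-squares problem collapses to an ℓ1-regularized (Lasso-type) problem,

\[
\min_{x}\ \max_{\|\delta_i\|_2 \le c_i}\ \bigl\| b - (A + [\delta_1,\dots,\delta_n])\,x \bigr\|_2
\;=\;
\min_{x}\ \| b - A x \|_2 \;+\; \sum_{i=1}^{n} c_i\,|x_i| ,
\]

which is one way of seeing why uncertainty in the design matrix (multiplicative noise, in our parlance) and sparsity-inducing regularization end up being two faces of the same coin.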

Many of these approaches rely on data being used to learn or fit models; in practice, most of the literature focuses on linear modeling. Quite a few interesting results have come out of these areas, including what I have called the Donoho-Tanner phase transition. I will come back to this subject in another blog entry.
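
Since I will only come back to the Donoho-Tanner phase transition later, here is no more than a minimal sketch of how that transition is usually mapped empirically: for points of the (δ = m/n, ρ = k/m) plane, draw a Gaussian measurement matrix and a k-sparse vector, then check whether ℓ1 minimization (basis pursuit, solved below as a linear program with SciPy) recovers it exactly. The problem sizes, tolerances, and trial counts are my own illustrative choices, not anything taken from the workshop or the papers above.

```python
# Minimal sketch: empirically probing the Donoho-Tanner phase transition for
# sparse recovery by ell_1 minimization.  Sizes, tolerances and the Gaussian
# measurement model are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog


def basis_pursuit(A, b):
    """Solve min ||x||_1 s.t. A x = b as an LP with the split x = u - v, u, v >= 0."""
    m, n = A.shape
    c = np.ones(2 * n)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
    if not res.success:                     # be defensive if the LP solver fails
        return np.full(n, np.inf)
    return res.x[:n] - res.x[n:]


def success_probability(n=80, delta=0.5, rho=0.3, trials=10, tol=1e-4, seed=0):
    """Fraction of random trials where a k-sparse vector (k = rho*m, m = delta*n)
    is recovered exactly from m Gaussian measurements."""
    rng = np.random.default_rng(seed)
    m = int(delta * n)
    k = max(1, int(rho * m))
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix
        x0 = np.zeros(n)
        x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        x_hat = basis_pursuit(A, A @ x0)
        hits += int(np.linalg.norm(x_hat - x0) < tol * max(1.0, np.linalg.norm(x0)))
    return hits / trials


if __name__ == "__main__":
    # Sweep a coarse grid of the (delta, rho) plane: below the Donoho-Tanner
    # curve recovery succeeds with overwhelming probability, above it it fails.
    for delta in (0.3, 0.5, 0.7):
        for rho in (0.1, 0.3, 0.5):
            p = success_probability(delta=delta, rho=rho)
            print(f"delta={delta:.1f}  rho={rho:.1f}  P(success) ~ {p:.2f}")
```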

Credit: NASA.

Thursday, October 28, 2010

Call for Help / Bleg: Seeking Technical Areas Where Modeling Is Difficult.

While the recent presentations at SCM were enlightening with regard to known problems that are hard to model, I wonder if any readers have specific knowledge of a subject area where modeling is difficult. Please contact me and we can probably run a Q&A on this blog. If you want to remain anonymous, because you feel uncertain about discussing the uncertainties of modeling in your area, I can also anonymize the Q&A.

The number of readers of this blog is currently at about 80, but I expect it to grow as this issue of robust modeling keeps rearing its ugly head in many different fields of science and engineering. Let us recall the areas where robust mathematical modeling might be beneficial:

  • The data are missing or corrupted;
  • The laws describing the phenomena are not completely known;
  • The objectives are multiple and contradictory;
  • The computational chain has too many variables.


In the meantime, I'll feature some of the problem areas I have seen that never had an easy modeling answer.

P.S.: A bleg is a beg on a blog :-)

Credit photo: ESA, NASA

Tuesday, October 26, 2010

SCM Talks: Electricity Production Management and the MOX Computational Chain

In a different direction from certain communities that are wondering whether outreach to applied communities is a good thing, Bernard Beauzamy, a mathematician by trade and owner of SCM, hosted a small workshop last week on the limits of modeling ("Les limites de la modélisation" in French). The workshop featured a set of speakers who are specialists in their fields, each presenting their domain expertise in light of how mathematical modeling helped, or did not help, answer their complex issues. We are not talking about just some optimization function with several goals, but rather about a deeper questioning of how the modeling of reality and reality itself clash with each other. While the presentations were in French, some of the slides need little translation if you are coming from English. Here is the list of talks with a link to the presentations:

9 h – 10 h: Dr. Riadh Zorgati, EDF R&D: Le management de l'énergie ; tentatives de modélisation : succès et échecs (Energy management; modeling attempts: successes and failures).
11 h – 12 h: Dr. France Wallet, Evaluation des risques sanitaires et environnementaux, EDF, Service de Santé: Modélisation en santé-environnement : pièges et limites (Modeling in environmental health: pitfalls and limits).

14 h – 15 h: M. Giovanni Bruna, Deputy Director, Direction de la Sûreté des Réacteurs, Institut de Radioprotection et de Sûreté Nucléaire: Simulation-expérimentation : qui a raison ? L'expérience du combustible MOX (Simulation versus experiment: which one is right? The MOX fuel experience).
16 h – 17 h: M. Xavier Roederer, Inspecteur Mission Contrôle Audit Inspection, Agence Nationale de l'Habitat: Peut-on prévoir sans modéliser ? (Can one forecast without modeling?)


I could attend only two of the talks: the first and the third. In the first talk, Riadh Zorgati talked about modeling as applied to electricity production. He did a great job of laying out the different timescales and the attendant need for algorithm simplification when it comes to planning/scheduling electricity production in France. Every power plant and hydraulic resource owned by EDF (the main utility in France) has different operating procedures and capabilities with respect to how it can deliver power to the grid. Since production and consumption of electricity must remain in continuous equilibrium, one aspect of the scheduling involves computing the country's needs for the next day from various inputs available the day before. As it turns out, the modeling could be made very detailed, but that would lead to a prohibitive computational time to get an answer for the next day of planning (more than a day's worth). The modeling is therefore simplified to a certain extent, by resorting to greedy algorithms if I recall correctly, to enable quicker predictions. The presentation has much more in it, but it was interesting to see that a set of good algorithms was clearly a money maker for the utility.
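
To make the greedy simplification concrete, here is a minimal sketch of a merit-order dispatch against an hourly demand forecast. The plant names, capacities, costs, and demand figures are invented for illustration, and the real EDF scheduling problem carries many more constraints (ramping rates, hydro reservoir management, outages, network limits) that this toy ignores.

```python
# Minimal sketch of a greedy "merit order" day-ahead dispatch: meet each hour's
# forecast demand with the cheapest available plants first.  All names, costs,
# capacities and demand figures are invented for illustration.
from dataclasses import dataclass


@dataclass
class Plant:
    name: str
    capacity_mw: float    # maximum hourly output (MW)
    marginal_cost: float  # euros per MWh


def greedy_dispatch(plants, demand_mw):
    """Allocate one hour of demand to plants in increasing order of marginal cost."""
    schedule = {}
    remaining = demand_mw
    for p in sorted(plants, key=lambda p: p.marginal_cost):
        output = min(p.capacity_mw, remaining)
        schedule[p.name] = output
        remaining -= output
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError(f"Demand exceeds total capacity by {remaining:.0f} MW")
    return schedule


if __name__ == "__main__":
    fleet = [
        Plant("hydro", 10_000, 5.0),
        Plant("nuclear", 40_000, 10.0),
        Plant("coal", 8_000, 40.0),
        Plant("gas", 15_000, 60.0),
    ]
    # A toy four-hour day-ahead demand forecast (MW).
    for hour, demand in enumerate([52_000, 58_000, 64_000, 55_000]):
        print(f"hour {hour}: {greedy_dispatch(fleet, demand)}")
```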


The third presentation was by Giovanni Bruna, who talked about the problem of extracting meaningful information out of a set of experiments and computations in the case of plutonium use in nuclear reactors. He spent the better half of the presentation going through a nuclear engineering 101 class that featured a good introduction to the subject of plutonium use in nuclear reactors. Plutonium is a by-product of the consumption of uranium in a nuclear reactor; in fact, after an 18-month cycle, more than 30 percent of the power of an original uranium rod is produced by the plutonium created during that time period. After some time in the core, the rod is retrieved so that it can be reprocessed, which raises the issue of how the plutonium can be reused in a material called MOX (at least in France; in the U.S., a policy of no reprocessing is the law of the land). It turns out that plutonium differs from uranium because of its high epithermal cross section, yielding a harder neutron spectrum than the one found with uranium. The conundrum faced by the safety folks resides in figuring out how the current measurements, and the attendant extrapolation to power levels, can be done with confidence when replacing uranium by plutonium. The methods used with uranium have more than 40 years of history behind them; with plutonium, not so much. It turns out to be a difficult endeavor that can only be managed through constant back-and-forth between well-designed experiments, revisions of the calculation processes, and a heavy use of margins. This example is also fascinating because this type of exercise reveals all the assumptions built into the computational chain, starting from Monte Carlo runs on cold subcritical assemblies all the way to the expected power level found in actual nuclear reactor cores. It is a computational chain because the data from the experiment do not say anything directly about the actual variable of interest (here the power level). As opposed to Riadh's talk, the focus here is on making sure that the mathematical modeling is robust to changes in assumptions about the physics of the system.
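
As a purely illustrative sketch of why such a computational chain is delicate, the snippet below propagates assumed uncertainties from a cold-assembly measurement through two invented model corrections to a predicted relative power level, and then reads off a conservative percentile as a margin. None of the functional forms or numbers come from the talk; they only illustrate how each link in the chain adds its own bias and spread.

```python
# Purely illustrative: the measured quantity (say, a reaction-rate ratio on a
# cold subcritical assembly) is linked to the quantity of interest (power in an
# operating core) only through a chain of model corrections, each adding its own
# bias and uncertainty.  Every number below is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo samples

# Link 1: the cold-assembly measurement itself, with experimental uncertainty.
measured_ratio = rng.normal(loc=1.00, scale=0.02, size=n)

# Link 2: an assumed model correction from cold, subcritical conditions to hot,
# full-power conditions.
hot_correction = rng.normal(loc=0.97, scale=0.03, size=n)

# Link 3: an assumed spectrum/cross-section adjustment, taken larger for a
# MOX-hardened spectrum than it would be for a uranium core.
spectrum_adjustment = rng.normal(loc=1.02, scale=0.04, size=n)

# The chain composes the links into a predicted relative power level.
predicted_power = measured_ratio * hot_correction * spectrum_adjustment

print(f"predicted relative power: {predicted_power.mean():.3f} "
      f"+/- {predicted_power.std():.3f}")
# A safety margin is then taken on the conservative side of the spread,
# e.g. a 95th-percentile bound on the prediction:
print(f"95th percentile: {np.quantile(predicted_power, 0.95):.3f}")
```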

Thanks, Bernard, for hosting the workshop; it was enlightening.