Universitat de València

Spain

jose.m.bernardo@uv.es

 

Recent Developments in Objective Bayesian Statistics

 

Abstract

Important statistical inference summaries include point estimation, region estimation, and precise hypothesis testing. From a Bayesian viewpoint, those summaries may appropriately be described as the solutions to specific decision problems which depend on the particular loss function chosen. The use of a continuous loss function leads to an integrated set of solutions in which the same prior distribution may be used throughout.  Objective Bayesian methods use a non-subjective prior and produce results which depend only on the assumed model and the data obtained.  The combined use of the intrinsic discrepancy, an invariant information-based loss function, and appropriately defined reference priors provides an integrated objective Bayesian solution to both estimation and hypothesis testing problems.
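For readers who want the key quantity in symbols (a standard formulation from Bernardo's published work; the notation below is assumed here rather than taken from the abstract), the intrinsic discrepancy between two fully specified models p_1 and p_2 is the smaller of the two directed Kullback-Leibler divergences,

\[
\delta\{p_1, p_2\} \;=\; \min\bigl\{\kappa(p_2 \mid p_1),\, \kappa(p_1 \mid p_2)\bigr\},
\qquad
\kappa(p_j \mid p_i) \;=\; \int_{\mathcal{X}} p_i(x)\,\log\frac{p_i(x)}{p_j(x)}\,dx .
\]

It is symmetric, non-negative, zero only when the two models coincide, and invariant under one-to-one transformations of both the data and the parameters, which is what allows a single loss function to serve estimation and hypothesis testing alike.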
 
 
 
 
University of North Carolina at Chapel Hill

USA
pksen@bios.unc.edu

 

Shifting Goals and Mounting Challenges for Statistical Science in the Wake of Multi-Disciplinary Research
  

Abstract

The genesis of statistical science signals a clear-cut picture of the diverse factors that led to the development of this broad field of research and practice. Although mathematical foundations were paramount in the development of theoretical perspectives, the basic need for reconciliation with various other disciplines in all walks of life and science has been a prime factor in the augmentation of theory to impart a better understanding and interpretation of their underlying stochastic as well as deterministic aspects. The goals have been shifting from the very inception and will shift even more in the future. Nevertheless, the current emphasis on information technology and data mining in all walks of life and science, particularly in financial economics, social and economic science, clinical research, environmental health sciences, pharmaco- and toxico-genetics and bioinformatics at large, has opened a Pandora's box for the statistical and computer sciences. Coping with the emergence of massive data sets in astounding detail and at an incredible pace has been genuinely challenging from statistical validation and interpretation perspectives. Some of these mounting challenges are assessed with possible statistical resolutions.

 
 
 
 

University of British Columbia

Canada

jim@stat.ubc.ca

 

Forecasting Phenological Events

Abstract

Although it is certain that world climate is changing, the degree, nature and impact of that change are not. Thus attention has increasingly turned to dynamically managing the risks of that change as it progresses in the future. Declining food production is one such risk and the subject of this talk, which falls under the heading of agroclimate risk management. More specifically, it concerns the prediction of phenological change both within-year and over time. For example, in any one season an apple tree bears fruit after a sequence of other phenological events including bud-burst and blooming. The successive times of these events will vary randomly from year to year due to weather, while exhibiting trends over time due to climate. Modeling such sequences lies in the domain of time-to-event analysis, a branch of survival analysis. However, it has special features that put it outside the ambit of existing theory. First, the events are progressive, i.e. irreversible, if they occur at all. Second, the time to occurrence of any one event becomes a predictor of the time to the next. Third, the covariates are time-varying, and the occurrence of any one event depends not just on the covariate's value at that time but on the whole sequence of its values since the beginning of the year of interest. Finally, the goal is prediction, not hypothesis testing, the usual goal of survival analysis. Thus the work to be described in this talk will consist first of a description of an extension of time-to-event theory to cover this application. Then I will describe its application to the prediction of the bloom dates of perennial crops in the Okanagan region of British Columbia. That application entails the construction of a predictive distribution for the relevant within-year covariates (climate variables). I expect to be able to report on the results of downscaling climate models to enable the determination of future trends, based on ongoing work. (Co-author: S. Cai)
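The features listed in the abstract (progressive events, earlier event times feeding the next event, and dependence on the whole covariate history) can be illustrated by the purely schematic toy simulation below. It is not the authors' extension of time-to-event theory: the thresholds, temperature paths and names (event_days, BUDBURST_GDD, BLOOM_EXTRA_GDD) are invented, and prediction is done simply by simulating continuations of the covariate path.

    # Toy sketch only -- not the authors' model.  Events are progressive
    # (bud-burst before bloom), the bud-burst time feeds into the bloom model,
    # and the time-varying covariate (temperature) acts through its whole
    # history via accumulated growing degree-days (GDD).  All thresholds and
    # temperature paths below are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    BASE, BUDBURST_GDD, BLOOM_EXTRA_GDD = 5.0, 150.0, 250.0   # assumed values

    def event_days(daily_temp):
        """Return (bud-burst day, bloom day) implied by one temperature path."""
        gdd = np.cumsum(np.clip(daily_temp - BASE, 0.0, None))
        budburst = int(np.searchsorted(gdd, BUDBURST_GDD))
        bloom = int(np.searchsorted(gdd, gdd[budburst] + BLOOM_EXTRA_GDD))
        return budburst, bloom

    # Predictive distribution of the bloom date: condition on the temperatures
    # observed so far this year, then simulate many plausible continuations.
    observed = 4 + 0.10 * np.arange(60) + rng.normal(0, 2.5, 60)        # days 0..59
    bloom_draws = []
    for _ in range(2000):
        future = 4 + 0.10 * np.arange(60, 180) + rng.normal(0, 2.5, 120)
        bloom_draws.append(event_days(np.concatenate([observed, future]))[1])
    print(np.percentile(bloom_draws, [10, 50, 90]))   # predictive interval (days)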

 
 

The American University in Cairo

Egypt

http://www1.aucegypt.edu/faculty/hadi/

 

Multi-Class Data Exploration Using Space Transformed Visualization Plots 

 

Abstract

Visualization of large data sets is computationally expensive. For this reason, enveloping methods have been used to visualize such data sets. Using enveloping methods, we visualize summary statistics of the data in space transformed visualization (STV) plots, such as the traditional parallel coordinate plot (TPCP), instead of the actual data records. Existing enveloping methods, however, are limited to the TPCP, and they can also be misleading. This is because the parallel coordinates are parameter transformations and the summary statistics computed for the original data records are not preserved throughout the transformation to the parallel coordinates space. We propose enveloping methods that avoid this drawback and that can be applied not only to the TPCP but also to a family of STV plots such as the smooth parallel coordinate plot (SPCP) and the Andrews plot. We apply the proposed methods to the min-max, the quartiles, and the concentration interval envelopes (CINES). These enveloping methods allow us to describe visually the geometry of given classes without the need to visualize each single data record. These methods are effective for visualizing large data sets, as illustrated for real data sets, because they mitigate the cluttering effect in visualizing large-sized classes in the STV plots. Supplemental materials, including R code, are available online to enable readers to reproduce the graphs in this paper and/or apply the proposed methods to their own data. (Co-authors:  R. E. Moustafa and J. Symanzik)
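As a rough illustration of the general idea of envelopes (not the authors' STV/SPCP methods, and without the correction for the transformation problem described above), the sketch below summarizes two large synthetic classes in a traditional parallel coordinate plot by per-axis min-max bands, quartile bands and median profiles instead of drawing thousands of individual polylines; all data and names are invented.

    # Rough illustration only: per-axis min-max and quartile envelopes in a
    # traditional parallel coordinate plot for two synthetic classes, instead
    # of plotting every data record.  Not the authors' STV methods.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(0)
    X_all = rng.normal(size=(10000, 4)) @ rng.normal(size=(4, 4))
    labels = rng.integers(0, 2, size=10000)            # two synthetic classes
    X_all[labels == 1] += 1.5                          # shift class 1
    axes_idx = np.arange(X_all.shape[1])

    lo, hi = X_all.min(0), X_all.max(0)
    Xs = (X_all - lo) / (hi - lo)                      # common [0, 1] scaling per axis

    fig, ax = plt.subplots()
    for c, color in [(0, "tab:blue"), (1, "tab:orange")]:
        Z = Xs[labels == c]
        ax.fill_between(axes_idx, Z.min(0), Z.max(0), alpha=0.15, color=color)
        ax.fill_between(axes_idx, np.percentile(Z, 25, axis=0),
                        np.percentile(Z, 75, axis=0), alpha=0.4, color=color,
                        label=f"class {c} quartile envelope")
        ax.plot(axes_idx, np.median(Z, axis=0), color=color, lw=2)
    ax.set_xticks(axes_idx)
    ax.legend()
    plt.show()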

 
 

Massachusetts Institute of Technology

USA

rclarson@mit.edu

 

Service Industries and the Emergence of “Service Science”

 

Abstract

"Number please." These words were one once heard when picking up the telephone to make a call. Yes, a human telephone operator was involved in making each connection.  This snippet from post-World-War-II history is illustrative of what once was and no longer is in service industries.  Most services which were once labor intensive have replaced human servers with technology and/or with the customer herself performing the service, i.e., "self-service."  Examples include ATM’s (Automatic Teller Machines), elevators, supermarkets, self-service gasoline stations, purchase of goods and services via the Internet, check-in at airports, and even various postal services.  Almost all of this has occurred in the last 60 years, post World War II, a time during which the service sector has grown to 75% or more of the economies of most industrialized nations. In this presentation, we review these trends by example and then illustrate some of the decision and modeling technologies that have played key roles in the transformation.  This focus on services has created a new field called "Service Science."  From an Operations Research perspective, Service Science has methodological roots going back centuries:  Euler’s birth of graph theory, so important in transportation and logistics; A.K. Erlang’s birth of queueing theory, vital in almost all service industries; and optimization – also important almost everywhere.  But the coalescing of the emerging field now known as Service Science has created new opportunities, building upon our OR traditions and expanding to include rich aspects of management and social science.  The new journal, Service Science, may soon become the next in the portfolio of INFORMS journals. We give examples in communication, health care, transportation/logistics, energy management and education.

 
 

Karlsruhe Institute of Technology and

Fraunhofer Institute for Industrial Mathematics

Germany

Stefan.Nickel@kit.edu

 

Mathematical Models for Territory Design and Extensions

 

Abstract

Territory design may be viewed as the problem of grouping small geographic areas called basic areas (e.g. counties, zip code areas, company trading areas) into larger geographic clusters called territories in such a way that the latter meet the relevant planning criteria. In particular, the availability of GIS on computers and the growing interest in geo-marketing have led to the increasing importance of this area. Territory design problems treated by operations researchers are motivated by quite different applications, ranging from political districting to sales and service territory design. One can observe that only a few papers consider districting problems independently of a practical background. However, when taking a closer look at the models proposed for the different applications, a lot of similarities can be noticed. Indeed, the models developed are often quite similar and can frequently be carried over, more or less directly, to other applications. Therefore, our aim is to provide a general, application-independent model for territory design problems and to present efficient solution techniques. In this talk we will first review several typical applications of territory design problems and try to identify essential elements common to all applications. Afterwards we will compile a model which covers several of these aspects. Then a short overview of models and solution techniques found in the literature for solving districting problems will be given.  We will then focus on two methods for solving the problem: the commonly used location-allocation approach combined with optimal split resolution techniques, and a new method based on ideas from the field of computational geometry. Some computational results of the new approach and possible extensions are presented. We also show how the presented techniques are successfully integrated into a commercial GIS and give some general ideas on how GIS and optimization methods can interact.  In the last part of the talk we will address a recent variant of the territory design problem arising in the context of reverse logistics. The problem is motivated by the new WEEE recycling directive of the European Community. The core of this law is that each company which sells electrical or electronic equipment in a European country is obliged to collect and recycle an amount of returned items proportional to its market share. In Germany, a territory design approach is planned for assigning collection stations to companies for one product type. However, in contrast to classical territory design, the territories should be geographically as dispersed as possible, to prevent a company, or the logistics provider responsible for its collections, from gaining a monopoly in some region. First, we identify an appropriate measure for the dispersion of a territory. Afterwards, we present a first mathematical programming model for this new problem as well as some improvements. Extensive computational results illustrate the suitability of the model.
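To fix ideas, a generic textbook-style districting formulation (not necessarily the model presented in the talk, and omitting contiguity and compactness refinements) assigns each basic area j with activity measure w_j to one of a set of territory centres i, balancing territory sizes within a tolerance α around the mean size μ:

\[
\begin{aligned}
\min\;& \sum_{i\in I}\sum_{j\in J} w_j\, d_{ij}\, x_{ij}\\
\text{s.t.}\;& \sum_{i\in I} x_{ij} = 1, \qquad j\in J,\\
& (1-\alpha)\,\mu \;\le\; \sum_{j\in J} w_j\, x_{ij} \;\le\; (1+\alpha)\,\mu, \qquad i\in I,\\
& x_{ij}\in\{0,1\},
\end{aligned}
\]

where d_{ij} is the distance from centre i to basic area j. Location-allocation heuristics alternate between choosing centres and solving a relaxed version of this assignment problem; since the relaxation may split a basic area between territories, a split resolution step then assigns each split area to a single territory.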

 
 

University of Texas at Dallas

USA

axb046100@utdallas.edu

 

Real Options

 

Abstract

Real options theory is an approach to mitigating the risks of investment projects which is based on two ideas. The first is hedging, borrowed from financial options, which applies when market considerations can be introduced. The project risk must be correlated with the market risk, in which case tradable assets can be used to hedge. The second idea is flexibility in the process of decision making: in particular, one may scale the project up or down, stop it, or change its orientation. This flexibility allows one to react properly as information is obtained on the uncertainties of the evolution. We review in this presentation some of the major possibilities of flexibility: the options to defer, to abandon, and to mothball. We also consider some extensions concerning the investment cost, for instance the situation of tax incentives. We show that the technique of variational inequalities is the right mathematical tool to model these situations.  The models are developed in continuous time, where the elegant rules of Ito's calculus apply. The concepts are discussed independently of these techniques. We also consider the possibility of competition and the situation of incomplete markets. (Co-author: A. Smith)
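As a standard textbook illustration of the variational inequality approach (a stylized option to defer in our own notation, not necessarily the formulation of the talk): if the project value X_t follows a geometric Brownian motion with risk-adjusted drift μ and volatility σ, investing costs K, and r is the discount rate, the value V of the option to defer solves the obstacle problem

\[
\min\Bigl\{\, r\,V(x) \;-\; \mu\, x\, V'(x) \;-\; \tfrac{1}{2}\sigma^{2} x^{2}\, V''(x),\;\; V(x) - (x - K)^{+} \Bigr\} \;=\; 0 .
\]

In the continuation region the differential part vanishes and waiting is optimal; where V(x) = (x - K)^+ immediate investment is optimal. Abandonment and mothballing lead to similar obstacle problems with different payoff functions.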
 
  
 

Gary Cokins

Performance Management Solutions, SAS

USA

Gary.Cokins@sas.com

http://blogs.sas.com/cokins

 

Business Analytics for Decision Making: Making It Work

 

Abstract

A recent survey by the consulting firm Accenture reported that most companies are far from where they want and need to be when it comes to implementing analytics and are still relying on gut feeling, rather than hard data, when making decisions. What is needed today is the seamless integration of managerial methodologies such as balanced scorecards, strategy maps, risk management, budgets, activity-based costing (ABC), forecasts, customer relationship and value management, and resource capacity planning. Each one should be embedded with business analytics, especially predictive analytics. Volatility is the new normal.  Analytics with statistics, including regression and correlation analysis, provide organizations with insights to make better decisions and take action. The performance management methodologies are collectively intended to align manager and employee behavior and limited resources to focus on the organization's strategic priorities and objectives. Performance management focuses on execution. Its purpose is not just better financial reporting and monitoring dashboard dials, but moving the dials: improving performance.  Information technology specialists complicate progress with a common misconception, equating business intelligence (BI) technologies such as query and reporting techniques with advanced analytics like data mining and forecasting. But in practice experienced analysts don't use BI; instead, they first speculate that two things are related or that some underlying behavior is driving a pattern seen in various data. They apply business analytics as confirmatory rather than somewhat random exploratory analysis.  In this presentation, the following topics will be covered:

·      What forces have caused interest in business analytics and statistics?
·      What the difference is between business analytics and business intelligence.
·      How applying business analytics increases the power of performance management methodologies.
·      Why business analytics, with emphasis on predictive analytics and pro-active decision making, is becoming a competitive advantage differentiator and an enabler for trade-off analysis.
·      How activity-based cost management (ABC/M) provides not only accurately traced calculated costs (relative to arbitrary broad-averaged cost allocations), but more importantly provides cost transparency back to the work processes and consumed resources, and to what drivers cause work activities.
·      How all levels of management can quickly see and assess how they are doing on what is important – typically with only a maximum of three key performance indicators (KPIs).
·      How to integrate performance measurement scorecards and ABC/M data with:

o  Strategy formulation.
o  Process-based thinking and operational productivity improvement.
o  Channel/customer profitability and value analysis and CRM.
o  Supply chain management.
o  Quality and lean management (Six Sigma, cost of quality).

 
Comments